From nobody Thu Nov 14 07:05:12 2024
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:40 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com Subject: [PATCH v4 01/18] KVM: x86/mmu: Zap invalid roots with mmu_lock holding for write at uninit Date: Thu, 18 Jul 2024 14:12:13 -0700 Message-Id: <20240718211230.1492011-2-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a bool parameter to kvm_tdp_mmu_zap_invalidated_roots() to specify zapping invalid roots under mmu_lock held for read or write. Hold mmu_lock for write when kvm_tdp_mmu_zap_invalidated_roots() is called by kvm_mmu_uninit_tdp_mmu(). kvm_mmu_uninit_tdp_mmu() is invoked either before or after executing any atomic operations on SPTEs by vCPU threads. Therefore, it will not impact vCPU threads performance if kvm_tdp_mmu_zap_invalidated_roots() acquires mmu_lock for write to zap invalid roots. This is a preparation for future TDX patch which asserts that "Users of atomic zapping don't operate on mirror roots". Co-developed-by: Yan Zhao Signed-off-by: Yan Zhao Signed-off-by: Rick Edgecombe --- v4: - New patch --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 16 +++++++++++----- arch/x86/kvm/mmu/tdp_mmu.h | 2 +- 3 files changed, 13 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f6b7391fe438..6f721ab0cd33 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6473,7 +6473,7 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * lead to use-after-free. */ if (tdp_mmu_enabled) - kvm_tdp_mmu_zap_invalidated_roots(kvm); + kvm_tdp_mmu_zap_invalidated_roots(kvm, true); } =20 static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index ff27e1eadd54..b92dcd5b266f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -38,7 +38,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) * ultimately frees all roots. */ kvm_tdp_mmu_invalidate_all_roots(kvm); - kvm_tdp_mmu_zap_invalidated_roots(kvm); + kvm_tdp_mmu_zap_invalidated_roots(kvm, false); =20 WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages)); WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots)); @@ -927,11 +927,14 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) * Zap all invalidated roots to ensure all SPTEs are dropped before the "f= ast * zap" completes. */ -void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm) +void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm, bool shared) { struct kvm_mmu_page *root; =20 - read_lock(&kvm->mmu_lock); + if (shared) + read_lock(&kvm->mmu_lock); + else + write_lock(&kvm->mmu_lock); =20 for_each_tdp_mmu_root_yield_safe(kvm, root) { if (!root->tdp_mmu_scheduled_root_to_zap) @@ -949,7 +952,7 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm) * that may be zapped, as such entries are associated with the * ASID on both VMX and SVM. 
*/ - tdp_mmu_zap_root(kvm, root, true); + tdp_mmu_zap_root(kvm, root, shared); /* * The referenced needs to be put *after* zapping the root, as @@ -959,7 +962,10 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm) kvm_tdp_mmu_put_root(kvm, root); } - read_unlock(&kvm->mmu_lock); + if (shared) + read_unlock(&kvm->mmu_lock); + else + write_unlock(&kvm->mmu_lock); } /* diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 1b74e058a81c..14421d080fe6 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -23,7 +23,7 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush); bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp); void kvm_tdp_mmu_zap_all(struct kvm *kvm); void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm); -void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm); +void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm, bool shared); int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
--
2.34.1

From nobody Thu Nov 14 07:05:12 2024
From: Rick Edgecombe
To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org
Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com,
	isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com,
	yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata
Subject: [PATCH v4 02/18] KVM: Add member to struct kvm_gfn_range for target alias
Date: Thu, 18 Jul 2024 14:12:14 -0700
Message-Id: <20240718211230.1492011-3-rick.p.edgecombe@intel.com>
In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com>
References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com>

From: Isaku Yamahata

Add a new member to struct kvm_gfn_range to indicate which mapping
(private vs. shared) to operate on: enum kvm_gfn_range_filter attr_filter.
Update the core zapping operations to set it appropriately.

TDX uses two GPA aliases for the same memslots: one for private memory and
one for shared memory. For private memory, KVM cannot always perform the
same operations it does on memory for default VMs, such as zapping pages
and having them faulted back in, because this requires guest coordination.
However, some operations, such as guest-driven conversion of memory
between private and shared, should zap private memory.

Internally to the MMU, private and shared mappings are tracked on separate
roots. Mapping and zapping operations will operate on the respective GFN
alias for each root (private or shared), so zapping operations will by
default zap both aliases. Add a field to struct kvm_gfn_range that lets
callers specify which aliases to target, so an operation only touches the
alias appropriate for it.

There was feedback that the target aliases should be specified such that
the default value (0) operates on both aliases. Several options were
considered: variations with separate bools whose default processed both
aliases either allowed nonsensical configurations or were confusing for
the caller, and a simple enum came close but was hard to process in the
caller. Instead, use an enum with the default value (0) reserved as a
disallowed value, and catch ranges that didn't have a target alias
specified by checking for that value (a usage sketch follows the list
below).

Set the target alias appropriately for these MMU operations:
- For KVM's MMU notifier callbacks, zap shared pages only, because private
  pages won't have a userspace mapping.
- For setting memory attributes, kvm_arch_pre_set_memory_attributes()
  chooses the alias based on the attribute.
- For guest_memfd invalidations, zap private pages only.
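To illustrate how a consumer of the new field might honor it, here is a
minimal sketch (not part of this series; the helper name
range_filter_matches_root() and its root_is_private parameter are invented
for the example, only struct kvm_gfn_range and the KVM_FILTER_* values come
from this patch):

	/*
	 * Illustrative only: decide whether a given root should be processed
	 * for a kvm_gfn_range.  A filter of 0 means the caller never set one,
	 * which the scheme above treats as a bug.
	 */
	static bool range_filter_matches_root(struct kvm_gfn_range *range,
					      bool root_is_private)
	{
		if (WARN_ON_ONCE(!range->attr_filter))
			return true;	/* fail safe: process every root */

		if (root_is_private)
			return range->attr_filter & KVM_FILTER_PRIVATE;

		return range->attr_filter & KVM_FILTER_SHARED;
	}
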
Link: https://lore.kernel.org/kvm/ZivIF9vjKcuGie3s@google.com/ Signed-off-by: Isaku Yamahata Co-developed-by: Rick Edgecombe Signed-off-by: Rick Edgecombe --- v3: - Fix typo in comment (Paolo) - Remove KVM_PROCESS_PRIVATE_AND_SHARED (Paolo) - Remove outdated reference to exclude_{private,shared} (Paolo) - Set process member in new kvm_mmu_zap_memslot_leafs() function - Rename process -> filter (Paolo) v1: - Replaced KVM_PROCESS_BASED_ON_ARG with BUGGY_KVM_INVALIDATION to follow the original suggestion and not populte kvm_handle_gfn_range(). And add WARN_ON_ONCE(). - Move attribute specific logic into kvm_vm_set_mem_attributes() - Drop Sean's suggested-by tag as the solution has changed - Re-write commit log --- arch/x86/kvm/mmu/mmu.c | 6 ++++++ include/linux/kvm_host.h | 6 ++++++ virt/kvm/guest_memfd.c | 2 ++ virt/kvm/kvm_main.c | 14 ++++++++++++++ 4 files changed, 28 insertions(+) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6f721ab0cd33..1e184e524e97 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -7496,6 +7496,12 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *= kvm, if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm))) return false; =20 + /* Unmap the old attribute page. */ + if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE) + range->attr_filter =3D KVM_FILTER_SHARED; + else + range->attr_filter =3D KVM_FILTER_PRIVATE; + return kvm_unmap_gfn_range(kvm, range); } =20 diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index f7ba665652f3..4643680654d5 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -260,11 +260,17 @@ union kvm_mmu_notifier_arg { unsigned long attributes; }; =20 +enum kvm_gfn_range_filter { + KVM_FILTER_SHARED =3D BIT(0), + KVM_FILTER_PRIVATE =3D BIT(1), +}; + struct kvm_gfn_range { struct kvm_memory_slot *slot; gfn_t start; gfn_t end; union kvm_mmu_notifier_arg arg; + enum kvm_gfn_range_filter attr_filter; bool may_block; }; bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range); diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 8f4159dd7824..ce5c96f32c3f 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -113,6 +113,8 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *= gmem, pgoff_t start, .end =3D slot->base_gfn + min(pgoff + slot->npages, end) - pgoff, .slot =3D slot, .may_block =3D true, + /* guest memfd is relevant to only private mappings. */ + .attr_filter =3D KVM_FILTER_PRIVATE, }; =20 if (!found_memslot) { diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 293ae714e825..71df4968004e 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -635,6 +635,11 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_r= ange(struct kvm *kvm, */ gfn_range.arg =3D range->arg; gfn_range.may_block =3D range->may_block; + /* + * HVA-based notifications aren't relevant to private + * mappings as they don't have a userspace mapping. + */ + gfn_range.attr_filter =3D KVM_FILTER_SHARED; =20 /* * {gfn(page) | page intersects with [hva_start, hva_end)} =3D @@ -2450,6 +2455,14 @@ static __always_inline void kvm_handle_gfn_range(str= uct kvm *kvm, gfn_range.arg =3D range->arg; gfn_range.may_block =3D range->may_block; =20 + /* + * If/when KVM supports more attributes beyond private .vs shared, this + * _could_ set KVM_FILTER_{SHARED,PRIVATE} appropriately if the entire ta= rget + * range already has the desired private vs. shared state (it's unclear + * if that is a net win). 
For now, KVM reaches this point if and only + if the private flag is being toggled, i.e. all mappings are in play. + */ + for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) { slots = __kvm_memslots(kvm, i); @@ -2506,6 +2519,7 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end, struct kvm_mmu_notifier_range pre_set_range = { .start = start, .end = end, + .arg.attributes = attributes, .handler = kvm_pre_set_memory_attributes, .on_lock = kvm_mmu_invalidate_begin, .flush_on_ret = true,
--
2.34.1

From nobody Thu Nov 14 07:05:12 2024
From: Rick Edgecombe
To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org
Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com,
	isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com,
	yan.y.zhao@intel.com, rick.p.edgecombe@intel.com
Subject: [PATCH v4 03/18] KVM: x86: Add a VM type define for TDX
Date: Thu, 18 Jul 2024 14:12:15 -0700
Message-Id: <20240718211230.1492011-4-rick.p.edgecombe@intel.com>
In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com>
References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com>

Add a VM type define for TDX. Future changes will need to lay the
groundwork for TDX support by making some behavior conditional on the VM
being a TDX guest.

Signed-off-by: Rick Edgecombe
---
v1:
 - New patch, split from main series
---
 arch/x86/include/uapi/asm/kvm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 988b5204d636..4dea0cfeee51 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -922,5 +922,6 @@ struct kvm_hyperv_eventfd {
 #define KVM_X86_SEV_VM		2
 #define KVM_X86_SEV_ES_VM	3
 #define KVM_X86_SNP_VM		4
+#define KVM_X86_TDX_VM		5
 
 #endif /* _ASM_X86_KVM_H */
-- 
2.34.1

From nobody Thu Nov 14 07:05:12 2024
From: Rick Edgecombe
To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org
Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com,
	isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com,
	yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata,
	Binbin Wu
Subject: [PATCH v4 04/18] KVM: x86/mmu: Add an external pointer to struct kvm_mmu_page
Date: Thu, 18 Jul 2024 14:12:16 -0700
Message-Id: <20240718211230.1492011-5-rick.p.edgecombe@intel.com>
In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com>
References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com>

From: Isaku Yamahata

Add an external pointer to struct kvm_mmu_page for TDX's private page
table and add helper functions to allocate/initialize/free a private page
table page. TDX will only be supported with the TDP MMU. Because the KVM
TDP MMU doesn't use unsync_children and write_flooding_count, pack them to
make room for the pointer and use a union to avoid memory overhead.

For a private GPA, the CPU refers to a private page table whose contents
are encrypted. Dedicated APIs have to be used to operate on it (e.g.
updating/reading its PTE entries), and they are expensive. When KVM
resolves a KVM page fault, it walks the page tables. To reuse the existing
KVM MMU code and mitigate the heavy cost of directly walking the private
page table, allocate two sets of page tables for the private half of the
GPA space.

For the page tables that KVM will walk, allocate them like normal and
refer to them as mirror page tables. Additionally allocate one more page
for the page tables the CPU will walk, and call them external page tables.
Resolve the KVM page fault with the existing code, and do additional
operations necessary for modifying the external page table in future
patches.
The relationship of the types of page tables in this scheme is depicted below: KVM page fault | | | V | -------------+---------- | | | | V V | shared GPA private GPA | | | | V V | shared PT root mirror PT root | private PT root | | | | V V | V shared PT mirror PT --propagate--> external PT | | | | | \-----------------+------\ | | | | | V | V V shared guest page | private guest page | non-encrypted memory | encrypted memory | PT - Page table Shared PT - Visible to KVM, and the CPU uses it for shared mappings. External PT - The CPU uses it, but it is invisible to KVM. TDX module updates this table to map private guest pages. Mirror PT - It is visible to KVM, but the CPU doesn't use it. KVM uses it to propagate PT change to the actual private PT. Add a helper kvm_has_mirrored_tdp() to trigger this behavior and wire it to the TDX vm type. Co-developed-by: Yan Zhao Signed-off-by: Yan Zhao Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe Reviewed-by: Binbin Wu --- v4: - private->external comment fixes (Yan) v3: - mirrored->external rename (Paolo) - Remove accidentally included kvm_mmu_alloc_private_spt() (Paolo) - Those -> These (Paolo) - Change log updates to make external/mirrored naming more clear v2: - Rename private->mirror - Don't trigger off of shared mask v1: - Rename terminology, dummy PT =3D> mirror PT. and updated the commit messa= ge By Rick and Kai. - Added a comment on union of private_spt by Rick. - Don't handle the root case in kvm_mmu_alloc_private_spt(), it will not be needed in future patches. (Rick) - Update comments (Yan) - Remove kvm_mmu_init_private_spt(), open code it in later patches (Yan) --- arch/x86/include/asm/kvm_host.h | 5 +++++ arch/x86/kvm/mmu.h | 5 +++++ arch/x86/kvm/mmu/mmu.c | 7 +++++++ arch/x86/kvm/mmu/mmu_internal.h | 31 +++++++++++++++++++++++++++---- arch/x86/kvm/mmu/tdp_mmu.c | 1 + 5 files changed, 45 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 9bb2e164c523..09aa2c56bab6 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -817,6 +817,11 @@ struct kvm_vcpu_arch { struct kvm_mmu_memory_cache mmu_shadow_page_cache; struct kvm_mmu_memory_cache mmu_shadowed_info_cache; struct kvm_mmu_memory_cache mmu_page_header_cache; + /* + * This cache is to allocate external page table. E.g. private EPT used + * by the TDX module. + */ + struct kvm_mmu_memory_cache mmu_external_spt_cache; =20 /* * QEMU userspace and the guest each have their own FPU state. 
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index dc80e72e4848..0c3bf89cf7db 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -318,4 +318,9 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *= vcpu, return gpa; return translate_nested_gpa(vcpu, gpa, access, exception); } + +static inline bool kvm_has_mirrored_tdp(const struct kvm *kvm) +{ + return kvm->arch.vm_type =3D=3D KVM_X86_TDX_VM; +} #endif diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 1e184e524e97..50ba7f63a067 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -688,6 +688,12 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vc= pu, bool maybe_indirect) 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM); if (r) return r; + if (kvm_has_mirrored_tdp(vcpu->kvm)) { + r =3D kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_external_spt_cache, + PT64_ROOT_MAX_LEVEL); + if (r) + return r; + } r =3D kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache, PT64_ROOT_MAX_LEVEL); if (r) @@ -707,6 +713,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcp= u) kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache); + kvm_mmu_free_memory_cache(&vcpu->arch.mmu_external_spt_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); } =20 diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index 1721d97743e9..68f99d9d6e7c 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -101,7 +101,22 @@ struct kvm_mmu_page { int root_count; refcount_t tdp_mmu_root_count; }; - unsigned int unsync_children; + union { + /* These two members aren't used for TDP MMU */ + struct { + unsigned int unsync_children; + /* + * Number of writes since the last time traversal + * visited this page. + */ + atomic_t write_flooding_count; + }; + /* + * Page table page of external PT. + * Passed to TDX module, not accessed by KVM. + */ + void *external_spt; + }; union { struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */ tdp_ptep_t ptep; @@ -124,9 +139,6 @@ struct kvm_mmu_page { int clear_spte_count; #endif =20 - /* Number of writes since the last time traversal visited this page. */ - atomic_t write_flooding_count; - #ifdef CONFIG_X86_64 /* Used for freeing the page asynchronously if it is a TDP MMU page. */ struct rcu_head rcu_head; @@ -145,6 +157,17 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_pa= ge *sp) return kvm_mmu_role_as_id(sp->role); } =20 +static inline void kvm_mmu_alloc_external_spt(struct kvm_vcpu *vcpu, struc= t kvm_mmu_page *sp) +{ + /* + * external_spt is allocated for TDX module to hold private EPT mappings, + * TDX module will initialize the page by itself. + * Therefore, KVM does not need to initialize or access external_spt. + * KVM only interacts with sp->spt for private EPT operations. 
+ */ + sp->external_spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_external_spt_cache); +} + static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) { /* diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index b92dcd5b266f..2d09c353e4bc 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -53,6 +53,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) { + free_page((unsigned long)sp->external_spt); free_page((unsigned long)sp->spt); kmem_cache_free(mmu_page_header_cache, sp); }
--
2.34.1

From nobody Thu Nov 14 07:05:12 2024
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:43 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 05/18] KVM: x86/mmu: Add an is_mirror member for union kvm_mmu_page_role Date: Thu, 18 Jul 2024 14:12:17 -0700 Message-Id: <20240718211230.1492011-6-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Introduce a "is_mirror" member to the kvm_mmu_page_role union to identify SPTEs associated with the mirrored EPT. The TDX module maintains the private half of the EPT mapped in the TD in its protected memory. KVM keeps a copy of the private GPAs in a mirrored EPT tree within host memory. This "is_mirror" attribute enables vCPUs to find and get the root page of mirrored EPT from the MMU root list for a guest TD. This also allows KVM MMU code to detect changes in mirrored EPT according to the "is_mirror" mmu page role and propagate the changes to the private EPT managed by TDX module. Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v3: - Rename role mirror_pt -> is_mirror (Paolo) - Remove unnessary helpers that just access a member (Paolo) v2: - Rename private -> mirrored v1: - Remove warning and NULL check in is_private_sptep() (Rick) - Update commit log (Yan) --- arch/x86/include/asm/kvm_host.h | 3 ++- arch/x86/kvm/mmu/mmu_internal.h | 5 +++++ arch/x86/kvm/mmu/spte.h | 5 +++++ 3 files changed, 12 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 09aa2c56bab6..f764a07a32f9 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -351,7 +351,8 @@ union kvm_mmu_page_role { unsigned ad_disabled:1; unsigned guest_mode:1; unsigned passthrough:1; - unsigned :5; + unsigned is_mirror:1; + unsigned :4; =20 /* * This is left at the top of the word so that diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index 68f99d9d6e7c..3319d0a42f36 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -157,6 +157,11 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_pa= ge *sp) return kvm_mmu_role_as_id(sp->role); } =20 +static inline bool is_mirror_sp(const struct kvm_mmu_page *sp) +{ + return sp->role.is_mirror; +} + static inline void kvm_mmu_alloc_external_spt(struct kvm_vcpu *vcpu, struc= t kvm_mmu_page *sp) { /* diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index ef793c459b05..a72f0e3bde17 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -267,6 +267,11 @@ static inline struct kvm_mmu_page *root_to_sp(hpa_t ro= ot) return spte_to_child_sp(root); } =20 +static inline bool is_mirror_sptep(u64 *sptep) +{ + return is_mirror_sp(sptep_to_sp(sptep)); +} + static inline bool is_mmio_spte(struct kvm *kvm, u64 spte) { return (spte & shadow_mmio_mask) =3D=3D kvm->arch.shadow_mmio_value && --=20 2.34.1 From nobody Thu Nov 
14 07:05:12 2024
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:44 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com Subject: [PATCH v4 06/18] KVM: x86/mmu: Make kvm_tdp_mmu_alloc_root() return void Date: Thu, 18 Jul 2024 14:12:18 -0700 Message-Id: <20240718211230.1492011-7-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The kvm_tdp_mmu_alloc_root() function currently always returns 0. This allows for the caller, mmu_alloc_direct_roots(), to call kvm_tdp_mmu_alloc_root() and also return 0 in one line: return kvm_tdp_mmu_alloc_root(vcpu); So it is useful even though the return value of kvm_tdp_mmu_alloc_root() is always the same. However, in future changes, kvm_tdp_mmu_alloc_root() will be called twice in mmu_alloc_direct_roots(). This will force the first call to either awkwardly handle the return value that will always be zero or ignore it. So change kvm_tdp_mmu_alloc_root() to return void. Do it in a separate change so the future change will be cleaner. Signed-off-by: Rick Edgecombe Reviewed-by: Paolo Bonzini --- v3: - Add Paolo's reviewed-by v1: - New patch --- arch/x86/kvm/mmu/mmu.c | 6 ++++-- arch/x86/kvm/mmu/tdp_mmu.c | 3 +-- arch/x86/kvm/mmu/tdp_mmu.h | 2 +- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 50ba7f63a067..2f7f372a4bfe 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3703,8 +3703,10 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *v= cpu) unsigned i; int r; =20 - if (tdp_mmu_enabled) - return kvm_tdp_mmu_alloc_root(vcpu); + if (tdp_mmu_enabled) { + kvm_tdp_mmu_alloc_root(vcpu); + return 0; + } =20 write_lock(&vcpu->kvm->mmu_lock); r =3D make_mmu_pages_available(vcpu); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 2d09c353e4bc..f4df20b91817 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -224,7 +224,7 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *= child_sp, tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role); } =20 -int kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu) +void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu) { struct kvm_mmu *mmu =3D vcpu->arch.mmu; union kvm_mmu_page_role role =3D mmu->root_role; @@ -285,7 +285,6 @@ int kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu) */ mmu->root.hpa =3D __pa(root->spt); mmu->root.pgd =3D 0; - return 0; } =20 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 14421d080fe6..a0e00284b75d 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -10,7 +10,7 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm); void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); =20 -int kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu); +void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu); =20 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *= 
root) {
-- 
2.34.1

From nobody Thu Nov 14 07:05:12 2024
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:45 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 07/18] KVM: x86/tdp_mmu: Take struct kvm in iter loops Date: Thu, 18 Jul 2024 14:12:19 -0700 Message-Id: <20240718211230.1492011-8-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Add a struct kvm argument to the TDP MMU iterators. Future changes will want to change how the iterator behaves based on a member of struct kvm. Change the signature and callers of the iterator loop helpers in a separate patch to make the future one easier to review. Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v3: - Split from "KVM: x86/mmu: Support GFN direct mask" (Paolo) --- arch/x86/kvm/mmu/tdp_iter.h | 6 +++--- arch/x86/kvm/mmu/tdp_mmu.c | 36 ++++++++++++++++++------------------ 2 files changed, 21 insertions(+), 21 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index 2880fd392e0c..d8f2884e3c66 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -122,13 +122,13 @@ struct tdp_iter { * Iterates over every SPTE mapping the GFN range [start, end) in a * preorder traversal. 
*/ -#define for_each_tdp_pte_min_level(iter, root, min_level, start, end) \ +#define for_each_tdp_pte_min_level(iter, kvm, root, min_level, start, end)= \ for (tdp_iter_start(&iter, root, min_level, start); \ iter.valid && iter.gfn < end; \ tdp_iter_next(&iter)) =20 -#define for_each_tdp_pte(iter, root, start, end) \ - for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) +#define for_each_tdp_pte(iter, kvm, root, start, end) \ + for_each_tdp_pte_min_level(iter, kvm, root, PG_LEVEL_4K, start, end) =20 tdp_ptep_t spte_to_child_pt(u64 pte, int level); =20 diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index f4df20b91817..89b8a8eed116 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -675,18 +675,18 @@ static inline void tdp_mmu_iter_set_spte(struct kvm *= kvm, struct tdp_iter *iter, iter->gfn, iter->level); } =20 -#define tdp_root_for_each_pte(_iter, _root, _start, _end) \ - for_each_tdp_pte(_iter, _root, _start, _end) +#define tdp_root_for_each_pte(_iter, _kvm, _root, _start, _end) \ + for_each_tdp_pte(_iter, _kvm, _root, _start, _end) =20 -#define tdp_root_for_each_leaf_pte(_iter, _root, _start, _end) \ - tdp_root_for_each_pte(_iter, _root, _start, _end) \ +#define tdp_root_for_each_leaf_pte(_iter, _kvm, _root, _start, _end) \ + tdp_root_for_each_pte(_iter, _kvm, _root, _start, _end) \ if (!is_shadow_present_pte(_iter.old_spte) || \ !is_last_spte(_iter.old_spte, _iter.level)) \ continue; \ else =20 -#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end) \ - for_each_tdp_pte(_iter, root_to_sp(_mmu->root.hpa), _start, _end) +#define tdp_mmu_for_each_pte(_iter, _kvm, _mmu, _start, _end) \ + for_each_tdp_pte(_iter, _kvm, root_to_sp(_mmu->root.hpa), _start, _end) =20 /* * Yield if the MMU lock is contended or this thread needs to return contr= ol @@ -752,7 +752,7 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct = kvm_mmu_page *root, gfn_t end =3D tdp_mmu_max_gfn_exclusive(); gfn_t start =3D 0; =20 - for_each_tdp_pte_min_level(iter, root, zap_level, start, end) { + for_each_tdp_pte_min_level(iter, kvm, root, zap_level, start, end) { retry: if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; @@ -856,7 +856,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct k= vm_mmu_page *root, =20 rcu_read_lock(); =20 - for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) { + for_each_tdp_pte_min_level(iter, kvm, root, PG_LEVEL_4K, start, end) { if (can_yield && tdp_mmu_iter_cond_resched(kvm, &iter, flush, false)) { flush =3D false; @@ -1123,7 +1123,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm= _page_fault *fault) =20 rcu_read_lock(); =20 - tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) { + tdp_mmu_for_each_pte(iter, kvm, mmu, fault->gfn, fault->gfn + 1) { int r; =20 if (fault->nx_huge_page_workaround_enabled) @@ -1221,7 +1221,7 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(st= ruct kvm *kvm, for_each_tdp_mmu_root(kvm, root, range->slot->as_id) { rcu_read_lock(); =20 - tdp_root_for_each_leaf_pte(iter, root, range->start, range->end) + tdp_root_for_each_leaf_pte(iter, kvm, root, range->start, range->end) ret |=3D handler(kvm, &iter, range); =20 rcu_read_unlock(); @@ -1304,7 +1304,7 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct = kvm_mmu_page *root, =20 BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL); =20 - for_each_tdp_pte_min_level(iter, root, min_level, start, end) { + for_each_tdp_pte_min_level(iter, kvm, root, min_level, start, end) { retry: if 
(tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; @@ -1467,7 +1467,7 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *= kvm, * level above the target level (e.g. splitting a 1GB to 512 2MB pages, * and then splitting each of those to 512 4KB pages). */ - for_each_tdp_pte_min_level(iter, root, target_level + 1, start, end) { + for_each_tdp_pte_min_level(iter, kvm, root, target_level + 1, start, end)= { retry: if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; @@ -1552,7 +1552,7 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, st= ruct kvm_mmu_page *root, =20 rcu_read_lock(); =20 - tdp_root_for_each_pte(iter, root, start, end) { + tdp_root_for_each_pte(iter, kvm, root, start, end) { retry: if (!is_shadow_present_pte(iter.old_spte) || !is_last_spte(iter.old_spte, iter.level)) @@ -1607,7 +1607,7 @@ static void clear_dirty_pt_masked(struct kvm *kvm, st= ruct kvm_mmu_page *root, =20 rcu_read_lock(); =20 - tdp_root_for_each_leaf_pte(iter, root, gfn + __ffs(mask), + tdp_root_for_each_leaf_pte(iter, kvm, root, gfn + __ffs(mask), gfn + BITS_PER_LONG) { if (!mask) break; @@ -1664,7 +1664,7 @@ static void zap_collapsible_spte_range(struct kvm *kv= m, =20 rcu_read_lock(); =20 - for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) { + for_each_tdp_pte_min_level(iter, kvm, root, PG_LEVEL_2M, start, end) { retry: if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; @@ -1734,7 +1734,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct= kvm_mmu_page *root, =20 rcu_read_lock(); =20 - for_each_tdp_pte_min_level(iter, root, min_level, gfn, gfn + 1) { + for_each_tdp_pte_min_level(iter, kvm, root, min_level, gfn, gfn + 1) { if (!is_shadow_present_pte(iter.old_spte) || !is_last_spte(iter.old_spte, iter.level)) continue; @@ -1789,7 +1789,7 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 a= ddr, u64 *sptes, =20 *root_level =3D vcpu->arch.mmu->root_role.level; =20 - tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) { + tdp_mmu_for_each_pte(iter, vcpu->kvm, mmu, gfn, gfn + 1) { leaf =3D iter.level; sptes[leaf] =3D iter.old_spte; } @@ -1815,7 +1815,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vc= pu *vcpu, gfn_t gfn, struct kvm_mmu *mmu =3D vcpu->arch.mmu; tdp_ptep_t sptep =3D NULL; =20 - tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) { + tdp_mmu_for_each_pte(iter, vcpu->kvm, mmu, gfn, gfn + 1) { *spte =3D iter.old_spte; sptep =3D iter.sptep; } --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 45A7D1482E8; Thu, 18 Jul 2024 21:12:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337173; cv=none; b=EyFrOCYIg4rS8x4pyNmkwNDn3MhWZp/uvLVGhlNSulhpkTrgIYNGh1wIRH1rNtLyfsBOB01SZkHGN3+T1FN3l3RNEuZ0VtgKAp+EfmzelwK5ie4phglLsaNSoMlc626ZBUUlPdf98/AIz8sW2hpM4oOabaweym74XuFH3ZCo+KI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337173; c=relaxed/simple; bh=ErxbfdqQRNxj/yVhrRUoQP5sQ+QEnblBq2Uf+/OullE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Q4/LD0gNzMktt7A/MgN0Myt/AKVXIx06r50Tf3SpQR2NOCECQBmF+MoTkNpy6BTd+n/bcP8JEw7s0BwVNalFeIcIjg/HD2c9HuEpD6IGmqXX+M2GGASToIUr8DgZ6/zyTX/nPrystFm9fXWOTwmDoTIizVi2QfaG4ylfw1Ts928= 
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=b5FklUsU; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="b5FklUsU" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337171; x=1752873171; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ErxbfdqQRNxj/yVhrRUoQP5sQ+QEnblBq2Uf+/OullE=; b=b5FklUsU1QoNQZBLHlZRANycb7CF387yAedZMI7CDjr8llqRde28P43D lfspGk2lvrN/OpJOItZPI+sjw//X2W6vBk6QDJlG1q8woen7/1vKH/con l87c4XehAh1rjCJ1/HDUAFVcr2S5dGatyxZHJrQbgz47d33eSA09OJlJk 5B55vpJBC/BcljZuMDmCY/pt5A8eHCRttTQ57pzliU2cGuwjEQ/VGwSN5 ZDg5ujahIaqVcxsTmP0Latftd9SqL6SDuNgNE+ubzRA9lIz71qobpJVTk w/fpDtUBezY3bqjFgOj3HUc4MsbTSpqx1P0IVn2aJ7XtDkY0Cg5qRFeSv A==; X-CSE-ConnectionGUID: F1uQolgDTfixn/96rqUUCQ== X-CSE-MsgGUID: mc1G8hj6Qlmd2mZxKGjpsA== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697428" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697428" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:46 -0700 X-CSE-ConnectionGUID: +jGES0tmQqC/fhGO3g4izw== X-CSE-MsgGUID: AHw6fDmzSCKGQoHRtB7pYQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760394" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:45 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 08/18] KVM: x86/mmu: Support GFN direct bits Date: Thu, 18 Jul 2024 14:12:20 -0700 Message-Id: <20240718211230.1492011-9-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Teach the MMU to map guest GFNs at a massaged position on the TDP, to aid in implementing TDX shared memory. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, where the shared half is managed within KVM in normal page tables. For TDX, the shared half will be mapped in the higher alias, with a "shared bit" set in the GPA. However, KVM will still manage it with the same memslots as the private half. This means memslot looks ups and zapping operations will be provided with a GFN without the shared bit set. 
So KVM will either need to apply or strip the shared bit before mapping or zapping the shared EPT. Having GFNs sometimes have the shared bit and sometimes not would make the code confusing.

So instead arrange the code such that GFNs never have the shared bit set. Create a concept of "direct bits" that are stripped from the fault address when setting fault->gfn, and applied within the TDP MMU iterator. Calling code will behave as if it is operating on the PTE mapping the GFN (without shared bits) but within the iterator, the actual mappings will be shifted using bits specific to the root. SPs will have the GFN set without the shared bit. In the end the TDP MMU will behave like it is mapping things at the GFN without the shared bit but with a strange page table format where everything is offset by the shared bit.

Since TDX only needs to shift the mapping like this for the shared bit, which is mapped as the normal TDP root, add a "gfn_direct_bits" field to the kvm_arch structure for each VM with a default value of 0. It will have the bit set at the position of the GPA shared bit in GFN through TD specific initialization code. Keep TDX specific concepts out of the MMU code by not naming it "shared".

Ranged TLB flushes (i.e. flush_remote_tlbs_range()) target specific GFN ranges. In the convention established above, these would need to target the shifted GFN range. It won't matter functionally, since the actual implementation will always result in a full flush for the only planned user (TDX). For correctness reasons, future changes can provide a TDX x86_ops.flush_remote_tlbs_range implementation to return -EOPNOTSUPP and force the full flush for TDs.

This leaves one problem. Some operations use a concept of max GFN (i.e. kvm_mmu_max_gfn()) to iterate over the whole TDP range. When applying the direct mask to the start of the range, the iterator would end up skipping iterating over the range not covered by the direct mask bit. For safety, make sure the __tdp_mmu_zap_root() operation iterates over the full GFN range supported by the underlying TDP format. Add a new iterator helper, for_each_tdp_pte_min_level_all(), that iterates the entire TDP GFN range, regardless of root.

Signed-off-by: Isaku Yamahata
Co-developed-by: Yan Zhao
Signed-off-by: Yan Zhao
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
---
v4:
- Add for_each_tdp_pte_min_level_all()
- Log typos
v3:
- Add comment for kvm_gfn_root_mask() (Paolo)
- Change names mask -> bits (Paolo)
- Add comment in struct definition for fault->gfn not containing shared bit. (Paolo)
- Drop special handling in kvm_arch_flush_remote_tlbs_range(), implement kvm_x86_ops.flush_remote_tlbs_range in a future patch. (Paolo)
- Do addition of kvm arg to iterator in previous patch (Paolo)
- OR gfn_bits in try_step_side() too, because of issue seen with 4 level EPT
- Add warning for GFN bits in wrong arg in tdp_iter_start()
v2:
- Rename from "KVM: x86/mmu: Add address conversion functions for TDX shared bit of GPA"
- Dropped Binbin's reviewed-by tag because of the extent of the changes
- Rename gfn_shared_mask to gfn_direct_mask.
- Don't include shared bits in GFNs, hide the existence in the TDP MMU iterator.
- Don't do range flushes if a gfn_direct_mask is present.
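To make the GFN arithmetic above concrete, a small standalone sketch follows. The shared-bit position used here (GPA bit 47) is only an example value and the EXAMPLE_* names are illustrative, not identifiers from this series:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

/* Example values only: a shared bit at GPA bit 47, i.e. GFN bit 35. */
#define EXAMPLE_PAGE_SHIFT	12
#define EXAMPLE_SHARED_GPA_BIT	47
#define EXAMPLE_GFN_DIRECT_BITS \
	(1ull << (EXAMPLE_SHARED_GPA_BIT - EXAMPLE_PAGE_SHIFT))

int main(void)
{
	/* A faulting GPA in the shared (higher) alias of guest memory. */
	uint64_t fault_addr = (1ull << EXAMPLE_SHARED_GPA_BIT) | 0x1234000;

	/* Memslot lookups and zapping only ever see the stripped GFN... */
	gfn_t gfn = (fault_addr >> EXAMPLE_PAGE_SHIFT) &
		    ~EXAMPLE_GFN_DIRECT_BITS;

	/* ...while the TDP MMU iterator re-applies the per-root bits when
	 * indexing the actual page tables. */
	gfn_t mapping_gfn = gfn | EXAMPLE_GFN_DIRECT_BITS;

	printf("gfn=%#llx mapping_gfn=%#llx\n",
	       (unsigned long long)gfn, (unsigned long long)mapping_gfn);
	return 0;
}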
--- arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/mmu.h | 5 +++++ arch/x86/kvm/mmu/mmu_internal.h | 28 ++++++++++++++++++++++++++-- arch/x86/kvm/mmu/tdp_iter.c | 10 ++++++---- arch/x86/kvm/mmu/tdp_iter.h | 15 +++++++++++---- arch/x86/kvm/mmu/tdp_mmu.c | 5 +---- 6 files changed, 51 insertions(+), 14 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index f764a07a32f9..1730f94c9742 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1525,6 +1525,8 @@ struct kvm_arch { */ #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1) struct kvm_mmu_memory_cache split_desc_cache; + + gfn_t gfn_direct_bits; }; =20 struct kvm_vm_stat { diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 0c3bf89cf7db..63179a4fba7b 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -323,4 +323,9 @@ static inline bool kvm_has_mirrored_tdp(const struct kv= m *kvm) { return kvm->arch.vm_type =3D=3D KVM_X86_TDX_VM; } + +static inline gfn_t kvm_gfn_direct_bits(const struct kvm *kvm) +{ + return kvm->arch.gfn_direct_bits; +} #endif diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_interna= l.h index 3319d0a42f36..6e768cd438b9 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -6,6 +6,8 @@ #include #include =20 +#include "mmu.h" + #ifdef CONFIG_KVM_PROVE_MMU #define KVM_MMU_WARN_ON(x) WARN_ON_ONCE(x) #else @@ -173,6 +175,18 @@ static inline void kvm_mmu_alloc_external_spt(struct k= vm_vcpu *vcpu, struct kvm_ sp->external_spt =3D kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_external_= spt_cache); } =20 +static inline gfn_t kvm_gfn_root_bits(const struct kvm *kvm, const struct = kvm_mmu_page *root) +{ + /* + * Since mirror SPs are used only for TDX, which maps private memory + * at its "natural" GFN, no mask needs to be applied to them - and, duall= y, + * we expect that the bits is only used for the shared PT. + */ + if (is_mirror_sp(root)) + return 0; + return kvm_gfn_direct_bits(kvm); +} + static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page = *sp) { /* @@ -257,7 +271,12 @@ struct kvm_page_fault { */ u8 goal_level; =20 - /* Shifted addr, or result of guest page table walk if addr is a gva. */ + /* + * Shifted addr, or result of guest page table walk if addr is a gva. In + * the case of VM where memslot's can be mapped at multiple GPA aliases + * (i.e. TDX), the gfn field does not contain the bit that selects between + * the aliases (i.e. the shared bit for TDX). + */ gfn_t gfn; =20 /* The memslot containing gfn. May be NULL. */ @@ -343,7 +362,12 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcp= u *vcpu, gpa_t cr2_or_gpa, int r; =20 if (vcpu->arch.mmu->root_role.direct) { - fault.gfn =3D fault.addr >> PAGE_SHIFT; + /* + * Things like memslots don't understand the concept of a shared + * bit. Strip it so that the GFN can be used like normal, and the + * fault.addr can be used when the shared bit is needed. 
+ */ + fault.gfn =3D gpa_to_gfn(fault.addr) & ~kvm_gfn_direct_bits(vcpu->kvm); fault.slot =3D kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn); } =20 diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index 04c247bfe318..9e17bfa80901 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -12,7 +12,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter) { iter->sptep =3D iter->pt_path[iter->level - 1] + - SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level); + SPTE_INDEX((iter->gfn | iter->gfn_bits) << PAGE_SHIFT, iter->level); iter->old_spte =3D kvm_tdp_mmu_read_spte(iter->sptep); } =20 @@ -37,15 +37,17 @@ void tdp_iter_restart(struct tdp_iter *iter) * rooted at root_pt, starting with the walk to translate next_last_level_= gfn. */ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, - int min_level, gfn_t next_last_level_gfn) + int min_level, gfn_t next_last_level_gfn, gfn_t gfn_bits) { if (WARN_ON_ONCE(!root || (root->role.level < 1) || - (root->role.level > PT64_ROOT_MAX_LEVEL))) { + (root->role.level > PT64_ROOT_MAX_LEVEL) || + (gfn_bits && next_last_level_gfn >=3D gfn_bits))) { iter->valid =3D false; return; } =20 iter->next_last_level_gfn =3D next_last_level_gfn; + iter->gfn_bits =3D gfn_bits; iter->root_level =3D root->role.level; iter->min_level =3D min_level; iter->pt_path[iter->root_level - 1] =3D (tdp_ptep_t)root->spt; @@ -113,7 +115,7 @@ static bool try_step_side(struct tdp_iter *iter) * Check if the iterator is already at the end of the current page * table. */ - if (SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level) =3D=3D + if (SPTE_INDEX((iter->gfn | iter->gfn_bits) << PAGE_SHIFT, iter->level) = =3D=3D (SPTE_ENT_PER_PAGE - 1)) return false; =20 diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index d8f2884e3c66..047b78333653 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -93,8 +93,10 @@ struct tdp_iter { tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL]; /* A pointer to the current SPTE */ tdp_ptep_t sptep; - /* The lowest GFN mapped by the current SPTE */ + /* The lowest GFN (mask bits excluded) mapped by the current SPTE */ gfn_t gfn; + /* Mask applied to convert the GFN to the mapping GPA */ + gfn_t gfn_bits; /* The level of the root page given to the iterator */ int root_level; /* The lowest level the iterator should traverse to */ @@ -123,17 +125,22 @@ struct tdp_iter { * preorder traversal. 
*/ #define for_each_tdp_pte_min_level(iter, kvm, root, min_level, start, end)= \ - for (tdp_iter_start(&iter, root, min_level, start); \ - iter.valid && iter.gfn < end; \ + for (tdp_iter_start(&iter, root, min_level, start, kvm_gfn_root_bits(kvm,= root)); \ + iter.valid && iter.gfn < end; \ tdp_iter_next(&iter)) =20 +#define for_each_tdp_pte_min_level_all(iter, root, min_level) \ + for (tdp_iter_start(&iter, root, min_level, 0, 0); \ + iter.valid && iter.gfn < tdp_mmu_max_gfn_exclusive(); \ + tdp_iter_next(&iter)) + #define for_each_tdp_pte(iter, kvm, root, start, end) \ for_each_tdp_pte_min_level(iter, kvm, root, PG_LEVEL_4K, start, end) =20 tdp_ptep_t spte_to_child_pt(u64 pte, int level); =20 void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, - int min_level, gfn_t next_last_level_gfn); + int min_level, gfn_t next_last_level_gfn, gfn_t gfn_bits); void tdp_iter_next(struct tdp_iter *iter); void tdp_iter_restart(struct tdp_iter *iter); =20 diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 89b8a8eed116..2befece426aa 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -749,10 +749,7 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct= kvm_mmu_page *root, { struct tdp_iter iter; =20 - gfn_t end =3D tdp_mmu_max_gfn_exclusive(); - gfn_t start =3D 0; - - for_each_tdp_pte_min_level(iter, kvm, root, zap_level, start, end) { + for_each_tdp_pte_min_level_all(iter, root, zap_level) { retry: if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AEB3C1474A2; Thu, 18 Jul 2024 21:12:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337171; cv=none; b=AjBWZM7+OvCnqzYkO0AE1m8CS11d+qosAiFR3o6yUOV/9K240H5qL9WcuD+RiVeko5aw+RzcynRiP/ChHN+iyziUJR0jxLLqEQPRWxK9X/8WUu8dlQ/EX5VZALiLdE2WqaVLMfEpOigqWQpR6I5GxrOBNL0IEKg/ZSsuVozGo6s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337171; c=relaxed/simple; bh=J6Z4WZnlx/IJ6mxLdmW9GaPD4N1LQimy/PMxLt54QWs=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=l75JWdKJEXBV4lF99s64CTYtfvYELmbKf5EotrU58fMemcrtUnUHy0dJKair+KWj+UopC1O4XwLoC2KG4tezm0lU12CcZdfRYiCIDPLq9UbtTTWAntHzTZCwxlYdoCDf3mOS1wX4fiMFa0gUUr4AQjrX3vyLVYhEjvwfB8OVyb4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=DAQ0xGq6; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="DAQ0xGq6" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337169; x=1752873169; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=J6Z4WZnlx/IJ6mxLdmW9GaPD4N1LQimy/PMxLt54QWs=; 
b=DAQ0xGq6bexK5Iv5oAReJgp5WTFNXc6f0tHH+GEY3OmV7Nhli/wnPs8D Y0zK0mrpH7/V/x8zDU51xgi3at5nnV/BUQa/72C8IUAZ2oY8703Gs9Ysc l4w+eGh6NY53DJ9uiggXCzaRYYl5JT4LjCkIiuEs6D8SCIB+ND1IGsCKg R4N9BrQCSdT5JYiOXjvxT5XVY1F28QqMdHxZ2e+NQi+HlAXMH2LhDUAb9 GJW/p6/BJ80HFMBupqZ+2K7FFRGmqZREWBzzVU4ECMM3SO8F1cHYM5WHK Nken92KZdydcEOKjTXIs1F5T6JEMD79mAC/Jat+zxGHn+0VO7/LNtEm9Y A==; X-CSE-ConnectionGUID: AQT2rOJMQfWb/aYFam/2aA== X-CSE-MsgGUID: MFE5pTQMTT+S7APYV0GGsw== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697433" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697433" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:46 -0700 X-CSE-ConnectionGUID: 3QZlwaJLRzaSMYy3ZP/M8g== X-CSE-MsgGUID: wqJ1sLxYTzmxvwnRmoKWTA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760397" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:46 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 09/18] KVM: x86/tdp_mmu: Extract root invalid check from tdx_mmu_next_root() Date: Thu, 18 Jul 2024 14:12:21 -0700 Message-Id: <20240718211230.1492011-10-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Extract tdp_mmu_root_match() to check if the root has given types and use it for the root page table iterator. It checks only_invalid now. TDX KVM operates on a shared page table only (Shared-EPT), a mirrored page table only (Secure-EPT), or both based on the operation. KVM MMU notifier operations only on shared page table. KVM guest_memfd invalidation operations only on mirrored page table, and so on. Introduce a centralized matching function instead of open coding matching logic in the iterator. The next step is to extend the function to check whether the page is shared or private Link: https://lore.kernel.org/kvm/ZivazWQw1oCU8VBC@google.com/ Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v1: - New patch --- arch/x86/kvm/mmu/tdp_mmu.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 2befece426aa..412e9a031671 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -92,6 +92,14 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mm= u_page *root) call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback); } =20 +static bool tdp_mmu_root_match(struct kvm_mmu_page *root, bool only_valid) +{ + if (only_valid && root->role.invalid) + return false; + + return true; +} + /* * Returns the next root after @prev_root (or the first root if @prev_root= is * NULL). 
A reference to the returned root is acquired, and the reference= to @@ -125,7 +133,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kv= m *kvm, typeof(*next_root), link); =20 while (next_root) { - if ((!only_valid || !next_root->role.invalid) && + if (tdp_mmu_root_match(next_root, only_valid) && kvm_tdp_mmu_get_root(next_root)) break; =20 @@ -176,7 +184,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kv= m *kvm, list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) \ if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) && \ ((_as_id >=3D 0 && kvm_mmu_page_as_id(_root) !=3D _as_id) || \ - ((_only_valid) && (_root)->role.invalid))) { \ + !tdp_mmu_root_match((_root), (_only_valid)))) { \ } else =20 #define for_each_tdp_mmu_root(_kvm, _root, _as_id) \ --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 45A061482E7; Thu, 18 Jul 2024 21:12:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337173; cv=none; b=XnChky+WqYgzcLs7X3QyIRtY2Ia4a44CBnU/37oTe5Vptc0wvuXG5G69scOtdxFuoX446iFF3xQSSS97sd3PenqJCr+0XBKlgB/iHw2zDMY3btsrg+MbMYvKENY4/JykrhGXsF7S8Ks4gkKw0ChltuTHDE3kmjZFq18kvrL3lpU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337173; c=relaxed/simple; bh=ZFEM13pQvfYyREXhXVeK7dytiHFqPZUsq9cnopNZdxc=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Op3U6804BYL+HAoi/U4oql+DwGjFtmcS8CWlrW4IixFgUvvVAa2vjd2/1dLwV7mCwKNiS1LXqD6Tp5viHPVUzaKS30KzFPAk/AEomJnMBUyZE8Yy7GiFoZ+KNfG38vAFL5YimtjQs+V1V0F3SdMZGsU5vsz4W8Lyuqnzm66KauQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=NZY/caG7; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NZY/caG7" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337171; x=1752873171; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZFEM13pQvfYyREXhXVeK7dytiHFqPZUsq9cnopNZdxc=; b=NZY/caG7L9eq1eSHtHJlcZawM1FBYbqC/+yGxqorvCazVZU4MULEnZMY QrGElgRMKYvnfm9cQTQDlXh2IExLlrkxRtNmpix0LNjPk2zRlCY5RwmSk OF2P+eZFxwIvH7aAU+FvUIpHROZC5tloq2U454GoCvr0J5rm2M2pzMIZt w1kfNR+Dhc3RHIjQ3EcsXa/umfLYrTVeGUTgWH87KOlY66gz0Sj55r1iY WrjnI4oNCnbuBRPEgT52WuWRYOppGWQ/pJBI/4fQYeesjC/q9rroSPqpM pF9TG6ALxL6wdADnypGMpkIkgU8WgcCabefo6braNkMahhPn1+w5wmtfC w==; X-CSE-ConnectionGUID: Hfhdsz5kQsKCPeks4+yxfg== X-CSE-MsgGUID: nfajXWD6QnmnCMM5uaODpw== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697437" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697437" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:47 -0700 X-CSE-ConnectionGUID: O3E8WUe1Rzus0cbpJc80zQ== X-CSE-MsgGUID: aX30zjW5RzS18Ttt0DWOiQ== 
X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760403" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:47 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 10/18] KVM: x86/tdp_mmu: Introduce KVM MMU root types to specify page table type Date: Thu, 18 Jul 2024 14:12:22 -0700 Message-Id: <20240718211230.1492011-11-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Define an enum kvm_tdp_mmu_root_types to specify the KVM MMU root type [1] so that the iterator on the root page table can consistently filter the root page table type instead of only_valid. TDX KVM will operate on KVM page tables with specified types. Shared page table, private page table, or both. Introduce an enum instead of bool only_valid so that we can easily enhance page table types applicable to shared, private, or both in addition to valid or not. Replace only_valid=3Dfalse with KVM_ANY_ROOTS and only_valid=3Dtrue with KVM_ANY_VALID_ROOTS. Use KVM_ANY_ROOTS and KVM_ANY_VALID_ROOTS to wrap KVM_VALID_ROOTS to avoid further code churn when direct vs mirror root concepts are introduced in future patches. Link: https://lore.kernel.org/kvm/ZivazWQw1oCU8VBC@google.com/ [1] Suggested-by: Sean Christopherson Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v3: - Drop KVM_ANY_ROOTS, KVM_ANY_VALID_ROOTS and switch to KVM_VALID_ROOTS and KVM_ALL_ROOTS. (Paolo) v1: - Newly introduced. --- arch/x86/kvm/mmu/tdp_mmu.c | 41 +++++++++++++++++++------------------- arch/x86/kvm/mmu/tdp_mmu.h | 7 +++++++ 2 files changed, 28 insertions(+), 20 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 412e9a031671..2e7e6e3137c6 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -92,27 +92,28 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_m= mu_page *root) call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback); } =20 -static bool tdp_mmu_root_match(struct kvm_mmu_page *root, bool only_valid) +static bool tdp_mmu_root_match(struct kvm_mmu_page *root, + enum kvm_tdp_mmu_root_types types) { - if (only_valid && root->role.invalid) - return false; + if (root->role.invalid) + return types & KVM_INVALID_ROOTS; =20 return true; } =20 /* * Returns the next root after @prev_root (or the first root if @prev_root= is - * NULL). A reference to the returned root is acquired, and the reference= to - * @prev_root is released (the caller obviously must hold a reference to - * @prev_root if it's non-NULL). + * NULL) that matches with @types. A reference to the returned root is + * acquired, and the reference to @prev_root is released (the caller obvio= usly + * must hold a reference to @prev_root if it's non-NULL). * - * If @only_valid is true, invalid roots are skipped. 
+ * Roots that doesn't match with @types are skipped. * * Returns NULL if the end of tdp_mmu_roots was reached. */ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, struct kvm_mmu_page *prev_root, - bool only_valid) + enum kvm_tdp_mmu_root_types types) { struct kvm_mmu_page *next_root; =20 @@ -133,7 +134,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kv= m *kvm, typeof(*next_root), link); =20 while (next_root) { - if (tdp_mmu_root_match(next_root, only_valid) && + if (tdp_mmu_root_match(next_root, types) && kvm_tdp_mmu_get_root(next_root)) break; =20 @@ -158,20 +159,20 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct = kvm *kvm, * If shared is set, this function is operating under the MMU lock in read * mode. */ -#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _only_vali= d) \ - for (_root =3D tdp_mmu_next_root(_kvm, NULL, _only_valid); \ +#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _types) \ + for (_root =3D tdp_mmu_next_root(_kvm, NULL, _types); \ ({ lockdep_assert_held(&(_kvm)->mmu_lock); }), _root; \ - _root =3D tdp_mmu_next_root(_kvm, _root, _only_valid)) \ + _root =3D tdp_mmu_next_root(_kvm, _root, _types)) \ if (_as_id >=3D 0 && kvm_mmu_page_as_id(_root) !=3D _as_id) { \ } else =20 #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id) \ - __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, true) + __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, KVM_VALID_ROOTS) =20 #define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \ - for (_root =3D tdp_mmu_next_root(_kvm, NULL, false); \ + for (_root =3D tdp_mmu_next_root(_kvm, NULL, KVM_ALL_ROOTS); \ ({ lockdep_assert_held(&(_kvm)->mmu_lock); }), _root; \ - _root =3D tdp_mmu_next_root(_kvm, _root, false)) + _root =3D tdp_mmu_next_root(_kvm, _root, KVM_ALL_ROOTS)) =20 /* * Iterate over all TDP MMU roots. Requires that mmu_lock be held for wri= te, @@ -180,18 +181,18 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct = kvm *kvm, * Holding mmu_lock for write obviates the need for RCU protection as the = list * is guaranteed to be stable. 
*/ -#define __for_each_tdp_mmu_root(_kvm, _root, _as_id, _only_valid) \ +#define __for_each_tdp_mmu_root(_kvm, _root, _as_id, _types) \ list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) \ if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) && \ ((_as_id >=3D 0 && kvm_mmu_page_as_id(_root) !=3D _as_id) || \ - !tdp_mmu_root_match((_root), (_only_valid)))) { \ + !tdp_mmu_root_match((_root), (_types)))) { \ } else =20 #define for_each_tdp_mmu_root(_kvm, _root, _as_id) \ - __for_each_tdp_mmu_root(_kvm, _root, _as_id, false) + __for_each_tdp_mmu_root(_kvm, _root, _as_id, KVM_ALL_ROOTS) =20 #define for_each_valid_tdp_mmu_root(_kvm, _root, _as_id) \ - __for_each_tdp_mmu_root(_kvm, _root, _as_id, true) + __for_each_tdp_mmu_root(_kvm, _root, _as_id, KVM_VALID_ROOTS) =20 static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) { @@ -1201,7 +1202,7 @@ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, str= uct kvm_gfn_range *range, { struct kvm_mmu_page *root; =20 - __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false) + __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, KVM_ALL= _ROOTS) flush =3D tdp_mmu_zap_leafs(kvm, root, range->start, range->end, range->may_block, flush); =20 diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index a0e00284b75d..8980c869e39c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -19,6 +19,13 @@ __must_check static inline bool kvm_tdp_mmu_get_root(str= uct kvm_mmu_page *root) =20 void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root); =20 +enum kvm_tdp_mmu_root_types { + KVM_INVALID_ROOTS =3D BIT(0), + + KVM_VALID_ROOTS =3D BIT(1), + KVM_ALL_ROOTS =3D KVM_VALID_ROOTS | KVM_INVALID_ROOTS, +}; + bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool f= lush); bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp); void kvm_tdp_mmu_zap_all(struct kvm *kvm); --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 72DB01482F6; Thu, 18 Jul 2024 21:12:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337173; cv=none; b=lm7hCQXtTZngYCCkXRExfXE24rJOVznDZ6WlDy9gFO46LtouL4cV70fHpnnb0DsJd+6hecRDjKPXop0QA7HyYwKh5nhpVxUJBg/JgeAdfRklve+ZcB5aYeY3qWDFj6//Sq7XuaTR608yEVq2rZ5je2CKiBeXGrR0mYV/fZHAC4Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337173; c=relaxed/simple; bh=GIqmXeHXZ5jjeqhd6Kt07dWbmumlLDA3CxsAU3bJBSQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=jIAQONqk9C40hIYQazqSO+EG6yfKvRRNqAp96rNnWOgB70HZ+QGcOp4xmQjQeWEmu/hfB/bwGy7DKae7df7Akm+snDEc/rW/VTGpbv3P2f88Ce+E8bALMB7abFzkWpucO66UQEfqJApxnYk681DsLGtEyn06cAOH1ztT3WF/EZo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=CnIfGK+P; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass 
(2048-bit key) header.d=intel.com header.i=@intel.com header.b="CnIfGK+P" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337171; x=1752873171; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=GIqmXeHXZ5jjeqhd6Kt07dWbmumlLDA3CxsAU3bJBSQ=; b=CnIfGK+PaBbUBw+uzoA7h0Wmy1kSIG3SsBYZLytcIlsDNXk3AAiQjwnV XS3XHSoeu3lrVjGessygoYpMDcpgl+SlasA9GjCx1qzQLGkSh8LH2LXeJ 6L5ravOVpuJrV+VOT4irdrDsnE7rVL9qDSPukXW1709AxStyI6XU5F96o Xtt3SqH5+Z/tjgPd+1D/XnPOfZsN+gCChJSvUyyM/AtoUTyQDJ9qAiSwH zsyCW/yQ3Sv2E7xzenh4PV+ncGlnqmhQW6Gzck4nP9z5Hg1WyBcSDlK/F L/CrIBSPAEuB1AB4314zDaB9ce+Z7caDRFwB5C9oz9Ua8T8TmavM5qzJk A==; X-CSE-ConnectionGUID: oJUruyzvTQeFrcA7bLOlFQ== X-CSE-MsgGUID: lFT+0npfQ/O5LoUdzTLT4A== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697441" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697441" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:48 -0700 X-CSE-ConnectionGUID: hgBd8u5ERiyUAczeDEccAQ== X-CSE-MsgGUID: SHdXR4xoTrGPetcTVI0rnw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760407" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:47 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 11/18] KVM: x86/tdp_mmu: Take root in tdp_mmu_for_each_pte() Date: Thu, 18 Jul 2024 14:12:23 -0700 Message-Id: <20240718211230.1492011-12-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Take the root as an argument of tdp_mmu_for_each_pte() instead of looking it up in the mmu. With no other purpose of passing the mmu, drop it. Future changes will want to change which root is used based on the context of the MMU operation. So change the callers to pass in the root currently used, mmu->root.hpa in a preparatory patch to make the later one smaller and easier to review. 
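Condensed from the kvm_tdp_mmu_get_walk() hunk below, the resulting call pattern looks like this (kernel context assumed):

	struct kvm_mmu_page *root = root_to_sp(vcpu->arch.mmu->root.hpa);

	/* Callers now resolve the root themselves and hand it to the
	 * iterator, instead of passing the mmu and having the macro look
	 * the root up internally. */
	tdp_mmu_for_each_pte(iter, vcpu->kvm, root, gfn, gfn + 1) {
		leaf = iter.level;
		sptes[leaf] = iter.old_spte;
	}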
Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v3: - Split from "KVM: x86/mmu: Support GFN direct mask" (Paolo) --- arch/x86/kvm/mmu/tdp_mmu.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 2e7e6e3137c6..19bd891702a9 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -694,8 +694,8 @@ static inline void tdp_mmu_iter_set_spte(struct kvm *kv= m, struct tdp_iter *iter, continue; \ else =20 -#define tdp_mmu_for_each_pte(_iter, _kvm, _mmu, _start, _end) \ - for_each_tdp_pte(_iter, _kvm, root_to_sp(_mmu->root.hpa), _start, _end) +#define tdp_mmu_for_each_pte(_iter, _kvm, _root, _start, _end) \ + for_each_tdp_pte(_iter, _kvm, _root, _start, _end) =20 /* * Yield if the MMU lock is contended or this thread needs to return contr= ol @@ -1117,8 +1117,8 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, s= truct tdp_iter *iter, */ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { - struct kvm_mmu *mmu =3D vcpu->arch.mmu; struct kvm *kvm =3D vcpu->kvm; + struct kvm_mmu_page *root =3D root_to_sp(vcpu->arch.mmu->root.hpa); struct tdp_iter iter; struct kvm_mmu_page *sp; int ret =3D RET_PF_RETRY; @@ -1129,7 +1129,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm= _page_fault *fault) =20 rcu_read_lock(); =20 - tdp_mmu_for_each_pte(iter, kvm, mmu, fault->gfn, fault->gfn + 1) { + tdp_mmu_for_each_pte(iter, kvm, root, fault->gfn, fault->gfn + 1) { int r; =20 if (fault->nx_huge_page_workaround_enabled) @@ -1788,14 +1788,14 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes, int *root_level) { + struct kvm_mmu_page *root =3D root_to_sp(vcpu->arch.mmu->root.hpa); struct tdp_iter iter; - struct kvm_mmu *mmu =3D vcpu->arch.mmu; gfn_t gfn =3D addr >> PAGE_SHIFT; int leaf =3D -1; =20 *root_level =3D vcpu->arch.mmu->root_role.level; =20 - tdp_mmu_for_each_pte(iter, vcpu->kvm, mmu, gfn, gfn + 1) { + tdp_mmu_for_each_pte(iter, vcpu->kvm, root, gfn, gfn + 1) { leaf =3D iter.level; sptes[leaf] =3D iter.old_spte; } @@ -1817,11 +1817,11 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64= addr, u64 *sptes, u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *spte) { + struct kvm_mmu_page *root =3D root_to_sp(vcpu->arch.mmu->root.hpa); struct tdp_iter iter; - struct kvm_mmu *mmu =3D vcpu->arch.mmu; tdp_ptep_t sptep =3D NULL; =20 - tdp_mmu_for_each_pte(iter, vcpu->kvm, mmu, gfn, gfn + 1) { + tdp_mmu_for_each_pte(iter, vcpu->kvm, root, gfn, gfn + 1) { *spte =3D iter.old_spte; sptep =3D iter.sptep; } --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 322D214884F; Thu, 18 Jul 2024 21:12:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337175; cv=none; b=iZrvAjII5XzNKmO0h/LI21r0aenayRGU86XW4LIYctgIiKgrpy+6SbLOU1SR5axpgE13ydNn/ueIxWY4WVrTyxir8eIgUd7F+jkSMdjvcxdjFXz800bKLGFH1U1LHXFoc0bF9Qv5nKoymLydZRk4U0m/soCVzc6TLazrkTohBjY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337175; c=relaxed/simple; bh=oN/6G2YjqIL0Q1hGMug5N8fuxYVefiTEs3HboNcccPg=; 
h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=IBqkuTmreMZEFVpYUTONzlPEjC2iHnsdPAAkdBup8r6w4yi1Q2AyyYB92i7H7Yoen5sdtGyK0MKNl1rQ8rRj38PlPrspk8Ul/n3W+jiKwKxUDDwqmLskEVYnlalEeN6dETvj2iUSJMomuG2FQCDCUd2th0gPRgHRszlrRh+msZE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=WnNkOmgq; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WnNkOmgq" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337173; x=1752873173; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=oN/6G2YjqIL0Q1hGMug5N8fuxYVefiTEs3HboNcccPg=; b=WnNkOmgqYgajFJpC6f6DfeYHpOjOIJWjBhMUZoE3iMsC7EEYYd3Qo8id lMr0pUyjGncHd8Iu6+W6HnKxb9nbyhY1g/ScwlKVUDVVvXxWco8QeXRUW +tpeHCrTd9Vk7cPwi8DiYajXurQkB8vlO65DV14AhSQzpWN3u6jh/VSnJ 9URh9BXpbz+gf3YsAUVrv8LK4iuyNtse3jt8FBf+n9V/n/oVO/au34FOE YZFFwguBsQ53SEoOq/Q+773FFrcYNIcYy8vqIl7VZbmRutonunUVUVDuk ageOp77UEFVdKMZhPLOZZ80zMVuL788ybejO8rYiBkmS5AiqW4GOnsZoj g==; X-CSE-ConnectionGUID: FqJERYKqSaS011oEuBfeFA== X-CSE-MsgGUID: +UbttxdkRJq3i0HHPF/nww== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697445" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697445" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:48 -0700 X-CSE-ConnectionGUID: te1hf0gLSrqBt8QQtON4sw== X-CSE-MsgGUID: A3/MeiVoR1iGVJPR+MLh0g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760411" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:48 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 12/18] KVM: x86/tdp_mmu: Support mirror root for TDP MMU Date: Thu, 18 Jul 2024 14:12:24 -0700 Message-Id: <20240718211230.1492011-13-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Add the ability for the TDP MMU to maintain a mirror of a separate mapping. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, where the shared half is managed within KVM in normal page tables. 
In order to handle both shared and private memory, KVM needs to learn to handle faults and other operations on the correct root for the operation. KVM could learn the concept of private roots, and operate on them by calling out to operations that call into the TDX module. But there are two problems with that:

1. Calls into the TDX module are relatively slow compared to the simple accesses required to read a PTE managed directly by KVM.

2. Other Coco technologies deal with private memory completely differently and it will make the code confusing when being read from their perspective. Special operations added for TDX that set private or zap private memory will have nothing to do with these other private memory technologies (SEV, etc.).

To handle these, instead teach the TDP MMU about a new concept "mirror roots". Such roots maintain page tables that are not actually mapped, and are just used to traverse quickly to determine if the mid level page tables need to be installed. When the memory being mirrored needs to actually be changed, calls can be made via x86_ops.

  private KVM page fault       |
          |                    |
          V                    |
     private GPA               |     CPU protected EPTP
          |                    |           |
          V                    |           V
    mirror PT root             |     external PT root
          |                    |           |
          V                    |           V
       mirror PT --hook to propagate-->external PT
          |                    |           |
          \--------------------+------\    |
                               |      |    |
                               |      V    V
                               |    private guest page
                               |
                               |
  non-encrypted memory         |    encrypted memory
                               |

Leave calling out to actually update the private page tables that are being mirrored for later changes. Just implement the handling of MMU operations on mirrored roots.

In order to direct operations to the correct root, add root types KVM_DIRECT_ROOTS and KVM_MIRROR_ROOTS. Tie the usage of mirrored/direct roots to private/shared with conditionals. It could also be implemented by making the kvm_tdp_mmu_root_types and kvm_gfn_range_filter enum bits line up such that conversion could be a direct assignment with a cast. Don't do this because the mapping of private to mirrored is confusing enough. So it is worth not hiding the logic in type casting.

Cleanup the mirror root in kvm_mmu_destroy() instead of the normal place in kvm_mmu_free_roots(), because the private root that is being mirrored cannot be rebuilt like a normal root. It needs to persist for the lifetime of the VM.

The TDX module will also need to be provided with page tables to use for the actual mapping being mirrored by the mirrored page tables. Allocate these in the mapping path using the recently added kvm_mmu_alloc_external_spt().

Don't support 2M pages for now. This is avoided by forcing 4k pages in the fault. Add a KVM_BUG_ON() to verify.
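Condensed from the tdp_mmu_get_root_for_fault() helper added below (the function name in this sketch is illustrative), the root a page fault operates on is picked purely from whether the faulting GPA carries the direct bits:

static struct kvm_mmu_page *sketch_root_for_fault(struct kvm_vcpu *vcpu,
						  struct kvm_page_fault *fault)
{
	/* Private GPA (no direct bits set): walk the mirror root and let
	 * the propagation hooks update the external page tables. */
	if (unlikely(!kvm_is_addr_direct(vcpu->kvm, fault->addr)))
		return root_to_sp(vcpu->arch.mmu->mirror_root_hpa);

	/* Shared GPA: walk the normal direct root that KVM fully owns. */
	return root_to_sp(vcpu->arch.mmu->root.hpa);
}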
Signed-off-by: Isaku Yamahata Co-developed-by: Kai Huang Signed-off-by: Kai Huang Co-developed-by: Yan Zhao Signed-off-by: Yan Zhao Co-developed-by: Rick Edgecombe Signed-off-by: Rick Edgecombe --- v4: - Use true instead of 1 when setting role.is_mirror (Binbin) - Handle case of invalid direct root, but valid mirror root (Yan) - Log typos v3: - Change subject from "Make mmu notifier callbacks to check kvm_process" to "Propagate attr_filter to MMU notifier callbacks" (Paolo) - Remove no longer used for_each_tdp_mmu_root() (Binbin) v2: - Use newly added kvm_process_to_root_types() --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/mmu.h | 16 ++++++++++++ arch/x86/kvm/mmu/mmu.c | 12 ++++++++- arch/x86/kvm/mmu/tdp_mmu.c | 34 ++++++++++++++++++++------ arch/x86/kvm/mmu/tdp_mmu.h | 43 ++++++++++++++++++++++++++++++--- 5 files changed, 94 insertions(+), 12 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index 1730f94c9742..b142ef6e6676 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -460,6 +460,7 @@ struct kvm_mmu { int (*sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i); struct kvm_mmu_root_info root; + hpa_t mirror_root_hpa; union kvm_cpu_role cpu_role; union kvm_mmu_page_role root_role; =20 diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 63179a4fba7b..4f6c86294f05 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -128,6 +128,15 @@ void kvm_mmu_track_write(struct kvm_vcpu *vcpu, gpa_t = gpa, const u8 *new, =20 static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu) { + /* + * Checking root.hpa is sufficient even when KVM has mirror root. + * We can have either: + * (1) mirror_root_hpa =3D INVALID_PAGE, root.hpa =3D INVALID_PAGE + * (2) mirror_root_hpa =3D root, root.hpa =3D INVALID_PAGE + * (3) mirror_root_hpa =3D root1, root.hpa =3D root2 + * We don't ever have: + * mirror_root_hpa =3D INVALID_PAGE, root.hpa =3D root + */ if (likely(vcpu->arch.mmu->root.hpa !=3D INVALID_PAGE)) return 0; =20 @@ -328,4 +337,11 @@ static inline gfn_t kvm_gfn_direct_bits(const struct k= vm *kvm) { return kvm->arch.gfn_direct_bits; } + +static inline bool kvm_is_addr_direct(struct kvm *kvm, gpa_t gpa) +{ + gpa_t gpa_direct_bits =3D gfn_to_gpa(kvm_gfn_direct_bits(kvm)); + + return !gpa_direct_bits || (gpa & gpa_direct_bits); +} #endif diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2f7f372a4bfe..2c73360533c2 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3704,7 +3704,10 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *v= cpu) int r; =20 if (tdp_mmu_enabled) { - kvm_tdp_mmu_alloc_root(vcpu); + if (kvm_has_mirrored_tdp(vcpu->kvm) && + !VALID_PAGE(mmu->mirror_root_hpa)) + kvm_tdp_mmu_alloc_root(vcpu, true); + kvm_tdp_mmu_alloc_root(vcpu, false); return 0; } =20 @@ -6290,6 +6293,7 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, st= ruct kvm_mmu *mmu) =20 mmu->root.hpa =3D INVALID_PAGE; mmu->root.pgd =3D 0; + mmu->mirror_root_hpa =3D INVALID_PAGE; for (i =3D 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) mmu->prev_roots[i] =3D KVM_MMU_ROOT_INFO_INVALID; =20 @@ -7265,6 +7269,12 @@ int kvm_mmu_vendor_module_init(void) void kvm_mmu_destroy(struct kvm_vcpu *vcpu) { kvm_mmu_unload(vcpu); + if (tdp_mmu_enabled) { + read_lock(&vcpu->kvm->mmu_lock); + mmu_free_root_page(vcpu->kvm, &vcpu->arch.mmu->mirror_root_hpa, + NULL); + read_unlock(&vcpu->kvm->mmu_lock); + } free_mmu_pages(&vcpu->arch.root_mmu); free_mmu_pages(&vcpu->arch.guest_mmu); 
mmu_free_memory_caches(vcpu); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 19bd891702a9..5af7355ef015 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -95,10 +95,15 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_m= mu_page *root) static bool tdp_mmu_root_match(struct kvm_mmu_page *root, enum kvm_tdp_mmu_root_types types) { + if (WARN_ON_ONCE(!(types & KVM_VALID_ROOTS))) + return false; + if (root->role.invalid) return types & KVM_INVALID_ROOTS; + if (likely(!is_mirror_sp(root))) + return types & KVM_DIRECT_ROOTS; =20 - return true; + return types & KVM_MIRROR_ROOTS; } =20 /* @@ -233,7 +238,7 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *= child_sp, tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role); } =20 -void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu) +void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu, bool mirror) { struct kvm_mmu *mmu =3D vcpu->arch.mmu; union kvm_mmu_page_role role =3D mmu->root_role; @@ -241,6 +246,9 @@ void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu) struct kvm *kvm =3D vcpu->kvm; struct kvm_mmu_page *root; =20 + if (mirror) + role.is_mirror =3D true; + /* * Check for an existing root before acquiring the pages lock to avoid * unnecessary serialization if multiple vCPUs are loading a new root. @@ -292,8 +300,12 @@ void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu) * and actually consuming the root if it's invalidated after dropping * mmu_lock, and the root can't be freed as this vCPU holds a reference. */ - mmu->root.hpa =3D __pa(root->spt); - mmu->root.pgd =3D 0; + if (mirror) { + mmu->mirror_root_hpa =3D __pa(root->spt); + } else { + mmu->root.hpa =3D __pa(root->spt); + mmu->root.pgd =3D 0; + } } =20 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, @@ -1117,8 +1129,8 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, s= truct tdp_iter *iter, */ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + struct kvm_mmu_page *root =3D tdp_mmu_get_root_for_fault(vcpu, fault); struct kvm *kvm =3D vcpu->kvm; - struct kvm_mmu_page *root =3D root_to_sp(vcpu->arch.mmu->root.hpa); struct tdp_iter iter; struct kvm_mmu_page *sp; int ret =3D RET_PF_RETRY; @@ -1156,13 +1168,18 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct k= vm_page_fault *fault) */ sp =3D tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_child_sp(sp, &iter); + if (is_mirror_sp(sp)) + kvm_mmu_alloc_external_spt(vcpu, sp); =20 sp->nx_huge_page_disallowed =3D fault->huge_page_disallowed; =20 - if (is_shadow_present_pte(iter.old_spte)) + if (is_shadow_present_pte(iter.old_spte)) { + /* Don't support large page for mirrored roots (TDX) */ + KVM_BUG_ON(is_mirror_sptep(iter.sptep), vcpu->kvm); r =3D tdp_mmu_split_huge_page(kvm, &iter, sp, true); - else + } else { r =3D tdp_mmu_link_sp(kvm, &iter, sp, true); + } =20 /* * Force the guest to retry if installing an upper level SPTE @@ -1817,7 +1834,8 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 a= ddr, u64 *sptes, u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *spte) { - struct kvm_mmu_page *root =3D root_to_sp(vcpu->arch.mmu->root.hpa); + /* Fast pf is not supported for mirrored roots */ + struct kvm_mmu_page *root =3D tdp_mmu_get_root(vcpu, KVM_DIRECT_ROOTS); struct tdp_iter iter; tdp_ptep_t sptep =3D NULL; =20 diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 8980c869e39c..5b607adca680 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -10,7 +10,7 @@ 
void kvm_mmu_init_tdp_mmu(struct kvm *kvm); void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); =20 -void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu); +void kvm_tdp_mmu_alloc_root(struct kvm_vcpu *vcpu, bool private); =20 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *= root) { @@ -21,11 +21,48 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_m= mu_page *root); =20 enum kvm_tdp_mmu_root_types { KVM_INVALID_ROOTS =3D BIT(0), - - KVM_VALID_ROOTS =3D BIT(1), + KVM_DIRECT_ROOTS =3D BIT(1), + KVM_MIRROR_ROOTS =3D BIT(2), + KVM_VALID_ROOTS =3D KVM_DIRECT_ROOTS | KVM_MIRROR_ROOTS, KVM_ALL_ROOTS =3D KVM_VALID_ROOTS | KVM_INVALID_ROOTS, }; =20 +static inline enum kvm_tdp_mmu_root_types kvm_gfn_range_filter_to_root_typ= es(struct kvm *kvm, + enum kvm_gfn_range_filter process) +{ + enum kvm_tdp_mmu_root_types ret =3D 0; + + if (!kvm_has_mirrored_tdp(kvm)) + return KVM_DIRECT_ROOTS; + + if (process & KVM_FILTER_PRIVATE) + ret |=3D KVM_MIRROR_ROOTS; + if (process & KVM_FILTER_SHARED) + ret |=3D KVM_DIRECT_ROOTS; + + WARN_ON_ONCE(!ret); + + return ret; +} + +static inline struct kvm_mmu_page *tdp_mmu_get_root_for_fault(struct kvm_v= cpu *vcpu, + struct kvm_page_fault *fault) +{ + if (unlikely(!kvm_is_addr_direct(vcpu->kvm, fault->addr))) + return root_to_sp(vcpu->arch.mmu->mirror_root_hpa); + + return root_to_sp(vcpu->arch.mmu->root.hpa); +} + +static inline struct kvm_mmu_page *tdp_mmu_get_root(struct kvm_vcpu *vcpu, + enum kvm_tdp_mmu_root_types type) +{ + if (unlikely(type =3D=3D KVM_MIRROR_ROOTS)) + return root_to_sp(vcpu->arch.mmu->mirror_root_hpa); + + return root_to_sp(vcpu->arch.mmu->root.hpa); +} + bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool f= lush); bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp); void kvm_tdp_mmu_zap_all(struct kvm *kvm); --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 52F8C14885E; Thu, 18 Jul 2024 21:12:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337175; cv=none; b=fWBxKMNNBkxcc9FQozaV3BcgmtxD++l6VMVX8/0XgQlCQsg2MrYw6PMksSaNkCeg1XxVML/HE02QMtpvT04ukzB3d31BfiqxwUiAgee+7dK7wcUqUTXnGCSBVmiEXYeuXMsJpAt3H42fFkBWW/Ud7E8T5o6lEjTpz8p62ceneec= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337175; c=relaxed/simple; bh=Bd/MKvRZNcGPNNOzsHPszb0eImA7d/eHd9GNGvPVHAk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=WZ8faK+ayyUQ0OzvvYclORPsrJaqqjDUdKGVTMaB/KREmweVrdLqTYG8tQ4Rmh8T5/BtOSMAskc7EXawnlVBKat5C18NicnXTHs6sZ5xdH0XiaF56pty4tNKm7bBe0yksu1DDPhaBSfSg9uqRdteK/WPmHy6HKoPZw0Tiz3yAB0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=W4GZ5l1J; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="W4GZ5l1J" DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337173; x=1752873173; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Bd/MKvRZNcGPNNOzsHPszb0eImA7d/eHd9GNGvPVHAk=; b=W4GZ5l1JWAh6M2FgZfKjUj5ABRaBOuzLtcvp4rCWD3h7FuGHpbEPr+C+ aj85wMORJDeRBb5U1ySKJw8PFobAH5YJgVYGhlg9nLJhyex+8xmCTtmfS CuAuU9MJuyDtDDlSMqd8tVpYN+63BGX/7Ka/o2amZoSzo41huPXU1b9SU 72Ks16OGyJs3ftmrUK346WqgFj7XmvlBcp26iZF9KThexJpDJeNojDxnQ wW2bpqq1CrghFW6uQSjd8r1kKEBx3LKjpVIzX3Yhbr6dPaQ8FLxersQA8 KAMmoCrlz3LgErs2jk3tjHQx2u4humxsmo4rVppxGSnslNWS7fNX0p/HG g==; X-CSE-ConnectionGUID: 6xZan0xmRMKEsC3QDq7Qew== X-CSE-MsgGUID: wZkctq5UQPy6OwlEG//NmA== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697451" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697451" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:49 -0700 X-CSE-ConnectionGUID: i3RRP1H2TQ6JK0sdBSNpyw== X-CSE-MsgGUID: CYTRj0DuQU2UDq1MQ4EUEA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760415" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:49 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 13/18] KVM: x86/tdp_mmu: Propagate attr_filter to MMU notifier callbacks Date: Thu, 18 Jul 2024 14:12:25 -0700 Message-Id: <20240718211230.1492011-14-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Teach the MMU notifier callbacks how to check kvm_gfn_range.process to filter which KVM MMU root types to operate on. The private GPAs are backed by guest memfd. Such memory is not subjected to MMU notifier callbacks because it can't be mapped into the host user address space. Now kvm_gfn_range conveys info about which root to operate on. Enhance the callback to filter the root page table type. The KVM MMU notifier comes down to two functions. kvm_tdp_mmu_unmap_gfn_range() and kvm_tdp_mmu_handle_gfn(). For VM's without a private/shared split in the EPT, all operations should target the normal(direct) root. invalidate_range_start() comes into kvm_tdp_mmu_unmap_gfn_range(). invalidate_range_end() doesn't come into arch code. With the switch from for_each_tdp_mmu_root() to __for_each_tdp_mmu_root() in kvm_tdp_mmu_handle_gfn(), there are no longer any users of for_each_tdp_mmu_root(). Remove it. 
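A condensed sketch of the filtering this relies on (the real logic is kvm_gfn_range_filter_to_root_types(), added in the previous patch; kernel context assumed):

	enum kvm_tdp_mmu_root_types types = 0;

	if (!kvm_has_mirrored_tdp(kvm)) {
		/* VMs without a private/shared split only have direct roots. */
		types = KVM_DIRECT_ROOTS;
	} else {
		/* mmu_notifier invalidations carry KVM_FILTER_SHARED... */
		if (range->attr_filter & KVM_FILTER_SHARED)
			types |= KVM_DIRECT_ROOTS;
		/* ...guest_memfd invalidations carry KVM_FILTER_PRIVATE. */
		if (range->attr_filter & KVM_FILTER_PRIVATE)
			types |= KVM_MIRROR_ROOTS;
	}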
Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v3: - Change subject from "Make mmu notifier callbacks to check kvm_process" to "Propagate attr_filter to MMU notifier callbacks" (Paolo) - Remove no longer used for_each_tdp_mmu_root() (Binbin) v2: - Use newly added kvm_process_to_root_types() v1: - Remove warning (Rick) - Remove confusing mention of mapping flags (Chao) - Re-write coverletter --- arch/x86/kvm/mmu/tdp_mmu.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 5af7355ef015..748fdacc719c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -193,9 +193,6 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kv= m *kvm, !tdp_mmu_root_match((_root), (_types)))) { \ } else =20 -#define for_each_tdp_mmu_root(_kvm, _root, _as_id) \ - __for_each_tdp_mmu_root(_kvm, _root, _as_id, KVM_ALL_ROOTS) - #define for_each_valid_tdp_mmu_root(_kvm, _root, _as_id) \ __for_each_tdp_mmu_root(_kvm, _root, _as_id, KVM_VALID_ROOTS) =20 @@ -1214,12 +1211,16 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct k= vm_page_fault *fault) return ret; } =20 +/* Used by mmu notifier via kvm_unmap_gfn_range() */ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *ra= nge, bool flush) { + enum kvm_tdp_mmu_root_types types; struct kvm_mmu_page *root; =20 - __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, KVM_ALL= _ROOTS) + types =3D kvm_gfn_range_filter_to_root_types(kvm, range->attr_filter); + + __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, types) flush =3D tdp_mmu_zap_leafs(kvm, root, range->start, range->end, range->may_block, flush); =20 @@ -1233,15 +1234,18 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(= struct kvm *kvm, struct kvm_gfn_range *range, tdp_handler_t handler) { + enum kvm_tdp_mmu_root_types types; struct kvm_mmu_page *root; struct tdp_iter iter; bool ret =3D false; =20 + types =3D kvm_gfn_range_filter_to_root_types(kvm, range->attr_filter); + /* * Don't support rescheduling, none of the MMU notifiers that funnel * into this helper allow blocking; it'd be dead, wasteful code. 
*/ - for_each_tdp_mmu_root(kvm, root, range->slot->as_id) { + __for_each_tdp_mmu_root(kvm, root, range->slot->as_id, types) { rcu_read_lock(); =20 tdp_root_for_each_leaf_pte(iter, kvm, root, range->start, range->end) --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A5F41149001; Thu, 18 Jul 2024 21:12:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337175; cv=none; b=S5L4qYTDWHdKf9kt4fHfVpjj9C2gfeuRbgP6rGVq7RMyJ2Nhzf1K1cK1jJfvvpO1HGHa32KZSHdIfOULkIbek/1DP9Gr1X8KVP3lz97RIPuEArUzwfM98joHSh8qwZN9R86+jzsVuZaPaqifPkqoXa0gYiSMiY75G4XpKriz6Bc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337175; c=relaxed/simple; bh=n7vjM6VKLvDpTADUO+TuE9+LJHPWdcTUwDTi4gVrsXg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=aKbCmfKewtSJdEFziBQcPN2rVi9rMoQGxphGyx2spBRZnJijz14ZHRD0spymhvR3CRfEtm2K47YlRRrp729xH4/uKX4A0RQIt03n5XgDYleTc2FUhpAItkPViFVIuTy5tqsMfjiakaEUi032p8BIkfj5sMq8237rFp9bHoBhqg0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=KVjklG+/; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KVjklG+/" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337174; x=1752873174; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=n7vjM6VKLvDpTADUO+TuE9+LJHPWdcTUwDTi4gVrsXg=; b=KVjklG+/HBcxoUUwL/bcmYktIAOfNnnEZOb7ncQ1j3lZ0sVNx6CsOgTC 9VpfLLyarwyCRD+cKcplRHUHnLtsivkbwuzyTpMNnM1amKzkPR5i63GKa T8IhpbAB2K2FiDVRECUq/3rnbn15xm59eVcW/+IguhdRY1gjVlbtPdeFZ 3aE5wG5Ys62tn8K4Ma1gxBDqSSpANa3qW965B2JxxPjht1nlr1UIW7Thj v4zNM4yQV7448r5w8Lhan7XJimw/RCf7hBQC2ZQICcSS8pofoVxfMDKXf olts/l+njH9cBWY3kxepmc/p80/4uSDAqBbxmADDeseeSPn8l4cApUjYW A==; X-CSE-ConnectionGUID: Y+TJf6+RQUCUIuSd1qwfZQ== X-CSE-MsgGUID: RL5Uk/HrSbaZFUh/6KkrcA== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697461" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697461" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:50 -0700 X-CSE-ConnectionGUID: oe2DluFTSC2erUIVfyUjfA== X-CSE-MsgGUID: 0zVaDotTR8W72YO0y5PCWw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760420" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) 
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:49 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 14/18] KVM: x86/tdp_mmu: Propagate building mirror page tables Date: Thu, 18 Jul 2024 14:12:26 -0700 Message-Id: <20240718211230.1492011-15-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Integrate hooks for mirroring page table operations for cases where TDX will set PTEs or link page tables. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly through calls into a protected runtime environment called the TDX module, where the shared half is managed within KVM in normal page tables. Since calls into the TDX module are relatively slow, walking private page tables by making calls into the TDX module would not be efficient. Because of this, previous changes have taught the TDP MMU to keep a mirror root, which is separate, unmapped TDP root that private operations can be directed to. Currently this root is disconnected from any actual guest mapping. Now add plumbing to propagate changes to the "external" page tables being mirrored. Just create the x86_ops for now, leave plumbing the operations into the TDX module for future patches. Add two operations for setting up external page tables, one for linking new page tables and one for setting leaf PTEs. Don't add any op for configuring the root PFN, as TDX handles this itself. Don't provide a way to set permissions on the PTEs also, as TDX doesn't support it. This results in MMU "mirroring" support that is very targeted towards TDX. Since it is likely there will be no other user, the main benefit of making the support generic is to keep TDX specific *looking* code outside of the MMU. As a generic feature it will make enough sense from TDX's perspective. For developers unfamiliar with TDX arch it can express the general concepts such that they can continue to work in the code. TDX MMU support will exclude certain MMU operations, so only plug in the mirroring x86 ops where they will be needed. For setting/linking, only hook tdp_mmu_set_spte_atomic() which is used for mapping and linking PTs. Don't bother hooking tdp_mmu_iter_set_spte() as it is only used for setting PTEs in operations unsupported by TDX: splitting huge pages and write protecting. Sprinkle KVM_BUG_ON()s to document as code that these paths are not supported for mirrored page tables. For zapping operations, leave those for near future changes. Many operations in the TDP MMU depend on atomicity of the PTE update. While the mirror PTE on KVM's side can be updated atomically, the update that happens inside the external operations (S-EPT updates via TDX module call) can't happen atomically with the mirror update. 
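As a rough standalone illustration of the freeze/propagate/unfreeze pattern the patch uses to cope with this (the exact race it prevents is walked through next), here is a sketch using C11 atomics in place of the kernel's try_cmpxchg64(); the FROZEN marker value and the dummy external_update() are stand-ins, not the kernel's FROZEN_SPTE or the TDX module call:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FROZEN ((uint64_t)1)	/* stand-in for the kernel's FROZEN_SPTE */

/* Placeholder for the slow external (TDX module) update that cannot be made
 * atomic with the mirror PTE write. */
static int external_update(uint64_t new_spte)
{
	(void)new_spte;
	return 0;
}

/* Returns 0 on success, -1 if another CPU changed the SPTE first (the
 * kernel returns -EBUSY and the fault is retried). */
static int set_mirror_spte(_Atomic uint64_t *sptep, uint64_t old_spte,
			   uint64_t new_spte)
{
	/* Freeze the mirror SPTE; concurrent walkers that observe FROZEN back
	 * off instead of acting on a half-propagated entry. */
	if (!atomic_compare_exchange_strong(sptep, &old_spte, FROZEN))
		return -1;

	/* Propagate to the external page tables while the SPTE is frozen. */
	if (external_update(new_spte)) {
		atomic_store(sptep, old_spte);	/* failure: restore the old value */
		return -1;
	}

	atomic_store(sptep, new_spte);		/* success: unfreeze */
	return 0;
}

int main(void)
{
	_Atomic uint64_t spte = 0;

	if (set_mirror_spte(&spte, 0, 0x1234) == 0)
		printf("mirror SPTE now 0x%llx\n",
		       (unsigned long long)atomic_load(&spte));
	return 0;
}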
The following race could result during two vCPUs populating private memory: * vcpu 1: atomically update 2M level mirror EPT entry to be present * vcpu 2: read 2M level EPT entry that is present * vcpu 2: walk down into 4K level EPT * vcpu 2: atomically update 4K level mirror EPT entry to be present * vcpu 2: set_external_spte() to update 4K secure EPT entry =3D> error because 2M secure EPT entry is not populated yet * vcpu 1: link_external_spt() to update 2M secure EPT entry Prevent this by setting the mirror PTE to FROZEN_SPTE while the reflect operations are performed. Only write the actual mirror PTE value once the reflect operations have completed. When trying to set a PTE to present and encountering a frozen SPTE, retry the fault. By doing this the race is prevented as follows: * vcpu 1: atomically update 2M level EPT entry to be FROZEN_SPTE * vcpu 2: read 2M level EPT entry that is FROZEN_SPTE * vcpu 2: find that the EPT entry is frozen and abandon the page table walk to resume guest execution * vcpu 1: link_external_spt() to update 2M secure EPT entry * vcpu 1: atomically update 2M level EPT entry to be present (unfreeze) * vcpu 2: resume guest execution Depending on vcpu 1's state, vcpu 2 may hit an EPT violation again or make progress on guest execution. Signed-off-by: Isaku Yamahata Co-developed-by: Kai Huang Signed-off-by: Kai Huang Co-developed-by: Yan Zhao Signed-off-by: Yan Zhao Co-developed-by: Rick Edgecombe Signed-off-by: Rick Edgecombe --- v4: - Fix external/mirror naming in code and commit log (Binbin) - Fix removes/frozen naming in log (Binbin) - Log/comment typo (Binbin) - No functional change from v3 v3: - Rename mirrored->external (Paolo) - Better comment on logic that bugs if doing tdp_mmu_set_spte() on present PTE. (Paolo) - Move zapping KVM_BUG_ON() to proper patch - Use spte_to_child_sp() (Paolo) - Drop unnecessary comment in __tdp_mmu_set_spte_atomic() (Paolo) - Rename pfn->pfn_for_gfn to match remove_external_pte in next patch. - Rename REMOVED_SPTE to FROZEN_SPTE (Paolo) v2: - Split from "KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU" - Rename x86_ops from "private" to "reflect" - In response to "sp->mirrored_spt" rename helpers to "mirrored" - Drop unused old_pfn and new_pfn in handle_changed_spte() - Drop redundant is_shadow_present_pte() check in __tdp_mmu_set_spte_atomic - Adjust some warnings and KVM_BUG_ONs --- arch/x86/include/asm/kvm-x86-ops.h | 2 + arch/x86/include/asm/kvm_host.h | 7 +++ arch/x86/kvm/mmu/tdp_mmu.c | 97 ++++++++++++++++++++++++++---- 3 files changed, 94 insertions(+), 12 deletions(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 566d19b02483..3ef19fcb5e42 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -95,6 +95,8 @@ KVM_X86_OP_OPTIONAL_RET0(set_tss_addr) KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr) KVM_X86_OP_OPTIONAL_RET0(get_mt_mask) KVM_X86_OP(load_mmu_pgd) +KVM_X86_OP_OPTIONAL(link_external_spt) +KVM_X86_OP_OPTIONAL(set_external_spte) KVM_X86_OP(has_wbinvd_exit) KVM_X86_OP(get_l2_tsc_offset) KVM_X86_OP(get_l2_tsc_multiplier) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index b142ef6e6676..a17b672b1923 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1738,6 +1738,13 @@ struct kvm_x86_ops { void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level); =20 + /* Update external mapping with page table link.
*/ + int (*link_external_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + void *external_spt); + /* Update the external page table from spte getting set. */ + int (*set_external_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + kvm_pfn_t pfn_for_gfn); + bool (*has_wbinvd_exit)(void); =20 u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 748fdacc719c..116dc3e9bdb3 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -440,6 +440,59 @@ static void handle_removed_pt(struct kvm *kvm, tdp_pte= p_t pt, bool shared) call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); } =20 +static void *get_external_spt(gfn_t gfn, u64 new_spte, int level) +{ + if (is_shadow_present_pte(new_spte) && !is_last_spte(new_spte, level)) { + struct kvm_mmu_page *sp =3D spte_to_child_sp(new_spte); + + WARN_ON_ONCE(sp->role.level + 1 !=3D level); + WARN_ON_ONCE(sp->gfn !=3D gfn); + return sp->external_spt; + } + + return NULL; +} + +static int __must_check set_external_spte_present(struct kvm *kvm, tdp_pte= p_t sptep, + gfn_t gfn, u64 old_spte, + u64 new_spte, int level) +{ + bool was_present =3D is_shadow_present_pte(old_spte); + bool is_present =3D is_shadow_present_pte(new_spte); + bool is_leaf =3D is_present && is_last_spte(new_spte, level); + kvm_pfn_t new_pfn =3D spte_to_pfn(new_spte); + int ret =3D 0; + + KVM_BUG_ON(was_present, kvm); + + lockdep_assert_held(&kvm->mmu_lock); + /* + * We need to lock out other updates to the SPTE until the external + * page table has been modified. Use FROZEN_SPTE similar to + * the zapping case. + */ + if (!try_cmpxchg64(sptep, &old_spte, FROZEN_SPTE)) + return -EBUSY; + + /* + * Use different call to either set up middle level + * external page table, or leaf. + */ + if (is_leaf) { + ret =3D static_call(kvm_x86_set_external_spte)(kvm, gfn, level, new_pfn); + } else { + void *external_spt =3D get_external_spt(gfn, new_spte, level); + + KVM_BUG_ON(!external_spt, kvm); + ret =3D static_call(kvm_x86_link_external_spt)(kvm, gfn, level, external= _spt); + } + if (ret) + __kvm_tdp_mmu_write_spte(sptep, old_spte); + else + __kvm_tdp_mmu_write_spte(sptep, new_spte); + return ret; +} + /** * handle_changed_spte - handle bookkeeping associated with an SPTE change * @kvm: kvm instance @@ -548,7 +601,8 @@ static void handle_changed_spte(struct kvm *kvm, int as= _id, gfn_t gfn, kvm_set_pfn_accessed(spte_to_pfn(old_spte)); } =20 -static inline int __must_check __tdp_mmu_set_spte_atomic(struct tdp_iter *= iter, +static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm, + struct tdp_iter *iter, u64 new_spte) { u64 *sptep =3D rcu_dereference(iter->sptep); @@ -561,15 +615,25 @@ static inline int __must_check __tdp_mmu_set_spte_ato= mic(struct tdp_iter *iter, */ WARN_ON_ONCE(iter->yielded || is_frozen_spte(iter->old_spte)); =20 - /* - * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and - * does not hold the mmu_lock. On failure, i.e. if a different logical - * CPU modified the SPTE, try_cmpxchg64() updates iter->old_spte with - * the current value, so the caller operates on fresh data, e.g. 
if it - * retries tdp_mmu_set_spte_atomic() - */ - if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte)) - return -EBUSY; + if (is_mirror_sptep(iter->sptep) && !is_frozen_spte(new_spte)) { + int ret; + + ret =3D set_external_spte_present(kvm, iter->sptep, iter->gfn, + iter->old_spte, new_spte, iter->level); + if (ret) + return ret; + } else { + /* + * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs + * and does not hold the mmu_lock. On failure, i.e. if a + * different logical CPU modified the SPTE, try_cmpxchg64() + * updates iter->old_spte with the current value, so the caller + * operates on fresh data, e.g. if it retries + * tdp_mmu_set_spte_atomic() + */ + if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte)) + return -EBUSY; + } =20 return 0; } @@ -599,7 +663,7 @@ static inline int __must_check tdp_mmu_set_spte_atomic(= struct kvm *kvm, =20 lockdep_assert_held_read(&kvm->mmu_lock); =20 - ret =3D __tdp_mmu_set_spte_atomic(iter, new_spte); + ret =3D __tdp_mmu_set_spte_atomic(kvm, iter, new_spte); if (ret) return ret; =20 @@ -624,7 +688,8 @@ static inline int __must_check tdp_mmu_zap_spte_atomic(= struct kvm *kvm, * Delay processing of the zapped SPTE until after TLBs are flushed and * the FROZEN_SPTE is replaced (see below). */ - ret =3D __tdp_mmu_set_spte_atomic(iter, FROZEN_SPTE); + ret =3D __tdp_mmu_set_spte_atomic(kvm, iter, FROZEN_SPTE); + if (ret) return ret; =20 @@ -681,6 +746,14 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id= , tdp_ptep_t sptep, old_spte =3D kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level); =20 handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level, false); + + /* + * Users that do non-atomic setting of PTEs don't operate on mirror + * roots, so don't handle it and bug the VM if it's seen. 
+ */ + if (is_mirror_sptep(sptep)) + KVM_BUG_ON(is_shadow_present_pte(new_spte), kvm); + return old_spte; } =20 --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 356351494DF; Thu, 18 Jul 2024 21:12:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337177; cv=none; b=kqaEk16dsTbe4CKpDk5Zup7CpqHsQ5JQ3sLaIXlk2abNbjafWRr0+OcibldkuHY/ic5kgYjFLH5iznhQbQFVJZ14rokElUP38fM6q0c12PcxinZ+8TQlzfplOCTuTPBRHEK8DFcyWMb+ODDPs6FztfcgTQlC0poyBMPgquuTCbY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337177; c=relaxed/simple; bh=hyeN8N4x/xCECZf6qOEtw7mjw3OZyP3Gt7S/eTIPXpY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=KPNTqcTmK2/pj9i3Nhs7YOzqU4AI16JZkNxdyiKuRjqQZm+efQFkdtV5509PiEozPcO8cR0VPoE/eTQmxMHaNY88NSfFSqiEsq/zVFBoz/yGpZj2nXHMIjJf8uQiAf8hQW561gUKUaCpM+TiWIV7w/03wbF+zizA/oselUmOSAI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=gNu6sVtC; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="gNu6sVtC" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337175; x=1752873175; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hyeN8N4x/xCECZf6qOEtw7mjw3OZyP3Gt7S/eTIPXpY=; b=gNu6sVtCzxxitCDsvAx0n9fkwplaPTl9/gjfEbiV/yixW2Bg27audsYI hI5eyQ6tfUM+sfRUqKAvJXCy1hAvcJ1dcIZZfvLkrl754B0uT/cmdemMF a4JP/SIkqSOvwoMv9Vi55E4ZV99/rNZq8DJcYvYQRiJ7/NUr8iCth8y5C KmxFTbO9t7tKej29TTaNj6fe0IoFjkQnvTHSvISCMJl2iW6bjgxKkBa4M /amcyF1W8nWff3psnzIQuU3I4ooJtl2I+7B012zH6eQ7xH8d7tZYza1Z6 GKnIvkXlvQjQPLFt8kD64is4g1OHrSJ2qGRq15IEycSZEoh/1Ih15mzBL w==; X-CSE-ConnectionGUID: 93qykgmYQRqBIMoRovIgOg== X-CSE-MsgGUID: /EZ9Fj69Teet5nVOns/zJw== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697466" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697466" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:51 -0700 X-CSE-ConnectionGUID: UD+z9r87RouQxnRr0QpWMw== X-CSE-MsgGUID: h4ha1Je0THeljMC3CAkOqQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760426" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) 
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:50 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata Subject: [PATCH v4 15/18] KVM: x86/tdp_mmu: Propagate tearing down mirror page tables Date: Thu, 18 Jul 2024 14:12:27 -0700 Message-Id: <20240718211230.1492011-16-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Integrate hooks for mirroring page table operations for cases where TDX will zap PTEs or free page tables. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly though calls into a protected runtime environment called the TDX module, where the shared half is managed within KVM in normal page tables. Since calls into the TDX module are relatively slow, walking private page tables by making calls into the TDX module would not be efficient. Because of this, previous changes have taught the TDP MMU to keep a mirror root, which is separate, unmapped TDP root that private operations can be directed to. Currently this root is disconnected from the guest. Now add plumbing to propagate changes to the "external" page tables being mirrored. Just create the x86_ops for now, leave plumbing the operations into the TDX module for future patches. Add two operations for tearing down page tables, one for freeing page tables (free_external_spt) and one for zapping PTEs (remove_external_spte). Define them such that remove_external_spte will perform a TLB flush as well. (in TDX terms "ensure there are no active translations"). TDX MMU support will exclude certain MMU operations, so only plug in the mirroring x86 ops where they will be needed. For zapping/freeing, only hook tdp_mmu_iter_set_spte() which is used for mapping and linking PTs. Don't bother hooking tdp_mmu_set_spte_atomic() as it is only used for zapping PTEs in operations unsupported by TDX: zapping collapsible PTEs and kvm_mmu_zap_all_fast(). In previous changes to address races around concurrent populating using tdp_mmu_set_spte_atomic(), a solution was introduced to temporarily set FROZEN_SPTE in the mirrored page tables while performing the external operations. Such a solution is not needed for the tear down paths in TDX as these will always be performed with the mmu_lock held for write. Sprinkle some KVM_BUG_ON()s to reflect this. Signed-off-by: Isaku Yamahata Co-developed-by: Kai Huang Signed-off-by: Kai Huang Co-developed-by: Yan Zhao Signed-off-by: Yan Zhao Co-developed-by: Rick Edgecombe Signed-off-by: Rick Edgecombe --- v4: - Log typos v3: - Rename mirrored->external (Paolo) - Drop new_spte arg from reflect_removed_spte() (Paolo) - ...and drop was_present and is_present bools (Paolo) - Use base_gfn instead of sp->gfn (Paolo) - Better comment on logic that bugs if doing tdp_mmu_set_spte() on present PTE. 
(Paolo) - Move comment around KVM_BUG_ON() in __tdp_mmu_set_spte_atomic() to this patch, and add better comment. (Paolo) - In remove_external_spte(), remove was_leaf bool, skip duplicates present check and add comment. - Rename REMOVED_SPTE to FROZEN_SPTE (Paolo) v2: - Split from "KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU" - Rename x86_ops from "private" to "reflect" - In response to "sp->mirrored_spt" rename helpers to "mirrored" - Remove unused present mirroring support in tdp_mmu_set_spte() - Merge reflect_zap_spte() into reflect_remove_spte() - Move mirror zapping logic out of handle_changed_spte() - Add some KVM_BUG_ONs --- arch/x86/include/asm/kvm-x86-ops.h | 2 ++ arch/x86/include/asm/kvm_host.h | 8 +++++ arch/x86/kvm/mmu/tdp_mmu.c | 51 +++++++++++++++++++++++++++++- 3 files changed, 60 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-= x86-ops.h index 3ef19fcb5e42..18a83b211c90 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -97,6 +97,8 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask) KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_OPTIONAL(link_external_spt) KVM_X86_OP_OPTIONAL(set_external_spte) +KVM_X86_OP_OPTIONAL(free_external_spt) +KVM_X86_OP_OPTIONAL(remove_external_spte) KVM_X86_OP(has_wbinvd_exit) KVM_X86_OP(get_l2_tsc_offset) KVM_X86_OP(get_l2_tsc_multiplier) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_hos= t.h index a17b672b1923..9c7e2f107eef 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1745,6 +1745,14 @@ struct kvm_x86_ops { int (*set_external_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn_for_gfn); =20 + /* Update external page tables for page table about to be freed. */ + int (*free_external_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + void *external_spt); + + /* Update external page table from spte getting removed, and flush TLB. */ + int (*remove_external_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level lev= el, + kvm_pfn_t pfn_for_gfn); + bool (*has_wbinvd_exit)(void); =20 u64 (*get_l2_tsc_offset)(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 116dc3e9bdb3..ea2c64450135 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -340,6 +340,29 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct = kvm_mmu_page *sp) spin_unlock(&kvm->arch.tdp_mmu_pages_lock); } =20 +static void remove_external_spte(struct kvm *kvm, gfn_t gfn, u64 old_spte, + int level) +{ + kvm_pfn_t old_pfn =3D spte_to_pfn(old_spte); + int ret; + + /* + * External (TDX) SPTEs are limited to PG_LEVEL_4K, and external + * PTs are removed in a special order, involving free_external_spt(). + * But remove_external_spte() will be called on non-leaf PTEs via + * __tdp_mmu_zap_root(), so avoid the error the former would return + * in this case. + */ + if (!is_last_spte(old_spte, level)) + return; + + /* Zapping leaf spte is allowed only when write lock is held. */ + lockdep_assert_held_write(&kvm->mmu_lock); + /* Because write lock is held, operation should success. 
*/ + ret =3D static_call(kvm_x86_remove_external_spte)(kvm, gfn, level, old_pf= n); + KVM_BUG_ON(ret, kvm); +} + /** * handle_removed_pt() - handle a page table removed from the TDP structure * @@ -435,6 +458,23 @@ static void handle_removed_pt(struct kvm *kvm, tdp_pte= p_t pt, bool shared) } handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn, old_spte, FROZEN_SPTE, level, shared); + + if (is_mirror_sp(sp)) { + KVM_BUG_ON(shared, kvm); + remove_external_spte(kvm, gfn, old_spte, level); + } + } + + if (is_mirror_sp(sp) && + WARN_ON(static_call(kvm_x86_free_external_spt)(kvm, base_gfn, sp->rol= e.level, + sp->external_spt))) { + /* + * Failed to free page table page in mirror page table and + * there is nothing to do further. + * Intentionally leak the page to prevent the kernel from + * accessing the encrypted page. + */ + sp->external_spt =3D NULL; } =20 call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); @@ -618,6 +658,13 @@ static inline int __must_check __tdp_mmu_set_spte_atom= ic(struct kvm *kvm, if (is_mirror_sptep(iter->sptep) && !is_frozen_spte(new_spte)) { int ret; =20 + /* + * Users of atomic zapping don't operate on mirror roots, + * so don't handle it and bug the VM if it's seen. + */ + if (KVM_BUG_ON(!is_shadow_present_pte(new_spte), kvm)) + return -EBUSY; + ret =3D set_external_spte_present(kvm, iter->sptep, iter->gfn, iter->old_spte, new_spte, iter->level); if (ret) @@ -751,8 +798,10 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id= , tdp_ptep_t sptep, * Users that do non-atomic setting of PTEs don't operate on mirror * roots, so don't handle it and bug the VM if it's seen. */ - if (is_mirror_sptep(sptep)) + if (is_mirror_sptep(sptep)) { KVM_BUG_ON(is_shadow_present_pte(new_spte), kvm); + remove_external_spte(kvm, gfn, old_spte, level); + } =20 return old_spte; } --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5944E149C50; Thu, 18 Jul 2024 21:12:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337177; cv=none; b=Dt0QZvth3W1kmt2JIqBaV9WXlcT0hRICrmhmfzYVipPCj4FfPi532NDQidFKc295uW8IwxQG6igxG8ZPd57cKeCFpEeIgcZsXEObeeb82z7TvUYyUlNeMcIIw8/ZI+EZveV8mFEHHzCzlXDs1NbsbVeS+UIsMId+wGlqaJ7kc9Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337177; c=relaxed/simple; bh=xMNkuRIzC+H6nAOGbncvwl9AuctbTkUnvnIZlBceRDs=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=in2SaF68/6EivwjHSdMh75Cz81FL0kAmDj2d9acuZZaoxjB3Zxo9wnvntdNUJYTSLar1caiFRIa2oo41gjmhavBMe/h6wN+XgM5KUSWQQeXc+mAHf8gNp8lM69hWRFd7H7PjzTxKM5NbgteY1F8JLAzwr9UEYt2mjVCh7jiA624= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=R23NPuGk; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="R23NPuGk" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337175; x=1752873175; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xMNkuRIzC+H6nAOGbncvwl9AuctbTkUnvnIZlBceRDs=; b=R23NPuGkjUtmOzyhGCIzClaySzRHNuG4zVtVb4yWDggDeffp5H7HdPm5 aRK2R1AUwIwSIl3GTpMPHrlXXNu0eSQVTvmWrRWyPwY4JTJYBTJ95jE3H DrK2phq1S4mzXbjWHF/x4jbIwO4+agLC5SV/3X4l3gx4kKNbR/YHtiRpS ZsrJPT0obMHuUmOp4jUwUufR+fJWExNjzt1s+Pg5ib9eGPlyxiOh53+oO +6MeH8DuAIJhGRM0C7G0/gGsr7PDx+638DowZ6emdLCAfCsv84Ga1ZdGc XgKmftVilvxrDPA7OUisTH/QXxmqBK5WEEeMiCQyl+tInd5tgGVWbJmWf w==; X-CSE-ConnectionGUID: orL7qUrRRdGot/Iyr9mezg== X-CSE-MsgGUID: OunEGVGhRwKeg3LnOfjuPA== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697472" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697472" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:52 -0700 X-CSE-ConnectionGUID: /3upg5TMTZen2AHm8YY8HQ== X-CSE-MsgGUID: zmm0NoSjSjOx1QxWOEnHEw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760429" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:51 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com, Isaku Yamahata , Chao Gao Subject: [PATCH v4 16/18] KVM: x86/tdp_mmu: Take root types for kvm_tdp_mmu_invalidate_all_roots() Date: Thu, 18 Jul 2024 14:12:28 -0700 Message-Id: <20240718211230.1492011-17-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Isaku Yamahata Rename kvm_tdp_mmu_invalidate_all_roots() to kvm_tdp_mmu_invalidate_roots(), and make it enum kvm_tdp_mmu_root_types as an argument. kvm_tdp_mmu_invalidate_roots() is called with different root types. For kvm_mmu_zap_all_fast() it only operates on shared roots. But when tearing down a VM it needs to invalidate all roots. Have the callers only invalidate the required roots instead of all roots. Within kvm_tdp_mmu_invalidate_roots(), respect the root type passed by checking the root type in root iterator. 
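A toy model of the selective invalidation, for illustration only (the struct, list and enum values below are made up; the real code iterates kvm->arch.tdp_mmu_roots with mmu_lock held for write):

#include <stdio.h>

enum root_types { INVALID_ROOTS = 1, DIRECT_ROOTS = 2, MIRROR_ROOTS = 4 };

struct toy_root {
	int type;	/* DIRECT_ROOTS or MIRROR_ROOTS */
	int invalid;
};

/* Mirrors the shape of kvm_tdp_mmu_invalidate_roots(): strip the nonsensical
 * "invalid" bit from the request, then mark only the matching root types. */
static void invalidate_roots(struct toy_root *roots, int nr, int root_types)
{
	root_types &= ~INVALID_ROOTS;

	for (int i = 0; i < nr; i++) {
		if (!(roots[i].type & root_types))
			continue;	/* tdp_mmu_root_match() in the real code */
		roots[i].invalid = 1;
	}
}

int main(void)
{
	struct toy_root roots[] = { { DIRECT_ROOTS, 0 }, { MIRROR_ROOTS, 0 } };

	/* kvm_mmu_zap_all_fast() passes only the direct type, so the mirror
	 * root is left alone; VM teardown passes direct | mirror instead. */
	invalidate_roots(roots, 2, DIRECT_ROOTS);
	printf("direct invalid=%d, mirror invalid=%d\n",
	       roots[0].invalid, roots[1].invalid);
	return 0;
}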
Suggested-by: Chao Gao Signed-off-by: Isaku Yamahata Signed-off-by: Rick Edgecombe --- v3: - Use root enum instead of process enum (Paolo) - Squash with "KVM: x86/tdp_mmu: Invalidate correct roots" (Paolo) - Update comment in kvm_mmu_zap_all_fast() (Paolo) - Add warning for attempting to invalidate invalid roots (Paolo) v2: - Use process enum instead of root v1: - New patch --- arch/x86/kvm/mmu/mmu.c | 9 +++++++-- arch/x86/kvm/mmu/tdp_mmu.c | 15 +++++++++++++-- arch/x86/kvm/mmu/tdp_mmu.h | 3 ++- 3 files changed, 22 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2c73360533c2..3fe7f7d94c7e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6460,8 +6460,13 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) * write and in the same critical section as making the reload request, * e.g. before kvm_zap_obsolete_pages() could drop mmu_lock and yield. */ - if (tdp_mmu_enabled) - kvm_tdp_mmu_invalidate_all_roots(kvm); + if (tdp_mmu_enabled) { + /* + * External page tables don't support fast zapping, therefore + * their mirrors must be invalidated separately by the caller. + */ + kvm_tdp_mmu_invalidate_roots(kvm, KVM_DIRECT_ROOTS); + } =20 /* * Notify all vcpus to reload its shadow page table and flush TLB. diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index ea2c64450135..2f3ba9d477e9 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -37,7 +37,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) * for zapping and thus puts the TDP MMU's reference to each root, i.e. * ultimately frees all roots. */ - kvm_tdp_mmu_invalidate_all_roots(kvm); + kvm_tdp_mmu_invalidate_roots(kvm, KVM_VALID_ROOTS); kvm_tdp_mmu_zap_invalidated_roots(kvm, false); =20 WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages)); @@ -1115,10 +1115,18 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *= kvm, bool shared) * Note, kvm_tdp_mmu_zap_invalidated_roots() is gifted the TDP MMU's refer= ence. * See kvm_tdp_mmu_alloc_root(). */ -void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm) +void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm, + enum kvm_tdp_mmu_root_types root_types) { struct kvm_mmu_page *root; =20 + /* + * Invalidating invalid roots doesn't make sense, prevent developers from + * having to think about it. + */ + if (WARN_ON_ONCE(root_types & KVM_INVALID_ROOTS)) + root_types &=3D ~KVM_INVALID_ROOTS; + /* * mmu_lock must be held for write to ensure that a root doesn't become * invalid while there are active readers (invalidating a root while @@ -1140,6 +1148,9 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm) * or get/put references to roots. */ list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) { + if (!tdp_mmu_root_match(root, root_types)) + continue; + /* * Note, invalid roots can outlive a memslot update! 
Invalid * roots must be *zapped* before the memslot update completes, diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 5b607adca680..7927fa4a96e0 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -66,7 +66,8 @@ static inline struct kvm_mmu_page *tdp_mmu_get_root(struc= t kvm_vcpu *vcpu, bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool f= lush); bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp); void kvm_tdp_mmu_zap_all(struct kvm *kvm); -void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm); +void kvm_tdp_mmu_invalidate_roots(struct kvm *kvm, + enum kvm_tdp_mmu_root_types root_types); void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm, bool shared); =20 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 03A5F149C70; Thu, 18 Jul 2024 21:12:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337177; cv=none; b=mp60Cd6+Q62Fi8Efs0oOUy8CQKs9gumGE/JcSIrR6oiAC9zWf/f5mJqMJ7w5cwRhft3Z+jvb3LerNYIcP6Brvfxiu8wht8Ije2rOti66TlS618A6A6AWxIq5wSXpe2zrwPmwRI8YwHIbQdKAbvM4QF6xjKTePmL2Jd3gOrH5HWY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337177; c=relaxed/simple; bh=AKxlZs6v/6XIO+vxKXJVP3JXsqABU0Ig9aw4VXt/5Wg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=OJTEfMYAc75I7yJl0sSWJ13nsbPovKkNa3RuCGJc3JO6yLpk0/QyMlJxn1Eo4+9kEChI2MxKiRHORKoalfNKFRNf30LKCR0YmDbb4wfjKExfrNu1h0NKD/yyP30+qeCNqRQ8NLpbL6ugdbp95JAkBD3oZfCzI/tBHqv4TAkWAlE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=UJ1GBcgn; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="UJ1GBcgn" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337176; x=1752873176; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AKxlZs6v/6XIO+vxKXJVP3JXsqABU0Ig9aw4VXt/5Wg=; b=UJ1GBcgnGKUS6FVtj/C5Q+DeHecrJlJSZtogqRqhzLO4BBn5X4mLs7Hh IgKX04LAjAX4CN0th6KDcLvWih6aqD2I+8ogaL+GiN+jlX9CwJuWlJhSN EwT3zQDppwz6sxPjalRZyWjYIUU/VHc1EeVIls/RVMUrk3zQosMOVTJBW Oqgjrkno9+VaJR5E2PcYJ+IlXoqfBukeGa1JgytFHWhBq4lrn++vgWyP8 yUIouRQjcZxOPXIZPGkNABIxBr1gmaIKGRVRaQ4rsUWveN999e9yMx4cv VEIGTUwTXNgLJRHw+sCurxcW26gVXbCzJzk8Xvrt5eK8T67bOF4jLOlh7 A==; X-CSE-ConnectionGUID: QLUjLxYGQWeuq6eq6c2luw== X-CSE-MsgGUID: CopkzVvbQZetRZWF0VEocQ== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697479" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697479" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:52 -0700 X-CSE-ConnectionGUID: 
mD75KTJVQ8+BsDnj+Gr3gw== X-CSE-MsgGUID: D6XuAqgKROawyXDz/MsKVg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760433" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) ([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:52 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com Subject: [PATCH v4 17/18] KVM: x86/tdp_mmu: Don't zap valid mirror roots in kvm_tdp_mmu_zap_all() Date: Thu, 18 Jul 2024 14:12:29 -0700 Message-Id: <20240718211230.1492011-18-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Don't zap valid mirror roots in kvm_tdp_mmu_zap_all(), which in effect is only direct roots (invalid and valid). For TDX, kvm_tdp_mmu_zap_all() is only called during MMU notifier release. Since, mirrored EPT comes from guest mem, it will never be mapped to userspace, and won't apply. But in addition to be unnecessary, mirrored EPT is cleaned up in a special way during VM destruction. Pass the KVM_INVALID_ROOTS bit into __for_each_tdp_mmu_root_yield_safe() as well, to clean up invalid direct roots, as is the current behavior. Co-developed-by: Yan Zhao Signed-off-by: Yan Zhao Signed-off-by: Rick Edgecombe --- v4: - New patch --- arch/x86/kvm/mmu/tdp_mmu.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 2f3ba9d477e9..465c9fdb3301 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1044,19 +1044,23 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) struct kvm_mmu_page *root; =20 /* - * Zap all roots, including invalid roots, as all SPTEs must be dropped - * before returning to the caller. Zap directly even if the root is - * also being zapped by a worker. Walking zapped top-level SPTEs isn't - * all that expensive and mmu_lock is already held, which means the - * worker has yielded, i.e. flushing the work instead of zapping here - * isn't guaranteed to be any faster. + * Zap all roots, except valid mirror roots, as all direct SPTEs must + * be dropped before returning to the caller. For TDX, mirror roots + * don't need handling in response to the mmu notifier (the caller) and + * they also won't be invalid until the VM is being torn down. + * + * Zap directly even if the root is also being zapped by a worker. + * Walking zapped top-level SPTEs isn't all that expensive and mmu_lock + * is already held, which means the worker has yielded, i.e. flushing + * the work instead of zapping here isn't guaranteed to be any faster. * * A TLB flush is unnecessary, KVM zaps everything if and only the VM * is being destroyed or the userspace VMM has exited. In both cases, * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request. 
*/ lockdep_assert_held_write(&kvm->mmu_lock); - for_each_tdp_mmu_root_yield_safe(kvm, root) + __for_each_tdp_mmu_root_yield_safe(kvm, root, -1, + KVM_DIRECT_ROOTS | KVM_INVALID_ROOTS) tdp_mmu_zap_root(kvm, root, false); } =20 --=20 2.34.1 From nobody Thu Nov 14 07:05:12 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.15]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 459E714A09E; Thu, 18 Jul 2024 21:12:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.15 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337179; cv=none; b=rr/ZhLJWj/AycPR44qAMJFXnsqn1KpESfX96UpBJItWtHzXEWJMBz2WzoYmFFlc4IuzeQ80SlQ/oxPZUGq/UCEhpgc/4QArVNEV0HDZbQQ89/yl5TP/RSuk70xPfgGOZ4/dmOIM79Us2OSotkNKWGhfGKDLWfPVmmSslFKOwgW0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1721337179; c=relaxed/simple; bh=NXIuTu49zOahyxCR2t7PbDM9QfUDq2d6bkMaQkdeb5Q=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Yj3EH1QjTWCtTlwS1TJnZXUoKE0dE7sw6gVMtQGvy4qvI6PXAJmVJ/vfLUopHqu57Z4ORjgTHOeCqyFNlRhYC9KUvUSO6QZGW3srLK+UA8PjDDx+2wVrmti4i2PZLOAvxxEaPaV1aGdcPSwmcy+JgzQxzX+Dn06ImHiIUj14B4M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=a5ogO27E; arc=none smtp.client-ip=198.175.65.15 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="a5ogO27E" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1721337177; x=1752873177; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=NXIuTu49zOahyxCR2t7PbDM9QfUDq2d6bkMaQkdeb5Q=; b=a5ogO27EJXQz5CblWw7YHCFwL0ctMtnMJRHJyeL46mCzu72L+GBU0mZm hf0hZyU3ocO5UJijYe2iPPu3eULkZ7b7zM0OMbl1SdFp70xhi0Ysv9CME pw5ObbL3fHwcMOEpNlrEvhurt3ZIM+GX+Ee4ekdmCVHugFfTqTc+LYfs6 ZTpVD3MJqkDH6vn6djTrV6QjnwlT5z9pG0F91IEspAXV5AvlggaCRIMdU eBtalEXVdj28Z8ZdkWiOsRXMn4YR+bWPzO5vxlMGmJ9ElwGfz8WSndGdn Yhpy9EjFTEprf5Gfd6yVsQUrSTp09FhP8aKnJfk4+HVJaCs/oN2cJBdx5 g==; X-CSE-ConnectionGUID: BYiuRY24TOyR3HN5vsYZxQ== X-CSE-MsgGUID: ZcDWVEV5SLqUbdFEuPpI8Q== X-IronPort-AV: E=McAfee;i="6700,10204,11137"; a="22697487" X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="22697487" Received: from orviesa003.jf.intel.com ([10.64.159.143]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:53 -0700 X-CSE-ConnectionGUID: Ehj5O1VDTuelfLWxbbeNBg== X-CSE-MsgGUID: CwTw9HX0R9e8QCPlfmBH/w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,218,1716274800"; d="scan'208";a="55760437" Received: from ccbilbre-mobl3.amr.corp.intel.com (HELO rpedgeco-desk4..) 
([10.124.223.76]) by ORVIESA003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jul 2024 14:12:53 -0700 From: Rick Edgecombe To: seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org Cc: kai.huang@intel.com, dmatlack@google.com, erdemaktas@google.com, isaku.yamahata@gmail.com, linux-kernel@vger.kernel.org, sagis@google.com, yan.y.zhao@intel.com, rick.p.edgecombe@intel.com Subject: [PATCH v4 18/18] KVM: x86/mmu: Prevent aliased memslot GFNs Date: Thu, 18 Jul 2024 14:12:30 -0700 Message-Id: <20240718211230.1492011-19-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> References: <20240718211230.1492011-1-rick.p.edgecombe@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a few sanity checks to prevent memslot GFNs from ever having alias bits set. Like other Coco technologies, TDX has the concept of private and shared memory. For TDX the private and shared mappings are managed on separate EPT roots. The private half is managed indirectly though calls into a protected runtime environment called the TDX module, where the shared half is managed within KVM in normal page tables. For TDX, the shared half will be mapped in the higher alias, with a "shared bit" set in the GPA. However, KVM will still manage it with the same memslots as the private half. This means memslot looks ups and zapping operations will be provided with a GFN without the shared bit set. If these memslot GFNs ever had the bit that selects between the two aliases it could lead to unexpected behavior in the complicated code that directs faulting or zapping operations between the roots that map the two aliases. As a safety measure, prevent memslots from being set at a GFN range that contains the alias bit. Also, check in the kvm_faultin_pfn() for the fault path. This later check does less today, as the alias bits are specifically stripped from the GFN being checked, however future code could possibly call in to the fault handler in a way that skips this stripping. Since kvm_faultin_pfn() now has many references to vcpu->kvm, extract it to local variable. 
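For illustration, a small self-contained sketch of the alias-bit sanity check (the shared-bit position here is invented for the example; the kernel derives the real mask from the VM via kvm_gfn_direct_bits()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

/* Hypothetical alias ("shared") bit at GFN bit 35, i.e. GPA bit 47. */
static const gfn_t gfn_direct_bits = 1ULL << 35;

/* Same test as the patch's kvm_is_gfn_alias(). */
static bool gfn_is_alias(gfn_t gfn)
{
	return gfn & gfn_direct_bits;
}

/* Mirrors the kvm_arch_prepare_memory_region() check: the last GFN of the
 * slot must not carry the alias bit, otherwise the slot is rejected. */
static int check_memslot(gfn_t base_gfn, uint64_t npages)
{
	if (gfn_is_alias(base_gfn + npages - 1))
		return -1;	/* -EINVAL in the kernel */
	return 0;
}

int main(void)
{
	printf("slot below the alias:         %d\n", check_memslot(0x10000, 512));
	printf("slot crossing into the alias: %d\n",
	       check_memslot((1ULL << 35) - 256, 512));
	return 0;
}

The KVM_BUG_ON() added in kvm_faultin_pfn() is the same test applied to fault->gfn, as a backstop in case a caller ever passes in a GFN that still has the alias bit set.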
Link: https://lore.kernel.org/kvm/ZpbKqG_ZhCWxl-Fc@google.com/ Suggested-by: Sean Christopherson Signed-off-by: Rick Edgecombe --- v4: - New patch --- arch/x86/kvm/mmu.h | 5 +++++ arch/x86/kvm/mmu/mmu.c | 10 +++++++--- arch/x86/kvm/x86.c | 3 +++ 3 files changed, 15 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 4f6c86294f05..e6923cd7d648 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -344,4 +344,9 @@ static inline bool kvm_is_addr_direct(struct kvm *kvm, = gpa_t gpa) =20 return !gpa_direct_bits || (gpa & gpa_direct_bits); } + +static inline bool kvm_is_gfn_alias(struct kvm *kvm, gfn_t gfn) +{ + return gfn & kvm_gfn_direct_bits(kvm); +} #endif diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3fe7f7d94c7e..010ebcf628b4 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4415,8 +4415,12 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, st= ruct kvm_page_fault *fault, unsigned int access) { struct kvm_memory_slot *slot =3D fault->slot; + struct kvm *kvm =3D vcpu->kvm; int ret; =20 + if (KVM_BUG_ON(kvm_is_gfn_alias(kvm, fault->gfn), kvm)) + return -EFAULT; + /* * Note that the mmu_invalidate_seq also serves to detect a concurrent * change in attributes. is_page_fault_stale() will detect an @@ -4430,7 +4434,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, str= uct kvm_page_fault *fault, * Now that we have a snapshot of mmu_invalidate_seq we can check for a * private vs. shared mismatch. */ - if (fault->is_private !=3D kvm_mem_is_private(vcpu->kvm, fault->gfn)) { + if (fault->is_private !=3D kvm_mem_is_private(kvm, fault->gfn)) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); return -EFAULT; } @@ -4492,7 +4496,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, str= uct kvm_page_fault *fault, * *guaranteed* to need to retry, i.e. waiting until mmu_lock is held * to detect retry guarantees the worst case latency for the vCPU. */ - if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn= )) + if (mmu_invalidate_retry_gfn_unsafe(kvm, fault->mmu_seq, fault->gfn)) return RET_PF_RETRY; =20 ret =3D __kvm_faultin_pfn(vcpu, fault); @@ -4512,7 +4516,7 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, str= uct kvm_page_fault *fault, * overall cost of failing to detect the invalidation until after * mmu_lock is acquired. */ - if (mmu_invalidate_retry_gfn_unsafe(vcpu->kvm, fault->mmu_seq, fault->gfn= )) { + if (mmu_invalidate_retry_gfn_unsafe(kvm, fault->mmu_seq, fault->gfn)) { kvm_release_pfn_clean(fault->pfn); return RET_PF_RETRY; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 4f58423c6148..28bae3f95ef6 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12942,6 +12942,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, if ((new->base_gfn + new->npages - 1) > kvm_mmu_max_gfn()) return -EINVAL; =20 + if (kvm_is_gfn_alias(kvm, new->base_gfn + new->npages - 1)) + return -EINVAL; + return kvm_alloc_memslot_metadata(kvm, new); } =20 --=20 2.34.1