From: griffoul@gmail.com
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com, shuah@kernel.org, dwmw@amazon.co.uk, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Fred Griffoul
Subject: [PATCH v2 07/10] KVM: nVMX: Replace evmcs kvm_host_map with pfncache
Date: Tue, 18 Nov 2025 17:11:10 +0000
Message-ID: <20251118171113.363528-8-griffoul@gmail.org>
In-Reply-To: <20251118171113.363528-1-griffoul@gmail.org>
References: <20251118171113.363528-1-griffoul@gmail.org>

From: Fred Griffoul

Replace the eVMCS kvm_host_map with a gfn_to_pfn_cache to properly
handle memslot changes and unify with the other pfncaches in nVMX.

The change introduces proper locking/unlocking semantics for eVMCS
access through the nested_lock_evmcs() and nested_unlock_evmcs()
helpers.
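As an aside on the calling convention (editor's illustration only, not part
of the patch): every access to the enlightened VMCS is now expected to be
bracketed by the lock/unlock pair, roughly as in the sketch below. The
function name example_read_evmcs_field() is made up for this illustration;
struct vcpu_vmx, struct hv_enlightened_vmcs, evmcs_read_any() and
nested_vmx_is_evmptr12_valid() are assumed from the existing nVMX code.

/* Illustration only: expected usage of the new lock/unlock helpers. */
static u64 example_read_evmcs_field(struct vcpu_vmx *vmx,
                                    unsigned long field, int offset)
{
        struct hv_enlightened_vmcs *evmcs;
        u64 value;

        /* Callers are expected to check that an eVMCS is loaded first. */
        if (!nested_vmx_is_evmptr12_valid(vmx))
                return 0;

        /* Pin and map the eVMCS page through the pfncache. */
        evmcs = nested_lock_evmcs(vmx);
        value = evmcs_read_any(evmcs, field, offset);
        /* Drop the pfncache lock as soon as the access is done. */
        nested_unlock_evmcs(vmx);

        return value;
}

The same pattern appears in the copy_vmcs12_to_enlightened(),
prepare_vmcs02() and handle_vmread() hunks below.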
Signed-off-by: Fred Griffoul
---
 arch/x86/kvm/vmx/hyperv.h |  21 ++++----
 arch/x86/kvm/vmx/nested.c | 109 ++++++++++++++++++++++++++------------
 arch/x86/kvm/vmx/vmx.h    |   3 +-
 3 files changed, 88 insertions(+), 45 deletions(-)

diff --git a/arch/x86/kvm/vmx/hyperv.h b/arch/x86/kvm/vmx/hyperv.h
index 3c7fea501ca5..3b6fcf8dff64 100644
--- a/arch/x86/kvm/vmx/hyperv.h
+++ b/arch/x86/kvm/vmx/hyperv.h
@@ -37,11 +37,6 @@ static inline bool nested_vmx_is_evmptr12_set(struct vcpu_vmx *vmx)
         return evmptr_is_set(vmx->nested.hv_evmcs_vmptr);
 }
 
-static inline struct hv_enlightened_vmcs *nested_vmx_evmcs(struct vcpu_vmx *vmx)
-{
-        return vmx->nested.hv_evmcs;
-}
-
 static inline bool guest_cpu_cap_has_evmcs(struct kvm_vcpu *vcpu)
 {
         /*
@@ -70,6 +65,8 @@ void nested_evmcs_filter_control_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *
 int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
 bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu);
 void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
+struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx);
+void nested_unlock_evmcs(struct vcpu_vmx *vmx);
 #else
 static inline bool evmptr_is_valid(u64 evmptr)
 {
@@ -91,11 +88,6 @@ static inline bool nested_vmx_is_evmptr12_set(struct vcpu_vmx *vmx)
         return false;
 }
 
-static inline struct hv_enlightened_vmcs *nested_vmx_evmcs(struct vcpu_vmx *vmx)
-{
-        return NULL;
-}
-
 static inline u32 nested_evmcs_clean_fields(struct vcpu_vmx *vmx)
 {
         return 0;
@@ -105,6 +97,15 @@ static inline bool nested_evmcs_msr_bitmap(struct vcpu_vmx *vmx)
 {
         return false;
 }
+
+static inline struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx)
+{
+        return NULL;
+}
+
+static inline void nested_unlock_evmcs(struct vcpu_vmx *vmx)
+{
+}
 #endif
 
 #endif /* __KVM_X86_VMX_HYPERV_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index aec150612818..d910508e3c22 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -232,8 +232,6 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
         struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
         struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-        kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map);
-        vmx->nested.hv_evmcs = NULL;
         vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
         vmx->nested.hv_clean_fields = 0;
         vmx->nested.hv_msr_bitmap = false;
@@ -265,7 +263,7 @@ static bool nested_evmcs_handle_vmclear(struct kvm_vcpu *vcpu, gpa_t vmptr)
             !evmptr_is_valid(nested_get_evmptr(vcpu)))
                 return false;
 
-        if (nested_vmx_evmcs(vmx) && vmptr == vmx->nested.hv_evmcs_vmptr)
+        if (vmptr == vmx->nested.hv_evmcs_vmptr)
                 nested_release_evmcs(vcpu);
 
         return true;
@@ -393,6 +391,9 @@ static void free_nested(struct kvm_vcpu *vcpu)
         kvm_gpc_deactivate(&vmx->nested.virtual_apic_cache);
         kvm_gpc_deactivate(&vmx->nested.apic_access_page_cache);
         kvm_gpc_deactivate(&vmx->nested.msr_bitmap_cache);
+#ifdef CONFIG_KVM_HYPERV
+        kvm_gpc_deactivate(&vmx->nested.hv_evmcs_cache);
+#endif
 
         free_vpid(vmx->nested.vpid02);
         vmx->nested.posted_intr_nv = -1;
@@ -1735,11 +1736,12 @@ static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
         vmcs_load(vmx->loaded_vmcs->vmcs);
 }
 
-static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx, u32 hv_clean_fields)
+static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx,
+                                       struct hv_enlightened_vmcs *evmcs,
+                                       u32 hv_clean_fields)
 {
 #ifdef CONFIG_KVM_HYPERV
         struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
-        struct hv_enlightened_vmcs *evmcs = nested_vmx_evmcs(vmx);
         struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(&vmx->vcpu);
 
         /* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
@@ -1987,7 +1989,7 @@ static void copy_vmcs12_to_enlightened(struct vcpu_vmx *vmx)
 {
 #ifdef CONFIG_KVM_HYPERV
         struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
-        struct hv_enlightened_vmcs *evmcs = nested_vmx_evmcs(vmx);
+        struct hv_enlightened_vmcs *evmcs = nested_lock_evmcs(vmx);
 
         /*
          * Should not be changed by KVM:
@@ -2155,6 +2157,7 @@ static void copy_vmcs12_to_enlightened(struct vcpu_vmx *vmx)
 
         evmcs->guest_bndcfgs = vmcs12->guest_bndcfgs;
 
+        nested_unlock_evmcs(vmx);
         return;
 #else /* CONFIG_KVM_HYPERV */
         KVM_BUG_ON(1, vmx->vcpu.kvm);
@@ -2171,6 +2174,8 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 #ifdef CONFIG_KVM_HYPERV
         struct vcpu_vmx *vmx = to_vmx(vcpu);
         struct hv_enlightened_vmcs *evmcs;
+        struct gfn_to_pfn_cache *gpc;
+        enum nested_evmptrld_status status = EVMPTRLD_SUCCEEDED;
         bool evmcs_gpa_changed = false;
         u64 evmcs_gpa;
 
@@ -2183,17 +2188,19 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
                 return EVMPTRLD_DISABLED;
         }
 
+        gpc = &vmx->nested.hv_evmcs_cache;
+        if (nested_gpc_lock(gpc, evmcs_gpa)) {
+                nested_release_evmcs(vcpu);
+                return EVMPTRLD_ERROR;
+        }
+
+        evmcs = gpc->khva;
+
         if (unlikely(evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) {
                 vmx->nested.current_vmptr = INVALID_GPA;
 
                 nested_release_evmcs(vcpu);
 
-                if (kvm_vcpu_map(vcpu, gpa_to_gfn(evmcs_gpa),
-                                 &vmx->nested.hv_evmcs_map))
-                        return EVMPTRLD_ERROR;
-
-                vmx->nested.hv_evmcs = vmx->nested.hv_evmcs_map.hva;
-
                 /*
                  * Currently, KVM only supports eVMCS version 1
                  * (== KVM_EVMCS_VERSION) and thus we expect guest to set this
@@ -2216,10 +2223,11 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
                  * eVMCS version or VMCS12 revision_id as valid values for first
                  * u32 field of eVMCS.
                  */
-                if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&
-                    (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {
+                if ((evmcs->revision_id != KVM_EVMCS_VERSION) &&
+                    (evmcs->revision_id != VMCS12_REVISION)) {
                         nested_release_evmcs(vcpu);
-                        return EVMPTRLD_VMFAIL;
+                        status = EVMPTRLD_VMFAIL;
+                        goto unlock;
                 }
 
                 vmx->nested.hv_evmcs_vmptr = evmcs_gpa;
@@ -2244,14 +2252,11 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
          * between different L2 guests as KVM keeps a single VMCS12 per L1.
          */
         if (from_launch || evmcs_gpa_changed) {
-                vmx->nested.hv_evmcs->hv_clean_fields &=
-                        ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
-
+                evmcs->hv_clean_fields &= ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
                 vmx->nested.force_msr_bitmap_recalc = true;
         }
 
         /* Cache evmcs fields to avoid reading evmcs after copy to vmcs12 */
-        evmcs = vmx->nested.hv_evmcs;
         vmx->nested.hv_clean_fields = evmcs->hv_clean_fields;
         vmx->nested.hv_flush_hypercall = evmcs->hv_enlightenments_control.nested_flush_hypercall;
         vmx->nested.hv_msr_bitmap = evmcs->hv_enlightenments_control.msr_bitmap;
@@ -2260,13 +2265,15 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
                 struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
                 if (likely(!vmcs12->hdr.shadow_vmcs)) {
-                        copy_enlightened_to_vmcs12(vmx, vmx->nested.hv_clean_fields);
+                        copy_enlightened_to_vmcs12(vmx, evmcs, vmx->nested.hv_clean_fields);
                         /* Enlightened VMCS doesn't have launch state */
                         vmcs12->launch_state = !from_launch;
                 }
         }
 
-        return EVMPTRLD_SUCCEEDED;
+unlock:
+        nested_gpc_unlock(gpc);
+        return status;
 #else
         return EVMPTRLD_DISABLED;
 #endif
@@ -2771,7 +2778,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
                           enum vm_entry_failure_code *entry_failure_code)
 {
         struct vcpu_vmx *vmx = to_vmx(vcpu);
-        struct hv_enlightened_vmcs *evmcs;
         bool load_guest_pdptrs_vmcs12 = false;
 
         if (vmx->nested.dirty_vmcs12 || nested_vmx_is_evmptr12_valid(vmx)) {
@@ -2909,9 +2915,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
          * bits when it changes a field in eVMCS. Mark all fields as clean
          * here.
          */
-        evmcs = nested_vmx_evmcs(vmx);
-        if (evmcs)
+        if (nested_vmx_is_evmptr12_valid(vmx)) {
+                struct hv_enlightened_vmcs *evmcs;
+
+                evmcs = nested_lock_evmcs(vmx);
                 evmcs->hv_clean_fields |= HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
+                nested_unlock_evmcs(vmx);
+        }
 
         return 0;
 }
@@ -4147,6 +4157,18 @@ static void *nested_gpc_lock_if_active(struct gfn_to_pfn_cache *gpc)
         return gpc->khva;
 }
 
+#ifdef CONFIG_KVM_HYPERV
+struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx)
+{
+        return nested_gpc_lock_if_active(&vmx->nested.hv_evmcs_cache);
+}
+
+void nested_unlock_evmcs(struct vcpu_vmx *vmx)
+{
+        nested_gpc_unlock(&vmx->nested.hv_evmcs_cache);
+}
+#endif
+
 static struct pi_desc *nested_lock_pi_desc(struct vcpu_vmx *vmx)
 {
         u8 *pi_desc_page;
@@ -5636,6 +5658,9 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
         kvm_gpc_init_for_vcpu(&vmx->nested.virtual_apic_cache, vcpu);
         kvm_gpc_init_for_vcpu(&vmx->nested.pi_desc_cache, vcpu);
 
+#ifdef CONFIG_KVM_HYPERV
+        kvm_gpc_init(&vmx->nested.hv_evmcs_cache, vcpu->kvm);
+#endif
         vmx->nested.vmcs02_initialized = false;
         vmx->nested.vmxon = true;
 
@@ -5887,6 +5912,8 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
                 /* Read the field, zero-extended to a u64 value */
                 value = vmcs12_read_any(vmcs12, field, offset);
         } else {
+                struct hv_enlightened_vmcs *evmcs;
+
                 /*
                  * Hyper-V TLFS (as of 6.0b) explicitly states, that while an
                  * enlightened VMCS is active VMREAD/VMWRITE instructions are
@@ -5905,7 +5932,9 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
                         return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
 
                 /* Read the field, zero-extended to a u64 value */
-                value = evmcs_read_any(nested_vmx_evmcs(vmx), field, offset);
+                evmcs = nested_lock_evmcs(vmx);
+                value = evmcs_read_any(evmcs, field, offset);
+                nested_unlock_evmcs(vmx);
         }
 
         /*
@@ -6935,6 +6964,27 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
         return true;
 }
 
+static void vmx_get_enlightened_to_vmcs12(struct vcpu_vmx *vmx)
+{
+#ifdef CONFIG_KVM_HYPERV
+        struct hv_enlightened_vmcs *evmcs;
+        struct kvm_vcpu *vcpu = &vmx->vcpu;
+
+        kvm_vcpu_srcu_read_lock(vcpu);
+        evmcs = nested_lock_evmcs(vmx);
+        /*
+         * L1 hypervisor is not obliged to keep eVMCS
+         * clean fields data always up-to-date while
+         * not in guest mode, 'hv_clean_fields' is only
+         * supposed to be actual upon vmentry so we need
+         * to ignore it here and do full copy.
+         */
+        copy_enlightened_to_vmcs12(vmx, evmcs, 0);
+        nested_unlock_evmcs(vmx);
+        kvm_vcpu_srcu_read_unlock(vcpu);
+#endif /* CONFIG_KVM_HYPERV */
+}
+
 static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
                                 struct kvm_nested_state __user *user_kvm_nested_state,
                                 u32 user_data_size)
@@ -7025,14 +7075,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
                 copy_vmcs02_to_vmcs12_rare(vcpu, get_vmcs12(vcpu));
                 if (!vmx->nested.need_vmcs12_to_shadow_sync) {
                         if (nested_vmx_is_evmptr12_valid(vmx))
-                                /*
-                                 * L1 hypervisor is not obliged to keep eVMCS
-                                 * clean fields data always up-to-date while
-                                 * not in guest mode, 'hv_clean_fields' is only
-                                 * supposed to be actual upon vmentry so we need
-                                 * to ignore it here and do full copy.
-                                 */
-                                copy_enlightened_to_vmcs12(vmx, 0);
+                                vmx_get_enlightened_to_vmcs12(vmx);
                         else if (enable_shadow_vmcs)
                                 copy_shadow_to_vmcs12(vmx);
                 }
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 87708af502f3..4da5a42b0c60 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -208,8 +208,7 @@ struct nested_vmx {
         u32 hv_clean_fields;
         bool hv_msr_bitmap;
         bool hv_flush_hypercall;
-        struct hv_enlightened_vmcs *hv_evmcs;
-        struct kvm_host_map hv_evmcs_map;
+        struct gfn_to_pfn_cache hv_evmcs_cache;
 #endif
 };
 
-- 
2.43.0