From: Fred Griffoul
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com,
	shuah@kernel.org, dwmw@amazon.co.uk, linux-kselftest@vger.kernel.org,
	linux-kernel@vger.kernel.org, Fred Griffoul
Subject: [PATCH v4 07/10] KVM: nVMX: Replace evmcs kvm_host_map with pfncache
Date: Fri, 2 Jan 2026 14:24:26 +0000
Message-ID: <20260102142429.896101-8-griffoul@gmail.com>
In-Reply-To: <20260102142429.896101-1-griffoul@gmail.com>
References: <20260102142429.896101-1-griffoul@gmail.com>

From: Fred Griffoul

Replace the eVMCS kvm_host_map with a gfn_to_pfn_cache to properly
handle memslot changes and to unify with the other pfncaches in nVMX.

The change introduces proper locking/unlocking semantics for eVMCS
access through the nested_lock_evmcs() and nested_unlock_evmcs()
helpers.
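As a rough sketch of the calling convention the helpers establish
(illustration only, mirroring the handle_vmread() hunk below; not an
additional change), every eVMCS dereference is now bracketed by the
lock/unlock pair:

	struct hv_enlightened_vmcs *evmcs;

	/* Takes the pfncache lock and returns the cached mapping (gpc->khva). */
	evmcs = nested_lock_evmcs(vmx);
	value = evmcs_read_any(evmcs, field, offset);
	/* Drops the pfncache lock; the mapping must not be used afterwards. */
	nested_unlock_evmcs(vmx);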
Signed-off-by: Fred Griffoul
---
 arch/x86/kvm/vmx/hyperv.h |  21 +++----
 arch/x86/kvm/vmx/nested.c | 115 ++++++++++++++++++++++++++------------
 arch/x86/kvm/vmx/vmx.h    |   3 +-
 3 files changed, 90 insertions(+), 49 deletions(-)

diff --git a/arch/x86/kvm/vmx/hyperv.h b/arch/x86/kvm/vmx/hyperv.h
index 3c7fea501ca5..3b6fcf8dff64 100644
--- a/arch/x86/kvm/vmx/hyperv.h
+++ b/arch/x86/kvm/vmx/hyperv.h
@@ -37,11 +37,6 @@ static inline bool nested_vmx_is_evmptr12_set(struct vcpu_vmx *vmx)
 	return evmptr_is_set(vmx->nested.hv_evmcs_vmptr);
 }
 
-static inline struct hv_enlightened_vmcs *nested_vmx_evmcs(struct vcpu_vmx *vmx)
-{
-	return vmx->nested.hv_evmcs;
-}
-
 static inline bool guest_cpu_cap_has_evmcs(struct kvm_vcpu *vcpu)
 {
 	/*
@@ -70,6 +65,8 @@ void nested_evmcs_filter_control_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *
 int nested_evmcs_check_controls(struct vmcs12 *vmcs12);
 bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu);
 void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu);
+struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx);
+void nested_unlock_evmcs(struct vcpu_vmx *vmx);
 #else
 static inline bool evmptr_is_valid(u64 evmptr)
 {
@@ -91,11 +88,6 @@ static inline bool nested_vmx_is_evmptr12_set(struct vcpu_vmx *vmx)
 	return false;
 }
 
-static inline struct hv_enlightened_vmcs *nested_vmx_evmcs(struct vcpu_vmx *vmx)
-{
-	return NULL;
-}
-
 static inline u32 nested_evmcs_clean_fields(struct vcpu_vmx *vmx)
 {
 	return 0;
@@ -105,6 +97,15 @@ static inline bool nested_evmcs_msr_bitmap(struct vcpu_vmx *vmx)
 {
 	return false;
 }
+
+static inline struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx)
+{
+	return NULL;
+}
+
+static inline void nested_unlock_evmcs(struct vcpu_vmx *vmx)
+{
+}
 #endif
 
 #endif /* __KVM_X86_VMX_HYPERV_H */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 5790e1a26456..491472ca825b 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -233,8 +233,6 @@ static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
-	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map);
-	vmx->nested.hv_evmcs = NULL;
 	vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
 	vmx->nested.hv_clean_fields = 0;
 	vmx->nested.hv_msr_bitmap = false;
@@ -266,7 +264,7 @@ static bool nested_evmcs_handle_vmclear(struct kvm_vcpu *vcpu, gpa_t vmptr)
 	    !evmptr_is_valid(nested_get_evmptr(vcpu)))
 		return false;
 
-	if (nested_vmx_evmcs(vmx) && vmptr == vmx->nested.hv_evmcs_vmptr)
+	if (vmptr == vmx->nested.hv_evmcs_vmptr)
 		nested_release_evmcs(vcpu);
 
 	return true;
@@ -391,6 +389,18 @@ static void *nested_gpc_lock_if_active(struct gfn_to_pfn_cache *gpc)
 	return gpc->khva;
 }
 
+#ifdef CONFIG_KVM_HYPERV
+struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx)
+{
+	return nested_gpc_lock_if_active(&vmx->nested.hv_evmcs_cache);
+}
+
+void nested_unlock_evmcs(struct vcpu_vmx *vmx)
+{
+	nested_gpc_unlock(&vmx->nested.hv_evmcs_cache);
+}
+#endif
+
 static struct pi_desc *nested_lock_pi_desc(struct vcpu_vmx *vmx)
 {
 	u8 *pi_desc_page;
@@ -441,6 +451,9 @@ static void free_nested(struct kvm_vcpu *vcpu)
 	kvm_gpc_deactivate(&vmx->nested.virtual_apic_cache);
 	kvm_gpc_deactivate(&vmx->nested.apic_access_page_cache);
 	kvm_gpc_deactivate(&vmx->nested.msr_bitmap_cache);
+#ifdef CONFIG_KVM_HYPERV
+	kvm_gpc_deactivate(&vmx->nested.hv_evmcs_cache);
+#endif
 
 	free_vpid(vmx->nested.vpid02);
 	vmx->nested.posted_intr_nv = -1;
@@ -1786,11 +1799,12 @@ static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
 	vmcs_load(vmx->loaded_vmcs->vmcs);
 }
 
-static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx, u32 hv_clean_fields)
-{
 #ifdef CONFIG_KVM_HYPERV
+static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx,
+				       struct hv_enlightened_vmcs *evmcs,
+				       u32 hv_clean_fields)
+{
 	struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
-	struct hv_enlightened_vmcs *evmcs = nested_vmx_evmcs(vmx);
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(&vmx->vcpu);
 
 	/* HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */
@@ -2029,16 +2043,14 @@ static void copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx, u32 hv_clean_fields
 	 */
 
 	return;
-#else /* CONFIG_KVM_HYPERV */
-	KVM_BUG_ON(1, vmx->vcpu.kvm);
-#endif /* CONFIG_KVM_HYPERV */
 }
+#endif /* CONFIG_KVM_HYPERV */
 
 static void copy_vmcs12_to_enlightened(struct vcpu_vmx *vmx)
 {
 #ifdef CONFIG_KVM_HYPERV
 	struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12;
-	struct hv_enlightened_vmcs *evmcs = nested_vmx_evmcs(vmx);
+	struct hv_enlightened_vmcs *evmcs = nested_lock_evmcs(vmx);
 
 	/*
 	 * Should not be changed by KVM:
@@ -2206,6 +2218,7 @@ static void copy_vmcs12_to_enlightened(struct vcpu_vmx *vmx)
 
 	evmcs->guest_bndcfgs = vmcs12->guest_bndcfgs;
 
+	nested_unlock_evmcs(vmx);
 	return;
 #else /* CONFIG_KVM_HYPERV */
 	KVM_BUG_ON(1, vmx->vcpu.kvm);
@@ -2222,6 +2235,8 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 #ifdef CONFIG_KVM_HYPERV
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct hv_enlightened_vmcs *evmcs;
+	struct gfn_to_pfn_cache *gpc;
+	enum nested_evmptrld_status status = EVMPTRLD_SUCCEEDED;
 	bool evmcs_gpa_changed = false;
 	u64 evmcs_gpa;
 
@@ -2234,17 +2249,19 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 		return EVMPTRLD_DISABLED;
 	}
 
+	gpc = &vmx->nested.hv_evmcs_cache;
+	if (nested_gpc_lock(gpc, evmcs_gpa)) {
+		nested_release_evmcs(vcpu);
+		return EVMPTRLD_ERROR;
+	}
+
+	evmcs = gpc->khva;
+
 	if (unlikely(evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) {
 		vmx->nested.current_vmptr = INVALID_GPA;
 
 		nested_release_evmcs(vcpu);
 
-		if (kvm_vcpu_map(vcpu, gpa_to_gfn(evmcs_gpa),
-				 &vmx->nested.hv_evmcs_map))
-			return EVMPTRLD_ERROR;
-
-		vmx->nested.hv_evmcs = vmx->nested.hv_evmcs_map.hva;
-
 		/*
 		 * Currently, KVM only supports eVMCS version 1
 		 * (== KVM_EVMCS_VERSION) and thus we expect guest to set this
@@ -2267,10 +2284,11 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 		 * eVMCS version or VMCS12 revision_id as valid values for first
 		 * u32 field of eVMCS.
 		 */
-		if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) &&
-		    (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) {
+		if ((evmcs->revision_id != KVM_EVMCS_VERSION) &&
+		    (evmcs->revision_id != VMCS12_REVISION)) {
 			nested_release_evmcs(vcpu);
-			return EVMPTRLD_VMFAIL;
+			status = EVMPTRLD_VMFAIL;
+			goto unlock;
 		}
 
 		vmx->nested.hv_evmcs_vmptr = evmcs_gpa;
@@ -2295,14 +2313,11 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 	 * between different L2 guests as KVM keeps a single VMCS12 per L1.
 	 */
 	if (from_launch || evmcs_gpa_changed) {
-		vmx->nested.hv_evmcs->hv_clean_fields &=
-			~HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
-
+		evmcs->hv_clean_fields &= ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
 		vmx->nested.force_msr_bitmap_recalc = true;
 	}
 
 	/* Cache evmcs fields to avoid reading evmcs after copy to vmcs12 */
-	evmcs = vmx->nested.hv_evmcs;
 	vmx->nested.hv_clean_fields = evmcs->hv_clean_fields;
 	vmx->nested.hv_flush_hypercall = evmcs->hv_enlightenments_control.nested_flush_hypercall;
 	vmx->nested.hv_msr_bitmap = evmcs->hv_enlightenments_control.msr_bitmap;
@@ -2311,13 +2326,15 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 
 		if (likely(!vmcs12->hdr.shadow_vmcs)) {
-			copy_enlightened_to_vmcs12(vmx, vmx->nested.hv_clean_fields);
+			copy_enlightened_to_vmcs12(vmx, evmcs, vmx->nested.hv_clean_fields);
 			/* Enlightened VMCS doesn't have launch state */
 			vmcs12->launch_state = !from_launch;
 		}
 	}
 
-	return EVMPTRLD_SUCCEEDED;
+unlock:
+	nested_gpc_unlock(gpc);
+	return status;
 #else
 	return EVMPTRLD_DISABLED;
 #endif
@@ -2813,7 +2830,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 			  enum vm_entry_failure_code *entry_failure_code)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-	struct hv_enlightened_vmcs *evmcs;
 	bool load_guest_pdptrs_vmcs12 = false;
 
 	if (vmx->nested.dirty_vmcs12 || nested_vmx_is_evmptr12_valid(vmx)) {
@@ -2951,9 +2967,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	 * bits when it changes a field in eVMCS. Mark all fields as clean
 	 * here.
 	 */
-	evmcs = nested_vmx_evmcs(vmx);
-	if (evmcs)
+	if (nested_vmx_is_evmptr12_valid(vmx)) {
+		struct hv_enlightened_vmcs *evmcs;
+
+		evmcs = nested_lock_evmcs(vmx);
 		evmcs->hv_clean_fields |= HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
+		nested_unlock_evmcs(vmx);
+	}
 
 	return 0;
 }
@@ -5595,6 +5615,9 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
 	kvm_gpc_init_for_vcpu(&vmx->nested.virtual_apic_cache, vcpu);
 	kvm_gpc_init_for_vcpu(&vmx->nested.pi_desc_cache, vcpu);
 
+#ifdef CONFIG_KVM_HYPERV
+	kvm_gpc_init(&vmx->nested.hv_evmcs_cache, vcpu->kvm);
+#endif
 	vmx->nested.vmcs02_initialized = false;
 	vmx->nested.vmxon = true;
 
@@ -5846,6 +5869,8 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 		/* Read the field, zero-extended to a u64 value */
 		value = vmcs12_read_any(vmcs12, field, offset);
 	} else {
+		struct hv_enlightened_vmcs *evmcs;
+
 		/*
 		 * Hyper-V TLFS (as of 6.0b) explicitly states, that while an
 		 * enlightened VMCS is active VMREAD/VMWRITE instructions are
@@ -5864,7 +5889,9 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
 			return nested_vmx_fail(vcpu, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
 
 		/* Read the field, zero-extended to a u64 value */
-		value = evmcs_read_any(nested_vmx_evmcs(vmx), field, offset);
+		evmcs = nested_lock_evmcs(vmx);
+		value = evmcs_read_any(evmcs, field, offset);
+		nested_unlock_evmcs(vmx);
 	}
 
 	/*
@@ -6902,6 +6929,27 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static void vmx_get_enlightened_to_vmcs12(struct vcpu_vmx *vmx)
+{
+#ifdef CONFIG_KVM_HYPERV
+	struct hv_enlightened_vmcs *evmcs;
+	struct kvm_vcpu *vcpu = &vmx->vcpu;
+
+	kvm_vcpu_srcu_read_lock(vcpu);
+	evmcs = nested_lock_evmcs(vmx);
+	/*
+	 * L1 hypervisor is not obliged to keep eVMCS
+	 * clean fields data always up-to-date while
+	 * not in guest mode, 'hv_clean_fields' is only
+	 * supposed to be actual upon vmentry so we need
+	 * to ignore it here and do full copy.
+	 */
+	copy_enlightened_to_vmcs12(vmx, evmcs, 0);
+	nested_unlock_evmcs(vmx);
+	kvm_vcpu_srcu_read_unlock(vcpu);
+#endif /* CONFIG_KVM_HYPERV */
+}
+
 static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 				struct kvm_nested_state __user *user_kvm_nested_state,
 				u32 user_data_size)
@@ -6992,14 +7040,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 		copy_vmcs02_to_vmcs12_rare(vcpu, get_vmcs12(vcpu));
 	if (!vmx->nested.need_vmcs12_to_shadow_sync) {
 		if (nested_vmx_is_evmptr12_valid(vmx))
-			/*
-			 * L1 hypervisor is not obliged to keep eVMCS
-			 * clean fields data always up-to-date while
-			 * not in guest mode, 'hv_clean_fields' is only
-			 * supposed to be actual upon vmentry so we need
-			 * to ignore it here and do full copy.
-			 */
-			copy_enlightened_to_vmcs12(vmx, 0);
+			vmx_get_enlightened_to_vmcs12(vmx);
 		else if (enable_shadow_vmcs)
 			copy_shadow_to_vmcs12(vmx);
 	}
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index cda96196c56c..5517d68872f0 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -208,8 +208,7 @@ struct nested_vmx {
 	u32 hv_clean_fields;
 	bool hv_msr_bitmap;
 	bool hv_flush_hypercall;
-	struct hv_enlightened_vmcs *hv_evmcs;
-	struct kvm_host_map hv_evmcs_map;
+	struct gfn_to_pfn_cache hv_evmcs_cache;
 #endif
 };
 
-- 
2.43.0