From: griffoul@gmail.com
X-Google-Original-From: griffoul@gmail.org
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, vkuznets@redhat.com,
	shuah@kernel.org, dwmw@amazon.co.uk, linux-kselftest@vger.kernel.org,
	linux-kernel@vger.kernel.org, Fred Griffoul
Subject: [PATCH v2 09/10] KVM: nVMX: Use nested context for pfncache persistence
Date: Tue, 18 Nov 2025 17:11:12 +0000
Message-ID: <20251118171113.363528-10-griffoul@gmail.org>
In-Reply-To: <20251118171113.363528-1-griffoul@gmail.org>
References: <20251118171113.363528-1-griffoul@gmail.org>

From: Fred Griffoul

Extend the nested context infrastructure to preserve gfn_to_pfn_cache
objects for nested VMX, using the kvm_nested_context_load() and
kvm_nested_context_clear() functions.

The VMX nested context stores gfn_to_pfn_cache structs for:

- MSR permission bitmaps
- APIC access page
- Virtual APIC page
- Posted interrupt descriptor
- Enlightened VMCS

For traditional nested VMX, these pfn caches are loaded during
'vmptrld' instruction emulation and the context is cleared during
'vmclear'. This follows the normal L2 vCPU migration sequence of
'vmclear/vmptrld/vmlaunch'.

For enlightened VMCS (eVMCS) support, both functions are called when a
change in the eVMCS GPA is detected, ensuring proper context management
for Hyper-V nested scenarios.

Preserving the gfn_to_pfn_cache objects across L2 context switches
avoids costly cache refresh operations, significantly improving nested
virtualization performance for workloads that frequently multiplex L2
vCPUs on an L1 vCPU or migrate L2 vCPUs between L1 vCPUs.
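The lifecycle is easiest to see in a stripped-down model. The sketch
below is an illustrative userspace model, not the kernel code:
context_load()/context_clear(), the table size, and the cache_hot flag
are invented for illustration, and it assumes (per the series
description) that the per-VM table installed by
kvm_nested_context_table_init() retains cleared contexts keyed by
vmptr, so a later vmptrld finds its pfn caches warm.

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t gpa_t;
  #define INVALID_GPA	(~(gpa_t)0)
  #define NR_CTX	4		/* toy table size */

  /* Stand-in for struct vmx_nested_context; cache_hot models warm pfncaches. */
  struct nested_context {
  	gpa_t vmptr;			/* key: vmcs12 guest physical address */
  	int cache_hot;
  };

  static struct nested_context table[NR_CTX];

  /* vmptrld path: find the context for @vmptr, or claim a free slot. */
  static struct nested_context *context_load(gpa_t vmptr)
  {
  	struct nested_context *free_slot = NULL;
  	int i;

  	for (i = 0; i < NR_CTX; i++) {
  		if (table[i].vmptr == vmptr)
  			return &table[i];	/* hit: pfncaches still warm */
  		if (table[i].vmptr == INVALID_GPA && !free_slot)
  			free_slot = &table[i];
  	}
  	if (free_slot) {
  		free_slot->vmptr = vmptr;
  		free_slot->cache_hot = 0;	/* cold: refreshed on first access */
  	}
  	return free_slot;			/* NULL: caller reuses current caches */
  }

  /*
   * vmclear path: detach the context from the vCPU. In this model the
   * entry simply stays in the table so a later vmptrld of the same
   * vmptr gets its caches back warm; eviction and freeing (the
   * reset_context/free_context ops) are not modelled here.
   */
  static void context_clear(gpa_t vmptr)
  {
  	(void)vmptr;
  }

  int main(void)
  {
  	struct nested_context *ctx;
  	int i;

  	for (i = 0; i < NR_CTX; i++)
  		table[i].vmptr = INVALID_GPA;

  	/* L2 migration between L1 vCPUs: vmclear, then vmptrld elsewhere. */
  	ctx = context_load(0x1000);	/* first vmptrld: cold */
  	ctx->cache_hot = 1;		/* caches filled while L2 runs */
  	context_clear(0x1000);		/* vmclear on the source L1 vCPU */
  	ctx = context_load(0x1000);	/* vmptrld on the destination L1 vCPU */
  	printf("caches warm after migration: %d\n", ctx->cache_hot);
  	return 0;
  }

Compiled with a plain cc, this prints "caches warm after migration: 1",
which is exactly the reuse that saves re-resolving the vmcs12 pages.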
Signed-off-by: Fred Griffoul
---
 arch/x86/kvm/vmx/nested.c | 155 +++++++++++++++++++++++++++++---------
 arch/x86/kvm/vmx/vmx.c    |   8 ++
 arch/x86/kvm/vmx/vmx.h    |  10 +--
 include/linux/kvm_host.h  |   2 +-
 4 files changed, 134 insertions(+), 41 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d910508e3c22..69c3bcb325f1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -226,6 +226,93 @@ static void vmx_disable_shadow_vmcs(struct vcpu_vmx *vmx)
 	vmx->nested.need_vmcs12_to_shadow_sync = false;
 }
 
+struct vmx_nested_context {
+	struct kvm_nested_context base;
+	struct gfn_to_pfn_cache msr_bitmap_cache;
+	struct gfn_to_pfn_cache apic_access_page_cache;
+	struct gfn_to_pfn_cache virtual_apic_cache;
+	struct gfn_to_pfn_cache pi_desc_cache;
+#ifdef CONFIG_KVM_HYPERV
+	struct gfn_to_pfn_cache evmcs_cache;
+#endif
+};
+
+static inline struct vmx_nested_context *to_vmx_nested_context(
+	struct kvm_nested_context *base)
+{
+	return base ? container_of(base, struct vmx_nested_context, base) : NULL;
+}
+
+static struct kvm_nested_context *vmx_nested_context_alloc(struct kvm_vcpu *vcpu)
+{
+	struct vmx_nested_context *ctx;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
+	if (!ctx)
+		return NULL;
+
+	kvm_gpc_init(&ctx->msr_bitmap_cache, vcpu->kvm);
+	kvm_gpc_init_for_vcpu(&ctx->apic_access_page_cache, vcpu);
+	kvm_gpc_init_for_vcpu(&ctx->virtual_apic_cache, vcpu);
+	kvm_gpc_init_for_vcpu(&ctx->pi_desc_cache, vcpu);
+#ifdef CONFIG_KVM_HYPERV
+	kvm_gpc_init(&ctx->evmcs_cache, vcpu->kvm);
+#endif
+	return &ctx->base;
+}
+
+static void vmx_nested_context_reset(struct kvm_nested_context *base)
+{
+	/*
+	 * Skip pfncache reinitialization: active ones will be refreshed on
+	 * access.
+	 */
+}
+
+static void vmx_nested_context_free(struct kvm_nested_context *base)
+{
+	struct vmx_nested_context *ctx = to_vmx_nested_context(base);
+
+	kvm_gpc_deactivate(&ctx->pi_desc_cache);
+	kvm_gpc_deactivate(&ctx->virtual_apic_cache);
+	kvm_gpc_deactivate(&ctx->apic_access_page_cache);
+	kvm_gpc_deactivate(&ctx->msr_bitmap_cache);
+#ifdef CONFIG_KVM_HYPERV
+	kvm_gpc_deactivate(&ctx->evmcs_cache);
+#endif
+	kfree(ctx);
+}
+
+static void vmx_nested_context_load(struct vcpu_vmx *vmx, gpa_t vmptr)
+{
+	struct vmx_nested_context *ctx;
+
+	ctx = to_vmx_nested_context(kvm_nested_context_load(&vmx->vcpu, vmptr));
+	if (!ctx) {
+		/*
+		 * The cache could not be allocated. In the unlikely case of no
+		 * available memory, an error will be returned to L1 when
+		 * mapping the vmcs12 pages. More likely the current pfncaches
+		 * will be reused (and refreshed since their GPAs do not
+		 * match).
+		 */
+		return;
+	}
+
+	vmx->nested.msr_bitmap_cache = &ctx->msr_bitmap_cache;
+	vmx->nested.apic_access_page_cache = &ctx->apic_access_page_cache;
+	vmx->nested.virtual_apic_cache = &ctx->virtual_apic_cache;
+	vmx->nested.pi_desc_cache = &ctx->pi_desc_cache;
+#ifdef CONFIG_KVM_HYPERV
+	vmx->nested.hv_evmcs_cache = &ctx->evmcs_cache;
+#endif
+}
+
+static void vmx_nested_context_clear(struct vcpu_vmx *vmx, gpa_t vmptr)
+{
+	kvm_nested_context_clear(&vmx->vcpu, vmptr);
+}
+
 static inline void nested_release_evmcs(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_KVM_HYPERV
@@ -325,6 +412,9 @@ static int nested_gpc_lock(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 
 	if (!PAGE_ALIGNED(gpa))
 		return -EINVAL;
+
+	if (WARN_ON_ONCE(!gpc))
+		return -ENOENT;
 retry:
 	read_lock(&gpc->lock);
 	if (!kvm_gpc_check(gpc, PAGE_SIZE) || (gpc->gpa != gpa)) {
@@ -387,14 +477,6 @@ static void free_nested(struct kvm_vcpu *vcpu)
 	vmx->nested.smm.vmxon = false;
 	vmx->nested.vmxon_ptr = INVALID_GPA;
 
-	kvm_gpc_deactivate(&vmx->nested.pi_desc_cache);
-	kvm_gpc_deactivate(&vmx->nested.virtual_apic_cache);
-	kvm_gpc_deactivate(&vmx->nested.apic_access_page_cache);
-	kvm_gpc_deactivate(&vmx->nested.msr_bitmap_cache);
-#ifdef CONFIG_KVM_HYPERV
-	kvm_gpc_deactivate(&vmx->nested.hv_evmcs_cache);
-#endif
-
 	free_vpid(vmx->nested.vpid02);
 	vmx->nested.posted_intr_nv = -1;
 	vmx->nested.current_vmptr = INVALID_GPA;
@@ -697,7 +779,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	gpc = &vmx->nested.msr_bitmap_cache;
+	gpc = vmx->nested.msr_bitmap_cache;
 	if (nested_gpc_lock(gpc, vmcs12->msr_bitmap))
 		return false;
 
@@ -2188,7 +2270,13 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 		return EVMPTRLD_DISABLED;
 	}
 
-	gpc = &vmx->nested.hv_evmcs_cache;
+	if (evmcs_gpa != vmx->nested.hv_evmcs_vmptr) {
+		vmx_nested_context_clear(vmx, vmx->nested.hv_evmcs_vmptr);
+		vmx_nested_context_load(vmx, evmcs_gpa);
+		evmcs_gpa_changed = true;
+	}
+
+	gpc = vmx->nested.hv_evmcs_cache;
 	if (nested_gpc_lock(gpc, evmcs_gpa)) {
 		nested_release_evmcs(vcpu);
 		return EVMPTRLD_ERROR;
@@ -2196,9 +2284,8 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 
 	evmcs = gpc->khva;
 
-	if (unlikely(evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) {
+	if (evmcs_gpa_changed) {
 		vmx->nested.current_vmptr = INVALID_GPA;
-		nested_release_evmcs(vcpu);
 
 		/*
@@ -2232,7 +2319,6 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 
 	vmx->nested.hv_evmcs_vmptr = evmcs_gpa;
 
-	evmcs_gpa_changed = true;
 	/*
 	 * Unlike normal vmcs12, enlightened vmcs12 is not fully
 	 * reloaded from guest's memory (read only fields, fields not
@@ -3540,7 +3626,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 
 
 	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
-		gpc = &vmx->nested.apic_access_page_cache;
+		gpc = vmx->nested.apic_access_page_cache;
 
 		if (!nested_gpc_hpa(gpc, vmcs12->apic_access_addr, &hpa)) {
 			vmcs_write64(APIC_ACCESS_ADDR, hpa);
@@ -3556,7 +3642,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	}
 
 	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
-		gpc = &vmx->nested.virtual_apic_cache;
+		gpc = vmx->nested.virtual_apic_cache;
 
 		if (!nested_gpc_hpa(gpc, vmcs12->virtual_apic_page_addr, &hpa)) {
 			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, hpa);
@@ -3582,7 +3668,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	}
 
 	if (nested_cpu_has_posted_intr(vmcs12)) {
-		gpc = &vmx->nested.pi_desc_cache;
+		gpc = vmx->nested.pi_desc_cache;
 
 		if (!nested_gpc_hpa(gpc, vmcs12->posted_intr_desc_addr & PAGE_MASK, &hpa)) {
 			vmx->nested.pi_desc_offset = offset_in_page(vmcs12->posted_intr_desc_addr);
@@ -3642,9 +3728,9 @@ static bool vmx_is_nested_state_invalid(struct kvm_vcpu *vcpu)
 	 * locks. Since kvm_gpc_invalid() doesn't verify gpc memslot
 	 * generation, we can also skip acquiring the srcu lock.
 	 */
-	return kvm_gpc_invalid(&vmx->nested.apic_access_page_cache) ||
-		kvm_gpc_invalid(&vmx->nested.virtual_apic_cache) ||
-		kvm_gpc_invalid(&vmx->nested.pi_desc_cache);
+	return kvm_gpc_invalid(vmx->nested.apic_access_page_cache) ||
+		kvm_gpc_invalid(vmx->nested.virtual_apic_cache) ||
+		kvm_gpc_invalid(vmx->nested.pi_desc_cache);
 }
 
 static int nested_vmx_write_pml_buffer(struct kvm_vcpu *vcpu, gpa_t gpa)
@@ -4140,6 +4226,8 @@ void nested_mark_vmcs12_pages_dirty(struct kvm_vcpu *vcpu)
 
 static void *nested_gpc_lock_if_active(struct gfn_to_pfn_cache *gpc)
 {
+	if (!gpc)
+		return NULL;
 retry:
 	read_lock(&gpc->lock);
 	if (!gpc->active) {
@@ -4160,12 +4248,12 @@ static void *nested_gpc_lock_if_active(struct gfn_to_pfn_cache *gpc)
 #ifdef CONFIG_KVM_HYPERV
 struct hv_enlightened_vmcs *nested_lock_evmcs(struct vcpu_vmx *vmx)
 {
-	return nested_gpc_lock_if_active(&vmx->nested.hv_evmcs_cache);
+	return nested_gpc_lock_if_active(vmx->nested.hv_evmcs_cache);
 }
 
 void nested_unlock_evmcs(struct vcpu_vmx *vmx)
 {
-	nested_gpc_unlock(&vmx->nested.hv_evmcs_cache);
+	nested_gpc_unlock(vmx->nested.hv_evmcs_cache);
 }
 #endif
 
@@ -4173,7 +4261,7 @@ static struct pi_desc *nested_lock_pi_desc(struct vcpu_vmx *vmx)
 {
 	u8 *pi_desc_page;
 
-	pi_desc_page = nested_gpc_lock_if_active(&vmx->nested.pi_desc_cache);
+	pi_desc_page = nested_gpc_lock_if_active(vmx->nested.pi_desc_cache);
 	if (!pi_desc_page)
 		return NULL;
 
@@ -4182,17 +4270,17 @@ static struct pi_desc *nested_lock_pi_desc(struct vcpu_vmx *vmx)
 
 static void nested_unlock_pi_desc(struct vcpu_vmx *vmx)
 {
-	nested_gpc_unlock(&vmx->nested.pi_desc_cache);
+	nested_gpc_unlock(vmx->nested.pi_desc_cache);
}
 
 static void *nested_lock_vapic(struct vcpu_vmx *vmx)
 {
-	return nested_gpc_lock_if_active(&vmx->nested.virtual_apic_cache);
+	return nested_gpc_lock_if_active(vmx->nested.virtual_apic_cache);
 }
 
 static void nested_unlock_vapic(struct vcpu_vmx *vmx)
 {
-	nested_gpc_unlock(&vmx->nested.virtual_apic_cache);
+	nested_gpc_unlock(vmx->nested.virtual_apic_cache);
 }
 
 static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
@@ -5651,16 +5739,6 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
 		     HRTIMER_MODE_ABS_PINNED);
 
 	vmx->nested.vpid02 = allocate_vpid();
-
-	kvm_gpc_init(&vmx->nested.msr_bitmap_cache, vcpu->kvm);
-
-	kvm_gpc_init_for_vcpu(&vmx->nested.apic_access_page_cache, vcpu);
-	kvm_gpc_init_for_vcpu(&vmx->nested.virtual_apic_cache, vcpu);
-	kvm_gpc_init_for_vcpu(&vmx->nested.pi_desc_cache, vcpu);
-
-#ifdef CONFIG_KVM_HYPERV
-	kvm_gpc_init(&vmx->nested.hv_evmcs_cache, vcpu->kvm);
-#endif
 	vmx->nested.vmcs02_initialized = false;
 	vmx->nested.vmxon = true;
 
@@ -5856,6 +5934,8 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 					   &zero, sizeof(zero));
 	}
 
+	vmx_nested_context_clear(vmx, vmptr);
+
 	return nested_vmx_succeed(vcpu);
 }
 
@@ -6100,6 +6180,8 @@ static void set_current_vmptr(struct vcpu_vmx *vmx, gpa_t vmptr)
 	}
 	vmx->nested.dirty_vmcs12 = true;
 	vmx->nested.force_msr_bitmap_recalc = true;
+
+	vmx_nested_context_load(vmx, vmptr);
 }
 
 /* Emulate the VMPTRLD instruction */
@@ -7689,4 +7771,7 @@ struct kvm_x86_nested_ops vmx_nested_ops = {
 	.get_evmcs_version = nested_get_evmcs_version,
 	.hv_inject_synthetic_vmexit_post_tlb_flush = vmx_hv_inject_synthetic_vmexit_post_tlb_flush,
 #endif
+	.alloc_context = vmx_nested_context_alloc,
+	.free_context = vmx_nested_context_free,
+	.reset_context = vmx_nested_context_reset,
 };
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 546272a5d34d..30b13241ae45 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7666,6 +7666,14 @@ int vmx_vm_init(struct kvm *kvm)
 
 	if (enable_pml)
 		kvm->arch.cpu_dirty_log_size = PML_LOG_NR_ENTRIES;
+
+	if (nested) {
+		int err;
+
+		err = kvm_nested_context_table_init(kvm);
+		if (err)
+			return err;
+	}
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 4da5a42b0c60..56b96e50290f 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -152,15 +152,15 @@ struct nested_vmx {
 
 	struct loaded_vmcs vmcs02;
 
-	struct gfn_to_pfn_cache msr_bitmap_cache;
+	struct gfn_to_pfn_cache *msr_bitmap_cache;
 
 	/*
 	 * Guest pages referred to in the vmcs02 with host-physical
 	 * pointers, so we must keep them pinned while L2 runs.
 	 */
-	struct gfn_to_pfn_cache apic_access_page_cache;
-	struct gfn_to_pfn_cache virtual_apic_cache;
-	struct gfn_to_pfn_cache pi_desc_cache;
+	struct gfn_to_pfn_cache *apic_access_page_cache;
+	struct gfn_to_pfn_cache *virtual_apic_cache;
+	struct gfn_to_pfn_cache *pi_desc_cache;
 
 	u64 pi_desc_offset;
 	bool pi_pending;
@@ -208,7 +208,7 @@ struct nested_vmx {
 	u32 hv_clean_fields;
 	bool hv_msr_bitmap;
 	bool hv_flush_hypercall;
-	struct gfn_to_pfn_cache hv_evmcs_cache;
+	struct gfn_to_pfn_cache *hv_evmcs_cache;
 #endif
 };
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b05aace9e295..97e0b949e412 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1533,7 +1533,7 @@ static inline bool kvm_gpc_is_hva_active(struct gfn_to_pfn_cache *gpc)
 
 static inline bool kvm_gpc_invalid(struct gfn_to_pfn_cache *gpc)
 {
-	return gpc->active && !gpc->valid;
+	return gpc && gpc->active && !gpc->valid;
 }
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
-- 
2.43.0