From nobody Tue Apr 7 16:14:42 2026
Reply-To: Sean Christopherson
Date: Thu, 12 Mar 2026 16:48:22 -0700
In-Reply-To: <20260312234823.3120658-1-seanjc@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260312234823.3120658-1-seanjc@google.com>
X-Mailer: git-send-email 2.53.0.851.ga537e3e6e9-goog
Message-ID: <20260312234823.3120658-2-seanjc@google.com>
Subject: [PATCH v2 1/2] KVM: x86: Move nested_run_pending to kvm_vcpu_arch
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

From: Yosry Ahmed

Move the nested_run_pending field, present in both svm_nested_state and
nested_vmx, to the common kvm_vcpu_arch.  This allows common code to use
it without plumbing it through per-vendor helpers.

nested_run_pending remains zero-initialized, as the entire kvm_vcpu
struct is, and all further accesses go through vcpu->arch instead of
svm->nested or vmx->nested.

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Yosry Ahmed
[sean: expand the comment in the field declaration]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  9 +++++++
 arch/x86/kvm/svm/nested.c       | 18 ++++++-------
 arch/x86/kvm/svm/svm.c          | 16 ++++++------
 arch/x86/kvm/svm/svm.h          |  4 ---
 arch/x86/kvm/vmx/nested.c       | 46 ++++++++++++++++-----------------
 arch/x86/kvm/vmx/vmx.c          | 16 ++++++------
 arch/x86/kvm/vmx/vmx.h          |  3 ---
 7 files changed, 57 insertions(+), 55 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d3bdc9828133..45171b607cf2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1099,6 +1099,15 @@ struct kvm_vcpu_arch {
 	 */
 	bool pdptrs_from_userspace;
 
+	/*
+	 * Set if an emulated nested VM-Enter to L2 is pending completion. KVM
+	 * must not synthesize a VM-Exit to L1 before entering L2, as VM-Exits
+	 * can only occur at instruction boundaries.  The only exception is
+	 * VMX's "notify" exits, which exist in large part to break the CPU out
+	 * of infinite ucode loops, but can corrupt vCPU state in the process!
+	 */
+	bool nested_run_pending;
+
 #if IS_ENABLED(CONFIG_HYPERV)
 	hpa_t hv_root_tdp;
 #endif
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5ff01d2ac85e..1b0e0336ef11 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -925,7 +925,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	 * the CPU and/or KVM and should be used regardless of L1's support.
 	 */
 	if (guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS) ||
-	    !svm->nested.nested_run_pending)
+	    !vcpu->arch.nested_run_pending)
 		vmcb02->control.next_rip = vmcb12_ctrl->next_rip;
 
 	svm->nmi_l1_to_l2 = is_evtinj_nmi(vmcb02->control.event_inj);
@@ -937,7 +937,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 	if (is_evtinj_soft(vmcb02->control.event_inj)) {
 		svm->soft_int_injected = true;
 		if (guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS) ||
-		    !svm->nested.nested_run_pending)
+		    !vcpu->arch.nested_run_pending)
 			svm->soft_int_next_rip = vmcb12_ctrl->next_rip;
 	}
 
@@ -1142,11 +1142,11 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	if (!npt_enabled)
 		vmcb01->save.cr3 = kvm_read_cr3(vcpu);
 
-	svm->nested.nested_run_pending = 1;
+	vcpu->arch.nested_run_pending = 1;
 
 	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, true) ||
 	    !nested_svm_merge_msrpm(vcpu)) {
-		svm->nested.nested_run_pending = 0;
+		vcpu->arch.nested_run_pending = 0;
 		svm->nmi_l1_to_l2 = false;
 		svm->soft_int_injected = false;
 
@@ -1288,7 +1288,7 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
 	/* Exit Guest-Mode */
 	leave_guest_mode(vcpu);
 	svm->nested.vmcb12_gpa = 0;
-	WARN_ON_ONCE(svm->nested.nested_run_pending);
+	WARN_ON_ONCE(vcpu->arch.nested_run_pending);
 
 	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
 
@@ -1498,7 +1498,7 @@ void svm_leave_nested(struct kvm_vcpu *vcpu)
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	if (is_guest_mode(vcpu)) {
-		svm->nested.nested_run_pending = 0;
+		vcpu->arch.nested_run_pending = 0;
 		svm->nested.vmcb12_gpa = INVALID_GPA;
 
 		leave_guest_mode(vcpu);
@@ -1683,7 +1683,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
 	 * previously injected event, the pending exception occurred while said
 	 * event was being delivered and thus needs to be handled.
 	 */
-	bool block_nested_exceptions = svm->nested.nested_run_pending;
+	bool block_nested_exceptions = vcpu->arch.nested_run_pending;
 	/*
 	 * New events (not exceptions) are only recognized at instruction
 	 * boundaries. If an event needs reinjection, then KVM is handling a
@@ -1858,7 +1858,7 @@ static int svm_get_nested_state(struct kvm_vcpu *vcpu,
 		kvm_state.size += KVM_STATE_NESTED_SVM_VMCB_SIZE;
 		kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE;
 
-		if (svm->nested.nested_run_pending)
+		if (vcpu->arch.nested_run_pending)
 			kvm_state.flags |= KVM_STATE_NESTED_RUN_PENDING;
 	}
 
@@ -1995,7 +1995,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 
 	svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
 
-	svm->nested.nested_run_pending =
+	vcpu->arch.nested_run_pending =
 		!!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING);
 
 	svm->nested.vmcb12_gpa = kvm_state->hdr.svm.vmcb_pa;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d98fbc0e58e8..ece115d47044 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3821,7 +3821,7 @@ static void svm_fixup_nested_rips(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (!is_guest_mode(vcpu) || !svm->nested.nested_run_pending)
+	if (!is_guest_mode(vcpu) || !vcpu->arch.nested_run_pending)
 		return;
 
 	/*
@@ -3969,7 +3969,7 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
 static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	if (svm->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return -EBUSY;
 
 	if (svm_nmi_blocked(vcpu))
@@ -4011,7 +4011,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
-	if (svm->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return -EBUSY;
 
 	if (svm_interrupt_blocked(vcpu))
@@ -4234,7 +4234,7 @@ static void svm_complete_soft_interrupt(struct kvm_vcpu *vcpu, u8 vector,
 	 * the soft int and will reinject it via the standard injection flow,
 	 * and so KVM needs to grab the state from the pending nested VMRUN.
 	 */
-	if (is_guest_mode(vcpu) && svm->nested.nested_run_pending)
+	if (is_guest_mode(vcpu) && vcpu->arch.nested_run_pending)
 		svm_set_nested_run_soft_int_state(vcpu);
 
 	/*
@@ -4537,11 +4537,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		nested_sync_control_from_vmcb02(svm);
 
 		/* Track VMRUNs that have made past consistency checking */
-		if (svm->nested.nested_run_pending &&
+		if (vcpu->arch.nested_run_pending &&
 		    !svm_is_vmrun_failure(svm->vmcb->control.exit_code))
 			++vcpu->stat.nested_run;
 
-		svm->nested.nested_run_pending = 0;
+		vcpu->arch.nested_run_pending = 0;
 	}
 
 	svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
@@ -4910,7 +4910,7 @@ bool svm_smi_blocked(struct kvm_vcpu *vcpu)
 static int svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	if (svm->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return -EBUSY;
 
 	if (svm_smi_blocked(vcpu))
@@ -5028,7 +5028,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 		goto unmap_save;
 
 	ret = 0;
-	svm->nested.nested_run_pending = 1;
+	vcpu->arch.nested_run_pending = 1;
 
 unmap_save:
 	kvm_vcpu_unmap(vcpu, &map_save);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ff1e4b4dc998..d3186956ec4b 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -215,10 +215,6 @@ struct svm_nested_state {
 	 */
 	void *msrpm;
 
-	/* A VMRUN has started but has not yet been performed, so
-	 * we cannot inject a nested vmexit yet. */
-	bool nested_run_pending;
-
 	/* cache for control fields of the guest */
 	struct vmcb_ctrl_area_cached ctl;
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 937aeb474af7..f1543a6ad524 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2273,7 +2273,7 @@ static void vmx_start_preemption_timer(struct kvm_vcpu *vcpu,
 
 static u64 nested_vmx_calc_efer(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 {
-	if (vmx->nested.nested_run_pending &&
+	if (vmx->vcpu.arch.nested_run_pending &&
 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER))
 		return vmcs12->guest_ia32_efer;
 	else if (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE)
@@ -2513,7 +2513,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 	/*
 	 * Interrupt/Exception Fields
 	 */
-	if (vmx->nested.nested_run_pending) {
+	if (vmx->vcpu.arch.nested_run_pending) {
 		vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
 			     vmcs12->vm_entry_intr_info_field);
 		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
@@ -2621,7 +2621,7 @@ static void prepare_vmcs02_rare(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
 		vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3);
 	}
 
-	if (kvm_mpx_supported() && vmx->nested.nested_run_pending &&
+	if (kvm_mpx_supported() && vmx->vcpu.arch.nested_run_pending &&
 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
 		vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
 }
@@ -2718,7 +2718,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 			!(evmcs->hv_clean_fields &
 			  HV_VMX_ENLIGHTENED_CLEAN_FIELD_GUEST_GRP1);
 	}
 
-	if (vmx->nested.nested_run_pending &&
+	if (vcpu->arch.nested_run_pending &&
 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS)) {
 		kvm_set_dr(vcpu, 7, vmcs12->guest_dr7);
 		vmx_guest_debugctl_write(vcpu, vmcs12->guest_ia32_debugctl &
@@ -2728,13 +2728,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		vmx_guest_debugctl_write(vcpu, vmx->nested.pre_vmenter_debugctl);
 	}
 
-	if (!vmx->nested.nested_run_pending ||
+	if (!vcpu->arch.nested_run_pending ||
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE))
 		vmcs_write_cet_state(vcpu, vmx->nested.pre_vmenter_s_cet,
 				     vmx->nested.pre_vmenter_ssp,
 				     vmx->nested.pre_vmenter_ssp_tbl);
 
-	if (kvm_mpx_supported() && (!vmx->nested.nested_run_pending ||
+	if (kvm_mpx_supported() && (!vcpu->arch.nested_run_pending ||
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
 		vmcs_write64(GUEST_BNDCFGS, vmx->nested.pre_vmenter_bndcfgs);
 	vmx_set_rflags(vcpu, vmcs12->guest_rflags);
@@ -2747,7 +2747,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	vcpu->arch.cr0_guest_owned_bits &= ~vmcs12->cr0_guest_host_mask;
 	vmcs_writel(CR0_GUEST_HOST_MASK, ~vcpu->arch.cr0_guest_owned_bits);
 
-	if (vmx->nested.nested_run_pending &&
+	if (vcpu->arch.nested_run_pending &&
 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT)) {
 		vmcs_write64(GUEST_IA32_PAT, vmcs12->guest_ia32_pat);
 		vcpu->arch.pat = vmcs12->guest_ia32_pat;
@@ -3349,7 +3349,7 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 	 * to bit 8 (LME) if bit 31 in the CR0 field (corresponding to
 	 * CR0.PG) is 1.
 	 */
-	if (to_vmx(vcpu)->nested.nested_run_pending &&
+	if (vcpu->arch.nested_run_pending &&
 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER)) {
 		if (CC(!kvm_valid_efer(vcpu, vmcs12->guest_ia32_efer)) ||
 		    CC(ia32e != !!(vmcs12->guest_ia32_efer & EFER_LMA)) ||
@@ -3627,15 +3627,15 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
 
 	kvm_service_local_tlb_flush_requests(vcpu);
 
-	if (!vmx->nested.nested_run_pending ||
+	if (!vcpu->arch.nested_run_pending ||
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
 		vmx->nested.pre_vmenter_debugctl = vmx_guest_debugctl_read();
 	if (kvm_mpx_supported() &&
-	    (!vmx->nested.nested_run_pending ||
+	    (!vcpu->arch.nested_run_pending ||
 	     !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS)))
 		vmx->nested.pre_vmenter_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
 
-	if (!vmx->nested.nested_run_pending ||
+	if (!vcpu->arch.nested_run_pending ||
 	    !(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_CET_STATE))
 		vmcs_read_cet_state(vcpu, &vmx->nested.pre_vmenter_s_cet,
 				    &vmx->nested.pre_vmenter_ssp,
@@ -3844,7 +3844,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	 * We're finally done with prerequisite checking, and can start with
 	 * the nested entry.
 	 */
-	vmx->nested.nested_run_pending = 1;
+	vcpu->arch.nested_run_pending = 1;
 	vmx->nested.has_preemption_timer_deadline = false;
 	status = nested_vmx_enter_non_root_mode(vcpu, true);
 	if (unlikely(status != NVMX_VMENTRY_SUCCESS))
@@ -3876,12 +3876,12 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 		    !nested_cpu_has(vmcs12, CPU_BASED_NMI_WINDOW_EXITING) &&
 		    !(nested_cpu_has(vmcs12, CPU_BASED_INTR_WINDOW_EXITING) &&
 		      (vmcs12->guest_rflags & X86_EFLAGS_IF))) {
-			vmx->nested.nested_run_pending = 0;
+			vcpu->arch.nested_run_pending = 0;
 			return kvm_emulate_halt_noskip(vcpu);
 		}
 		break;
 	case GUEST_ACTIVITY_WAIT_SIPI:
-		vmx->nested.nested_run_pending = 0;
+		vcpu->arch.nested_run_pending = 0;
 		kvm_set_mp_state(vcpu, KVM_MP_STATE_INIT_RECEIVED);
 		break;
 	default:
@@ -3891,7 +3891,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	return 1;
 
 vmentry_failed:
-	vmx->nested.nested_run_pending = 0;
+	vcpu->arch.nested_run_pending = 0;
 	if (status == NVMX_VMENTRY_KVM_INTERNAL_ERROR)
 		return 0;
 	if (status == NVMX_VMENTRY_VMEXIT)
@@ -4288,7 +4288,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
 	 * previously injected event, the pending exception occurred while said
 	 * event was being delivered and thus needs to be handled.
 	 */
-	bool block_nested_exceptions = vmx->nested.nested_run_pending;
+	bool block_nested_exceptions = vcpu->arch.nested_run_pending;
 	/*
 	 * Events that don't require injection, i.e. that are virtualized by
 	 * hardware, aren't blocked by a pending VM-Enter as KVM doesn't need
@@ -4657,7 +4657,7 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 
 	if (nested_cpu_has_preemption_timer(vmcs12) &&
 	    vmcs12->vm_exit_controls & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER &&
-	    !vmx->nested.nested_run_pending)
+	    !vcpu->arch.nested_run_pending)
 		vmcs12->vmx_preemption_timer_value =
 			vmx_get_preemption_timer_value(vcpu);
 
@@ -5056,7 +5056,7 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 	vmx->nested.mtf_pending = false;
 
 	/* trying to cancel vmlaunch/vmresume is a bug */
-	WARN_ON_ONCE(vmx->nested.nested_run_pending);
+	WARN_ON_ONCE(vcpu->arch.nested_run_pending);
 
 #ifdef CONFIG_KVM_HYPERV
 	if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
@@ -6679,7 +6679,7 @@ bool nested_vmx_reflect_vmexit(struct kvm_vcpu *vcpu)
 	unsigned long exit_qual;
 	u32 exit_intr_info;
 
-	WARN_ON_ONCE(vmx->nested.nested_run_pending);
+	WARN_ON_ONCE(vcpu->arch.nested_run_pending);
 
 	/*
 	 * Late nested VM-Fail shares the same flow as nested VM-Exit since KVM
@@ -6775,7 +6775,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 	if (is_guest_mode(vcpu)) {
 		kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE;
 
-		if (vmx->nested.nested_run_pending)
+		if (vcpu->arch.nested_run_pending)
 			kvm_state.flags |= KVM_STATE_NESTED_RUN_PENDING;
 
 		if (vmx->nested.mtf_pending)
@@ -6850,7 +6850,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
 void vmx_leave_nested(struct kvm_vcpu *vcpu)
 {
 	if (is_guest_mode(vcpu)) {
-		to_vmx(vcpu)->nested.nested_run_pending = 0;
+		vcpu->arch.nested_run_pending = 0;
 		nested_vmx_vmexit(vcpu, -1, 0, 0);
 	}
 	free_nested(vcpu);
@@ -7008,7 +7008,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 	if (!(kvm_state->flags & KVM_STATE_NESTED_GUEST_MODE))
 		return 0;
 
-	vmx->nested.nested_run_pending =
+	vcpu->arch.nested_run_pending =
 		!!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING);
 
 	vmx->nested.mtf_pending =
@@ -7054,7 +7054,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
 	return 0;
 
 error_guest_mode:
-	vmx->nested.nested_run_pending = 0;
+	vcpu->arch.nested_run_pending = 0;
 	return ret;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index b15b4662b653..6e4b12a5849c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5170,7 +5170,7 @@ bool vmx_nmi_blocked(struct kvm_vcpu *vcpu)
 
 int vmx_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
-	if (to_vmx(vcpu)->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return -EBUSY;
 
 	/* An NMI must not be injected into L2 if it's supposed to VM-Exit. */
@@ -5197,7 +5197,7 @@ bool vmx_interrupt_blocked(struct kvm_vcpu *vcpu)
 
 int vmx_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
-	if (to_vmx(vcpu)->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return -EBUSY;
 
 	/*
@@ -6009,7 +6009,7 @@ static bool vmx_unhandleable_emulation_required(struct kvm_vcpu *vcpu)
 	 * only reachable if userspace modifies L2 guest state after KVM has
 	 * performed the nested VM-Enter consistency checks.
 	 */
-	if (vmx->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return true;
 
 	/*
@@ -6693,7 +6693,7 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	 * invalid guest state should never happen as that means KVM knowingly
 	 * allowed a nested VM-Enter with an invalid vmcs12.  More below.
 	 */
-	if (KVM_BUG_ON(vmx->nested.nested_run_pending, vcpu->kvm))
+	if (KVM_BUG_ON(vcpu->arch.nested_run_pending, vcpu->kvm))
 		return -EIO;
 
 	if (is_guest_mode(vcpu)) {
@@ -7621,11 +7621,11 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 		 * Track VMLAUNCH/VMRESUME that have made past guest state
 		 * checking.
 		 */
-		if (vmx->nested.nested_run_pending &&
+		if (vcpu->arch.nested_run_pending &&
 		    !vmx_get_exit_reason(vcpu).failed_vmentry)
 			++vcpu->stat.nested_run;
 
-		vmx->nested.nested_run_pending = 0;
+		vcpu->arch.nested_run_pending = 0;
 	}
 
 	if (unlikely(vmx->fail))
@@ -8382,7 +8382,7 @@ void vmx_setup_mce(struct kvm_vcpu *vcpu)
 int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 {
 	/* we need a nested vmexit to enter SMM, postpone if run is pending */
-	if (to_vmx(vcpu)->nested.nested_run_pending)
+	if (vcpu->arch.nested_run_pending)
 		return -EBUSY;
 	return !is_smm(vcpu);
 }
@@ -8427,7 +8427,7 @@ int vmx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 		if (ret != NVMX_VMENTRY_SUCCESS)
 			return 1;
 
-		vmx->nested.nested_run_pending = 1;
+		vcpu->arch.nested_run_pending = 1;
 		vmx->nested.smm.guest_mode = false;
 	}
 	return 0;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 70bfe81dea54..db84e8001da5 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -138,9 +138,6 @@ struct nested_vmx {
 	 */
 	bool enlightened_vmcs_enabled;
 
-	/* L2 must run next, and mustn't decide to exit to L1. */
-	bool nested_run_pending;
-
 	/* Pending MTF VM-exit into L1. */
 	bool mtf_pending;
 
-- 
2.53.0.851.ga537e3e6e9-goog