From nobody Sat Oct 4 16:22:32 2025
Reply-To: Sean Christopherson
Date: Thu, 14 Aug 2025 17:25:33 -0700
In-Reply-To:
<20250815002540.2375664-1-seanjc@google.com>
References: <20250815002540.2375664-1-seanjc@google.com>
Message-ID: <20250815002540.2375664-14-seanjc@google.com>
Mime-Version: 1.0
Subject: [PATCH 6.6.y 13/20] KVM: x86: Fully defer to vendor code to decide how to force immediate exit
From: Sean Christopherson
To: stable@vger.kernel.org, Greg Kroah-Hartman, Sasha Levin
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Paolo Bonzini
Content-Type: text/plain; charset="utf-8"

[ Upstream commit 0ec3d6d1f169baa7fc512ae4b78d17e7c94b7763 ]

Now that vmx->req_immediate_exit is used only in the scope of
vmx_vcpu_run(), use force_immediate_exit to detect that KVM should usurp
the VMX preemption timer to force a VM-Exit and let vendor code fully
handle forcing a VM-Exit.

Opportunistically drop __kvm_request_immediate_exit() and just have
vendor code call smp_send_reschedule() directly.  SVM already does this
when injecting an event while also trying to single-step an IRET, i.e.
it's not exactly secret knowledge that KVM uses a reschedule IPI to force
an exit.
Link: https://lore.kernel.org/r/20240110012705.506918-7-seanjc@google.com
Signed-off-by: Sean Christopherson
[sean: resolve absurd conflict due to funky kvm_x86_ops.sched_in prototype]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 -
 arch/x86/include/asm/kvm_host.h    |  3 ---
 arch/x86/kvm/svm/svm.c             |  7 ++++---
 arch/x86/kvm/vmx/vmx.c             | 32 +++++++++++++-----------------
 arch/x86/kvm/vmx/vmx.h             |  2 --
 arch/x86/kvm/x86.c                 | 10 +---------
 6 files changed, 19 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index e59ded976166..8fe6667d945f 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -102,7 +102,6 @@ KVM_X86_OP(write_tsc_multiplier)
 KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP(request_immediate_exit)
 KVM_X86_OP(sched_in)
 KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
 KVM_X86_OP_OPTIONAL(vcpu_blocking)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5703600a454e..8898ad8cb3de 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1695,8 +1695,6 @@ struct kvm_x86_ops {
 			       struct x86_exception *exception);
 	void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
 
-	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
-
 	void (*sched_in)(struct kvm_vcpu *kvm, int cpu);
 
 	/*
@@ -2182,7 +2180,6 @@ extern bool kvm_find_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu);
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
-void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu);
 
 void __user *__x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 				     u32 size);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f42c6ef7dc20..4a53b38ea386 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4222,8 +4222,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
 		 * is enough to force an immediate vmexit.
 		 */
 		disable_nmi_singlestep(svm);
+		force_immediate_exit = true;
+	}
+
+	if (force_immediate_exit)
 		smp_send_reschedule(vcpu->cpu);
-	}
 
 	pre_svm_run(vcpu);
 
@@ -5075,8 +5078,6 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.check_intercept = svm_check_intercept,
 	.handle_exit_irqoff = svm_handle_exit_irqoff,
 
-	.request_immediate_exit = __kvm_request_immediate_exit,
-
 	.sched_in = svm_sched_in,
 
 	.nested_ops = &svm_nested_ops,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0ecc0e996386..704e5a552b4f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -49,6 +49,8 @@
 #include
 #include
 
+#include
+
 #include "capabilities.h"
 #include "cpuid.h"
 #include "hyperv.h"
@@ -1304,8 +1306,6 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	u16 fs_sel, gs_sel;
 	int i;
 
-	vmx->req_immediate_exit = false;
-
 	/*
 	 * Note that guest MSRs to be saved/restored can also be changed
 	 * when guest state is loaded. This happens when guest transitions
@@ -6015,7 +6015,8 @@ static int handle_pml_full(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
-static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
+static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu,
+						   bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
@@ -6031,7 +6032,7 @@ static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
 	 * If the timer expired because KVM used it to force an immediate exit,
 	 * then mission accomplished.
 	 */
-	if (vmx->req_immediate_exit)
+	if (force_immediate_exit)
 		return EXIT_FASTPATH_EXIT_HANDLED;
 
 	/*
@@ -7210,13 +7211,13 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 					msrs[i].host, false);
 }
 
-static void vmx_update_hv_timer(struct kvm_vcpu *vcpu)
+static void vmx_update_hv_timer(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	u64 tscl;
 	u32 delta_tsc;
 
-	if (vmx->req_immediate_exit) {
+	if (force_immediate_exit) {
 		vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, 0);
 		vmx->loaded_vmcs->hv_timer_soft_disabled = false;
 	} else if (vmx->hv_deadline_tsc != -1) {
@@ -7269,7 +7270,8 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
 	barrier_nospec();
 }
 
-static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu,
+					     bool force_immediate_exit)
 {
 	/*
 	 * If L2 is active, some VMX preemption timer exits can be handled in
@@ -7283,7 +7285,7 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	case EXIT_REASON_MSR_WRITE:
 		return handle_fastpath_set_msr_irqoff(vcpu);
 	case EXIT_REASON_PREEMPTION_TIMER:
-		return handle_fastpath_preemption_timer(vcpu);
+		return handle_fastpath_preemption_timer(vcpu, force_immediate_exit);
 	default:
 		return EXIT_FASTPATH_NONE;
 	}
@@ -7425,7 +7427,9 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	vmx_passthrough_lbr_msrs(vcpu);
 
 	if (enable_preemption_timer)
-		vmx_update_hv_timer(vcpu);
+		vmx_update_hv_timer(vcpu, force_immediate_exit);
+	else if (force_immediate_exit)
+		smp_send_reschedule(vcpu->cpu);
 
 	kvm_wait_lapic_expire(vcpu);
 
@@ -7489,7 +7493,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
 
-	return vmx_exit_handlers_fastpath(vcpu);
+	return vmx_exit_handlers_fastpath(vcpu, force_immediate_exit);
 }
 
 static void vmx_vcpu_free(struct kvm_vcpu *vcpu)
@@ -7988,11 +7992,6 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG);
 }
 
-static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
-{
-	to_vmx(vcpu)->req_immediate_exit = true;
-}
-
 static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
 				  struct x86_instruction_info *info)
 {
@@ -8404,8 +8403,6 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.check_intercept = vmx_check_intercept,
 	.handle_exit_irqoff = vmx_handle_exit_irqoff,
 
-	.request_immediate_exit = vmx_request_immediate_exit,
-
 	.sched_in = vmx_sched_in,
 
 	.cpu_dirty_log_size = PML_ENTITY_NUM,
@@ -8663,7 +8660,6 @@ static __init int hardware_setup(void)
 	if (!enable_preemption_timer) {
 		vmx_x86_ops.set_hv_timer = NULL;
 		vmx_x86_ops.cancel_hv_timer = NULL;
-		vmx_x86_ops.request_immediate_exit = __kvm_request_immediate_exit;
 	}
 
 	kvm_caps.supported_mce_cap |= MCG_LMCE_P;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index fb36bde2dd87..50d32d830890 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -331,8 +331,6 @@ struct vcpu_vmx {
 	unsigned int ple_window;
 	bool ple_window_dirty;
 
-	bool req_immediate_exit;
-
 	/* Support for PML */
 #define PML_ENTITY_NUM 512
 	struct page *pml_pg;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8a7a6cf4b4ec..44784ad244c6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10505,12 +10505,6 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 	static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu);
 }
 
-void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu)
-{
-	smp_send_reschedule(vcpu->cpu);
-}
-EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit);
-
 /*
  * Called within kvm->srcu read side.
  * Returns 1 to let vcpu_run() continue the guest execution loop without
@@ -10756,10 +10750,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto cancel_injection;
 	}
 
-	if (req_immediate_exit) {
+	if (req_immediate_exit)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
-		static_call(kvm_x86_request_immediate_exit)(vcpu);
-	}
 
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
-- 
2.51.0.rc1.163.g2494970778-goog