From: Sean Christopherson
Date: Tue, 10 Jun 2025 16:20:04 -0700
Subject: [PATCH v6 2/8] KVM: x86: Convert vcpu_run()'s immediate exit param into a generic bitmap
Message-ID: <20250610232010.162191-3-seanjc@google.com>
In-Reply-To: <20250610232010.162191-1-seanjc@google.com>
References: <20250610232010.162191-1-seanjc@google.com>
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Adrian Hunter, Maxim Levitsky
X-Mailer: git-send-email 2.50.0.rc0.642.g800a2b2222-goog

Convert kvm_x86_ops.vcpu_run()'s "force_immediate_exit" boolean parameter
into a generic bitmap so that similar "take action" information can be
passed to vendor code without creating a pile of boolean parameters.
This will allow dropping kvm_x86_ops.set_dr6() in favor of a new flag,
and will also allow for adding similar functionality for re-loading
debugctl in the active VMCS.

Opportunistically massage the TDX WARN and comment to prepare for adding
more run_flags, all of which are expected to be mutually exclusive with
TDX, i.e. should be WARNed on.

No functional change intended.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  6 +++++-
 arch/x86/kvm/svm/svm.c          |  4 ++--
 arch/x86/kvm/vmx/main.c         |  6 +++---
 arch/x86/kvm/vmx/tdx.c          | 18 +++++++++---------
 arch/x86/kvm/vmx/vmx.c          |  3 ++-
 arch/x86/kvm/vmx/x86_ops.h      |  4 ++--
 arch/x86/kvm/x86.c              | 11 ++++++++---
 7 files changed, 31 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 330cdcbed1a6..3b5871ebd7e4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1673,6 +1673,10 @@ static inline u16 kvm_lapic_irq_dest_mode(bool dest_mode_logical)
 	return dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
 }
 
+enum kvm_x86_run_flags {
+	KVM_RUN_FORCE_IMMEDIATE_EXIT	= BIT(0),
+};
+
 struct kvm_x86_ops {
 	const char *name;
 
@@ -1754,7 +1758,7 @@ struct kvm_x86_ops {
 
 	int (*vcpu_pre_run)(struct kvm_vcpu *vcpu);
 	enum exit_fastpath_completion (*vcpu_run)(struct kvm_vcpu *vcpu,
-						  bool force_immediate_exit);
+						  u64 run_flags);
 	int (*handle_exit)(struct kvm_vcpu *vcpu,
 			   enum exit_fastpath_completion exit_fastpath);
 	int (*skip_emulated_instruction)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0ad1a6d4fb6d..00d78090de3d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4402,9 +4402,9 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 	guest_state_exit_irqoff();
 }
 
-static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
-					  bool force_immediate_exit)
+static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 {
+	bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
 	struct vcpu_svm *svm = to_svm(vcpu);
 	bool spec_ctrl_intercepted = msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL);
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d1e02e567b57..fef3e3803707 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -175,12 +175,12 @@ static int vt_vcpu_pre_run(struct kvm_vcpu *vcpu)
 	return vmx_vcpu_pre_run(vcpu);
 }
 
-static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 {
 	if (is_td_vcpu(vcpu))
-		return tdx_vcpu_run(vcpu, force_immediate_exit);
+		return tdx_vcpu_run(vcpu, run_flags);
 
-	return vmx_vcpu_run(vcpu, force_immediate_exit);
+	return vmx_vcpu_run(vcpu, run_flags);
 }
 
 static int vt_handle_exit(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 3cfe89aad68e..9a758d8b38ea 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1018,20 +1018,20 @@ static void tdx_load_host_xsave_state(struct kvm_vcpu *vcpu)
 	DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI | \
 	DEBUGCTLMSR_FREEZE_IN_SMM)
 
-fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 {
 	struct vcpu_tdx *tdx = to_tdx(vcpu);
 	struct vcpu_vt *vt = to_vt(vcpu);
 
 	/*
-	 * force_immediate_exit requires vCPU entering for events injection with
-	 * an immediately exit followed. But The TDX module doesn't guarantee
-	 * entry, it's already possible for KVM to _think_ it completely entry
-	 * to the guest without actually having done so.
-	 * Since KVM never needs to force an immediate exit for TDX, and can't
-	 * do direct injection, just warn on force_immediate_exit.
+	 * WARN if KVM wants to force an immediate exit, as the TDX module does
+	 * not guarantee entry into the guest, i.e. it's possible for KVM to
+	 * _think_ it completed entry to the guest and forced an immediate exit
+	 * without actually having done so.  Luckily, KVM never needs to force
+	 * an immediate exit for TDX (KVM can't do direct event injection), so
+	 * just WARN and continue on.
 	 */
-	WARN_ON_ONCE(force_immediate_exit);
+	WARN_ON_ONCE(run_flags);
 
 	/*
 	 * Wait until retry of SEPT-zap-related SEAMCALL completes before
@@ -1041,7 +1041,7 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	if (unlikely(READ_ONCE(to_kvm_tdx(vcpu->kvm)->wait_for_sept_zap)))
 		return EXIT_FASTPATH_EXIT_HANDLED;
 
-	trace_kvm_entry(vcpu, force_immediate_exit);
+	trace_kvm_entry(vcpu, run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT);
 
 	if (pi_test_on(&vt->pi_desc)) {
 		apic->send_IPI_self(POSTED_INTR_VECTOR);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9ff00ae9f05a..e66f5ffa8716 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7317,8 +7317,9 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	guest_state_exit_irqoff();
 }
 
-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 {
+	bool force_immediate_exit = run_flags & KVM_RUN_FORCE_IMMEDIATE_EXIT;
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index b4596f651232..0b4f5c5558d0 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -21,7 +21,7 @@ void vmx_vm_destroy(struct kvm *kvm);
 int vmx_vcpu_precreate(struct kvm *kvm);
 int vmx_vcpu_create(struct kvm_vcpu *vcpu);
 int vmx_vcpu_pre_run(struct kvm_vcpu *vcpu);
-fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
+fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
 void vmx_vcpu_free(struct kvm_vcpu *vcpu);
 void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
@@ -133,7 +133,7 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 int tdx_vcpu_pre_run(struct kvm_vcpu *vcpu);
-fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit);
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags);
 void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void tdx_vcpu_put(struct kvm_vcpu *vcpu);
 bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index dd34a2ec854c..d4a51b263d6b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10779,6 +10779,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		dm_request_for_irq_injection(vcpu) &&
 		kvm_cpu_accept_dm_intr(vcpu);
 	fastpath_t exit_fastpath;
+	u64 run_flags;
 
 	bool req_immediate_exit = false;
 
@@ -11023,8 +11024,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto cancel_injection;
 	}
 
-	if (req_immediate_exit)
+	run_flags = 0;
+	if (req_immediate_exit) {
+		run_flags |= KVM_RUN_FORCE_IMMEDIATE_EXIT;
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	}
 
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
@@ -11061,8 +11065,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		WARN_ON_ONCE((kvm_vcpu_apicv_activated(vcpu) != kvm_vcpu_apicv_active(vcpu)) &&
 			     (kvm_get_apic_mode(vcpu) != LAPIC_MODE_DISABLED));
 
-		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu,
-						       req_immediate_exit);
+		exit_fastpath = kvm_x86_call(vcpu_run)(vcpu, run_flags);
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
 
@@ -11074,6 +11077,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			break;
 		}
 
+		run_flags = 0;
+
 		/* Note, VM-Exits that go down the "slow" path are accounted below. */
 		++vcpu->stat.exits;
 	}
-- 
2.50.0.rc0.642.g800a2b2222-goog