From nobody Tue Dec 16 14:43:48 2025
From: Sean Christopherson
Date: Fri, 5 Dec 2025 16:17:14 -0800
Subject: [PATCH v6 38/44] KVM: VMX: Drop unused @entry_only param from add_atomic_switch_msr()
Message-ID: <20251206001720.468579-39-seanjc@google.com>
In-Reply-To: <20251206001720.468579-1-seanjc@google.com>
References: <20251206001720.468579-1-seanjc@google.com>
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Xin Li,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson,
	Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Mingwei Zhang, Xudong Hao, Sandipan Das, Dapeng Mi, Xiong Zhang,
	Manali Shukla, Jim Mattson
X-Mailer: git-send-email 2.52.0.223.gf5cc29aaa4-goog
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Drop the "on VM-Enter only" parameter from add_atomic_switch_msr() as it
is no longer used, and for all intents and purposes was never used.  The
functionality was added, under embargo, by commit 989e3992d2ec
("x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only
MSRs"), and then ripped out by commit 2f055947ae5e ("x86/kvm: Drop L1TF
MSR list approach") just a few commits later.

  2f055947ae5e x86/kvm: Drop L1TF MSR list approach
  72c6d2db64fa x86/litf: Introduce vmx status variable
  215af5499d9e cpu/hotplug: Online siblings when SMT control is turned on
  390d975e0c4e x86/KVM/VMX: Use MSR save list for IA32_FLUSH_CMD if required
  989e3992d2ec x86/KVM/VMX: Extend add_atomic_switch_msr() to allow VMENTER only MSRs

Furthermore, it's extremely unlikely KVM will ever _need_ to load an MSR
value via the auto-load lists only on VM-Enter.  MSR writes via the lists
aren't optimized in any way, and so the only reason to use the lists
instead of a WRMSR is for cases where the MSR _must_ be loaded atomically
with respect to VM-Enter (and/or VM-Exit).  While one could argue that
command MSRs, e.g. IA32_FLUSH_CMD, "need" to be done exactly at VM-Enter,
in practice doing such flushes within a few instructions of VM-Enter is
more than sufficient.

Note, the shortlog and changelog for commit 390d975e0c4e ("x86/KVM/VMX:
Use MSR save list for IA32_FLUSH_CMD if required") are misleading and
wrong.  That commit added MSR_IA32_FLUSH_CMD to the VM-Enter _load_ list,
not the VM-Enter save list (which doesn't exist; only VM-Exit has a
store/save list).
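As background for readers unfamiliar with the mechanism (illustration
only, not part of the patch): the "atomic switch" lists are the
architectural VM-Enter MSR-load and VM-Exit MSR-load lists, arrays of
{index, value} pairs that the CPU itself walks during the transition.
The sketch below is a simplified userspace model of the pairing this
patch makes unconditional; the stand-in types and add_switch_msr()
helper are hypothetical, with only MAX_NR_LOADSTORE_MSRS mirroring KVM.

  #include <stdint.h>

  #define MAX_NR_LOADSTORE_MSRS	8	/* list capacity, mirrors KVM's */

  struct msr_entry {
  	uint32_t index;		/* MSR number, e.g. MSR_EFER */
  	uint64_t value;		/* value the CPU loads when walking the list */
  };

  struct msr_list {
  	int nr;
  	struct msr_entry val[MAX_NR_LOADSTORE_MSRS];
  };

  /*
   * Simplified model: with @entry_only gone, adding an MSR always pairs
   * a VM-Enter load entry (guest value) with a VM-Exit load entry (host
   * value), i.e. the guest value is never loaded without also arranging
   * for the host value to be restored on exit.
   */
  static int add_switch_msr(struct msr_list *guest, struct msr_list *host,
  			  uint32_t msr, uint64_t guest_val, uint64_t host_val)
  {
  	if (guest->nr == MAX_NR_LOADSTORE_MSRS ||
  	    host->nr == MAX_NR_LOADSTORE_MSRS)
  		return -1;	/* lists full; KVM WARNs in this case */

  	guest->val[guest->nr++] = (struct msr_entry){ msr, guest_val };
  	host->val[host->nr++]   = (struct msr_entry){ msr, host_val };
  	return 0;
  }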
Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
---
 arch/x86/kvm/vmx/vmx.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a51f66d1b201..38491962b2c1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1094,7 +1094,7 @@ static __always_inline void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
 }
 
 static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
-				  u64 guest_val, u64 host_val, bool entry_only)
+				  u64 guest_val, u64 host_val)
 {
 	int i, j = 0;
 	struct msr_autoload *m = &vmx->msr_autoload;
@@ -1132,8 +1132,7 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
 	}
 
 	i = vmx_find_loadstore_msr_slot(&m->guest, msr);
-	if (!entry_only)
-		j = vmx_find_loadstore_msr_slot(&m->host, msr);
+	j = vmx_find_loadstore_msr_slot(&m->host, msr);
 
 	if ((i < 0 && m->guest.nr == MAX_NR_LOADSTORE_MSRS) ||
 	    (j < 0 && m->host.nr == MAX_NR_LOADSTORE_MSRS)) {
@@ -1148,9 +1147,6 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
 	m->guest.val[i].index = msr;
 	m->guest.val[i].value = guest_val;
 
-	if (entry_only)
-		return;
-
 	if (j < 0) {
 		j = m->host.nr++;
 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
@@ -1190,8 +1186,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx)
 	if (!(guest_efer & EFER_LMA))
 		guest_efer &= ~EFER_LME;
 	if (guest_efer != kvm_host.efer)
-		add_atomic_switch_msr(vmx, MSR_EFER,
-				      guest_efer, kvm_host.efer, false);
+		add_atomic_switch_msr(vmx, MSR_EFER, guest_efer, kvm_host.efer);
 	else
 		clear_atomic_switch_msr(vmx, MSR_EFER);
 	return false;
@@ -7350,7 +7345,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 			clear_atomic_switch_msr(vmx, msrs[i].msr);
 		else
 			add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
-					      msrs[i].host, false);
+					      msrs[i].host);
 }
 
 static void vmx_update_hv_timer(struct kvm_vcpu *vcpu, bool force_immediate_exit)
-- 
2.52.0.223.gf5cc29aaa4-goog