From nobody Tue Dec 16 14:38:16 2025
Date: Fri, 5 Dec 2025 16:17:20 -0800
Message-ID: <20251206001720.468579-45-seanjc@google.com>
In-Reply-To: <20251206001720.468579-1-seanjc@google.com>
References: <20251206001720.468579-1-seanjc@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.52.0.223.gf5cc29aaa4-goog
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH v6 44/44] KVM: VMX: Add mediated PMU support for CPUs without "save perf global ctrl"
From: Sean Christopherson
Reply-To: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Xin Li,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Sean Christopherson,
	Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
	Mingwei Zhang, Xudong Hao, Sandipan Das, Dapeng Mi, Xiong Zhang,
	Manali Shukla, Jim Mattson
Content-Type: text/plain; charset="utf-8"

Extend mediated PMU support to Intel CPUs that lack support for saving
PERF_GLOBAL_CTRL into the guest VMCS field on VM-Exit, e.g. Skylake and
its derivatives, as well as Icelake.  While supporting CPUs without
VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL isn't completely trivial, it's not
that complex either.  And not supporting such CPUs would mean not
supporting 7+ years of Intel CPUs released in the past 10 years.

On VM-Exit, immediately propagate the saved PERF_GLOBAL_CTRL to the
VMCS as well as to KVM's software cache so that KVM doesn't need to add
full EXREG tracking of PERF_GLOBAL_CTRL.  In practice, the vast
majority of VM-Exits won't trigger software writes to guest
PERF_GLOBAL_CTRL, so deferring the VMWRITE to the next VM-Enter would
only delay the inevitable without actually batching or avoiding
VMWRITEs.

Note!  Take care to refresh VM_EXIT_MSR_STORE_COUNT on nested VM-Exit,
as it's unfortunately possible for KVM to recalculate MSR intercepts
while L2 is active, e.g. if userspace loads nested state and _then_
sets PERF_CAPABILITIES.  Eating the VMWRITE on every nested VM-Exit is
unfortunate, but it's a pre-existing problem that can/should be solved
separately; modifying the number of auto-load entries while L2 is
active is equally uncommon, yet KVM already refreshes those counts on
every nested VM-Exit.
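For reference, the VM-Exit refresh conceptually reduces to the sketch
below (simplified; autostore_slot_value() is a hypothetical helper
standing in for the vmx_find_loadstore_msr_slot() lookup done by the
real code, vmx_refresh_guest_perf_global_control() in the diff):

	if (!msr_write_intercepted(vmx, MSR_CORE_PERF_GLOBAL_CTRL)) {
		if (cpu_has_save_perf_global_ctrl()) {
			/* The CPU saved the guest value into the VMCS field. */
			pmu->global_ctrl = vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);
		} else {
			/*
			 * The CPU stashed the guest value in the VM-Exit
			 * MSR-store area; propagate it to both the software
			 * cache and the VMCS.
			 */
			pmu->global_ctrl = autostore_slot_value(vmx, MSR_CORE_PERF_GLOBAL_CTRL);
			vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, pmu->global_ctrl);
		}
	}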
Signed-off-by: Sean Christopherson
Reviewed-by: Dapeng Mi
Tested-by: Dapeng Mi
---
 arch/x86/kvm/vmx/nested.c    |  6 ++++-
 arch/x86/kvm/vmx/pmu_intel.c |  7 -----
 arch/x86/kvm/vmx/vmx.c       | 52 ++++++++++++++++++++++++++++++++----
 3 files changed, 52 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 614b789ecf16..1ee1edc8419d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5142,7 +5142,11 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 
 	kvm_nested_vmexit_handle_ibrs(vcpu);
 
-	/* Update any VMCS fields that might have changed while L2 ran */
+	/*
+	 * Update any VMCS fields that might have changed while vmcs02 was the
+	 * active VMCS.  The tracking is per-vCPU, not per-VMCS.
+	 */
+	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, vmx->msr_autostore.nr);
 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
 	vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 55249fa4db95..27eb76e6b6a0 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -777,13 +777,6 @@ static bool intel_pmu_is_mediated_pmu_supported(struct x86_pmu_capability *host_
 	if (WARN_ON_ONCE(!cpu_has_load_perf_global_ctrl()))
 		return false;
 
-	/*
-	 * KVM doesn't yet support mediated PMU on CPUs without support for
-	 * saving PERF_GLOBAL_CTRL via a dedicated VMCS field.
-	 */
-	if (!cpu_has_save_perf_global_ctrl())
-		return false;
-
 	return true;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6a17cb90eaf4..ba1262c3e3ff 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1204,6 +1204,17 @@ static bool update_transition_efer(struct vcpu_vmx *vmx)
 	return true;
 }
 
+static void vmx_add_autostore_msr(struct vcpu_vmx *vmx, u32 msr)
+{
+	vmx_add_auto_msr(&vmx->msr_autostore, msr, 0, VM_EXIT_MSR_STORE_COUNT,
+			 vmx->vcpu.kvm);
+}
+
+static void vmx_remove_autostore_msr(struct vcpu_vmx *vmx, u32 msr)
+{
+	vmx_remove_auto_msr(&vmx->msr_autostore, msr, VM_EXIT_MSR_STORE_COUNT);
+}
+
 #ifdef CONFIG_X86_32
 /*
  * On 32-bit kernels, VM exits still load the FS and GS bases from the
@@ -4225,6 +4236,8 @@ void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu)
 
 static void vmx_recalc_pmu_msr_intercepts(struct kvm_vcpu *vcpu)
 {
+	u64 vm_exit_controls_bits = VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |
+				    VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL;
 	bool has_mediated_pmu = kvm_vcpu_has_mediated_pmu(vcpu);
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -4234,12 +4247,19 @@ static void vmx_recalc_pmu_msr_intercepts(struct kvm_vcpu *vcpu)
 	if (!enable_mediated_pmu)
 		return;
 
+	if (!cpu_has_save_perf_global_ctrl()) {
+		vm_exit_controls_bits &= ~VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL;
+
+		if (has_mediated_pmu)
+			vmx_add_autostore_msr(vmx, MSR_CORE_PERF_GLOBAL_CTRL);
+		else
+			vmx_remove_autostore_msr(vmx, MSR_CORE_PERF_GLOBAL_CTRL);
+	}
+
 	vm_entry_controls_changebit(vmx, VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL,
 				    has_mediated_pmu);
 
-	vm_exit_controls_changebit(vmx, VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |
-				   VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL,
-				   has_mediated_pmu);
+	vm_exit_controls_changebit(vmx, vm_exit_controls_bits, has_mediated_pmu);
 
 	for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
 		vmx_set_intercept_for_msr(vcpu, MSR_IA32_PERFCTR0 + i,
@@ -7346,6 +7366,29 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 			msrs[i].host);
 }
 
+static void vmx_refresh_guest_perf_global_control(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	if (msr_write_intercepted(vmx, MSR_CORE_PERF_GLOBAL_CTRL))
+		return;
+
+	if (!cpu_has_save_perf_global_ctrl()) {
+		int slot = vmx_find_loadstore_msr_slot(&vmx->msr_autostore,
+						       MSR_CORE_PERF_GLOBAL_CTRL);
+
+		if (WARN_ON_ONCE(slot < 0))
+			return;
+
+		pmu->global_ctrl = vmx->msr_autostore.val[slot].value;
+		vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, pmu->global_ctrl);
+		return;
+	}
+
+	pmu->global_ctrl = vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);
+}
+
 static void vmx_update_hv_timer(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -7631,8 +7674,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 
 	vmx->loaded_vmcs->launched = 1;
 
-	if (!msr_write_intercepted(vmx, MSR_CORE_PERF_GLOBAL_CTRL))
-		vcpu_to_pmu(vcpu)->global_ctrl = vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);
+	vmx_refresh_guest_perf_global_control(vcpu);
 
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
-- 
2.52.0.223.gf5cc29aaa4-goog