From nobody Sun Dec 14 11:13:59 2025
Date: Wed, 29 Oct 2025 14:26:26 -0700
From: Pawan Gupta
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
    Ingo Molnar, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
    Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Tao Zhang,
    Jim Mattson, Brendan Jackman
Subject: [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
Message-ID: <20251029-verw-vm-v1-1-babf9b961519@linux.intel.com>
References: <20251029-verw-vm-v1-0-babf9b961519@linux.intel.com>
In-Reply-To: <20251029-verw-vm-v1-0-babf9b961519@linux.intel.com>

The TSA mitigation commit d8010d4ba43e ("x86/bugs: Add a Transient
Scheduler Attacks mitigation") introduced VM_CLEAR_CPU_BUFFERS for guests
on AMD CPUs. On Intel, CLEAR_CPU_BUFFERS is currently used for guests,
which has a much broader scope (it also covers kernel->user transitions).

Make the mitigations on Intel consistent with TSA. This will help handle
guest-only mitigations better in the future.

Signed-off-by: Pawan Gupta
---
 arch/x86/kernel/cpu/bugs.c | 9 +++++++--
 arch/x86/kvm/vmx/vmenter.S | 3 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d7fa03bf51b4517c12cc68e7c441f7589a4983d1..6d00a9ea7b4f28da291114a7a096b26cc129b57e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -194,7 +194,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
 /*
  * Controls CPU Fill buffer clear before VMenter. This is a subset of
- * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
+ * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
  * mitigation is required.
  */
 DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
@@ -536,6 +536,7 @@ static void __init mds_apply_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL ||
 	    mds_mitigation == MDS_MITIGATION_VMWERV) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || smt_mitigations == SMT_MITIGATIONS_ON))
 			cpu_smt_disable(false);
@@ -647,6 +648,7 @@ static void __init taa_apply_mitigation(void)
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
 	 */
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
 	if (taa_nosmt || smt_mitigations == SMT_MITIGATIONS_ON)
 		cpu_smt_disable(false);
@@ -752,6 +754,7 @@ static void __init mmio_apply_mitigation(void)
 	} else {
 		static_branch_enable(&cpu_buf_vm_clear);
 	}
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
 	/*
 	 * If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can
@@ -839,8 +842,10 @@ static void __init rfds_update_mitigation(void)
 
 static void __init rfds_apply_mitigation(void)
 {
-	if (rfds_mitigation == RFDS_MITIGATION_VERW)
+	if (rfds_mitigation == RFDS_MITIGATION_VERW) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
+	}
 }
 
 static __init int rfds_parse_cmdline(char *str)
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index bc255d709d8a16ae22b5bc401965d209a89a8692..0dd23beae207795484150698d1674dc4044cc520 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,7 +161,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
 	/* Clobbers EFLAGS.ZF */
-	CLEAR_CPU_BUFFERS
+	VM_CLEAR_CPU_BUFFERS
+.Lskip_clear_cpu_buffers:
 
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch
-- 
2.34.1

From nobody Sun Dec 14 11:13:59 2025
Date: Wed, 29 Oct 2025 14:26:43 -0700
From: Pawan Gupta
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
    Ingo Molnar, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
    Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Tao Zhang,
    Jim Mattson, Brendan Jackman
Subject: [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
Message-ID: <20251029-verw-vm-v1-2-babf9b961519@linux.intel.com>
References: <20251029-verw-vm-v1-0-babf9b961519@linux.intel.com>
In-Reply-To: <20251029-verw-vm-v1-0-babf9b961519@linux.intel.com>

The cpu_buf_vm_clear static key is used only by the MMIO Stale Data
mitigation. Rename it to cpu_buf_vm_clear_mmio_only to avoid confusing it
with X86_FEATURE_CLEAR_CPU_BUF_VM.

Signed-off-by: Pawan Gupta
Reviewed-by: Brendan Jackman
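
To make the distinction behind the new name concrete, here is a sketch
built from the call sites this series touches (illustration only): the
static key is a runtime, per-vCPU check, while the feature bit selects
boot-time patching of the VERW itself.

	/* Runtime: consulted when computing the VM-entry run flags. */
	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;

	/*
	 * Boot time: bugs.c force-sets the feature bit, which patches the
	 * VERW into the VM-entry path via VM_CLEAR_CPU_BUFFERS.
	 */
	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);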
---
 arch/x86/include/asm/nospec-branch.h | 2 +-
 arch/x86/kernel/cpu/bugs.c           | 8 ++++----
 arch/x86/kvm/mmu/spte.c              | 2 +-
 arch/x86/kvm/vmx/vmx.c               | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 08ed5a2e46a5fd790bcb1b73feb6469518809c06..cb46f5d188de47834466474ec8030bb2a2e4fdf3 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -580,7 +580,7 @@ DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
-DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
+DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear_mmio_only);
 
 extern u16 x86_verw_sel;
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6d00a9ea7b4f28da291114a7a096b26cc129b57e..e7c31c23fbeeb1aba4f538934c1e8a997adff522 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -197,8 +197,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
  * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
  * mitigation is required.
  */
-DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
+DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear_mmio_only);
+EXPORT_SYMBOL_GPL(cpu_buf_vm_clear_mmio_only);
 
 #undef pr_fmt
 #define pr_fmt(fmt) "mitigations: " fmt
@@ -750,9 +750,9 @@ static void __init mmio_apply_mitigation(void)
 	 */
 	if (verw_clear_cpu_buf_mitigation_selected) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
-		static_branch_disable(&cpu_buf_vm_clear);
+		static_branch_disable(&cpu_buf_vm_clear_mmio_only);
 	} else {
-		static_branch_enable(&cpu_buf_vm_clear);
+		static_branch_enable(&cpu_buf_vm_clear_mmio_only);
 	}
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 37647afde7d3acfa1301a771ac44792eab879495..380d6675027499715e49e5b35ef76e17451fd77b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -292,7 +292,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
 	    !kvm_vcpu_can_access_host_mmio(vcpu) &&
 	    kvm_is_mmio_pfn(pfn, &is_host_mmio))
 		kvm_track_host_mmio_mapping(vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f87c216d976d7d344c924aa4cc18fe1bf8f9b731..451be757b3d1b2fec6b2b79157f26dd43bc368b8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -903,7 +903,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
 	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
 		flags |= VMX_RUN_SAVE_SPEC_CTRL;
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
 	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
 		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
 
-- 
2.34.1

From nobody Sun Dec 14 11:13:59 2025
Date: Wed, 29 Oct 2025 14:26:59 -0700
From: Pawan Gupta
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
    Ingo Molnar, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
    Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Tao Zhang,
    Jim Mattson, Brendan Jackman
Subject: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
Message-ID: <20251029-verw-vm-v1-3-babf9b961519@linux.intel.com>
References: <20251029-verw-vm-v1-0-babf9b961519@linux.intel.com>
In-Reply-To: <20251029-verw-vm-v1-0-babf9b961519@linux.intel.com>

When a system is affected only by MMIO Stale Data, the VERW mitigation is
currently handled differently than for other data sampling attacks like
MDS/TAA/RFDS, which execute VERW in asm. This is because for MMIO Stale
Data, VERW is needed only when the guest can access host MMIO, and that
was tricky to check in asm.

The refactoring done by commit 83ebe7157483 ("KVM: VMX: Apply MMIO Stale
Data mitigation if KVM maps MMIO into the guest") now makes it easier to
execute VERW conditionally in asm, based on
VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.

Unify the MMIO Stale Data mitigation with the other VERW-based mitigations
and keep a single VERW callsite in __vmx_vcpu_run(). Remove the now
unnecessary call to x86_clear_cpu_buffers() in vmx_vcpu_enter_exit().

This also untangles the L1D Flush and MMIO Stale Data mitigations.
Earlier, an L1D Flush would skip the VERW for MMIO Stale Data; now the two
mitigations are independent of each other. This has little practical
impact, since no CPU is affected by L1TF while being affected *only* by
MMIO Stale Data (i.e. not by MDS/TAA/RFDS), but it makes the code cleaner
and easier to maintain.
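
As an illustration, a condensed view of the resulting __vmx_vcpu_run() flow
(assembled from the hunks below; not a complete listing). The TEST must be
done early, while %ebx still holds the run flags, and its ZF result survives
until VM-entry because the intervening MOVs and the BT do not modify ZF:

	/* Early: %ebx is live and EFLAGS is still free to clobber. */
	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx	/* sets ZF */

	/* Check if vmlaunch or vmresume is needed */
	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx	/* sets CF, leaves ZF alone */

	/* ... guest GPRs are loaded; plain MOVs do not touch EFLAGS ... */

	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
	jz .Lskip_clear_cpu_buffers
	/* Clobbers EFLAGS.ZF; CF for the jnc below is preserved */
	VM_CLEAR_CPU_BUFFERS
.Lskip_clear_cpu_buffers:

	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
	jnc .Lvmlaunch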
Signed-off-by: Pawan Gupta
---
 arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
 arch/x86/kvm/vmx/vmenter.S   |  5 +++++
 arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
 3 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,12 +2,12 @@
 #ifndef __KVM_X86_VMX_RUN_FLAGS_H
 #define __KVM_X86_VMX_RUN_FLAGS_H
 
-#define VMX_RUN_VMRESUME_SHIFT				0
-#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
-#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
+#define VMX_RUN_VMRESUME_SHIFT			0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
+#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
 
-#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
-#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
-#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
+#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
 
 #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load @regs to RAX. */
 	mov (%_ASM_SP), %_ASM_AX
 
+	/* jz .Lskip_clear_cpu_buffers below relies on this */
+	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
+
 	/* Check if vmlaunch or vmresume is needed */
 	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
 
@@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
+	jz .Lskip_clear_cpu_buffers
 	/* Clobbers EFLAGS.ZF */
 	VM_CLEAR_CPU_BUFFERS
 .Lskip_clear_cpu_buffers:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
 	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
 		flags |= VMX_RUN_SAVE_SPEC_CTRL;
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
-	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
-		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
+	/*
+	 * When affected by MMIO Stale Data only (and not other data sampling
+	 * attacks) only clear for MMIO-capable guests.
+	 */
+	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
+		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
+			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
+	} else {
+		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
+	}
 
 	return flags;
 }
@@ -7320,21 +7327,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/*
-	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
-	 * mitigation for MDS is done late in VMentry and is still
-	 * executed in spite of L1D Flush. This is because an extra VERW
-	 * should not matter much after the big hammer L1D Flush.
-	 *
-	 * cpu_buf_vm_clear is used when system is not vulnerable to MDS/TAA,
-	 * and is affected by MMIO Stale Data. In such cases mitigation in only
-	 * needed against an MMIO capable guest.
-	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&cpu_buf_vm_clear) &&
-		 (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
-		x86_clear_cpu_buffers();
 
 	vmx_disable_fb_clear(vmx);
 
-- 
2.34.1