From nobody Sun Feb 8 00:50:26 2026
Received: from mail-pg1-f201.google.com (mail-pg1-f201.google.com [209.85.215.201])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS;
	Fri, 31 Oct 2025 00:30:48 +0000 (UTC)
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:33 -0700
In-Reply-To:
<20251031003040.3491385-1-seanjc@google.com>
Mime-Version: 1.0
References: <20251031003040.3491385-1-seanjc@google.com>
X-Mailer: git-send-email 2.51.1.930.gacf6e81ea2-goog
Message-ID: <20251031003040.3491385-2-seanjc@google.com>
Subject: [PATCH v4 1/8] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov,
    Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta,
    Brendan Jackman
Content-Type: text/plain; charset="utf-8"

From: Pawan Gupta

The TSA mitigation, commit d8010d4ba43e ("x86/bugs: Add a Transient
Scheduler Attacks mitigation"), introduced VM_CLEAR_CPU_BUFFERS for guests
on AMD CPUs.  On Intel, CLEAR_CPU_BUFFERS is currently used for guests,
which has a much broader scope (it also covers kernel->user transitions).
Make the mitigations on Intel consistent with TSA; this will make it
easier to handle guest-only mitigations in the future.

Signed-off-by: Pawan Gupta
[sean: make CLEAR_CPU_BUF_VM mutually exclusive with the MMIO mitigation]
Signed-off-by: Sean Christopherson
Acked-by: Borislav Petkov (AMD)
---
 arch/x86/kernel/cpu/bugs.c | 9 +++++++--
 arch/x86/kvm/vmx/vmenter.S | 2 +-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6a526ae1fe99..723666a1357e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -194,7 +194,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
 /*
  * Controls CPU Fill buffer clear before VMenter. This is a subset of
- * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
+ * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
  * mitigation is required.
  */
 DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
@@ -536,6 +536,7 @@ static void __init mds_apply_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL ||
 	    mds_mitigation == MDS_MITIGATION_VMWERV) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || smt_mitigations == SMT_MITIGATIONS_ON))
 			cpu_smt_disable(false);
@@ -647,6 +648,7 @@ static void __init taa_apply_mitigation(void)
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
 	 */
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
 	if (taa_nosmt || smt_mitigations == SMT_MITIGATIONS_ON)
 		cpu_smt_disable(false);
@@ -748,6 +750,7 @@ static void __init mmio_apply_mitigation(void)
 	 */
 	if (verw_clear_cpu_buf_mitigation_selected) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 		static_branch_disable(&cpu_buf_vm_clear);
 	} else {
 		static_branch_enable(&cpu_buf_vm_clear);
@@ -839,8 +842,10 @@ static void __init rfds_update_mitigation(void)
 
 static void __init rfds_apply_mitigation(void)
 {
-	if (rfds_mitigation == RFDS_MITIGATION_VERW)
+	if (rfds_mitigation == RFDS_MITIGATION_VERW) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
+	}
 }
 
 static __init int rfds_parse_cmdline(char *str)
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index bc255d709d8a..1f99a98a16a2 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,7 +161,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
 	/* Clobbers EFLAGS.ZF */
-	CLEAR_CPU_BUFFERS
+	VM_CLEAR_CPU_BUFFERS
 
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above.
 	 */
 	jnc .Lvmlaunch
-- 
2.51.1.930.gacf6e81ea2-goog

From nobody Sun Feb 8 00:50:26 2026
Reply-To: Sean Christopherson
Date: Thu, 30 Oct
2025 17:30:34 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
Mime-Version: 1.0
References: <20251031003040.3491385-1-seanjc@google.com>
X-Mailer: git-send-email 2.51.1.930.gacf6e81ea2-goog
Message-ID: <20251031003040.3491385-3-seanjc@google.com>
Subject: [PATCH v4 2/8] x86/bugs: Decouple ALTERNATIVE usage from VERW macro definition
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov,
    Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta,
    Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Decouple the use of ALTERNATIVE from the encoding of VERW to clear CPU
buffers so that KVM can use ALTERNATIVE_2 to handle "always clear buffers"
and "clear if guest can access host MMIO" in a single statement.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Brendan Jackman
Reviewed-by: Pawan Gupta
---
 arch/x86/include/asm/nospec-branch.h | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 08ed5a2e46a5..923ae21cbef1 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -308,24 +308,23 @@
  * CFLAGS.ZF.
  * Note: Only the memory operand variant of VERW clears the CPU buffers.
  */
-.macro __CLEAR_CPU_BUFFERS feature
 #ifdef CONFIG_X86_64
-	ALTERNATIVE "", "verw x86_verw_sel(%rip)", \feature
+#define CLEAR_CPU_BUFFERS_SEQ	verw x86_verw_sel(%rip)
 #else
-	/*
-	 * In 32bit mode, the memory operand must be a %cs reference. The data
-	 * segments may not be usable (vm86 mode), and the stack segment may not
-	 * be flat (ESPFIX32).
- */ - ALTERNATIVE "", "verw %cs:x86_verw_sel", \feature +/* + * In 32bit mode, the memory operand must be a %cs reference. The data seg= ments + * may not be usable (vm86 mode), and the stack segment may not be flat (E= SPFIX32). + */ +#define CLEAR_CPU_BUFFERS_SEQ verw %cs:x86_verw_sel #endif -.endm + +#define __CLEAR_CPU_BUFFERS __stringify(CLEAR_CPU_BUFFERS_SEQ) =20 #define CLEAR_CPU_BUFFERS \ - __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF + ALTERNATIVE "", __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF =20 #define VM_CLEAR_CPU_BUFFERS \ - __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM + ALTERNATIVE "", __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM =20 #ifdef CONFIG_X86_64 .macro CLEAR_BRANCH_HISTORY --=20 2.51.1.930.gacf6e81ea2-goog From nobody Sun Feb 8 00:50:26 2026 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D03871EEA31 for ; Fri, 31 Oct 2025 00:30:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761870653; cv=none; b=sUxMDM0ClkEan4Veq0aqcaqNqFaOixRE5so555yeMbt63+6NjUfOhPJhqIXuzHMOPlw8nz21HPsbzZffmY9gA+wC/kKwKgz5GHfJV9cVsDksbpkEi+mUuCVUOOPWGeF5QfnmMIRjtFm79iQZAnp+DE8ki2DT75ekopQAfjvt+sY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761870653; c=relaxed/simple; bh=204kvc9Sh64+E5don3tZqDdX3H79XbbcG9Pj9ZXTPaA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=EV0Jr33hiUoIBH2QdhaXiRSnUIBp8lTPbHlskPEOVxD6yhyWtrzf8774dgfi2KRu7Vc1IMj4HQj042dvisVIeDa4pNdzAcwadqm4aRJxAxiM9FPTGqmDNlBQtF6KeH7YXp+Zh0nwdi4G3nLlFBXh/3uPSQ38ns+VJLlze1Ia7Sw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass 
smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=mnhPmY0V; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="mnhPmY0V" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-3407734d9a2so709272a91.1 for ; Thu, 30 Oct 2025 17:30:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1761870651; x=1762475451; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=+Ni/Sag5alXumMH5JEH8v/XUc2VFZKFCdXeaB2UWRR0=; b=mnhPmY0VTopZrw7FdgaWBjBzGu2v5KW1YuRuu+NzZoejr/vGWivtn5Zi/2D1EDE+JO toC5GkTtWnMzxEEi2hRXfA/U5XpuUue/8xatu7kpR5tw5QjjGB9snOzBjqsJsf1ZP++p Gx8jBRez92WAJYMHCWhMKA70qJBZ2FkayILKDC34Wyb1DMs3DvkSBIuGukEXdgsnhXFm y6SPCPLQSb71V1gOWAABoe4DKqfoTnNUx8UQRCCe99C5TEgcZMSGNBA+mg+pBMaTQ0NJ 6w2utDLVuNZNfk3QBsk54P1U8a8pfSOe21hHNw0JvSd+jAmhehn0oNETt3HQaNYCDHLR NJFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1761870651; x=1762475451; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=+Ni/Sag5alXumMH5JEH8v/XUc2VFZKFCdXeaB2UWRR0=; b=R75LWkejI8IpW+DABU0w5izmNML7sZB0jmy8idpaABFgQYhrJWuW2KxytVsTCgKAo7 8pFBbeFt/q7WtUMYngTzYM2bQTNQqwL69DiDqvlnIoXmH6WEUfYex6Cvi931dCaogB3B dkhEQa50hkkIc7XDI8h8orfQQ2rnRhSal1kCZ6IrPLOp9sirYMsCvlK8way23OLeW97O R8vE3WUFvXaSUsKSEDthZPYY5WX1pcd8731l82mohHkVgHmkK6wpk3FoL93NpDucqbK+ Ox1L5Ok2YSAYS4GKq9ylfSnxokWhtGp030qI/HnMcdoMBFQs0Lq2upiEdHqObPN7HpxO TXow== X-Forwarded-Encrypted: i=1; 
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:35 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
Mime-Version: 1.0
References: <20251031003040.3491385-1-seanjc@google.com>
X-Mailer: git-send-email 2.51.1.930.gacf6e81ea2-goog
Message-ID: <20251031003040.3491385-4-seanjc@google.com>
Subject: [PATCH v4 3/8] x86/bugs: Use an X86_FEATURE_xxx flag for the MMIO Stale Data mitigation
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov,
    Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta,
    Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Convert the MMIO Stale Data mitigation flag from a static branch into an
X86_FEATURE_xxx so that it can be used via ALTERNATIVE_2 in KVM.

No functional change intended.
Signed-off-by: Sean Christopherson
Reviewed-by: Brendan Jackman
Reviewed-by: Pawan Gupta
---
 arch/x86/include/asm/cpufeatures.h   |  1 +
 arch/x86/include/asm/nospec-branch.h |  2 --
 arch/x86/kernel/cpu/bugs.c           | 11 +----------
 arch/x86/kvm/mmu/spte.c              |  2 +-
 arch/x86/kvm/vmx/vmx.c               |  4 ++--
 5 files changed, 5 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 7129eb44adad..d1d7b5ec6425 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -501,6 +501,7 @@
 #define X86_FEATURE_ABMC		(21*32+15) /* Assignable Bandwidth Monitoring Counters */
 #define X86_FEATURE_MSR_IMM		(21*32+16) /* MSR immediate form instructions */
 #define X86_FEATURE_X2AVIC_EXT		(21*32+17) /* AMD SVM x2AVIC support for 4k vCPUs */
+#define X86_FEATURE_CLEAR_CPU_BUF_MMIO	(21*32+18) /* Clear CPU buffers using VERW before VMRUN, iff the vCPU can access host MMIO */
 
 /*
  * BUG word(s)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 923ae21cbef1..b29df45b1edb 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -579,8 +579,6 @@ DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
-DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-
 extern u16 x86_verw_sel;
 
 #include
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 723666a1357e..9acf6343b0ac 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -192,14 +192,6 @@ EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
  */
 DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
-/*
- * Controls CPU Fill buffer clear before VMenter. This is a subset of
- * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
- * mitigation is required.
- */
-DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
-
 #undef pr_fmt
 #define pr_fmt(fmt)	"mitigations: " fmt
 
@@ -751,9 +743,8 @@ static void __init mmio_apply_mitigation(void)
 	if (verw_clear_cpu_buf_mitigation_selected) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
-		static_branch_disable(&cpu_buf_vm_clear);
 	} else {
-		static_branch_enable(&cpu_buf_vm_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_MMIO);
 	}
 
 	/*
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 37647afde7d3..c43dd153d868 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -292,7 +292,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_MMIO) &&
 	    !kvm_vcpu_can_access_host_mmio(vcpu) &&
 	    kvm_is_mmio_pfn(pfn, &is_host_mmio))
 		kvm_track_host_mmio_mapping(vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1021d3b65ea0..68cde725d1c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -903,7 +903,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
 	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
 		flags |= VMX_RUN_SAVE_SPEC_CTRL;
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_MMIO) &&
 	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
 		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
 
@@ -7351,7 +7351,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_MMIO) &&
 		 (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
 		x86_clear_cpu_buffers();
 
-- 
2.51.1.930.gacf6e81ea2-goog

From nobody Sun Feb 8
00:50:26 2026
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:36 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
Mime-Version: 1.0
References: <20251031003040.3491385-1-seanjc@google.com>
X-Mailer: git-send-email 2.51.1.930.gacf6e81ea2-goog
Message-ID: <20251031003040.3491385-5-seanjc@google.com>
Subject: [PATCH v4 4/8] KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov,
    Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta,
    Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Rework the handling of the MMIO Stale Data mitigation to clear CPU buffers
immediately prior to VM-Enter, i.e. in the same location that KVM emits a
VERW for unconditional (at runtime) clearing.  Co-locating the code and
using a single ALTERNATIVES_2 makes it more obvious how VMX mitigates the
various vulnerabilities.

Deliberately order the alternatives as:

  0. Do nothing
  1. Clear if the vCPU can access MMIO
  2. Clear always

since the last alternative wins in ALTERNATIVES_2(), i.e. so that KVM will
honor the strictest mitigation (always clear CPU buffers) if multiple
mitigations are selected.  E.g. even if the kernel chooses to mitigate MMIO
Stale Data via X86_FEATURE_CLEAR_CPU_BUF_MMIO, some other mitigation may
enable X86_FEATURE_CLEAR_CPU_BUF_VM, and that other mitigation needs to win.

Note, decoupling the MMIO mitigation from the L1TF mitigation also fixes a
mostly-benign flaw where KVM wouldn't do any clearing/flushing if the L1TF
mitigation is configured to conditionally flush the L1D, and the MMIO
mitigation, but no other "clear CPU buffers" mitigation, is enabled.  For
that specific scenario, KVM would skip clearing CPU buffers for the MMIO
mitigation even though the kernel requested a clear on every VM-Enter.

Note #2, the flaw goes back to the introduction of the MDS mitigation.
The MDS mitigation was inadvertently fixed by commit 43fb862de8f6
("KVM/VMX: Move VERW closer to VMentry for MDS mitigation"), but previous
kernels that flush CPU buffers in vmx_vcpu_enter_exit() are affected
(though it's unlikely the flaw is meaningfully exploitable even on older
kernels).

Fixes: 650b68a0622f ("x86/kvm/vmx: Add MDS protection when L1D Flush is not active")
Suggested-by: Pawan Gupta
Signed-off-by: Sean Christopherson
Reviewed-by: Pawan Gupta
---
 arch/x86/kvm/vmx/vmenter.S | 14 +++++++++++++-
 arch/x86/kvm/vmx/vmx.c     | 13 -------------
 2 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 1f99a98a16a2..61a809790a58 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -71,6 +71,7 @@
 * @regs:	unsigned long * (to guest registers)
 * @flags:	VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH
 *		VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl
+ *		VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO: vCPU can access host MMIO
 *
 * Returns:
 *	0 on VM-Exit, 1 on VM-Fail
@@ -137,6 +138,12 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load @regs to RAX. */
 	mov (%_ASM_SP), %_ASM_AX
 
+	/* Stash "clear for MMIO" in EFLAGS.ZF (used below). */
+	ALTERNATIVE_2 "", \
+		__stringify(test $VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO, %ebx), \
+		X86_FEATURE_CLEAR_CPU_BUF_MMIO, \
+		"", X86_FEATURE_CLEAR_CPU_BUF_VM
+
 	/* Check if vmlaunch or vmresume is needed */
 	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
 
@@ -161,7 +168,12 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
 	/* Clobbers EFLAGS.ZF */
-	VM_CLEAR_CPU_BUFFERS
+	ALTERNATIVE_2 "", \
+		__stringify(jz .Lskip_clear_cpu_buffers; \
+			    CLEAR_CPU_BUFFERS_SEQ; \
+			    .Lskip_clear_cpu_buffers:), \
+		X86_FEATURE_CLEAR_CPU_BUF_MMIO, \
+		__CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM
 
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above.
 	 */
 	jnc .Lvmlaunch
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 68cde725d1c7..5af2338c7cb8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7339,21 +7339,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/*
-	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
-	 * mitigation for MDS is done late in VMentry and is still
-	 * executed in spite of L1D Flush. This is because an extra VERW
-	 * should not matter much after the big hammer L1D Flush.
-	 *
-	 * cpu_buf_vm_clear is used when system is not vulnerable to MDS/TAA,
-	 * and is affected by MMIO Stale Data. In such cases mitigation in only
-	 * needed against an MMIO capable guest.
-	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_MMIO) &&
-		 (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
-		x86_clear_cpu_buffers();
 
 	vmx_disable_fb_clear(vmx);
 
-- 
2.51.1.930.gacf6e81ea2-goog

From nobody Sun Feb 8 00:50:26 2026
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:37 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
References: <20251031003040.3491385-1-seanjc@google.com>
Message-ID: <20251031003040.3491385-6-seanjc@google.com>
Subject: [PATCH v4 5/8] x86/bugs: KVM: Move VM_CLEAR_CPU_BUFFERS into SVM as SVM_CLEAR_CPU_BUFFERS
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Now that VMX encodes its own sequence for clearing CPU buffers, move
VM_CLEAR_CPU_BUFFERS into SVM to minimize the chances of KVM botching a
mitigation in the future, e.g.
using VM_CLEAR_CPU_BUFFERS instead of checking multiple mitigation flags.

No functional change intended.

Signed-off-by: Sean Christopherson
Acked-by: Borislav Petkov (AMD)
Reviewed-by: Brendan Jackman
---
 arch/x86/include/asm/nospec-branch.h | 3 ---
 arch/x86/kvm/svm/vmenter.S           | 6 ++++--
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index b29df45b1edb..88fe40d6949a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -323,9 +323,6 @@
 #define CLEAR_CPU_BUFFERS \
 	ALTERNATIVE "", __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF

-#define VM_CLEAR_CPU_BUFFERS \
-	ALTERNATIVE "", __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM
-
 #ifdef CONFIG_X86_64
 .macro CLEAR_BRANCH_HISTORY
 	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_LOOP
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index 235c4af6b692..da5f481cb17e 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -92,6 +92,8 @@
 	jmp 901b
 .endm

+#define SVM_CLEAR_CPU_BUFFERS \
+	ALTERNATIVE "", __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM

 /**
  * __svm_vcpu_run - Run a vCPU via a transition to SVM guest mode
@@ -170,7 +172,7 @@ SYM_FUNC_START(__svm_vcpu_run)
 	mov VCPU_RDI(%_ASM_DI), %_ASM_DI

 	/* Clobbers EFLAGS.ZF */
-	VM_CLEAR_CPU_BUFFERS
+	SVM_CLEAR_CPU_BUFFERS

 	/* Enter guest mode */
3:	vmrun %_ASM_AX
@@ -339,7 +341,7 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	mov KVM_VMCB_pa(%rax), %rax

 	/* Clobbers EFLAGS.ZF */
-	VM_CLEAR_CPU_BUFFERS
+	SVM_CLEAR_CPU_BUFFERS

 	/* Enter guest mode */
1:	vmrun %rax
--
2.51.1.930.gacf6e81ea2-goog

From nobody Sun Feb 8 00:50:26 2026
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:38 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
References: <20251031003040.3491385-1-seanjc@google.com>
Message-ID:
<20251031003040.3491385-7-seanjc@google.com>
Subject: [PATCH v4 6/8] KVM: VMX: Bundle all L1 data cache flush mitigation code together
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Move vmx_l1d_flush(), vmx_cleanup_l1d_flush(), and the vmentry_l1d_flush
param code up in vmx.c so that all of the L1 data cache flushing code is
bundled together.  This will allow conditioning the mitigation code on
CONFIG_CPU_MITIGATIONS=y with minimal #ifdefs.

No functional change intended.

Reviewed-by: Brendan Jackman
Signed-off-by: Sean Christopherson
Reviewed-by: Pawan Gupta
---
 arch/x86/kvm/vmx/vmx.c | 174 ++++++++++++++++++++---------------------
 1 file changed, 87 insertions(+), 87 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5af2338c7cb8..55962146fc34 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -302,6 +302,16 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 	return 0;
 }

+static void vmx_cleanup_l1d_flush(void)
+{
+	if (vmx_l1d_flush_pages) {
+		free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
+		vmx_l1d_flush_pages = NULL;
+	}
+	/* Restore state so sysfs ignores VMX */
+	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+}
+
 static int vmentry_l1d_flush_parse(const char *s)
 {
 	unsigned int i;
@@ -352,6 +362,83 @@ static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
 	return sysfs_emit(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
 }

+/*
+ * Software based L1D cache flush which is used when microcode providing
+ * the cache control MSR is not loaded.
+ *
+ * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to
+ * flush it is required to read in 64 KiB because the replacement algorithm
+ * is not exactly LRU. This could be sized at runtime via topology
+ * information but as all relevant affected CPUs have 32KiB L1D cache size
+ * there is no point in doing so.
+ */
+static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
+{
+	int size = PAGE_SIZE << L1D_CACHE_ORDER;
+
+	/*
+	 * This code is only executed when the flush mode is 'cond' or
+	 * 'always'
+	 */
+	if (static_branch_likely(&vmx_l1d_flush_cond)) {
+		bool flush_l1d;
+
+		/*
+		 * Clear the per-vcpu flush bit, it gets set again if the vCPU
+		 * is reloaded, i.e. if the vCPU is scheduled out or if KVM
+		 * exits to userspace, or if KVM reaches one of the unsafe
+		 * VMEXIT handlers, e.g. if KVM calls into the emulator.
+		 */
+		flush_l1d = vcpu->arch.l1tf_flush_l1d;
+		vcpu->arch.l1tf_flush_l1d = false;
+
+		/*
+		 * Clear the per-cpu flush bit, it gets set again from
+		 * the interrupt handlers.
+		 */
+		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
+		kvm_clear_cpu_l1tf_flush_l1d();
+
+		if (!flush_l1d)
+			return;
+	}
+
+	vcpu->stat.l1d_flush++;
+
+	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
+		native_wrmsrq(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
+		return;
+	}
+
+	asm volatile(
+		/* First ensure the pages are in the TLB */
+		"xorl %%eax, %%eax\n"
+		".Lpopulate_tlb:\n\t"
+		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
+		"addl $4096, %%eax\n\t"
+		"cmpl %%eax, %[size]\n\t"
+		"jne .Lpopulate_tlb\n\t"
+		"xorl %%eax, %%eax\n\t"
+		"cpuid\n\t"
+		/* Now fill the cache */
+		"xorl %%eax, %%eax\n"
+		".Lfill_cache:\n"
+		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
+		"addl $64, %%eax\n\t"
+		"cmpl %%eax, %[size]\n\t"
+		"jne .Lfill_cache\n\t"
+		"lfence\n"
+		:: [flush_pages] "r" (vmx_l1d_flush_pages),
+		   [size] "r" (size)
+		: "eax", "ebx", "ecx", "edx");
+}
+
+static const struct kernel_param_ops vmentry_l1d_flush_ops = {
+	.set = vmentry_l1d_flush_set,
+	.get = vmentry_l1d_flush_get,
+};
+module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
+
 static __always_inline void vmx_disable_fb_clear(struct vcpu_vmx *vmx)
 {
 	u64 msr;
@@ -404,12 +491,6 @@ static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 	vmx->disable_fb_clear = false;
 }

-static const struct kernel_param_ops vmentry_l1d_flush_ops = {
-	.set = vmentry_l1d_flush_set,
-	.get = vmentry_l1d_flush_get,
-};
-module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
-
 static u32 vmx_segment_access_rights(struct kvm_segment *var);

 void vmx_vmexit(void);
@@ -6672,77 +6753,6 @@ int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	return ret;
 }

-/*
- * Software based L1D cache flush which is used when microcode providing
- * the cache control MSR is not loaded.
- *
- * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to
- * flush it is required to read in 64 KiB because the replacement algorithm
- * is not exactly LRU. This could be sized at runtime via topology
- * information but as all relevant affected CPUs have 32KiB L1D cache size
- * there is no point in doing so.
- */
-static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
-{
-	int size = PAGE_SIZE << L1D_CACHE_ORDER;
-
-	/*
-	 * This code is only executed when the flush mode is 'cond' or
-	 * 'always'
-	 */
-	if (static_branch_likely(&vmx_l1d_flush_cond)) {
-		bool flush_l1d;
-
-		/*
-		 * Clear the per-vcpu flush bit, it gets set again if the vCPU
-		 * is reloaded, i.e. if the vCPU is scheduled out or if KVM
-		 * exits to userspace, or if KVM reaches one of the unsafe
-		 * VMEXIT handlers, e.g. if KVM calls into the emulator.
-		 */
-		flush_l1d = vcpu->arch.l1tf_flush_l1d;
-		vcpu->arch.l1tf_flush_l1d = false;
-
-		/*
-		 * Clear the per-cpu flush bit, it gets set again from
-		 * the interrupt handlers.
-		 */
-		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
-		kvm_clear_cpu_l1tf_flush_l1d();
-
-		if (!flush_l1d)
-			return;
-	}
-
-	vcpu->stat.l1d_flush++;
-
-	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
-		native_wrmsrq(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
-		return;
-	}
-
-	asm volatile(
-		/* First ensure the pages are in the TLB */
-		"xorl %%eax, %%eax\n"
-		".Lpopulate_tlb:\n\t"
-		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
-		"addl $4096, %%eax\n\t"
-		"cmpl %%eax, %[size]\n\t"
-		"jne .Lpopulate_tlb\n\t"
-		"xorl %%eax, %%eax\n\t"
-		"cpuid\n\t"
-		/* Now fill the cache */
-		"xorl %%eax, %%eax\n"
-		".Lfill_cache:\n"
-		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
-		"addl $64, %%eax\n\t"
-		"cmpl %%eax, %[size]\n\t"
-		"jne .Lfill_cache\n\t"
-		"lfence\n"
-		:: [flush_pages] "r" (vmx_l1d_flush_pages),
-		   [size] "r" (size)
-		: "eax", "ebx", "ecx", "edx");
-}
-
 void vmx_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
 {
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
@@ -8677,16 +8687,6 @@ __init int vmx_hardware_setup(void)
 	return r;
 }

-static void vmx_cleanup_l1d_flush(void)
-{
-	if (vmx_l1d_flush_pages) {
-		free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
-		vmx_l1d_flush_pages = NULL;
-	}
-	/* Restore state so sysfs ignores VMX */
-	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
-}
-
 void vmx_exit(void)
 {
 	allow_smaller_maxphyaddr = false;
--
2.51.1.930.gacf6e81ea2-goog

From nobody Sun Feb 8 00:50:26 2026
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:39 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
References: <20251031003040.3491385-1-seanjc@google.com>
Message-ID: <20251031003040.3491385-8-seanjc@google.com>
Subject: [PATCH v4 7/8] KVM: VMX: Disable L1TF L1 data cache flush if CONFIG_CPU_MITIGATIONS=n
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

Disable support for flushing the L1 data cache to mitigate L1TF if CPU
mitigations are disabled for the entire kernel.  KVM's mitigation of L1TF
is in no way special enough to justify ignoring CONFIG_CPU_MITIGATIONS=n.

Deliberately use CPU_MITIGATIONS instead of the more precise
MITIGATION_L1TF, as MITIGATION_L1TF only controls the default behavior,
i.e. CONFIG_MITIGATION_L1TF=n doesn't completely disable L1TF mitigations
in the kernel.

Keep the vmentry_l1d_flush module param to avoid breaking existing setups,
and leverage the .set path to alert the user to the fact that
vmentry_l1d_flush will be ignored.  Don't bother validating the incoming
value; if an admin misconfigures vmentry_l1d_flush, the fact that the bad
configuration won't be detected when running with CONFIG_CPU_MITIGATIONS=n
is likely the least of their worries.
Signed-off-by: Sean Christopherson
Reviewed-by: Brendan Jackman
---
 arch/x86/include/asm/hardirq.h |  4 +--
 arch/x86/kvm/vmx/vmx.c         | 56 ++++++++++++++++++++++++++--------
 2 files changed, 46 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index f00c09ffe6a9..6b6d472baa0b 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -5,7 +5,7 @@
 #include

 typedef struct {
-#if IS_ENABLED(CONFIG_KVM_INTEL)
+#if IS_ENABLED(CONFIG_CPU_MITIGATIONS) && IS_ENABLED(CONFIG_KVM_INTEL)
 	u8 kvm_cpu_l1tf_flush_l1d;
 #endif
 	unsigned int __nmi_count;	/* arch dependent */
@@ -68,7 +68,7 @@ extern u64 arch_irq_stat(void);
 DECLARE_PER_CPU_CACHE_HOT(u16, __softirq_pending);
 #define local_softirq_pending_ref __softirq_pending

-#if IS_ENABLED(CONFIG_KVM_INTEL)
+#if IS_ENABLED(CONFIG_CPU_MITIGATIONS) && IS_ENABLED(CONFIG_KVM_INTEL)
 /*
  * This function is called from noinstr interrupt contexts
  * and must be inlined to not get instrumentation.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 55962146fc34..1b5540105e4b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -203,6 +203,7 @@ module_param(pt_mode, int, S_IRUGO);

 struct x86_pmu_lbr __ro_after_init vmx_lbr_caps;

+#ifdef CONFIG_CPU_MITIGATIONS
 static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
 static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond);
 static DEFINE_MUTEX(vmx_l1d_flush_mutex);
@@ -225,7 +226,7 @@ static const struct {
 #define L1D_CACHE_ORDER 4
 static void *vmx_l1d_flush_pages;

-static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
+static int __vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 {
 	struct page *page;
 	unsigned int i;
@@ -302,6 +303,16 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 	return 0;
 }

+static int vmx_setup_l1d_flush(void)
+{
+	/*
+	 * Hand the parameter mitigation value in which was stored in the pre
+	 * module init parser. If no parameter was given, it will contain
+	 * 'auto' which will be turned into the default 'cond' mitigation mode.
+	 */
+	return __vmx_setup_l1d_flush(vmentry_l1d_flush_param);
+}
+
 static void vmx_cleanup_l1d_flush(void)
 {
 	if (vmx_l1d_flush_pages) {
@@ -349,7 +360,7 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
 	}

 	mutex_lock(&vmx_l1d_flush_mutex);
-	ret = vmx_setup_l1d_flush(l1tf);
+	ret = __vmx_setup_l1d_flush(l1tf);
 	mutex_unlock(&vmx_l1d_flush_mutex);
 	return ret;
 }
@@ -376,6 +387,9 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 {
 	int size = PAGE_SIZE << L1D_CACHE_ORDER;

+	if (!static_branch_unlikely(&vmx_l1d_should_flush))
+		return;
+
 	/*
 	 * This code is only executed when the flush mode is 'cond' or
 	 * 'always'
@@ -433,6 +447,31 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 	: "eax", "ebx", "ecx", "edx");
 }

+#else /* CONFIG_CPU_MITIGATIONS */
+static int vmx_setup_l1d_flush(void)
+{
+	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NEVER;
+	return 0;
+}
+static void vmx_cleanup_l1d_flush(void)
+{
+	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+}
+static __always_inline void vmx_l1d_flush(struct kvm_vcpu *vcpu)
+{
+
+}
+static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+{
+	pr_warn_once("Kernel compiled without mitigations, ignoring vmentry_l1d_flush\n");
+	return 0;
+}
+static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
+{
+	return sysfs_emit(s, "never\n");
+}
+#endif
+
 static const struct kernel_param_ops vmentry_l1d_flush_ops = {
 	.set = vmentry_l1d_flush_set,
 	.get = vmentry_l1d_flush_get,
@@ -7349,8 +7388,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,

 	guest_state_enter_irqoff();

-	if (static_branch_unlikely(&vmx_l1d_should_flush))
-		vmx_l1d_flush(vcpu);
+	vmx_l1d_flush(vcpu);

 	vmx_disable_fb_clear(vmx);

@@ -8722,14 +8760,8 @@ int __init vmx_init(void)
 	if (r)
 		return r;

-	/*
-	 * Must be called after common x86 init so enable_ept is properly set
-	 * up. Hand the parameter mitigation value in which was stored in
-	 * the pre module init parser. If no parameter was given, it will
-	 * contain 'auto' which will be turned into the default 'cond'
-	 * mitigation mode.
-	 */
-	r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
+	/* Must be called after common x86 init so enable_ept is setup. */
+	r = vmx_setup_l1d_flush();
 	if (r)
 		goto err_l1d_flush;
--
2.51.1.930.gacf6e81ea2-goog

From nobody Sun Feb 8 00:50:26 2026
Reply-To: Sean Christopherson
Date: Thu, 30 Oct 2025 17:30:40 -0700
In-Reply-To: <20251031003040.3491385-1-seanjc@google.com>
References: <20251031003040.3491385-1-seanjc@google.com>
Message-ID: <20251031003040.3491385-9-seanjc@google.com>
Subject: [PATCH v4 8/8] KVM: x86: Unify L1TF flushing under per-CPU variable
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Pawan Gupta, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

From: Brendan Jackman

Currently, the need to flush L1D for L1TF is tracked by two bits: one
per-CPU and one per-vCPU.  The per-vCPU bit is always set when the vCPU
shows up on a core, so there is no interesting state that's truly
per-vCPU.  Indeed, this is a requirement, since L1D is a part of the
physical CPU.  So simplify this by combining the two bits.

The vCPU bit was being written from preemption-enabled regions.  To play
nice with those cases, wrap all calls from KVM and use a raw write so that
requesting a flush with preemption enabled doesn't trigger what would
effectively be DEBUG_PREEMPT false positives.
Preemption doesn't need to be disabled, as kvm_arch_vcpu_load() will
mark the new CPU as needing a flush if the vCPU task is migrated, or if
userspace runs the vCPU on a different task.

Signed-off-by: Brendan Jackman
[sean: put raw write in KVM instead of in a hardirq.h variant]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  3 ---
 arch/x86/kvm/mmu/mmu.c          |  2 +-
 arch/x86/kvm/vmx/nested.c       |  2 +-
 arch/x86/kvm/vmx/vmx.c          | 20 +++++---------------
 arch/x86/kvm/x86.c              |  6 +++---
 arch/x86/kvm/x86.h              | 14 ++++++++++++++
 6 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 48598d017d6f..fcdc65ab13d8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1055,9 +1055,6 @@ struct kvm_vcpu_arch {
 	/* be preempted when it's in kernel-mode(cpl=0) */
 	bool preempted_in_kernel;
 
-	/* Flush the L1 Data cache for L1TF mitigation on VMENTER */
-	bool l1tf_flush_l1d;
-
 	/* Host CPU on which VM-entry was most recently attempted */
 	int last_vmentry_cpu;
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 18d69d48bc55..4e016582adc7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4859,7 +4859,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 	 */
 	BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK));
 
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_request_l1tf_flush_l1d();
 	if (!flags) {
 		trace_kvm_page_fault(vcpu, fault_address, error_code);
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b0cd745518b4..6f2f969d19f9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3828,7 +3828,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 		goto vmentry_failed;
 
 	/* Hide L1D cache contents from the nested guest.  */
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_request_l1tf_flush_l1d();
 
 	/*
 	 * Must happen outside of nested_vmx_enter_non_root_mode() as it will
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1b5540105e4b..f87af1836ea1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -395,26 +395,16 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 	 * 'always'
 	 */
 	if (static_branch_likely(&vmx_l1d_flush_cond)) {
-		bool flush_l1d;
-
 		/*
-		 * Clear the per-vcpu flush bit, it gets set again if the vCPU
+		 * Clear the per-cpu flush bit, it gets set again if the vCPU
 		 * is reloaded, i.e. if the vCPU is scheduled out or if KVM
 		 * exits to userspace, or if KVM reaches one of the unsafe
-		 * VMEXIT handlers, e.g. if KVM calls into the emulator.
+		 * VMEXIT handlers, e.g. if KVM calls into the emulator,
+		 * or from the interrupt handlers.
 		 */
-		flush_l1d = vcpu->arch.l1tf_flush_l1d;
-		vcpu->arch.l1tf_flush_l1d = false;
-
-		/*
-		 * Clear the per-cpu flush bit, it gets set again from
-		 * the interrupt handlers.
-		 */
-		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
+		if (!kvm_get_cpu_l1tf_flush_l1d())
+			return;
 		kvm_clear_cpu_l1tf_flush_l1d();
-
-		if (!flush_l1d)
-			return;
 	}
 
 	vcpu->stat.l1d_flush++;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b4b5d2d09634..851f078cd5ca 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5189,7 +5189,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_request_l1tf_flush_l1d();
 
 	if (vcpu->scheduled_out && pmu->version && pmu->event_count) {
 		pmu->need_cleanup = true;
@@ -7999,7 +7999,7 @@ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
 			       unsigned int bytes, struct x86_exception *exception)
 {
 	/* kvm_write_guest_virt_system can pull in tons of pages.  */
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_request_l1tf_flush_l1d();
 
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
 					   PFERR_WRITE_MASK, exception);
@@ -9395,7 +9395,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		return handle_emulation_failure(vcpu, emulation_type);
 	}
 
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_request_l1tf_flush_l1d();
 
 	if (!(emulation_type & EMULTYPE_NO_DECODE)) {
 		kvm_clear_exception_queue(vcpu);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index f3dc77f006f9..cd67ccbb747f 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -420,6 +420,20 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 	return !(kvm->arch.disabled_quirks & quirk);
 }
 
+static __always_inline void kvm_request_l1tf_flush_l1d(void)
+{
+#if IS_ENABLED(CONFIG_CPU_MITIGATIONS) && IS_ENABLED(CONFIG_KVM_INTEL)
+	/*
+	 * Use a raw write to set the per-CPU flag, as KVM will ensure a flush
+	 * even if preemption is currently enabled.  If the current vCPU task
+	 * is migrated to a different CPU (or userspace runs the vCPU on a
+	 * different task) before the next VM-Entry, then kvm_arch_vcpu_load()
+	 * will request a flush on the new CPU.
+	 */
+	raw_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
+#endif
+}
+
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
 
 u64 get_kvmclock_ns(struct kvm *kvm);
-- 
2.51.1.930.gacf6e81ea2-goog