Date: Wed, 15 Oct 2025 17:13:55 +0000
Message-ID: <20251015-b4-l1tf-percpu-v2-1-6d7a8d3d40e9@google.com>
Subject: [PATCH v2] KVM: x86: Unify L1TF flushing under per-CPU variable
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, "H. Peter Anvin", Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Brendan Jackman

Currently the need to flush L1D for L1TF is tracked by two bits: one
per-CPU and one per-vCPU. The per-vCPU bit is always set when the vCPU
shows up on a core, so there is no interesting state that is truly
per-vCPU. Indeed, this is a requirement, since L1D is part of the
physical CPU. So simplify this by combining the two bits.

The vCPU bit was being written from preemption-enabled regions. For
those cases, use raw_cpu_write() (via a _raw variant of the setter
function) to avoid DEBUG_PREEMPT failures. If the vCPU is being
migrated, the CPU that gets its bit set in these paths is not
important; vcpu_load() must always set it on the destination CPU
before the guest is resumed.

Signed-off-by: Brendan Jackman
---
Changes in v2:
- Moved the bit back to irq_stat
- Fixed DEBUG_PREEMPT issues by adding a _raw variant
- Link to v1: https://lore.kernel.org/r/20251013-b4-l1tf-percpu-v1-1-d65c5366ea1a@google.com
---
 arch/x86/include/asm/hardirq.h  |  6 ++++++
 arch/x86/include/asm/kvm_host.h |  3 ---
 arch/x86/kvm/mmu/mmu.c          |  2 +-
 arch/x86/kvm/vmx/nested.c       |  2 +-
 arch/x86/kvm/vmx/vmx.c          | 20 +++++---------------
 arch/x86/kvm/x86.c              |  6 +++---
 6 files changed, 16 insertions(+), 23 deletions(-)
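Not part of the patch, but as context for the DEBUG_PREEMPT point above:
roughly speaking, the non-raw percpu write is the raw one plus a
preemption sanity check. A simplified sketch of the existing definition
(see include/linux/percpu-defs.h; quoted from memory, illustrative only):

  /*
   * With CONFIG_DEBUG_PREEMPT, __this_cpu_preempt_check() warns if the
   * caller is preemptible; raw_cpu_write() skips that check.
   */
  #define __this_cpu_write(pcp, val)		\
  ({						\
  	__this_cpu_preempt_check("write");	\
  	raw_cpu_write(pcp, val);		\
  })

So the _raw setter only skips the debug check; the store itself is the
same. That is acceptable in the preemptible paths touched here because
it does not matter which CPU's bit gets set: vcpu_load() marks the CPU
that actually enters the guest.
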
diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
index f00c09ffe6a95f07342bb0c6cea3769d71eecfa9..8a5c5deadb5912cc9ae080740c8a7372e6ef7577 100644
--- a/arch/x86/include/asm/hardirq.h
+++ b/arch/x86/include/asm/hardirq.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_HARDIRQ_H
 #define _ASM_X86_HARDIRQ_H
 
+#include
 #include
 
 typedef struct {
@@ -78,6 +79,11 @@ static __always_inline void kvm_set_cpu_l1tf_flush_l1d(void)
 	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
 }
 
+static __always_inline void kvm_set_cpu_l1tf_flush_l1d_raw(void)
+{
+	raw_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
+}
+
 static __always_inline void kvm_clear_cpu_l1tf_flush_l1d(void)
 {
 	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 48598d017d6f3f07263a2ffffe670be2658eb9cb..fcdc65ab13d8383018577aacf19e832e6c4ceb0b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1055,9 +1055,6 @@ struct kvm_vcpu_arch {
 	/* be preempted when it's in kernel-mode(cpl=0) */
 	bool preempted_in_kernel;
 
-	/* Flush the L1 Data cache for L1TF mitigation on VMENTER */
-	bool l1tf_flush_l1d;
-
 	/* Host CPU on which VM-entry was most recently attempted */
 	int last_vmentry_cpu;
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 667d66cf76d5e52c22f9517914307244ae868eea..8c0dce401a42d977756ca82d249bb33c858b9c9f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4859,7 +4859,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 	 */
 	BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK));
 
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_set_cpu_l1tf_flush_l1d();
 	if (!flags) {
 		trace_kvm_page_fault(vcpu, fault_address, error_code);
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 76271962cb7083b475de6d7d24bf9cb918050650..1d376b4e6aa4abc475c1aac2ee937dbedb834cb1 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3880,7 +3880,7 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 		goto vmentry_failed;
 
 	/* Hide L1D cache contents from the nested guest. */
-	vmx->vcpu.arch.l1tf_flush_l1d = true;
+	kvm_set_cpu_l1tf_flush_l1d_raw();
 
 	/*
 	 * Must happen outside of nested_vmx_enter_non_root_mode() as it will
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 546272a5d34da301710df1d89414f41fc9b24a1f..6515beefa1fc8da042c0b66c207250ccf79c888e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6673,26 +6673,16 @@ static noinstr void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 	 * 'always'
 	 */
 	if (static_branch_likely(&vmx_l1d_flush_cond)) {
-		bool flush_l1d;
-
 		/*
-		 * Clear the per-vcpu flush bit, it gets set again if the vCPU
+		 * Clear the per-cpu flush bit, it gets set again if the vCPU
 		 * is reloaded, i.e. if the vCPU is scheduled out or if KVM
 		 * exits to userspace, or if KVM reaches one of the unsafe
-		 * VMEXIT handlers, e.g. if KVM calls into the emulator.
+		 * VMEXIT handlers, e.g. if KVM calls into the emulator,
+		 * or from the interrupt handlers.
 		 */
-		flush_l1d = vcpu->arch.l1tf_flush_l1d;
-		vcpu->arch.l1tf_flush_l1d = false;
-
-		/*
-		 * Clear the per-cpu flush bit, it gets set again from
-		 * the interrupt handlers.
-		 */
-		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
-		kvm_clear_cpu_l1tf_flush_l1d();
-
-		if (!flush_l1d)
+		if (!kvm_get_cpu_l1tf_flush_l1d())
 			return;
+		kvm_clear_cpu_l1tf_flush_l1d();
 	}
 
 	vcpu->stat.l1d_flush++;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b8138bd48572fd161eda73d2dbdc1dcd0bcbcac..dc886c4b9b1fe3d63a4c255ed4fc533d20fd1962 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5190,7 +5190,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_set_cpu_l1tf_flush_l1d();
 
 	if (vcpu->scheduled_out && pmu->version && pmu->event_count) {
 		pmu->need_cleanup = true;
@@ -8000,7 +8000,7 @@ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
 			       unsigned int bytes, struct x86_exception *exception)
 {
 	/* kvm_write_guest_virt_system can pull in tons of pages. */
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_set_cpu_l1tf_flush_l1d_raw();
 
 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
 					   PFERR_WRITE_MASK, exception);
@@ -9396,7 +9396,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		return handle_emulation_failure(vcpu, emulation_type);
 	}
 
-	vcpu->arch.l1tf_flush_l1d = true;
+	kvm_set_cpu_l1tf_flush_l1d_raw();
 
 	if (!(emulation_type & EMULTYPE_NO_DECODE)) {
 		kvm_clear_exception_queue(vcpu);

---
base-commit: 6b36119b94d0b2bb8cea9d512017efafd461d6ac
change-id: 20251013-b4-l1tf-percpu-793181fa5884

Best regards,
-- 
Brendan Jackman