From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Maxim Levitsky, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed, Matteo Rizzo, evn@google.com
Subject: [PATCH 1/6] KVM: SVM: Mark VMCB_LBR dirty when MSR_IA32_DEBUGCTLMSR is updated
Date: Sat, 8 Nov 2025 00:45:19 +0000
Message-ID: <20251108004524.1600006-2-yosry.ahmed@linux.dev>
In-Reply-To: <20251108004524.1600006-1-yosry.ahmed@linux.dev>
References: <20251108004524.1600006-1-yosry.ahmed@linux.dev>

Clear the VMCB_LBR clean bit when MSR_IA32_DEBUGCTLMSR is updated, as the
only valid bit is DEBUGCTLMSR_LBR. The history is complicated: the bit was
correctly cleared for L1 before commit 1d5a1b5860ed ("KVM: x86: nSVM:
correctly virtualize LBR msrs when L2 is running"), after which that commit
relied on svm_update_lbrv() to clear it, but svm_update_lbrv() only does so
for L2. Go back to clearing it directly in svm_set_msr().
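As background, the VMCB "clean bits" let the CPU skip reloading groups of
VMCB fields that were not modified since the last VMRUN; marking a group
dirty simply clears its bit so the hardware reloads that group. A reduced
sketch of the mechanism (the struct, names and bit value below are
illustrative stand-ins, not the kernel's definitions):

#include <stdint.h>

#define VMCB_LBR_BIT 10	/* illustrative index for the LBR/DBGCTL field group */

struct vmcb_control_sketch {
        uint32_t clean;	/* bitmask: 1 = field group unchanged since last VMRUN */
};

/* Rough equivalent of vmcb_mark_dirty(): force the CPU to re-read the group. */
static inline void mark_dirty(struct vmcb_control_sketch *c, int bit)
{
        c->clean &= ~(1u << bit);
}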
Fixes: 1d5a1b5860ed ("KVM: x86: nSVM: correctly virtualize LBR msrs when L2 is running")
Reported-by: Matteo Rizzo
Reported-by: evn@google.com
Co-developed-by: Jim Mattson
Signed-off-by: Jim Mattson
Signed-off-by: Yosry Ahmed
---
 arch/x86/kvm/svm/svm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 55bd7aa5cd743..d25c56b30b4e2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3009,7 +3009,11 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
                 if (data & DEBUGCTL_RESERVED_BITS)
                         return 1;
 
+                if (svm_get_lbr_vmcb(svm)->save.dbgctl == data)
+                        break;
+
                 svm_get_lbr_vmcb(svm)->save.dbgctl = data;
+                vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
                 svm_update_lbrv(vcpu);
                 break;
         case MSR_VM_HSAVE_PA:
-- 
2.51.2.1041.gc1ab5b90ca-goog
From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Maxim Levitsky, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed, stable@vger.kernel.org
Subject: [PATCH 2/6] KVM: nSVM: Always recalculate LBR MSR intercepts in svm_update_lbrv()
Date: Sat, 8 Nov 2025 00:45:20 +0000
Message-ID: <20251108004524.1600006-3-yosry.ahmed@linux.dev>
In-Reply-To: <20251108004524.1600006-1-yosry.ahmed@linux.dev>
References: <20251108004524.1600006-1-yosry.ahmed@linux.dev>

svm_update_lbrv() is called when MSR_IA32_DEBUGCTLMSR is updated, and on
nested transitions where LBRV is used. It checks whether LBRV enablement
needs to be changed in the current VMCB, and if it does, it also
recalculates the intercepts for the LBR MSRs. However, there are cases
where the intercepts need to be updated even when LBRV enablement does not
change. Example scenario:

- L1 has MSR_IA32_DEBUGCTLMSR cleared.
- L1 runs L2 without LBR_CTL_ENABLE (no LBRV).
- L2 sets DEBUGCTLMSR_LBR in MSR_IA32_DEBUGCTLMSR; svm_update_lbrv() sets
  LBR_CTL_ENABLE in VMCB02 and disables the intercepts for the LBR MSRs.
- L2 exits to L1; svm_update_lbrv() is not called on this transition.
- L1 clears MSR_IA32_DEBUGCTLMSR; svm_update_lbrv() finds that
  LBR_CTL_ENABLE is already cleared in VMCB01 and does nothing.
- The intercepts remain disabled, so L1's reads of the LBR MSRs return the
  host's values.

Fix it by always recalculating the intercepts in svm_update_lbrv().
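Purely as an illustration, the decision svm_update_lbrv() makes can be
condensed into a plain predicate (the struct and helper below are
stand-ins, not kernel code; in the real function the nested case is
additionally gated on the guest having X86_FEATURE_LBRV):

#include <stdbool.h>

struct lbrv_inputs {
        bool guest_dbgctl_lbr;	/* DEBUGCTLMSR_LBR set in the current VMCB's DEBUGCTL */
        bool is_guest_mode;	/* L2 is currently running */
        bool l1_lbrv_for_l2;	/* L1 set LBR_CTL_ENABLE in the nested controls */
};

static bool want_lbrv(const struct lbrv_inputs *in)
{
        return in->guest_dbgctl_lbr ||
               (in->is_guest_mode && in->l1_lbrv_for_l2);
}

/*
 * LBR_CTL_ENABLE in the current VMCB only needs to change when want_lbrv()
 * changes, but the LBR MSR intercepts must be recalculated on every call:
 * a nested transition switches VMCBs and can leave the intercepts stale
 * even though the enable bit already has the desired value.
 */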
Fixes: 1d5a1b5860ed ("KVM: x86: nSVM: correctly virtualize LBR msrs when L2 is running")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed
---
 arch/x86/kvm/svm/svm.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d25c56b30b4e2..26ab75ecf1c67 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -806,25 +806,29 @@ void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
         vmcb_mark_dirty(to_vmcb, VMCB_LBR);
 }
 
-void svm_enable_lbrv(struct kvm_vcpu *vcpu)
+static void __svm_enable_lbrv(struct kvm_vcpu *vcpu)
 {
         struct vcpu_svm *svm = to_svm(vcpu);
 
         svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
-        svm_recalc_lbr_msr_intercepts(vcpu);
 
         /* Move the LBR msrs to the vmcb02 so that the guest can see them. */
         if (is_guest_mode(vcpu))
                 svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
 }
 
-static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
+void svm_enable_lbrv(struct kvm_vcpu *vcpu)
+{
+        __svm_enable_lbrv(vcpu);
+        svm_recalc_lbr_msr_intercepts(vcpu);
+}
+
+static void __svm_disable_lbrv(struct kvm_vcpu *vcpu)
 {
         struct vcpu_svm *svm = to_svm(vcpu);
 
         KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
         svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
-        svm_recalc_lbr_msr_intercepts(vcpu);
 
         /*
          * Move the LBR msrs back to the vmcb01 to avoid copying them
@@ -853,13 +857,18 @@ void svm_update_lbrv(struct kvm_vcpu *vcpu)
                            (is_guest_mode(vcpu) && guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
                            (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK));
 
-        if (enable_lbrv == current_enable_lbrv)
-                return;
+        if (enable_lbrv && !current_enable_lbrv)
+                __svm_enable_lbrv(vcpu);
+        else if (!enable_lbrv && current_enable_lbrv)
+                __svm_disable_lbrv(vcpu);
 
-        if (enable_lbrv)
-                svm_enable_lbrv(vcpu);
-        else
-                svm_disable_lbrv(vcpu);
+        /*
+         * During nested transitions, it is possible that the current VMCB has
+         * LBR_CTL set, but the previous LBR_CTL had it cleared (or vice versa).
+         * In this case, even though LBR_CTL does not need an update, intercepts
+         * do, so always recalculate the intercepts here.
+         */
+        svm_recalc_lbr_msr_intercepts(vcpu);
 }
 
 void disable_nmi_singlestep(struct vcpu_svm *svm)
-- 
2.51.2.1041.gc1ab5b90ca-goog
From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Maxim Levitsky, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed, stable@vger.kernel.org
Subject: [PATCH 3/6] KVM: nSVM: Fix and simplify LBR virtualization handling with nested
Date: Sat, 8 Nov 2025 00:45:21 +0000
Message-ID: <20251108004524.1600006-4-yosry.ahmed@linux.dev>
In-Reply-To: <20251108004524.1600006-1-yosry.ahmed@linux.dev>
References: <20251108004524.1600006-1-yosry.ahmed@linux.dev>

The current scheme for handling LBRV when nested virtualization is used is
very complicated, especially when L1 does not enable LBRV (i.e. does not
set LBR_CTL_ENABLE_MASK).

To avoid copying LBRs between VMCB01 and VMCB02 on every nested
transition, the current implementation switches between using VMCB01 or
VMCB02 as the source of truth for the LBRs while L2 is running. If L2
enables LBR, VMCB02 is used as the source of truth. When L2 disables LBR,
the LBRs are copied to VMCB01 and VMCB01 is used as the source of truth.

This introduces significant complexity, and incorrect behavior in some
cases. For example, on a nested #VMEXIT, the LBRs are only copied from
VMCB02 to VMCB01 if LBRV is enabled in VMCB01. This is because L2's writes
to MSR_IA32_DEBUGCTLMSR to enable LBR are intercepted and propagated to
VMCB01 instead of VMCB02. However, LBRV is only enabled in VMCB02 when L2
is running. This means that if L2 enables LBR and exits to L1, the LBRs
will not be propagated from VMCB02 to VMCB01, because LBRV is disabled in
VMCB01.

There is no meaningful difference in the CPUID rate in L2 when copying
LBRs on every nested transition vs. the current approach, so do the simple
and correct thing and always copy LBRs between VMCB01 and VMCB02 on nested
transitions (when LBRV is disabled by L1). Drop the conditional LBR
copying in __svm_{enable/disable}_lbrv() as it is now unnecessary.

VMCB02 becomes the only source of truth for LBRs when L2 is running,
regardless of whether L1 enabled LBRV, so drop svm_get_lbr_vmcb() and use
svm->vmcb directly in its place.
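To make the intended end state easier to follow, the copy rules after this
patch reduce to the two small helpers sketched below (the types and helper
names are stand-ins used for illustration; the real code operates on the
VMCB save areas via svm_copy_lbrs()):

#include <stdint.h>

struct lbr_state {
        uint64_t dbgctl, br_from, br_to, last_excp_from, last_excp_to;
};

/* Nested VMRUN: vmcb02 becomes the source of truth while L2 runs. */
static void lbrs_on_nested_vmrun(struct lbr_state *vmcb01, struct lbr_state *vmcb02,
                                 const struct lbr_state *vmcb12, bool l1_lbrv_for_l2)
{
        /* If L1 virtualizes LBRs for L2, L2 gets vmcb12's LBRs; otherwise it shares L1's. */
        *vmcb02 = l1_lbrv_for_l2 ? *vmcb12 : *vmcb01;
}

/* Nested #VMEXIT: whatever L2 produced goes back to the owner of those LBRs. */
static void lbrs_on_nested_vmexit(struct lbr_state *vmcb01, const struct lbr_state *vmcb02,
                                  struct lbr_state *vmcb12, bool l1_lbrv_for_l2)
{
        if (l1_lbrv_for_l2)
                *vmcb12 = *vmcb02;
        else
                *vmcb01 = *vmcb02;
}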
Fixes: 1d5a1b5860ed ("KVM: x86: nSVM: correctly virtualize LBR msrs when L2 is running")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed
---
 arch/x86/kvm/svm/nested.c | 20 ++++++-----------
 arch/x86/kvm/svm/svm.c    | 46 +++++++++------------------------------
 2 files changed, 17 insertions(+), 49 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 71664d54d8b2a..c81005b245222 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -678,11 +678,10 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
                  */
                 svm_copy_lbrs(vmcb02, vmcb12);
                 vmcb02->save.dbgctl &= ~DEBUGCTL_RESERVED_BITS;
-                svm_update_lbrv(&svm->vcpu);
-
-        } else if (unlikely(vmcb01->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
+        } else {
                 svm_copy_lbrs(vmcb02, vmcb01);
         }
+        svm_update_lbrv(&svm->vcpu);
 }
 
 static inline bool is_evtinj_soft(u32 evtinj)
@@ -835,11 +834,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
                 svm->soft_int_next_rip = vmcb12_rip;
         }
 
-        vmcb02->control.virt_ext = vmcb01->control.virt_ext &
-                                   LBR_CTL_ENABLE_MASK;
-        if (guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV))
-                vmcb02->control.virt_ext |=
-                        (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK);
+        /* LBR_CTL_ENABLE_MASK is controlled by svm_update_lbrv() */
 
         if (!nested_vmcb_needs_vls_intercept(svm))
                 vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
@@ -1191,13 +1186,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
         kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
         if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
-                     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+                     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK)))
                 svm_copy_lbrs(vmcb12, vmcb02);
-                svm_update_lbrv(vcpu);
-        } else if (unlikely(vmcb01->control.virt_ext & LBR_CTL_ENABLE_MASK)) {
+        else
                 svm_copy_lbrs(vmcb01, vmcb02);
-                svm_update_lbrv(vcpu);
-        }
+
+        svm_update_lbrv(vcpu);
 
         if (vnmi) {
                 if (vmcb02->control.int_ctl & V_NMI_BLOCKING_MASK)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 26ab75ecf1c67..fc42bcdbb5200 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -808,13 +808,7 @@ void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
 
 static void __svm_enable_lbrv(struct kvm_vcpu *vcpu)
 {
-        struct vcpu_svm *svm = to_svm(vcpu);
-
-        svm->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
-
-        /* Move the LBR msrs to the vmcb02 so that the guest can see them. */
-        if (is_guest_mode(vcpu))
-                svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
+        to_svm(vcpu)->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
 }
 
 void svm_enable_lbrv(struct kvm_vcpu *vcpu)
@@ -825,35 +819,15 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
 
 static void __svm_disable_lbrv(struct kvm_vcpu *vcpu)
 {
-        struct vcpu_svm *svm = to_svm(vcpu);
-
         KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
-        svm->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
-
-        /*
-         * Move the LBR msrs back to the vmcb01 to avoid copying them
-         * on nested guest entries.
-         */
-        if (is_guest_mode(vcpu))
-                svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb);
-}
-
-static struct vmcb *svm_get_lbr_vmcb(struct vcpu_svm *svm)
-{
-        /*
-         * If LBR virtualization is disabled, the LBR MSRs are always kept in
-         * vmcb01. If LBR virtualization is enabled and L1 is running VMs of
-         * its own, the MSRs are moved between vmcb01 and vmcb02 as needed.
-         */
-        return svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK ? svm->vmcb :
-                                                                   svm->vmcb01.ptr;
+        to_svm(vcpu)->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
 }
 
 void svm_update_lbrv(struct kvm_vcpu *vcpu)
 {
         struct vcpu_svm *svm = to_svm(vcpu);
         bool current_enable_lbrv = svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK;
-        bool enable_lbrv = (svm_get_lbr_vmcb(svm)->save.dbgctl & DEBUGCTLMSR_LBR) ||
+        bool enable_lbrv = (svm->vmcb->save.dbgctl & DEBUGCTLMSR_LBR) ||
                            (is_guest_mode(vcpu) && guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
                            (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK));
 
@@ -2738,19 +2712,19 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                 msr_info->data = svm->tsc_aux;
                 break;
         case MSR_IA32_DEBUGCTLMSR:
-                msr_info->data = svm_get_lbr_vmcb(svm)->save.dbgctl;
+                msr_info->data = svm->vmcb->save.dbgctl;
                 break;
         case MSR_IA32_LASTBRANCHFROMIP:
-                msr_info->data = svm_get_lbr_vmcb(svm)->save.br_from;
+                msr_info->data = svm->vmcb->save.br_from;
                 break;
         case MSR_IA32_LASTBRANCHTOIP:
-                msr_info->data = svm_get_lbr_vmcb(svm)->save.br_to;
+                msr_info->data = svm->vmcb->save.br_to;
                 break;
         case MSR_IA32_LASTINTFROMIP:
-                msr_info->data = svm_get_lbr_vmcb(svm)->save.last_excp_from;
+                msr_info->data = svm->vmcb->save.last_excp_from;
                 break;
         case MSR_IA32_LASTINTTOIP:
-                msr_info->data = svm_get_lbr_vmcb(svm)->save.last_excp_to;
+                msr_info->data = svm->vmcb->save.last_excp_to;
                 break;
         case MSR_VM_HSAVE_PA:
                 msr_info->data = svm->nested.hsave_msr;
@@ -3018,10 +2992,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
                 if (data & DEBUGCTL_RESERVED_BITS)
                         return 1;
 
-                if (svm_get_lbr_vmcb(svm)->save.dbgctl == data)
+                if (svm->vmcb->save.dbgctl == data)
                         break;
 
-                svm_get_lbr_vmcb(svm)->save.dbgctl = data;
+                svm->vmcb->save.dbgctl = data;
                 vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
                 svm_update_lbrv(vcpu);
                 break;
-- 
2.51.2.1041.gc1ab5b90ca-goog
From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Maxim Levitsky, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed, stable@vger.kernel.org
Subject: [PATCH 4/6] KVM: SVM: Switch svm_copy_lbrs() to a macro
Date: Sat, 8 Nov 2025 00:45:22 +0000
Message-ID: <20251108004524.1600006-5-yosry.ahmed@linux.dev>
In-Reply-To: <20251108004524.1600006-1-yosry.ahmed@linux.dev>
References: <20251108004524.1600006-1-yosry.ahmed@linux.dev>

In preparation for using svm_copy_lbrs() with 'struct vmcb_save_area'
without a containing 'struct vmcb', and later even
'struct vmcb_save_area_cached', make it a macro. Pull the call to
vmcb_mark_dirty() out to the callers.

Macros are generally less preferable than functions, mainly because of the
loss of type safety. However, in this case a simple macro copying a few
fields is better than copy-pasting the same five lines of code in
different places.

On the bright side, pulling the vmcb_mark_dirty() calls out to the callers
makes it clear that in one case vmcb_mark_dirty() was being called on
VMCB12. It is not architecturally defined for the CPU to clear arbitrary
clean bits, and it is not needed, so drop that one call. Technically, this
fixes the non-architectural behavior of clearing a clean bit in VMCB12.
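For comparison, the type-safe alternative weighed above would look roughly
like the helper below. The struct here is a reduced stand-in for the
kernel's 'struct vmcb_save_area' (only the five LBR-related fields), so
this is an illustration of the tradeoff rather than code from the series:

#include <stdint.h>

/* Reduced stand-in: only the fields svm_copy_lbrs() actually touches. */
struct lbr_save_area {
        uint64_t dbgctl;
        uint64_t br_from, br_to;
        uint64_t last_excp_from, last_excp_to;
};

/*
 * A typed inline helper is safer than the macro, but it only accepts this
 * one struct type; the macro variant also works for any other struct that
 * happens to have the same five members (e.g. a cached copy of the save
 * area), which is the reason the patch goes the macro route.
 */
static inline void copy_lbrs(struct lbr_save_area *to, const struct lbr_save_area *from)
{
        to->dbgctl = from->dbgctl;
        to->br_from = from->br_from;
        to->br_to = from->br_to;
        to->last_excp_from = from->last_excp_from;
        to->last_excp_to = from->last_excp_to;
}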
Fixes: d20c796ca370 ("KVM: x86: nSVM: implement nested LBR virtualization")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed
---
 arch/x86/kvm/svm/nested.c | 16 ++++++++++------
 arch/x86/kvm/svm/svm.c    | 11 -----------
 arch/x86/kvm/svm/svm.h    | 10 +++++++++-
 3 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c81005b245222..e7861392f2fcd 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -676,10 +676,12 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
                  * Reserved bits of DEBUGCTL are ignored. Be consistent with
                  * svm_set_msr's definition of reserved bits.
                  */
-                svm_copy_lbrs(vmcb02, vmcb12);
+                svm_copy_lbrs(&vmcb02->save, &vmcb12->save);
+                vmcb_mark_dirty(vmcb02, VMCB_LBR);
                 vmcb02->save.dbgctl &= ~DEBUGCTL_RESERVED_BITS;
         } else {
-                svm_copy_lbrs(vmcb02, vmcb01);
+                svm_copy_lbrs(&vmcb02->save, &vmcb01->save);
+                vmcb_mark_dirty(vmcb02, VMCB_LBR);
         }
         svm_update_lbrv(&svm->vcpu);
 }
@@ -1186,10 +1188,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
         kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
         if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
-                     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK)))
-                svm_copy_lbrs(vmcb12, vmcb02);
-        else
-                svm_copy_lbrs(vmcb01, vmcb02);
+                     (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+                svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
+        } else {
+                svm_copy_lbrs(&vmcb01->save, &vmcb02->save);
+                vmcb_mark_dirty(vmcb01, VMCB_LBR);
+        }
 
         svm_update_lbrv(vcpu);
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index fc42bcdbb5200..9eb112f0e61f0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -795,17 +795,6 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
          */
 }
 
-void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
-{
-        to_vmcb->save.dbgctl = from_vmcb->save.dbgctl;
-        to_vmcb->save.br_from = from_vmcb->save.br_from;
-        to_vmcb->save.br_to = from_vmcb->save.br_to;
-        to_vmcb->save.last_excp_from = from_vmcb->save.last_excp_from;
-        to_vmcb->save.last_excp_to = from_vmcb->save.last_excp_to;
-
-        vmcb_mark_dirty(to_vmcb, VMCB_LBR);
-}
-
 static void __svm_enable_lbrv(struct kvm_vcpu *vcpu)
 {
         to_svm(vcpu)->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index c2acaa49ee1c5..e510c8183bd87 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -687,8 +687,16 @@ static inline void *svm_vcpu_alloc_msrpm(void)
         return svm_alloc_permissions_map(MSRPM_SIZE, GFP_KERNEL_ACCOUNT);
 }
 
+#define svm_copy_lbrs(to, from)                                 \
+({                                                              \
+        (to)->dbgctl = (from)->dbgctl;                          \
+        (to)->br_from = (from)->br_from;                        \
+        (to)->br_to = (from)->br_to;                            \
+        (to)->last_excp_from = (from)->last_excp_from;          \
+        (to)->last_excp_to = (from)->last_excp_to;              \
+})
+
 void svm_vcpu_free_msrpm(void *msrpm);
-void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
 void svm_enable_lbrv(struct kvm_vcpu *vcpu);
 void svm_update_lbrv(struct kvm_vcpu *vcpu);
 
-- 
2.51.2.1041.gc1ab5b90ca-goog
From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Maxim Levitsky, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed, stable@vger.kernel.org
Subject: [PATCH 5/6] KVM: SVM: Add missing save/restore handling of LBR MSRs
Date: Sat, 8 Nov 2025 00:45:23 +0000
Message-ID: <20251108004524.1600006-6-yosry.ahmed@linux.dev>
In-Reply-To: <20251108004524.1600006-1-yosry.ahmed@linux.dev>
References: <20251108004524.1600006-1-yosry.ahmed@linux.dev>

MSR_IA32_DEBUGCTLMSR and the LBR MSRs are currently not enumerated by
KVM_GET_MSR_INDEX_LIST, and the LBR MSRs cannot be set with KVM_SET_MSRS,
so save/restore of this state is completely broken. Fix it by adding the
MSRs to msrs_to_save_base, and by allowing writes to the LBR MSRs from
host userspace only (the guest still sees them as read-only MSRs).

Additionally, to correctly restore L1's LBRs while L2 is running, make
sure the LBRs are copied from the captured VMCB01 save area in
svm_copy_vmrun_state().
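To make the userspace side concrete, this is roughly how a VMM would move
this state with the existing ioctls (a sketch only: error handling is
omitted, the MSR index constants are the architectural values, and vcpu_fd
is assumed to be an already-created KVM vCPU file descriptor):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_IA32_DEBUGCTLMSR      0x000001d9
#define MSR_IA32_LASTBRANCHFROMIP 0x000001db
#define MSR_IA32_LASTBRANCHTOIP   0x000001dc
#define MSR_IA32_LASTINTFROMIP    0x000001dd
#define MSR_IA32_LASTINTTOIP      0x000001de

/* kvm_msrs has a flexible entries[] array, so place storage right after it. */
struct lbr_msrs {
        struct kvm_msrs hdr;
        struct kvm_msr_entry entries[5];
};

/* Read DEBUGCTL and the LBR MSRs from a vCPU (save side). */
static void save_lbrs(int vcpu_fd, struct lbr_msrs *m)
{
        static const __u32 idx[5] = {
                MSR_IA32_DEBUGCTLMSR,
                MSR_IA32_LASTBRANCHFROMIP, MSR_IA32_LASTBRANCHTOIP,
                MSR_IA32_LASTINTFROMIP, MSR_IA32_LASTINTTOIP,
        };

        memset(m, 0, sizeof(*m));
        m->hdr.nmsrs = 5;
        for (int i = 0; i < 5; i++)
                m->entries[i].index = idx[i];
        ioctl(vcpu_fd, KVM_GET_MSRS, &m->hdr);
}

/* Write the same set back into the destination vCPU (restore side). */
static void restore_lbrs(int vcpu_fd, struct lbr_msrs *m)
{
        ioctl(vcpu_fd, KVM_SET_MSRS, &m->hdr);
}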
Fixes: 24e09cbf480a ("KVM: SVM: enable LBR virtualization")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed
Reported-by: Jim Mattson
---
 arch/x86/kvm/svm/nested.c |  3 +++
 arch/x86/kvm/svm/svm.c    | 20 ++++++++++++++++++++
 arch/x86/kvm/x86.c        |  3 +++
 3 files changed, 26 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index e7861392f2fcd..955de5d90f48e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1057,6 +1057,9 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
                 to_save->isst_addr = from_save->isst_addr;
                 to_save->ssp = from_save->ssp;
         }
+
+        if (lbrv)
+                svm_copy_lbrs(to_save, from_save);
 }
 
 void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9eb112f0e61f0..45faf892a0431 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2988,6 +2988,26 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
                 vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
                 svm_update_lbrv(vcpu);
                 break;
+        case MSR_IA32_LASTBRANCHFROMIP:
+                if (!msr->host_initiated)
+                        return 1;
+                svm->vmcb->save.br_from = data;
+                break;
+        case MSR_IA32_LASTBRANCHTOIP:
+                if (!msr->host_initiated)
+                        return 1;
+                svm->vmcb->save.br_to = data;
+                break;
+        case MSR_IA32_LASTINTFROMIP:
+                if (!msr->host_initiated)
+                        return 1;
+                svm->vmcb->save.last_excp_from = data;
+                break;
+        case MSR_IA32_LASTINTTOIP:
+                if (!msr->host_initiated)
+                        return 1;
+                svm->vmcb->save.last_excp_to = data;
+                break;
         case MSR_VM_HSAVE_PA:
                 /*
                  * Old kernels did not validate the value written to
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9c2e28028c2b5..1d36a0f782c49 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -348,6 +348,9 @@ static const u32 msrs_to_save_base[] = {
         MSR_IA32_U_CET, MSR_IA32_S_CET,
         MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP, MSR_IA32_PL3_SSP,
         MSR_IA32_INT_SSP_TAB,
+        MSR_IA32_DEBUGCTLMSR,
+        MSR_IA32_LASTBRANCHFROMIP, MSR_IA32_LASTBRANCHTOIP,
+        MSR_IA32_LASTINTFROMIP, MSR_IA32_LASTINTTOIP,
 };
 
 static const u32 msrs_to_save_pmu[] = {
-- 
2.51.2.1041.gc1ab5b90ca-goog
From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, Maxim Levitsky, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Subject: [PATCH 6/6] KVM: selftests: Add a test for LBR save/restore (ft. nested)
Date: Sat, 8 Nov 2025 00:45:24 +0000
Message-ID: <20251108004524.1600006-7-yosry.ahmed@linux.dev>
In-Reply-To: <20251108004524.1600006-1-yosry.ahmed@linux.dev>
References: <20251108004524.1600006-1-yosry.ahmed@linux.dev>

Add a selftest that exercises save/restore while LBRs are in use in both
L1 and L2, and make sure all LBRs remain intact.

Signed-off-by: Yosry Ahmed
---
 tools/testing/selftests/kvm/Makefile.kvm       |   1 +
 .../selftests/kvm/include/x86/processor.h      |   5 +
 .../selftests/kvm/x86/svm_lbr_nested_state.c   | 155 ++++++++++++++++++
 3 files changed, 161 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 148d427ff24be..9a19554ffd3c1 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -105,6 +105,7 @@ TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
 TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
 TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
 TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
+TEST_GEN_PROGS_x86 += x86/svm_lbr_nested_state
 TEST_GEN_PROGS_x86 += x86/tsc_scaling_sync
 TEST_GEN_PROGS_x86 += x86/sync_regs_test
 TEST_GEN_PROGS_x86 += x86/ucna_injection_test
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 51cd84b9ca664..aee4b83c47b19 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1367,6 +1367,11 @@ static inline bool kvm_is_ignore_msrs(void)
         return get_kvm_param_bool("ignore_msrs");
 }
 
+static inline bool kvm_is_lbrv_enabled(void)
+{
+        return !!get_kvm_amd_param_integer("lbrv");
+}
+
 uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
                                     int *level);
 uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);
diff --git a/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
new file mode 100644
index 0000000000000..a343279546fd8
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * svm_lbr_nested_state
+ *
+ * Test that LBRs are maintained correctly in both L1 and L2 during
+ * save/restore.
+ *
+ * Copyright (C) 2025, Google, Inc.
+ */
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+
+
+#define L2_GUEST_STACK_SIZE 64
+
+#define DO_BRANCH() asm volatile("jmp 1f\n 1: nop")
+
+struct lbr_branch {
+        u64 from, to;
+};
+
+volatile struct lbr_branch l2_branch;
+
+#define RECORD_BRANCH(b, s)                                             \
+({                                                                      \
+        wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);                   \
+        DO_BRANCH();                                                    \
+        (b)->from = rdmsr(MSR_IA32_LASTBRANCHFROMIP);                   \
+        (b)->to = rdmsr(MSR_IA32_LASTBRANCHTOIP);                       \
+        /* Disable LBR right after to avoid overriding the IPs */       \
+        wrmsr(MSR_IA32_DEBUGCTLMSR, 0);                                 \
+                                                                        \
+        GUEST_ASSERT_NE((b)->from, 0);                                  \
+        GUEST_ASSERT_NE((b)->to, 0);                                    \
+        GUEST_PRINTF("%s: (0x%lx, 0x%lx)\n", (s), (b)->from, (b)->to);  \
+}) \
+
+#define CHECK_BRANCH_MSRS(b)                                            \
+({                                                                      \
+        GUEST_ASSERT_EQ((b)->from, rdmsr(MSR_IA32_LASTBRANCHFROMIP));   \
+        GUEST_ASSERT_EQ((b)->to, rdmsr(MSR_IA32_LASTBRANCHTOIP));       \
+})
+
+#define CHECK_BRANCH_VMCB(b, vmcb)                                      \
+({                                                                      \
+        GUEST_ASSERT_EQ((b)->from, vmcb->save.br_from);                 \
+        GUEST_ASSERT_EQ((b)->to, vmcb->save.br_to);                     \
+}) \
+
+static void l2_guest_code(struct svm_test_data *svm)
+{
+        /* Record a branch, trigger save/restore, and make sure LBRs are intact */
+        RECORD_BRANCH(&l2_branch, "L2 branch");
+        GUEST_SYNC(true);
+        CHECK_BRANCH_MSRS(&l2_branch);
+        vmmcall();
+}
+
+static void l1_guest_code(struct svm_test_data *svm, bool nested_lbrv)
+{
+        unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+        struct vmcb *vmcb = svm->vmcb;
+        struct lbr_branch l1_branch;
+
+        /* Record a branch, trigger save/restore, and make sure LBRs are intact */
+        RECORD_BRANCH(&l1_branch, "L1 branch");
+        GUEST_SYNC(true);
+        CHECK_BRANCH_MSRS(&l1_branch);
+
+        /* Run L2, which will also do the same */
+        generic_svm_setup(svm, l2_guest_code,
+                          &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+        if (nested_lbrv)
+                vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+        else
+                vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
+
+        run_guest(vmcb, svm->vmcb_gpa);
+        GUEST_ASSERT(svm->vmcb->control.exit_code == SVM_EXIT_VMMCALL);
+
+        /* Trigger save/restore one more time before checking, just for kicks */
+        GUEST_SYNC(true);
+
+        /*
+         * If LBR_CTL_ENABLE is set, L1 and L2 should have separate LBR MSRs, so
+         * expect L1's LBRs to remain intact and L2 LBRs to be in the VMCB.
+         * Otherwise, the MSRs are shared between L1 & L2 so expect L2's LBRs.
+         */
+        if (nested_lbrv) {
+                CHECK_BRANCH_MSRS(&l1_branch);
+                CHECK_BRANCH_VMCB(&l2_branch, vmcb);
+        } else {
+                CHECK_BRANCH_MSRS(&l2_branch);
+        }
+        GUEST_DONE();
+}
+
"enabled" : "disab= led"); + + vm =3D vm_create_with_one_vcpu(&vcpu, l1_guest_code); + vcpu_alloc_svm(vm, &svm_gva); + vcpu_args_set(vcpu, 2, svm_gva, nested_lbrv); + + for (;;) { + vcpu_run(vcpu); + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); + switch (get_ucall(vcpu, &uc)) { + case UCALL_SYNC: + /* Save the vCPU state and restore it in a new VM on sync */ + pr_info("Guest triggered save/restore.\n"); + state =3D vcpu_save_state(vcpu); + kvm_vm_release(vm); + vcpu =3D vm_recreate_with_one_vcpu(vm); + vcpu_load_state(vcpu, state); + break; + case UCALL_ABORT: + REPORT_GUEST_ASSERT(uc); + /* NOT REACHED */ + case UCALL_DONE: + goto done; + case UCALL_PRINTF: + pr_info("%s", uc.buffer); + break; + default: + TEST_FAIL("Unknown ucall %lu", uc.cmd); + } + } +done: + if (state) + kvm_x86_state_cleanup(state); + kvm_vm_free(vm); +} + +int main(int argc, char *argv[]) +{ + TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM)); + TEST_REQUIRE(kvm_is_lbrv_enabled()); + + test_lbrv_nested_state(/*nested_lbrv=3D*/false); + test_lbrv_nested_state(/*nested_lbrv=3D*/true); + + return 0; +} --=20 2.51.2.1041.gc1ab5b90ca-goog