From: Yosry Ahmed
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Subject: [PATCH 06/11] KVM: SVM: switch svm_copy_lbrs() to a macro
Date: Tue, 4 Nov 2025 19:59:44 +0000
Message-ID: <20251104195949.3528411-7-yosry.ahmed@linux.dev>
In-Reply-To: <20251104195949.3528411-1-yosry.ahmed@linux.dev>
References: <20251104195949.3528411-1-yosry.ahmed@linux.dev>

In preparation for using svm_copy_lbrs() with instances of both
'struct vmcb_save_area' and 'struct vmcb_save_area_cached', turn it
into a macro and pull the call to vmcb_mark_dirty() out to the callers.

Macros are generally less desirable than functions, mainly because they
give up type safety. In this case, however, a simple macro that copies
a few fields is better than copy-pasting the same 5 lines of code in 6
different places (soon to be 7), or creating 3 different variants of
the function.
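As a rough illustration (not part of the diff below), the type-safe
alternative would be one helper per pair of types, each repeating the
same five assignments. The field names are the ones svm_copy_lbrs()
already copies; the helper name is made up, and the variants taking a
'struct vmcb_save_area_cached' are assumed:

	/*
	 * Hypothetical type-safe helper; two more variants taking a
	 * 'struct vmcb_save_area_cached' source or destination would
	 * be needed alongside it.
	 */
	static inline void svm_copy_lbrs_save(struct vmcb_save_area *to,
					      const struct vmcb_save_area *from)
	{
		to->dbgctl	   = from->dbgctl;
		to->br_from	   = from->br_from;
		to->br_to	   = from->br_to;
		to->last_excp_from = from->last_excp_from;
		to->last_excp_to   = from->last_excp_to;
	}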
On the bright side, pulling vmcb_mark_dirty() calls to the callers
makes it clear that in one case, vmcb_mark_dirty() was being called on
VMCB12. It is not architecturally defined for the CPU to clear
arbitrary clean bits, and it is not needed, so drop that one call.

Signed-off-by: Yosry Ahmed
---
 arch/x86/kvm/svm/nested.c | 11 +++++++----
 arch/x86/kvm/svm/svm.c    | 23 ++++++++---------------
 arch/x86/kvm/svm/svm.h    | 10 +++++++++-
 3 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 986e6382dc4fa..81d7a0ed71392 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -726,12 +726,14 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
 		 * Reserved bits of DEBUGCTL are ignored. Be consistent with
 		 * svm_set_msr's definition of reserved bits.
 		 */
-		svm_copy_lbrs(vmcb02, vmcb12);
+		svm_copy_lbrs(&vmcb02->save, &vmcb12->save);
+		vmcb_mark_dirty(vmcb02, VMCB_LBR);
 		vmcb02->save.dbgctl &= ~DEBUGCTL_RESERVED_BITS;
 		svm_update_lbrv(&svm->vcpu);
 
 	} else if (unlikely(vmcb01->control.misc_ctl2 & SVM_MISC_CTL2_LBR_CTL_ENABLE)) {
-		svm_copy_lbrs(vmcb02, vmcb01);
+		svm_copy_lbrs(&vmcb02->save, &vmcb01->save);
+		vmcb_mark_dirty(vmcb02, VMCB_LBR);
 	}
 }
 
@@ -1242,10 +1244,11 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 
 	if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
 		     (svm->nested.ctl.misc_ctl2 & SVM_MISC_CTL2_LBR_CTL_ENABLE))) {
-		svm_copy_lbrs(vmcb12, vmcb02);
+		svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
 		svm_update_lbrv(vcpu);
 	} else if (unlikely(vmcb01->control.misc_ctl2 & SVM_MISC_CTL2_LBR_CTL_ENABLE)) {
-		svm_copy_lbrs(vmcb01, vmcb02);
+		svm_copy_lbrs(&vmcb01->save, &vmcb02->save);
+		vmcb_mark_dirty(vmcb01, VMCB_LBR);
 		svm_update_lbrv(vcpu);
 	}
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 185f17ff2170b..07958dc7c62ba 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -795,17 +795,6 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
 	 */
 }
 
-void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
-{
-	to_vmcb->save.dbgctl = from_vmcb->save.dbgctl;
-	to_vmcb->save.br_from = from_vmcb->save.br_from;
-	to_vmcb->save.br_to = from_vmcb->save.br_to;
-	to_vmcb->save.last_excp_from = from_vmcb->save.last_excp_from;
-	to_vmcb->save.last_excp_to = from_vmcb->save.last_excp_to;
-
-	vmcb_mark_dirty(to_vmcb, VMCB_LBR);
-}
-
 void svm_enable_lbrv(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -814,8 +803,10 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
 	svm_recalc_lbr_msr_intercepts(vcpu);
 
 	/* Move the LBR msrs to the vmcb02 so that the guest can see them. */
-	if (is_guest_mode(vcpu))
-		svm_copy_lbrs(svm->vmcb, svm->vmcb01.ptr);
+	if (is_guest_mode(vcpu)) {
+		svm_copy_lbrs(&svm->vmcb->save, &svm->vmcb01.ptr->save);
+		vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+	}
 }
 
 static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
@@ -830,8 +821,10 @@ static void svm_disable_lbrv(struct kvm_vcpu *vcpu)
 	 * Move the LBR msrs back to the vmcb01 to avoid copying them
 	 * on nested guest entries.
 	 */
-	if (is_guest_mode(vcpu))
-		svm_copy_lbrs(svm->vmcb01.ptr, svm->vmcb);
+	if (is_guest_mode(vcpu)) {
+		svm_copy_lbrs(&svm->vmcb01.ptr->save, &svm->vmcb->save);
+		vmcb_mark_dirty(svm->vmcb01.ptr, VMCB_LBR);
+	}
 }
 
 static struct vmcb *svm_get_lbr_vmcb(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 26ba9472784eb..8577e35a7096a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -689,8 +689,16 @@ static inline void *svm_vcpu_alloc_msrpm(void)
 	return svm_alloc_permissions_map(MSRPM_SIZE, GFP_KERNEL_ACCOUNT);
 }
 
+#define svm_copy_lbrs(to, from)					\
+({								\
+	(to)->dbgctl = (from)->dbgctl;				\
+	(to)->br_from = (from)->br_from;			\
+	(to)->br_to = (from)->br_to;				\
+	(to)->last_excp_from = (from)->last_excp_from;		\
+	(to)->last_excp_to = (from)->last_excp_to;		\
+})
+
 void svm_vcpu_free_msrpm(void *msrpm);
-void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
 void svm_enable_lbrv(struct kvm_vcpu *vcpu);
 void svm_update_lbrv(struct kvm_vcpu *vcpu);
 
-- 
2.51.2.1026.g39e6a42477-goog