From nobody Sat Feb 7 08:53:20 2026
Date: Fri, 02 May 2025 09:04:12 -0000
From: "tip-bot2 for Xin Li (Intel)"
Sender: tip-bot2@linutronix.de
Reply-To: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: x86/merge] x86/msr: Change the function type of native_read_msr_safe()
Cc: "Xin Li (Intel)", Ingo Molnar, "Peter Zijlstra (Intel)",
 Andy Lutomirski, Brian Gerst, David Woodhouse, "H. Peter Anvin",
 Josh Poimboeuf, Juergen Gross, Kees Cook, Linus Torvalds,
 Paolo Bonzini, Sean Christopherson, Stefano Stabellini, Uros Bizjak,
 Vitaly Kuznetsov, x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20250427092027.1598740-16-xin@zytor.com>
References: <20250427092027.1598740-16-xin@zytor.com>
Message-ID: <174617665306.22196.9443417057136673504.tip-bot2@tip-bot2>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The following commit has been merged into the x86/merge branch of tip:

Commit-ID:     502ad6e5a6196840976c4c84b2ea2f9769942fbe
Gitweb:        https://git.kernel.org/tip/502ad6e5a6196840976c4c84b2ea2f9769942fbe
Author:        Xin Li (Intel)
AuthorDate:    Sun, 27 Apr 2025 02:20:27 -07:00
Committer:     Ingo Molnar
CommitterDate: Fri, 02 May 2025 10:36:36 +02:00

x86/msr: Change the function type of native_read_msr_safe()

Modify the function type of native_read_msr_safe() to:

    int native_read_msr_safe(u32 msr, u64 *val)

This change makes the function return an error code instead of the MSR
value, aligning it with the type of native_write_msr_safe().
Consequently, their callers can check the results in the same way.

While at it, convert the leftover MSR data type "unsigned int" to u32.

Signed-off-by: Xin Li (Intel)
Signed-off-by: Ingo Molnar
Acked-by: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Cc: Brian Gerst
Cc: David Woodhouse
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Juergen Gross
Cc: Kees Cook
Cc: Linus Torvalds
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Stefano Stabellini
Cc: Uros Bizjak
Cc: Vitaly Kuznetsov
Link: https://lore.kernel.org/r/20250427092027.1598740-16-xin@zytor.com
---
 arch/x86/include/asm/msr.h            | 21 +++++++++++----------
 arch/x86/include/asm/paravirt.h       | 19 ++++++++-----------
 arch/x86/include/asm/paravirt_types.h |  6 +++---
 arch/x86/kvm/svm/svm.c                | 19 +++++++------------
 arch/x86/xen/enlighten_pv.c           | 13 ++++++++-----
 arch/x86/xen/pmu.c                    | 14 ++++++++------
 6 files changed, 45 insertions(+), 47 deletions(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index b244076..a9ce56f 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -113,18 +113,22 @@ static inline u64 native_read_msr(u32 msr)
 	return val;
 }
 
-static inline u64 native_read_msr_safe(u32 msr, int *err)
+static inline int native_read_msr_safe(u32 msr, u64 *p)
 {
+	int err;
 	EAX_EDX_DECLARE_ARGS(val, low, high);
 
 	asm volatile("1: rdmsr ; xor %[err],%[err]\n"
 		     "2:\n\t"
 		     _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_RDMSR_SAFE, %[err])
-		     : [err] "=r" (*err), EAX_EDX_RET(val, low, high)
+		     : [err] "=r" (err), EAX_EDX_RET(val, low, high)
 		     : "c" (msr));
 	if (tracepoint_enabled(read_msr))
-		do_trace_read_msr(msr, EAX_EDX_VAL(val, low, high), *err);
-	return EAX_EDX_VAL(val, low, high);
+		do_trace_read_msr(msr, EAX_EDX_VAL(val, low, high), err);
+
+	*p = EAX_EDX_VAL(val, low, high);
+
+	return err;
 }
 
 /* Can be uninlined because referenced by paravirt */
@@ -204,8 +208,8 @@ static inline int wrmsrq_safe(u32 msr, u64 val)
 /* rdmsr with exception handling */
 #define rdmsr_safe(msr, low, high)				\
 ({								\
-	int __err;						\
-	u64 __val = native_read_msr_safe((msr), &__err);	\
+	u64 __val;						\
+	int __err = native_read_msr_safe((msr), &__val);	\
 	(*low) = (u32)__val;					\
 	(*high) = (u32)(__val >> 32);				\
 	__err;							\
@@ -213,10 +217,7 @@ static inline int wrmsrq_safe(u32 msr, u64 val)
 
 static inline int rdmsrq_safe(u32 msr, u64 *p)
 {
-	int err;
-
-	*p = native_read_msr_safe(msr, &err);
-	return err;
+	return native_read_msr_safe(msr, p);
 }
 
 static __always_inline u64 rdpmc(int counter)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index edf23bd..03f680d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -175,7 +175,7 @@ static inline void __write_cr4(unsigned long x)
 	PVOP_VCALL1(cpu.write_cr4, x);
 }
 
-static inline u64 paravirt_read_msr(unsigned msr)
+static inline u64 paravirt_read_msr(u32 msr)
 {
 	return PVOP_CALL1(u64, cpu.read_msr, msr);
 }
@@ -185,9 +185,9 @@ static inline void paravirt_write_msr(u32 msr, u64 val)
 	PVOP_VCALL2(cpu.write_msr, msr, val);
 }
 
-static inline u64 paravirt_read_msr_safe(unsigned msr, int *err)
+static inline int paravirt_read_msr_safe(u32 msr, u64 *val)
 {
-	return PVOP_CALL2(u64, cpu.read_msr_safe, msr, err);
+	return PVOP_CALL2(int, cpu.read_msr_safe, msr, val);
 }
 
 static inline int paravirt_write_msr_safe(u32 msr, u64 val)
@@ -225,19 +225,16 @@ static inline int wrmsrq_safe(u32 msr, u64 val)
 /* rdmsr with exception handling */
 #define rdmsr_safe(msr, a, b)				\
 ({							\
-	int _err;					\
-	u64 _l = paravirt_read_msr_safe(msr, &_err);	\
+	u64 _l;						\
+	int _err = paravirt_read_msr_safe((msr), &_l);	\
 	(*a) = (u32)_l;					\
-	(*b) = _l >> 32;				\
+	(*b) = (u32)(_l >> 32);				\
 	_err;						\
 })
 
-static inline int rdmsrq_safe(unsigned msr, u64 *p)
+static __always_inline int rdmsrq_safe(u32 msr, u64 *p)
 {
-	int err;
-
-	*p = paravirt_read_msr_safe(msr, &err);
-	return err;
+	return paravirt_read_msr_safe(msr, p);
 }
 
 static __always_inline u64 rdpmc(int counter)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 78777b7..b08b9d3 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -91,14 +91,14 @@ struct pv_cpu_ops {
 		      unsigned int *ecx, unsigned int *edx);
 
 	/* Unsafe MSR operations.  These will warn or panic on failure. */
-	u64 (*read_msr)(unsigned int msr);
+	u64 (*read_msr)(u32 msr);
 	void (*write_msr)(u32 msr, u64 val);
 
 	/*
 	 * Safe MSR operations.
-	 * read sets err to 0 or -EIO.  write returns 0 or -EIO.
+	 * Returns 0 or -EIO.
 	 */
-	u64 (*read_msr_safe)(unsigned int msr, int *err);
+	int (*read_msr_safe)(u32 msr, u64 *val);
 	int (*write_msr_safe)(u32 msr, u64 val);
 
 	u64 (*read_pmc)(int counter);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 131f485..4c2a843 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -476,15 +476,13 @@ static void svm_inject_exception(struct kvm_vcpu *vcpu)
 
 static void svm_init_erratum_383(void)
 {
-	int err;
 	u64 val;
 
 	if (!static_cpu_has_bug(X86_BUG_AMD_TLB_MMATCH))
 		return;
 
 	/* Use _safe variants to not break nested virtualization */
-	val = native_read_msr_safe(MSR_AMD64_DC_CFG, &err);
-	if (err)
+	if (native_read_msr_safe(MSR_AMD64_DC_CFG, &val))
 		return;
 
 	val |= (1ULL << 47);
@@ -649,13 +647,12 @@ static int svm_enable_virtualization_cpu(void)
 	 * erratum is present everywhere).
 	 */
 	if (cpu_has(&boot_cpu_data, X86_FEATURE_OSVW)) {
-		uint64_t len, status = 0;
+		u64 len, status = 0;
 		int err;
 
-		len = native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &err);
+		err = native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &len);
 		if (!err)
-			status = native_read_msr_safe(MSR_AMD64_OSVW_STATUS,
-						      &err);
+			err = native_read_msr_safe(MSR_AMD64_OSVW_STATUS, &status);
 
 		if (err)
 			osvw_status = osvw_len = 0;
@@ -2146,14 +2143,13 @@ static int ac_interception(struct kvm_vcpu *vcpu)
 
 static bool is_erratum_383(void)
 {
-	int err, i;
+	int i;
 	u64 value;
 
 	if (!erratum_383_found)
 		return false;
 
-	value = native_read_msr_safe(MSR_IA32_MC0_STATUS, &err);
-	if (err)
+	if (native_read_msr_safe(MSR_IA32_MC0_STATUS, &value))
 		return false;
 
 	/* Bit 62 may or may not be set for this mce */
@@ -2166,8 +2162,7 @@ static bool is_erratum_383(void)
 	for (i = 0; i < 6; ++i)
 		native_write_msr_safe(MSR_IA32_MCx_STATUS(i), 0);
 
-	value = native_read_msr_safe(MSR_IA32_MCG_STATUS, &err);
-	if (!err) {
+	if (!native_read_msr_safe(MSR_IA32_MCG_STATUS, &value)) {
 		value &= ~(1ULL << 2);
 		native_write_msr_safe(MSR_IA32_MCG_STATUS, value);
 	}
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4fbe0bd..3be3835 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1087,7 +1087,7 @@ static void xen_write_cr4(unsigned long cr4)
 	native_write_cr4(cr4);
 }
 
-static u64 xen_do_read_msr(unsigned int msr, int *err)
+static u64 xen_do_read_msr(u32 msr, int *err)
 {
 	u64 val = 0;	/* Avoid uninitialized value for safe variant. */
 
@@ -1095,7 +1095,7 @@ static u64 xen_do_read_msr(unsigned int msr, int *err)
 		return val;
 
 	if (err)
-		val = native_read_msr_safe(msr, err);
+		*err = native_read_msr_safe(msr, &val);
 	else
 		val = native_read_msr(msr);
 
@@ -1160,9 +1160,12 @@ static void xen_do_write_msr(u32 msr, u64 val, int *err)
 	}
 }
 
-static u64 xen_read_msr_safe(unsigned int msr, int *err)
+static int xen_read_msr_safe(u32 msr, u64 *val)
 {
-	return xen_do_read_msr(msr, err);
+	int err;
+
+	*val = xen_do_read_msr(msr, &err);
+	return err;
 }
 
 static int xen_write_msr_safe(u32 msr, u64 val)
@@ -1174,7 +1177,7 @@ static int xen_write_msr_safe(u32 msr, u64 val)
 	return err;
 }
 
-static u64 xen_read_msr(unsigned int msr)
+static u64 xen_read_msr(u32 msr)
 {
 	int err;
 
diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index 043d72b..8f89ce0 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -319,11 +319,12 @@ static u64 xen_amd_read_pmc(int counter)
 	uint8_t xenpmu_flags = get_xenpmu_flags();
 
 	if (!xenpmu_data || !(xenpmu_flags & XENPMU_IRQ_PROCESSING)) {
-		uint32_t msr;
-		int err;
+		u32 msr;
+		u64 val;
 
 		msr = amd_counters_base + (counter * amd_msr_step);
-		return native_read_msr_safe(msr, &err);
+		native_read_msr_safe(msr, &val);
+		return val;
 	}
 
 	ctxt = &xenpmu_data->pmu.c.amd;
@@ -340,15 +341,16 @@ static u64 xen_intel_read_pmc(int counter)
 	uint8_t xenpmu_flags = get_xenpmu_flags();
 
 	if (!xenpmu_data || !(xenpmu_flags & XENPMU_IRQ_PROCESSING)) {
-		uint32_t msr;
-		int err;
+		u32 msr;
+		u64 val;
 
 		if (counter & (1 << INTEL_PMC_TYPE_SHIFT))
 			msr = MSR_CORE_PERF_FIXED_CTR0 + (counter & 0xffff);
 		else
 			msr = MSR_IA32_PERFCTR0 + counter;
 
-		return native_read_msr_safe(msr, &err);
+		native_read_msr_safe(msr, &val);
+		return val;
 	}
 
 	ctxt = &xenpmu_data->pmu.c.intel;