Date: Fri, 02 May 2025 09:04:27 -0000
From: "tip-bot2 for Xin Li (Intel)"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: x86/merge] x86/msr: Move rdtsc{,_ordered}() to <asm/tsc.h>
Cc: "Xin Li (Intel)", Ingo Molnar, Dave Hansen, "Peter Zijlstra (Intel)",
    Andy Lutomirski, Brian Gerst, Juergen Gross, "H. Peter Anvin",
    Linus Torvalds, Kees Cook, Borislav Petkov, Thomas Gleixner,
    Josh Poimboeuf, Uros Bizjak, x86@kernel.org,
    linux-kernel@vger.kernel.org
In-Reply-To: <20250427092027.1598740-3-xin@zytor.com>
References: <20250427092027.1598740-3-xin@zytor.com>
MIME-Version: 1.0
Message-ID: <174617666745.22196.15200555517786851071.tip-bot2@tip-bot2>
Content-Type: text/plain; charset="utf-8"

The following commit has been merged into the x86/merge branch of tip:

Commit-ID:     288a4ff0ad29d1a9391b8a111a4b6da51da3aa85
Gitweb:        https://git.kernel.org/tip/288a4ff0ad29d1a9391b8a111a4b6da51da3aa85
Author:        Xin Li (Intel)
AuthorDate:    Fri, 02 May 2025 10:20:14 +02:00
Committer:     Ingo Molnar
CommitterDate: Fri, 02 May 2025 10:24:39 +02:00

x86/msr: Move rdtsc{,_ordered}() to <asm/tsc.h>

Relocate rdtsc{,_ordered}() from <asm/msr.h> to <asm/tsc.h>.

[ mingo: Do not remove the <asm/msr.h> inclusion from <asm/tsc.h>
  just yet, to reduce -next breakages. We can do this later on,
  separately, shortly before the next -rc1. ]

Signed-off-by: Xin Li (Intel)
Signed-off-by: Ingo Molnar
Acked-by: Dave Hansen
Acked-by: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Cc: Brian Gerst
Cc: Juergen Gross
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Kees Cook
Cc: Borislav Petkov
Cc: Thomas Gleixner
Cc: Josh Poimboeuf
Cc: Uros Bizjak
Link: https://lore.kernel.org/r/20250427092027.1598740-3-xin@zytor.com
---
 arch/x86/include/asm/msr.h | 54 +------------------------------------
 arch/x86/include/asm/tsc.h | 55 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 55 insertions(+), 54 deletions(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 35a78d2..f5c0969 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -153,60 +153,6 @@ native_write_msr_safe(u32 msr, u32 low, u32 high)
 extern int rdmsr_safe_regs(u32 regs[8]);
 extern int wrmsr_safe_regs(u32 regs[8]);
 
-/**
- * rdtsc() - returns the current TSC without ordering constraints
- *
- * rdtsc() returns the result of RDTSC as a 64-bit integer. The
- * only ordering constraint it supplies is the ordering implied by
- * "asm volatile": it will put the RDTSC in the place you expect. The
- * CPU can and will speculatively execute that RDTSC, though, so the
- * results can be non-monotonic if compared on different CPUs.
- */
-static __always_inline u64 rdtsc(void)
-{
-	EAX_EDX_DECLARE_ARGS(val, low, high);
-
-	asm volatile("rdtsc" : EAX_EDX_RET(val, low, high));
-
-	return EAX_EDX_VAL(val, low, high);
-}
-
-/**
- * rdtsc_ordered() - read the current TSC in program order
- *
- * rdtsc_ordered() returns the result of RDTSC as a 64-bit integer.
- * It is ordered like a load to a global in-memory counter. It should
- * be impossible to observe non-monotonic rdtsc_unordered() behavior
- * across multiple CPUs as long as the TSC is synced.
- */
-static __always_inline u64 rdtsc_ordered(void)
-{
-	EAX_EDX_DECLARE_ARGS(val, low, high);
-
-	/*
-	 * The RDTSC instruction is not ordered relative to memory
-	 * access. The Intel SDM and the AMD APM are both vague on this
-	 * point, but empirically an RDTSC instruction can be
-	 * speculatively executed before prior loads. An RDTSC
-	 * immediately after an appropriate barrier appears to be
-	 * ordered as a normal load, that is, it provides the same
-	 * ordering guarantees as reading from a global memory location
-	 * that some other imaginary CPU is updating continuously with a
-	 * time stamp.
-	 *
-	 * Thus, use the preferred barrier on the respective CPU, aiming for
-	 * RDTSCP as the default.
-	 */
-	asm volatile(ALTERNATIVE_2("rdtsc",
-				   "lfence; rdtsc", X86_FEATURE_LFENCE_RDTSC,
-				   "rdtscp", X86_FEATURE_RDTSCP)
-			: EAX_EDX_RET(val, low, high)
-			/* RDTSCP clobbers ECX with MSR_TSC_AUX. */
-			:: "ecx");
-
-	return EAX_EDX_VAL(val, low, high);
-}
-
 static inline u64 native_read_pmc(int counter)
 {
 	EAX_EDX_DECLARE_ARGS(val, low, high);
diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 94408a7..4f7f09f 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -5,10 +5,65 @@
 #ifndef _ASM_X86_TSC_H
 #define _ASM_X86_TSC_H
 
+#include <asm/asm.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/msr.h>
 
+/**
+ * rdtsc() - returns the current TSC without ordering constraints
+ *
+ * rdtsc() returns the result of RDTSC as a 64-bit integer. The
+ * only ordering constraint it supplies is the ordering implied by
+ * "asm volatile": it will put the RDTSC in the place you expect. The
+ * CPU can and will speculatively execute that RDTSC, though, so the
+ * results can be non-monotonic if compared on different CPUs.
+ */
+static __always_inline u64 rdtsc(void)
+{
+	EAX_EDX_DECLARE_ARGS(val, low, high);
+
+	asm volatile("rdtsc" : EAX_EDX_RET(val, low, high));
+
+	return EAX_EDX_VAL(val, low, high);
+}
+
+/**
+ * rdtsc_ordered() - read the current TSC in program order
+ *
+ * rdtsc_ordered() returns the result of RDTSC as a 64-bit integer.
+ * It is ordered like a load to a global in-memory counter. It should
+ * be impossible to observe non-monotonic rdtsc_unordered() behavior
+ * across multiple CPUs as long as the TSC is synced.
+ */
+static __always_inline u64 rdtsc_ordered(void)
+{
+	EAX_EDX_DECLARE_ARGS(val, low, high);
+
+	/*
+	 * The RDTSC instruction is not ordered relative to memory
+	 * access. The Intel SDM and the AMD APM are both vague on this
+	 * point, but empirically an RDTSC instruction can be
+	 * speculatively executed before prior loads. An RDTSC
+	 * immediately after an appropriate barrier appears to be
+	 * ordered as a normal load, that is, it provides the same
+	 * ordering guarantees as reading from a global memory location
+	 * that some other imaginary CPU is updating continuously with a
+	 * time stamp.
+	 *
+	 * Thus, use the preferred barrier on the respective CPU, aiming for
+	 * RDTSCP as the default.
+	 */
+	asm volatile(ALTERNATIVE_2("rdtsc",
+				   "lfence; rdtsc", X86_FEATURE_LFENCE_RDTSC,
+				   "rdtscp", X86_FEATURE_RDTSCP)
+			: EAX_EDX_RET(val, low, high)
+			/* RDTSCP clobbers ECX with MSR_TSC_AUX. */
+			:: "ecx");
+
+	return EAX_EDX_VAL(val, low, high);
+}
+
 /*
  * Standard way to access the cycle counter.
  */