From: "tip-bot2 for Thomas Gleixner"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Thomas Gleixner, "Peter Zijlstra (Intel)", Michael Kelley, Oleksandr Natalenko, Helge Deller, "Guilherme G. Piccoli", x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: smp/core] x86/smpboot: Make TSC synchronization function call based
Date: Tue, 16 May 2023 09:10:00 -0000
Message-ID: <168422820027.404.885982874682038365.tip-bot2@tip-bot2>
In-Reply-To: <20230512205256.148255496@linutronix.de>
References: <20230512205256.148255496@linutronix.de>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the smp/core branch of tip:

Commit-ID:     9d349d47f0e39b4d1b68793ded2459daa1f948f0
Gitweb:        https://git.kernel.org/tip/9d349d47f0e39b4d1b68793ded2459daa1f948f0
Author:        Thomas Gleixner
AuthorDate:    Fri, 12 May 2023 23:07:17 +02:00
Committer:     Peter Zijlstra
CommitterDate: Mon, 15 May 2023 13:44:53 +02:00

x86/smpboot: Make TSC synchronization function call based

Spin-waiting on the control CPU until the AP reaches the TSC
synchronization code is just a waste, especially in the case that there
is no synchronization required.
As the synchronization has to run with interrupts disabled, the control
CPU part can just be done from an SMP function call. The upcoming AP
issues that call asynchronously, and only in the case that
synchronization is required.

Signed-off-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Tested-by: Michael Kelley
Tested-by: Oleksandr Natalenko
Tested-by: Helge Deller # parisc
Tested-by: Guilherme G. Piccoli # Steam Deck
Link: https://lore.kernel.org/r/20230512205256.148255496@linutronix.de
---
 arch/x86/include/asm/tsc.h |  2 --
 arch/x86/kernel/smpboot.c  | 20 +++-----------------
 arch/x86/kernel/tsc_sync.c | 36 +++++++++++-------------------------
 3 files changed, 14 insertions(+), 44 deletions(-)

diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index fbdc3d9..dc1b03b 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -55,12 +55,10 @@ extern bool tsc_async_resets;
 #ifdef CONFIG_X86_TSC
 extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
-extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
-static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
 #endif
 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 8b07c6e..b2f44a8 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -275,11 +275,7 @@ static void notrace start_secondary(void *unused)
 	 */
 	smp_callin();
 
-	/*
-	 * Check TSC synchronization with the control CPU, which will do
-	 * its part of this from wait_cpu_online(), making it an implicit
-	 * synchronization point.
-	 */
+	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
 
 	/*
@@ -1141,21 +1137,11 @@ static void wait_cpu_callin(unsigned int cpu)
 }
 
 /*
- * Bringup step four: Synchronize the TSC and wait for the target AP
- * to reach set_cpu_online() in start_secondary().
+ * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
 {
-	unsigned long flags;
-
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
-
 	/*
 	 * Wait for the AP to mark itself online, so the core caller
 	 * can drop sparse_irq_lock.
diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
index 9452dc9..bbc440c 100644
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -245,7 +245,6 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
  */
 static atomic_t start_count;
 static atomic_t stop_count;
-static atomic_t skip_test;
 static atomic_t test_runs;
 
 /*
@@ -344,21 +343,14 @@ static inline unsigned int loop_timeout(int cpu)
 }
 
 /*
- * Source CPU calls into this - it waits for the freshly booted
- * target CPU to arrive and then starts the measurement:
+ * The freshly booted CPU initiates this via an async SMP function call.
  */
-void check_tsc_sync_source(int cpu)
+static void check_tsc_sync_source(void *__cpu)
 {
+	unsigned int cpu = (unsigned long)__cpu;
 	int cpus = 2;
 
 	/*
-	 * No need to check if we already know that the TSC is not
-	 * synchronized or if we have no TSC.
-	 */
-	if (unsynchronized_tsc())
-		return;
-
-	/*
 	 * Set the maximum number of test runs to
 	 *  1 if the CPU does not provide the TSC_ADJUST MSR
 	 *  3 if the MSR is available, so the target can try to adjust
@@ -368,16 +360,9 @@
 	else
 		atomic_set(&test_runs, 3);
 retry:
-	/*
-	 * Wait for the target to start or to skip the test:
-	 */
-	while (atomic_read(&start_count) != cpus - 1) {
-		if (atomic_read(&skip_test) > 0) {
-			atomic_set(&skip_test, 0);
-			return;
-		}
+	/* Wait for the target to start. */
+	while (atomic_read(&start_count) != cpus - 1)
 		cpu_relax();
-	}
 
 	/*
 	 * Trigger the target to continue into the measurement too:
@@ -397,14 +382,14 @@ retry:
 	if (!nr_warps) {
 		atomic_set(&test_runs, 0);
 
-		pr_debug("TSC synchronization [CPU#%d -> CPU#%d]: passed\n",
+		pr_debug("TSC synchronization [CPU#%d -> CPU#%u]: passed\n",
 			smp_processor_id(), cpu);
 
 	} else if (atomic_dec_and_test(&test_runs) || random_warps) {
 		/* Force it to 0 if random warps brought us here */
 		atomic_set(&test_runs, 0);
 
-		pr_warn("TSC synchronization [CPU#%d -> CPU#%d]:\n",
+		pr_warn("TSC synchronization [CPU#%d -> CPU#%u]:\n",
 			smp_processor_id(), cpu);
 		pr_warn("Measured %Ld cycles TSC warp between CPUs, "
 			"turning off TSC clock.\n", max_warp);
@@ -457,11 +442,12 @@ void check_tsc_sync_target(void)
 	 * SoCs the TSC is frequency synchronized, but still the TSC ADJUST
 	 * register might have been wreckaged by the BIOS..
 	 */
-	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) {
-		atomic_inc(&skip_test);
+	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable)
 		return;
-	}
 
+	/* Kick the control CPU into the TSC synchronization function */
+	smp_call_function_single(cpumask_first(cpu_online_mask), check_tsc_sync_source,
+				 (unsigned long *)(unsigned long)cpu, 0);
retry:
 	/*
 	 * Register this CPU's participation and wait for the
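
As an illustration of the call shape the last hunk adopts, here is a minimal,
hypothetical module sketch; it is not part of this series. The names
demo_sync_source(), demo_init() and the needs_work flag are made up for the
example, while smp_call_function_single(), cpumask_first()/cpu_online_mask,
get_cpu()/put_cpu() and pr_info() are the ordinary kernel APIs. The point it
shows is that the side which knows whether work is required issues an
asynchronous (wait == 0) SMP function call to the control CPU, instead of the
control CPU spin-waiting for a rendezvous:

/*
 * Illustrative sketch only: mimic the pattern of kicking the first online
 * CPU with an asynchronous SMP function call when, and only when, the
 * caller decides the work is needed.
 */
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/cpumask.h>
#include <linux/printk.h>

static void demo_sync_source(void *info)
{
	unsigned int requester = (unsigned long)info;

	/* Runs on the first online CPU with interrupts disabled (IPI if remote). */
	pr_info("demo: source side on CPU%u, requested by CPU%u\n",
		smp_processor_id(), requester);
}

static int __init demo_init(void)
{
	unsigned int cpu = get_cpu();	/* pin so the CPU id stays valid */
	bool needs_work = true;		/* stand-in for the real condition */

	if (needs_work) {
		/* wait == 0: fire the call and continue immediately */
		smp_call_function_single(cpumask_first(cpu_online_mask),
					 demo_sync_source,
					 (void *)(unsigned long)cpu, 0);
	}
	put_cpu();
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Async SMP function call pattern sketch");

In the patch itself, the equivalent call site is check_tsc_sync_target() on the
booting AP kicking cpumask_first(cpu_online_mask) into check_tsc_sync_source(),
as shown in the final hunk above.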