From: "tip-bot2 for Thomas Gleixner"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Thomas Gleixner, "Borislav Petkov (AMD)", x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: x86/microcode] x86/microcode: Sanitize __wait_for_cpus()
Date: Mon, 09 Oct 2023 12:29:47 -0000
Message-ID: <169685458791.3135.18135037272376022845.tip-bot2@tip-bot2>
In-Reply-To: <20231002115903.204251527@linutronix.de>
References: <20231002115903.204251527@linutronix.de>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the x86/microcode branch of tip:

Commit-ID:     89d6d026e43bd8b093f6238b569a0efc9485d19f
Gitweb:        https://git.kernel.org/tip/89d6d026e43bd8b093f6238b569a0efc9485d19f
Author:        Thomas Gleixner
AuthorDate:    Mon, 02 Oct 2023 13:59:59 +02:00
Committer:     Borislav Petkov (AMD)
CommitterDate: Fri, 06 Oct 2023 15:12:23 +02:00

x86/microcode: Sanitize __wait_for_cpus()

The code is too complicated for no reason:

 - The return value is pointless as this is a strict boolean.

 - It's way simpler to count down from num_online_cpus() and check for
   zero.

 - The timeout argument is pointless as this is always one second.
 - Touching the NMI watchdog every 100ns does not make any sense, neither
   does checking every 100ns. This is really not a hotpath operation.

Preload the atomic counter with the number of online CPUs and simplify the
whole timeout logic. Delay for one microsecond and touch the NMI watchdog
once per millisecond.

Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/20231002115903.204251527@linutronix.de
---
 arch/x86/kernel/cpu/microcode/core.c | 39 +++++++++++----------------
 1 file changed, 17 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index d82d763..2b8e9b6 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -281,31 +281,26 @@ static struct platform_device *microcode_pdev;
  * requirement can be relaxed in the future. Right now, this is conservative
  * and good.
  */
-#define SPINUNIT 100 /* 100 nsec */
+static atomic_t late_cpus_in, late_cpus_out;
 
-
-static atomic_t late_cpus_in;
-static atomic_t late_cpus_out;
-
-static int __wait_for_cpus(atomic_t *t, long long timeout)
+static bool wait_for_cpus(atomic_t *cnt)
 {
-	int all_cpus = num_online_cpus();
+	unsigned int timeout;
 
-	atomic_inc(t);
+	WARN_ON_ONCE(atomic_dec_return(cnt) < 0);
 
-	while (atomic_read(t) < all_cpus) {
-		if (timeout < SPINUNIT) {
-			pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
-				all_cpus - atomic_read(t));
-			return 1;
-		}
+	for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
+		if (!atomic_read(cnt))
+			return true;
 
-		ndelay(SPINUNIT);
-		timeout -= SPINUNIT;
+		udelay(1);
 
-		touch_nmi_watchdog();
+		if (!(timeout % USEC_PER_MSEC))
+			touch_nmi_watchdog();
 	}
-	return 0;
+	/* Prevent the late comers from making progress and let them time out */
+	atomic_inc(cnt);
+	return false;
 }
 
 /*
@@ -323,7 +318,7 @@ static int __reload_late(void *info)
 	 * Wait for all CPUs to arrive. A load will not be attempted unless all
 	 * CPUs show up.
 	 * */
-	if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
+	if (!wait_for_cpus(&late_cpus_in))
 		return -1;
 
 	/*
@@ -346,7 +341,7 @@ static int __reload_late(void *info)
 	}
 
 wait_for_siblings:
-	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
+	if (!wait_for_cpus(&late_cpus_out))
 		panic("Timeout during microcode update!\n");
 
 	/*
@@ -373,8 +368,8 @@ static int microcode_reload_late(void)
 	pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
 	pr_err("You should switch to early loading, if possible.\n");
 
-	atomic_set(&late_cpus_in, 0);
-	atomic_set(&late_cpus_out, 0);
+	atomic_set(&late_cpus_in, num_online_cpus());
+	atomic_set(&late_cpus_out, num_online_cpus());
 
 	/*
 	 * Take a snapshot before the microcode update in order to compare and
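
For anyone who wants to poke at the rendezvous scheme outside the kernel,
below is a minimal userspace sketch of the same count-down-and-poll pattern,
assuming C11 atomics and POSIX usleep(). wait_for_all() and "rendezvous" are
made-up names for illustration, the timing is only approximate, and the NMI
watchdog touch is noted as a comment since it has no userspace equivalent;
this is the shape of the algorithm, not the kernel implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define USEC_PER_SEC	1000000U
#define USEC_PER_MSEC	1000U

/* Preloaded with the number of participants before any of them enter. */
static atomic_int rendezvous;

/* Each participant decrements the counter, then polls until it hits zero. */
static bool wait_for_all(atomic_int *cnt)
{
	unsigned int timeout;

	atomic_fetch_sub(cnt, 1);

	for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
		if (!atomic_load(cnt))
			return true;		/* everyone arrived */

		usleep(1);			/* poll roughly once per microsecond */

		if (!(timeout % USEC_PER_MSEC)) {
			/* the kernel touches the NMI watchdog here, once per millisecond */
		}
	}

	/* Re-increment so late comers can never reach zero and time out as well. */
	atomic_fetch_add(cnt, 1);
	return false;
}

int main(void)
{
	/* Single participant, so the rendezvous completes immediately. */
	atomic_store(&rendezvous, 1);
	printf("rendezvous %s\n", wait_for_all(&rendezvous) ? "completed" : "timed out");
	return 0;
}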