Date: Tue, 24 Oct 2023 13:20:52 -0000
From: "tip-bot2 for Thomas Gleixner"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Thomas Gleixner, "Borislav Petkov (AMD)", x86@kernel.org,
 linux-kernel@vger.kernel.org
Subject: [tip: x86/microcode] x86/microcode: Replace the all-in-one rendevous handler
In-Reply-To: <20231002115903.433704135@linutronix.de>
References: <20231002115903.433704135@linutronix.de>
Message-ID: <169815365245.3135.13257369178148876293.tip-bot2@tip-bot2>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the x86/microcode branch of tip:

Commit-ID:     0bf871651211b58c7b19f40b746b646d5311e2ec
Gitweb:        https://git.kernel.org/tip/0bf871651211b58c7b19f40b746b646d5311e2ec
Author:        Thomas Gleixner
AuthorDate:    Mon, 02 Oct 2023 14:00:03 +02:00
Committer:     Borislav Petkov (AMD)
CommitterDate: Tue, 24 Oct 2023 15:05:55 +02:00

x86/microcode: Replace the all-in-one rendevous handler

with a new handler which just separates the control flow of primary and
secondary CPUs.
Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov (AMD)
Link: https://lore.kernel.org/r/20231002115903.433704135@linutronix.de
---
 arch/x86/kernel/cpu/microcode/core.c | 51 ++++-----------------------
 1 file changed, 9 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index 1ff38f9..1c2710b 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -268,7 +268,7 @@ struct microcode_ctrl {
 };
 
 static DEFINE_PER_CPU(struct microcode_ctrl, ucode_ctrl);
-static atomic_t late_cpus_in, late_cpus_out;
+static atomic_t late_cpus_in;
 
 static bool wait_for_cpus(atomic_t *cnt)
 {
@@ -304,7 +304,7 @@ static bool wait_for_ctrl(void)
 	return false;
 }
 
-static __maybe_unused void load_secondary(unsigned int cpu)
+static void load_secondary(unsigned int cpu)
 {
 	unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu);
 	enum ucode_state ret;
@@ -339,7 +339,7 @@ static __maybe_unused void load_secondary(unsigned int cpu)
 	this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE);
 }
 
-static __maybe_unused void load_primary(unsigned int cpu)
+static void load_primary(unsigned int cpu)
 {
 	struct cpumask *secondaries = topology_sibling_cpumask(cpu);
 	enum sibling_ctrl ctrl;
@@ -376,46 +376,14 @@ static __maybe_unused void load_primary(unsigned int cpu)
 
 static int load_cpus_stopped(void *unused)
 {
-	int cpu = smp_processor_id();
-	enum ucode_state ret;
-
-	/*
-	 * Wait for all CPUs to arrive. A load will not be attempted unless all
-	 * CPUs show up.
-	 * */
-	if (!wait_for_cpus(&late_cpus_in)) {
-		this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT);
-		return 0;
-	}
-
-	/*
-	 * On an SMT system, it suffices to load the microcode on one sibling of
-	 * the core because the microcode engine is shared between the threads.
-	 * Synchronization still needs to take place so that no concurrent
-	 * loading attempts happen on multiple threads of an SMT core. See
-	 * below.
-	 */
-	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
-		goto wait_for_siblings;
+	unsigned int cpu = smp_processor_id();
 
-	ret = microcode_ops->apply_microcode(cpu);
-	this_cpu_write(ucode_ctrl.result, ret);
-
-wait_for_siblings:
-	if (!wait_for_cpus(&late_cpus_out))
-		panic("Timeout during microcode update!\n");
-
-	/*
-	 * At least one thread has completed update on each core.
-	 * For others, simply call the update to make sure the
-	 * per-cpu cpuinfo can be updated with right microcode
-	 * revision.
-	 */
-	if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
-		return 0;
+	if (this_cpu_read(ucode_ctrl.ctrl_cpu) == cpu)
+		load_primary(cpu);
+	else
+		load_secondary(cpu);
 
-	ret = microcode_ops->apply_microcode(cpu);
-	this_cpu_write(ucode_ctrl.result, ret);
+	/* No point to wait here. The CPUs will all wait in stop_machine(). */
 	return 0;
 }
 
@@ -429,7 +397,6 @@ static int load_late_stop_cpus(void)
 	pr_err("You should switch to early loading, if possible.\n");
 
 	atomic_set(&late_cpus_in,  num_online_cpus());
-	atomic_set(&late_cpus_out, num_online_cpus());
 
 	/*
 	 * Take a snapshot before the microcode update in order to compare and
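
For readers following the control-flow split outside the kernel tree, here is a
minimal userspace sketch of the same idea: every "CPU" dispatches to a primary or
secondary path, the primary applies the update for its core and the secondaries
only wait for it. The names load_primary(), load_secondary() and SCTRL_DONE mirror
the patch, but everything else (NR_CPUS, SMT, core_ctrl[], the pthread/atomic
plumbing) is invented for illustration; the kernel variant runs under
stop_machine() with bounded waits, not a plain spin loop.

/*
 * Userspace analogue of the split primary/secondary flow in this patch.
 * This is a simplified sketch, not the kernel implementation.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS   8
#define SMT       2                    /* hypothetical threads per core */
#define NR_CORES  (NR_CPUS / SMT)

enum sctrl { SCTRL_WAIT, SCTRL_DONE };

static _Atomic int core_ctrl[NR_CORES];        /* SCTRL_WAIT by default */

/* Primary thread of a core: "apply" the update, then release the siblings. */
static void load_primary(int cpu)
{
	int core = cpu / SMT;

	printf("CPU%d: primary, applying update for core %d\n", cpu, core);
	/* ... apply_microcode(cpu) would run here in the kernel ... */
	atomic_store(&core_ctrl[core], SCTRL_DONE);
}

/* Secondary thread: wait until the core's primary has finished. */
static void load_secondary(int cpu)
{
	int core = cpu / SMT;

	while (atomic_load(&core_ctrl[core]) != SCTRL_DONE)
		;	/* spin; the kernel uses bounded waits with timeouts */
	printf("CPU%d: secondary, primary of core %d is done\n", cpu, core);
}

/* Per-"CPU" entry point: dispatch on the primary/secondary role. */
static void *load_cpu(void *arg)
{
	int cpu = (int)(long)arg;
	int ctrl_cpu = (cpu / SMT) * SMT;  /* first sibling plays the control CPU */

	if (cpu == ctrl_cpu)
		load_primary(cpu);
	else
		load_secondary(cpu);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_CPUS];

	for (int i = 0; i < NR_CPUS; i++)
		pthread_create(&tid[i], NULL, load_cpu, (void *)(long)i);
	for (int i = 0; i < NR_CPUS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

Build with "cc -pthread" and run; each secondary prints only after the primary of
its core has stored SCTRL_DONE, which is the ordering the per-core control flow in
the patch is meant to guarantee.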