From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195727.600549655@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc:
x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov
Subject: [patch V2 01/37] x86/mm: Remove unused microcode.h include
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:58:38 +0200 (CEST)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Thomas Gleixner

No usage for anything in that header.

Signed-off-by: Thomas Gleixner
---
 arch/x86/mm/init.c | 1 -
 1 file changed, 1 deletion(-)
---
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -20,7 +20,6 @@
 #include
 #include
 #include			/* for MAX_DMA_PFN */
-#include
 #include
 #include
 #include

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195727.660453052@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov
Subject: [patch V2 02/37] x86/microcode: Hide the config knob
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:58:39 +0200 (CEST)

In reality CONFIG_MICROCODE is enabled in any reasonable configuration when
Intel or AMD support is enabled. Accommodate to reality.

Requested-by: Borislav Petkov
Signed-off-by: Thomas Gleixner
---
 arch/x86/Kconfig                       | 38 ---------------------------------
 arch/x86/include/asm/microcode.h       |  6 ++---
 arch/x86/include/asm/microcode_amd.h   |  2 -
 arch/x86/include/asm/microcode_intel.h |  2 -
 arch/x86/kernel/cpu/microcode/Makefile |  4 +--
 5 files changed, 8 insertions(+), 44 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1308,44 +1308,8 @@ config X86_REBOOTFIXUPS
 	  Say N otherwise.
 
 config MICROCODE
-	bool "CPU microcode loading support"
-	default y
+	def_bool y
 	depends on CPU_SUP_AMD || CPU_SUP_INTEL
-	help
-	  If you say Y here, you will be able to update the microcode on
-	  Intel and AMD processors. The Intel support is for the IA32 family,
-	  e.g. Pentium Pro, Pentium II, Pentium III, Pentium 4, Xeon etc. The
-	  AMD support is for families 0x10 and later. You will obviously need
-	  the actual microcode binary data itself which is not shipped with
-	  the Linux kernel.
-
-	  The preferred method to load microcode from a detached initrd is described
-	  in Documentation/arch/x86/microcode.rst. For that you need to enable
-	  CONFIG_BLK_DEV_INITRD in order for the loader to be able to scan the
-	  initrd for microcode blobs.
-
-	  In addition, you can build the microcode into the kernel. For that you
-	  need to add the vendor-supplied microcode to the CONFIG_EXTRA_FIRMWARE
-	  config option.
-
-config MICROCODE_INTEL
-	bool "Intel microcode loading support"
-	depends on CPU_SUP_INTEL && MICROCODE
-	default MICROCODE
-	help
-	  This options enables microcode patch loading support for Intel
-	  processors.
-
-	  For the current Intel microcode data package go to
-	  and search for 'Linux Processor Microcode Data File'.
-
-config MICROCODE_AMD
-	bool "AMD microcode loading support"
-	depends on CPU_SUP_AMD && MICROCODE
-	help
-	  If you select this option, microcode patch loading support for AMD
-	  processors will be enabled.
 
 config MICROCODE_LATE_LOADING
 	bool "Late microcode loading (DANGEROUS)"
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -54,16 +54,16 @@ struct ucode_cpu_info {
 extern struct ucode_cpu_info ucode_cpu_info[];
 struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa);
 
-#ifdef CONFIG_MICROCODE_INTEL
+#ifdef CONFIG_CPU_SUP_INTEL
 extern struct microcode_ops * __init init_intel_microcode(void);
 #else
 static inline struct microcode_ops * __init init_intel_microcode(void)
 {
 	return NULL;
 }
-#endif /* CONFIG_MICROCODE_INTEL */
+#endif /* CONFIG_CPU_SUP_INTEL */
 
-#ifdef CONFIG_MICROCODE_AMD
+#ifdef CONFIG_CPU_SUP_AMD
 extern struct microcode_ops * __init init_amd_microcode(void);
 extern void __exit exit_amd_microcode(void);
 #else
--- a/arch/x86/include/asm/microcode_amd.h
+++ b/arch/x86/include/asm/microcode_amd.h
@@ -43,7 +43,7 @@ struct microcode_amd {
 
 #define PATCH_MAX_SIZE (3 * PAGE_SIZE)
 
-#ifdef CONFIG_MICROCODE_AMD
+#ifdef CONFIG_CPU_SUP_AMD
 extern void load_ucode_amd_early(unsigned int cpuid_1_eax);
 extern int __init save_microcode_in_initrd_amd(unsigned int family);
 void reload_ucode_amd(unsigned int cpu);
--- a/arch/x86/include/asm/microcode_intel.h
+++ b/arch/x86/include/asm/microcode_intel.h
@@ -71,7 +71,7 @@ static inline u32 intel_get_microcode_re
 	return rev;
 }
 
-#ifdef CONFIG_MICROCODE_INTEL
+#ifdef CONFIG_CPU_SUP_INTEL
 extern void __init load_ucode_intel_bsp(void);
 extern void load_ucode_intel_ap(void);
 extern void show_ucode_info_early(void);
--- a/arch/x86/kernel/cpu/microcode/Makefile
+++ b/arch/x86/kernel/cpu/microcode/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 microcode-y := core.o
 obj-$(CONFIG_MICROCODE) += microcode.o
-microcode-$(CONFIG_MICROCODE_INTEL) += intel.o
-microcode-$(CONFIG_MICROCODE_AMD) += amd.o
+microcode-$(CONFIG_CPU_SUP_INTEL) += intel.o
+microcode-$(CONFIG_CPU_SUP_AMD) += amd.o

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195727.719202319@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven
, Nikolay Borisov
Subject: [patch V2 03/37] x86/microcode/intel: Move microcode functions out of cpu/intel.c
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:58:41 +0200 (CEST)

There is really no point to have that in the CPUID evaluation code. Move it
into the Intel-specific microcode handling along with the data structures,
defines and helpers required by it. The exports need to stay for IFS.

Signed-off-by: Thomas Gleixner
---
V2: Move the structs, defines and helpers into intel.c
---
 arch/x86/include/asm/microcode_intel.h |  28 ----
 arch/x86/kernel/cpu/intel.c            | 174 ----------------------------
 arch/x86/kernel/cpu/microcode/intel.c  | 202 +++++++++++++++++++++++++++++++++
 3 files changed, 204 insertions(+), 200 deletions(-)
--- a/arch/x86/include/asm/microcode_intel.h
+++ b/arch/x86/include/asm/microcode_intel.h
@@ -23,39 +23,15 @@ struct microcode_intel {
 	unsigned int bits[];
 };
 
-/* microcode format is extended from prescott processors */
-struct extended_signature {
-	unsigned int sig;
-	unsigned int pf;
-	unsigned int cksum;
-};
-
-struct extended_sigtable {
-	unsigned int count;
-	unsigned int cksum;
-	unsigned int reserved[3];
-	struct extended_signature sigs[];
-};
-
-#define DEFAULT_UCODE_DATASIZE	(2000)
-#define MC_HEADER_SIZE		(sizeof(struct microcode_header_intel))
-#define DEFAULT_UCODE_TOTALSIZE (DEFAULT_UCODE_DATASIZE + MC_HEADER_SIZE)
-#define EXT_HEADER_SIZE		(sizeof(struct extended_sigtable))
-#define EXT_SIGNATURE_SIZE	(sizeof(struct extended_signature))
+#define MC_HEADER_SIZE		(sizeof(struct microcode_header_intel))
 #define MC_HEADER_TYPE_MICROCODE	1
 #define MC_HEADER_TYPE_IFS	2
-
-#define get_totalsize(mc) \
-	(((struct microcode_intel *)mc)->hdr.datasize ? \
-	 ((struct microcode_intel *)mc)->hdr.totalsize : \
-	 DEFAULT_UCODE_TOTALSIZE)
+#define DEFAULT_UCODE_DATASIZE	(2000)
 
 #define get_datasize(mc) \
 	(((struct microcode_intel *)mc)->hdr.datasize ? \
 	 ((struct microcode_intel *)mc)->hdr.datasize : DEFAULT_UCODE_DATASIZE)
 
-#define exttable_size(et) ((et)->count * EXT_SIGNATURE_SIZE + EXT_HEADER_SIZE)
-
 static inline u32 intel_get_microcode_revision(void)
 {
 	u32 rev, dummy;
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -184,180 +184,6 @@ static bool bad_spectre_microcode(struct
 	return false;
 }
 
-int intel_cpu_collect_info(struct ucode_cpu_info *uci)
-{
-	unsigned int val[2];
-	unsigned int family, model;
-	struct cpu_signature csig = { 0 };
-	unsigned int eax, ebx, ecx, edx;
-
-	memset(uci, 0, sizeof(*uci));
-
-	eax = 0x00000001;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	csig.sig = eax;
-
-	family = x86_family(eax);
-	model  = x86_model(eax);
-
-	if (model >= 5 || family > 6) {
-		/* get processor flags from MSR 0x17 */
-		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
-		csig.pf = 1 << ((val[1] >> 18) & 7);
-	}
-
-	csig.rev = intel_get_microcode_revision();
-
-	uci->cpu_sig = csig;
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
-
-/*
- * Returns 1 if update has been found, 0 otherwise.
- */
-int intel_find_matching_signature(void *mc, unsigned int csig, int cpf)
-{
-	struct microcode_header_intel *mc_hdr = mc;
-	struct extended_sigtable *ext_hdr;
-	struct extended_signature *ext_sig;
-	int i;
-
-	if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
-		return 1;
-
-	/* Look for ext. headers: */
-	if (get_totalsize(mc_hdr) <= get_datasize(mc_hdr) + MC_HEADER_SIZE)
-		return 0;
-
-	ext_hdr = mc + get_datasize(mc_hdr) + MC_HEADER_SIZE;
-	ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
-
-	for (i = 0; i < ext_hdr->count; i++) {
-		if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
-			return 1;
-		ext_sig++;
-	}
-	return 0;
-}
-EXPORT_SYMBOL_GPL(intel_find_matching_signature);
-
-/**
- * intel_microcode_sanity_check() - Sanity check microcode file.
- * @mc: Pointer to the microcode file contents.
- * @print_err: Display failure reason if true, silent if false.
- * @hdr_type: Type of file, i.e. normal microcode file or In Field Scan file.
- *	      Validate if the microcode header type matches with the type
- *	      specified here.
- *
- * Validate certain header fields and verify if computed checksum matches
- * with the one specified in the header.
- *
- * Return: 0 if the file passes all the checks, -EINVAL if any of the checks
- * fail.
- */
-int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type)
-{
-	unsigned long total_size, data_size, ext_table_size;
-	struct microcode_header_intel *mc_header = mc;
-	struct extended_sigtable *ext_header = NULL;
-	u32 sum, orig_sum, ext_sigcount = 0, i;
-	struct extended_signature *ext_sig;
-
-	total_size = get_totalsize(mc_header);
-	data_size = get_datasize(mc_header);
-
-	if (data_size + MC_HEADER_SIZE > total_size) {
-		if (print_err)
-			pr_err("Error: bad microcode data file size.\n");
-		return -EINVAL;
-	}
-
-	if (mc_header->ldrver != 1 || mc_header->hdrver != hdr_type) {
-		if (print_err)
-			pr_err("Error: invalid/unknown microcode update format. Header type %d\n",
-			       mc_header->hdrver);
-		return -EINVAL;
-	}
-
-	ext_table_size = total_size - (MC_HEADER_SIZE + data_size);
-	if (ext_table_size) {
-		u32 ext_table_sum = 0;
-		u32 *ext_tablep;
-
-		if (ext_table_size < EXT_HEADER_SIZE ||
-		    ((ext_table_size - EXT_HEADER_SIZE) % EXT_SIGNATURE_SIZE)) {
-			if (print_err)
-				pr_err("Error: truncated extended signature table.\n");
-			return -EINVAL;
-		}
-
-		ext_header = mc + MC_HEADER_SIZE + data_size;
-		if (ext_table_size != exttable_size(ext_header)) {
-			if (print_err)
-				pr_err("Error: extended signature table size mismatch.\n");
-			return -EFAULT;
-		}
-
-		ext_sigcount = ext_header->count;
-
-		/*
-		 * Check extended table checksum: the sum of all dwords that
-		 * comprise a valid table must be 0.
-		 */
-		ext_tablep = (u32 *)ext_header;
-
-		i = ext_table_size / sizeof(u32);
-		while (i--)
-			ext_table_sum += ext_tablep[i];
-
-		if (ext_table_sum) {
-			if (print_err)
-				pr_warn("Bad extended signature table checksum, aborting.\n");
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * Calculate the checksum of update data and header. The checksum of
-	 * valid update data and header including the extended signature table
-	 * must be 0.
-	 */
-	orig_sum = 0;
-	i = (MC_HEADER_SIZE + data_size) / sizeof(u32);
-	while (i--)
-		orig_sum += ((u32 *)mc)[i];
-
-	if (orig_sum) {
-		if (print_err)
-			pr_err("Bad microcode data checksum, aborting.\n");
-		return -EINVAL;
-	}
-
-	if (!ext_table_size)
-		return 0;
-
-	/*
-	 * Check extended signature checksum: 0 => valid.
-	 */
-	for (i = 0; i < ext_sigcount; i++) {
-		ext_sig = (void *)ext_header + EXT_HEADER_SIZE +
-			  EXT_SIGNATURE_SIZE * i;
-
-		sum = (mc_header->sig + mc_header->pf + mc_header->cksum) -
-		      (ext_sig->sig + ext_sig->pf + ext_sig->cksum);
-		if (sum) {
-			if (print_err)
-				pr_err("Bad extended signature checksum, aborting.\n");
-			return -EINVAL;
-		}
-	}
-	return 0;
-}
-EXPORT_SYMBOL_GPL(intel_microcode_sanity_check);
-
 static void early_init_intel(struct cpuinfo_x86 *c)
 {
 	u64 misc_enable;
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -45,6 +45,208 @@ static struct microcode_intel *intel_uco
 /* last level cache size per core */
 static int llc_size_per_core;
 
+/* microcode format is extended from prescott processors */
+struct extended_signature {
+	unsigned int sig;
+	unsigned int pf;
+	unsigned int cksum;
+};
+
+struct extended_sigtable {
+	unsigned int count;
+	unsigned int cksum;
+	unsigned int reserved[3];
+	struct extended_signature sigs[];
+};
+
+#define DEFAULT_UCODE_TOTALSIZE (DEFAULT_UCODE_DATASIZE + MC_HEADER_SIZE)
+#define EXT_HEADER_SIZE		(sizeof(struct extended_sigtable))
+#define EXT_SIGNATURE_SIZE	(sizeof(struct extended_signature))
+
+static inline unsigned int get_totalsize(struct microcode_header_intel *hdr)
+{
+	return hdr->datasize ? hdr->totalsize : DEFAULT_UCODE_TOTALSIZE;
+}
+
+static inline unsigned int exttable_size(struct extended_sigtable *et)
+{
+	return et->count * EXT_SIGNATURE_SIZE + EXT_HEADER_SIZE;
+}
+
+int intel_cpu_collect_info(struct ucode_cpu_info *uci)
+{
+	unsigned int val[2];
+	unsigned int family, model;
+	struct cpu_signature csig = { 0 };
+	unsigned int eax, ebx, ecx, edx;
+
+	memset(uci, 0, sizeof(*uci));
+
+	eax = 0x00000001;
+	ecx = 0;
+	native_cpuid(&eax, &ebx, &ecx, &edx);
+	csig.sig = eax;
+
+	family = x86_family(eax);
+	model  = x86_model(eax);
+
+	if (model >= 5 || family > 6) {
+		/* get processor flags from MSR 0x17 */
+		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
+		csig.pf = 1 << ((val[1] >> 18) & 7);
+	}
+
+	csig.rev = intel_get_microcode_revision();
+
+	uci->cpu_sig = csig;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
+
+/*
+ * Returns 1 if update has been found, 0 otherwise.
+ */
+int intel_find_matching_signature(void *mc, unsigned int csig, int cpf)
+{
+	struct microcode_header_intel *mc_hdr = mc;
+	struct extended_sigtable *ext_hdr;
+	struct extended_signature *ext_sig;
+	int i;
+
+	if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
+		return 1;
+
+	/* Look for ext. headers: */
+	if (get_totalsize(mc_hdr) <= get_datasize(mc_hdr) + MC_HEADER_SIZE)
+		return 0;
+
+	ext_hdr = mc + get_datasize(mc_hdr) + MC_HEADER_SIZE;
+	ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
+
+	for (i = 0; i < ext_hdr->count; i++) {
+		if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
+			return 1;
+		ext_sig++;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_find_matching_signature);
+
+/**
+ * intel_microcode_sanity_check() - Sanity check microcode file.
+ * @mc: Pointer to the microcode file contents.
+ * @print_err: Display failure reason if true, silent if false.
+ * @hdr_type: Type of file, i.e. normal microcode file or In Field Scan file.
+ *	      Validate if the microcode header type matches with the type
+ *	      specified here.
+ *
+ * Validate certain header fields and verify if computed checksum matches
+ * with the one specified in the header.
+ *
+ * Return: 0 if the file passes all the checks, -EINVAL if any of the checks
+ * fail.
+ */
+int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type)
+{
+	unsigned long total_size, data_size, ext_table_size;
+	struct microcode_header_intel *mc_header = mc;
+	struct extended_sigtable *ext_header = NULL;
+	u32 sum, orig_sum, ext_sigcount = 0, i;
+	struct extended_signature *ext_sig;
+
+	total_size = get_totalsize(mc_header);
+	data_size = get_datasize(mc_header);
+
+	if (data_size + MC_HEADER_SIZE > total_size) {
+		if (print_err)
+			pr_err("Error: bad microcode data file size.\n");
+		return -EINVAL;
+	}
+
+	if (mc_header->ldrver != 1 || mc_header->hdrver != hdr_type) {
+		if (print_err)
+			pr_err("Error: invalid/unknown microcode update format. Header type %d\n",
+			       mc_header->hdrver);
+		return -EINVAL;
+	}
+
+	ext_table_size = total_size - (MC_HEADER_SIZE + data_size);
+	if (ext_table_size) {
+		u32 ext_table_sum = 0;
+		u32 *ext_tablep;
+
+		if (ext_table_size < EXT_HEADER_SIZE ||
+		    ((ext_table_size - EXT_HEADER_SIZE) % EXT_SIGNATURE_SIZE)) {
+			if (print_err)
+				pr_err("Error: truncated extended signature table.\n");
+			return -EINVAL;
+		}
+
+		ext_header = mc + MC_HEADER_SIZE + data_size;
+		if (ext_table_size != exttable_size(ext_header)) {
+			if (print_err)
+				pr_err("Error: extended signature table size mismatch.\n");
+			return -EFAULT;
+		}
+
+		ext_sigcount = ext_header->count;
+
+		/*
+		 * Check extended table checksum: the sum of all dwords that
+		 * comprise a valid table must be 0.
+		 */
+		ext_tablep = (u32 *)ext_header;
+
+		i = ext_table_size / sizeof(u32);
+		while (i--)
+			ext_table_sum += ext_tablep[i];
+
+		if (ext_table_sum) {
+			if (print_err)
+				pr_warn("Bad extended signature table checksum, aborting.\n");
+			return -EINVAL;
+		}
+	}
+
+	/*
+	 * Calculate the checksum of update data and header. The checksum of
+	 * valid update data and header including the extended signature table
+	 * must be 0.
+	 */
+	orig_sum = 0;
+	i = (MC_HEADER_SIZE + data_size) / sizeof(u32);
+	while (i--)
+		orig_sum += ((u32 *)mc)[i];
+
+	if (orig_sum) {
+		if (print_err)
+			pr_err("Bad microcode data checksum, aborting.\n");
+		return -EINVAL;
+	}
+
+	if (!ext_table_size)
+		return 0;
+
+	/*
+	 * Check extended signature checksum: 0 => valid.
+	 */
+	for (i = 0; i < ext_sigcount; i++) {
+		ext_sig = (void *)ext_header + EXT_HEADER_SIZE +
+			  EXT_SIGNATURE_SIZE * i;
+
+		sum = (mc_header->sig + mc_header->pf + mc_header->cksum) -
+		      (ext_sig->sig + ext_sig->pf + ext_sig->cksum);
+		if (sum) {
+			if (print_err)
+				pr_err("Bad extended signature checksum, aborting.\n");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_microcode_sanity_check);
+
 /*
  * Returns 1 if update has been found, 0 otherwise.
 */

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195727.776541545@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov
Subject: [patch V2 04/37] x86/microcode: Include vendor headers into microcode.h
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:58:42 +0200 (CEST)

From: Ashok Raj

Currently vendor-specific headers are included explicitly when used in common
code. Instead, include the vendor-specific headers in microcode.h, and
include that in all usages.

No functional change.

Suggested-by: Boris Petkov
Signed-off-by: Ashok Raj
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/microcode.h       | 5 ++++-
 arch/x86/include/asm/microcode_amd.h   | 2 --
 arch/x86/include/asm/microcode_intel.h | 2 --
 arch/x86/kernel/cpu/common.c           | 1 -
 arch/x86/kernel/cpu/intel.c            | 2 +-
 arch/x86/kernel/cpu/microcode/amd.c    | 1 -
 arch/x86/kernel/cpu/microcode/core.c   | 2 --
 arch/x86/kernel/cpu/microcode/intel.c  | 2 +-
 drivers/platform/x86/intel/ifs/load.c  | 2 +-
 9 files changed, 7 insertions(+), 12 deletions(-)
---
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -2,10 +2,13 @@
 #ifndef _ASM_X86_MICROCODE_H
 #define _ASM_X86_MICROCODE_H
 
-#include
 #include
 #include
 
+#include
+#include
+#include
+
 struct ucode_patch {
 	struct list_head plist;
 	void *data;		/* Intel uses only this one */
--- a/arch/x86/include/asm/microcode_amd.h
+++ b/arch/x86/include/asm/microcode_amd.h
@@ -2,8 +2,6 @@
 #ifndef _ASM_X86_MICROCODE_AMD_H
 #define _ASM_X86_MICROCODE_AMD_H
 
-#include
-
 #define UCODE_MAGIC			0x00414d44
 #define UCODE_EQUIV_CPU_TABLE_TYPE	0x00000000
 #define UCODE_UCODE_TYPE		0x00000001
--- a/arch/x86/include/asm/microcode_intel.h
+++ b/arch/x86/include/asm/microcode_intel.h
@@ -2,8 +2,6 @@
 #ifndef _ASM_X86_MICROCODE_INTEL_H
 #define _ASM_X86_MICROCODE_INTEL_H
 
-#include
-
 struct microcode_header_intel {
 	unsigned int hdrver;
 	unsigned int rev;
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -59,7 +59,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -20,7 +20,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -29,7 +29,6 @@
 #include
 #include
 
-#include
 #include
 #include
 #include
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -31,9 +31,7 @@
 #include
 #include
 
-#include
 #include
-#include
 #include
 #include
 #include
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -30,9 +30,9 @@
 #include
 #include
 
-#include
 #include
 #include
+#include
 #include
 #include
 #include
--- a/drivers/platform/x86/intel/ifs/load.c
+++ b/drivers/platform/x86/intel/ifs/load.c
@@ -3,7 +3,7 @@
 
 #include
 #include
-#include
+#include
 
 #include "ifs.h"

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195727.834943153@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov
Subject: [patch V2 05/37] x86/microcode: Make reload_early_microcode() static
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:58:44 +0200 (CEST)

From: Thomas Gleixner

fe055896c040 ("x86/microcode: Merge the early microcode loader") left this
needlessly public.

Git archaeology provided by Borislav.
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/microcode.h     | 2 --
 arch/x86/kernel/cpu/microcode/core.c | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)
---
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -128,13 +128,11 @@ static inline unsigned int x86_cpuid_fam
 #ifdef CONFIG_MICROCODE
 extern void __init load_ucode_bsp(void);
 extern void load_ucode_ap(void);
-void reload_early_microcode(unsigned int cpu);
 extern bool initrd_gone;
 void microcode_bsp_resume(void);
 #else
 static inline void __init load_ucode_bsp(void) { }
 static inline void load_ucode_ap(void) { }
-static inline void reload_early_microcode(unsigned int cpu) { }
 static inline void microcode_bsp_resume(void) { }
 #endif
 
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -293,7 +293,7 @@ struct cpio_data find_microcode_in_initr
 #endif
 }
 
-void reload_early_microcode(unsigned int cpu)
+static void reload_early_microcode(unsigned int cpu)
 {
 	int vendor, family;

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195727.894165745@linutronix.de>
d=linutronix.de; s=2020; t=1691870326; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=HkbGnsalnjpIVBAgAAwk1+ueDGPGU5QUbDMtasgxkkc=; b=3Wrbsk+vFhGxK+LHZPNwp3JtpGi4E+9ETLFpA1iCl8uy2i8oqjyW6//ubHKeOINGVCog/+ 2/jXUKJF8+o2jep/tiZZ+98pdnjx/3OkpsdkXgxr8Rlw/x2S0aUb+McOiaEFWyf/HOqrgC Bkqt9txXusOHpYJEJ+rm3j5BWbgU/QyPohX2UiHn4rZCl1ux6F0MQy5gvokQju4y84LMEh 9kgqm0cvfxu3vT89AugYpsEmmHQJ+n2dA+FyV3P/lxniuZGSZqqYlvJBpuYF1PsWLLED+W AQXqouHDOqE3/sNxiq6TduRLcSTZZU83liNh2DeVXVPLXJwlO5jwc999DHdfjw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870326; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=HkbGnsalnjpIVBAgAAwk1+ueDGPGU5QUbDMtasgxkkc=; b=R/VegE7EeRx1Vs4DStGliN0XCw4RTXbqLTxzErhqd2ace1KA8m+VztHHHquEgGYQymDEpL mBE6E7d+Ghy+a+Cw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 06/37] x86/microcode/intel: Rename get_datasize() since its used externally References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:45 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Ashok Raj Rename get_datasize() to intel_microcode_get_datasize() and make it an inli= ne. 
[ tglx: Make the argument typed and fix up the IFS code ] Suggested-by: Boris Petkov Signed-off-by: Ashok Raj Signed-off-by: Thomas Gleixner --- V2: Make the argument typed --- arch/x86/include/asm/microcode_intel.h | 7 ++++--- arch/x86/kernel/cpu/microcode/intel.c | 8 ++++---- drivers/platform/x86/intel/ifs/load.c | 5 +++-- 3 files changed, 11 insertions(+), 9 deletions(-) --- --- a/arch/x86/include/asm/microcode_intel.h +++ b/arch/x86/include/asm/microcode_intel.h @@ -26,9 +26,10 @@ struct microcode_intel { #define MC_HEADER_TYPE_IFS 2 #define DEFAULT_UCODE_DATASIZE (2000) -#define get_datasize(mc) \ - (((struct microcode_intel *)mc)->hdr.datasize ? \ - ((struct microcode_intel *)mc)->hdr.datasize : DEFAULT_UCODE_DATASIZE) +static inline int intel_microcode_get_datasize(struct microcode_header_intel *hdr) +{ + return hdr->datasize ? : DEFAULT_UCODE_DATASIZE; +} static inline u32 intel_get_microcode_revision(void) { --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -118,10 +118,10 @@ int intel_find_matching_signature(void * return 1; /* Look for ext. headers: */ - if (get_totalsize(mc_hdr) <= get_datasize(mc_hdr) + MC_HEADER_SIZE) + if (get_totalsize(mc_hdr) <= intel_microcode_get_datasize(mc_hdr) + MC_HEADER_SIZE) return 0; - ext_hdr = mc + get_datasize(mc_hdr) + MC_HEADER_SIZE; + ext_hdr = mc + intel_microcode_get_datasize(mc_hdr) + MC_HEADER_SIZE; ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE; for (i = 0; i < ext_hdr->count; i++) { @@ -156,7 +156,7 @@ int intel_microcode_sanity_check(void *m struct extended_signature *ext_sig; total_size = get_totalsize(mc_header); - data_size = get_datasize(mc_header); + data_size = intel_microcode_get_datasize(mc_header); if (data_size + MC_HEADER_SIZE > total_size) { if (print_err) @@ -438,7 +438,7 @@ static void show_saved_mc(void) date = mc_saved_header->date; total_size = get_totalsize(mc_saved_header); - data_size = get_datasize(mc_saved_header); + data_size = intel_microcode_get_datasize(mc_saved_header); pr_debug("mc_saved[%d]: sig=0x%x, pf=0x%x, rev=0x%x, total size=0x%x, date = %04x-%02x-%02x\n", i++, sig, pf, rev, total_size, --- a/drivers/platform/x86/intel/ifs/load.c +++ b/drivers/platform/x86/intel/ifs/load.c @@ -56,12 +56,13 @@ struct metadata_header { static struct metadata_header *find_meta_data(void *ucode, unsigned int meta_type) { + struct microcode_header_intel *hdr = &((struct microcode_intel *)ucode)->hdr; struct metadata_header *meta_header; unsigned long data_size, total_meta; unsigned long meta_size = 0; - data_size = get_datasize(ucode); - total_meta = ((struct microcode_intel *)ucode)->hdr.metasize; + data_size = intel_microcode_get_datasize(hdr); + total_meta = hdr->metasize; if (!total_meta) return NULL; From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with
ESMTP id 6440FC0015E for ; Sat, 12 Aug 2023 20:01:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230399AbjHLUBL (ORCPT ); Sat, 12 Aug 2023 16:01:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58438 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230291AbjHLUA5 (ORCPT ); Sat, 12 Aug 2023 16:00:57 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 30D322684 for ; Sat, 12 Aug 2023 13:00:43 -0700 (PDT) Message-ID: <20230812195727.952876381@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870327; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=yYdvJbFBJMtBe7EkUJVaLHYZDga0nEQHMFXoGEBX1ko=; b=ynVBnrCvfBZWyEG466/VVakbBB0NkDkQeKQAtthUK0njPewNOcu1vRMi6f8yVvKkiEXQZ6 7rtFYgTPc6F3iNqPHBPV/8X3urdeyqL09QQVZ/oJhRWH2218/y//uPMHa5PM8j7CfCmoOG tF7daufw93Njz7pesZO+bppBguOtnUWgccyZFM2DkiA0SG85kHiCoSmnVWh2/vND88KdzB t9IjxH0WnJJQwrrKaXsq/S3aMY8/ticmHYnXjyMIO7gDw8lWxOh4Xa+dBSK4oM8TdTyApY qLAvPcnFD+h22kdsky4FJeX+hJTirSly5eT6Jt8KjSwwojS/ibaQe/LF057NPQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870327; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=yYdvJbFBJMtBe7EkUJVaLHYZDga0nEQHMFXoGEBX1ko=; b=IA2a7hdG13nxx/CSPqwT4E6QRQnT+uLMDYxiVVqYBKIqGMWkaPHBZFqrcNmtJEentNww3g PF33LswfGqBL8fAA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 07/37] x86/microcode: Move core specific defines to local header References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:47 
+0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner There is no reason to expose all of this globally. Move everything which is not required outside of the microcode specific code to local header files and into the respective source files. No functional change. Signed-off-by: Thomas Gleixner --- V2: Move AMD specific structs and defines into the AMD code. --- arch/x86/include/asm/microcode.h | 155 +++++++++-----------------= ----- arch/x86/include/asm/microcode_amd.h | 54 ---------- arch/x86/include/asm/microcode_intel.h | 63 ------------ arch/x86/kernel/cpu/microcode/amd.c | 41 ++++++++ arch/x86/kernel/cpu/microcode/core.c | 3=20 arch/x86/kernel/cpu/microcode/intel.c | 3=20 arch/x86/kernel/cpu/microcode/internal.h | 131 ++++++++++++++++++++++++++ 7 files changed, 223 insertions(+), 227 deletions(-) --- a/arch/x86/include/asm/microcode.h +++ b/arch/x86/include/asm/microcode.h @@ -2,138 +2,77 @@ #ifndef _ASM_X86_MICROCODE_H #define _ASM_X86_MICROCODE_H =20 -#include -#include - -#include -#include -#include - -struct ucode_patch { - struct list_head plist; - void *data; /* Intel uses only this one */ - unsigned int size; - u32 patch_id; - u16 equiv_cpu; -}; - -extern struct list_head microcode_cache; - struct cpu_signature { unsigned int sig; unsigned int pf; unsigned int rev; }; =20 -struct device; - -enum ucode_state { - UCODE_OK =3D 0, - UCODE_NEW, - UCODE_UPDATED, - UCODE_NFOUND, - UCODE_ERROR, +struct ucode_cpu_info { + struct cpu_signature cpu_sig; + void *mc; }; =20 -struct microcode_ops { - enum ucode_state (*request_microcode_fw) (int cpu, struct device *); - - void (*microcode_fini_cpu) (int cpu); +#ifdef CONFIG_MICROCODE +void load_ucode_bsp(void); +void load_ucode_ap(void); +void microcode_bsp_resume(void); +#else +static inline void load_ucode_bsp(void) { } +static inline void load_ucode_ap(void) { } +static inline 
void microcode_bsp_resume(void) { } +#endif =20 - /* - * The generic 'microcode_core' part guarantees that - * the callbacks below run on a target cpu when they - * are being called. - * See also the "Synchronization" section in microcode_core.c. - */ - enum ucode_state (*apply_microcode) (int cpu); - int (*collect_cpu_info) (int cpu, struct cpu_signature *csig); +#ifdef CONFIG_CPU_SUP_INTEL +/* Intel specific microcode defines. Public for IFS */ +struct microcode_header_intel { + unsigned int hdrver; + unsigned int rev; + unsigned int date; + unsigned int sig; + unsigned int cksum; + unsigned int ldrver; + unsigned int pf; + unsigned int datasize; + unsigned int totalsize; + unsigned int metasize; + unsigned int reserved[2]; }; =20 -struct ucode_cpu_info { - struct cpu_signature cpu_sig; - void *mc; +struct microcode_intel { + struct microcode_header_intel hdr; + unsigned int bits[]; }; -extern struct ucode_cpu_info ucode_cpu_info[]; -struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa); =20 -#ifdef CONFIG_CPU_SUP_INTEL -extern struct microcode_ops * __init init_intel_microcode(void); -#else -static inline struct microcode_ops * __init init_intel_microcode(void) -{ - return NULL; -} -#endif /* CONFIG_CPU_SUP_INTEL */ +#define DEFAULT_UCODE_DATASIZE (2000) +#define MC_HEADER_SIZE (sizeof(struct microcode_header_intel)) +#define MC_HEADER_TYPE_MICROCODE 1 +#define MC_HEADER_TYPE_IFS 2 =20 -#ifdef CONFIG_CPU_SUP_AMD -extern struct microcode_ops * __init init_amd_microcode(void); -extern void __exit exit_amd_microcode(void); -#else -static inline struct microcode_ops * __init init_amd_microcode(void) +static inline int intel_microcode_get_datasize(struct microcode_header_int= el *hdr) { - return NULL; + return hdr->datasize ? 
: DEFAULT_UCODE_DATASIZE; } -static inline void __exit exit_amd_microcode(void) {} -#endif - -#define MAX_UCODE_COUNT 128 =20 -#define QCHAR(a, b, c, d) ((a) + ((b) << 8) + ((c) << 16) + ((d) << 24)) -#define CPUID_INTEL1 QCHAR('G', 'e', 'n', 'u') -#define CPUID_INTEL2 QCHAR('i', 'n', 'e', 'I') -#define CPUID_INTEL3 QCHAR('n', 't', 'e', 'l') -#define CPUID_AMD1 QCHAR('A', 'u', 't', 'h') -#define CPUID_AMD2 QCHAR('e', 'n', 't', 'i') -#define CPUID_AMD3 QCHAR('c', 'A', 'M', 'D') - -#define CPUID_IS(a, b, c, ebx, ecx, edx) \ - (!((ebx ^ (a))|(edx ^ (b))|(ecx ^ (c)))) - -/* - * In early loading microcode phase on BSP, boot_cpu_data is not set up ye= t. - * x86_cpuid_vendor() gets vendor id for BSP. - * - * In 32 bit AP case, accessing boot_cpu_data needs linear address. To sim= plify - * coding, we still use x86_cpuid_vendor() to get vendor id for AP. - * - * x86_cpuid_vendor() gets vendor information directly from CPUID. - */ -static inline int x86_cpuid_vendor(void) +static inline u32 intel_get_microcode_revision(void) { - u32 eax =3D 0x00000000; - u32 ebx, ecx =3D 0, edx; + u32 rev, dummy; =20 - native_cpuid(&eax, &ebx, &ecx, &edx); + native_wrmsrl(MSR_IA32_UCODE_REV, 0); =20 - if (CPUID_IS(CPUID_INTEL1, CPUID_INTEL2, CPUID_INTEL3, ebx, ecx, edx)) - return X86_VENDOR_INTEL; + /* As documented in the SDM: Do a CPUID 1 here */ + native_cpuid_eax(1); =20 - if (CPUID_IS(CPUID_AMD1, CPUID_AMD2, CPUID_AMD3, ebx, ecx, edx)) - return X86_VENDOR_AMD; + /* get the current revision from MSR 0x8B */ + native_rdmsr(MSR_IA32_UCODE_REV, dummy, rev); =20 - return X86_VENDOR_UNKNOWN; + return rev; } =20 -static inline unsigned int x86_cpuid_family(void) -{ - u32 eax =3D 0x00000001; - u32 ebx, ecx =3D 0, edx; - - native_cpuid(&eax, &ebx, &ecx, &edx); - - return x86_family(eax); -} +void show_ucode_info_early(void); =20 -#ifdef CONFIG_MICROCODE -extern void __init load_ucode_bsp(void); -extern void load_ucode_ap(void); -extern bool initrd_gone; -void microcode_bsp_resume(void); -#else 
-static inline void __init load_ucode_bsp(void) { } -static inline void load_ucode_ap(void) { } -static inline void microcode_bsp_resume(void) { } -#endif +#else /* CONFIG_CPU_SUP_INTEL */ +static inline void show_ucode_info_early(void) { } +#endif /* !CONFIG_CPU_SUP_INTEL */ =20 #endif /* _ASM_X86_MICROCODE_H */ --- a/arch/x86/include/asm/microcode_amd.h +++ /dev/null @@ -1,54 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_MICROCODE_AMD_H -#define _ASM_X86_MICROCODE_AMD_H - -#define UCODE_MAGIC 0x00414d44 -#define UCODE_EQUIV_CPU_TABLE_TYPE 0x00000000 -#define UCODE_UCODE_TYPE 0x00000001 - -#define SECTION_HDR_SIZE 8 -#define CONTAINER_HDR_SZ 12 - -struct equiv_cpu_entry { - u32 installed_cpu; - u32 fixed_errata_mask; - u32 fixed_errata_compare; - u16 equiv_cpu; - u16 res; -} __attribute__((packed)); - -struct microcode_header_amd { - u32 data_code; - u32 patch_id; - u16 mc_patch_data_id; - u8 mc_patch_data_len; - u8 init_flag; - u32 mc_patch_data_checksum; - u32 nb_dev_id; - u32 sb_dev_id; - u16 processor_rev_id; - u8 nb_rev_id; - u8 sb_rev_id; - u8 bios_api_rev; - u8 reserved1[3]; - u32 match_reg[8]; -} __attribute__((packed)); - -struct microcode_amd { - struct microcode_header_amd hdr; - unsigned int mpb[]; -}; - -#define PATCH_MAX_SIZE (3 * PAGE_SIZE) - -#ifdef CONFIG_CPU_SUP_AMD -extern void load_ucode_amd_early(unsigned int cpuid_1_eax); -extern int __init save_microcode_in_initrd_amd(unsigned int family); -void reload_ucode_amd(unsigned int cpu); -#else -static inline void load_ucode_amd_early(unsigned int cpuid_1_eax) {} -static inline int __init -save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; } -static inline void reload_ucode_amd(unsigned int cpu) {} -#endif -#endif /* _ASM_X86_MICROCODE_AMD_H */ --- a/arch/x86/include/asm/microcode_intel.h +++ /dev/null @@ -1,63 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_MICROCODE_INTEL_H -#define _ASM_X86_MICROCODE_INTEL_H - -struct microcode_header_intel 
{ - unsigned int hdrver; - unsigned int rev; - unsigned int date; - unsigned int sig; - unsigned int cksum; - unsigned int ldrver; - unsigned int pf; - unsigned int datasize; - unsigned int totalsize; - unsigned int metasize; - unsigned int reserved[2]; -}; - -struct microcode_intel { - struct microcode_header_intel hdr; - unsigned int bits[]; -}; - -#define MC_HEADER_SIZE (sizeof(struct microcode_header_intel)) -#define MC_HEADER_TYPE_MICROCODE 1 -#define MC_HEADER_TYPE_IFS 2 -#define DEFAULT_UCODE_DATASIZE (2000) - -static inline int intel_microcode_get_datasize(struct microcode_header_int= el *hdr) -{ - return hdr->datasize ? : DEFAULT_UCODE_DATASIZE; -} - -static inline u32 intel_get_microcode_revision(void) -{ - u32 rev, dummy; - - native_wrmsrl(MSR_IA32_UCODE_REV, 0); - - /* As documented in the SDM: Do a CPUID 1 here */ - native_cpuid_eax(1); - - /* get the current revision from MSR 0x8B */ - native_rdmsr(MSR_IA32_UCODE_REV, dummy, rev); - - return rev; -} - -#ifdef CONFIG_CPU_SUP_INTEL -extern void __init load_ucode_intel_bsp(void); -extern void load_ucode_intel_ap(void); -extern void show_ucode_info_early(void); -extern int __init save_microcode_in_initrd_intel(void); -void reload_ucode_intel(void); -#else -static inline __init void load_ucode_intel_bsp(void) {} -static inline void load_ucode_intel_ap(void) {} -static inline void show_ucode_info_early(void) {} -static inline int __init save_microcode_in_initrd_intel(void) { return -EI= NVAL; } -static inline void reload_ucode_intel(void) {} -#endif - -#endif /* _ASM_X86_MICROCODE_INTEL_H */ --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -35,6 +35,47 @@ #include #include =20 +#include "internal.h" + +#define UCODE_MAGIC 0x00414d44 +#define UCODE_EQUIV_CPU_TABLE_TYPE 0x00000000 +#define UCODE_UCODE_TYPE 0x00000001 + +#define SECTION_HDR_SIZE 8 +#define CONTAINER_HDR_SZ 12 + +struct equiv_cpu_entry { + u32 installed_cpu; + u32 fixed_errata_mask; + u32 
fixed_errata_compare; + u16 equiv_cpu; + u16 res; +} __packed; + +struct microcode_header_amd { + u32 data_code; + u32 patch_id; + u16 mc_patch_data_id; + u8 mc_patch_data_len; + u8 init_flag; + u32 mc_patch_data_checksum; + u32 nb_dev_id; + u32 sb_dev_id; + u16 processor_rev_id; + u8 nb_rev_id; + u8 sb_rev_id; + u8 bios_api_rev; + u8 reserved1[3]; + u32 match_reg[8]; +} __packed; + +struct microcode_amd { + struct microcode_header_amd hdr; + unsigned int mpb[]; +}; + +#define PATCH_MAX_SIZE (3 * PAGE_SIZE) + static struct equiv_cpu_table { unsigned int num_entries; struct equiv_cpu_entry *entry; --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -33,11 +33,12 @@ =20 #include #include -#include #include #include #include =20 +#include "internal.h" + #define DRIVER_VERSION "2.2" =20 static struct microcode_ops *microcode_ops; --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -32,11 +32,12 @@ =20 #include #include -#include #include #include #include =20 +#include "internal.h" + static const char ucode_path[] =3D "kernel/x86/microcode/GenuineIntel.bin"; =20 /* Current microcode patch used in early patching on the APs. 
*/ --- /dev/null +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _X86_MICROCODE_INTERNAL_H +#define _X86_MICROCODE_INTERNAL_H + +#include +#include + +#include +#include + +struct ucode_patch { + struct list_head plist; + void *data; /* Intel uses only this one */ + unsigned int size; + u32 patch_id; + u16 equiv_cpu; +}; + +extern struct list_head microcode_cache; + +struct device; + +enum ucode_state { + UCODE_OK =3D 0, + UCODE_NEW, + UCODE_UPDATED, + UCODE_NFOUND, + UCODE_ERROR, +}; + +struct microcode_ops { + enum ucode_state (*request_microcode_fw)(int cpu, struct device *dev); + + void (*microcode_fini_cpu)(int cpu); + + /* + * The generic 'microcode_core' part guarantees that + * the callbacks below run on a target cpu when they + * are being called. + * See also the "Synchronization" section in microcode_core.c. + */ + enum ucode_state (*apply_microcode)(int cpu); + int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); +}; + +extern struct ucode_cpu_info ucode_cpu_info[]; +struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa); + +#define MAX_UCODE_COUNT 128 + +#define QCHAR(a, b, c, d) ((a) + ((b) << 8) + ((c) << 16) + ((d) << 24)) +#define CPUID_INTEL1 QCHAR('G', 'e', 'n', 'u') +#define CPUID_INTEL2 QCHAR('i', 'n', 'e', 'I') +#define CPUID_INTEL3 QCHAR('n', 't', 'e', 'l') +#define CPUID_AMD1 QCHAR('A', 'u', 't', 'h') +#define CPUID_AMD2 QCHAR('e', 'n', 't', 'i') +#define CPUID_AMD3 QCHAR('c', 'A', 'M', 'D') + +#define CPUID_IS(a, b, c, ebx, ecx, edx) \ + (!(((ebx) ^ (a)) | ((edx) ^ (b)) | ((ecx) ^ (c)))) + +/* + * In early loading microcode phase on BSP, boot_cpu_data is not set up ye= t. + * x86_cpuid_vendor() gets vendor id for BSP. + * + * In 32 bit AP case, accessing boot_cpu_data needs linear address. To sim= plify + * coding, we still use x86_cpuid_vendor() to get vendor id for AP. + * + * x86_cpuid_vendor() gets vendor information directly from CPUID. 
+ */ +static inline int x86_cpuid_vendor(void) +{ + u32 eax =3D 0x00000000; + u32 ebx, ecx =3D 0, edx; + + native_cpuid(&eax, &ebx, &ecx, &edx); + + if (CPUID_IS(CPUID_INTEL1, CPUID_INTEL2, CPUID_INTEL3, ebx, ecx, edx)) + return X86_VENDOR_INTEL; + + if (CPUID_IS(CPUID_AMD1, CPUID_AMD2, CPUID_AMD3, ebx, ecx, edx)) + return X86_VENDOR_AMD; + + return X86_VENDOR_UNKNOWN; +} + +static inline unsigned int x86_cpuid_family(void) +{ + u32 eax =3D 0x00000001; + u32 ebx, ecx =3D 0, edx; + + native_cpuid(&eax, &ebx, &ecx, &edx); + + return x86_family(eax); +} + +extern bool initrd_gone; + +#ifdef CONFIG_CPU_SUP_AMD +void load_ucode_amd_bsp(unsigned int family); +void load_ucode_amd_ap(unsigned int family); +void load_ucode_amd_early(unsigned int cpuid_1_eax); +int save_microcode_in_initrd_amd(unsigned int family); +void reload_ucode_amd(unsigned int cpu); +struct microcode_ops *init_amd_microcode(void); +void exit_amd_microcode(void); +#else /* CONFIG_MICROCODE_AMD */ +static inline void load_ucode_amd_bsp(unsigned int family) { } +static inline void load_ucode_amd_ap(unsigned int family) { } +static inline void load_ucode_amd_early(unsigned int family) { } +static inline int save_microcode_in_initrd_amd(unsigned int family) { retu= rn -EINVAL; } +static inline void reload_ucode_amd(unsigned int cpu) { } +static inline struct microcode_ops *init_amd_microcode(void) { return NULL= ; } +static inline void exit_amd_microcode(void) { } +#endif /* !CONFIG_MICROCODE_AMD */ + +#ifdef CONFIG_CPU_SUP_INTEL +void load_ucode_intel_bsp(void); +void load_ucode_intel_ap(void); +int save_microcode_in_initrd_intel(void); +void reload_ucode_intel(void); +struct microcode_ops *init_intel_microcode(void); +#else /* CONFIG_CPU_SUP_INTEL */ +static inline void load_ucode_intel_bsp(void) { } +static inline void load_ucode_intel_ap(void) { } +static inline int save_microcode_in_initrd_intel(void) { return -EINVAL; } +static inline void reload_ucode_intel(void) { } +static inline struct 
microcode_ops *init_intel_microcode(void) { return NU= LL; } +#endif /* !CONFIG_CPU_SUP_INTEL */ + +#endif /* _X86_MICROCODE_INTERNAL_H */ From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA404C0015E for ; Sat, 12 Aug 2023 20:00:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230308AbjHLUAq (ORCPT ); Sat, 12 Aug 2023 16:00:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58218 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230189AbjHLUAi (ORCPT ); Sat, 12 Aug 2023 16:00:38 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8ED321998 for ; Sat, 12 Aug 2023 13:00:13 -0700 (PDT) Message-ID: <20230812195728.010895747@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870329; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=YMAsfYDkN/xyrOxScaRWZ72+V8OIVPbdwOmS7O9PF88=; b=FnwPzHlDjX8J6q6NC1rBCx2LHxi8MVSsf34AGjtxF7M4h0ndwkVseyWcRAzSejqzV+RsDM C1mbUZFE6+2JU9EDF4mtmkjDW595bjZR1YydQXUc3yQkN1HaINg1oNqZSyzTgUVM2b1xpR ldWED9X7LhnRFK5nokgtjYjre0JRLSdpFIUnLKnlJsbJX1G8T8VA9faqBtx9qIRMrEnYtn LT0zsD9ice8CdIYB1TKP3H6CCE5h133tWX/XmgCIj2twzCH50ce3gEUq2RCWFk+FTtlSHH 4EnohddD19W0KrT8VYBn78OTtE79dtARbiz1Xa4SHXVtj7LX9kpVQPJ28we/1w== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870329; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; 
bh=YMAsfYDkN/xyrOxScaRWZ72+V8OIVPbdwOmS7O9PF88=; b=m+3BwOa4D7AR1iptyHgfLAv/RkhXWwj3E+UBhzbBF/Zrw8VMbv7SAGxq5vH2XvomhSIjTR QGa4zVjeiMatZjBA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 08/37] x86/microcode/intel: Remove debug code References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:49 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner This is really of dubious value. Signed-off-by: Thomas Gleixner --- V2: Reordered in the series - Nikolay --- arch/x86/kernel/cpu/microcode/intel.c | 75 ---------------------------------- 1 file changed, 75 deletions(-) --- --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -10,15 +10,7 @@ * Copyright (C) 2012 Fenghua Yu * H Peter Anvin" */ - -/* - * This needs to be before all headers so that pr_debug in printk.h doesn't turn - * printk calls into no_printk().
- * - *#define DEBUG - */ #define pr_fmt(fmt) "microcode: " fmt - #include #include #include @@ -405,69 +397,6 @@ scan_microcode(void *data, size_t size, return patch; } =20 -static void show_saved_mc(void) -{ -#ifdef DEBUG - int i =3D 0, j; - unsigned int sig, pf, rev, total_size, data_size, date; - struct ucode_cpu_info uci; - struct ucode_patch *p; - - if (list_empty(µcode_cache)) { - pr_debug("no microcode data saved.\n"); - return; - } - - intel_cpu_collect_info(&uci); - - sig =3D uci.cpu_sig.sig; - pf =3D uci.cpu_sig.pf; - rev =3D uci.cpu_sig.rev; - pr_debug("CPU: sig=3D0x%x, pf=3D0x%x, rev=3D0x%x\n", sig, pf, rev); - - list_for_each_entry(p, µcode_cache, plist) { - struct microcode_header_intel *mc_saved_header; - struct extended_sigtable *ext_header; - struct extended_signature *ext_sig; - int ext_sigcount; - - mc_saved_header =3D (struct microcode_header_intel *)p->data; - - sig =3D mc_saved_header->sig; - pf =3D mc_saved_header->pf; - rev =3D mc_saved_header->rev; - date =3D mc_saved_header->date; - - total_size =3D get_totalsize(mc_saved_header); - data_size =3D intel_microcode_get_datasize(mc_saved_header); - - pr_debug("mc_saved[%d]: sig=3D0x%x, pf=3D0x%x, rev=3D0x%x, total size=3D= 0x%x, date =3D %04x-%02x-%02x\n", - i++, sig, pf, rev, total_size, - date & 0xffff, - date >> 24, - (date >> 16) & 0xff); - - /* Look for ext. headers: */ - if (total_size <=3D data_size + MC_HEADER_SIZE) - continue; - - ext_header =3D (void *)mc_saved_header + data_size + MC_HEADER_SIZE; - ext_sigcount =3D ext_header->count; - ext_sig =3D (void *)ext_header + EXT_HEADER_SIZE; - - for (j =3D 0; j < ext_sigcount; j++) { - sig =3D ext_sig->sig; - pf =3D ext_sig->pf; - - pr_debug("\tExtended[%d]: sig=3D0x%x, pf=3D0x%x\n", - j, sig, pf); - - ext_sig++; - } - } -#endif -} - /* * Save this microcode patch. It will be loaded early when a CPU is * hot-added or resumes. 
@@ -480,7 +409,6 @@ static void save_mc_for_early(struct uco mutex_lock(&x86_cpu_microcode_mutex); =20 save_microcode_patch(uci, mc, size); - show_saved_mc(); =20 mutex_unlock(&x86_cpu_microcode_mutex); } @@ -631,9 +559,6 @@ int __init save_microcode_in_initrd_inte intel_cpu_collect_info(&uci); =20 scan_microcode(cp.data, cp.size, &uci, true); - - show_saved_mc(); - return 0; } From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36D1FC0015E for ; Sat, 12 Aug 2023 20:01:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230481AbjHLUBW (ORCPT ); Sat, 12 Aug 2023 16:01:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58386 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230479AbjHLUBI (ORCPT ); Sat, 12 Aug 2023 16:01:08 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E39382D79 for ; Sat, 12 Aug 2023 13:00:49 -0700 (PDT) Message-ID: <20230812195728.069849788@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870331; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=+LnxIbJSjtZlbYPhhRKT8XWYvPxcIXTNXgcJ6FnPbbM=; b=wImgZvYW+RCR1dnDUO7eZWHE2DLP4cQhfOnF7k0UMmMN/5wuwfkhSIZcWVuB/hn6CIKQDx 930XvbU4zTvV4Jk6bZc8eJKHs74AfAEfvN6g1sAxhH5uaFviw6EdFCmQW3VRgxFV1u9mor BDR8A2H+qridCSGYHobKBaIuycn+Oss4y7mYhdfrO1v8FrvE0ynZ6B10g0uEODVzqiKy08 e23cRolTDf61hOxBFL3JamTHLtJIXP5stDE57kSBR3KZCCblLJFBbOEzkS3Aopcv1kGJDA gClGO9sOPG0SeBTYWHvqckTsQjFX7BbjUcHVGP9vaFXvwp3D2N7JRhn8kBDo0g== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; 
d=linutronix.de; s=2020e; t=1691870331; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=+LnxIbJSjtZlbYPhhRKT8XWYvPxcIXTNXgcJ6FnPbbM=; b=LskouPJt7WBGAIH/Jb4BEbcZtUi5KSNOgpZ4VmG84w4Yaaw9EX4xYIwpW7f23WDCWlcHb+ 1Rhx/7DZA39X2SAQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 09/37] x86/microcode/intel: Remove pointless mutex References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:50 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner There is no concurrency. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/cpu/microcode/intel.c | 24 ++---------------------- 1 file changed, 2 insertions(+), 22 deletions(-) --- --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -397,22 +397,6 @@ scan_microcode(void *data, size_t size, return patch; } -/* - * Save this microcode patch. It will be loaded early when a CPU is - * hot-added or resumes. - */ -static void save_mc_for_early(struct ucode_cpu_info *uci, u8 *mc, unsigned int size) -{ - /* Synchronization during CPU hotplug. */ - static DEFINE_MUTEX(x86_cpu_microcode_mutex); - - mutex_lock(&x86_cpu_microcode_mutex); - - save_microcode_patch(uci, mc, size); - - mutex_unlock(&x86_cpu_microcode_mutex); -} - static bool load_builtin_intel_microcode(struct cpio_data *cp) { unsigned int eax = 1, ebx, ecx = 0, edx; @@ -829,12 +813,8 @@ static enum ucode_state generic_load_mic vfree(uci->mc); uci->mc = (struct microcode_intel *)new_mc; - /* - * If early loading microcode is supported, save this mc into - * permanent memory. So it will be loaded early when a CPU is hot added - * or resumes.
- */ - save_mc_for_early(uci, new_mc, new_mc_size); + /* Save for CPU hotplug */ + save_microcode_patch(uci, new_mc, new_mc_size); =20 pr_debug("CPU%d found a matching microcode update with version 0x%x (curr= ent=3D0x%x)\n", cpu, new_rev, uci->cpu_sig.rev); From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5188EC0015E for ; Sat, 12 Aug 2023 20:01:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230450AbjHLUBQ (ORCPT ); Sat, 12 Aug 2023 16:01:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60120 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230474AbjHLUBD (ORCPT ); Sat, 12 Aug 2023 16:01:03 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D25F62D6D for ; Sat, 12 Aug 2023 13:00:48 -0700 (PDT) Message-ID: <20230812195728.129971794@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870332; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=x7sZwm16k+DN6jvZS+REQb/YogoEg78SbOG09l90uXo=; b=Jub2LnNQCFeUdkcJhVF8CCjgRCpx1V8iqI2glFd+W3WvFCfRP3txvfDUsA0pL9R1Q4eLOK ELBqHs9of2l4RWvnCu7GwyefD30gYYAilT2I7FcdWMGPY8z7Y22PPB6o0+qRLE7TFrsjOV Xgq1eeP1Q/KZ0tEP5/ywl57BjtYUaHoCpwwhBvHweJBnt4wEaXUeGiMKEC9Fm3PFulr3IY N4uoFYB5P5hG2LUIRIczC0uiCKs+p0ov8whXxbFD0m8d70M5p9P3Ay3hR7UKwaaWXcmEc8 kqWDkN02XwVSoscXgdcwOZvhWULPolUQRtVK85TpWNCHVzKCwAbalCz91DE0Hw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870332; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=x7sZwm16k+DN6jvZS+REQb/YogoEg78SbOG09l90uXo=; b=FQhqhw+uBk1ashXZbLaxl3LfpwdTAMaSphcPRKNS1qUYn6rLed5hs61pkyUpMqMFa8fipY DjSRzhiQH7fOInAw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 10/37] x86/microcode/intel: Rip out mixed stepping support for Intel CPUs References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:52 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Ashok Raj Mixed steppings aren't supported on Intel CPUs. Only one patch is required for the entire system. The caching of microcode blobs which match the family and model is therefore pointless and in fact it is dysfunctional as CPU hotplug updates use only a single microcode blob, i.e. the one *intel_ucode_patch points to. Remove the microcode cache and make it an AMD-local feature. [ tglx: Save only at the end.
Otherwise random microcode ends up in the pointer for early loading ] Originally-by: Thomas Gleixner Signed-off-by: Ashok Raj Signed-off-by: Thomas Gleixner --- V2: Fix the bogus condition - Borislav --- arch/x86/kernel/cpu/microcode/amd.c | 10 ++ arch/x86/kernel/cpu/microcode/core.c | 2=20 arch/x86/kernel/cpu/microcode/intel.c | 135 +++++---------------------= ----- arch/x86/kernel/cpu/microcode/internal.h | 10 -- 4 files changed, 36 insertions(+), 121 deletions(-) --- --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -37,6 +37,16 @@ =20 #include "internal.h" =20 +struct ucode_patch { + struct list_head plist; + void *data; + unsigned int size; + u32 patch_id; + u16 equiv_cpu; +}; + +static LIST_HEAD(microcode_cache); + #define UCODE_MAGIC 0x00414d44 #define UCODE_EQUIV_CPU_TABLE_TYPE 0x00000000 #define UCODE_UCODE_TYPE 0x00000001 --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -46,8 +46,6 @@ static bool dis_ucode_ldr =3D true; =20 bool initrd_gone; =20 -LIST_HEAD(microcode_cache); - /* * Synchronization. * --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -33,10 +33,10 @@ static const char ucode_path[] =3D "kernel/x86/microcode/GenuineIntel.bin"; =20 /* Current microcode patch used in early patching on the APs. 
*/ -static struct microcode_intel *intel_ucode_patch; +static struct microcode_intel *intel_ucode_patch __read_mostly; =20 /* last level cache size per core */ -static int llc_size_per_core; +static int llc_size_per_core __ro_after_init; =20 /* microcode format is extended from prescott processors */ struct extended_signature { @@ -253,81 +253,26 @@ static int has_newer_microcode(void *mc, return intel_find_matching_signature(mc, csig, cpf); } =20 -static struct ucode_patch *memdup_patch(void *data, unsigned int size) +static void save_microcode_patch(void *data, unsigned int size) { - struct ucode_patch *p; + struct microcode_header_intel *p; =20 - p =3D kzalloc(sizeof(struct ucode_patch), GFP_KERNEL); - if (!p) - return NULL; - - p->data =3D kmemdup(data, size, GFP_KERNEL); - if (!p->data) { - kfree(p); - return NULL; - } - - return p; -} - -static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, u= nsigned int size) -{ - struct microcode_header_intel *mc_hdr, *mc_saved_hdr; - struct ucode_patch *iter, *tmp, *p =3D NULL; - bool prev_found =3D false; - unsigned int sig, pf; - - mc_hdr =3D (struct microcode_header_intel *)data; - - list_for_each_entry_safe(iter, tmp, µcode_cache, plist) { - mc_saved_hdr =3D (struct microcode_header_intel *)iter->data; - sig =3D mc_saved_hdr->sig; - pf =3D mc_saved_hdr->pf; - - if (intel_find_matching_signature(data, sig, pf)) { - prev_found =3D true; - - if (mc_hdr->rev <=3D mc_saved_hdr->rev) - continue; - - p =3D memdup_patch(data, size); - if (!p) - pr_err("Error allocating buffer %p\n", data); - else { - list_replace(&iter->plist, &p->plist); - kfree(iter->data); - kfree(iter); - } - } - } - - /* - * There weren't any previous patches found in the list cache; save the - * newly found. 
- */ - if (!prev_found) { - p =3D memdup_patch(data, size); - if (!p) - pr_err("Error allocating buffer for %p\n", data); - else - list_add_tail(&p->plist, µcode_cache); - } + kfree(intel_ucode_patch); + intel_ucode_patch =3D NULL; =20 + p =3D kmemdup(data, size, GFP_KERNEL); if (!p) return; =20 - if (!intel_find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_si= g.pf)) - return; - /* * Save for early loading. On 32-bit, that needs to be a physical * address as the APs are running from physical addresses, before * paging has been enabled. */ if (IS_ENABLED(CONFIG_X86_32)) - intel_ucode_patch =3D (struct microcode_intel *)__pa_nodebug(p->data); + intel_ucode_patch =3D (struct microcode_intel *)__pa_nodebug(p); else - intel_ucode_patch =3D p->data; + intel_ucode_patch =3D (struct microcode_intel *)p; } =20 /* @@ -339,6 +284,7 @@ scan_microcode(void *data, size_t size, { struct microcode_header_intel *mc_header; struct microcode_intel *patch =3D NULL; + u32 cur_rev =3D uci->cpu_sig.rev; unsigned int mc_size; =20 while (size) { @@ -348,8 +294,7 @@ scan_microcode(void *data, size_t size, mc_header =3D (struct microcode_header_intel *)data; =20 mc_size =3D get_totalsize(mc_header); - if (!mc_size || - mc_size > size || + if (!mc_size || mc_size > size || intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) = < 0) break; =20 @@ -361,31 +306,16 @@ scan_microcode(void *data, size_t size, continue; } =20 - if (save) { - save_microcode_patch(uci, data, mc_size); + /* BSP scan: Check whether there is newer microcode */ + if (!save && cur_rev >=3D mc_header->rev) goto next; - } - =20 - if (!patch) { - if (!has_newer_microcode(data, - uci->cpu_sig.sig, - uci->cpu_sig.pf, - uci->cpu_sig.rev)) - goto next; - - } else { - struct microcode_header_intel *phdr =3D &patch->hdr; - - if (!has_newer_microcode(data, - phdr->sig, - phdr->pf, - phdr->rev)) - goto next; - } + /* Save scan: Check whether there is newer or matching microcode */ + if (save && cur_rev !=3D 
mc_header->rev) + goto next; =20 - /* We have a newer patch, save it. */ patch =3D data; + cur_rev =3D mc_header->rev; =20 next: data +=3D mc_size; @@ -394,6 +324,9 @@ scan_microcode(void *data, size_t size, if (size) return NULL; =20 + if (save && patch) + save_microcode_patch(patch, mc_size); + return patch; } =20 @@ -612,26 +545,10 @@ void load_ucode_intel_ap(void) apply_microcode_early(&uci, true); } =20 -static struct microcode_intel *find_patch(struct ucode_cpu_info *uci) +/* Accessor for microcode pointer */ +static struct microcode_intel *ucode_get_patch(void) { - struct microcode_header_intel *phdr; - struct ucode_patch *iter, *tmp; - - list_for_each_entry_safe(iter, tmp, µcode_cache, plist) { - - phdr =3D (struct microcode_header_intel *)iter->data; - - if (phdr->rev <=3D uci->cpu_sig.rev) - continue; - - if (!intel_find_matching_signature(phdr, - uci->cpu_sig.sig, - uci->cpu_sig.pf)) - continue; - - return iter->data; - } - return NULL; + return intel_ucode_patch; } =20 void reload_ucode_intel(void) @@ -641,7 +558,7 @@ void reload_ucode_intel(void) =20 intel_cpu_collect_info(&uci); =20 - p =3D find_patch(&uci); + p =3D ucode_get_patch(); if (!p) return; =20 @@ -685,7 +602,7 @@ static enum ucode_state apply_microcode_ return UCODE_ERROR; =20 /* Look for a newer patch in our cache: */ - mc =3D find_patch(uci); + mc =3D ucode_get_patch(); if (!mc) { mc =3D uci->mc; if (!mc) @@ -814,7 +731,7 @@ static enum ucode_state generic_load_mic uci->mc =3D (struct microcode_intel *)new_mc; =20 /* Save for CPU hotplug */ - save_microcode_patch(uci, new_mc, new_mc_size); + save_microcode_patch(new_mc, new_mc_size); =20 pr_debug("CPU%d found a matching microcode update with version 0x%x (curr= ent=3D0x%x)\n", cpu, new_rev, uci->cpu_sig.rev); --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -8,16 +8,6 @@ #include #include =20 -struct ucode_patch { - struct list_head plist; - void *data; /* Intel uses only this one */ - 
unsigned int size; - u32 patch_id; - u16 equiv_cpu; -}; - -extern struct list_head microcode_cache; - struct device; =20 enum ucode_state { From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC63FC0015E for ; Sat, 12 Aug 2023 20:00:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230433AbjHLUAx (ORCPT ); Sat, 12 Aug 2023 16:00:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58370 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230349AbjHLUAs (ORCPT ); Sat, 12 Aug 2023 16:00:48 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3C5641FE6 for ; Sat, 12 Aug 2023 13:00:18 -0700 (PDT) Message-ID: <20230812195728.188483733@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870334; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=UL/mdoYp2YOPutZME2dSUHU8tUoO072JwJbw3jCdJRc=; b=k196Gikce4bNi5roHHck82eFjbXLl9qvTM21pyAwgSV1PGVZpP68bSXUquAaTlo45GKi3b 0D5+H5Upb40rc6ezn73UIpYOgu/M/gDP4JoYj0G4dN81V0dc6mwuIAG0uy6VYsvE4PRWok YtU2rUe4OA5EcN5lBn4YpodKCU6WWgxKUcMggfWKFjyPwAGQHd72WZlgJp2eOvIecTOBzi xET5TmbObUsnQlOd8HDXhVC8UlDymZm8Yy8I4rEjAnpqGMC8HUuXsGjOFuYokLWvIzRQmN KvbMjTNo4ijiWVxIkHFJ3Ac2egSkCcSnT5jW/UI4H2clluMxBvqF2oc+XcXIBA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870334; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; 
bh=UL/mdoYp2YOPutZME2dSUHU8tUoO072JwJbw3jCdJRc=; b=wYMXV8LdRz0zsOeaxUoMbnhf8qyYqy8Dpx2tC+1KmIkxAQI2AvQetjwn2uLa0ioUSVfAtW IxKPuVssEC4/bBCQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 11/37] x86/microcode/intel: Simplify scan_microcode() References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:53 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Make it readable and comprehensible. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/cpu/microcode/intel.c | 28 +++++++--------------------- 1 file changed, 7 insertions(+), 21 deletions(-) --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -275,22 +275,16 @@ static void save_microcode_patch(void *d intel_ucode_patch =3D (struct microcode_intel *)p; } =20 -/* - * Get microcode matching with BSP's model. Only CPUs with the same model = as - * BSP can stay in the platform. 
- */ -static struct microcode_intel * -scan_microcode(void *data, size_t size, struct ucode_cpu_info *uci, bool s= ave) +/* Scan CPIO for microcode matching the boot CPUs family, model, stepping = */ +static struct microcode_intel *scan_microcode(void *data, size_t size, + struct ucode_cpu_info *uci, bool save) { struct microcode_header_intel *mc_header; struct microcode_intel *patch =3D NULL; u32 cur_rev =3D uci->cpu_sig.rev; unsigned int mc_size; =20 - while (size) { - if (size < sizeof(struct microcode_header_intel)) - break; - + for (; size >=3D sizeof(struct microcode_header_intel); size -=3D mc_size= , data +=3D mc_size) { mc_header =3D (struct microcode_header_intel *)data; =20 mc_size =3D get_totalsize(mc_header); @@ -298,27 +292,19 @@ scan_microcode(void *data, size_t size, intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) = < 0) break; =20 - size -=3D mc_size; - - if (!intel_find_matching_signature(data, uci->cpu_sig.sig, - uci->cpu_sig.pf)) { - data +=3D mc_size; + if (!intel_find_matching_signature(data, uci->cpu_sig.sig, uci->cpu_sig.= pf)) continue; - } =20 /* BSP scan: Check whether there is newer microcode */ if (!save && cur_rev >=3D mc_header->rev) - goto next; + continue; =20 /* Save scan: Check whether there is newer or matching microcode */ if (save && cur_rev !=3D mc_header->rev) - goto next; + continue; =20 patch =3D data; cur_rev =3D mc_header->rev; - -next: - data +=3D mc_size; } =20 if (size) From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01640C0015E for ; Sat, 12 Aug 2023 20:01:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230390AbjHLUAz (ORCPT ); Sat, 12 Aug 2023 16:00:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59944 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230304AbjHLUAt (ORCPT ); Sat, 12 Aug 2023 16:00:49 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 175CC199A for ; Sat, 12 Aug 2023 13:00:20 -0700 (PDT) Message-ID: <20230812195728.246048244@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870335; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=f+UFwCEFZREA8XDdgjvWrupo4jQM3/zWUCG1oemkcwo=; b=tBHDKY+tOBlIDxBRXfTvn6gRLVPE+1zyPSKMD2HK6zaIDGqQ/UMDTZaUiQtzKoFZyMo7Yi JfzA0W78akWOqNIfyeRP4F+s1NQYbswnEVoJgADs3y/Ecmrm78E75ZLW3Ubqz7QzrOiljy iHVKprQ7RTYebqS5DjifrVzrhISJFUMLFVPJ/JSYg4I7MbjglGgtKglM5hOCZijx3J2dzz vMAG/v+pMgWIbtMAtJClQ5+bJGuwllJgAiTxDdoousGTsee2LkUQg4TrGIXiEnYVNosvlS k0ehEETEeCPoXpdlPHWAqjRddQ3ASW/RnEFtpfCyEJA6YvIp1CRDel8UI7955Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870335; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=f+UFwCEFZREA8XDdgjvWrupo4jQM3/zWUCG1oemkcwo=; b=8ITEq0BJT9vucr6VcxMCsL2vDO7dLp6zXiylsFC202BUMsn2qdTiZemMBuZPqHzp9gLVgp V0c/NdwFDaSoktCw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 12/37] x86/microcode/intel: Simplify and rename generic_load_microcode() References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:55 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" so it becomes less obfuscated and rename it because there is nothing generic 
about it. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/cpu/microcode/intel.c | 47 ++++++++++++-----------------= ----- 1 file changed, 17 insertions(+), 30 deletions(-) --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -240,19 +240,6 @@ int intel_microcode_sanity_check(void *m } EXPORT_SYMBOL_GPL(intel_microcode_sanity_check); =20 -/* - * Returns 1 if update has been found, 0 otherwise. - */ -static int has_newer_microcode(void *mc, unsigned int csig, int cpf, int n= ew_rev) -{ - struct microcode_header_intel *mc_hdr =3D mc; - - if (mc_hdr->rev <=3D new_rev) - return 0; - - return intel_find_matching_signature(mc, csig, cpf); -} - static void save_microcode_patch(void *data, unsigned int size) { struct microcode_header_intel *p; @@ -645,14 +632,12 @@ static enum ucode_state apply_microcode_ return ret; } =20 -static enum ucode_state generic_load_microcode(int cpu, struct iov_iter *i= ter) +static enum ucode_state read_ucode_intel(int cpu, struct iov_iter *iter) { struct ucode_cpu_info *uci =3D ucode_cpu_info + cpu; unsigned int curr_mc_size =3D 0, new_mc_size =3D 0; - enum ucode_state ret =3D UCODE_OK; - int new_rev =3D uci->cpu_sig.rev; + int cur_rev =3D uci->cpu_sig.rev; u8 *new_mc =3D NULL, *mc =3D NULL; - unsigned int csig, cpf; =20 while (iov_iter_count(iter)) { struct microcode_header_intel mc_header; @@ -669,6 +654,7 @@ static enum ucode_state generic_load_mic pr_err("error! Bad data in microcode data file (totalsize too small)\n"= ); break; } + data_size =3D mc_size - sizeof(mc_header); if (data_size > iov_iter_count(iter)) { pr_err("error! 
Bad data in microcode data file (truncated file?)\n"); @@ -691,16 +677,17 @@ static enum ucode_state generic_load_mic break; } =20 - csig =3D uci->cpu_sig.sig; - cpf =3D uci->cpu_sig.pf; - if (has_newer_microcode(mc, csig, cpf, new_rev)) { - vfree(new_mc); - new_rev =3D mc_header.rev; - new_mc =3D mc; - new_mc_size =3D mc_size; - mc =3D NULL; /* trigger new vmalloc */ - ret =3D UCODE_NEW; - } + if (cur_rev >=3D mc_header.rev) + continue; + + if (!intel_find_matching_signature(mc, uci->cpu_sig.sig, uci->cpu_sig.pf= )) + continue; + + vfree(new_mc); + cur_rev =3D mc_header.rev; + new_mc =3D mc; + new_mc_size =3D mc_size; + mc =3D NULL; } =20 vfree(mc); @@ -720,9 +707,9 @@ static enum ucode_state generic_load_mic save_microcode_patch(new_mc, new_mc_size); =20 pr_debug("CPU%d found a matching microcode update with version 0x%x (curr= ent=3D0x%x)\n", - cpu, new_rev, uci->cpu_sig.rev); + cpu, cur_rev, uci->cpu_sig.rev); =20 - return ret; + return UCODE_NEW; } =20 static bool is_blacklisted(unsigned int cpu) @@ -771,7 +758,7 @@ static enum ucode_state request_microcod kvec.iov_base =3D (void *)firmware->data; kvec.iov_len =3D firmware->size; iov_iter_kvec(&iter, ITER_SOURCE, &kvec, 1, firmware->size); - ret =3D generic_load_microcode(cpu, &iter); + ret =3D read_ucode_intel(cpu, &iter); =20 release_firmware(firmware); From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8311BC001DB for ; Sat, 12 Aug 2023 20:01:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229570AbjHLUBC (ORCPT ); Sat, 12 Aug 2023 16:01:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230413AbjHLUAw (ORCPT ); Sat, 12 Aug 2023 16:00:52 -0400 
Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A1D4F1BD2 for ; Sat, 12 Aug 2023 13:00:29 -0700 (PDT) Message-ID: <20230812195728.304366279@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870337; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=GKoAhQM+YL5b11O1A2S3VwOaagu7XrxM1Ya616PWMyI=; b=YkMAatzwa0AVYdTUBSXVGWuw5IFw3f4GjcpHGTyHJwmcTyzVMtNlF4lQDq5qKzMawU0KeF HbwbcrBZhH/Opf5X9YvVw3HR86zTwa5ZlBK/47cyvTaS3jTYAKXMQTLK3UXslZ0DMUdMHI nYTNa/5DUSJfXXSXDmTM/T087dFkZHQwtQLDK8xk/Zs5zpe+t2lOg1JQ3JDpKF1dXIxVln 5QQ9lvupvJk+Sj+gqS5fki2wKTLcDzqQV3RggV5znu+FnscnJMzbesh9Xi8TNMZA/SNWkb SirZt3OtVpCnuEhWp3CWHjjEy0zwXIOSu6f7pf0LDgsV36WteuHpQBTZGxxk/w== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870337; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=GKoAhQM+YL5b11O1A2S3VwOaagu7XrxM1Ya616PWMyI=; b=4HQXqIMgEzVJkgqGLhuwYjukPcAlUyUYgWdixh3Kamb4uxjv+uU63NMPgbyhMtDDOEskBk cOQScMsSFDctgIBg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 13/37] x86/microcode/intel: Cleanup code further References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:56 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner Sanitize the microcode scan loop, fixup printks and move the initrd loading function next to the place where it is used and mark it __init. 
Signed-off-by: Thomas Gleixner --- V2: Fix changelog - Nikolay --- arch/x86/kernel/cpu/microcode/intel.c | 82 +++++++++++++----------------= ----- 1 file changed, 33 insertions(+), 49 deletions(-) --- --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -36,7 +36,7 @@ static const char ucode_path[] =3D "kernel static struct microcode_intel *intel_ucode_patch __read_mostly; =20 /* last level cache size per core */ -static int llc_size_per_core __ro_after_init; +static unsigned int llc_size_per_core __ro_after_init; =20 /* microcode format is extended from prescott processors */ struct extended_signature { @@ -303,37 +303,10 @@ static struct microcode_intel *scan_micr return patch; } =20 -static bool load_builtin_intel_microcode(struct cpio_data *cp) -{ - unsigned int eax =3D 1, ebx, ecx =3D 0, edx; - struct firmware fw; - char name[30]; - - if (IS_ENABLED(CONFIG_X86_32)) - return false; - - native_cpuid(&eax, &ebx, &ecx, &edx); - - sprintf(name, "intel-ucode/%02x-%02x-%02x", - x86_family(eax), x86_model(eax), x86_stepping(eax)); - - if (firmware_request_builtin(&fw, name)) { - cp->size =3D fw.size; - cp->data =3D (void *)fw.data; - return true; - } - - return false; -} - static void print_ucode_info(int old_rev, int new_rev, unsigned int date) { pr_info_once("updated early: 0x%x -> 0x%x, date =3D %04x-%02x-%02x\n", - old_rev, - new_rev, - date & 0xffff, - date >> 24, - (date >> 16) & 0xff); + old_rev, new_rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); } =20 #ifdef CONFIG_X86_32 @@ -427,6 +400,28 @@ static int apply_microcode_early(struct return 0; } =20 +static bool load_builtin_intel_microcode(struct cpio_data *cp) +{ + unsigned int eax =3D 1, ebx, ecx =3D 0, edx; + struct firmware fw; + char name[30]; + + if (IS_ENABLED(CONFIG_X86_32)) + return false; + + native_cpuid(&eax, &ebx, &ecx, &edx); + + sprintf(name, "intel-ucode/%02x-%02x-%02x", + x86_family(eax), x86_model(eax), x86_stepping(eax)); + + if 
(firmware_request_builtin(&fw, name)) { + cp->size =3D fw.size; + cp->data =3D (void *)fw.data; + return true; + } + return false; +} + int __init save_microcode_in_initrd_intel(void) { struct ucode_cpu_info uci; @@ -518,25 +513,16 @@ void load_ucode_intel_ap(void) apply_microcode_early(&uci, true); } =20 -/* Accessor for microcode pointer */ -static struct microcode_intel *ucode_get_patch(void) -{ - return intel_ucode_patch; -} - void reload_ucode_intel(void) { - struct microcode_intel *p; struct ucode_cpu_info uci; =20 intel_cpu_collect_info(&uci); =20 - p =3D ucode_get_patch(); - if (!p) + uci.mc =3D intel_ucode_patch; + if (!uci.mc) return; =20 - uci.mc =3D p; - apply_microcode_early(&uci, false); } =20 @@ -574,8 +560,7 @@ static enum ucode_state apply_microcode_ if (WARN_ON(raw_smp_processor_id() !=3D cpu)) return UCODE_ERROR; =20 - /* Look for a newer patch in our cache: */ - mc =3D ucode_get_patch(); + mc =3D intel_ucode_patch; if (!mc) { mc =3D uci->mc; if (!mc) @@ -766,18 +751,17 @@ static enum ucode_state request_microcod } =20 static struct microcode_ops microcode_intel_ops =3D { - .request_microcode_fw =3D request_microcode_fw, - .collect_cpu_info =3D collect_cpu_info, - .apply_microcode =3D apply_microcode_intel, + .request_microcode_fw =3D request_microcode_fw, + .collect_cpu_info =3D collect_cpu_info, + .apply_microcode =3D apply_microcode_intel, }; =20 -static int __init calc_llc_size_per_core(struct cpuinfo_x86 *c) +static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c) { u64 llc_size =3D c->x86_cache_size * 1024ULL; =20 do_div(llc_size, c->x86_max_cores); - - return (int)llc_size; + llc_size_per_core =3D (unsigned int)llc_size; } =20 struct microcode_ops * __init init_intel_microcode(void) @@ -790,7 +774,7 @@ struct microcode_ops * __init init_intel return NULL; } =20 - llc_size_per_core =3D calc_llc_size_per_core(c); + calc_llc_size_per_core(c); =20 return µcode_intel_ops; } From nobody Thu Dec 18 20:32:01 2025 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B736C0015E for ; Sat, 12 Aug 2023 20:01:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230435AbjHLUA7 (ORCPT ); Sat, 12 Aug 2023 16:00:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58420 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230393AbjHLUAw (ORCPT ); Sat, 12 Aug 2023 16:00:52 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F4FC30F5 for ; Sat, 12 Aug 2023 13:00:26 -0700 (PDT) Message-ID: <20230812195728.361431946@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870338; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=bPS0cgHUesN2Rv+0Fkfjz+I6MM2LDsw2pyVoNrGm9hw=; b=DMjAKttuZ4msxTMvpwMJ8PGu/17mXaAgZqsRuzDOPAqwJ+PN5Fu8tWnGmpsg2nmdIqQsI5 wpOBEyI8K6tL6Zi9ezzYMqyC7LBn3JB+Kmm428p0lvum+H+kjnGA0XY12XYevL4hdayFrZ 3hmNL2BT/EmciJkrtLHqmlMwlnUI28OYYwYKrsisrkGz+MoIQJYatRUqKZC+rAAxda1eWI 5hquNQRALDtImr9K1qdePghd5cPgo8PP0Lz1naYggVOtzF6VoxPpHrttQ2+KlL/S0Kxc/1 Qfbi9il00aqtQhnLtE+eO7RZlj4DJfhom4ItngBsDkJHlhj+HUV2Jio2Hhj08Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870338; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=bPS0cgHUesN2Rv+0Fkfjz+I6MM2LDsw2pyVoNrGm9hw=; b=4hYO+P2fw+YY34dJC2gk74zzduaFZkCOXVIfnF2wxlLz46jQ01Tzoup70vtw0fd0hpoMAZ VWaaf/8Rr6sgxwBw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , 
Arjan van de Ven , Nikolay Borisov Subject: [patch V2 14/37] x86/microcode/intel: Simplify early loading References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:58:58 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner The early loading code is overly complicated: - It scans the builtin/initrd for microcode not only on the BSP, but also on all APs during early boot and then later in the boot process it scans again to duplicate and save the microcode before initrd goes away. That's a pointless exercise because this can simply be done before bringing up the APs when the memory allocator is up and running. - Saving the microcode from within the scan loop is completely non-obvious and a leftover of the microcode cache. This can be done at the call site now which makes it obvious. Rework the code so that only the BSP scans the builtin/initrd microcode once during early boot and saves it away in an early initcall for later use.
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/cpu/microcode/core.c | 4=20 arch/x86/kernel/cpu/microcode/intel.c | 191 +++++++++++++-------------= ----- arch/x86/kernel/cpu/microcode/internal.h | 2=20 3 files changed, 86 insertions(+), 111 deletions(-) --- --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -207,10 +207,6 @@ static int __init save_microcode_in_init int ret =3D -EINVAL; =20 switch (c->x86_vendor) { - case X86_VENDOR_INTEL: - if (c->x86 >=3D 6) - ret =3D save_microcode_in_initrd_intel(); - break; case X86_VENDOR_AMD: if (c->x86 >=3D 0x10) ret =3D save_microcode_in_initrd_amd(cpuid_eax(1)); --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -33,7 +33,7 @@ static const char ucode_path[] =3D "kernel/x86/microcode/GenuineIntel.bin"; =20 /* Current microcode patch used in early patching on the APs. */ -static struct microcode_intel *intel_ucode_patch __read_mostly; +static struct microcode_intel *ucode_patch_va __read_mostly; =20 /* last level cache size per core */ static unsigned int llc_size_per_core __ro_after_init; @@ -240,31 +240,34 @@ int intel_microcode_sanity_check(void *m } EXPORT_SYMBOL_GPL(intel_microcode_sanity_check); =20 -static void save_microcode_patch(void *data, unsigned int size) +static void update_ucode_pointer(struct microcode_intel *mc) { - struct microcode_header_intel *p; - - kfree(intel_ucode_patch); - intel_ucode_patch =3D NULL; - - p =3D kmemdup(data, size, GFP_KERNEL); - if (!p) - return; + kfree(ucode_patch_va); =20 /* - * Save for early loading. On 32-bit, that needs to be a physical - * address as the APs are running from physical addresses, before - * paging has been enabled. + * Save the virtual address for early loading on 64bit + * and for eventual free on late loading. + * + * On 32-bit, that needs to store the physical address too as the + * APs are loading before paging has been enabled. 
 */
-	if (IS_ENABLED(CONFIG_X86_32))
-		intel_ucode_patch = (struct microcode_intel *)__pa_nodebug(p);
+	ucode_patch_va = mc;
+}
+
+static void save_microcode_patch(struct microcode_intel *patch)
+{
+	struct microcode_intel *mc;
+
+	mc = kmemdup(patch, get_totalsize(&patch->hdr), GFP_KERNEL);
+	if (mc)
+		update_ucode_pointer(mc);
 	else
-		intel_ucode_patch = (struct microcode_intel *)p;
+		pr_err("Unable to allocate microcode memory\n");
 }

 /* Scan CPIO for microcode matching the boot CPUs family, model, stepping */
-static struct microcode_intel *scan_microcode(void *data, size_t size,
-					      struct ucode_cpu_info *uci, bool save)
+static __init struct microcode_intel *scan_microcode(void *data, size_t size,
+						     struct ucode_cpu_info *uci)
 {
 	struct microcode_header_intel *mc_header;
 	struct microcode_intel *patch = NULL;
@@ -282,25 +285,15 @@ static struct microcode_intel *scan_micr
 		if (!intel_find_matching_signature(data, uci->cpu_sig.sig, uci->cpu_sig.pf))
 			continue;

-		/* BSP scan: Check whether there is newer microcode */
-		if (!save && cur_rev >= mc_header->rev)
-			continue;
-
-		/* Save scan: Check whether there is newer or matching microcode */
-		if (save && cur_rev != mc_header->rev)
+		/* Check whether there is newer microcode */
+		if (cur_rev >= mc_header->rev)
 			continue;

 		patch = data;
 		cur_rev = mc_header->rev;
 	}

-	if (size)
-		return NULL;
-
-	if (save && patch)
-		save_microcode_patch(patch, mc_size);
-
-	return patch;
+	return size ? NULL : patch;
 }

 static void print_ucode_info(int old_rev, int new_rev, unsigned int date)
@@ -355,14 +348,14 @@ static inline void print_ucode(int old_r
 }
 #endif

-static int apply_microcode_early(struct ucode_cpu_info *uci, bool early)
+static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci, bool early)
 {
 	struct microcode_intel *mc;
 	u32 rev, old_rev;

 	mc = uci->mc;
 	if (!mc)
-		return 0;
+		return UCODE_NFOUND;

 	/*
 	 * Save us the MSR write below - which is a particular expensive
@@ -388,7 +381,7 @@ static int apply_microcode_early(struct

 	rev = intel_get_microcode_revision();
 	if (rev != mc->hdr.rev)
-		return -1;
+		return UCODE_ERROR;

 	uci->cpu_sig.rev = rev;

@@ -397,10 +390,10 @@ static int apply_microcode_early(struct
 	else
 		print_ucode_info(old_rev, uci->cpu_sig.rev, mc->hdr.date);

-	return 0;
+	return UCODE_UPDATED;
 }

-static bool load_builtin_intel_microcode(struct cpio_data *cp)
+static __init bool load_builtin_intel_microcode(struct cpio_data *cp)
 {
 	unsigned int eax = 1, ebx, ecx = 0, edx;
 	struct firmware fw;
@@ -422,108 +415,96 @@ static bool load_builtin_intel_microcode
 	return false;
 }

-int __init save_microcode_in_initrd_intel(void)
+static __init struct microcode_intel *get_ucode_from_cpio(struct ucode_cpu_info *uci)
 {
-	struct ucode_cpu_info uci;
+	bool use_pa = IS_ENABLED(CONFIG_X86_32);
+	const char *path = ucode_path;
 	struct cpio_data cp;

-	/*
-	 * initrd is going away, clear patch ptr. We will scan the microcode one
-	 * last time before jettisoning and save a patch, if found. Then we will
-	 * update that pointer too, with a stable patch address to use when
-	 * resuming the cores.
-	 */
-	intel_ucode_patch = NULL;
+	/* Paging is not yet enabled on 32bit! */
+	if (IS_ENABLED(CONFIG_X86_32))
+		path = (const char *)__pa_nodebug(ucode_path);

+	/* Try built-in microcode first */
 	if (!load_builtin_intel_microcode(&cp))
-		cp = find_microcode_in_initrd(ucode_path, false);
+		cp = find_microcode_in_initrd(path, use_pa);

 	if (!(cp.data && cp.size))
-		return 0;
+		return NULL;

-	intel_cpu_collect_info(&uci);
+	intel_cpu_collect_info(uci);

-	scan_microcode(cp.data, cp.size, &uci, true);
-	return 0;
+	return scan_microcode(cp.data, cp.size, uci);
 }

+static struct microcode_intel *ucode_early_pa __initdata;
+
 /*
- * @res_patch, output: a pointer to the patch we found.
+ * Invoked from an early init call to save the microcode blob which was
+ * selected during early boot when mm was not usable. The microcode must be
+ * saved because initrd is going away. It's an early init call so the APs
+ * just can use the pointer and do not have to scan initrd/builtin firmware
+ * again.
  */
-static struct microcode_intel *__load_ucode_intel(struct ucode_cpu_info *uci)
+static int __init save_microcode_from_cpio(void)
 {
-	static const char *path;
-	struct cpio_data cp;
-	bool use_pa;
-
-	if (IS_ENABLED(CONFIG_X86_32)) {
-		path = (const char *)__pa_nodebug(ucode_path);
-		use_pa = true;
-	} else {
-		path = ucode_path;
-		use_pa = false;
-	}
-
-	/* try built-in microcode first */
-	if (!load_builtin_intel_microcode(&cp))
-		cp = find_microcode_in_initrd(path, use_pa);
-
-	if (!(cp.data && cp.size))
-		return NULL;
+	struct microcode_intel *mc;

-	intel_cpu_collect_info(uci);
+	if (!ucode_early_pa)
+		return 0;

-	return scan_microcode(cp.data, cp.size, uci, false);
+	mc = __va((void *)ucode_early_pa);
+	save_microcode_patch(mc);
+	return 0;
 }
+early_initcall(save_microcode_from_cpio);

+/* Load microcode on BSP from CPIO */
 void __init load_ucode_intel_bsp(void)
 {
-	struct microcode_intel *patch;
 	struct ucode_cpu_info uci;

-	patch = __load_ucode_intel(&uci);
-	if (!patch)
+	uci.mc = get_ucode_from_cpio(&uci);
+	if (!uci.mc)
 		return;

-	uci.mc = patch;
+	if (apply_microcode_early(&uci, true) != UCODE_UPDATED)
+		return;

-	apply_microcode_early(&uci, true);
+	if (IS_ENABLED(CONFIG_X86_64)) {
+		/* Store the physical address as KASLR happens after this. */
+		ucode_early_pa = (struct microcode_intel *)__pa_nodebug(uci.mc);
+	} else {
+		struct microcode_intel **uce;
+
+		/* Physical address pointer required for 32-bit */
+		uce = (struct microcode_intel **)__pa_nodebug(&ucode_early_pa);
+		/* uci.mc is the physical address of the microcode blob */
+		*uce = uci.mc;
+	}
 }

+/* Load microcode on AP bringup */
 void load_ucode_intel_ap(void)
 {
-	struct microcode_intel *patch, **iup;
 	struct ucode_cpu_info uci;

+	/* Must use physical address on 32bit as paging is not yet enabled */
+	uci.mc = ucode_patch_va;
 	if (IS_ENABLED(CONFIG_X86_32))
-		iup = (struct microcode_intel **) __pa_nodebug(&intel_ucode_patch);
-	else
-		iup = &intel_ucode_patch;
-
-	if (!*iup) {
-		patch = __load_ucode_intel(&uci);
-		if (!patch)
-			return;
-
-		*iup = patch;
-	}
-
-	uci.mc = *iup;
+		uci.mc = (struct microcode_intel *)__pa_nodebug(uci.mc);

-	apply_microcode_early(&uci, true);
+	if (uci.mc)
+		apply_microcode_early(&uci, true);
 }

+/* Reload microcode on resume */
 void reload_ucode_intel(void)
 {
-	struct ucode_cpu_info uci;
-
-	intel_cpu_collect_info(&uci);
+	struct ucode_cpu_info uci = { .mc = ucode_patch_va, };

-	uci.mc = intel_ucode_patch;
-	if (!uci.mc)
-		return;
-
-	apply_microcode_early(&uci, false);
+	if (uci.mc)
+		apply_microcode_early(&uci, false);
 }

 static int collect_cpu_info(int cpu_num, struct cpu_signature *csig)
@@ -560,7 +541,7 @@ static enum ucode_state apply_microcode_
 	if (WARN_ON(raw_smp_processor_id() != cpu))
 		return UCODE_ERROR;

-	mc = intel_ucode_patch;
+	mc = ucode_patch_va;
 	if (!mc) {
 		mc = uci->mc;
 		if (!mc)
@@ -685,11 +666,11 @@ static enum ucode_state read_ucode_intel
 	if (!new_mc)
 		return UCODE_NFOUND;

-	vfree(uci->mc);
-	uci->mc = (struct microcode_intel *)new_mc;
-	/* Save for CPU hotplug */
-	save_microcode_patch(new_mc, new_mc_size);
+	save_microcode_patch((struct microcode_intel *)new_mc);
+	uci->mc = ucode_patch_va;
+
+	vfree(new_mc);

 	pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n",
 		 cpu, cur_rev, uci->cpu_sig.rev);
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -107,13 +107,11 @@ static inline void exit_amd_microcode(vo
 #ifdef CONFIG_CPU_SUP_INTEL
 void load_ucode_intel_bsp(void);
 void load_ucode_intel_ap(void);
-int save_microcode_in_initrd_intel(void);
 void reload_ucode_intel(void);
 struct microcode_ops *init_intel_microcode(void);
 #else /* CONFIG_CPU_SUP_INTEL */
 static inline void load_ucode_intel_bsp(void) { }
 static inline void load_ucode_intel_ap(void) { }
-static inline int save_microcode_in_initrd_intel(void) { return -EINVAL; }
 static inline void reload_ucode_intel(void) { }
 static inline struct microcode_ops *init_intel_microcode(void) { return NULL; }
 #endif /* !CONFIG_CPU_SUP_INTEL */

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.418426203@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 15/37] x86/microcode/intel: Save the microcode only after a successful late-load
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:58:59 +0200 (CEST)

From: Thomas Gleixner

There are situations where the late microcode is loaded into memory, but
is not applied:

  1) The rendezvous fails
  2) The microcode is rejected by the CPUs

If either of these happens, the pointer which was updated at firmware
load time is stale and subsequent CPU hotplug operations either fail to
update or create inconsistent microcode state.
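The core idea of the fix - stage the late-load blob in a separate pointer and commit it to the hotplug pointer only on success - can be sketched in plain user-space C. The names below are hypothetical stand-ins, not the kernel's:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins: "committed" plays the role of the pointer that
 * CPU hotplug consults, "staged" the separate pointer a late load uses
 * until the update has demonstrably succeeded on all CPUs. */
static char *committed_patch;
static char *staged_patch;

/* Stage a candidate blob for a late-load attempt. */
static void stage_patch(const char *blob, size_t size)
{
	staged_patch = malloc(size);
	if (staged_patch)
		memcpy(staged_patch, blob, size);
}

/* Commit-on-success: only a zero result from the update rendezvous moves
 * the staged blob into the committed pointer; any failure discards it,
 * so the hotplug pointer can never go stale. */
static void finalize_late_load_sketch(int result)
{
	if (!result) {
		free(committed_patch);
		committed_patch = staged_patch;
	} else {
		free(staged_patch);
	}
	staged_patch = NULL;
}
```

In the series itself this commit step is the new finalize_late_load() microcode_ops callback, invoked with the stop_machine() result.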
Save the loaded microcode in a separate pointer before the late load is
attempted and, when successful, update the hotplug pointer accordingly via
a new microcode_ops callback.

Remove the pointless fallback in the loader to a microcode pointer which is
never populated.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c     |  4 ++++
 arch/x86/kernel/cpu/microcode/intel.c    | 30 +++++++++++++++---------------
 arch/x86/kernel/cpu/microcode/internal.h |  1 +
 3 files changed, 20 insertions(+), 15 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -443,6 +443,10 @@ static int microcode_reload_late(void)
 	store_cpu_caps(&prev_info);

 	ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+
+	if (microcode_ops->finalize_late_load)
+		microcode_ops->finalize_late_load(ret);
+
 	if (!ret) {
 		pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n",
 			old, boot_cpu_data.microcode);
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -34,6 +34,7 @@ static const char ucode_path[] = "kernel

 /* Current microcode patch used in early patching on the APs. */
 static struct microcode_intel *ucode_patch_va __read_mostly;
+static struct microcode_intel *ucode_patch_late __read_mostly;

 /* last level cache size per core */
 static unsigned int llc_size_per_core __ro_after_init;
@@ -541,12 +542,9 @@ static enum ucode_state apply_microcode_
 	if (WARN_ON(raw_smp_processor_id() != cpu))
 		return UCODE_ERROR;

-	mc = ucode_patch_va;
-	if (!mc) {
-		mc = uci->mc;
-		if (!mc)
-			return UCODE_NFOUND;
-	}
+	mc = ucode_patch_late;
+	if (!mc)
+		return UCODE_NFOUND;

 	/*
 	 * Save us the MSR write below - which is a particular expensive
@@ -666,15 +664,7 @@ static enum ucode_state read_ucode_intel
 	if (!new_mc)
 		return UCODE_NFOUND;

-	/* Save for CPU hotplug */
-	save_microcode_patch((struct microcode_intel *)new_mc);
-	uci->mc = ucode_patch_va;
-
-	vfree(new_mc);
-
-	pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n",
-		 cpu, cur_rev, uci->cpu_sig.rev);
-
+	ucode_patch_late = (struct microcode_intel *)new_mc;
 	return UCODE_NEW;
 }

@@ -731,10 +721,20 @@ static enum ucode_state request_microcod
 	return ret;
 }

+static void finalize_late_load(int result)
+{
+	if (!result)
+		save_microcode_patch(ucode_patch_late);
+
+	vfree(ucode_patch_late);
+	ucode_patch_late = NULL;
+}
+
 static struct microcode_ops microcode_intel_ops = {
 	.request_microcode_fw	= request_microcode_fw,
 	.collect_cpu_info	= collect_cpu_info,
 	.apply_microcode	= apply_microcode_intel,
+	.finalize_late_load	= finalize_late_load,
 };

 static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c)
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -31,6 +31,7 @@ struct microcode_ops {
 	 */
 	enum ucode_state (*apply_microcode)(int cpu);
 	int (*collect_cpu_info)(int cpu, struct cpu_signature *csig);
+	void (*finalize_late_load)(int result);
 };

 extern struct ucode_cpu_info ucode_cpu_info[];

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.475622148@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 16/37] x86/microcode/intel: Switch to kvmalloc()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:01 +0200 (CEST)

From: Thomas Gleixner

Microcode blobs are getting larger and might soon reach the kmalloc()
limit. Switch over to kvmalloc().

32-bit has to stay with kmalloc() as it needs physically contiguous memory
because the early loading runs before paging is enabled, so a sanity check
is added to ensure that.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/intel.c | 55 +++++++++++++++++++---------------
 1 file changed, 32 insertions(+), 23 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -14,7 +14,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -243,7 +242,7 @@ EXPORT_SYMBOL_GPL(intel_microcode_sanity

 static void update_ucode_pointer(struct microcode_intel *mc)
 {
-	kfree(ucode_patch_va);
+	kvfree(ucode_patch_va);

 	/*
 	 * Save the virtual address for early loading on 64bit
@@ -257,13 +256,18 @@ static void update_ucode_pointer(struct

 static void save_microcode_patch(struct microcode_intel *patch)
 {
-	struct microcode_intel *mc;
+	unsigned int size = get_totalsize(&patch->hdr);
+	struct microcode_intel *mc = NULL;
+
+	if (IS_ENABLED(CONFIG_X86_64))
+		mc = kvmemdup(patch, size, GFP_KERNEL);
+	else
+		mc = kmemdup(patch, size, GFP_KERNEL);

-	mc = kmemdup(patch, get_totalsize(&patch->hdr), GFP_KERNEL);
 	if (mc)
 		update_ucode_pointer(mc);
 	else
-		pr_err("Unable to allocate microcode memory\n");
+		pr_err("Unable to allocate microcode memory size: %u\n", size);
 }

 /* Scan CPIO for microcode matching the boot CPUs family, model, stepping */
@@ -610,36 +614,34 @@ static enum ucode_state read_ucode_intel

 	if (!copy_from_iter_full(&mc_header, sizeof(mc_header), iter)) {
 		pr_err("error! Truncated or inaccessible header in microcode data file\n");
-		break;
+		goto fail;
 	}

 	mc_size = get_totalsize(&mc_header);
 	if (mc_size < sizeof(mc_header)) {
 		pr_err("error! Bad data in microcode data file (totalsize too small)\n");
-		break;
+		goto fail;
 	}
-
 	data_size = mc_size - sizeof(mc_header);
 	if (data_size > iov_iter_count(iter)) {
 		pr_err("error! Bad data in microcode data file (truncated file?)\n");
-		break;
+		goto fail;
 	}

 	/* For performance reasons, reuse mc area when possible */
 	if (!mc || mc_size > curr_mc_size) {
-		vfree(mc);
-		mc = vmalloc(mc_size);
+		kvfree(mc);
+		mc = kvmalloc(mc_size, GFP_KERNEL);
 		if (!mc)
-			break;
+			goto fail;
 		curr_mc_size = mc_size;
 	}

 	memcpy(mc, &mc_header, sizeof(mc_header));
 	data = mc + sizeof(mc_header);
 	if (!copy_from_iter_full(data, data_size, iter) ||
-	    intel_microcode_sanity_check(mc, true, MC_HEADER_TYPE_MICROCODE) < 0) {
-		break;
-	}
+	    intel_microcode_sanity_check(mc, true, MC_HEADER_TYPE_MICROCODE) < 0)
+		goto fail;

 	if (cur_rev >= mc_header.rev)
 		continue;
@@ -647,25 +649,32 @@ static enum ucode_state read_ucode_intel
 	if (!intel_find_matching_signature(mc, uci->cpu_sig.sig, uci->cpu_sig.pf))
 		continue;

-	vfree(new_mc);
+	kvfree(new_mc);
 	cur_rev = mc_header.rev;
 	new_mc = mc;
 	new_mc_size = mc_size;
 	mc = NULL;
 	}

-	vfree(mc);
+	if (iov_iter_count(iter))
+		goto fail;

-	if (iov_iter_count(iter)) {
-		vfree(new_mc);
-		return UCODE_ERROR;
+	if (IS_ENABLED(CONFIG_X86_32) && new_mc && is_vmalloc_addr(new_mc)) {
+		pr_err("Microcode too large for 32-bit mode\n");
+		goto fail;
 	}

+	kvfree(mc);
 	if (!new_mc)
 		return UCODE_NFOUND;

 	ucode_patch_late = (struct microcode_intel *)new_mc;
 	return UCODE_NEW;
+
+fail:
+	kvfree(mc);
+	kvfree(new_mc);
+	return UCODE_ERROR;
 }

 static bool is_blacklisted(unsigned int cpu)
@@ -724,9 +733,9 @@ static enum ucode_state request_microcod
 static void finalize_late_load(int result)
 {
 	if (!result)
-		save_microcode_patch(ucode_patch_late);
-
-	vfree(ucode_patch_late);
+		update_ucode_pointer(ucode_patch_late);
+	else
+		kvfree(ucode_patch_late);
 	ucode_patch_late = NULL;
 }

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.533966298@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 17/37] x86/microcode/intel: Unify microcode apply() functions
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:02 +0200 (CEST)

Deduplicate the early and late apply() functions.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/intel.c | 106 +++++++++++-----------------
 1 file changed, 36 insertions(+), 70 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -353,12 +353,11 @@ static inline void print_ucode(int old_r
 }
 #endif

-static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci, bool early)
+static enum ucode_state apply_microcode(struct ucode_cpu_info *uci, struct microcode_intel *mc,
+					u32 *cur_rev)
 {
-	struct microcode_intel *mc;
-	u32 rev, old_rev;
+	u32 rev;

-	mc = uci->mc;
 	if (!mc)
 		return UCODE_NFOUND;

@@ -367,14 +366,12 @@ static enum ucode_state apply_microcode_
 	 * operation - when the other hyperthread has updated the microcode
 	 * already.
 	 */
-	rev = intel_get_microcode_revision();
-	if (rev >= mc->hdr.rev) {
-		uci->cpu_sig.rev = rev;
+	*cur_rev = intel_get_microcode_revision();
+	if (*cur_rev >= mc->hdr.rev) {
+		uci->cpu_sig.rev = *cur_rev;
 		return UCODE_OK;
 	}

-	old_rev = rev;
-
 	/*
 	 * Writeback and invalidate caches before updating microcode to avoid
 	 * internal issues depending on what the microcode is updating.
@@ -389,13 +386,23 @@ static enum ucode_state apply_microcode_
 		return UCODE_ERROR;

 	uci->cpu_sig.rev = rev;
+	return UCODE_UPDATED;
+}

-	if (early)
-		print_ucode(old_rev, uci->cpu_sig.rev, mc->hdr.date);
-	else
-		print_ucode_info(old_rev, uci->cpu_sig.rev, mc->hdr.date);
+static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci, bool early)
+{
+	struct microcode_intel *mc = uci->mc;
+	enum ucode_state ret;
+	u32 cur_rev;

-	return UCODE_UPDATED;
+	ret = apply_microcode(uci, mc, &cur_rev);
+	if (ret == UCODE_UPDATED) {
+		if (early)
+			print_ucode(cur_rev, uci->cpu_sig.rev, mc->hdr.date);
+		else
+			print_ucode_info(cur_rev, uci->cpu_sig.rev, mc->hdr.date);
+	}
+	return ret;
 }

 static __init bool load_builtin_intel_microcode(struct cpio_data *cp)
@@ -532,70 +539,29 @@ static int collect_cpu_info(int cpu_num,
 	return 0;
 }

-static enum ucode_state apply_microcode_intel(int cpu)
+static enum ucode_state apply_microcode_late(int cpu)
 {
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-	struct cpuinfo_x86 *c = &cpu_data(cpu);
-	bool bsp = c->cpu_index == boot_cpu_data.cpu_index;
-	struct microcode_intel *mc;
+	struct microcode_intel *mc = ucode_patch_late;
 	enum ucode_state ret;
-	static int prev_rev;
-	u32 rev;
-
-	/* We should bind the task to the CPU */
-	if (WARN_ON(raw_smp_processor_id() != cpu))
-		return UCODE_ERROR;
-
-	mc = ucode_patch_late;
-	if (!mc)
-		return UCODE_NFOUND;
+	u32 cur_rev;

-	/*
-	 * Save us the MSR write below - which is a particular expensive
-	 * operation - when the other hyperthread has updated the microcode
-	 * already.
-	 */
-	rev = intel_get_microcode_revision();
-	if (rev >= mc->hdr.rev) {
-		ret = UCODE_OK;
-		goto out;
-	}
-
-	/*
-	 * Writeback and invalidate caches before updating microcode to avoid
-	 * internal issues depending on what the microcode is updating.
-	 */
-	native_wbinvd();
-
-	/* write microcode via MSR 0x79 */
-	wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
-
-	rev = intel_get_microcode_revision();
-
-	if (rev != mc->hdr.rev) {
-		pr_err("CPU%d update to revision 0x%x failed\n",
-		       cpu, mc->hdr.rev);
+	if (WARN_ON_ONCE(smp_processor_id() != cpu))
 		return UCODE_ERROR;
-	}

-	if (bsp && rev != prev_rev) {
-		pr_info("updated to revision 0x%x, date = %04x-%02x-%02x\n",
-			rev,
-			mc->hdr.date & 0xffff,
-			mc->hdr.date >> 24,
+	ret = apply_microcode(uci, mc, &cur_rev);
+	if (ret != UCODE_UPDATED && ret != UCODE_OK)
+		return ret;
+
+	if (!cpu && uci->cpu_sig.rev != cur_rev) {
+		pr_info("Updated to revision 0x%x, date = %04x-%02x-%02x\n",
+			uci->cpu_sig.rev, mc->hdr.date & 0xffff, mc->hdr.date >> 24,
 			(mc->hdr.date >> 16) & 0xff);
-		prev_rev = rev;
 	}

-	ret = UCODE_UPDATED;
-
-out:
-	uci->cpu_sig.rev = rev;
-	c->microcode = rev;
-
-	/* Update boot_cpu_data's revision too, if we're on the BSP: */
-	if (bsp)
-		boot_cpu_data.microcode = rev;
+	cpu_data(cpu).microcode = uci->cpu_sig.rev;
+	if (!cpu)
+		boot_cpu_data.microcode = uci->cpu_sig.rev;

 	return ret;
 }
@@ -742,7 +708,7 @@ static void finalize_late_load(int resul
 static struct microcode_ops microcode_intel_ops = {
 	.request_microcode_fw	= request_microcode_fw,
 	.collect_cpu_info	= collect_cpu_info,
-	.apply_microcode	= apply_microcode_intel,
+	.apply_microcode	= apply_microcode_late,
 	.finalize_late_load	= finalize_late_load,
 };

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.592367910@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 18/37] x86/microcode/intel: Rework intel_cpu_collect_info()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:04 +0200 (CEST)

Nothing needs struct ucode_cpu_info.
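The processor-flags derivation that this rework keeps (visible in the diff as `1 << ((val[1] >> 18) & 7)`) can be illustrated standalone. This is a user-space sketch; in the kernel the value comes from the high word of MSR_IA32_PLATFORM_ID via native_rdmsr():

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the processor-flags derivation: bits 52:50 of
 * MSR_IA32_PLATFORM_ID (bits 20:18 of its high 32-bit word) select one
 * of eight platform IDs, which is reported as a one-hot mask. */
static uint32_t platform_flags_from_msr_hi(uint32_t msr_hi)
{
	return 1u << ((msr_hi >> 18) & 7);
}
```

A microcode file entry matches a CPU when its platform-flags mask has this bit set, so one blob can cover several platform IDs at once.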
Make it take struct cpu_signature, let it return a boolean and simplify
the implementation. Rename it now that the silly name clash with
collect_cpu_info() is gone.

Signed-off-by: Thomas Gleixner
---
V2: New patch
---
 arch/x86/include/asm/cpu.h            |  4 +--
 arch/x86/kernel/cpu/microcode/intel.c | 39 ++++++++++-------------------
 drivers/platform/x86/intel/ifs/load.c |  8 ++----
 3 files changed, 17 insertions(+), 34 deletions(-)
---
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -73,9 +73,9 @@ static inline void init_ia32_feat_ctl(st

 extern __noendbr void cet_disable(void);

-struct ucode_cpu_info;
+struct cpu_signature;

-int intel_cpu_collect_info(struct ucode_cpu_info *uci);
+void intel_collect_cpu_info(struct cpu_signature *sig);

 static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
 					      unsigned int s2, unsigned int p2)
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -66,36 +66,21 @@ static inline unsigned int exttable_size
 	return et->count * EXT_SIGNATURE_SIZE + EXT_HEADER_SIZE;
 }

-int intel_cpu_collect_info(struct ucode_cpu_info *uci)
+void intel_collect_cpu_info(struct cpu_signature *sig)
 {
-	unsigned int val[2];
-	unsigned int family, model;
-	struct cpu_signature csig = { 0 };
-	unsigned int eax, ebx, ecx, edx;
+	sig->sig = cpuid_eax(1);
+	sig->pf = 0;
+	sig->rev = intel_get_microcode_revision();

-	memset(uci, 0, sizeof(*uci));
+	if (x86_model(sig->sig) >= 5 || x86_family(sig->sig) > 6) {
+		unsigned int val[2];

-	eax = 0x00000001;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-	csig.sig = eax;
-
-	family = x86_family(eax);
-	model = x86_model(eax);
-
-	if (model >= 5 || family > 6) {
 		/* get processor flags from MSR 0x17 */
 		native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
-		csig.pf = 1 << ((val[1] >> 18) & 7);
+		sig->pf = 1 << ((val[1] >> 18) & 7);
 	}
-
-	csig.rev = intel_get_microcode_revision();
-
-	uci->cpu_sig = csig;
-
-	return 0;
 }
-EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
+EXPORT_SYMBOL_GPL(intel_collect_cpu_info);

 /*
  * Returns 1 if update has been found, 0 otherwise.
@@ -318,11 +303,11 @@ static int early_old_rev;
  */
 void show_ucode_info_early(void)
 {
-	struct ucode_cpu_info uci;
+	struct cpu_signature sig;

 	if (delay_ucode_info) {
-		intel_cpu_collect_info(&uci);
-		print_ucode_info(early_old_rev, uci.cpu_sig.rev, current_mc_date);
+		intel_collect_cpu_info(&sig);
+		print_ucode_info(early_old_rev, sig.rev, current_mc_date);
 		delay_ucode_info = 0;
 	}
 }
@@ -444,7 +429,7 @@ static __init struct microcode_intel *ge
 	if (!(cp.data && cp.size))
 		return NULL;

-	intel_cpu_collect_info(uci);
+	intel_collect_cpu_info(&uci->cpu_sig);

 	return scan_microcode(cp.data, cp.size, uci);
 }
--- a/drivers/platform/x86/intel/ifs/load.c
+++ b/drivers/platform/x86/intel/ifs/load.c
@@ -227,7 +227,7 @@ static int scan_chunks_sanity_check(stru

 static int image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
 {
-	struct ucode_cpu_info uci;
+	struct cpu_signature sig;

 	/* Provide a specific error message when loading an older/unsupported image */
 	if (data->hdrver != MC_HEADER_TYPE_IFS) {
@@ -240,11 +240,9 @@ static int image_sanity_check(struct dev
 		return -EINVAL;
 	}

-	intel_cpu_collect_info(&uci);
+	intel_collect_cpu_info(&sig);

-	if (!intel_find_matching_signature((void *)data,
-					   uci.cpu_sig.sig,
-					   uci.cpu_sig.pf)) {
+	if (!intel_find_matching_signature((void *)data, sig.sig, sig.pf)) {
 		dev_err(dev, "cpu signature, processor flags not matching\n");
 		return -EINVAL;
 	}

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.649375687@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 19/37] x86/microcode/intel: Reuse intel_cpu_collect_info()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:06 +0200 (CEST)

There is no point in an almost-duplicate function.

Signed-off-by: Thomas Gleixner
---
V2: New patch
---
 arch/x86/kernel/cpu/microcode/intel.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -506,21 +506,7 @@ void reload_ucode_intel(void)

 static int collect_cpu_info(int cpu_num, struct cpu_signature *csig)
 {
-	struct cpuinfo_x86 *c = &cpu_data(cpu_num);
-	unsigned int val[2];
-
-	memset(csig, 0, sizeof(*csig));
-
-	csig->sig = cpuid_eax(0x00000001);
-
-	if ((c->x86_model >= 5) || (c->x86 > 6)) {
-		/* get processor flags from MSR 0x17 */
-		rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
-		csig->pf = 1 << ((val[1] >> 18) & 7);
-	}
-
-	csig->rev = c->microcode;
-
+	intel_collect_cpu_info(csig);
 	return 0;
 }

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.708313227@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 20/37] x86/microcode/intel: Rework intel_find_matching_signature()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:07 +0200 (CEST)

Take a cpu_signature argument and work from there. Move the match() helper
next to the callsite as there is no point in having it in a header.
Signed-off-by: Thomas Gleixner --- V2: New patch --- arch/x86/include/asm/cpu.h | 16 +--------------- arch/x86/kernel/cpu/microcode/intel.c | 31 +++++++++++++++++++----------= -- drivers/platform/x86/intel/ifs/load.c | 2 +- 3 files changed, 21 insertions(+), 28 deletions(-) --- a/arch/x86/include/asm/cpu.h +++ b/arch/x86/include/asm/cpu.h @@ -77,22 +77,8 @@ struct cpu_signature; =20 void intel_collect_cpu_info(struct cpu_signature *sig); =20 -static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned in= t p1, - unsigned int s2, unsigned int p2) -{ - if (s1 !=3D s2) - return false; - - /* Processor flags are either both 0 ... */ - if (!p1 && !p2) - return true; - - /* ... or they intersect. */ - return p1 & p2; -} - extern u64 x86_read_arch_cap_msr(void); -int intel_find_matching_signature(void *mc, unsigned int csig, int cpf); +bool intel_find_matching_signature(void *mc, struct cpu_signature *sig); int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type); =20 extern struct cpumask cpus_stop_mask; --- a/arch/x86/kernel/cpu/microcode/intel.c +++ b/arch/x86/kernel/cpu/microcode/intel.c @@ -82,29 +82,36 @@ void intel_collect_cpu_info(struct cpu_s } EXPORT_SYMBOL_GPL(intel_collect_cpu_info); =20 -/* - * Returns 1 if update has been found, 0 otherwise. - */ -int intel_find_matching_signature(void *mc, unsigned int csig, int cpf) +static inline bool cpu_signatures_match(struct cpu_signature *s1, unsigned= int sig2, + unsigned int pf2) +{ + if (s1->sig !=3D sig2) + return false; + + /* Processor flags are either both 0 or they intersect. 
*/ + return ((!s1->pf && !pf2) || (s1->pf & pf2)); +} + +bool intel_find_matching_signature(void *mc, struct cpu_signature *sig) { struct microcode_header_intel *mc_hdr =3D mc; - struct extended_sigtable *ext_hdr; struct extended_signature *ext_sig; + struct extended_sigtable *ext_hdr; int i; =20 - if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf)) - return 1; + if (cpu_signatures_match(sig, mc_hdr->sig, mc_hdr->pf)) + return true; =20 /* Look for ext. headers: */ if (get_totalsize(mc_hdr) <=3D intel_microcode_get_datasize(mc_hdr) + MC_= HEADER_SIZE) - return 0; + return false; =20 ext_hdr =3D mc + intel_microcode_get_datasize(mc_hdr) + MC_HEADER_SIZE; ext_sig =3D (void *)ext_hdr + EXT_HEADER_SIZE; =20 for (i =3D 0; i < ext_hdr->count; i++) { - if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf)) - return 1; + if (cpu_signatures_match(sig, ext_sig->sig, ext_sig->pf)) + return true; ext_sig++; } return 0; @@ -272,7 +279,7 @@ static __init struct microcode_intel *sc intel_microcode_sanity_check(data, false, MC_HEADER_TYPE_MICROCODE) = < 0) break; =20 - if (!intel_find_matching_signature(data, uci->cpu_sig.sig, uci->cpu_sig.= pf)) + if (!intel_find_matching_signature(data, &uci->cpu_sig)) continue; =20 /* Check whether there is newer microcode */ @@ -583,7 +590,7 @@ static enum ucode_state read_ucode_intel if (cur_rev >=3D mc_header.rev) continue; =20 - if (!intel_find_matching_signature(mc, uci->cpu_sig.sig, uci->cpu_sig.pf= )) + if (!intel_find_matching_signature(mc, &uci->cpu_sig)) continue; =20 kvfree(new_mc); --- a/drivers/platform/x86/intel/ifs/load.c +++ b/drivers/platform/x86/intel/ifs/load.c @@ -242,7 +242,7 @@ static int image_sanity_check(struct dev =20 intel_collect_cpu_info(&sig); =20 - if (!intel_find_matching_signature((void *)data, sig.sig, sig.pf)) { + if (!intel_find_matching_signature((void *)data, &sig)) { dev_err(dev, "cpu signature, processor flags not matching\n"); return -EINVAL; } From nobody Thu Dec 18 
20:32:01 2025
Message-ID: <20230812195728.767559362@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 21/37] x86/microcode/amd: Read revision from hardware in collect_cpu_info_amd()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:09 +0200 (CEST)

Prepare to decrapify the core initialization logic which invokes
microcode_ops::apply_microcode() several times just to set
cpu_data::microcode.

Signed-off-by: Thomas Gleixner
---
V2: New patch
---
 arch/x86/kernel/cpu/microcode/amd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -673,12 +673,12 @@ void reload_ucode_amd(unsigned int cpu)

 static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
 {
-	struct cpuinfo_x86 *c = &cpu_data(cpu);
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+	u32 dummy __always_unused;
 	struct ucode_patch *p;

 	csig->sig = cpuid_eax(0x00000001);
-	csig->rev = c->microcode;
+	rdmsr(MSR_AMD64_PATCH_LEVEL, csig->rev, dummy);

 	/*
	 * a patch could have been loaded early, set uci->mc so that
From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.824553325@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 22/37] x86/microcode: Remove pointless apply() invocation
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:10 +0200 (CEST)

Microcode is applied on the APs during early bringup. There is no point
in trying to apply the microcode again during hotplug operations, nor at
the point where the microcode device is initialized.

Collect CPU info and microcode revision in setup_online_cpu() for now.
This will move to the CPU hotplug callback in the next step.

Signed-off-by: Thomas Gleixner
---
V2: New patch
---
 arch/x86/kernel/cpu/microcode/core.c | 34 ++++++------------------------
 include/linux/cpuhotplug.h           |  1 -
 2 files changed, 6 insertions(+), 29 deletions(-)

--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -533,17 +533,6 @@ static void microcode_fini_cpu(int cpu)
 	microcode_ops->microcode_fini_cpu(cpu);
 }

-static enum ucode_state microcode_init_cpu(int cpu)
-{
-	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-
-	memset(uci, 0, sizeof(*uci));
-
-	microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
-
-	return microcode_ops->apply_microcode(cpu);
-}
-
 /**
  * microcode_bsp_resume - Update boot CPU microcode during resume.
  */
@@ -562,15 +551,6 @@ static struct syscore_ops mc_syscore_ops
 	.resume = microcode_bsp_resume,
 };

-static int mc_cpu_starting(unsigned int cpu)
-{
-	enum ucode_state err = microcode_ops->apply_microcode(cpu);
-
-	pr_debug("%s: CPU%d, err: %d\n", __func__, cpu, err);
-
-	return err == UCODE_ERROR;
-}
-
 static int mc_cpu_online(unsigned int cpu)
 {
 	struct device *dev = get_cpu_device(cpu);
@@ -598,14 +578,14 @@ static int mc_cpu_down_prep(unsigned int
 static void setup_online_cpu(struct work_struct *work)
 {
 	int cpu = smp_processor_id();
-	enum ucode_state err;
+	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

-	err = microcode_init_cpu(cpu);
-	if (err == UCODE_ERROR) {
-		pr_err("Error applying microcode on CPU%d\n", cpu);
-		return;
-	}
+	memset(uci, 0, sizeof(*uci));

+	microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
+	this_cpu_write(cpu_info.microcode, uci->cpu_sig.rev);
+	if (!cpu)
+		boot_cpu_data.microcode = uci->cpu_sig.rev;
 	mc_cpu_online(cpu);
 }

@@ -658,8 +638,6 @@ static int __init microcode_init(void)
 	schedule_on_each_cpu(setup_online_cpu);

 	register_syscore_ops(&mc_syscore_ops);
-	cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting",
-				  mc_cpu_starting, NULL);
 	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
 				  mc_cpu_online, mc_cpu_down_prep);

--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -156,7 +156,6 @@ enum cpuhp_state {
 	CPUHP_AP_IRQ_LOONGARCH_STARTING,
 	CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
 	CPUHP_AP_ARM_MVEBU_COHERENCY,
-	CPUHP_AP_MICROCODE_LOADER,
 	CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
 	CPUHP_AP_PERF_X86_STARTING,
 	CPUHP_AP_PERF_X86_AMD_IBS_STARTING,
From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.881571946@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 23/37] x86/microcode: Get rid of the schedule work indirection
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:12 +0200 (CEST)

Scheduling work on all CPUs to collect the microcode information is just
another extra step for no value. Let the CPU hotplug callback
registration do it.
Signed-off-by: Thomas Gleixner
---
V2: New patch
---
 arch/x86/kernel/cpu/microcode/core.c | 28 +++++++++-------------------
 1 file changed, 9 insertions(+), 19 deletions(-)

--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -553,8 +553,15 @@ static struct syscore_ops mc_syscore_ops

 static int mc_cpu_online(unsigned int cpu)
 {
+	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
 	struct device *dev = get_cpu_device(cpu);

+	memset(uci, 0, sizeof(*uci));
+	microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
+	this_cpu_write(cpu_info.microcode, uci->cpu_sig.rev);
+	if (!cpu)
+		boot_cpu_data.microcode = uci->cpu_sig.rev;
+
 	if (sysfs_create_group(&dev->kobj, &mc_attr_group))
 		pr_err("Failed to create group for CPU%d\n", cpu);
 	return 0;
@@ -575,20 +582,6 @@ static int mc_cpu_down_prep(unsigned int
 	return 0;
 }

-static void setup_online_cpu(struct work_struct *work)
-{
-	int cpu = smp_processor_id();
-	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-
-	memset(uci, 0, sizeof(*uci));
-
-	microcode_ops->collect_cpu_info(cpu, &uci->cpu_sig);
-	this_cpu_write(cpu_info.microcode, uci->cpu_sig.rev);
-	if (!cpu)
-		boot_cpu_data.microcode = uci->cpu_sig.rev;
-	mc_cpu_online(cpu);
-}
-
 static struct attribute *cpu_root_microcode_attrs[] = {
 #ifdef CONFIG_MICROCODE_LATE_LOADING
 	&dev_attr_reload.attr,
@@ -634,12 +627,9 @@ static int __init microcode_init(void)
 		}
 	}

-	/* Do per-CPU setup */
-	schedule_on_each_cpu(setup_online_cpu);
-
 	register_syscore_ops(&mc_syscore_ops);
-	cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
-				  mc_cpu_online, mc_cpu_down_prep);
+	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
+			  mc_cpu_online, mc_cpu_down_prep);

 	pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION);
From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195728.939405357@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 24/37] x86/microcode: Clean up mc_cpu_down_prep()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:13 +0200 (CEST)

This function has nothing to do with suspend. It's a hotplug callback.
Remove the bogus comment.

Drop the pointless debug printk. The hotplug core provides tracepoints
which track the invocation of those callbacks.

Signed-off-by: Thomas Gleixner
---
V2: New patch
---
 arch/x86/kernel/cpu/microcode/core.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -569,16 +569,10 @@ static int mc_cpu_online(unsigned int cp

 static int mc_cpu_down_prep(unsigned int cpu)
 {
-	struct device *dev;
-
-	dev = get_cpu_device(cpu);
+	struct device *dev = get_cpu_device(cpu);

 	microcode_fini_cpu(cpu);
-
-	/* Suspend is in progress, only remove the interface */
 	sysfs_remove_group(&dev->kobj, &mc_attr_group);
-	pr_debug("%s: CPU%d\n", __func__, cpu);
-
 	return 0;
 }
From nobody Thu Dec 18 20:32:01 2025
Message-ID:
<20230812195728.992461937@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870355; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Q4ru7NAfdX5E2q0WOPskDN2at23E5jYbJEoHUeyYgF4=; b=g8wqhFz16u/bxD/KdJVv7xGcI3IuXOJFFJ2R2yi4mks6Qo3ISIYeiu8BrugMN5I4iRm0r3 FlICq/KdXke7agPaJ1qpLFNaE0wZ/sDDMgv3oTvpff9NaYUmT7EmUIY4vOkDnoxU1Y0Vfz mPUwi/eN0qgOwN1ccowpSmVwqYL+BX2q47szfqLAiIKOPDxJTypb3NLPlOoleIS3II2BMk +y6YUZdQ/NwgYUD4Mxs2laBCSRAu61DcrUliaPu3Uap5zoRBbmVW9Er+i24FIM8ovIGBy2 323cZqBaRPPRtmoC2pY6HuXFtAmdx3oaJWgUFPibSwekJRO+0IeVJvx/Jw0m0Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870355; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Q4ru7NAfdX5E2q0WOPskDN2at23E5jYbJEoHUeyYgF4=; b=qGeCLME5xdMgvmONjFqr/uFiFkmWK3EV7ltpB7b5rpgZCG9yZRxUgsiROknoX+no7yNb+Z etBwBEuuNXQ6SuBA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 25/37] x86/microcode: Handle "nosmt" correctly References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:59:15 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner On CPUs where microcode loading is not NMI safe the SMT sibling which is parked in one of the play_dead() variants, these parked CPUs still react on NMIs. So if a NMI hits while the primary thread updates the microcode the resulting behaviour is undefined. The default play_dead() implementation on modern CPUs is using MWAIT, which is not guaranteed to be safe against an microcode update which affects MWAIT. 
Take the cpus_booted_once_mask into account to detect this case and refuse to load late if the vendor specific driver does not advertise that late loading is NMI safe. AMD stated that this is safe, so mark the AMD driver accordingly. This requirement will be partially lifted in later changes. Signed-off-by: Thomas Gleixner --- arch/x86/Kconfig | 2 - arch/x86/kernel/cpu/microcode/amd.c | 9 +++-- arch/x86/kernel/cpu/microcode/core.c | 51 +++++++++++++++++++-------= ----- arch/x86/kernel/cpu/microcode/internal.h | 13 +++---- 4 files changed, 44 insertions(+), 31 deletions(-) --- --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1314,7 +1314,7 @@ config MICROCODE config MICROCODE_LATE_LOADING bool "Late microcode loading (DANGEROUS)" default n - depends on MICROCODE + depends on MICROCODE && SMP help Loading microcode late, when the system is up and executing instructions is a tricky business and should be avoided if possible. Just the sequen= ce --- a/arch/x86/kernel/cpu/microcode/amd.c +++ b/arch/x86/kernel/cpu/microcode/amd.c @@ -948,10 +948,11 @@ static void microcode_fini_cpu_amd(int c } =20 static struct microcode_ops microcode_amd_ops =3D { - .request_microcode_fw =3D request_microcode_amd, - .collect_cpu_info =3D collect_cpu_info_amd, - .apply_microcode =3D apply_microcode_amd, - .microcode_fini_cpu =3D microcode_fini_cpu_amd, + .request_microcode_fw =3D request_microcode_amd, + .collect_cpu_info =3D collect_cpu_info_amd, + .apply_microcode =3D apply_microcode_amd, + .microcode_fini_cpu =3D microcode_fini_cpu_amd, + .nmi_safe =3D true, }; =20 struct microcode_ops * __init init_amd_microcode(void) --- a/arch/x86/kernel/cpu/microcode/core.c +++ b/arch/x86/kernel/cpu/microcode/core.c @@ -326,23 +326,6 @@ static struct platform_device *microcode */ #define SPINUNIT 100 /* 100 nsec */ =20 -static int check_online_cpus(void) -{ - unsigned int cpu; - - /* - * Make sure all CPUs are online. 
It's fine for SMT to be disabled if - * all the primary threads are still online. - */ - for_each_present_cpu(cpu) { - if (topology_is_primary_thread(cpu) && !cpu_online(cpu)) { - pr_err("Not all CPUs online, aborting microcode update.\n"); - return -EINVAL; - } - } - - return 0; -} =20 static atomic_t late_cpus_in; static atomic_t late_cpus_out; @@ -459,6 +442,35 @@ static int microcode_reload_late(void) return ret; } =20 +/* + * Ensure that all required CPUs which are present and have been booted + * once are online. + * + * To pass this check, all primary threads must be online. + * + * If the microcode load is not safe against NMI then all SMT threads + * must be online as well because they still react on NMI when they are + * soft-offlined and parked in one of the play_dead() variants. So if a + * NMI hits while the primary thread updates the microcode the resulting + * behaviour is undefined. The default play_dead() implementation on + * modern CPUs is using MWAIT, which is also not guaranteed to be safe + * against a microcode update which affects MWAIT. 
+ */ +static bool ensure_cpus_are_online(void) +{ + unsigned int cpu; + + for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) { + if (!cpu_online(cpu)) { + if (topology_is_primary_thread(cpu) || !microcode_ops->nmi_safe) { + pr_err("CPU %u not online\n", cpu); + return false; + } + } + } + return true; +} + static ssize_t reload_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t size) @@ -474,9 +486,10 @@ static ssize_t reload_store(struct devic =20 cpus_read_lock(); =20 - ret =3D check_online_cpus(); - if (ret) + if (!ensure_cpus_are_online()) { + ret =3D -EBUSY; goto put; + } =20 tmp_ret =3D microcode_ops->request_microcode_fw(bsp, µcode_pdev->dev= ); if (tmp_ret !=3D UCODE_NEW) --- a/arch/x86/kernel/cpu/microcode/internal.h +++ b/arch/x86/kernel/cpu/microcode/internal.h @@ -20,18 +20,17 @@ enum ucode_state { =20 struct microcode_ops { enum ucode_state (*request_microcode_fw)(int cpu, struct device *dev); - void (*microcode_fini_cpu)(int cpu); =20 /* - * The generic 'microcode_core' part guarantees that - * the callbacks below run on a target cpu when they - * are being called. + * The generic 'microcode_core' part guarantees that the callbacks + * below run on a target cpu when they are being called. * See also the "Synchronization" section in microcode_core.c. 
*/ - enum ucode_state (*apply_microcode)(int cpu); - int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); - void (*finalize_late_load)(int result); + enum ucode_state (*apply_microcode)(int cpu); + int (*collect_cpu_info)(int cpu, struct cpu_signature *csig); + void (*finalize_late_load)(int result); + unsigned int nmi_safe : 1; }; =20 extern struct ucode_cpu_info ucode_cpu_info[]; From nobody Thu Dec 18 20:32:01 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DDB89C0015E for ; Sat, 12 Aug 2023 20:02:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231126AbjHLUCO (ORCPT ); Sat, 12 Aug 2023 16:02:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230291AbjHLUCC (ORCPT ); Sat, 12 Aug 2023 16:02:02 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C2C8230DA for ; Sat, 12 Aug 2023 13:01:36 -0700 (PDT) Message-ID: <20230812195729.045205660@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870357; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=qicvq4u13+GKL+4eW6ZEQQqFYpQClLZfFC5XYmTA1Qw=; b=WUbmRsjME0IAcJ4TLm5gjoy8Ty3xcaF6PxdgNtVB880OoxTkVwhgxR2q+4zjxyk/QbaAPR 0tf3jBHmptrq0GUk6cZgKiCv8kyIEZro9zYLPkP6ViRUgvsU2l8bhMD9o/ZzrziLekICTR 4VQifYJca2l8ayroOa1Vmr5vmXF/q2/Fx6LgSzEaS7lyWe6ege4JDuAkdG9K7n1cBA52ab 5ZwZ/F1eSWizadFafq5kiy2kKNmqvZPE0LE7xTgcuBTlmPE+DYPboW4b33XqcVHvAyf2zk p/hDR4ir5KydRARjipJ+3vfaL+FLtwx9AGLwnRgCHnbUyWqZk3uPzzktpbY5PQ== DKIM-Signature: v=1; 
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 26/37] x86/microcode: Clarify the late load logic
Date: Sat, 12 Aug 2023 21:59:16 +0200 (CEST)

reload_store() is way too complicated. Split the inner workings out and
make the following enhancements:

 - Taint the kernel only when the microcode was actually updated. If,
   e.g., the rendezvous fails, then nothing happened and there is no
   reason for tainting.
 - Return useful error codes

Signed-off-by: Thomas Gleixner
Reviewed-by: Nikolay Borisov
---
 arch/x86/kernel/cpu/microcode/core.c | 39 +++++++++++++++----------------
 1 file changed, 17 insertions(+), 22 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -434,11 +434,11 @@ static int microcode_reload_late(void)
 		pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n",
 			old, boot_cpu_data.microcode);
 		microcode_check(&prev_info);
+		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
 	} else {
 		pr_info("Reload failed, current microcode revision: 0x%x\n",
 			boot_cpu_data.microcode);
 	}
-
 	return ret;
 }
 
@@ -471,40 +471,35 @@ static bool ensure_cpus_are_online(void)
 	return true;
 }
 
+static int ucode_load_late_locked(void)
+{
+	int ret;
+
+	if (!ensure_cpus_are_online())
+		return -EBUSY;
+
+	ret = microcode_ops->request_microcode_fw(0, &microcode_pdev->dev);
+	if (ret != UCODE_NEW)
+		return ret == UCODE_NFOUND ? -ENOENT : -EBADFD;
+	return microcode_reload_late();
+}
+
 static ssize_t reload_store(struct device *dev,
 			    struct device_attribute *attr,
 			    const char *buf, size_t size)
 {
-	enum ucode_state tmp_ret = UCODE_OK;
-	int bsp = boot_cpu_data.cpu_index;
 	unsigned long val;
-	ssize_t ret = 0;
+	ssize_t ret;
 
 	ret = kstrtoul(buf, 0, &val);
 	if (ret || val != 1)
 		return -EINVAL;
 
 	cpus_read_lock();
-
-	if (!ensure_cpus_are_online()) {
-		ret = -EBUSY;
-		goto put;
-	}
-
-	tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev);
-	if (tmp_ret != UCODE_NEW)
-		goto put;
-
-	ret = microcode_reload_late();
-put:
+	ret = ucode_load_late_locked();
 	cpus_read_unlock();
 
-	if (ret == 0)
-		ret = size;
-
-	add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
-
-	return ret;
+	return ret ? : size;
 }
 
 static DEVICE_ATTR_WO(reload);
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 27/37] x86/microcode: Sanitize __wait_for_cpus()
Date: Sat, 12 Aug 2023 21:59:18 +0200 (CEST)

The code is too complicated for no reason:

 - The return value is pointless as this is a strict boolean.

 - It's way simpler to count down from num_online_cpus() and check for
   zero.

 - The timeout argument is pointless as this is always one second.

 - Touching the NMI watchdog every 100ns does not make any sense, neither
   does checking every 100ns. This is really not a hotpath operation.

Preload the atomic counter with the number of online CPUs and simplify the
whole timeout logic. Delay for one microsecond and touch the NMI watchdog
once per millisecond.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c | 41 ++++++++++++++-----------------
 1 file changed, 17 insertions(+), 24 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -324,31 +324,24 @@ static struct platform_device *microcode
  * requirement can be relaxed in the future. Right now, this is conservative
  * and good.
  */
-#define SPINUNIT 100 /* 100 nsec */
+static atomic_t late_cpus_in, late_cpus_out;
 
-
-static atomic_t late_cpus_in;
-static atomic_t late_cpus_out;
-
-static int __wait_for_cpus(atomic_t *t, long long timeout)
+static bool wait_for_cpus(atomic_t *cnt)
 {
-	int all_cpus = num_online_cpus();
-
-	atomic_inc(t);
-
-	while (atomic_read(t) < all_cpus) {
-		if (timeout < SPINUNIT) {
-			pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
-				all_cpus - atomic_read(t));
-			return 1;
-		}
+	unsigned int timeout;
 
-		ndelay(SPINUNIT);
-		timeout -= SPINUNIT;
+	WARN_ON_ONCE(atomic_dec_return(cnt) < 0);
 
-		touch_nmi_watchdog();
+	for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
+		if (!atomic_read(cnt))
+			return true;
+		udelay(1);
+		if (!(timeout % 1000))
+			touch_nmi_watchdog();
 	}
-	return 0;
+	/* Prevent the late comers from making progress and let them time out */
+	atomic_inc(cnt);
+	return false;
 }
 
 /*
@@ -366,7 +359,7 @@ static int __reload_late(void *info)
 	 * Wait for all CPUs to arrive. A load will not be attempted unless all
 	 * CPUs show up.
 	 *
 	 */
-	if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
+	if (!wait_for_cpus(&late_cpus_in))
 		return -1;
 
 	/*
@@ -389,7 +382,7 @@ static int __reload_late(void *info)
 	}
 
 wait_for_siblings:
-	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
+	if (!wait_for_cpus(&late_cpus_out))
 		panic("Timeout during microcode update!\n");
 
 	/*
@@ -416,8 +409,8 @@ static int microcode_reload_late(void)
 	pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
 	pr_err("You should switch to early loading, if possible.\n");
 
-	atomic_set(&late_cpus_in, 0);
-	atomic_set(&late_cpus_out, 0);
+	atomic_set(&late_cpus_in, num_online_cpus());
+	atomic_set(&late_cpus_out, num_online_cpus());
 
 	/*
 	 * Take a snapshot before the microcode update in order to compare and
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 28/37] x86/microcode: Add per CPU result state
Date: Sat, 12 Aug 2023 21:59:20 +0200 (CEST)

The microcode rendezvous acts purely on global state, which does not allow
analyzing failures in a coherent way.

Introduce per CPU state where the results are written into, which allows
analyzing the return codes of the individual CPUs.

Initialize the state when walking the cpu_present_mask in the online check
to avoid another for_each_cpu() loop.

Enhance the result printout with that.

The structure is intentionally named ucode_ctrl as it will gain control
fields in subsequent changes.
Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c     | 108 ++++++++++++++++++---------
 arch/x86/kernel/cpu/microcode/internal.h |   1 
 2 files changed, 65 insertions(+), 44 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -324,6 +324,11 @@ static struct platform_device *microcode
  * requirement can be relaxed in the future. Right now, this is conservative
  * and good.
  */
+struct ucode_ctrl {
+	enum ucode_state result;
+};
+
+static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);
 static atomic_t late_cpus_in, late_cpus_out;
 
 static bool wait_for_cpus(atomic_t *cnt)
@@ -344,23 +349,19 @@ static bool wait_for_cpus(atomic_t *cnt)
 	return false;
 }
 
-/*
- * Returns:
- * < 0 - on error
- *   0 - success (no update done or microcode was updated)
- */
-static int __reload_late(void *info)
+static int ucode_load_cpus_stopped(void *unused)
 {
 	int cpu = smp_processor_id();
-	enum ucode_state err;
-	int ret = 0;
+	enum ucode_state ret;
 
 	/*
 	 * Wait for all CPUs to arrive. A load will not be attempted unless all
 	 * CPUs show up.
 	 * */
-	if (!wait_for_cpus(&late_cpus_in))
-		return -1;
+	if (!wait_for_cpus(&late_cpus_in)) {
+		this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT);
+		return 0;
+	}
 
 	/*
 	 * On an SMT system, it suffices to load the microcode on one sibling of
@@ -369,17 +370,11 @@ static int __reload_late(void *info)
 	 * loading attempts happen on multiple threads of an SMT core. See
 	 * below.
 	 */
-	if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
-		err = microcode_ops->apply_microcode(cpu);
-	else
+	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
 		goto wait_for_siblings;
 
-	if (err >= UCODE_NFOUND) {
-		if (err == UCODE_ERROR) {
-			pr_warn("Error reloading microcode on CPU %d\n", cpu);
-			ret = -1;
-		}
-	}
+	ret = microcode_ops->apply_microcode(cpu);
+	this_cpu_write(ucode_ctrl.result, ret);
 
 wait_for_siblings:
 	if (!wait_for_cpus(&late_cpus_out))
@@ -391,19 +386,18 @@ static int __reload_late(void *info)
 	 * At least one thread has completed update on each core.
 	 * For others, simply call the update to make sure the
 	 * per-cpu cpuinfo can be updated with right microcode
 	 * revision.
 	 */
-	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
-		err = microcode_ops->apply_microcode(cpu);
+	if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
+		return 0;
 
-	return ret;
+	ret = microcode_ops->apply_microcode(cpu);
+	this_cpu_write(ucode_ctrl.result, ret);
+	return 0;
 }
 
-/*
- * Reload microcode late on all CPUs. Wait for a sec until they
- * all gather together.
- */
-static int microcode_reload_late(void)
+static int ucode_load_late_stop_cpus(void)
 {
-	int old = boot_cpu_data.microcode, ret;
+	unsigned int cpu, updated = 0, failed = 0, timedout = 0, siblings = 0;
+	int old_rev = boot_cpu_data.microcode;
 	struct cpuinfo_x86 prev_info;
 
 	pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
 	pr_err("You should switch to early loading, if possible.\n");
 
@@ -418,26 +412,47 @@ static int microcode_reload_late(void)
 	 */
 	store_cpu_caps(&prev_info);
 
-	ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+	stop_machine_cpuslocked(ucode_load_cpus_stopped, NULL, cpu_online_mask);
+
+	/* Analyze the results */
+	for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
+		switch (per_cpu(ucode_ctrl.result, cpu)) {
+		case UCODE_UPDATED:	updated++; break;
+		case UCODE_TIMEOUT:	timedout++; break;
+		case UCODE_OK:		siblings++; break;
+		default:		failed++; break;
+		}
+	}
 
 	if (microcode_ops->finalize_late_load)
-		microcode_ops->finalize_late_load(ret);
+		microcode_ops->finalize_late_load(!updated);
 
-	if (!ret) {
-		pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n",
-			old, boot_cpu_data.microcode);
-		microcode_check(&prev_info);
-		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
-	} else {
-		pr_info("Reload failed, current microcode revision: 0x%x\n",
-			boot_cpu_data.microcode);
+	if (!updated) {
+		/* Nothing changed. */
+		if (!failed && !timedout)
+			return 0;
+		pr_err("Microcode update failed: %u CPUs failed %u CPUs timed out\n",
+		       failed, timedout);
+		return -EIO;
 	}
-	return ret;
+
+	add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
+	pr_info("Microcode load: updated on %u primary CPUs with %u siblings\n", updated, siblings);
+	if (failed || timedout) {
+		pr_err("Microcode load incomplete. %u CPUs timed out or failed\n",
+		       num_online_cpus() - (updated + siblings));
+	}
+	pr_info("Microcode revision: 0x%x -> 0x%x\n", old_rev, boot_cpu_data.microcode);
+	microcode_check(&prev_info);
+
+	return updated + siblings == num_online_cpus() ? 0 : -EIO;
 }
 
 /*
- * Ensure that all required CPUs which are present and have been booted
- * once are online.
+ * This function does two things:
+ *
+ * 1) Ensure that all required CPUs which are present and have been booted
+ *    once are online.
  *
  *    To pass this check, all primary threads must be online.
  *
@@ -448,9 +463,12 @@ static int microcode_reload_late(void)
  *    behaviour is undefined. The default play_dead() implementation on
  *    modern CPUs is using MWAIT, which is also not guaranteed to be safe
  *    against a microcode update which affects MWAIT.
+ *
+ * 2) Initialize the per CPU control structure
  */
-static bool ensure_cpus_are_online(void)
+static bool ucode_setup_cpus(void)
 {
+	struct ucode_ctrl ctrl = { .result = -1, };
 	unsigned int cpu;
 
 	for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
@@ -460,6 +478,8 @@ static bool ensure_cpus_are_online(void)
 				return false;
 			}
 		}
+		/* Initialize the per CPU state */
+		per_cpu(ucode_ctrl, cpu) = ctrl;
 	}
 	return true;
 }
@@ -468,13 +488,13 @@ static int ucode_load_late_locked(void)
 {
 	int ret;
 
-	if (!ensure_cpus_are_online())
+	if (!ucode_setup_cpus())
 		return -EBUSY;
 
 	ret = microcode_ops->request_microcode_fw(0, &microcode_pdev->dev);
 	if (ret != UCODE_NEW)
 		return ret == UCODE_NFOUND ? -ENOENT : -EBADFD;
-	return microcode_reload_late();
+	return ucode_load_late_stop_cpus();
 }
 
 static ssize_t reload_store(struct device *dev,
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -16,6 +16,7 @@ enum ucode_state {
 	UCODE_UPDATED,
 	UCODE_NFOUND,
 	UCODE_ERROR,
+	UCODE_TIMEOUT,
 };
 
 struct microcode_ops {
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 29/37] x86/microcode: Add per CPU control field
Date: Sat, 12 Aug 2023 21:59:21 +0200 (CEST)

Add a per CPU control field to ucode_ctrl and define constants for it:

  SCTRL_WAIT   indicates that the CPU needs to spinwait with timeout
  SCTRL_APPLY  indicates that the CPU needs to invoke the microcode_apply()
               callback
  SCTRL_DONE   indicates that the CPU can proceed without invoking the
               microcode_apply() callback.

In theory this could be a global control field, but a global control does
not cover the following case:

 15 primary CPUs load microcode successfully
  1 primary CPU fails and returns with an error code

With global control the sibling of the failed CPU would either try again or
the whole operation would be aborted with the consequence that the 15
siblings do not invoke the apply path and end up with inconsistent software
state. The result in dmesg would be inconsistent too.

There are two additional fields added and initialized: ctrl_cpu and
secondaries. ctrl_cpu is the CPU number of the primary thread for now, but
with the upcoming uniform loading at package or system scope this will be
one CPU per package or just one CPU. Secondaries hands the control CPU a
CPU mask which will be required to release the secondary CPUs out of the
wait loop.
Preparatory change for implementing a properly split control flow for
primary and secondary CPUs.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -324,8 +324,16 @@ static struct platform_device *microcode
  * requirement can be relaxed in the future. Right now, this is conservative
  * and good.
  */
+enum sibling_ctrl {
+	SCTRL_WAIT,
+	SCTRL_APPLY,
+	SCTRL_DONE,
+};
+
 struct ucode_ctrl {
+	enum sibling_ctrl ctrl;
 	enum ucode_state result;
+	unsigned int ctrl_cpu;
 };
 
 static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);
@@ -468,7 +476,7 @@ static int ucode_load_late_stop_cpus(voi
  */
 static bool ucode_setup_cpus(void)
 {
-	struct ucode_ctrl ctrl = { .result = -1, };
+	struct ucode_ctrl ctrl = { .ctrl = SCTRL_WAIT, .result = -1, };
 	unsigned int cpu;
 
 	for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
@@ -478,7 +486,15 @@ static bool ucode_setup_cpus(void)
 			return false;
 		}
 	}
-	/* Initialize the per CPU state */
+
+	/*
+	 * Initialize the per CPU state. This is core scope for now,
+	 * but prepared to take package or system scope into account.
+	 */
+	if (topology_is_primary_thread(cpu))
+		ctrl.ctrl_cpu = cpu;
+	else
+		ctrl.ctrl_cpu = cpumask_first(topology_sibling_cpumask(cpu));
 	per_cpu(ucode_ctrl, cpu) = ctrl;
 	}
 	return true;

From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 30/37] x86/microcode: Provide new control functions
Date: Sat, 12 Aug 2023 21:59:23 +0200 (CEST)

The current all-in-one code is unreadable and really not suited for adding
future features like uniform loading with package or system scope.

Provide a set of new control functions which split the handling of the
primary and secondary CPUs. These will replace the current rendezvous
all-in-one function in the next step. This is intentionally a separate
change because diff makes a completely unreadable mess otherwise.

So the flow separates the primary and the secondary CPUs into their own
functions, which use the control field in the per CPU ucode_ctrl struct.
    primary()                     secondary()
     wait_for_all()                wait_for_all()
     apply_ucode()                 wait_for_release()
     release()                     apply_ucode()

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c | 86 ++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -357,6 +357,92 @@ static bool wait_for_cpus(atomic_t *cnt)
 	return false;
 }
 
+static bool wait_for_ctrl(void)
+{
+	unsigned int timeout;
+
+	for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
+		if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT)
+			return true;
+		udelay(1);
+		if (!(timeout % 1000))
+			touch_nmi_watchdog();
+	}
+	return false;
+}
+
+static __maybe_unused void ucode_load_secondary(unsigned int cpu)
+{
+	unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu);
+	enum ucode_state ret;
+
+	/* Initial rendezvous to ensure that all CPUs have arrived */
+	if (!wait_for_cpus(&late_cpus_in)) {
+		pr_err_once("Microcode load: %d CPUs timed out\n",
+			    atomic_read(&late_cpus_in) - 1);
+		this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT);
+		return;
+	}
+
+	/*
+	 * Wait for primary threads to complete. If one of them hangs due
+	 * to the update, there is no way out. This is non-recoverable
+	 * because the CPU might hold locks or resources and confuse the
+	 * scheduler, watchdogs etc. There is no way to safely evacuate the
+	 * machine.
+	 */
+	if (!wait_for_ctrl())
+		panic("Microcode load: Primary CPU %d timed out\n", ctrl_cpu);
+
+	/*
+	 * If the primary succeeded then invoke the apply() callback,
+	 * otherwise copy the state from the primary thread.
+	 */
+	if (this_cpu_read(ucode_ctrl.ctrl) == SCTRL_APPLY)
+		ret = microcode_ops->apply_microcode(cpu);
+	else
+		ret = per_cpu(ucode_ctrl.result, ctrl_cpu);
+
+	this_cpu_write(ucode_ctrl.result, ret);
+	this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE);
+}
+
+static __maybe_unused void ucode_load_primary(unsigned int cpu)
+{
+	struct cpumask *secondaries = topology_sibling_cpumask(cpu);
+	enum sibling_ctrl ctrl;
+	enum ucode_state ret;
+	unsigned int sibling;
+
+	/* Initial rendezvous to ensure that all CPUs have arrived */
+	if (!wait_for_cpus(&late_cpus_in)) {
+		this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT);
+		pr_err_once("Microcode load: %d CPUs timed out\n",
+			    atomic_read(&late_cpus_in) - 1);
+		return;
+	}
+
+	ret = microcode_ops->apply_microcode(cpu);
+	this_cpu_write(ucode_ctrl.result, ret);
+	this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE);
+
+	/*
+	 * If the update was successful, let the siblings run the apply()
+	 * callback. If not, tell them it's done. This also covers the
+	 * case where the CPU has uniform loading at package or system
+	 * scope implemented but does not advertise it.
+	 */
+	if (ret == UCODE_UPDATED || ret == UCODE_OK)
+		ctrl = SCTRL_APPLY;
+	else
+		ctrl = SCTRL_DONE;
+
+	for_each_cpu(sibling, secondaries) {
+		if (sibling != cpu)
+			per_cpu(ucode_ctrl.ctrl, sibling) = ctrl;
+	}
+}
+
 static int ucode_load_cpus_stopped(void *unused)
 {
 	int cpu = smp_processor_id();
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 31/37] x86/microcode: Replace the all in one rendevouz handler
Date: Sat, 12 Aug 2023 21:59:24 +0200 (CEST)

with a new handler which just separates the control flow of primary and
secondary CPUs.

Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c | 51 ++++++-------------------------
 1 file changed, 9 insertions(+), 42 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -337,7 +337,7 @@ struct ucode_ctrl {
 };
 
 static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);
-static atomic_t late_cpus_in, late_cpus_out;
+static atomic_t late_cpus_in;
 
 static bool wait_for_cpus(atomic_t *cnt)
 {
@@ -371,7 +371,7 @@ static bool wait_for_ctrl(void)
 	return false;
 }
 
-static __maybe_unused void ucode_load_secondary(unsigned int cpu)
+static void ucode_load_secondary(unsigned int cpu)
 {
 	unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu);
 	enum ucode_state ret;
@@ -407,7 +407,7 @@ static __maybe_unused void ucode_load_se
 	this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE);
 }
 
-static __maybe_unused void ucode_load_primary(unsigned int cpu)
+static void ucode_load_primary(unsigned int cpu)
 {
 	struct cpumask *secondaries = topology_sibling_cpumask(cpu);
 	enum sibling_ctrl ctrl;
@@ -445,46 +445,14 @@ static __maybe_unused void ucode_load_pr
 
 static int ucode_load_cpus_stopped(void *unused)
 {
-	int cpu = smp_processor_id();
-	enum ucode_state ret;
-
-	/*
-	 * Wait for all CPUs to arrive. A load will not be attempted unless all
-	 * CPUs show up.
-	 * */
-	if (!wait_for_cpus(&late_cpus_in)) {
-		this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT);
-		return 0;
-	}
-
-	/*
-	 * On an SMT system, it suffices to load the microcode on one sibling of
-	 * the core because the microcode engine is shared between the threads.
-	 * Synchronization still needs to take place so that no concurrent
-	 * loading attempts happen on multiple threads of an SMT core. See
-	 * below.
-	 */
-	if (cpumask_first(topology_sibling_cpumask(cpu)) != cpu)
-		goto wait_for_siblings;
+	unsigned int cpu = smp_processor_id();
 
-	ret = microcode_ops->apply_microcode(cpu);
-	this_cpu_write(ucode_ctrl.result, ret);
-
-wait_for_siblings:
-	if (!wait_for_cpus(&late_cpus_out))
-		panic("Timeout during microcode update!\n");
-
-	/*
-	 * At least one thread has completed update on each core.
-	 * For others, simply call the update to make sure the
-	 * per-cpu cpuinfo can be updated with right microcode
-	 * revision.
-	 */
-	if (cpumask_first(topology_sibling_cpumask(cpu)) == cpu)
-		return 0;
+	if (this_cpu_read(ucode_ctrl.ctrl_cpu) == cpu)
+		ucode_load_primary(cpu);
+	else
+		ucode_load_secondary(cpu);
 
-	ret = microcode_ops->apply_microcode(cpu);
-	this_cpu_write(ucode_ctrl.result, ret);
+	/* No point to wait here. The CPUs will all wait in stop_machine(). */
 	return 0;
 }
 
@@ -498,7 +466,6 @@ static int ucode_load_late_stop_cpus(voi
 	pr_err("You should switch to early loading, if possible.\n");
 
 	atomic_set(&late_cpus_in, num_online_cpus());
-	atomic_set(&late_cpus_out, num_online_cpus());
 
 	/*
 	 * Take a snapshot before the microcode update in order to compare and
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 32/37] x86/microcode: Rendezvous and load in NMI
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:26 +0200 (CEST)

From: Thomas Gleixner

stop_machine() does not prevent the spin-waiting sibling from handling an
NMI, which is obviously violating the whole concept of rendezvous.

Implement a static branch right in the beginning of the NMI handler which
is NOOPed except when enabled by the late loading mechanism. The late
loader enables the static branch before stop_machine() is invoked.

Each CPU has an nmi_enabled flag in its control structure which indicates
whether the CPU should go into the update routine. This is required to
bridge the gap between enabling the branch and actually being at the point
where it makes sense.

Each CPU which arrives in the stopper thread function sets that flag and
issues a self NMI right after that. If the NMI function sees the flag
clear, it returns. If it's set, it clears the flag and enters the
rendezvous.

This is safe against a real NMI which hits in between setting the flag and
sending the NMI to itself. The real NMI will be swallowed by the microcode
update and the self NMI will then let stuff continue. Otherwise this would
end up with a spurious NMI.
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/microcode.h         | 12 ++++++++
 arch/x86/kernel/cpu/microcode/core.c     | 42 +++++++++++++++++++++++++++---
 arch/x86/kernel/cpu/microcode/intel.c    |  1 +
 arch/x86/kernel/cpu/microcode/internal.h |  3 +-
 arch/x86/kernel/nmi.c                    |  4 ++
 5 files changed, 57 insertions(+), 5 deletions(-)
---
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -75,4 +75,16 @@ void show_ucode_info_early(void);
 static inline void show_ucode_info_early(void) { }
 #endif /* !CONFIG_CPU_SUP_INTEL */
 
+bool microcode_nmi_handler(void);
+
+#ifdef CONFIG_MICROCODE_LATE_LOADING
+DECLARE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
+static __always_inline bool microcode_nmi_handler_enabled(void)
+{
+	return static_branch_unlikely(&microcode_nmi_handler_enable);
+}
+#else
+static __always_inline bool microcode_nmi_handler_enabled(void) { return false; }
+#endif
+
 #endif /* _ASM_X86_MICROCODE_H */
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -334,8 +336,10 @@ struct ucode_ctrl {
 	enum sibling_ctrl ctrl;
 	enum ucode_state result;
 	unsigned int ctrl_cpu;
+	bool nmi_enabled;
 };
 
+DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
 static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);
 static atomic_t late_cpus_in;
 
@@ -349,7 +353,8 @@ static bool wait_for_cpus(atomic_t *cnt)
 		if (!atomic_read(cnt))
 			return true;
 		udelay(1);
-		if (!(timeout % 1000))
+		/* If invoked directly, tickle the NMI watchdog */
+		if (!microcode_ops->use_nmi && !(timeout % 1000))
 			touch_nmi_watchdog();
 	}
 	/* Prevent the late comers to make progress and let them time out */
@@ -365,7 +370,8 @@ static bool wait_for_ctrl(void)
 		if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT)
 			return true;
 		udelay(1);
-		if (!(timeout % 1000))
+		/* If invoked directly, tickle the NMI watchdog */
+		if (!microcode_ops->use_nmi && !(timeout % 1000))
 			touch_nmi_watchdog();
 	}
 	return false;
@@ -443,7 +449,7 @@ static void ucode_load_primary(unsigned
 	}
 }
 
-static int ucode_load_cpus_stopped(void *unused)
+static bool microcode_update_handler(void)
 {
 	unsigned int cpu = smp_processor_id();
 
@@ -452,7 +458,29 @@ static int ucode_load_cpus_stopped(void
 	else
 		ucode_load_secondary(cpu);
 
-	/* No point to wait here. The CPUs will all wait in stop_machine(). */
+	touch_nmi_watchdog();
+	return true;
+}
+
+bool microcode_nmi_handler(void)
+{
+	if (!this_cpu_read(ucode_ctrl.nmi_enabled))
+		return false;
+
+	this_cpu_write(ucode_ctrl.nmi_enabled, false);
+	return microcode_update_handler();
+}
+
+static int ucode_load_cpus_stopped(void *unused)
+{
+	if (microcode_ops->use_nmi) {
+		/* Enable the NMI handler and raise NMI */
+		this_cpu_write(ucode_ctrl.nmi_enabled, true);
+		apic->send_IPI(smp_processor_id(), NMI_VECTOR);
+	} else {
+		/* Just invoke the handler directly */
+		microcode_update_handler();
+	}
 	return 0;
 }
 
@@ -473,8 +501,14 @@ static int ucode_load_late_stop_cpus(voi
 	 */
 	store_cpu_caps(&prev_info);
 
+	if (microcode_ops->use_nmi)
+		static_branch_enable_cpuslocked(&microcode_nmi_handler_enable);
+
 	stop_machine_cpuslocked(ucode_load_cpus_stopped, NULL, cpu_online_mask);
 
+	if (microcode_ops->use_nmi)
+		static_branch_disable_cpuslocked(&microcode_nmi_handler_enable);
+
 	/* Analyze the results */
 	for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
 		switch (per_cpu(ucode_ctrl.result, cpu)) {
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -688,6 +688,7 @@ static struct microcode_ops microcode_in
 	.collect_cpu_info	= collect_cpu_info,
 	.apply_microcode	= apply_microcode_late,
 	.finalize_late_load	= finalize_late_load,
+	.use_nmi		= IS_ENABLED(CONFIG_X86_64),
 };
 
 static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c)
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -31,7 +31,8 @@ struct microcode_ops {
 	enum ucode_state (*apply_microcode)(int cpu);
 	int (*collect_cpu_info)(int cpu, struct cpu_signature *csig);
 	void (*finalize_late_load)(int result);
-	unsigned int nmi_safe : 1;
+	unsigned int nmi_safe : 1,
+		     use_nmi : 1;
 };
 
 extern struct ucode_cpu_info ucode_cpu_info[];
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define CREATE_TRACE_POINTS
@@ -343,6 +344,9 @@ static noinstr void default_do_nmi(struc
 
 	instrumentation_begin();
 
+	if (microcode_nmi_handler_enabled() && microcode_nmi_handler())
+		goto out;
+
 	handled = nmi_handle(NMI_LOCAL, regs);
 	__this_cpu_add(nmi_stats.normal, handled);
 	if (handled) {

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195729.454136685@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 33/37] x86/microcode: Protect against instrumentation
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:27 +0200 (CEST)

From: Thomas Gleixner

The wait-for-control loop in which the siblings are waiting for the
microcode update on the primary thread must be protected against
instrumentation, as instrumentation can end up in #INT3, #DB or #PF,
which then return with IRET. That IRET reenables NMI, which is the
opposite of what the NMI rendezvous is trying to achieve.
Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/microcode/core.c | 112 ++++++++++++++++++++++++---------
 1 file changed, 84 insertions(+), 28 deletions(-)
---
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -341,53 +341,65 @@ struct ucode_ctrl {
 
 DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
 static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);
+static unsigned int loops_per_usec;
 static atomic_t late_cpus_in;
 
-static bool wait_for_cpus(atomic_t *cnt)
+static noinstr bool wait_for_cpus(atomic_t *cnt)
 {
-	unsigned int timeout;
+	unsigned int timeout, loops;
 
-	WARN_ON_ONCE(atomic_dec_return(cnt) < 0);
+	WARN_ON_ONCE(raw_atomic_dec_return(cnt) < 0);
 
 	for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
-		if (!atomic_read(cnt))
+		if (!raw_atomic_read(cnt))
 			return true;
-		udelay(1);
+
+		for (loops = 0; loops < loops_per_usec; loops++)
+			cpu_relax();
+
 		/* If invoked directly, tickle the NMI watchdog */
-		if (!microcode_ops->use_nmi && !(timeout % 1000))
+		if (!microcode_ops->use_nmi && !(timeout % 1000)) {
+			instrumentation_begin();
 			touch_nmi_watchdog();
+			instrumentation_end();
+		}
 	}
 	/* Prevent the late comers to make progress and let them time out */
-	atomic_inc(cnt);
+	raw_atomic_inc(cnt);
 	return false;
 }
 
-static bool wait_for_ctrl(void)
+static noinstr bool wait_for_ctrl(void)
 {
-	unsigned int timeout;
+	unsigned int timeout, loops;
 
	for (timeout = 0; timeout < USEC_PER_SEC; timeout++) {
-		if (this_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT)
+		if (raw_cpu_read(ucode_ctrl.ctrl) != SCTRL_WAIT)
 			return true;
-		udelay(1);
+
+		for (loops = 0; loops < loops_per_usec; loops++)
+			cpu_relax();
+
 		/* If invoked directly, tickle the NMI watchdog */
-		if (!microcode_ops->use_nmi && !(timeout % 1000))
+		if (!microcode_ops->use_nmi && !(timeout % 1000)) {
+			instrumentation_begin();
 			touch_nmi_watchdog();
+			instrumentation_end();
+		}
 	}
 	return false;
 }
 
-static void ucode_load_secondary(unsigned int cpu)
+/*
+ * Protected against instrumentation up to the point where the primary
+ * thread completed the update. See microcode_nmi_handler() for details.
+ */
+static noinstr bool ucode_load_secondary_wait(unsigned int ctrl_cpu)
 {
-	unsigned int ctrl_cpu = this_cpu_read(ucode_ctrl.ctrl_cpu);
-	enum ucode_state ret;
-
 	/* Initial rendezvous to ensure that all CPUs have arrived */
 	if (!wait_for_cpus(&late_cpus_in)) {
-		pr_err_once("Microcode load: %d CPUs timed out\n",
-			    atomic_read(&late_cpus_in) - 1);
 		this_cpu_write(ucode_ctrl.result, UCODE_TIMEOUT);
-		return;
+		return false;
 	}
 
 	/*
@@ -397,9 +409,33 @@ static void ucode_load_secondary(unsigne
 	 * scheduler, watchdogs etc. There is no way to safely evacuate the
 	 * machine.
 	 */
-	if (!wait_for_ctrl())
-		panic("Microcode load: Primary CPU %d timed out\n", ctrl_cpu);
+	if (wait_for_ctrl())
+		return true;
+
+	instrumentation_begin();
+	panic("Microcode load: Primary CPU %d timed out\n", ctrl_cpu);
+	instrumentation_end();
+}
 
+/*
+ * Protected against instrumentation up to the point where the primary
+ * thread completed the update. See microcode_nmi_handler() for details.
+ */
+static noinstr void ucode_load_secondary(unsigned int cpu)
+{
+	unsigned int ctrl_cpu = raw_cpu_read(ucode_ctrl.ctrl_cpu);
+	enum ucode_state ret;
+
+	if (!ucode_load_secondary_wait(ctrl_cpu)) {
+		instrumentation_begin();
+		pr_err_once("Microcode load: %d CPUs timed out\n",
+			    atomic_read(&late_cpus_in) - 1);
+		instrumentation_end();
+		return;
+	}
+
+	/* Primary thread completed. Allow to invoke instrumentable code */
+	instrumentation_begin();
 	/*
 	 * If the primary succeeded then invoke the apply() callback,
 	 * otherwise copy the state from the primary thread.
@@ -411,6 +447,7 @@ static noinstr void ucode_load_secondary
 
 	this_cpu_write(ucode_ctrl.result, ret);
 	this_cpu_write(ucode_ctrl.ctrl, SCTRL_DONE);
+	instrumentation_end();
 }
 
 static void ucode_load_primary(unsigned int cpu)
@@ -449,25 +486,43 @@ static void ucode_load_primary(unsigned
 	}
 }
 
-static bool microcode_update_handler(void)
+static noinstr bool microcode_update_handler(void)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu = raw_smp_processor_id();
 
-	if (this_cpu_read(ucode_ctrl.ctrl_cpu) == cpu)
+	if (raw_cpu_read(ucode_ctrl.ctrl_cpu) == cpu) {
+		instrumentation_begin();
 		ucode_load_primary(cpu);
-	else
+		instrumentation_end();
+	} else {
 		ucode_load_secondary(cpu);
+	}
 
+	instrumentation_begin();
 	touch_nmi_watchdog();
+	instrumentation_end();
+
 	return true;
 }
 
-bool microcode_nmi_handler(void)
+/*
 * Protection against instrumentation is required for CPUs which are not
 * safe against an NMI which is delivered to the secondary SMT sibling
 * while the primary thread updates the microcode. Instrumentation can end
 * up in #INT3, #DB and #PF. The IRET from those exceptions reenables NMI
 * which is the opposite of what the NMI rendezvous is trying to achieve.
 *
 * The primary thread is safe versus instrumentation as the actual
 * microcode update handles this correctly. It's only the sibling code
 * path which must be NMI safe until the primary thread completed the
 * update.
 */
+bool noinstr microcode_nmi_handler(void)
 {
-	if (!this_cpu_read(ucode_ctrl.nmi_enabled))
+	if (!raw_cpu_read(ucode_ctrl.nmi_enabled))
 		return false;
 
-	this_cpu_write(ucode_ctrl.nmi_enabled, false);
+	raw_cpu_write(ucode_ctrl.nmi_enabled, false);
 	return microcode_update_handler();
 }
 
@@ -494,6 +549,7 @@ static int ucode_load_late_stop_cpus(voi
 	pr_err("You should switch to early loading, if possible.\n");
 
 	atomic_set(&late_cpus_in, num_online_cpus());
+	loops_per_usec = loops_per_jiffy / (TICK_NSEC / 1000);
 
 	/*
 	 * Take a snapshot before the microcode update in order to compare and

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195729.512632124@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 34/37] x86/apic: Provide apic_force_nmi_on_cpu()
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:29 +0200 (CEST)

From: Thomas Gleixner

When SMT siblings are soft-offlined and parked in one of the play_dead()
variants they still react on NMI, which is problematic on affected Intel
CPUs. The default play_dead() variant uses MWAIT on modern CPUs, which is
not guaranteed to be safe when updated concurrently.

Right now late loading is prevented when not all SMT siblings are online,
but as they still react on NMI, it is possible to bring them out of their
park position into a trivial rendezvous handler.

Provide a function which allows to do that. It does sanity checks whether
the target is in the cpus_booted_once_mask and whether the APIC driver
supports it.

Mark X2APIC and XAPIC as capable, but exclude 32bit and the UV and
NUMACHIP variants as that needs feedback from the relevant experts.
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/apic.h           |  5 +++++
 arch/x86/kernel/apic/apic_flat_64.c   |  2 ++
 arch/x86/kernel/apic/ipi.c            |  9 ++++++++-
 arch/x86/kernel/apic/x2apic_cluster.c |  1 +
 arch/x86/kernel/apic/x2apic_phys.c    |  1 +
 5 files changed, 17 insertions(+), 1 deletion(-)
---
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -301,6 +301,9 @@ struct apic {
 	enum apic_delivery_modes delivery_mode;
 	bool dest_mode_logical;
 
+	/* Allows to send an NMI to an "offline" CPU which hangs in *play_dead() */
+	bool nmi_to_offline_cpu;
+
 	u32 (*calc_dest_apicid)(unsigned int cpu);
 
 	/* ICR related functions */
@@ -505,6 +508,8 @@ extern void default_ioapic_phys_id_map(p
 extern int default_cpu_present_to_apicid(int mps_cpu);
 extern int default_check_phys_apicid_present(int phys_apicid);
 
+void apic_send_nmi_to_offline_cpu(unsigned int cpu);
+
 #endif /* CONFIG_X86_LOCAL_APIC */
 
 #ifdef CONFIG_SMP
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -138,6 +138,7 @@ static struct apic apic_flat __ro_after_
 	.send_IPI_allbutself	= default_send_IPI_allbutself,
 	.send_IPI_all		= default_send_IPI_all,
 	.send_IPI_self		= default_send_IPI_self,
+	.nmi_to_offline_cpu	= true,
 
 	.inquire_remote_apic	= default_inquire_remote_apic,
 
@@ -229,6 +230,7 @@ static struct apic apic_physflat __ro_af
 	.send_IPI_allbutself	= default_send_IPI_allbutself,
 	.send_IPI_all		= default_send_IPI_all,
 	.send_IPI_self		= default_send_IPI_self,
+	.nmi_to_offline_cpu	= true,
 
 	.inquire_remote_apic	= default_inquire_remote_apic,
 
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -95,8 +95,15 @@ void native_send_call_func_ipi(const str
 	apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
 }
 
+void apic_send_nmi_to_offline_cpu(unsigned int cpu)
+{
+	if (WARN_ON_ONCE(!apic->nmi_to_offline_cpu))
+		return;
+	if (WARN_ON_ONCE(!cpumask_test_cpu(cpu, &cpus_booted_once_mask)))
+		return;
+	apic->send_IPI(cpu, NMI_VECTOR);
+}
 #endif /* CONFIG_SMP */
-
 static inline int __prepare_ICR2(unsigned int mask)
 {
 	return SET_XAPIC_DEST_FIELD(mask);
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -264,6 +264,7 @@ static struct apic apic_x2apic_cluster _
 	.send_IPI_allbutself	= x2apic_send_IPI_allbutself,
 	.send_IPI_all		= x2apic_send_IPI_all,
 	.send_IPI_self		= x2apic_send_IPI_self,
+	.nmi_to_offline_cpu	= true,
 
 	.inquire_remote_apic	= NULL,
 
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -188,6 +188,7 @@ static struct apic apic_x2apic_phys __ro
 	.send_IPI_allbutself	= x2apic_send_IPI_allbutself,
 	.send_IPI_all		= x2apic_send_IPI_all,
 	.send_IPI_self		= x2apic_send_IPI_self,
+	.nmi_to_offline_cpu	= true,
 
 	.inquire_remote_apic	= NULL,

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195729.570237823@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 35/37] x86/microcode: Handle "offline" CPUs correctly
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:30 +0200 (CEST)

From: Thomas Gleixner

Offline CPUs need to be parked in a safe loop when a microcode update is
in progress on the primary CPU. Currently offline CPUs are parked in
mwait_play_dead(), which is not safe on Intel CPUs because the MWAIT
instruction can be patched by the new microcode update, which can cause
instability.

- Add a new microcode state, UCODE_OFFLINE, to report status on a per-CPU
  basis.

- Force NMI on the offline CPUs: wake the offline CPUs while the update is
  in progress and then return them to mwait_play_dead() after the
  microcode update is complete.
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/microcode.h         |   1 +
 arch/x86/kernel/cpu/microcode/core.c     | 112 ++++++++++++++++++++++++++--
 arch/x86/kernel/cpu/microcode/internal.h |   1 +
 arch/x86/kernel/nmi.c                    |   5 +-
 4 files changed, 113 insertions(+), 6 deletions(-)
---
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -76,6 +76,7 @@ static inline void show_ucode_info_early
 #endif /* !CONFIG_CPU_SUP_INTEL */
 
 bool microcode_nmi_handler(void);
+void microcode_offline_nmi_handler(void);
 
 #ifdef CONFIG_MICROCODE_LATE_LOADING
 DECLARE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -341,8 +341,9 @@ struct ucode_ctrl {
 
 DEFINE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
 static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);
+static atomic_t late_cpus_in, offline_in_nmi;
 static unsigned int loops_per_usec;
-static atomic_t late_cpus_in;
+static cpumask_t cpu_offline_mask;
 
 static noinstr bool wait_for_cpus(atomic_t *cnt)
 {
@@ -450,7 +451,7 @@ static noinstr void ucode_load_secondary
 	instrumentation_end();
 }
 
-static void ucode_load_primary(unsigned int cpu)
+static void __ucode_load_primary(unsigned int cpu)
 {
 	struct cpumask *secondaries = topology_sibling_cpumask(cpu);
 	enum sibling_ctrl ctrl;
@@ -486,6 +487,67 @@ static void ucode_load_primary(unsigned
 	}
 }
 
+static bool ucode_kick_offline_cpus(unsigned int nr_offl)
+{
+	unsigned int cpu, timeout;
+
+	for_each_cpu(cpu, &cpu_offline_mask) {
+		/* Enable the rendezvous handler and send NMI */
+		per_cpu(ucode_ctrl.nmi_enabled, cpu) = true;
+		apic_send_nmi_to_offline_cpu(cpu);
+	}
+
+	/* Wait for them to arrive */
+	for (timeout = 0; timeout < (USEC_PER_SEC / 2); timeout++) {
+		if (atomic_read(&offline_in_nmi) == nr_offl)
+			return true;
+		udelay(1);
+	}
+	/* Let the others time out */
+	return false;
+}
+
+static void ucode_release_offline_cpus(void)
+{
+	unsigned int cpu;
+
+	for_each_cpu(cpu, &cpu_offline_mask)
+		per_cpu(ucode_ctrl.ctrl, cpu) = SCTRL_DONE;
+}
+
+static void ucode_load_primary(unsigned int cpu)
+{
+	unsigned int nr_offl = cpumask_weight(&cpu_offline_mask);
+	bool proceed = true;
+
+	/* Kick soft-offlined SMT siblings if required */
+	if (!cpu && nr_offl)
+		proceed = ucode_kick_offline_cpus(nr_offl);
+
+	/* If the soft-offlined CPUs did not respond, abort */
+	if (proceed)
+		__ucode_load_primary(cpu);
+
+	/* Unconditionally release soft-offlined SMT siblings if required */
+	if (!cpu && nr_offl)
+		ucode_release_offline_cpus();
+}
+
+/*
+ * Minimal stub rendezvous handler for soft-offlined CPUs which participate
+ * in the NMI rendezvous to protect against a concurrent NMI on affected
+ * CPUs.
+ */
+void noinstr microcode_offline_nmi_handler(void)
+{
+	if (!raw_cpu_read(ucode_ctrl.nmi_enabled))
+		return;
+	raw_cpu_write(ucode_ctrl.nmi_enabled, false);
+	raw_cpu_write(ucode_ctrl.result, UCODE_OFFLINE);
+	raw_atomic_inc(&offline_in_nmi);
+	wait_for_ctrl();
+}
+
 static noinstr bool microcode_update_handler(void)
 {
 	unsigned int cpu = raw_smp_processor_id();
@@ -542,6 +604,7 @@ static int ucode_load_cpus_stopped(void
 static int ucode_load_late_stop_cpus(void)
 {
 	unsigned int cpu, updated = 0, failed = 0, timedout = 0, siblings = 0;
+	unsigned int nr_offl, offline = 0;
 	int old_rev = boot_cpu_data.microcode;
 	struct cpuinfo_x86 prev_info;
 
@@ -549,6 +612,7 @@ static int ucode_load_late_stop_cpus(voi
 	pr_err("You should switch to early loading, if possible.\n");
 
 	atomic_set(&late_cpus_in, num_online_cpus());
+	atomic_set(&offline_in_nmi, 0);
 	loops_per_usec = loops_per_jiffy / (TICK_NSEC / 1000);
 
 	/*
@@ -571,6 +635,7 @@ static int ucode_load_late_stop_cpus(voi
 		case UCODE_UPDATED:	updated++; break;
 		case UCODE_TIMEOUT:	timedout++; break;
 		case UCODE_OK:		siblings++; break;
+		case UCODE_OFFLINE:	offline++; break;
 		default:		failed++; break;
 		}
 	}
@@ -582,6 +647,13 @@ static int ucode_load_late_stop_cpus(voi
 	/* Nothing changed. */
 	if (!failed && !timedout)
 		return 0;
+
+	nr_offl = cpumask_weight(&cpu_offline_mask);
+	if (offline < nr_offl) {
+		pr_warn("%u offline siblings did not respond.\n",
+			nr_offl - atomic_read(&offline_in_nmi));
+		return -EIO;
+	}
 	pr_err("Microcode update failed: %u CPUs failed %u CPUs timed out\n",
 	       failed, timedout);
 	return -EIO;
@@ -615,19 +687,49 @@ static int ucode_load_late_stop_cpus(voi
  * modern CPUs is using MWAIT, which is also not guaranteed to be safe
  * against a microcode update which affects MWAIT.
  *
- * 2) Initialize the per CPU control structure
+ * As soft-offlined CPUs still react on NMIs, the SMT sibling
+ * restriction can be lifted when the vendor driver signals to use NMI
+ * for rendezvous and the APIC provides a mechanism to send an NMI to a
+ * soft-offlined CPU. The soft-offlined CPUs are then able to
+ * participate in the rendezvous in a trivial stub handler.
+ *
+ * 2) Initialize the per CPU control structure and create a cpumask
+ *    which contains "offline" secondary threads, so they can be handled
+ *    correctly by a control CPU.
  */
 static bool ucode_setup_cpus(void)
 {
 	struct ucode_ctrl ctrl = { .ctrl = SCTRL_WAIT, .result = -1, };
+	bool allow_smt_offline;
 	unsigned int cpu;
 
+	allow_smt_offline = microcode_ops->nmi_safe ||
+		(microcode_ops->use_nmi && apic->nmi_to_offline_cpu);
+
+	cpumask_clear(&cpu_offline_mask);
+
 	for_each_cpu_and(cpu, cpu_present_mask, &cpus_booted_once_mask) {
+		/*
+		 * Offline CPUs sit in one of the play_dead() functions
+		 * with interrupts disabled, but they still react on NMIs
+		 * and execute arbitrary code. Also MWAIT being updated
+		 * while the offline CPU sits there is not necessarily safe
+		 * on all CPU variants.
+		 *
+		 * Mark them in the offline_cpus mask which will be handled
+		 * by CPU0 later in the update process.
+		 *
+		 * Ensure that the primary thread is online so that it is
+		 * guaranteed that all cores are updated.
+		 */
 		if (!cpu_online(cpu)) {
-			if (topology_is_primary_thread(cpu) || !microcode_ops->nmi_safe) {
-				pr_err("CPU %u not online\n", cpu);
+			if (topology_is_primary_thread(cpu) || !allow_smt_offline) {
+				pr_err("CPU %u not online, loading aborted\n", cpu);
 				return false;
 			}
+			cpumask_set_cpu(cpu, &cpu_offline_mask);
+			per_cpu(ucode_ctrl, cpu) = ctrl;
+			continue;
 		}
 
 		/*
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -17,6 +17,7 @@ enum ucode_state {
 	UCODE_NFOUND,
 	UCODE_ERROR,
 	UCODE_TIMEOUT,
+	UCODE_OFFLINE,
 };
 
 struct microcode_ops {
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -502,8 +502,11 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 	if (IS_ENABLED(CONFIG_NMI_CHECK_CPU))
 		raw_atomic_long_inc(&nsp->idt_calls);
 
-	if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id()))
+	if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id())) {
+		if (microcode_nmi_handler_enabled())
+			microcode_offline_nmi_handler();
 		return;
+	}
 
 	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
 		this_cpu_write(nmi_state, NMI_LATCHED);

From nobody Thu Dec 18 20:32:01 2025
<20230812195729.634757390@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1691870373; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=vkb5eya832wOkrcECHFcR/66ZtKy/0rcIUb77ri1flg=; b=yda/B0xCcjVeOYF3HOrlrPoH5/8PbFevw2w8zSd5Vy/w+OtMSr8VsVqXH4WXq3tEo4wmBm pHIgcHKek4usBfQupJGMQKn9VvZ0SemX0RzxfgEoiBuPAwjfuNr9n2W1azQy2b2HJijbEU zwlweP97R/GaRYE5xn25lLsMDuaDghMYpprmGrsLpaeWQIHb+895OD2O+xMzxSgsoM4TzY FUNisuwbbuzFvT6IgfpbMvKq1mgTAAVa/XlVwFe1MTOysb6P0cRcM/NpsqvQ9x7gZTHYO3 ljzI4m/cf+O9j02xuqgzF0fpSkIFfYdFiPUPIXTHoRk+AHGyjxWUKe+ub9LWqQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1691870373; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=vkb5eya832wOkrcECHFcR/66ZtKy/0rcIUb77ri1flg=; b=HiTEdQyeeiEDI0HR94QL2i6Rzdnl2YxBJWK3vmKI4M0iJF3bpHi7iebWfrAzOXvtLb/tjW mbMR6YimNoUW3fAw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, Borislav Petkov , Ashok Raj , Arjan van de Ven , Nikolay Borisov Subject: [patch V2 36/37] x86/microcode: Prepare for minimal revision check References: <20230812194003.682298127@linutronix.de> MIME-Version: 1.0 Date: Sat, 12 Aug 2023 21:59:32 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Thomas Gleixner Applying microcode late can be fatal for the running kernel when the update changes functionality which is in use already in a non-compatible way, e.g. by removing a CPUID bit. There is no way for admins which do not have access to the vendors deep technical support to decide whether late loading of such a microcode is safe or not. 
Intel has added a new field to the microcode header which tells the
minimal microcode revision which is required to be active in the CPU in
order to be safe.

Provide infrastructure for handling this in the core code and a command
line switch which allows enforcing it.

If the update is considered safe, the kernel is not tainted and the
annoying warning message is not emitted. If it's enforced and the
currently loaded microcode revision is not safe for late loading, then
the load is aborted.

Signed-off-by: Thomas Gleixner
---
 Documentation/admin-guide/kernel-parameters.txt |    5 ++++
 arch/x86/Kconfig                                |   23 +++++++++++++++++-
 arch/x86/kernel/cpu/microcode/amd.c             |    3 ++
 arch/x86/kernel/cpu/microcode/core.c            |   29 +++++++++++++++-------
 arch/x86/kernel/cpu/microcode/intel.c           |    3 ++
 arch/x86/kernel/cpu/microcode/internal.h        |    3 ++
 6 files changed, 58 insertions(+), 8 deletions(-)
---
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3239,6 +3239,11 @@
 
 	mga=		[HW,DRM]
 
+	microcode.force_minrev=	[X86]
+			Format:
+			Enable or disable the microcode minimal revision
+			enforcement for the runtime microcode loader.
+
 	min_addr=nn[KMG]	[KNL,BOOT,IA-64] All physical memory below this
 			physical address is ignored.
 
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1320,7 +1320,28 @@ config MICROCODE_LATE_LOADING
 	  is a tricky business and should be avoided if possible. Just the sequence
 	  of synchronizing all cores and SMT threads is one fragile dance which does
 	  not guarantee that cores might not softlock after the loading. Therefore,
-	  use this at your own risk. Late loading taints the kernel too.
+	  use this at your own risk. Late loading taints the kernel unless the
+	  microcode header indicates that it is safe for late loading via the
+	  minimal revision check. This minimal revision check can be enforced on
+	  the kernel command line with "microcode.force_minrev=Y".
+
+config MICROCODE_LATE_FORCE_MINREV
+	bool "Enforce late microcode loading minimal revision check"
+	default n
+	depends on MICROCODE_LATE_LOADING
+	help
+	  To prevent that users load microcode late which modifies already
+	  in use features, newer microcodes have a minimum revision field
+	  in the microcode header, which tells the kernel which minimum
+	  revision must be active in the CPU to safely load that new microcode
+	  late into the running system. If disabled the check will not
+	  be enforced but the kernel will be tainted when the minimal
+	  revision check fails.
+
+	  This minimal revision check can also be controlled via the
+	  "microcode.force_minrev" parameter on the kernel command line.
+
+	  If unsure say Y.
 
 config X86_MSR
 	tristate "/dev/cpu/*/msr - Model-specific register support"
--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -919,6 +919,9 @@ static enum ucode_state request_microcod
 	enum ucode_state ret = UCODE_NFOUND;
 	const struct firmware *fw;
 
+	if (force_minrev)
+		return UCODE_NFOUND;
+
 	if (c->x86 >= 0x15)
 		snprintf(fw_name, sizeof(fw_name), "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
 
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -46,6 +46,9 @@
 static struct microcode_ops *microcode_ops;
 static bool dis_ucode_ldr = true;
 
+bool force_minrev = IS_ENABLED(CONFIG_MICROCODE_LATE_FORCE_MINREV);
+module_param(force_minrev, bool, S_IRUSR | S_IWUSR);
+
 bool initrd_gone;
 
 /*
@@ -601,15 +604,17 @@ static int ucode_load_cpus_stopped(void
 	return 0;
 }
 
-static int ucode_load_late_stop_cpus(void)
+static int ucode_load_late_stop_cpus(bool is_safe)
 {
 	unsigned int cpu, updated = 0, failed = 0, timedout = 0, siblings = 0;
 	unsigned int nr_offl, offline = 0;
 	int old_rev = boot_cpu_data.microcode;
 	struct cpuinfo_x86 prev_info;
 
-	pr_err("Attempting late microcode loading - it is dangerous and taints the kernel.\n");
-	pr_err("You should switch to early loading, if possible.\n");
+	if (!is_safe) {
+		pr_err("Late microcode loading without minimal revision check.\n");
+		pr_err("You should switch to early loading, if possible.\n");
+	}
 
 	atomic_set(&late_cpus_in, num_online_cpus());
 	atomic_set(&offline_in_nmi, 0);
@@ -659,7 +664,9 @@ static int ucode_load_late_stop_cpus(voi
 		return -EIO;
 	}
 
-	add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
+	if (!is_safe || failed || timedout)
+		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
+
 	pr_info("Microcode load: updated on %u primary CPUs with %u siblings\n", updated, siblings);
 	if (failed || timedout) {
 		pr_err("Microcode load incomplete. %u CPUs timed out or failed\n",
@@ -753,9 +760,17 @@ static int ucode_load_late_locked(void)
 		return -EBUSY;
 
 	ret = microcode_ops->request_microcode_fw(0, &microcode_pdev->dev);
-	if (ret != UCODE_NEW)
-		return ret == UCODE_NFOUND ? -ENOENT : -EBADFD;
-	return ucode_load_late_stop_cpus();
+
+	switch (ret) {
+	case UCODE_NEW:
+	case UCODE_NEW_SAFE:
+		break;
+	case UCODE_NFOUND:
+		return -ENOENT;
+	default:
+		return -EBADFD;
+	}
+	return ucode_load_late_stop_cpus(ret == UCODE_NEW_SAFE);
 }
 
 static ssize_t reload_store(struct device *dev,
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -551,6 +551,9 @@ static enum ucode_state read_ucode_intel
 	int cur_rev = uci->cpu_sig.rev;
 	u8 *new_mc = NULL, *mc = NULL;
 
+	if (force_minrev)
+		return UCODE_NFOUND;
+
 	while (iov_iter_count(iter)) {
 		struct microcode_header_intel mc_header;
 		unsigned int mc_size, data_size;
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -13,6 +13,7 @@ struct device;
 enum ucode_state {
 	UCODE_OK	= 0,
 	UCODE_NEW,
+	UCODE_NEW_SAFE,
 	UCODE_UPDATED,
 	UCODE_NFOUND,
 	UCODE_ERROR,
@@ -36,6 +37,8 @@ struct microcode_ops {
 		use_nmi		: 1;
 };
 
+extern bool force_minrev;
+
 extern struct ucode_cpu_info ucode_cpu_info[];
 struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa);

From nobody Thu Dec 18 20:32:01 2025
Message-ID: <20230812195729.693640265@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Borislav Petkov, Ashok Raj, Arjan van de Ven, Nikolay Borisov
Subject: [patch V2 37/37] x86/microcode/intel: Add a minimum required revision for late-loads
References: <20230812194003.682298127@linutronix.de>
Date: Sat, 12 Aug 2023 21:59:34 +0200 (CEST)

From: Ashok Raj

In general users don't have the necessary information to determine
whether late loading of a new microcode version is safe and does not
modify anything which the currently running kernel uses already, e.g.
removal of CPUID bits or behavioural changes of MSRs.

To address this issue, Intel has added a "minimum required version"
field to a previously reserved field in the microcode header. Microcode
updates should only be applied if the current microcode version is equal
to, or greater than this minimum required version.

Thomas made some suggestions on how metadata in the microcode file could
provide Linux with information to decide whether the new microcode is a
suitable candidate for late loading. But even the "simpler" option
requires a lot of metadata and corresponding kernel code to parse it, so
the final suggestion was to add the 'minimum required version' field in
the header.

When microcode changes visible features, microcode will set the minimum
required version to its own revision, which prevents late loading.

Old microcode blobs have the minimum revision field always set to 0,
which indicates that there is no information and the kernel considers it
unsafe.

This is a pure OS software mechanism. The hardware/firmware ignores this
header field.

For early loading there is no restriction because OS visible features
are enumerated after the early load and therefore a change has no
effect.

The check is always enabled, but by default not enforced. It can be
enforced via Kconfig or the kernel command line.
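The resulting decision matrix can be sketched in plain C. This is a
user-space sketch of the behaviour described above, not the kernel
implementation; the function names are illustrative:

```c
#include <stdbool.h>

/*
 * A min_req_ver of 0 means the blob carries no information and is
 * treated as unsafe for late loading.
 */
bool minrev_safe(unsigned int cur_rev, unsigned int min_req_ver)
{
	if (!min_req_ver)
		return false;			/* old blob, no information */
	return cur_rev >= min_req_ver;		/* current revision new enough? */
}

/*
 * Enforced mode refuses an unsafe late load; the default mode loads
 * anyway but taints the kernel. Returns 0 for "load", -1 for "refuse".
 */
int late_load_decision(unsigned int cur_rev, unsigned int min_req_ver,
		       bool force_minrev, bool *taint)
{
	if (force_minrev && !minrev_safe(cur_rev, min_req_ver))
		return -1;
	*taint = !minrev_safe(cur_rev, min_req_ver);
	return 0;
}
```
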
If enforced, the kernel refuses to late load microcode with a minimum
required version field which is zero, or when the currently loaded
microcode revision is smaller than the minimum required revision.

If not enforced, the load happens independent of the revision check to
stay compatible with the existing behaviour, but it influences the
decision whether the kernel is tainted or not. If the check signals that
the late load is safe, then the kernel is not tainted.

Early loading is not affected by this.

[ tglx: Massaged changelog and fixed up the implementation ]

Suggested-by: Thomas Gleixner
Signed-off-by: Ashok Raj
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/microcode.h      |    3 +-
 arch/x86/kernel/cpu/microcode/intel.c |   37 +++++++++++++++++++++++++++----
 2 files changed, 35 insertions(+), 5 deletions(-)
---
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -36,7 +36,8 @@ struct microcode_header_intel {
 	unsigned int	datasize;
 	unsigned int	totalsize;
 	unsigned int	metasize;
-	unsigned int	reserved[2];
+	unsigned int	min_req_ver;
+	unsigned int	reserved;
 };
 
 struct microcode_intel {
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -544,16 +544,40 @@ static enum ucode_state apply_microcode_
 	return ret;
 }
 
+static bool ucode_validate_minrev(struct microcode_header_intel *mc_header)
+{
+	int cur_rev = boot_cpu_data.microcode;
+
+	/*
+	 * When late-loading, ensure the header declares a minimum revision
+	 * required to perform a late-load. The previously reserved field
+	 * is 0 in older microcode blobs.
+	 */
+	if (!mc_header->min_req_ver) {
+		pr_info("Unsafe microcode update: Microcode header does not specify a required min version\n");
+		return false;
+	}
+
+	/*
+	 * Check whether the current revision is equal to or greater than
+	 * the minimum revision specified in the header.
+	 */
+	if (cur_rev < mc_header->min_req_ver) {
+		pr_info("Unsafe microcode update: Current revision 0x%x too old\n", cur_rev);
+		pr_info("Current should be at 0x%x or higher. Use early loading instead\n", mc_header->min_req_ver);
+		return false;
+	}
+	return true;
+}
+
 static enum ucode_state read_ucode_intel(int cpu, struct iov_iter *iter)
 {
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
 	unsigned int curr_mc_size = 0, new_mc_size = 0;
+	bool is_safe, new_is_safe = false;
 	int cur_rev = uci->cpu_sig.rev;
 	u8 *new_mc = NULL, *mc = NULL;
 
-	if (force_minrev)
-		return UCODE_NFOUND;
-
 	while (iov_iter_count(iter)) {
 		struct microcode_header_intel mc_header;
 		unsigned int mc_size, data_size;
@@ -596,10 +620,15 @@ static enum ucode_state read_ucode_intel
 		if (!intel_find_matching_signature(mc, &uci->cpu_sig))
 			continue;
 
+		is_safe = ucode_validate_minrev(&mc_header);
+		if (force_minrev && !is_safe)
+			continue;
+
 		kvfree(new_mc);
 		cur_rev = mc_header.rev;
 		new_mc = mc;
 		new_mc_size = mc_size;
+		new_is_safe = is_safe;
 		mc = NULL;
 	}
 
@@ -616,7 +645,7 @@ static enum ucode_state read_ucode_intel
 		return UCODE_NFOUND;
 
 	ucode_patch_late = (struct microcode_intel *)new_mc;
-	return UCODE_NEW;
+	return new_is_safe ? UCODE_NEW_SAFE : UCODE_NEW;
 
 fail:
 	kvfree(mc);