From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 1/7] x86/cpu/topology: Make primary thread mask available with SMP=n
Date: Sun, 21 Sep 2025 15:48:35 -0700
Message-ID: <20250921224841.3545-2-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

cpu_primary_thread_mask is only defined when CONFIG_SMP=y. However, even
in UP kernels there is always exactly one CPU, which can reasonably be
treated as the primary thread.

Historically, topology_is_primary_thread() always returned true with
CONFIG_SMP=n. A recent commit:

  4b455f59945aa ("cpu/SMT: Provide a default topology_is_primary_thread()")

replaced it with a generic implementation with the note:

  "When disabling SMT, the primary thread of the SMT will remain
   enabled/active. Architectures that have a special primary thread (e.g.
   x86) need to override this function. ..."

For consistency and clarity, make the primary thread mask available
regardless of SMP, similar to cpu_possible_mask and cpu_present_mask.
Move __cpu_primary_thread_mask into common code to prevent build issues.
Let cpu_mark_primary_thread() configure the mask even for UP kernels,
alongside other masks. Then, topology_is_primary_thread() can
consistently reference it.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
V5 -> V6: Collect Tony's review tag
V4 -> V5: New patch

Preparatory patch to set up the mask correctly before its new use in
patch 3.
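[For illustration only, not part of the patch: cpu_mark_primary_thread() in the diff below treats a CPU as a primary SMT thread when the low "thread" bits of its APIC ID are all zero. A minimal user-space sketch of the same mask arithmetic, assuming two SMT threads per core and a hypothetical is_primary_thread() helper:]

  #include <stdbool.h>
  #include <stdio.h>

  /* Same mask check as cpu_mark_primary_thread(); threads_per_core is a power of two. */
  static bool is_primary_thread(unsigned int apicid, unsigned int threads_per_core)
  {
          return !(apicid & (threads_per_core - 1));
  }

  int main(void)
  {
          unsigned int apicid;

          /* With 2 threads per core, APIC IDs 0 and 2 are primary; 1 and 3 are siblings. */
          for (apicid = 0; apicid < 4; apicid++)
                  printf("APIC ID %u: %s\n", apicid,
                         is_primary_thread(apicid, 2) ? "primary" : "secondary");
          return 0;
  }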
---
 arch/x86/include/asm/topology.h       | 12 ++++++------
 arch/x86/kernel/cpu/topology.c        |  4 ----
 arch/x86/kernel/cpu/topology_common.c |  3 +++
 arch/x86/kernel/smpboot.c             |  3 ---
 4 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 6c79ee7c0957..281252af6e9d 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -218,6 +218,12 @@ static inline unsigned int topology_amd_nodes_per_pkg(void)
 	return __amd_nodes_per_pkg;
 }

+#else /* CONFIG_SMP */
+static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+static inline int topology_max_smt_threads(void) { return 1; }
+static inline unsigned int topology_amd_nodes_per_pkg(void) { return 1; }
+#endif /* !CONFIG_SMP */
+
 extern struct cpumask __cpu_primary_thread_mask;
 #define cpu_primary_thread_mask ((const struct cpumask *)&__cpu_primary_thread_mask)

@@ -231,12 +237,6 @@ static inline bool topology_is_primary_thread(unsigned int cpu)
 }
 #define topology_is_primary_thread topology_is_primary_thread

-#else /* CONFIG_SMP */
-static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
-static inline int topology_max_smt_threads(void) { return 1; }
-static inline unsigned int topology_amd_nodes_per_pkg(void) { return 1; }
-#endif /* !CONFIG_SMP */
-
 static inline void arch_fix_phys_package_id(int num, u32 slot)
 {
 }
diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
index e35ccdc84910..f083023f7dd9 100644
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -75,15 +75,11 @@ bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
 	return phys_id == (u64)cpuid_to_apicid[cpu];
 }

-#ifdef CONFIG_SMP
 static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
 {
 	if (!(apicid & (__max_threads_per_core - 1)))
 		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
 }
-#else
-static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
-#endif

 /*
  * Convert the APIC ID to a domain level ID by masking out the low bits
diff --git a/arch/x86/kernel/cpu/topology_common.c b/arch/x86/kernel/cpu/topology_common.c
index b5a5e1411469..71625795d711 100644
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -16,6 +16,9 @@ EXPORT_SYMBOL_GPL(x86_topo_system);
 unsigned int __amd_nodes_per_pkg __ro_after_init;
 EXPORT_SYMBOL_GPL(__amd_nodes_per_pkg);

+/* CPUs which are the primary SMT threads */
+struct cpumask __cpu_primary_thread_mask __read_mostly;
+
 void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
 		      unsigned int shift, unsigned int ncpus)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index eb289abece23..6b43417bf270 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -103,9 +103,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_core_map);
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 EXPORT_PER_CPU_SYMBOL(cpu_die_map);

-/* CPUs which are the primary SMT threads */
-struct cpumask __cpu_primary_thread_mask __read_mostly;
-
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;

-- 
2.48.1

From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 2/7] x86/microcode: Introduce staging step to reduce late-loading time
Date: Sun, 21 Sep 2025 15:48:36 -0700
Message-ID: <20250921224841.3545-3-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

As microcode patch sizes continue to grow, late-loading latency spikes
can lead to timeouts and disruptions in running workloads. This trend of
increasing patch sizes is expected to continue, so a foundational
solution is needed to address the issue.

To mitigate the problem, a new staging feature is introduced. This
option processes most of the microcode update (excluding activation) on
a non-critical path, allowing CPUs to remain operational during the
majority of the update. By offloading work from the critical path,
staging can significantly reduce latency spikes.

Integrate staging as a preparatory step in late-loading. Introduce a new
callback for staging, which is invoked at the beginning of
load_late_stop_cpus(), before CPUs enter the rendezvous phase.

Staging follows an opportunistic model:

  * If successful, it reduces CPU rendezvous time.

  * Even if it fails, the process falls back to the legacy path to
    finish the loading, but with potentially higher latency.

Extend struct microcode_ops to incorporate staging properties, which
will be implemented in the vendor code separately.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Tested-by: Anselm Busse <abusse@amazon.de>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
V5 -> V6:
* Fix typo in changelog: reduces -> reduce (Boris)
* Collect Tony's review tag
V4 -> V5: * Collect Chao's review tag
V1 -> V2:
* Move invocation inside of load_late_stop_cpus() (Boris)
* Add more notes about staging (Dave)

There were discussions about whether staging success should be enforced
by a configurable option. That topic is identified as follow-up work,
separate from this series.
  https://lore.kernel.org/lkml/54308373-7867-4b76-be34-63730953f83c@intel.com/
---
 arch/x86/kernel/cpu/microcode/core.c     | 11 +++++++++++
 arch/x86/kernel/cpu/microcode/internal.h |  4 +++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index f75c140906d0..d7baec8ec0b4 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -589,6 +589,17 @@ static int load_late_stop_cpus(bool is_safe)
 		pr_err("You should switch to early loading, if possible.\n");
 	}

+	/*
+	 * Pre-load the microcode image into a staging device. This
+	 * process is preemptible and does not require stopping CPUs.
+	 * Successful staging simplifies the subsequent late-loading
+	 * process, reducing rendezvous time.
+	 *
+	 * Even if the transfer fails, the update will proceed as usual.
+	 */
+	if (microcode_ops->use_staging)
+		microcode_ops->stage_microcode();
+
 	atomic_set(&late_cpus_in, num_online_cpus());
 	atomic_set(&offline_in_nmi, 0);
 	loops_per_usec = loops_per_jiffy / (TICK_NSEC / 1000);
diff --git a/arch/x86/kernel/cpu/microcode/internal.h b/arch/x86/kernel/cpu/microcode/internal.h
index ae8dbc2b908d..a10b547eda1e 100644
--- a/arch/x86/kernel/cpu/microcode/internal.h
+++ b/arch/x86/kernel/cpu/microcode/internal.h
@@ -31,10 +31,12 @@ struct microcode_ops {
 	 * See also the "Synchronization" section in microcode_core.c.
 	 */
 	enum ucode_state	(*apply_microcode)(int cpu);
+	void			(*stage_microcode)(void);
 	int			(*collect_cpu_info)(int cpu, struct cpu_signature *csig);
 	void			(*finalize_late_load)(int result);
 	unsigned int		nmi_safe	: 1,
-				use_nmi		: 1;
+				use_nmi		: 1,
+				use_staging	: 1;
 };

 struct early_load_data {
-- 
2.48.1

From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 3/7] x86/microcode/intel: Establish staging control logic
Date: Sun, 21 Sep 2025 15:48:37 -0700
Message-ID: <20250921224841.3545-4-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

When microcode staging is initiated, operations are carried out through
an MMIO interface. Each package has a unique interface specified by the
IA32_MCU_STAGING_MBOX_ADDR MSR, which maps to a set of 32-bit registers.

Prepare staging with the following steps:

  1. Ensure the microcode image is 32-bit aligned to match the MMIO
     register size.

  2. Identify each MMIO interface based on its per-package scope.

  3. Invoke the staging function for each identified interface, which
     will be implemented separately.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Tested-by: Anselm Busse <abusse@amazon.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lore.kernel.org/all/871pznq229.ffs@tglx
---
V5 -> V6:
* Remove stale text in changelog (Boris)
* Place MSR definition in the right spot in msr-index.h (Boris)
* Dump error code instead of vague message (Boris)
* Collect Tony's review tag
V4 -> V5:
* Rebase on the primary thread cpumask fix (Dave)
* Clean up the revision print code (Dave)
* rdmsrl_on_cpu() -> rdmsrq_on_cpu (Chao)
V2 -> V3:
* Remove a global variable and adjust stage_microcode() (Dave).
* Simplify for_each_cpu() loop control code
* Handle rdmsrl_on_cpu() return code explicitly (Chao)
V1 -> V2:
* Adjust to reference the staging_state struct.
* Add lockdep_assert_cpus_held() (Boris)
---
 arch/x86/include/asm/msr-index.h      |  2 ++
 arch/x86/kernel/cpu/microcode/intel.c | 48 +++++++++++++++++++++++++++
 2 files changed, 50 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 718a55d82fe4..0736e44f7c69 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1222,6 +1222,8 @@
 #define MSR_IA32_VMX_VMFUNC		0x00000491
 #define MSR_IA32_VMX_PROCBASED_CTLS3	0x00000492

+#define MSR_IA32_MCU_STAGING_MBOX_ADDR	0x000007a5
+
 /* Resctrl MSRs: */
 /* - Intel: */
 #define MSR_IA32_L3_QOS_CFG		0xc81
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index 371ca6eac00e..daae74858347 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -299,6 +299,53 @@ static __init struct microcode_intel *scan_microcode(void *data, size_t size,
 	return size ? NULL : patch;
 }

+/*
+ * Handle the staging process using the mailbox MMIO interface.
+ * Return 0 on success or an error code on failure.
+ */
+static int do_stage(u64 mmio_pa)
+{
+	pr_debug_once("Staging implementation is pending.\n");
+	return -EPROTONOSUPPORT;
+}
+
+static void stage_microcode(void)
+{
+	unsigned int pkg_id = UINT_MAX;
+	int cpu, err;
+	u64 mmio_pa;
+
+	if (!IS_ALIGNED(get_totalsize(&ucode_patch_late->hdr), sizeof(u32)))
+		return;
+
+	lockdep_assert_cpus_held();
+
+	/*
+	 * The MMIO address is unique per package, and all the SMT
+	 * primary threads are online here. Find each MMIO space by
+	 * their package ids to avoid duplicate staging.
+	 */
+	for_each_cpu(cpu, cpu_primary_thread_mask) {
+		if (topology_logical_package_id(cpu) == pkg_id)
+			continue;
+
+		pkg_id = topology_logical_package_id(cpu);
+
+		err = rdmsrq_on_cpu(cpu, MSR_IA32_MCU_STAGING_MBOX_ADDR, &mmio_pa);
+		if (WARN_ON_ONCE(err))
+			return;
+
+		err = do_stage(mmio_pa);
+		if (err) {
+			pr_err("Error: staging failed (code = %d) for CPU%d at package %u.\n",
+			       err, cpu, pkg_id);
+			return;
+		}
+	}
+
+	pr_info("Staging of patch revision 0x%x succeeded.\n", ucode_patch_late->hdr.rev);
+}
+
 static enum ucode_state __apply_microcode(struct ucode_cpu_info *uci,
 					  struct microcode_intel *mc,
 					  u32 *cur_rev)
@@ -627,6 +674,7 @@ static struct microcode_ops microcode_intel_ops = {
 	.collect_cpu_info	= collect_cpu_info,
 	.apply_microcode	= apply_microcode_late,
 	.finalize_late_load	= finalize_late_load,
+	.stage_microcode	= stage_microcode,
 	.use_nmi		= IS_ENABLED(CONFIG_X86_64),
 };

-- 
2.48.1

From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 4/7] x86/microcode/intel: Define staging state struct
Date: Sun, 21 Sep 2025 15:48:38 -0700
Message-ID: <20250921224841.3545-5-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Define staging_state struct to simplify function prototypes by
consolidating relevant data, instead of passing multiple local
variables.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Tested-by: Anselm Busse <abusse@amazon.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
V5 -> V6:
* Trim the changelog (Boris)
* Drop the state field
* Collect review tag
V4 -> V5: Drop the ucode_ptr field (Dave)
V1 -> V2: New patch

Prior to V2, local variables were used to track state values, with the
intention of improving readability by explicitly passing them between
functions. However, given that feedback, a dedicated data structure
provides a benefit by simplifying the main loop.
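[For illustration only, not part of the patch: the consolidation described above mainly shortens the helper prototypes added by the following patches. The declarations below are hypothetical, sketching the before/after shape rather than the kernel's actual signatures:]

  /* Hypothetical sketch only -- not the kernel's actual prototypes. */
  struct staging_state;	/* defined in the diff below */

  /* Without consolidation: each helper threads the transfer state by hand. */
  int send_chunk_unconsolidated(void *mmio_base, void *ucode_ptr,
                                unsigned int offset, unsigned int chunk_size,
                                unsigned int *bytes_sent);

  /* With struct staging_state: a single pointer carries the same state. */
  int send_chunk_consolidated(struct staging_state *ss, void *ucode_ptr);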
---
 arch/x86/kernel/cpu/microcode/intel.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index daae74858347..b9f6bfbc7fea 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -54,6 +54,23 @@ struct extended_sigtable {
 	struct extended_signature sigs[];
 };

+/**
+ * struct staging_state - Track the current staging process state
+ *
+ * @mmio_base:	MMIO base address for staging
+ * @ucode_len:	Total size of the microcode image
+ * @chunk_size:	Size of each data piece
+ * @bytes_sent:	Total bytes transmitted so far
+ * @offset:	Current offset in the microcode image
+ */
+struct staging_state {
+	void __iomem	*mmio_base;
+	unsigned int	ucode_len;
+	unsigned int	chunk_size;
+	unsigned int	bytes_sent;
+	unsigned int	offset;
+};
+
 #define DEFAULT_UCODE_TOTALSIZE (DEFAULT_UCODE_DATASIZE + MC_HEADER_SIZE)
 #define EXT_HEADER_SIZE		(sizeof(struct extended_sigtable))
 #define EXT_SIGNATURE_SIZE	(sizeof(struct extended_signature))
-- 
2.48.1

From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 5/7] x86/microcode/intel: Implement staging handler
Date: Sun, 21 Sep 2025 15:48:39 -0700
Message-ID: <20250921224841.3545-6-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Previously, per-package staging invocations and their associated state
data were established. The next step is to implement the actual staging
handler according to the specified protocol.

Below are the key aspects to note:

  (a) Each staging process must begin by resetting the staging hardware.

  (b) The staging hardware processes up to a page-sized chunk of the
      microcode image per iteration, requiring software to submit data
      incrementally.

  (c) Once a data chunk is processed, the hardware responds with an
      offset in the image for the next chunk.

  (d) The offset may indicate completion or request retransmission of an
      already transferred chunk. As long as the total transferred data
      remains within the predefined limit (twice the image size),
      retransmissions should be acceptable.

Incorporate these in the handler, while data transmission and mailbox
format handling are implemented separately.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Tested-by: Anselm Busse <abusse@amazon.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
V5 -> V6:
* Remove a single-line helper and fold it (Boris)
* Fix the header file ordering (Boris)
* Fix typo in comment (Boris)
* Adjust code to return a unique error code: ETIMEDOUT => EMSGSIZE
* Trim the changelog.
* Collect Tony's review tag
V4 -> V5:
* Convert helper functions to return error codes (Dave)
* Consolidate loop-control logic
* Refactor next-chunk calculation/check for clarity
* Remove offset sanity check (moved to next patch)
V2 -> V3:
* Rework code to eliminate global variables (Dave)
* Remove redundant variable resets (Chao)
V1 -> V2:
* Re-write the changelog for clarity (Dave).
* Move staging handling code into intel.c (Boris).
* Add extensive comments to clarify staging logic and hardware
  interactions, along with function renaming (Dave).
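[For illustration only, not part of the patch: a worked example of the retransmission budget described in point (d), assuming a 40 KiB image staged in page-sized (4 KiB) chunks; the numbers are illustrative, not from the series:]

  #include <stdio.h>

  int main(void)
  {
          unsigned int ucode_len = 40 * 1024;     /* example image size */
          unsigned int chunk     = 4096;          /* page-sized chunk */
          unsigned int budget    = 2 * ucode_len; /* limit from point (d) */

          /* 10 transactions are expected; up to 20 are tolerated before giving up. */
          printf("expected transactions: %u\n", ucode_len / chunk);
          printf("maximum transactions:  %u\n", budget / chunk);
          return 0;
  }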
---
 arch/x86/kernel/cpu/microcode/intel.c | 123 +++++++++++++++++++++++++-
 1 file changed, 120 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index b9f6bfbc7fea..d3e15f23d53a 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -12,9 +12,11 @@
  */
 #define pr_fmt(fmt) "microcode: " fmt
 #include
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -33,6 +35,15 @@ static const char ucode_path[] = "kernel/x86/microcode/GenuineIntel.bin";

 #define UCODE_BSP_LOADED	((struct microcode_intel *)0x1UL)

+/* Defines for the microcode staging mailbox interface */
+#define MBOX_REG_NUM		4
+#define MBOX_REG_SIZE		sizeof(u32)
+
+#define MBOX_CONTROL_OFFSET	0x0
+#define MBOX_STATUS_OFFSET	0x4
+
+#define MASK_MBOX_CTRL_ABORT	BIT(0)
+
 /* Current microcode patch used in early patching on the APs. */
 static struct microcode_intel *ucode_patch_va __read_mostly;
 static struct microcode_intel *ucode_patch_late __read_mostly;
@@ -317,13 +328,119 @@ static __init struct microcode_intel *scan_microcode(void *data, size_t size,
 }

 /*
- * Handle the staging process using the mailbox MMIO interface.
+ * Prepare for a new microcode transfer: reset hardware and record the
+ * image size.
+ */
+static void init_stage(struct staging_state *ss)
+{
+	ss->ucode_len = get_totalsize(&ucode_patch_late->hdr);
+
+	/*
+	 * Abort any ongoing process, effectively resetting the device.
+	 * Unlike regular mailbox data processing requests, this
+	 * operation does not require a status check.
+	 */
+	writel(MASK_MBOX_CTRL_ABORT, ss->mmio_base + MBOX_CONTROL_OFFSET);
+}
+
+/*
+ * Update the chunk size and decide whether another chunk can be sent.
+ * This accounts for remaining data and retry limits.
+ */
+static bool can_send_next_chunk(struct staging_state *ss, int *err)
+{
+	/* A page size or remaining bytes if this is the final chunk */
+	ss->chunk_size = min(PAGE_SIZE, ss->ucode_len - ss->offset);
+
+	/*
+	 * Each microcode image is divided into chunks, each at most
+	 * one page size. A 10-chunk image would typically require 10
+	 * transactions.
+	 *
+	 * However, the hardware managing the mailbox has limited
+	 * resources and may not cache the entire image, potentially
+	 * requesting the same chunk multiple times.
+	 *
+	 * To tolerate this behavior, allow up to twice the expected
+	 * number of transactions (i.e., a 10-chunk image can take up to
+	 * 20 attempts).
+	 *
+	 * If the number of attempts exceeds this limit, treat it as
+	 * exceeding the maximum allowed transfer size.
+	 */
+	if (ss->bytes_sent + ss->chunk_size > ss->ucode_len * 2) {
+		*err = -EMSGSIZE;
+		return false;
+	}
+
+	*err = 0;
+	return true;
+}
+
+/*
+ * Determine whether staging is complete: either the hardware signaled
+ * the end offset, or no more transactions are permitted (retry limit
+ * reached).
+ */
+static inline bool staging_is_complete(struct staging_state *ss, int *err)
+{
+	return (ss->offset == UINT_MAX) || !can_send_next_chunk(ss, err);
+}
+
+/*
+ * Transmit a chunk of the microcode image to the hardware.
+ * Return 0 on success, or an error code on failure.
+ */
+static int send_data_chunk(struct staging_state *ss, void *ucode_ptr __maybe_unused)
+{
+	pr_debug_once("Staging mailbox loading code needs to be implemented.\n");
+	return -EPROTONOSUPPORT;
+}
+
+/*
+ * Retrieve the next offset from the hardware response.
+ * Return 0 on success, or an error code on failure.
+ */
+static int fetch_next_offset(struct staging_state *ss)
+{
+	pr_debug_once("Staging mailbox response handling code needs to be implemented.\n");
+	return -EPROTONOSUPPORT;
+}
+
+/*
+ * Handle the staging process using the mailbox MMIO interface. The
+ * microcode image is transferred in chunks until completion.
  * Return 0 on success or an error code on failure.
  */
 static int do_stage(u64 mmio_pa)
 {
-	pr_debug_once("Staging implementation is pending.\n");
-	return -EPROTONOSUPPORT;
+	struct staging_state ss = {};
+	int err;
+
+	ss.mmio_base = ioremap(mmio_pa, MBOX_REG_NUM * MBOX_REG_SIZE);
+	if (WARN_ON_ONCE(!ss.mmio_base))
+		return -EADDRNOTAVAIL;
+
+	init_stage(&ss);
+
+	/* Perform the staging process while within the retry limit */
+	while (!staging_is_complete(&ss, &err)) {
+		/* Send a chunk of microcode each time: */
+		err = send_data_chunk(&ss, ucode_patch_late);
+		if (err)
+			break;
+		/*
+		 * Then, ask the hardware which piece of the image it
+		 * needs next. The same piece may be sent more than once.
+		 */
+		err = fetch_next_offset(&ss);
+		if (err)
+			break;
+	}
+
+	iounmap(ss.mmio_base);
+
+	return err;
 }

 static void stage_microcode(void)
-- 
2.48.1

From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 6/7] x86/microcode/intel: Support mailbox transfer
Date: Sun, 21 Sep 2025 15:48:40 -0700
Message-ID: <20250921224841.3545-7-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The functions for sending microcode data and retrieving the next offset
were previously placeholders, as they need to handle a specific mailbox
format. While the kernel supports similar mailboxes, none of them are
compatible with this one. Attempts to share code led to unnecessary
complexity, so add a dedicated implementation instead.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Tested-by: Anselm Busse <abusse@amazon.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
V5 -> V6:
* Trim the changelog, narrowing down the reason (Boris)
* Polish the error message (Boris)
* Fix the grammar: 'hardware is responded' => 'hardware has responded' (Boris)
* Fix the header file ordering (Boris)
* Fix typo: 'resemble' => 'reassemble' (Boris)
* Adjust code to return a unique error code
* Collect Tony's review tag
V4 -> V5: Addressed Dave's feedback
* fetch_next_offset():
  - Make dword reads explicit
  - Consolidate offset validation -- adding another user for the
    end-offset checker
  - Convert WARN_* with pr_err_once()
* Simplify transaction waiting logic a bit
V2 -> V3:
* Update code to reflect the removal of a global variable (Dave).
V1 -> V2:
* Add lots of code comments and edit the changelog (Dave).
* Encapsulate register read/write operations for processing header and
  data sections.
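[For illustration only, not part of the patch: the request mailbox header used in the diff below packs the Intel PCI vendor ID, the staging object type, and the mailbox size in dwords into one 64-bit value. A small user-space sketch of that encoding, with mbox_header() as a hypothetical stand-in for the MBOX_HEADER() macro:]

  #include <stdint.h>
  #include <stdio.h>

  #define PCI_VENDOR_ID_INTEL	0x8086
  #define MBOX_OBJ_STAGING	0xb

  /* Bits 15:0 vendor ID, bits 31:16 object type, bits 63:32 size in dwords. */
  static uint64_t mbox_header(uint32_t size_bytes)
  {
          return (uint64_t)PCI_VENDOR_ID_INTEL |
                 ((uint64_t)MBOX_OBJ_STAGING << 16) |
                 ((uint64_t)(size_bytes / sizeof(uint32_t)) << 32);
  }

  int main(void)
  {
          /* A 4 KiB chunk plus two 8-byte headers: 4112 bytes = 1028 dwords. */
          printf("header = %#llx\n", (unsigned long long)mbox_header(4096 + 16));
          return 0;
  }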
---
 arch/x86/kernel/cpu/microcode/intel.c | 172 +++++++++++++++++++++++++-
 1 file changed, 166 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index d3e15f23d53a..22b9c3d4f43d 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -41,8 +42,31 @@ static const char ucode_path[] = "kernel/x86/microcode/GenuineIntel.bin";

 #define MBOX_CONTROL_OFFSET	0x0
 #define MBOX_STATUS_OFFSET	0x4
+#define MBOX_WRDATA_OFFSET	0x8
+#define MBOX_RDDATA_OFFSET	0xc

 #define MASK_MBOX_CTRL_ABORT	BIT(0)
+#define MASK_MBOX_CTRL_GO	BIT(31)
+
+#define MASK_MBOX_STATUS_ERROR	BIT(2)
+#define MASK_MBOX_STATUS_READY	BIT(31)
+
+#define MASK_MBOX_RESP_SUCCESS	BIT(0)
+#define MASK_MBOX_RESP_PROGRESS	BIT(1)
+#define MASK_MBOX_RESP_ERROR	BIT(2)
+
+#define MBOX_CMD_LOAD		0x3
+#define MBOX_OBJ_STAGING	0xb
+#define MBOX_HEADER(size)	((PCI_VENDOR_ID_INTEL) |	\
+				 (MBOX_OBJ_STAGING << 16) |	\
+				 ((u64)((size) / sizeof(u32)) << 32))
+
+/* The size of each mailbox header */
+#define MBOX_HEADER_SIZE	sizeof(u64)
+/* The size of staging hardware response */
+#define MBOX_RESPONSE_SIZE	sizeof(u64)
+
+#define MBOX_XACTION_TIMEOUT_MS	(10 * MSEC_PER_SEC)

 /* Current microcode patch used in early patching on the APs. */
 static struct microcode_intel *ucode_patch_va __read_mostly;
@@ -327,6 +351,49 @@ static __init struct microcode_intel *scan_microcode(void *data, size_t size,
 	return size ? NULL : patch;
 }

+static inline u32 read_mbox_dword(void __iomem *mmio_base)
+{
+	u32 dword = readl(mmio_base + MBOX_RDDATA_OFFSET);
+
+	/* Acknowledge read completion to the staging hardware */
+	writel(0, mmio_base + MBOX_RDDATA_OFFSET);
+	return dword;
+}
+
+static inline void write_mbox_dword(void __iomem *mmio_base, u32 dword)
+{
+	writel(dword, mmio_base + MBOX_WRDATA_OFFSET);
+}
+
+static inline u64 read_mbox_header(void __iomem *mmio_base)
+{
+	u32 high, low;
+
+	low  = read_mbox_dword(mmio_base);
+	high = read_mbox_dword(mmio_base);
+
+	return ((u64)high << 32) | low;
+}
+
+static inline void write_mbox_header(void __iomem *mmio_base, u64 value)
+{
+	write_mbox_dword(mmio_base, value);
+	write_mbox_dword(mmio_base, value >> 32);
+}
+
+static void write_mbox_data(void __iomem *mmio_base, u32 *chunk, unsigned int chunk_bytes)
+{
+	int i;
+
+	/*
+	 * The MMIO space is mapped as Uncached (UC). Each write arrives
+	 * at the device as an individual transaction in program order.
+	 * The device can then reassemble the sequence accordingly.
+	 */
+	for (i = 0; i < chunk_bytes / sizeof(u32); i++)
+		write_mbox_dword(mmio_base, chunk[i]);
+}
+
 /*
  * Prepare for a new microcode transfer: reset hardware and record the
  * image size.
@@ -377,6 +444,14 @@ static bool can_send_next_chunk(struct staging_state *ss, int *err)
 	return true;
 }

+/*
+ * The hardware indicates completion by returning a sentinel end offset.
+ */
+static inline bool is_end_offset(u32 offset)
+{
+	return offset == UINT_MAX;
+}
+
 /*
  * Determine whether staging is complete: either the hardware signaled
  * the end offset, or no more transactions are permitted (retry limit
@@ -384,17 +459,68 @@ static bool can_send_next_chunk(struct staging_state *ss, int *err)
  */
 static inline bool staging_is_complete(struct staging_state *ss, int *err)
 {
-	return (ss->offset == UINT_MAX) || !can_send_next_chunk(ss, err);
+	return is_end_offset(ss->offset) || !can_send_next_chunk(ss, err);
+}
+
+/*
+ * Wait for the hardware to complete a transaction.
+ * Return 0 on success, or an error code on failure.
+ */
+static int wait_for_transaction(struct staging_state *ss)
+{
+	u32 timeout, status;
+
+	/* Allow time for hardware to complete the operation: */
+	for (timeout = 0; timeout < MBOX_XACTION_TIMEOUT_MS; timeout++) {
+		msleep(1);
+
+		status = readl(ss->mmio_base + MBOX_STATUS_OFFSET);
+		/* Break out early if the hardware is ready: */
+		if (status & MASK_MBOX_STATUS_READY)
+			break;
+	}
+
+	/* Check for explicit error response */
+	if (status & MASK_MBOX_STATUS_ERROR)
+		return -EIO;
+
+	/*
+	 * Hardware has neither responded to the action nor signaled any
+	 * error. Treat this as a timeout.
+	 */
+	if (!(status & MASK_MBOX_STATUS_READY))
+		return -ETIMEDOUT;
+
+	return 0;
 }

 /*
  * Transmit a chunk of the microcode image to the hardware.
  * Return 0 on success, or an error code on failure.
  */
-static int send_data_chunk(struct staging_state *ss, void *ucode_ptr __maybe_unused)
+static int send_data_chunk(struct staging_state *ss, void *ucode_ptr)
 {
-	pr_debug_once("Staging mailbox loading code needs to be implemented.\n");
-	return -EPROTONOSUPPORT;
+	u32 *src_chunk = ucode_ptr + ss->offset;
+	u16 mbox_size;
+
+	/*
+	 * Write a 'request' mailbox object in this order:
+	 *  1. Mailbox header includes total size
+	 *  2. Command header specifies the load operation
+	 *  3. Data section contains a microcode chunk
+	 *
+	 * Thus, the mailbox size is two headers plus the chunk size.
+	 */
+	mbox_size = MBOX_HEADER_SIZE * 2 + ss->chunk_size;
+	write_mbox_header(ss->mmio_base, MBOX_HEADER(mbox_size));
+	write_mbox_header(ss->mmio_base, MBOX_CMD_LOAD);
+	write_mbox_data(ss->mmio_base, src_chunk, ss->chunk_size);
+	ss->bytes_sent += ss->chunk_size;
+
+	/* Notify the hardware that the mailbox is ready for processing. */
+	writel(MASK_MBOX_CTRL_GO, ss->mmio_base + MBOX_CONTROL_OFFSET);
+
+	return wait_for_transaction(ss);
 }

 /*
@@ -403,8 +529,42 @@ static int send_data_chunk(struct staging_state *ss, void *ucode_ptr __maybe_unu
  */
 static int fetch_next_offset(struct staging_state *ss)
 {
-	pr_debug_once("Staging mailbox response handling code needs to be implemented.\n");
-	return -EPROTONOSUPPORT;
+	const u64 expected_header = MBOX_HEADER(MBOX_HEADER_SIZE + MBOX_RESPONSE_SIZE);
+	u32 offset, status;
+	u64 header;
+
+	/*
+	 * The 'response' mailbox returns three fields, in order:
+	 *  1. Header
+	 *  2. Next offset in the microcode image
+	 *  3. Status flags
+	 */
+	header = read_mbox_header(ss->mmio_base);
+	offset = read_mbox_dword(ss->mmio_base);
+	status = read_mbox_dword(ss->mmio_base);
+
+	/* All valid responses must start with the expected header. */
+	if (header != expected_header) {
+		pr_err_once("staging: invalid response header (0x%llx)\n", header);
+		return -EBADR;
+	}
+
+	/*
+	 * Verify the offset: If not at the end marker, it must not
+	 * exceed the microcode image length.
+	 */
+	if (!is_end_offset(offset) && offset > ss->ucode_len) {
+		pr_err_once("staging: invalid offset (%u) past the image end (%u)\n",
+			    offset, ss->ucode_len);
+		return -EINVAL;
+	}
+
+	/* Hardware may report errors explicitly in the status field */
+	if (status & MASK_MBOX_RESP_ERROR)
+		return -EPROTO;
+
+	ss->offset = offset;
+	return 0;
 }

 /*
-- 
2.48.1

From nobody Thu Oct 2 06:18:02 2025
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, chao.gao@intel.com, abusse@amazon.de,
	tony.luck@intel.com, chang.seok.bae@intel.com
Subject: [PATCH v6 7/7] x86/microcode/intel: Enable staging when available
Date: Sun, 21 Sep 2025 15:48:41 -0700
Message-ID: <20250921224841.3545-8-chang.seok.bae@intel.com>
In-Reply-To: <20250921224841.3545-1-chang.seok.bae@intel.com>
References: <20250921224841.3545-1-chang.seok.bae@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

With staging support implemented, enable it when the CPU reports the
feature.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Tested-by: Anselm Busse <abusse@amazon.de>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
---
V5 -> V6:
* Add Tony's review tag
* Trim the changelog
V4 -> V5:
* Collect Chao's review tag
* rdmsrl() -> rdmsrq() (Chao)
V1 -> V2:
* Fold MSR definitions (Boris).
---
 arch/x86/include/asm/msr-index.h      |  7 +++++++
 arch/x86/kernel/cpu/microcode/intel.c | 17 +++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 0736e44f7c69..2db9154192ba 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -166,6 +166,10 @@
 						 * Processor MMIO stale data
 						 * vulnerabilities.
 						 */
+#define ARCH_CAP_MCU_ENUM		BIT(16)	/*
+						 * Indicates the presence of microcode update
+						 * feature enumeration and status information.
+						 */
 #define ARCH_CAP_FB_CLEAR		BIT(17)	/*
 						 * VERW clears CPU fill buffer
 						 * even on MDS_NO CPUs.
@@ -927,6 +931,9 @@
 #define MSR_IA32_UCODE_WRITE		0x00000079
 #define MSR_IA32_UCODE_REV		0x0000008b

+#define MSR_IA32_MCU_ENUMERATION	0x0000007b
+#define MCU_STAGING			BIT(4)
+
 /* Intel SGX Launch Enclave Public Key Hash MSRs */
 #define MSR_IA32_SGXLEPUBKEYHASH0	0x0000008C
 #define MSR_IA32_SGXLEPUBKEYHASH1	0x0000008D
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index 22b9c3d4f43d..033bcf5adba8 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -980,6 +980,18 @@ static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c)
 	llc_size_per_core = (unsigned int)llc_size;
 }

+static __init bool staging_available(void)
+{
+	u64 val;
+
+	val = x86_read_arch_cap_msr();
+	if (!(val & ARCH_CAP_MCU_ENUM))
+		return false;
+
+	rdmsrq(MSR_IA32_MCU_ENUMERATION, val);
+	return !!(val & MCU_STAGING);
+}
+
 struct microcode_ops * __init init_intel_microcode(void)
 {
 	struct cpuinfo_x86 *c = &boot_cpu_data;
@@ -990,6 +1002,11 @@ struct microcode_ops * __init init_intel_microcode(void)
 		return NULL;
 	}

+	if (staging_available()) {
+		microcode_intel_ops.use_staging = true;
+		pr_info("Enabled staging feature.\n");
+	}
+
 	calc_llc_size_per_core(c);

 	return &microcode_intel_ops;
-- 
2.48.1