From nobody Sat Feb 7 17:09:36 2026
From: Juergen Gross
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Subject: [PATCH v2 1/4] x86/mtrr: Move cache_enable() and cache_disable() to mtrr/generic.c
Date: Fri, 30 Jan 2026 12:36:22 +0100
Message-ID: <20260130113625.599305-2-jgross@suse.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260130113625.599305-1-jgross@suse.com>
References: <20260130113625.599305-1-jgross@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

cache_enable() and cache_disable() are used by the generic MTRR code only. Move
them and the related data to mtrr/generic.c, which allows making them static.
This requires moving the cache_enable() and cache_disable() calls from
cache_cpu_init() into mtrr_generic_set_state(), which in turn allows making
mtrr_enable() and mtrr_disable() static, too.

While moving the code, drop the part of the comment referring to the PAT MSR,
as it is no longer accurate.
No change of functionality.

Signed-off-by: Juergen Gross
---
 arch/x86/include/asm/cacheinfo.h   |  2 -
 arch/x86/include/asm/mtrr.h        |  2 -
 arch/x86/kernel/cpu/cacheinfo.c    | 80 +---------------------------
 arch/x86/kernel/cpu/mtrr/generic.c | 83 +++++++++++++++++++++++++++++-
 4 files changed, 82 insertions(+), 85 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 5aa061199866..07b0e5e6d5bb 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -7,8 +7,6 @@ extern unsigned int memory_caching_control;
 #define CACHE_MTRR 0x01
 #define CACHE_PAT  0x02
 
-void cache_disable(void);
-void cache_enable(void);
 void set_cache_aps_delayed_init(bool val);
 bool get_cache_aps_delayed_init(void);
 void cache_bp_init(void);
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 76b95bd1a405..d547b364ce65 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -58,8 +58,6 @@ extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
-void mtrr_disable(void);
-void mtrr_enable(void);
 void mtrr_generic_set_state(void);
 # else
 static inline void guest_force_mtrr_state(struct mtrr_var_range *var,
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 51a95b07831f..0d2150de0120 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -635,92 +635,14 @@ int populate_cache_leaves(unsigned int cpu)
 	return 0;
 }
 
-/*
- * Disable and enable caches. Needed for changing MTRRs and the PAT MSR.
- *
- * Since we are disabling the cache don't allow any interrupts,
- * they would run extremely slow and would only increase the pain.
- *
- * The caller must ensure that local interrupts are disabled and
- * are reenabled after cache_enable() has been called.
- */
-static unsigned long saved_cr4;
-static DEFINE_RAW_SPINLOCK(cache_disable_lock);
-
-/*
- * Cache flushing is the most time-consuming step when programming the
- * MTRRs. On many Intel CPUs without known erratas, it can be skipped
- * if the CPU declares cache self-snooping support.
- */
-static void maybe_flush_caches(void)
-{
-	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
-		wbinvd();
-}
-
-void cache_disable(void) __acquires(cache_disable_lock)
-{
-	unsigned long cr0;
-
-	/*
-	 * This is not ideal since the cache is only flushed/disabled
-	 * for this CPU while the MTRRs are changed, but changing this
-	 * requires more invasive changes to the way the kernel boots.
-	 */
-	raw_spin_lock(&cache_disable_lock);
-
-	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
-	cr0 = read_cr0() | X86_CR0_CD;
-	write_cr0(cr0);
-
-	maybe_flush_caches();
-
-	/* Save value of CR4 and clear Page Global Enable (bit 7) */
-	if (cpu_feature_enabled(X86_FEATURE_PGE)) {
-		saved_cr4 = __read_cr4();
-		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
-	}
-
-	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
-	if (cpu_feature_enabled(X86_FEATURE_MTRR))
-		mtrr_disable();
-
-	maybe_flush_caches();
-}
-
-void cache_enable(void) __releases(cache_disable_lock)
-{
-	/* Flush TLBs (no need to flush caches - they are disabled) */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
-	if (cpu_feature_enabled(X86_FEATURE_MTRR))
-		mtrr_enable();
-
-	/* Enable caches */
-	write_cr0(read_cr0() & ~X86_CR0_CD);
-
-	/* Restore value of CR4 */
-	if (cpu_feature_enabled(X86_FEATURE_PGE))
-		__write_cr4(saved_cr4);
-
-	raw_spin_unlock(&cache_disable_lock);
-}
-
 static void cache_cpu_init(void)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
 
-	if (memory_caching_control & CACHE_MTRR) {
-		cache_disable();
+	if (memory_caching_control & CACHE_MTRR)
 		mtrr_generic_set_state();
-		cache_enable();
-	}
 
 	if (memory_caching_control & CACHE_PAT)
 		pat_cpu_init();
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 0863733858dc..2c874b88e12c 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -945,7 +945,7 @@ static unsigned long set_mtrr_state(void)
 	return change_mask;
 }
 
-void mtrr_disable(void)
+static void mtrr_disable(void)
 {
 	/* Save MTRR state */
 	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
@@ -954,16 +954,93 @@ void mtrr_disable(void)
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & MTRR_DEF_TYPE_DISABLE, deftype_hi);
 }
 
-void mtrr_enable(void)
+static void mtrr_enable(void)
 {
 	/* Intel (P6) standard MTRRs */
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
 }
 
+/*
+ * Disable and enable caches. Needed for changing MTRRs.
+ *
+ * Since we are disabling the cache don't allow any interrupts,
+ * they would run extremely slow and would only increase the pain.
+ *
+ * The caller must ensure that local interrupts are disabled and
+ * are reenabled after cache_enable() has been called.
+ */
+static unsigned long saved_cr4;
+static DEFINE_RAW_SPINLOCK(cache_disable_lock);
+
+/*
+ * Cache flushing is the most time-consuming step when programming the
+ * MTRRs. On many Intel CPUs without known erratas, it can be skipped
+ * if the CPU declares cache self-snooping support.
+ */
+static void maybe_flush_caches(void)
+{
+	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
+		wbinvd();
+}
+
+static void cache_disable(void) __acquires(cache_disable_lock)
+{
+	unsigned long cr0;
+
+	/*
+	 * This is not ideal since the cache is only flushed/disabled
+	 * for this CPU while the MTRRs are changed, but changing this
+	 * requires more invasive changes to the way the kernel boots.
+	 */
+	raw_spin_lock(&cache_disable_lock);
+
+	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
+	cr0 = read_cr0() | X86_CR0_CD;
+	write_cr0(cr0);
+
+	maybe_flush_caches();
+
+	/* Save value of CR4 and clear Page Global Enable (bit 7) */
+	if (cpu_feature_enabled(X86_FEATURE_PGE)) {
+		saved_cr4 = __read_cr4();
+		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
+	}
+
+	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (cpu_feature_enabled(X86_FEATURE_MTRR))
+		mtrr_disable();
+
+	maybe_flush_caches();
+}
+
+static void cache_enable(void) __releases(cache_disable_lock)
+{
+	/* Flush TLBs (no need to flush caches - they are disabled) */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (cpu_feature_enabled(X86_FEATURE_MTRR))
+		mtrr_enable();
+
+	/* Enable caches */
+	write_cr0(read_cr0() & ~X86_CR0_CD);
+
+	/* Restore value of CR4 */
+	if (cpu_feature_enabled(X86_FEATURE_PGE))
+		__write_cr4(saved_cr4);
+
+	raw_spin_unlock(&cache_disable_lock);
+}
+
 void mtrr_generic_set_state(void)
 {
 	unsigned long mask, count;
 
+	cache_disable();
+
 	/* Actually set the state */
 	mask = set_mtrr_state();
 
@@ -973,6 +1050,8 @@ void mtrr_generic_set_state(void)
 		set_bit(count, &smp_changes_mask);
 		mask >>= 1;
 	}
+
+	cache_enable();
 }
 
 /**
-- 
2.52.0

From nobody Sat Feb 7 17:09:36 2026
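[Editor's sketch] The net effect of patch 1 is that the disable -> program -> enable bracket now lives entirely inside mtrr_generic_set_state(), so callers such as cache_cpu_init() no longer deal with cache state at all. A minimal user-space illustration of that restructuring (toy helpers only, not the kernel implementation):

```c
#include <assert.h>

/* Toy stand-ins for the real CR0.CD toggling; illustrative only. */
static int cache_on = 1;
static int mtrr_programmed;

static void cache_disable(void) { cache_on = 0; }
static void cache_enable(void)  { cache_on = 1; }

/*
 * After the move, the set-state function owns the whole
 * disable -> program -> enable bracket itself, so its caller
 * no longer needs to know about caching details.
 */
static void mtrr_generic_set_state(void)
{
	cache_disable();
	assert(!cache_on);	/* "MTRRs" only touched with caches off */
	mtrr_programmed = 1;
	cache_enable();
}
```

The caller shrinks to a single call, which is exactly what the cache_cpu_init() hunk in the patch above does.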
From: Juergen Gross
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Subject: [PATCH v2 2/4] x86/mtrr: Introduce MTRR work state structure
Date: Fri, 30 Jan 2026 12:36:23 +0100
Message-ID: <20260130113625.599305-3-jgross@suse.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260130113625.599305-1-jgross@suse.com>
References: <20260130113625.599305-1-jgross@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Instead of using static variables to carry local state across the
cache_disable() ... cache_enable() region, use a structure allocated on the
caller's stack for the same purpose.

Signed-off-by: Juergen Gross
---
V2:
- add parameter description to set_mtrr_state() (kernel test robot)
---
 arch/x86/kernel/cpu/mtrr/generic.c | 60 ++++++++++++++++--------------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 2c874b88e12c..7d9e582a4048 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -905,18 +905,21 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
 	return changed;
 }
 
-static u32 deftype_lo, deftype_hi;
+struct mtrr_work_state {
+	unsigned long cr4;
+	u32 lo;
+	u32 hi;
+};
 
 /**
  * set_mtrr_state - Set the MTRR state for this CPU.
 *
- * NOTE: The CPU must already be in a safe state for MTRR changes, including
- *	 measures that only a single CPU can be active in set_mtrr_state() in
- *	 order to not be subject to races for usage of deftype_lo. This is
- *	 accomplished by taking cache_disable_lock.
+ * NOTE: The CPU must already be in a safe state for MTRR changes.
+ *
+ * @state: pointer to mtrr_work_state
 *
 * RETURNS: 0 if no changes made, else a mask indicating what was changed.
 */
-static unsigned long set_mtrr_state(void)
+static unsigned long set_mtrr_state(struct mtrr_work_state *state)
 {
 	unsigned long change_mask = 0;
 	unsigned int i;
@@ -933,10 +936,10 @@ static unsigned long set_mtrr_state(void)
 	 * Set_mtrr_restore restores the old value of MTRRdefType,
 	 * so to set it we fiddle with the saved value:
 	 */
-	if ((deftype_lo & MTRR_DEF_TYPE_TYPE) != mtrr_state.def_type ||
-	    ((deftype_lo & MTRR_DEF_TYPE_ENABLE) >> MTRR_STATE_SHIFT) != mtrr_state.enabled) {
+	if ((state->lo & MTRR_DEF_TYPE_TYPE) != mtrr_state.def_type ||
+	    ((state->lo & MTRR_DEF_TYPE_ENABLE) >> MTRR_STATE_SHIFT) != mtrr_state.enabled) {
 
-		deftype_lo = (deftype_lo & MTRR_DEF_TYPE_DISABLE) |
+		state->lo = (state->lo & MTRR_DEF_TYPE_DISABLE) |
			     mtrr_state.def_type |
			     (mtrr_state.enabled << MTRR_STATE_SHIFT);
 		change_mask |= MTRR_CHANGE_MASK_DEFTYPE;
@@ -945,19 +948,19 @@ static unsigned long set_mtrr_state(void)
 	return change_mask;
 }
 
-static void mtrr_disable(void)
+static void mtrr_disable(struct mtrr_work_state *state)
 {
 	/* Save MTRR state */
-	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
+	rdmsr(MSR_MTRRdefType, state->lo, state->hi);
 
 	/* Disable MTRRs, and set the default type to uncached */
-	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & MTRR_DEF_TYPE_DISABLE, deftype_hi);
+	mtrr_wrmsr(MSR_MTRRdefType, state->lo & MTRR_DEF_TYPE_DISABLE, state->hi);
 }
 
-static void mtrr_enable(void)
+static void mtrr_enable(struct mtrr_work_state *state)
 {
 	/* Intel (P6) standard MTRRs */
-	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
+	mtrr_wrmsr(MSR_MTRRdefType, state->lo, state->hi);
 }
 
@@ -969,7 +972,6 @@ static void mtrr_enable(void)
  * The caller must ensure that local interrupts are disabled and
  * are reenabled after cache_enable() has been called.
  */
-static unsigned long saved_cr4;
 static DEFINE_RAW_SPINLOCK(cache_disable_lock);
 
 /*
@@ -983,7 +985,8 @@ static void maybe_flush_caches(void)
 		wbinvd();
 }
 
-static void cache_disable(void) __acquires(cache_disable_lock)
+static void cache_disable(struct mtrr_work_state *state)
+	__acquires(cache_disable_lock)
 {
 	unsigned long cr0;
 
@@ -1002,8 +1005,8 @@ static void cache_disable(void) __acquires(cache_disable_lock)
 
 	/* Save value of CR4 and clear Page Global Enable (bit 7) */
 	if (cpu_feature_enabled(X86_FEATURE_PGE)) {
-		saved_cr4 = __read_cr4();
-		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
+		state->cr4 = __read_cr4();
+		__write_cr4(state->cr4 & ~X86_CR4_PGE);
 	}
 
 	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
@@ -1011,26 +1014,27 @@ static void cache_disable(void) __acquires(cache_disable_lock)
 	flush_tlb_local();
 
 	if (cpu_feature_enabled(X86_FEATURE_MTRR))
-		mtrr_disable();
+		mtrr_disable(state);
 
 	maybe_flush_caches();
 }
 
-static void cache_enable(void) __releases(cache_disable_lock)
+static void cache_enable(struct mtrr_work_state *state)
+	__releases(cache_disable_lock)
 {
 	/* Flush TLBs (no need to flush caches - they are disabled) */
 	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
 	flush_tlb_local();
 
 	if (cpu_feature_enabled(X86_FEATURE_MTRR))
-		mtrr_enable();
+		mtrr_enable(state);
 
 	/* Enable caches */
 	write_cr0(read_cr0() & ~X86_CR0_CD);
 
 	/* Restore value of CR4 */
 	if (cpu_feature_enabled(X86_FEATURE_PGE))
-		__write_cr4(saved_cr4);
+		__write_cr4(state->cr4);
 
 	raw_spin_unlock(&cache_disable_lock);
 }
@@ -1038,11 +1042,12 @@ static void cache_enable(void) __releases(cache_disable_lock)
 void mtrr_generic_set_state(void)
 {
 	unsigned long mask, count;
+	struct mtrr_work_state state;
 
-	cache_disable();
+	cache_disable(&state);
 
 	/* Actually set the state */
-	mask = set_mtrr_state();
+	mask = set_mtrr_state(&state);
 
 	/* Use the atomic bitops to update the global mask */
 	for (count = 0; count < sizeof(mask) * 8; ++count) {
@@ -1051,7 +1056,7 @@ void mtrr_generic_set_state(void)
 		mask >>= 1;
 	}
 
-	cache_enable();
+	cache_enable(&state);
 }
 
 /**
@@ -1069,11 +1074,12 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 {
 	unsigned long flags;
 	struct mtrr_var_range *vr;
+	struct mtrr_work_state state;
 
 	vr = &mtrr_state.var_ranges[reg];
 
 	local_irq_save(flags);
-	cache_disable();
+	cache_disable(&state);
 
 	if (size == 0) {
 		/*
@@ -1092,7 +1098,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 		mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
 	}
 
-	cache_enable();
+	cache_enable(&state);
 	local_irq_restore(flags);
 }
 
-- 
2.52.0

From nobody Sat Feb 7 17:09:36 2026
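[Editor's sketch] The core idea of patch 2 — per-invocation save/restore state living on the caller's stack instead of in file-scope statics, so the disable/enable pair no longer shares mutable globals — can be shown in isolation. This is a user-space toy with a mocked "MSR", not kernel code:

```c
#include <assert.h>

typedef unsigned int u32;

/* Mock MSR contents so the example runs in user space. */
static u32 msr_lo = 0xc00, msr_hi = 0x0;

/* Per-invocation state lives on the caller's stack, not in statics. */
struct mtrr_work_state {
	u32 lo;
	u32 hi;
};

static void mtrr_disable(struct mtrr_work_state *state)
{
	/* Save the current "MSR" into the caller-provided state... */
	state->lo = msr_lo;
	state->hi = msr_hi;
	/* ...then disable (default type uncached). */
	msr_lo = 0;
}

static void mtrr_enable(const struct mtrr_work_state *state)
{
	/* Restore exactly what this invocation saved. */
	msr_lo = state->lo;
	msr_hi = state->hi;
}
```

Because the saved value travels with the call chain rather than through a global, nothing outside the bracket can clobber it — which is what makes dropping the lock in patch 4 possible.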
From: Juergen Gross
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Subject: [PATCH v2 3/4] x86/mtrr: Add a prepare_set hook to mtrr_ops
Date: Fri, 30 Jan 2026 12:36:24 +0100
Message-ID: <20260130113625.599305-4-jgross@suse.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260130113625.599305-1-jgross@suse.com>
References: <20260130113625.599305-1-jgross@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

In order to prepare for dropping the cache_disable_lock, add a new hook to
struct mtrr_ops that allows setting global state once, before the .set hook is
called on all active CPUs.

Move the setting of mtrr_state.var_ranges[] from generic_set_mtrr() to the new
prepare hook. Note that doing this only once, outside the cache_disable_lock,
is fine: generic_set_mtrr() is called via set_mtrr() only, and that call is
protected by mtrr_mutex.

Signed-off-by: Juergen Gross
---
 arch/x86/kernel/cpu/mtrr/generic.c | 32 ++++++++++++++++++++++++------
 arch/x86/kernel/cpu/mtrr/mtrr.c    |  3 +++
 arch/x86/kernel/cpu/mtrr/mtrr.h    |  2 ++
 3 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 7d9e582a4048..d49e1837a7af 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -1059,6 +1059,31 @@ void mtrr_generic_set_state(void)
 	cache_enable(&state);
 }
 
+/**
+ * generic_prepare_set_mtrr - set variable MTRR register data in mtrr_state
+ *
+ * @reg:  The register to set.
+ * @base: The base address of the region.
+ * @size: The size of the region. If this is 0 the region is disabled.
+ * @type: The type of the region.
+ *
+ * Returns nothing.
+ */
+static void generic_prepare_set_mtrr(unsigned int reg, unsigned long base,
+				     unsigned long size, mtrr_type type)
+{
+	struct mtrr_var_range *vr = &mtrr_state.var_ranges[reg];
+
+	if (size == 0) {
+		memset(vr, 0, sizeof(struct mtrr_var_range));
+	} else {
+		vr->base_lo = base << PAGE_SHIFT | type;
+		vr->base_hi = (base >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
+		vr->mask_lo = -size << PAGE_SHIFT | MTRR_PHYSMASK_V;
+		vr->mask_hi = (-size >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
+	}
+}
+
 /**
  * generic_set_mtrr - set variable MTRR register on the local CPU.
  *
@@ -1087,13 +1112,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 		 * clear the relevant mask register to disable a range.
 		 */
 		mtrr_wrmsr(MTRRphysMask_MSR(reg), 0, 0);
-		memset(vr, 0, sizeof(struct mtrr_var_range));
 	} else {
-		vr->base_lo = base << PAGE_SHIFT | type;
-		vr->base_hi = (base >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
-		vr->mask_lo = -size << PAGE_SHIFT | MTRR_PHYSMASK_V;
-		vr->mask_hi = (-size >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
-
 		mtrr_wrmsr(MTRRphysBase_MSR(reg), vr->base_lo, vr->base_hi);
 		mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
 	}
@@ -1158,6 +1177,7 @@ int positive_have_wrcomb(void)
 const struct mtrr_ops generic_mtrr_ops = {
 	.get			= generic_get_mtrr,
 	.get_free_region	= generic_get_free_region,
+	.prepare_set		= generic_prepare_set_mtrr,
 	.set			= generic_set_mtrr,
 	.validate_add_page	= generic_validate_add_page,
 	.have_wrcomb		= generic_have_wrcomb,
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 4b3d492afe17..32948fb4e742 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -175,6 +175,9 @@ static void set_mtrr(unsigned int reg, unsigned long base, unsigned long size,
 		.smp_type = type
 	};
 
+	if (mtrr_if->prepare_set)
+		mtrr_if->prepare_set(reg, base, size, type);
+
 	stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask);
 
 	generic_rebuild_map();
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 2de3bd2f95d1..4d32c095cfc5 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -17,6 +17,8 @@ extern unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 
 struct mtrr_ops {
 	u32 var_regs;
+	void (*prepare_set)(unsigned int reg, unsigned long base,
+			    unsigned long size, mtrr_type type);
 	void (*set)(unsigned int reg, unsigned long base,
 		    unsigned long size, mtrr_type type);
 	void (*get)(unsigned int reg, unsigned long *base,
-- 
2.52.0

From nobody Sat Feb 7 17:09:36 2026
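[Editor's sketch] Patch 3 uses a common kernel pattern: an optional function pointer in an ops table, run once up front by the caller, with a NULL check so backends that don't need it are unaffected. A minimal stand-alone illustration (toy names, not the mtrr_ops definition):

```c
#include <assert.h>
#include <stddef.h>

/* A cut-down ops table with an optional prepare hook, as in mtrr_ops. */
struct ops {
	void (*prepare_set)(unsigned int reg);	/* may be NULL */
	void (*set)(unsigned int reg);
};

static int prepare_calls, set_calls;

static void my_prepare(unsigned int reg) { (void)reg; prepare_calls++; }
static void my_set(unsigned int reg)     { (void)reg; set_calls++;     }

/*
 * The caller runs the optional prepare hook exactly once, up front;
 * only the .set step is then replayed per CPU.
 */
static void do_set(const struct ops *o, unsigned int reg)
{
	if (o->prepare_set)
		o->prepare_set(reg);
	o->set(reg);
}
```

A backend without a prepare hook simply leaves the pointer NULL, and do_set() skips straight to .set — the shape set_mtrr() takes in the mtrr.c hunk above.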
From: Juergen Gross
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Subject: [PATCH v2 4/4] x86/mtrr: Drop cache_disable_lock
Date: Fri, 30 Jan 2026 12:36:25 +0100
Message-ID: <20260130113625.599305-5-jgross@suse.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260130113625.599305-1-jgross@suse.com>
References: <20260130113625.599305-1-jgross@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Now that no global state is modified under cache_disable_lock, it can be
dropped. All required serialization is done via mtrr_mutex and
cpus_read_lock(), ensuring that only one set_mtrr() can be active at any time
and that this call can't run concurrently with CPU bringup.

The main advantages are a faster boot of machines with lots of CPUs, and
avoiding hard lockups on such machines when mtrr_generic_set_state() takes too
long in uncached mode, leaving other CPUs waiting for seconds to take
cache_disable_lock. This has been seen in more than 1% of all boots on an
Intel machine with 960 CPUs. With this patch applied, boot was always
successful.

Signed-off-by: Juergen Gross
---
I was considering applying a "Fixes:" tag, but this could only reference the
initial kernel git commit, as MTRR support predates git, and the problem has
existed since MTRRs became a thing.
---
 arch/x86/kernel/cpu/mtrr/generic.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index d49e1837a7af..236b83867bab 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -972,7 +972,6 @@ static void mtrr_enable(struct mtrr_work_state *state)
  * The caller must ensure that local interrupts are disabled and
  * are reenabled after cache_enable() has been called.
  */
-static DEFINE_RAW_SPINLOCK(cache_disable_lock);
 
 /*
  * Cache flushing is the most time-consuming step when programming the
@@ -986,17 +985,9 @@ static void maybe_flush_caches(void)
 }
 
 static void cache_disable(struct mtrr_work_state *state)
-	__acquires(cache_disable_lock)
 {
 	unsigned long cr0;
 
-	/*
-	 * This is not ideal since the cache is only flushed/disabled
-	 * for this CPU while the MTRRs are changed, but changing this
-	 * requires more invasive changes to the way the kernel boots.
-	 */
-	raw_spin_lock(&cache_disable_lock);
-
 	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
 	cr0 = read_cr0() | X86_CR0_CD;
 	write_cr0(cr0);
@@ -1020,7 +1011,6 @@ static void cache_disable(struct mtrr_work_state *state)
 }
 
 static void cache_enable(struct mtrr_work_state *state)
-	__releases(cache_disable_lock)
 {
 	/* Flush TLBs (no need to flush caches - they are disabled) */
 	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
@@ -1035,8 +1025,6 @@ static void cache_enable(struct mtrr_work_state *state)
 	/* Restore value of CR4 */
 	if (cpu_feature_enabled(X86_FEATURE_PGE))
 		__write_cr4(state->cr4);
-
-	raw_spin_unlock(&cache_disable_lock);
 }
 
 void mtrr_generic_set_state(void)
-- 
2.52.0