From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Subject: [PATCH 1/4] x86/mtrr: Move cache_enable() and cache_disable() to mtrr/generic.c
Date: Wed, 21 Jan 2026 15:11:03 +0100
Message-ID: <20260121141106.755458-2-jgross@suse.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260121141106.755458-1-jgross@suse.com>
References: <20260121141106.755458-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

cache_enable() and cache_disable() are used by the generic MTRR code
only. Move them and the related helpers to mtrr/generic.c, allowing
them to be made static. This requires moving the cache_enable() and
cache_disable() calls from cache_cpu_init() into
mtrr_generic_set_state(), which in turn allows making mtrr_enable()
and mtrr_disable() static, too.

While moving the code, drop the comment's reference to the PAT MSR, as
it is no longer accurate.

No change of functionality.
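In other words, the resulting call flow looks roughly like this (a
condensed sketch of the code after this patch; the unchanged MTRR
programming in the middle is elided, see the diff below for the
details):

    /* arch/x86/kernel/cpu/cacheinfo.c */
    static void cache_cpu_init(void)
    {
            ...
            if (memory_caching_control & CACHE_MTRR)
                    mtrr_generic_set_state();
            ...
    }

    /* arch/x86/kernel/cpu/mtrr/generic.c */
    void mtrr_generic_set_state(void)
    {
            cache_disable();   /* previously called by cache_cpu_init() */
            /* program the MTRRs via set_mtrr_state() */
            cache_enable();    /* previously called by cache_cpu_init() */
    }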
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/cacheinfo.h   |  2 -
 arch/x86/include/asm/mtrr.h        |  2 -
 arch/x86/kernel/cpu/cacheinfo.c    | 80 +---------------------------
 arch/x86/kernel/cpu/mtrr/generic.c | 83 +++++++++++++++++++++++++++++-
 4 files changed, 82 insertions(+), 85 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 5aa061199866..07b0e5e6d5bb 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -7,8 +7,6 @@ extern unsigned int memory_caching_control;
 #define CACHE_MTRR 0x01
 #define CACHE_PAT 0x02
 
-void cache_disable(void);
-void cache_enable(void);
 void set_cache_aps_delayed_init(bool val);
 bool get_cache_aps_delayed_init(void);
 void cache_bp_init(void);
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 76b95bd1a405..d547b364ce65 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -58,8 +58,6 @@ extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
-void mtrr_disable(void);
-void mtrr_enable(void);
 void mtrr_generic_set_state(void);
 # else
 static inline void guest_force_mtrr_state(struct mtrr_var_range *var,
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 51a95b07831f..0d2150de0120 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -635,92 +635,14 @@ int populate_cache_leaves(unsigned int cpu)
 	return 0;
 }
 
-/*
- * Disable and enable caches. Needed for changing MTRRs and the PAT MSR.
- *
- * Since we are disabling the cache don't allow any interrupts,
- * they would run extremely slow and would only increase the pain.
- *
- * The caller must ensure that local interrupts are disabled and
- * are reenabled after cache_enable() has been called.
- */
-static unsigned long saved_cr4;
-static DEFINE_RAW_SPINLOCK(cache_disable_lock);
-
-/*
- * Cache flushing is the most time-consuming step when programming the
- * MTRRs. On many Intel CPUs without known erratas, it can be skipped
- * if the CPU declares cache self-snooping support.
- */
-static void maybe_flush_caches(void)
-{
-	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
-		wbinvd();
-}
-
-void cache_disable(void) __acquires(cache_disable_lock)
-{
-	unsigned long cr0;
-
-	/*
-	 * This is not ideal since the cache is only flushed/disabled
-	 * for this CPU while the MTRRs are changed, but changing this
-	 * requires more invasive changes to the way the kernel boots.
-	 */
-	raw_spin_lock(&cache_disable_lock);
-
-	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
-	cr0 = read_cr0() | X86_CR0_CD;
-	write_cr0(cr0);
-
-	maybe_flush_caches();
-
-	/* Save value of CR4 and clear Page Global Enable (bit 7) */
-	if (cpu_feature_enabled(X86_FEATURE_PGE)) {
-		saved_cr4 = __read_cr4();
-		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
-	}
-
-	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
-	if (cpu_feature_enabled(X86_FEATURE_MTRR))
-		mtrr_disable();
-
-	maybe_flush_caches();
-}
-
-void cache_enable(void) __releases(cache_disable_lock)
-{
-	/* Flush TLBs (no need to flush caches - they are disabled) */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
-	if (cpu_feature_enabled(X86_FEATURE_MTRR))
-		mtrr_enable();
-
-	/* Enable caches */
-	write_cr0(read_cr0() & ~X86_CR0_CD);
-
-	/* Restore value of CR4 */
-	if (cpu_feature_enabled(X86_FEATURE_PGE))
-		__write_cr4(saved_cr4);
-
-	raw_spin_unlock(&cache_disable_lock);
-}
-
 static void cache_cpu_init(void)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
 
-	if (memory_caching_control & CACHE_MTRR) {
-		cache_disable();
+	if (memory_caching_control & CACHE_MTRR)
 		mtrr_generic_set_state();
-		cache_enable();
-	}
 
 	if (memory_caching_control & CACHE_PAT)
 		pat_cpu_init();
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 0863733858dc..2c874b88e12c 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -945,7 +945,7 @@ static unsigned long set_mtrr_state(void)
 	return change_mask;
 }
 
-void mtrr_disable(void)
+static void mtrr_disable(void)
 {
 	/* Save MTRR state */
 	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
@@ -954,16 +954,93 @@ void mtrr_disable(void)
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & MTRR_DEF_TYPE_DISABLE, deftype_hi);
 }
 
-void mtrr_enable(void)
+static void mtrr_enable(void)
 {
 	/* Intel (P6) standard MTRRs */
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
 }
 
+/*
+ * Disable and enable caches. Needed for changing MTRRs.
+ *
+ * Since we are disabling the cache don't allow any interrupts,
+ * they would run extremely slow and would only increase the pain.
+ *
+ * The caller must ensure that local interrupts are disabled and
+ * are reenabled after cache_enable() has been called.
+ */
+static unsigned long saved_cr4;
+static DEFINE_RAW_SPINLOCK(cache_disable_lock);
+
+/*
+ * Cache flushing is the most time-consuming step when programming the
+ * MTRRs. On many Intel CPUs without known erratas, it can be skipped
+ * if the CPU declares cache self-snooping support.
+ */
+static void maybe_flush_caches(void)
+{
+	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
+		wbinvd();
+}
+
+static void cache_disable(void) __acquires(cache_disable_lock)
+{
+	unsigned long cr0;
+
+	/*
+	 * This is not ideal since the cache is only flushed/disabled
+	 * for this CPU while the MTRRs are changed, but changing this
+	 * requires more invasive changes to the way the kernel boots.
+	 */
+	raw_spin_lock(&cache_disable_lock);
+
+	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
+	cr0 = read_cr0() | X86_CR0_CD;
+	write_cr0(cr0);
+
+	maybe_flush_caches();
+
+	/* Save value of CR4 and clear Page Global Enable (bit 7) */
+	if (cpu_feature_enabled(X86_FEATURE_PGE)) {
+		saved_cr4 = __read_cr4();
+		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
+	}
+
+	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (cpu_feature_enabled(X86_FEATURE_MTRR))
+		mtrr_disable();
+
+	maybe_flush_caches();
+}
+
+static void cache_enable(void) __releases(cache_disable_lock)
+{
+	/* Flush TLBs (no need to flush caches - they are disabled) */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (cpu_feature_enabled(X86_FEATURE_MTRR))
+		mtrr_enable();
+
+	/* Enable caches */
+	write_cr0(read_cr0() & ~X86_CR0_CD);
+
+	/* Restore value of CR4 */
+	if (cpu_feature_enabled(X86_FEATURE_PGE))
+		__write_cr4(saved_cr4);
+
+	raw_spin_unlock(&cache_disable_lock);
+}
+
 void mtrr_generic_set_state(void)
 {
 	unsigned long mask, count;
 
+	cache_disable();
+
 	/* Actually set the state */
 	mask = set_mtrr_state();
 
@@ -973,6 +1050,8 @@ void mtrr_generic_set_state(void)
 		set_bit(count, &smp_changes_mask);
 		mask >>= 1;
 	}
+
+	cache_enable();
 }
 
 /**
-- 
2.52.0