From nobody Sat May 18 20:15:26 2024
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: brchuckz@netscape.net, jbeulich@suse.com, Juergen Gross, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
 stable@vger.kernel.org
Subject: [PATCH 1/3] x86: move some code out of arch/x86/kernel/cpu/mtrr
Date: Fri, 15 Jul 2022 16:25:47 +0200
Message-Id: <20220715142549.25223-2-jgross@suse.com>
In-Reply-To: <20220715142549.25223-1-jgross@suse.com>
References: <20220715142549.25223-1-jgross@suse.com>

Prepare to make PAT and MTRR support independent of each other by
moving some code needed by both out of the MTRR-specific sources.
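The moved helpers keep the calling convention of the MTRR code they
come from: the caller disables local interrupts, while the helpers take
the spinlock and switch the CPU into no-fill cache mode. A minimal
sketch of that contract, modeled on mtrr_bp_pat_init() in the diff
below (the caller name reprogram_caching_msrs() is hypothetical, not
part of this series):

/*
 * Sketch only: shows the cache_disable()/cache_enable() contract.
 * reprogram_caching_msrs() is a made-up example caller.
 */
static void reprogram_caching_msrs(void)
{
	unsigned long flags;

	/* Callers must keep local interrupts off across the pair. */
	local_irq_save(flags);
	cache_disable();	/* takes the lock, sets CD=1, flushes caches */

	pat_init();		/* MSR writes are safe with caches disabled */

	cache_enable();		/* re-enables MTRRs and caches, drops the lock */
	local_irq_restore(flags);
}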
Cc: <stable@vger.kernel.org> # 5.17
Fixes: bdd8b6c98239 ("drm/i915: replace X86_FEATURE_PAT with pat_enabled()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/mtrr.h        |  4 ++
 arch/x86/include/asm/processor.h   |  3 ++
 arch/x86/kernel/cpu/common.c       | 76 ++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/mtrr/generic.c | 80 +++---------------------------
 4 files changed, 91 insertions(+), 72 deletions(-)

diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 76d726074c16..12a16caed395 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -48,6 +48,8 @@ extern void mtrr_aps_init(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
+void mtrr_disable(void);
+void mtrr_enable(void);
 # else
 static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
 {
@@ -87,6 +89,8 @@ static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 #define set_mtrr_aps_delayed_init() do {} while (0)
 #define mtrr_aps_init() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
+#define mtrr_disable() do {} while (0)
+#define mtrr_enable() do {} while (0)
 # endif
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 356308c73951..5c934b922450 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -865,4 +865,7 @@ bool arch_is_platform_page(u64 paddr);
 #define arch_is_platform_page arch_is_platform_page
 #endif
 
+void cache_disable(void);
+void cache_enable(void);
+
 #endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 736262a76a12..e43322f8a4ef 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -61,6 +61,7 @@
 #include
 #include
 #include
+#include
 
 #include "cpu.h"
 
@@ -2327,3 +2328,78 @@ void arch_smt_update(void)
 	/* Check whether IPI broadcasting can be enabled */
 	apic_smt_update();
 }
+
+/*
+ * Disable and enable caches. Needed for changing MTRRs and the PAT MSR.
+ *
+ * Since we are disabling the cache don't allow any interrupts,
+ * they would run extremely slow and would only increase the pain.
+ *
+ * The caller must ensure that local interrupts are disabled and
+ * are reenabled after cache_enable() has been called.
+ */
+static unsigned long saved_cr4;
+static DEFINE_RAW_SPINLOCK(cache_disable_lock);
+
+void cache_disable(void) __acquires(cache_disable_lock)
+{
+	unsigned long cr0;
+
+	/*
+	 * Note that this is not ideal
+	 * since the cache is only flushed/disabled for this CPU while the
+	 * MTRRs are changed, but changing this requires more invasive
+	 * changes to the way the kernel boots
+	 */
+
+	raw_spin_lock(&cache_disable_lock);
+
+	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
+	cr0 = read_cr0() | X86_CR0_CD;
+	write_cr0(cr0);
+
+	/*
+	 * Cache flushing is the most time-consuming step when programming
+	 * the MTRRs. Fortunately, as per the Intel Software Development
+	 * Manual, we can skip it if the processor supports cache self-
+	 * snooping.
+	 */
+	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
+		wbinvd();
+
+	/* Save value of CR4 and clear Page Global Enable (bit 7) */
+	if (boot_cpu_has(X86_FEATURE_PGE)) {
+		saved_cr4 = __read_cr4();
+		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
+	}
+
+	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (boot_cpu_has(X86_FEATURE_MTRR))
+		mtrr_disable();
+
+	/* Again, only flush caches if we have to. */
+	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
+		wbinvd();
+}
+
+void cache_enable(void) __releases(cache_disable_lock)
+{
+	/* Flush TLBs (no need to flush caches - they are disabled) */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (boot_cpu_has(X86_FEATURE_MTRR))
+		mtrr_enable();
+
+	/* Enable caches */
+	write_cr0(read_cr0() & ~X86_CR0_CD);
+
+	/* Restore value of CR4 */
+	if (boot_cpu_has(X86_FEATURE_PGE))
+		__write_cr4(saved_cr4);
+
+	raw_spin_unlock(&cache_disable_lock);
+}
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 558108296f3c..84732215b61d 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -396,9 +396,6 @@ print_fixed(unsigned base, unsigned step, const mtrr_type *types)
 	}
 }
 
-static void prepare_set(void);
-static void post_set(void);
-
 static void __init print_mtrr_state(void)
 {
 	unsigned int i;
@@ -450,11 +447,11 @@ void __init mtrr_bp_pat_init(void)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	prepare_set();
+	cache_disable();
 
 	pat_init();
 
-	post_set();
+	cache_enable();
 	local_irq_restore(flags);
 }
 
@@ -715,80 +712,19 @@ static unsigned long set_mtrr_state(void)
 	return change_mask;
 }
 
-
-static unsigned long cr4;
-static DEFINE_RAW_SPINLOCK(set_atomicity_lock);
-
-/*
- * Since we are disabling the cache don't allow any interrupts,
- * they would run extremely slow and would only increase the pain.
- *
- * The caller must ensure that local interrupts are disabled and
- * are reenabled after post_set() has been called.
- */
-static void prepare_set(void) __acquires(set_atomicity_lock)
+void mtrr_disable(void)
 {
-	unsigned long cr0;
-
-	/*
-	 * Note that this is not ideal
-	 * since the cache is only flushed/disabled for this CPU while the
-	 * MTRRs are changed, but changing this requires more invasive
-	 * changes to the way the kernel boots
-	 */
-
-	raw_spin_lock(&set_atomicity_lock);
-
-	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
-	cr0 = read_cr0() | X86_CR0_CD;
-	write_cr0(cr0);
-
-	/*
-	 * Cache flushing is the most time-consuming step when programming
-	 * the MTRRs. Fortunately, as per the Intel Software Development
-	 * Manual, we can skip it if the processor supports cache self-
-	 * snooping.
-	 */
-	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
-		wbinvd();
-
-	/* Save value of CR4 and clear Page Global Enable (bit 7) */
-	if (boot_cpu_has(X86_FEATURE_PGE)) {
-		cr4 = __read_cr4();
-		__write_cr4(cr4 & ~X86_CR4_PGE);
-	}
-
-	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
 	/* Save MTRR state */
 	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
 
 	/* Disable MTRRs, and set the default type to uncached */
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & ~0xcff, deftype_hi);
-
-	/* Again, only flush caches if we have to.
-	 */
-	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
-		wbinvd();
 }
 
-static void post_set(void) __releases(set_atomicity_lock)
+void mtrr_enable(void)
 {
-	/* Flush TLBs (no need to flush caches - they are disabled) */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
 	/* Intel (P6) standard MTRRs */
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
-
-	/* Enable caches */
-	write_cr0(read_cr0() & ~X86_CR0_CD);
-
-	/* Restore value of CR4 */
-	if (boot_cpu_has(X86_FEATURE_PGE))
-		__write_cr4(cr4);
-	raw_spin_unlock(&set_atomicity_lock);
 }
 
 static void generic_set_all(void)
@@ -797,7 +733,7 @@ static void generic_set_all(void)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	prepare_set();
+	cache_disable();
 
 	/* Actually set the state */
 	mask = set_mtrr_state();
 
@@ -805,7 +741,7 @@ static void generic_set_all(void)
 	/* also set PAT */
 	pat_init();
 
-	post_set();
+	cache_enable();
 	local_irq_restore(flags);
 
 	/* Use the atomic bitops to update the global mask */
@@ -836,7 +772,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 	vr = &mtrr_state.var_ranges[reg];
 
 	local_irq_save(flags);
-	prepare_set();
+	cache_disable();
 
 	if (size == 0) {
 		/*
@@ -855,7 +791,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 		mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
 	}
 
-	post_set();
+	cache_enable();
 	local_irq_restore(flags);
 }
 
-- 
2.35.3

From nobody Sat May 18 20:15:26 2024
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: brchuckz@netscape.net, jbeulich@suse.com, Juergen Gross, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
 "Rafael J. Wysocki", Pavel Machek, stable@vger.kernel.org
Subject: [PATCH 2/3] x86: add wrapper functions for mtrr functions handling also pat
Date: Fri, 15 Jul 2022 16:25:48 +0200
Message-Id: <20220715142549.25223-3-jgross@suse.com>
In-Reply-To: <20220715142549.25223-1-jgross@suse.com>
References: <20220715142549.25223-1-jgross@suse.com>

There are several MTRR functions that also do PAT handling. In order
to support PAT handling without MTRR in the future, add wrappers for
those functions.
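The wrappers keep the policy (e.g. the delayed AP init flag) in generic
code and leave only the MTRR-specific mechanism behind the mtrr_*()
calls. A condensed sketch of the pattern introduced in the diff below
(simplified; comments and the remaining wrappers trimmed):

/*
 * Condensed from this patch: the generic cache_*() entry points own
 * the bookkeeping, the MTRR-specific work stays in mtrr_*().
 */
bool cache_aps_delayed_init;

void cache_set_aps_delayed_init(void)
{
	cache_aps_delayed_init = true;
}

void cache_ap_init(void)
{
	if (cache_aps_delayed_init)
		return;		/* done later for all APs at once */

	mtrr_ap_init();
}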
Cc: <stable@vger.kernel.org> # 5.17
Fixes: bdd8b6c98239 ("drm/i915: replace X86_FEATURE_PAT with pat_enabled()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/mtrr.h      |  2 --
 arch/x86/include/asm/processor.h |  7 +++++
 arch/x86/kernel/cpu/common.c     | 44 +++++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/mtrr/mtrr.c  | 25 +++---------------
 arch/x86/kernel/setup.c          |  5 +---
 arch/x86/kernel/smpboot.c        |  8 +++---
 arch/x86/power/cpu.c             |  2 +-
 7 files changed, 59 insertions(+), 34 deletions(-)

diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 12a16caed395..900083ac9f60 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -43,7 +43,6 @@ extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
 extern void mtrr_ap_init(void);
-extern void set_mtrr_aps_delayed_init(void);
 extern void mtrr_aps_init(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
@@ -86,7 +85,6 @@ static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 {
 }
 #define mtrr_ap_init() do {} while (0)
-#define set_mtrr_aps_delayed_init() do {} while (0)
 #define mtrr_aps_init() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
 #define mtrr_disable() do {} while (0)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 5c934b922450..e2140204fb7e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -865,7 +865,14 @@ bool arch_is_platform_page(u64 paddr);
 #define arch_is_platform_page arch_is_platform_page
 #endif
 
+extern bool cache_aps_delayed_init;
+
 void cache_disable(void);
 void cache_enable(void);
+void cache_bp_init(void);
+void cache_ap_init(void);
+void cache_set_aps_delayed_init(void);
+void cache_aps_init(void);
+void cache_bp_restore(void);
 
 #endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e43322f8a4ef..0a1bd14f7966 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1929,7 +1929,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 #ifdef CONFIG_X86_32
 	enable_sep_cpu();
 #endif
-	mtrr_ap_init();
+	cache_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
 	update_srbds_msr();
@@ -2403,3 +2403,45 @@ void cache_enable(void) __releases(cache_disable_lock)
 
 	raw_spin_unlock(&cache_disable_lock);
 }
+
+void __init cache_bp_init(void)
+{
+	if (IS_ENABLED(CONFIG_MTRR))
+		mtrr_bp_init();
+	else
+		pat_disable("PAT support disabled because CONFIG_MTRR is disabled in the kernel.");
+}
+
+void cache_ap_init(void)
+{
+	if (cache_aps_delayed_init)
+		return;
+
+	mtrr_ap_init();
+}
+
+bool cache_aps_delayed_init;
+
+void cache_set_aps_delayed_init(void)
+{
+	cache_aps_delayed_init = true;
+}
+
+void cache_aps_init(void)
+{
+	/*
+	 * Check if someone has requested the delay of AP cache initialization,
+	 * by doing cache_set_aps_delayed_init(), prior to this point. If not,
+	 * then we are done.
+	 */
+	if (!cache_aps_delayed_init)
+		return;
+
+	mtrr_aps_init();
+	cache_aps_delayed_init = false;
+}
+
+void cache_bp_restore(void)
+{
+	mtrr_bp_restore();
+}
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 2746cac9d8a9..c1593cfae641 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -69,7 +69,6 @@ unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 static DEFINE_MUTEX(mtrr_mutex);
 
 u64 size_or_mask, size_and_mask;
-static bool mtrr_aps_delayed_init;
 
 static const struct mtrr_ops *mtrr_ops[X86_VENDOR_NUM] __ro_after_init;
 
@@ -176,7 +175,8 @@ static int mtrr_rendezvous_handler(void *info)
 	if (data->smp_reg != ~0U) {
 		mtrr_if->set(data->smp_reg, data->smp_base,
 			     data->smp_size, data->smp_type);
-	} else if (mtrr_aps_delayed_init || !cpu_online(smp_processor_id())) {
+	} else if ((use_intel() && cache_aps_delayed_init) ||
+		   !cpu_online(smp_processor_id())) {
 		mtrr_if->set_all();
 	}
 	return 0;
@@ -789,7 +789,7 @@ void mtrr_ap_init(void)
 	if (!mtrr_enabled())
 		return;
 
-	if (!use_intel() || mtrr_aps_delayed_init)
+	if (!use_intel())
 		return;
 
 	/*
@@ -823,16 +823,6 @@ void mtrr_save_state(void)
 	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
 }
 
-void set_mtrr_aps_delayed_init(void)
-{
-	if (!mtrr_enabled())
-		return;
-	if (!use_intel())
-		return;
-
-	mtrr_aps_delayed_init = true;
-}
-
 /*
  * Delayed MTRR initialization for all AP's
  */
@@ -841,16 +831,7 @@ void mtrr_aps_init(void)
 	if (!use_intel() || !mtrr_enabled())
 		return;
 
-	/*
-	 * Check if someone has requested the delay of AP MTRR initialization,
-	 * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
-	 * then we are done.
-	 */
-	if (!mtrr_aps_delayed_init)
-		return;
-
 	set_mtrr(~0U, 0, 0, 0);
-	mtrr_aps_delayed_init = false;
 }
 
 void mtrr_bp_restore(void)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bd6c6fd373ae..27d61f73c68a 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1001,10 +1001,7 @@ void __init setup_arch(char **cmdline_p)
 	max_pfn = e820__end_of_ram_pfn();
 
 	/* update e820 for memory not covered by WB MTRRs */
-	if (IS_ENABLED(CONFIG_MTRR))
-		mtrr_bp_init();
-	else
-		pat_disable("PAT support disabled because CONFIG_MTRR is disabled in the kernel.");
+	cache_bp_init();
 
 	if (mtrr_trim_uncached_memory(max_pfn))
 		max_pfn = e820__end_of_ram_pfn();
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5e7f9532a10d..535d73a47062 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1432,7 +1432,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 	uv_system_init();
 
-	set_mtrr_aps_delayed_init();
+	cache_set_aps_delayed_init();
 
 	smp_quirk_init_udelay();
 
@@ -1443,12 +1443,12 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 void arch_thaw_secondary_cpus_begin(void)
 {
-	set_mtrr_aps_delayed_init();
+	cache_set_aps_delayed_init();
 }
 
 void arch_thaw_secondary_cpus_end(void)
 {
-	mtrr_aps_init();
+	cache_aps_init();
 }
 
 /*
@@ -1491,7 +1491,7 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
 
 	nmi_selftest();
 	impress_friends();
-	mtrr_aps_init();
+	cache_aps_init();
 }
 
 static int __initdata setup_possible_cpus = -1;
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index bb176c72891c..21e014715322 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -261,7 +261,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
 	do_fpu_end();
 	tsc_verify_tsc_adjust(true);
 	x86_platform.restore_sched_clock_state();
-	mtrr_bp_restore();
+	cache_bp_restore();
 	perf_restore_debug_store();
 
 	c = &cpu_data(smp_processor_id());
-- 
2.35.3

From nobody Sat May 18 20:15:26 2024
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
Cc: brchuckz@netscape.net, jbeulich@suse.com, Juergen Gross, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
 Andy Lutomirski, Peter Zijlstra, Boris Ostrovsky, stable@vger.kernel.org
Subject: [PATCH 3/3] x86: decouple pat and mtrr handling
Date: Fri, 15 Jul 2022 16:25:49 +0200
Message-Id: <20220715142549.25223-4-jgross@suse.com>
In-Reply-To: <20220715142549.25223-1-jgross@suse.com>
References: <20220715142549.25223-1-jgross@suse.com>

Today PAT is usable only with MTRR being active, with some nasty tweaks
to make PAT usable when running as a Xen PV guest, which doesn't
support MTRR.

The reason for this coupling is that both PAT MSR changes and MTRR
changes require a similar sequence, so full PAT support was added using
the already available MTRR handling.

Xen PV PAT handling can work without MTRR, as it just needs to consume
the PAT MSR setting done by the hypervisor, without the ability or need
to change it.

This coupling has resulted in a convoluted initialization sequence and
in wrong decisions regarding cache mode availability due to misleading
PAT availability flags.

Fix all of that by allowing PAT to be used without MTRR and by adding
an environment-dependent PAT init function.
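The decoupling is built around a function pointer through which
pat_init() dispatches; a condensed sketch of the mechanism added to
arch/x86/mm/pat/memtype.c below (simplified, details trimmed):

/*
 * Condensed from this patch: pat_init() dispatches through a pointer
 * that defaults to the native, MSR-writing variant. An environment
 * that must not touch the PAT MSR (e.g. Xen PV) installs
 * pat_init_noset() early via pat_init_set(); that variant only reads
 * the MSR set up by the hypervisor and derives the cache modes from it.
 */
static void (*pat_init_func)(void) = pat_init_native;

void __init pat_init_set(void (*func)(void))
{
	pat_init_func = func;
}

void pat_init(void)
{
	pat_init_func();
}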
Cc: <stable@vger.kernel.org> # 5.17
Fixes: bdd8b6c98239 ("drm/i915: replace X86_FEATURE_PAT with pat_enabled()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/memtype.h     |  13 ++-
 arch/x86/include/asm/mtrr.h        |  21 +++--
 arch/x86/kernel/cpu/common.c       |  13 +--
 arch/x86/kernel/cpu/mtrr/generic.c |  14 ----
 arch/x86/kernel/cpu/mtrr/mtrr.c    |  33 ++++----
 arch/x86/kernel/cpu/mtrr/mtrr.h    |   1 -
 arch/x86/kernel/setup.c            |   7 --
 arch/x86/mm/pat/memtype.c          | 127 +++++++++++++++++++++--------
 arch/x86/xen/enlighten_pv.c        |   4 +
 9 files changed, 146 insertions(+), 87 deletions(-)

diff --git a/arch/x86/include/asm/memtype.h b/arch/x86/include/asm/memtype.h
index 9ca760e430b9..93ad980631dc 100644
--- a/arch/x86/include/asm/memtype.h
+++ b/arch/x86/include/asm/memtype.h
@@ -8,7 +8,18 @@
 extern bool pat_enabled(void);
 extern void pat_disable(const char *reason);
 extern void pat_init(void);
-extern void init_cache_modes(void);
+#ifdef CONFIG_X86_PAT
+void pat_init_set(void (*func)(void));
+void pat_init_noset(void);
+void pat_cpu_init(void);
+void pat_ap_init_nomtrr(void);
+void pat_aps_init_nomtrr(void);
+#else
+#define pat_init_set(f) do { } while (0)
+#define pat_cpu_init(f) do { } while (0)
+#define pat_ap_init_nomtrr(f) do { } while (0)
+#define pat_aps_init_nomtrr(f) do { } while (0)
+#endif
 
 extern int memtype_reserve(u64 start, u64 end,
 		enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 900083ac9f60..bb76e5c6e21d 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -42,9 +42,9 @@ extern int mtrr_add_page(unsigned long base, unsigned long size,
 extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
-extern void mtrr_ap_init(void);
-extern void mtrr_aps_init(void);
-extern void mtrr_bp_restore(void);
+extern bool mtrr_ap_init(void);
+extern bool mtrr_aps_init(void);
+extern bool mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
 void mtrr_disable(void);
@@ -84,9 +84,18 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
 static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 {
 }
-#define mtrr_ap_init() do {} while (0)
-#define mtrr_aps_init() do {} while (0)
-#define mtrr_bp_restore() do {} while (0)
+static inline bool mtrr_ap_init(void)
+{
+	return false;
+}
+static inline bool mtrr_aps_init(void)
+{
+	return false;
+}
+static inline bool mtrr_bp_restore(void)
+{
+	return false;
+}
 #define mtrr_disable() do {} while (0)
 #define mtrr_enable() do {} while (0)
 # endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 0a1bd14f7966..3edfb779dab5 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2408,8 +2408,8 @@ void __init cache_bp_init(void)
 {
 	if (IS_ENABLED(CONFIG_MTRR))
 		mtrr_bp_init();
-	else
-		pat_disable("PAT support disabled because CONFIG_MTRR is disabled in the kernel.");
+
+	pat_cpu_init();
 }
 
 void cache_ap_init(void)
@@ -2417,7 +2417,8 @@ void cache_ap_init(void)
 	if (cache_aps_delayed_init)
 		return;
 
-	mtrr_ap_init();
+	if (!mtrr_ap_init())
+		pat_ap_init_nomtrr();
 }
 
 bool cache_aps_delayed_init;
@@ -2437,11 +2438,13 @@ void cache_aps_init(void)
 	if (!cache_aps_delayed_init)
 		return;
 
-	mtrr_aps_init();
+	if (!mtrr_aps_init())
+		pat_aps_init_nomtrr();
 	cache_aps_delayed_init = false;
 }
 
 void cache_bp_restore(void)
 {
-	mtrr_bp_restore();
+	if (!mtrr_bp_restore())
+		pat_cpu_init();
 }
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 84732215b61d..bb6dd96923a4 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -441,20 +441,6 @@ static void __init print_mtrr_state(void)
 	pr_debug("TOM2: %016llx aka %lldM\n", mtrr_tom2, mtrr_tom2>>20);
 }
 
-/* PAT setup for BP. We need to go through sync steps here */
-void __init mtrr_bp_pat_init(void)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	cache_disable();
-
-	pat_init();
-
-	cache_enable();
-	local_irq_restore(flags);
-}
-
 /* Grab all of the MTRR state for this CPU into *state */
 bool __init get_mtrr_state(void)
 {
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index c1593cfae641..76b1553d90f9 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -762,9 +762,6 @@ void __init mtrr_bp_init(void)
 	/* BIOS may override */
 	__mtrr_enabled = get_mtrr_state();
 
-	if (mtrr_enabled())
-		mtrr_bp_pat_init();
-
 	if (mtrr_cleanup(phys_addr)) {
 		changed_by_mtrr_cleanup = 1;
 		mtrr_if->set_all();
@@ -772,25 +769,17 @@ void __init mtrr_bp_init(void)
 		}
 	}
 
-	if (!mtrr_enabled()) {
+	if (!mtrr_enabled())
 		pr_info("Disabled\n");
-
-		/*
-		 * PAT initialization relies on MTRR's rendezvous handler.
-		 * Skip PAT init until the handler can initialize both
-		 * features independently.
-		 */
-		pat_disable("MTRRs disabled, skipping PAT initialization too.");
-	}
 }
 
-void mtrr_ap_init(void)
+bool mtrr_ap_init(void)
 {
 	if (!mtrr_enabled())
-		return;
+		return false;
 
 	if (!use_intel())
-		return;
+		return false;
 
 	/*
 	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
@@ -806,6 +795,8 @@ void mtrr_ap_init(void)
 	 * lock to prevent mtrr entry changes
 	 */
 	set_mtrr_from_inactive_cpu(~0U, 0, 0, 0);
+
+	return true;
 }
 
 /**
@@ -826,20 +817,24 @@ void mtrr_save_state(void)
 /*
  * Delayed MTRR initialization for all AP's
  */
-void mtrr_aps_init(void)
+bool mtrr_aps_init(void)
 {
 	if (!use_intel() || !mtrr_enabled())
-		return;
+		return false;
 
 	set_mtrr(~0U, 0, 0, 0);
+
+	return true;
 }
 
-void mtrr_bp_restore(void)
+bool mtrr_bp_restore(void)
 {
 	if (!use_intel() || !mtrr_enabled())
-		return;
+		return false;
 
 	mtrr_if->set_all();
+
+	return true;
 }
 
 static int __init mtrr_init_finialize(void)
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 2ac99e561181..f6135a539073 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -53,7 +53,6 @@ void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);
 void fill_mtrr_var_range(unsigned int index,
 		u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
 bool get_mtrr_state(void);
-void mtrr_bp_pat_init(void);
 
 extern void __init set_mtrr_ops(const struct mtrr_ops *ops);
 
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 27d61f73c68a..14bb40cd22c6 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1008,13 +1008,6 @@ void __init setup_arch(char **cmdline_p)
 
 	max_possible_pfn = max_pfn;
 
-	/*
-	 * This call is required when the CPU does not support PAT. If
-	 * mtrr_bp_init() invoked it already via pat_init() the call has no
-	 * effect.
-	 */
-	init_cache_modes();
-
 	/*
 	 * Define random base addresses for memory sections after max_pfn is
 	 * defined and before each memory section base is used.
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index d5ef64ddd35e..3d4bc27ffebb 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -41,6 +41,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -65,30 +66,23 @@ static bool __read_mostly pat_disabled = !IS_ENABLED(CONFIG_X86_PAT);
 static bool __read_mostly pat_bp_enabled;
 static bool __read_mostly pat_cm_initialized;
 
-/*
- * PAT support is enabled by default, but can be disabled for
- * various user-requested or hardware-forced reasons:
- */
-void pat_disable(const char *msg_reason)
-{
-	if (pat_disabled)
-		return;
+static void init_cache_modes(void);
 
-	if (pat_bp_initialized) {
-		WARN_ONCE(1, "x86/PAT: PAT cannot be disabled after initialization\n");
-		return;
-	}
-
-	pat_disabled = true;
-	pr_info("x86/PAT: %s\n", msg_reason);
+#ifdef CONFIG_X86_PAT
+static void pat_init_nopat(void)
+{
+	init_cache_modes();
 }
 
 static int __init nopat(char *str)
 {
-	pat_disable("PAT support disabled via boot option.");
+	pat_disabled = true;
+	pr_info("PAT support disabled via boot option.");
+	pat_init_set(pat_init_nopat);
 	return 0;
 }
 early_param("nopat", nopat);
+#endif
 
 bool pat_enabled(void)
 {
@@ -243,13 +237,17 @@ static void pat_bp_init(u64 pat)
 	u64 tmp_pat;
 
 	if (!boot_cpu_has(X86_FEATURE_PAT)) {
-		pat_disable("PAT not supported by the CPU.");
+		pr_info("PAT not supported by the CPU.");
+		pat_disabled = true;
+		pat_init_set(pat_init_nopat);
 		return;
 	}
 
 	rdmsrl(MSR_IA32_CR_PAT, tmp_pat);
 	if (!tmp_pat) {
-		pat_disable("PAT support disabled by the firmware.");
+		pr_info("PAT support disabled by the firmware.");
+		pat_disabled = true;
+		pat_init_set(pat_init_nopat);
 		return;
 	}
 
@@ -272,7 +270,7 @@ static void pat_ap_init(u64 pat)
 	wrmsrl(MSR_IA32_CR_PAT, pat);
 }
 
-void init_cache_modes(void)
+static void init_cache_modes(void)
 {
 	u64 pat = 0;
 
@@ -318,25 +316,12 @@ void init_cache_modes(void)
 	__init_cache_modes(pat);
 }
 
-/**
- * pat_init - Initialize the PAT MSR and PAT table on the current CPU
- *
- * This function initializes PAT MSR and PAT table with an OS-defined value
- * to enable additional cache attributes, WC, WT and WP.
- *
- * This function must be called on all CPUs using the specific sequence of
- * operations defined in Intel SDM. mtrr_rendezvous_handler() provides this
- * procedure for PAT.
- */
-void pat_init(void)
+#ifdef CONFIG_X86_PAT
+static void pat_init_native(void)
 {
 	u64 pat;
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 
-#ifndef CONFIG_X86_PAT
-	pr_info_once("x86/PAT: PAT support disabled because CONFIG_X86_PAT is disabled in the kernel.\n");
-#endif
-
 	if (pat_disabled)
 		return;
 
@@ -406,6 +391,80 @@ void pat_init(void)
 
 #undef PAT
 
+void pat_init_noset(void)
+{
+	pat_bp_enabled = true;
+	init_cache_modes();
+}
+
+static void (*pat_init_func)(void) = pat_init_native;
+
+void __init pat_init_set(void (*func)(void))
+{
+	pat_init_func = func;
+}
+
+/**
+ * pat_init - Initialize the PAT MSR and PAT table on the current CPU
+ *
+ * This function initializes PAT MSR and PAT table with an OS-defined value
+ * to enable additional cache attributes, WC, WT and WP.
+ *
+ * This function must be called on all CPUs using the specific sequence of
+ * operations defined in Intel SDM. mtrr_rendezvous_handler() provides this
+ * procedure for PAT.
+ */
+void pat_init(void)
+{
+	pat_init_func();
+}
+
+static int __pat_cpu_init(void *data)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	cache_disable();
+
+	pat_init();
+
+	cache_enable();
+	local_irq_restore(flags);
+
+	return 0;
+}
+
+void pat_cpu_init(void)
+{
+	if (pat_init_func != pat_init_native)
+		pat_init();
+	else
+		__pat_cpu_init(NULL);
+}
+
+void pat_ap_init_nomtrr(void)
+{
+	if (pat_init_func != pat_init_native)
+		return;
+
+	stop_machine_from_inactive_cpu(__pat_cpu_init, NULL, cpu_callout_mask);
+}
+
+void pat_aps_init_nomtrr(void)
+{
+	if (pat_init_func != pat_init_native)
+		return;
+
+	stop_machine(__pat_cpu_init, NULL, cpu_online_mask);
+}
+#else
+void pat_init(void)
+{
+	pr_info_once("x86/PAT: PAT support disabled because CONFIG_X86_PAT is disabled in the kernel.\n");
+	init_cache_modes();
+}
+#endif /* CONFIG_X86_PAT */
+
 static DEFINE_SPINLOCK(memtype_lock);	/* protects memtype accesses */
 
 /*
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 70fb2ea85e90..97831d822872 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -69,6 +69,7 @@
 #include
 #include
 #include
+#include
 #ifdef CONFIG_X86_IOPL_IOPERM
 #include
 #endif
@@ -1317,6 +1318,9 @@ asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
 		initrd_start = __pa(xen_start_info->mod_start);
 	}
 
+	/* Don't try to modify PAT MSR. */
+	pat_init_set(pat_init_noset);
+
 	/* Poke various useful things into boot_params */
 	boot_params.hdr.type_of_loader = (9 << 4) | 0;
 	boot_params.hdr.ramdisk_image = initrd_start;
-- 
2.35.3