From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 01/51] Revert "x86/speculation: Add RSB VM Exit protections"
Date: Wed, 5 Oct 2022 13:31:49 +0200
Message-Id: <20221005113210.333783784@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Thadeu Lima de Souza Cascardo

This reverts commit f2f41ef0352db9679bfae250d7a44b3113f3a3cc.

This is commit 2b1299322016731d56807aa49254a5ea3080b6b3 upstream.

In order to apply IBRS mitigation for Retbleed, PBRSB mitigations must be
reverted and then reapplied, so the backports can look sane.

Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 Documentation/admin-guide/hw-vuln/spectre.rst |  8 ---
 arch/x86/include/asm/cpufeatures.h            |  2
 arch/x86/include/asm/msr-index.h              |  4 -
 arch/x86/include/asm/nospec-branch.h          | 15 ------
 arch/x86/kernel/cpu/bugs.c                    | 61 --------------------------
 arch/x86/kernel/cpu/common.c                  | 12 -----
 arch/x86/kvm/vmx/vmenter.S                    |  1
 tools/arch/x86/include/asm/cpufeatures.h      |  1
 8 files changed, 3 insertions(+), 101 deletions(-)

--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -422,14 +422,6 @@ The possible values in this file are:
   'RSB filling'   Protection of RSB on context switch enabled
   =============   ===========================================
 
-  - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status:
-
-  ===========================  =======================================================
-  'PBRSB-eIBRS: SW sequence'   CPU is affected and protection of RSB on VMEXIT enabled
-  'PBRSB-eIBRS: Vulnerable'    CPU is vulnerable
-  'PBRSB-eIBRS: Not affected'  CPU is not affected by PBRSB
-  ===========================  =======================================================
-
 Full mitigation might require a microcode update from the CPU
 vendor. When the necessary microcode is not available, the kernel will
 report vulnerability.
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -286,7 +286,6 @@
 #define X86_FEATURE_CQM_MBM_LOCAL	(11*32+ 3) /* LLC Local MBM monitoring */
 #define X86_FEATURE_FENCE_SWAPGS_USER	(11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
 #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
-#define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+ 6) /* "" Fill RSB on VM exit when EIBRS is enabled */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX512_BF16	(12*32+ 5) /* AVX512 BFLOAT16 instructions */
@@ -408,6 +407,5 @@
 #define X86_BUG_SRBDS	X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
 #define X86_BUG_MMIO_STALE_DATA	X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
 #define X86_BUG_MMIO_UNKNOWN	X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
-#define X86_BUG_EIBRS_PBRSB	X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -129,10 +129,6 @@
 						 * bit available to control VERW
 						 * behavior.
 						 */
-#define ARCH_CAP_PBRSB_NO		BIT(24)	/*
-						 * Not susceptible to Post-Barrier
-						 * Return Stack Buffer Predictions.
-						 */
 
 #define MSR_IA32_FLUSH_CMD		0x0000010b
 #define L1D_FLUSH			BIT(0)	/*
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -79,13 +79,6 @@
 	add	$(BITS_PER_LONG/8) * nr, sp;
 #endif
 
-#define __ISSUE_UNBALANCED_RET_GUARD(sp)	\
-	call	881f;				\
-	int3;					\
-881:						\
-	add	$(BITS_PER_LONG/8), sp;		\
-	lfence;
-
 #ifdef __ASSEMBLY__
 
 /*
@@ -155,14 +148,6 @@
 #endif
 .endm
 
-.macro ISSUE_UNBALANCED_RET_GUARD ftr:req
-	ANNOTATE_NOSPEC_ALTERNATIVE
-	ALTERNATIVE "jmp .Lskip_pbrsb_\@", \
-		__stringify(__ISSUE_UNBALANCED_RET_GUARD(%_ASM_SP)) \
-		\ftr
-.Lskip_pbrsb_\@:
-.endm
-
 /*
  * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
  * monstrosity above, manually.
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1046,49 +1046,6 @@ static enum spectre_v2_mitigation __init
 	return SPECTRE_V2_RETPOLINE;
 }
 
-static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
-{
-	/*
-	 * Similar to context switches, there are two types of RSB attacks
-	 * after VM exit:
-	 *
-	 * 1) RSB underflow
-	 *
-	 * 2) Poisoned RSB entry
-	 *
-	 * When retpoline is enabled, both are mitigated by filling/clearing
-	 * the RSB.
-	 *
-	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
-	 * prediction isolation protections, RSB still needs to be cleared
-	 * because of #2. Note that SMEP provides no protection here, unlike
-	 * user-space-poisoned RSB entries.
-	 *
-	 * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
-	 * bug is present then a LITE version of RSB protection is required,
-	 * just a single call needs to retire before a RET is executed.
-	 */
-	switch (mode) {
-	case SPECTRE_V2_NONE:
-		/* These modes already fill RSB at vmexit */
-	case SPECTRE_V2_LFENCE:
-	case SPECTRE_V2_RETPOLINE:
-	case SPECTRE_V2_EIBRS_RETPOLINE:
-		return;
-
-	case SPECTRE_V2_EIBRS_LFENCE:
-	case SPECTRE_V2_EIBRS:
-		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
-			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
-			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
-		}
-		return;
-	}
-
-	pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
-	dump_stack();
-}
-
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -1181,8 +1138,6 @@ static void __init spectre_v2_select_mit
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
-
 	/*
 	 * Retpoline means the kernel is safe because it has no indirect
 	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
@@ -1930,19 +1885,6 @@ static char *ibpb_state(void)
 	return "";
 }
 
-static char *pbrsb_eibrs_state(void)
-{
-	if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
-		if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) ||
-		    boot_cpu_has(X86_FEATURE_RETPOLINE))
-			return ", PBRSB-eIBRS: SW sequence";
-		else
-			return ", PBRSB-eIBRS: Vulnerable";
-	} else {
-		return ", PBRSB-eIBRS: Not affected";
-	}
-}
-
 static ssize_t spectre_v2_show_state(char *buf)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
@@ -1955,13 +1897,12 @@ static ssize_t spectre_v2_show_state(cha
 	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
 		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
 
-	return sprintf(buf, "%s%s%s%s%s%s%s\n",
+	return sprintf(buf, "%s%s%s%s%s%s\n",
 		       spectre_v2_strings[spectre_v2_enabled],
 		       ibpb_state(),
 		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 		       stibp_state(),
 		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
-		       pbrsb_eibrs_state(),
 		       spectre_v2_module_string());
 }
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1025,7 +1025,6 @@ static void identify_cpu_without_cpuid(s
 #define NO_SWAPGS		BIT(6)
 #define NO_ITLB_MULTIHIT	BIT(7)
 #define NO_SPECTRE_V2		BIT(8)
-#define NO_EIBRS_PBRSB		BIT(9)
 #define NO_MMIO			BIT(10)
 
 #define VULNWL(_vendor, _family, _model, _whitelist)	\
@@ -1072,7 +1071,7 @@ static const __initconst struct x86_cpu_
 
 	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
 	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
-	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
+	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
 
 	/*
	 * Technically, swapgs isn't serializing on AMD (despite it previously
@@ -1082,9 +1081,7 @@ static const __initconst struct x86_cpu_
 	 * good enough for our purposes.
 	 */
 
-	VULNWL_INTEL(ATOM_TREMONT,		NO_EIBRS_PBRSB),
-	VULNWL_INTEL(ATOM_TREMONT_L,		NO_EIBRS_PBRSB),
-	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
+	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT),
 
 	/* AMD Family 0xf - 0x12 */
 	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
@@ -1251,11 +1248,6 @@ static void __init cpu_set_bug_bits(stru
 		setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN);
 	}
 
-	if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
-	    !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
-	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
-		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
-
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
 
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -92,7 +92,6 @@ ENTRY(vmx_vmexit)
 	pop %_ASM_AX
 .Lvmexit_skip_rsb:
 #endif
-	ISSUE_UNBALANCED_RET_GUARD X86_FEATURE_RSB_VMEXIT_LITE
 	ret
 ENDPROC(vmx_vmexit)
 
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -284,7 +284,6 @@
 #define X86_FEATURE_CQM_MBM_LOCAL	(11*32+ 3) /* LLC Local MBM monitoring */
 #define X86_FEATURE_FENCE_SWAPGS_USER	(11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
 #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
-#define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+ 6) /* "" Fill RSB on VM-Exit when EIBRS is enabled */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
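For readers following along: the strings this revert removes from spectre.rst
are part of what the kernel reports through sysfs. A minimal userspace sketch
(not part of the series; it only assumes the standard sysfs path documented in
spectre.rst) that dumps the current spectre_v2 status line:

	#include <stdio.h>

	/*
	 * Illustrative only: print the spectre_v2 mitigation string; after
	 * this revert the ", PBRSB-eIBRS: ..." suffix no longer appears in
	 * it on 5.4 until the protections are reapplied later in the series.
	 */
	int main(void)
	{
		char line[256];
		FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

		if (!f)
			return 1;
		if (fgets(line, sizeof(line), f))
			fputs(line, stdout);
		fclose(f);
		return 0;
	}

From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 02/51] Revert "x86/cpu: Add a steppings field to struct x86_cpu_id"
Date: Wed, 5 Oct 2022 13:31:50 +0200
Message-Id: <20221005113210.366262888@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>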
References: <20221005113210.255710920@linuxfoundation.org>

From: Thadeu Lima de Souza Cascardo

This reverts commit 749ec6b48a9a41f95154cd5aa61053aaeb7c7aff.

This is commit e9d7144597b10ff13ff2264c059f7d4a7fbc89ac upstream.

A proper backport will be done. This will make it easier to check for
parts affected by Retbleed, which require X86_MATCH_VENDOR_FAM_MODEL.

Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/include/asm/cpu_device_id.h | 30 ------------------------------
 arch/x86/kernel/cpu/match.c          |  7 +------
 include/linux/mod_devicetable.h      |  6 ------
 3 files changed, 1 insertion(+), 42 deletions(-)

--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -9,36 +9,6 @@
 
 #include <linux/mod_devicetable.h>
 
-#define X86_CENTAUR_FAM6_C7_D		0xd
-#define X86_CENTAUR_FAM6_NANO		0xf
-
-#define X86_STEPPINGS(mins, maxs)	GENMASK(maxs, mins)
-
-/**
- * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
- * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
- *		The name is expanded to X86_VENDOR_@_vendor
- * @_family:	The family number or X86_FAMILY_ANY
- * @_model:	The model number, model constant or X86_MODEL_ANY
- * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
- * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
- * @_data:	Driver specific data or NULL. The internal storage
- *		format is unsigned long. The supplied value, pointer
- *		etc. is casted to unsigned long internally.
- *
- * Backport version to keep the SRBDS pile consistant. No shorter variants
- * required for this.
- */
-#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
-						     _steppings, _feature, _data) { \
-	.vendor		= X86_VENDOR_##_vendor, \
-	.family		= _family, \
-	.model		= _model, \
-	.steppings	= _steppings, \
-	.feature	= _feature, \
-	.driver_data	= (unsigned long) _data \
-}
-
 /*
  * Match specific microcode revisions.
  *
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -34,18 +34,13 @@ const struct x86_cpu_id *x86_match_cpu(c
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 
-	for (m = match;
-	     m->vendor | m->family | m->model | m->steppings | m->feature;
-	     m++) {
+	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
-		if (m->steppings != X86_STEPPING_ANY &&
-		    !(BIT(c->x86_stepping) & m->steppings))
-			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -657,10 +657,6 @@ struct mips_cdmm_device_id {
 /*
  * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
  * Although gcc seems to ignore this error, clang fails without this define.
- *
- * Note: The ordering of the struct is different from upstream because the
- * static initializers in kernels < 5.7 still use C89 style while upstream
- * has been converted to proper C99 initializers.
 */
 #define x86cpu_device_id x86_cpu_id
 struct x86_cpu_id {
@@ -669,7 +665,6 @@ struct x86_cpu_id {
 	__u16 model;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
-	__u16 steppings;
 };
 
 #define X86_FEATURE_MATCH(x) \
@@ -678,7 +673,6 @@ struct x86_cpu_id {
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
-#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
 
 /*

From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Thomas Gleixner, Borislav Petkov, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 03/51] x86/devicetable: Move x86 specific macro out of generic code
Date: Wed, 5 Oct 2022 13:31:51 +0200
Message-Id: <20221005113210.405563413@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Thomas Gleixner

commit ba5bade4cc0d2013cdf5634dae554693c968a090 upstream.

There is no reason that this gunk is in a generic header file. The
wildcard defines need to stay as they are required by file2alias.
Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov
Reviewed-by: Greg Kroah-Hartman
Link: https://lkml.kernel.org/r/20200320131508.736205164@linutronix.de
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/include/asm/cpu_device_id.h   | 13 ++++++++++++-
 arch/x86/kvm/svm.c                     |  1 +
 arch/x86/kvm/vmx/vmx.c                 |  1 +
 drivers/cpufreq/acpi-cpufreq.c         |  1 +
 drivers/cpufreq/amd_freq_sensitivity.c |  1 +
 include/linux/mod_devicetable.h        |  4 +---
 6 files changed, 17 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -6,10 +6,21 @@
  * Declare drivers belonging to specific x86 CPUs
  * Similar in spirit to pci_device_id and related PCI functions
  */
-
 #include <linux/mod_devicetable.h>
 
 /*
+ * The wildcard initializers are in mod_devicetable.h because
+ * file2alias needs them. Sigh.
+ */
+
+#define X86_FEATURE_MATCH(x) {			\
+	.vendor		= X86_VENDOR_ANY,	\
+	.family		= X86_FAMILY_ANY,	\
+	.model		= X86_MODEL_ANY,	\
+	.feature	= x,			\
+}
+
+/*
  * Match specific microcode revisions.
  *
  * vendor/family/model/stepping must be all set.
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -47,6 +47,7 @@
 #include <asm/kvm_para.h>
 #include <asm/irq_remapping.h>
 #include <asm/spec-ctrl.h>
+#include <asm/cpu_device_id.h>
 
 #include <asm/virtext.h>
 #include "trace.h"
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -31,6 +31,7 @@
 #include <asm/apic.h>
 #include <asm/asm.h>
 #include <asm/cpu.h>
+#include <asm/cpu_device_id.h>
 #include <asm/debugreg.h>
 #include <asm/desc.h>
 #include <asm/fpu/internal.h>
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -30,6 +30,7 @@
 #include <asm/msr.h>
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
+#include <asm/cpu_device_id.h>
 
 MODULE_AUTHOR("Paul Diefenbaugh, Dominik Brodowski");
 MODULE_DESCRIPTION("ACPI Processor P-States Driver");
--- a/drivers/cpufreq/amd_freq_sensitivity.c
+++ b/drivers/cpufreq/amd_freq_sensitivity.c
@@ -18,6 +18,7 @@
 
 #include <asm/msr.h>
 #include <asm/cpufeature.h>
+#include <asm/cpu_device_id.h>
 
 #include "cpufreq_ondemand.h"
 
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -667,9 +667,7 @@ struct x86_cpu_id {
 	kernel_ulong_t driver_data;
 };
 
-#define X86_FEATURE_MATCH(x) \
-	{ X86_VENDOR_ANY, X86_FAMILY_ANY, X86_MODEL_ANY, x }
-
+/* Wild cards for x86_cpu_id::vendor, family, model and feature */
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
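For orientation between these two patches: a driver-side match table keeps
working unchanged across the macro move, since only the macro's home changed.
A minimal sketch (the table name is hypothetical; X86_FEATURE_AES is just an
arbitrary real feature bit):

	#include <linux/module.h>
	#include <asm/cpu_device_id.h>

	/* Hypothetical example table; any X86_FEATURE_* bit works here. */
	static const struct x86_cpu_id aes_cpu_ids[] = {
		X86_FEATURE_MATCH(X86_FEATURE_AES),
		{}	/* empty terminator required by x86_match_cpu() */
	};
	MODULE_DEVICE_TABLE(x86cpu, aes_cpu_ids);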
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Thomas Gleixner, Borislav Petkov, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 04/51] x86/cpu: Add consistent CPU match macros
Date: Wed, 5 Oct 2022 13:31:52 +0200
Message-Id: <20221005113210.453256468@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Thomas Gleixner

commit 20d437447c0089cda46c683db219d3b4e2cde40e upstream.

Finding all places which build x86_cpu_id match tables is tedious and the
logic is hidden in lots of differently named macro wrappers.

Most of these initializer macros use plain C89 initializers which rely on
the ordering of the struct members. So new members could only be added at
the end of the struct, but that's ugly as hell and C99 initializers are
really the right thing to use.

Provide a set of macros which:

  - Have a proper naming scheme, starting with X86_MATCH_

  - Use C99 initializers

The set of provided macros are all subsets of the base macro

    X86_MATCH_VENDOR_FAM_MODEL_FEATURE()

which allows to supply all possible selection criteria:

    vendor, family, model, feature

The other macros shorten this to avoid typing all arguments when they are
not needed and would require one of the _ANY constants. They have been
created due to the requirements of the existing usage sites.

Also add a few model constants for Centaur CPUs and QUARK.

Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov
Reviewed-by: Greg Kroah-Hartman
Link: https://lkml.kernel.org/r/20200320131508.826011988@linutronix.de
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/include/asm/cpu_device_id.h | 140 ++++++++++++++++++++++++++++++---
 arch/x86/include/asm/intel-family.h  |   6 +
 arch/x86/kernel/cpu/match.c          |  13 ++-
 3 files changed, 146 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -5,21 +5,143 @@
 /*
  * Declare drivers belonging to specific x86 CPUs
  * Similar in spirit to pci_device_id and related PCI functions
- */
-#include <linux/mod_devicetable.h>
-
-/*
+ *
  * The wildcard initializers are in mod_devicetable.h because
  * file2alias needs them. Sigh.
  */
+#include <linux/mod_devicetable.h>
+/* Get the INTEL_FAM* model defines */
+#include <asm/intel-family.h>
+/* And the X86_VENDOR_* ones */
+#include <asm/processor.h>
 
-#define X86_FEATURE_MATCH(x) { \
-	.vendor		= X86_VENDOR_ANY, \
-	.family		= X86_FAMILY_ANY, \
-	.model		= X86_MODEL_ANY, \
-	.feature	= x, \
+/* Centaur FAM6 models */
+#define X86_CENTAUR_FAM6_C7_A		0xa
+#define X86_CENTAUR_FAM6_C7_D		0xd
+#define X86_CENTAUR_FAM6_NANO		0xf
+
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * Use only if you need all selectors. Otherwise use one of the shorter
+ * macros of the X86_MATCH_* family. If there is no matching shorthand
+ * macro, consider to add one. If you really need to wrap one of the macros
+ * into another macro at the usage site for good reasons, then please
+ * start this local macro with X86_MATCH to allow easy grepping.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model, \
+					   _feature, _data) { \
+	.vendor		= X86_VENDOR_##_vendor, \
+	.family		= _family, \
+	.model		= _model, \
+	.feature	= _feature, \
+	.driver_data	= (unsigned long) _data \
 }
 
+/**
+ * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @family:	The family number or X86_FAMILY_ANY
+ * @feature:	A X86_FEATURE bit
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_FEATURE(vendor, family, feature, data)	\
+	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family,		\
+					   X86_MODEL_ANY, feature, data)
+
+/**
+ * X86_MATCH_VENDOR_FEATURE - Macro for matching vendor and CPU feature
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @feature:	A X86_FEATURE bit
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FEATURE(vendor, feature, data)			\
+	X86_MATCH_VENDOR_FAM_FEATURE(vendor, X86_FAMILY_ANY, feature, data)
+
+/**
+ * X86_MATCH_FEATURE - Macro for matching a CPU feature
+ * @feature:	A X86_FEATURE bit
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_FEATURE(feature, data)				\
+	X86_MATCH_VENDOR_FEATURE(ANY, feature, data)
+
+/* Transitional to keep the existing code working */
+#define X86_FEATURE_MATCH(feature)	X86_MATCH_FEATURE(feature, NULL)
+
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL - Match vendor, family and model
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @family:	The family number or X86_FAMILY_ANY
+ * @model:	The model number, model constant or X86_MODEL_ANY
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, data)		\
+	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model,	\
+					   X86_FEATURE_ANY, data)
+
+/**
+ * X86_MATCH_VENDOR_FAM - Match vendor and family
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @family:	The family number or X86_FAMILY_ANY
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments to X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set of wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM(vendor, family, data)			\
+	X86_MATCH_VENDOR_FAM_MODEL(vendor, family, X86_MODEL_ANY, data)
+
+/**
+ * X86_MATCH_INTEL_FAM6_MODEL - Match vendor INTEL, family 6 and model
+ * @model:	The model name without the INTEL_FAM6_ prefix or ANY
+ *		The model name is expanded to INTEL_FAM6_@model internally
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * The vendor is set to INTEL, the family to 6 and all other missing
+ * arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are set to wildcards.
+ *
+ * See X86_MATCH_VENDOR_FAM_MODEL_FEATURE() for further information.
+ */
+#define X86_MATCH_INTEL_FAM6_MODEL(model, data)				\
+	X86_MATCH_VENDOR_FAM_MODEL(INTEL, 6, INTEL_FAM6_##model, data)
+
 /*
  * Match specific microcode revisions.
  *
--- a/arch/x86/include/asm/intel-family.h
+++ b/arch/x86/include/asm/intel-family.h
@@ -35,6 +35,9 @@
  * The #define line may optionally include a comment including platform names.
  */
 
+/* Wildcard match for FAM6 so X86_MATCH_INTEL_FAM6_MODEL(ANY) works */
+#define INTEL_FAM6_ANY			X86_MODEL_ANY
+
 #define INTEL_FAM6_CORE_YONAH		0x0E
 
 #define INTEL_FAM6_CORE2_MEROM		0x0F
@@ -126,6 +129,9 @@
 #define INTEL_FAM6_XEON_PHI_KNL		0x57 /* Knights Landing */
 #define INTEL_FAM6_XEON_PHI_KNM		0x85 /* Knights Mill */
 
+/* Family 5 */
+#define INTEL_FAM5_QUARK_X1000		0x09 /* Quark X1000 SoC */
+
 /* Useful macros */
 #define INTEL_CPU_FAM_ANY(_family, _model, _driver_data) \
 { \
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -16,12 +16,17 @@
 * respective wildcard entries.
 *
 * A typical table entry would be to match a specific CPU
- * { X86_VENDOR_INTEL, 6, 0x12 }
- * or to match a specific CPU feature
- * { X86_FEATURE_MATCH(X86_FEATURE_FOOBAR) }
+ *
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_BROADWELL,
+ *				      X86_FEATURE_ANY, NULL);
 *
 * Fields can be wildcarded with %X86_VENDOR_ANY, %X86_FAMILY_ANY,
- * %X86_MODEL_ANY, %X86_FEATURE_ANY or 0 (except for vendor)
+ * %X86_MODEL_ANY, %X86_FEATURE_ANY (except for vendor)
+ *
+ * asm/cpu_device_id.h contains a set of useful macros which are shortcuts
+ * for various common selections. The above can be shortened to:
+ *
+ * X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, NULL);
 *
 * Arrays used to match for this should also be declared using
 * MODULE_DEVICE_TABLE(x86cpu, ...)
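A concrete sketch built on the example in the match.c comment above (the
table name and helper function are hypothetical, chosen for illustration):

	#include <asm/cpu_device_id.h>

	/* Match any family 6 Broadwell part; {} terminates the table. */
	static const struct x86_cpu_id bdw_ids[] = {
		X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, NULL),
		{}
	};

	static bool is_broadwell(void)
	{
		/* x86_match_cpu() returns the matching entry or NULL */
		return x86_match_cpu(bdw_ids) != NULL;
	}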
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Mark Gross, Borislav Petkov, Thomas Gleixner, Tony Luck, Josh Poimboeuf, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 05/51] x86/cpu: Add a steppings field to struct x86_cpu_id
Date: Wed, 5 Oct 2022 13:31:53 +0200
Message-Id: <20221005113210.501228600@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Mark Gross

commit e9d7144597b10ff13ff2264c059f7d4a7fbc89ac upstream.

Intel uses the same family/model for several CPUs. Sometimes the
stepping must be checked to tell them apart.

On x86 there can be at most 16 steppings. Add a steppings bitmask to
x86_cpu_id and a X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro
and support for matching against family/model/stepping.

[ bp: Massage. ]

Signed-off-by: Mark Gross
Signed-off-by: Borislav Petkov
Signed-off-by: Thomas Gleixner
Reviewed-by: Tony Luck
Reviewed-by: Josh Poimboeuf
[cascardo: have steppings be the last member as there are initializers
 that don't use named members]
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/include/asm/cpu_device_id.h | 27 ++++++++++++++++++++++++---
 arch/x86/kernel/cpu/match.c          |  7 ++++++-
 include/linux/mod_devicetable.h      |  6 ++++++
 3 files changed, 36 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -20,12 +20,14 @@
 #define X86_CENTAUR_FAM6_C7_D		0xd
 #define X86_CENTAUR_FAM6_NANO		0xf
 
+#define X86_STEPPINGS(mins, maxs)	GENMASK(maxs, mins)
 /**
- * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
  * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
  *		The name is expanded to X86_VENDOR_@_vendor
  * @_family:	The family number or X86_FAMILY_ANY
  * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
  * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
  * @_data:	Driver specific data or NULL. The internal storage
  *		format is unsigned long. The supplied value, pointer
@@ -37,16 +39,35 @@
  * into another macro at the usage site for good reasons, then please
  * start this local macro with X86_MATCH to allow easy grepping.
  */
-#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model, \
-					   _feature, _data) { \
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
+						     _steppings, _feature, _data) { \
 	.vendor		= X86_VENDOR_##_vendor, \
 	.family		= _family, \
 	.model		= _model, \
+	.steppings	= _steppings, \
 	.feature	= _feature, \
 	.driver_data	= (unsigned long) _data \
 }
 
 /**
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * The steppings arguments of X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE() is
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model, feature, data) \
+	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(vendor, family, model, \
+						     X86_STEPPING_ANY, feature, data)
+
+/**
 * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
 * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
 *		The name is expanded to X86_VENDOR_@vendor
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(c
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 
-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
+	for (m = match;
+	     m->vendor | m->family | m->model | m->steppings | m->feature;
+	     m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
+		if (m->steppings != X86_STEPPING_ANY &&
+		    !(BIT(c->x86_stepping) & m->steppings))
+			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -657,6 +657,10 @@ struct mips_cdmm_device_id {
 /*
  * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
  * Although gcc seems to ignore this error, clang fails without this define.
+ *
+ * Note: The ordering of the struct is different from upstream because the
+ * static initializers in kernels < 5.7 still use C89 style while upstream
+ * has been converted to proper C99 initializers.
 */
 #define x86cpu_device_id x86_cpu_id
 struct x86_cpu_id {
@@ -665,12 +669,14 @@ struct x86_cpu_id {
 	__u16 model;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
+	__u16 steppings;
 };
 
 /* Wild cards for x86_cpu_id::vendor, family, model and feature */
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY  0
+#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
 
 /*
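To make the new stepping matching concrete: X86_STEPPINGS(mins, maxs) is just
GENMASK(maxs, mins), and x86_match_cpu() tests BIT(c->x86_stepping) against
that mask. A minimal sketch (hypothetical table; SKYLAKE and the 0x0-0x5
stepping range are illustrative only):

	#include <linux/bits.h>
	#include <asm/cpu_device_id.h>

	/* Match only steppings 0x0..0x5 of family 6 Skylake parts. */
	static const struct x86_cpu_id early_skl_ids[] = {
		X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,
			INTEL_FAM6_SKYLAKE, X86_STEPPINGS(0x0, 0x5),
			X86_FEATURE_ANY, NULL),
		{}
	};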
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, "Peter Zijlstra (Intel)", Borislav Petkov, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 06/51] x86/kvm/vmx: Make noinstr clean
Date: Wed, 5 Oct 2022 13:31:54 +0200
Message-Id: <20221005113210.548725185@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Peter Zijlstra

commit 742ab6df974ae8384a2dd213db1a3a06cf6d8936 upstream.

The recent mmio_stale_data fixes broke the noinstr constraints:

  vmlinux.o: warning: objtool: vmx_vcpu_enter_exit+0x15b: call to wrmsrl.constprop.0() leaves .noinstr.text section
  vmlinux.o: warning: objtool: vmx_vcpu_enter_exit+0x1bf: call to kvm_arch_has_assigned_device() leaves .noinstr.text section

make it all happy again.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/kvm/vmx/vmx.c   | 6 +++---
 arch/x86/kvm/x86.c       | 4 ++--
 include/linux/kvm_host.h | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -359,9 +359,9 @@ static __always_inline void vmx_disable_
 	if (!vmx->disable_fb_clear)
 		return;
 
-	rdmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
+	msr = __rdmsr(MSR_IA32_MCU_OPT_CTRL);
 	msr |= FB_CLEAR_DIS;
-	wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
+	native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);
 	/* Cache the MSR value to avoid reading it later */
 	vmx->msr_ia32_mcu_opt_ctrl = msr;
 }
@@ -372,7 +372,7 @@ static __always_inline void vmx_enable_f
 		return;
 
 	vmx->msr_ia32_mcu_opt_ctrl &= ~FB_CLEAR_DIS;
-	wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl);
+	native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl);
 }
 
 static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10329,9 +10329,9 @@ void kvm_arch_end_assignment(struct kvm
 }
 EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
 
-bool kvm_arch_has_assigned_device(struct kvm *kvm)
+bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
 {
-	return atomic_read(&kvm->arch.assigned_device_count);
+	return arch_atomic_read(&kvm->arch.assigned_device_count);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
 
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -955,7 +955,7 @@ static inline void kvm_arch_end_assignme
 {
 }
 
-static inline bool kvm_arch_has_assigned_device(struct kvm *kvm)
+static __always_inline bool kvm_arch_has_assigned_device(struct kvm *kvm)
 {
 	return false;
 }
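For readers unfamiliar with the noinstr rule this patch enforces: code in the
.noinstr.text section may only call other non-instrumented code, so traced
wrappers like wrmsrl() are replaced with the raw primitives. A hedged sketch
(the helper and its name are hypothetical; it only reuses the MSR, flag, and
primitives from the patch above):

	/*
	 * Illustrative only: a noinstr function must stick to raw accessors
	 * (__rdmsr()/native_wrmsrl()) so objtool does not flag calls that
	 * leave the .noinstr.text section.
	 */
	static noinstr void toggle_fb_clear(bool disable)
	{
		u64 msr = __rdmsr(MSR_IA32_MCU_OPT_CTRL);	/* raw, untraced */

		if (disable)
			msr |= FB_CLEAR_DIS;
		else
			msr &= ~FB_CLEAR_DIS;
		native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr);	/* raw, untraced */
	}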
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, "Peter Zijlstra (Intel)", Borislav Petkov, Josh Poimboeuf, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 07/51] x86/cpufeatures: Move RETPOLINE flags to word 11
Date: Wed, 5 Oct 2022 13:31:55 +0200
Message-Id: <20221005113210.599340518@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Peter Zijlstra

commit a883d624aed463c84c22596006e5a96f5b44db31 upstream.

In order to extend the RETPOLINE features to 4, move them to word 11
where there is still room. This mostly keeps DISABLE_RETPOLINE
simple.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/include/asm/cpufeatures.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -203,8 +203,8 @@
 #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
-#define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
+/* FREE!				( 7*32+12) */
+/* FREE!				( 7*32+13) */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
@@ -286,6 +286,8 @@
 #define X86_FEATURE_CQM_MBM_LOCAL	(11*32+ 3) /* LLC Local MBM monitoring */
 #define X86_FEATURE_FENCE_SWAPGS_USER	(11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
 #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+#define X86_FEATURE_RETPOLINE		(11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE	(11*32+13) /* "" Use LFENCE for Spectre variant 2 */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
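A note on the encoding this move relies on: an X86_FEATURE_* constant is
word * 32 + bit into the per-CPU capability bitmap, so relocating RETPOLINE
only changes which 32-bit word holds its bit; all boot_cpu_has() users keep
working. A small sketch (the function is hypothetical, for illustration):

	#include <linux/kernel.h>

	static void show_retpoline_position(void)
	{
		/* X86_FEATURE_RETPOLINE after this patch: word 11, bit 12 */
		unsigned int f = 11*32 + 12;

		pr_info("retpoline: capability word %u, bit %u\n",
			f / 32, f % 32);
	}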
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Kim Phillips, Alexandre Chartre, "Peter Zijlstra (Intel)", Borislav Petkov, Josh Poimboeuf, Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.4 08/51] x86/bugs: Report AMD retbleed vulnerability
Date: Wed, 5 Oct 2022 13:31:56 +0200
Message-Id: <20221005113210.647100502@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Alexandre Chartre

commit 6b80b59b3555706508008f1f127b5412c89c7fd8 upstream.

Report that AMD x86 CPUs are vulnerable to the RETBleed (Arbitrary
Speculative Code Execution with Return Instructions) attack.
[peterz: add hygon]
[kim: invert parity; fam15h]

Co-developed-by: Kim Phillips
Signed-off-by: Kim Phillips
Signed-off-by: Alexandre Chartre
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
[cascardo: adjusted BUG numbers to match upstream]
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 arch/x86/include/asm/cpufeatures.h |  3 ++-
 arch/x86/kernel/cpu/bugs.c         | 13 +++++++++++++
 arch/x86/kernel/cpu/common.c       | 19 +++++++++++++++++++
 drivers/base/cpu.c                 |  8 ++++++++
 include/linux/cpu.h                |  2 ++
 5 files changed, 44 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -408,6 +408,7 @@
 #define X86_BUG_ITLB_MULTIHIT	X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
 #define X86_BUG_SRBDS		X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
 #define X86_BUG_MMIO_STALE_DATA	X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
-#define X86_BUG_MMIO_UNKNOWN	X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
+#define X86_BUG_RETBLEED	X86_BUG(26) /* CPU is affected by RETBleed */
+#define X86_BUG_MMIO_UNKNOWN	X86_BUG(28) /* CPU is too old and its MMIO Stale Data status is unknown */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1911,6 +1911,11 @@ static ssize_t srbds_show_state(char *bu
 	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
 }
 
+static ssize_t retbleed_show_state(char *buf)
+{
+	return sprintf(buf, "Vulnerable\n");
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -1957,6 +1962,9 @@ static ssize_t cpu_show_common(struct de
 	case X86_BUG_MMIO_UNKNOWN:
 		return mmio_stale_data_show_state(buf);
 
+	case X86_BUG_RETBLEED:
+		return retbleed_show_state(buf);
+
 	default:
 		break;
 	}
@@ -2016,4 +2024,9 @@ ssize_t cpu_show_mmio_stale_data(struct
 	else
 		return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA);
 }
+
+ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED);
+}
 #endif
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1099,16 +1099,27 @@ static const __initconst struct x86_cpu_
 	{}
 };
 
+#define VULNBL(vendor, family, model, blacklist)	\
+	X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, blacklist)
+
 #define VULNBL_INTEL_STEPPINGS(model, steppings, issues)		   \
 	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
 					    INTEL_FAM6_##model, steppings, \
 					    X86_FEATURE_ANY, issues)
 
+#define VULNBL_AMD(family, blacklist)		\
+	VULNBL(AMD, family, X86_MODEL_ANY, blacklist)
+
+#define VULNBL_HYGON(family, blacklist)		\
+	VULNBL(HYGON, family, X86_MODEL_ANY, blacklist)
+
 #define SRBDS		BIT(0)
 /* CPU is affected by X86_BUG_MMIO_STALE_DATA */
 #define MMIO		BIT(1)
 /* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */
 #define MMIO_SBDS	BIT(2)
+/* CPU is affected by RETbleed, speculating where you would not expect it */
+#define RETBLEED	BIT(3)
 
 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
@@ -1141,6 +1152,11 @@ static const struct x86_cpu_id cpu_vuln_
 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPINGS(0x1, 0x1),	MMIO | MMIO_SBDS),
 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D,	X86_STEPPING_ANY,		MMIO),
 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | MMIO_SBDS),
+
+	VULNBL_AMD(0x15, RETBLEED),
+	VULNBL_AMD(0x16, RETBLEED),
+	VULNBL_AMD(0x17, RETBLEED),
+	VULNBL_HYGON(0x18, RETBLEED),
 	{}
 };
 
@@ -1248,6 +1264,9 @@ static void __init cpu_set_bug_bits(stru
 		setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN);
 	}
 
+	if (cpu_matches(cpu_vuln_blacklist, RETBLEED))
+		setup_force_cpu_bug(X86_BUG_RETBLEED);
+
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
 
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -574,6 +574,12 @@ ssize_t __weak cpu_show_mmio_stale_data(
 	return sysfs_emit(buf, "Not affected\n");
 }
 
+ssize_t __weak cpu_show_retbleed(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "Not affected\n");
+}
+
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
 static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -584,6 +590,7 @@ static DEVICE_ATTR(tsx_async_abort, 0444
 static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
 static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
 static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL);
+static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -596,6 +603,7 @@ static struct attribute *cpu_root_vulner
 	&dev_attr_itlb_multihit.attr,
 	&dev_attr_srbds.attr,
 	&dev_attr_mmio_stale_data.attr,
+	&dev_attr_retbleed.attr,
 	NULL
 };
 
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -68,6 +68,8 @@ extern ssize_t cpu_show_srbds(struct dev
 extern ssize_t cpu_show_mmio_stale_data(struct device *dev,
 					struct device_attribute *attr,
 					char *buf);
+extern ssize_t cpu_show_retbleed(struct device *dev,
+				 struct device_attribute *attr, char *buf);
 
 extern __printf(4, 5)
 struct device *cpu_device_create(struct device *parent, void *drvdata,
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Nm81uA+ajbGDzC/MCW/0+e9vyyxMV9DW2kR981vu36aKW5ENh7By8LuyF6mtgiQds KMD6IYEQ1KVrv3FAx3wiAahskDIG9g2KiJRds7wNmTbYcWVQD10tdwaU+0vH6Y2DiG OP8pUGjBxAWwgHuTkmc/lR/blkA6+d+C6Qh796Is= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Alexandre Chartre , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 09/51] x86/bugs: Add AMD retbleed= boot parameter Date: Wed, 5 Oct 2022 13:31:57 +0200 Message-Id: <20221005113210.694808495@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Alexandre Chartre commit 7fbf47c7ce50b38a64576b150e7011ae73d54669 upstream. Add the "retbleed=3D" boot parameter to select a mitigation for RETBleed. Possible values are "off", "auto" and "unret" (JMP2RET mitigation). The default value is "auto". Currently, "retbleed=3Dauto" will select the unret mitigation on AMD and Hygon and no mitigation on Intel (JMP2RET is not effective on Intel). [peterz: rebase; add hygon] [jpoimboe: cleanups] Signed-off-by: Alexandre Chartre Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: this effectively remove the UNRET mitigation as an option, so it has to be complemented by a later pick of the same commit later. This is done in order to pick retbleed_select_mitigation] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- Documentation/admin-guide/kernel-parameters.txt | 12 +++ arch/x86/kernel/cpu/bugs.c | 74 +++++++++++++++++++= ++++- 2 files changed, 85 insertions(+), 1 deletion(-) --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4298,6 +4298,18 @@ =20 retain_initrd [RAM] Keep initrd memory after extraction =20 + retbleed=3D [X86] Control mitigation of RETBleed (Arbitrary + Speculative Code Execution with Return Instructions) + vulnerability. + + off - unconditionally disable + auto - automatically select a migitation + + Selecting 'auto' will choose a mitigation method at run + time according to the CPU. + + Not specifying this option is equivalent to retbleed=3Dauto. + rfkill.default_state=3D 0 "airplane mode". All wifi, bluetooth, wimax, gps, fm, etc. communication is blocked by default. --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -36,6 +36,7 @@ #include "cpu.h" =20 static void __init spectre_v1_select_mitigation(void); +static void __init retbleed_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); @@ -111,6 +112,12 @@ void __init check_bugs(void) =20 /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); + retbleed_select_mitigation(); + /* + * spectre_v2_select_mitigation() relies on the state set by + * retbleed_select_mitigation(); specifically the STIBP selection is + * forced for UNRET. 
+ */ spectre_v2_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); @@ -706,6 +713,71 @@ static int __init nospectre_v1_cmdline(c early_param("nospectre_v1", nospectre_v1_cmdline); =20 #undef pr_fmt +#define pr_fmt(fmt) "RETBleed: " fmt + +enum retbleed_mitigation { + RETBLEED_MITIGATION_NONE, +}; + +enum retbleed_mitigation_cmd { + RETBLEED_CMD_OFF, + RETBLEED_CMD_AUTO, +}; + +const char * const retbleed_strings[] =3D { + [RETBLEED_MITIGATION_NONE] =3D "Vulnerable", +}; + +static enum retbleed_mitigation retbleed_mitigation __ro_after_init =3D + RETBLEED_MITIGATION_NONE; +static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =3D + RETBLEED_CMD_AUTO; + +static int __init retbleed_parse_cmdline(char *str) +{ + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) + retbleed_cmd =3D RETBLEED_CMD_OFF; + else if (!strcmp(str, "auto")) + retbleed_cmd =3D RETBLEED_CMD_AUTO; + else + pr_err("Unknown retbleed option (%s). Defaulting to 'auto'\n", str); + + return 0; +} +early_param("retbleed", retbleed_parse_cmdline); + +#define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigati= on is only effective on AMD/Hygon!\n" +#define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE= or -mfunction-return capable compiler!\n" + +static void __init retbleed_select_mitigation(void) +{ + if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) + return; + + switch (retbleed_cmd) { + case RETBLEED_CMD_OFF: + return; + + case RETBLEED_CMD_AUTO: + default: + if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) + break; + + break; + } + + switch (retbleed_mitigation) { + default: + break; + } + + pr_info("%s\n", retbleed_strings[retbleed_mitigation]); +} + +#undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt =20 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =3D @@ -1913,7 +1985,7 @@ static ssize_t srbds_show_state(char *bu =20 static ssize_t retbleed_show_state(char *buf) { - return sprintf(buf, "Vulnerable\n"); + return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]); } =20 static ssize_t cpu_show_common(struct device *dev, struct device_attribute= *attr, From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B5C08C4332F for ; Wed, 5 Oct 2022 11:32:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229829AbiJELc6 (ORCPT ); Wed, 5 Oct 2022 07:32:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35348 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229502AbiJELc4 (ORCPT ); Wed, 5 Oct 2022 07:32:56 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6B4D72EF2; Wed, 5 Oct 2022 04:32:55 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4904461644; Wed, 5 Oct 2022 11:32:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5B5F4C433D6; Wed, 5 Oct 2022 11:32:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969574; bh=fwgReMXDdknhWnQB/FEkL8D+3b6eFuhMNw36AAKop58=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WNYjqK7Q9vjgr1MjtHRpp2YaIZprD2bbjc1Qfn9AwD+f36QHqjQZTcuUh9rEbUcAI AJRf+zJuseiIpPVrQfQnD6cF12jDotx/7H63cEiIohDMDjuVnTg5436gQsQSKO6Xus V3URVkezyWk2iIvuS3M/U4Lv2wHK/qmza3pA72Vc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 10/51] x86/bugs: Keep a per-CPU IA32_SPEC_CTRL value Date: Wed, 5 Oct 2022 13:31:58 +0200 Message-Id: <20221005113210.743950869@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit caa0ff24d5d0e02abce5e65c3d2b7f20a6617be5 upstream. Due to TIF_SSBD and TIF_SPEC_IB the actual IA32_SPEC_CTRL value can differ from x86_spec_ctrl_base. As such, keep a per-CPU value reflecting the current task's MSR content. [jpoimboe: rename] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 28 +++++++++++++++++++++++----- arch/x86/kernel/process.c | 2 +- 3 files changed, 25 insertions(+), 6 deletions(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -297,6 +297,7 @@ static inline void indirect_branch_predi =20 /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; +extern void write_spec_ctrl_current(u64 val); =20 /* * With retpoline, we must use IBRS to restrict branch prediction --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -47,12 +47,30 @@ static void __init taa_select_mitigation static void __init mmio_select_mitigation(void); static void __init srbds_select_mitigation(void); =20 -/* The base value of the SPEC_CTRL MSR that always has to be preserved. */ +/* The base value of the SPEC_CTRL MSR without task-specific bits set */ u64 x86_spec_ctrl_base; EXPORT_SYMBOL_GPL(x86_spec_ctrl_base); + +/* The current value of the SPEC_CTRL MSR with task-specific bits set */ +DEFINE_PER_CPU(u64, x86_spec_ctrl_current); +EXPORT_SYMBOL_GPL(x86_spec_ctrl_current); + static DEFINE_MUTEX(spec_ctrl_mutex); =20 /* + * Keep track of the SPEC_CTRL MSR value for the current task, which may d= iffer + * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update(). + */ +void write_spec_ctrl_current(u64 val) +{ + if (this_cpu_read(x86_spec_ctrl_current) =3D=3D val) + return; + + this_cpu_write(x86_spec_ctrl_current, val); + wrmsrl(MSR_IA32_SPEC_CTRL, val); +} + +/* * The vendor and possibly platform specific bits which can be modified in * x86_spec_ctrl_base. 
*/ @@ -1177,7 +1195,7 @@ static void __init spectre_v2_select_mit if (spectre_v2_in_eibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |=3D SPEC_CTRL_IBRS; - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } =20 switch (mode) { @@ -1232,7 +1250,7 @@ static void __init spectre_v2_select_mit =20 static void update_stibp_msr(void * __unused) { - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } =20 /* Update x86_spec_ctrl_base in case SMT state changed. */ @@ -1475,7 +1493,7 @@ static enum ssb_mitigation __init __ssb_ x86_amd_ssb_disable(); } else { x86_spec_ctrl_base |=3D SPEC_CTRL_SSBD; - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } } =20 @@ -1692,7 +1710,7 @@ int arch_prctl_spec_ctrl_get(struct task void x86_spec_ctrl_setup_ap(void) { if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); =20 if (ssb_mode =3D=3D SPEC_STORE_BYPASS_DISABLE) x86_amd_ssb_disable(); --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -449,7 +449,7 @@ static __always_inline void __speculatio } =20 if (updmsr) - wrmsrl(MSR_IA32_SPEC_CTRL, msr); + write_spec_ctrl_current(msr); } =20 static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D47CC4332F for ; Wed, 5 Oct 2022 11:39:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230358AbiJELjO (ORCPT ); Wed, 5 Oct 2022 07:39:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230182AbiJELiV (ORCPT ); Wed, 5 Oct 2022 07:38:21 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0FE033C176; Wed, 5 Oct 2022 04:35:18 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 67F6A61650; Wed, 5 Oct 2022 11:34:47 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 79408C433C1; Wed, 5 Oct 2022 11:34:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969686; bh=jptJQbqQkDR9voAeaKmIiDtDOa/Eemt4DuBNEaUezZs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WCZAi3/LEHjM/mwt5bb7hzDNoJFvQB/iylil8xS+D+ioU+yeVzhuL2Fka1ZyUsD8h 3FEOtADKpKmxEzS+WXKR8f3O6QEtnKc8+HQlDp5DwBdERW8F8QXaJbhRWql1DSZhhb 2ebl7q6xol9u0FA5ZNkttGVGjgc4l6kWdrbAYQCI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 11/51] x86/entry: Remove skip_r11rcx Date: Wed, 5 Oct 2022 13:31:59 +0200 Message-Id: <20221005113210.784576725@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 
1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit 1b331eeea7b8676fc5dbdf80d0a07e41be226177 upstream. Yes, r11 and rcx have been restored previously, but since they're being popped anyway (into rsi) might as well pop them into their own regs -- setting them to the value they already are. Less magical code. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20220506121631.365070674@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/entry/calling.h | 10 +--------- arch/x86/entry/entry_64.S | 3 +-- 2 files changed, 2 insertions(+), 11 deletions(-) --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -146,27 +146,19 @@ For 32-bit we have the following convent =20 .endm =20 -.macro POP_REGS pop_rdi=3D1 skip_r11rcx=3D0 +.macro POP_REGS pop_rdi=3D1 popq %r15 popq %r14 popq %r13 popq %r12 popq %rbp popq %rbx - .if \skip_r11rcx - popq %rsi - .else popq %r11 - .endif popq %r10 popq %r9 popq %r8 popq %rax - .if \skip_r11rcx - popq %rsi - .else popq %rcx - .endif popq %rdx popq %rsi .if \pop_rdi --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -248,8 +248,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe) * perf profiles. Nothing jumps here. */ syscall_return_via_sysret: - /* rcx and r11 are already restored (see code above) */ - POP_REGS pop_rdi=3D0 skip_r11rcx=3D1 + POP_REGS pop_rdi=3D0 =20 /* * Now all regs are restored except RSP and RDI. From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50032C433FE for ; Wed, 5 Oct 2022 11:34:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230050AbiJELd6 (ORCPT ); Wed, 5 Oct 2022 07:33:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35856 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229957AbiJELdg (ORCPT ); Wed, 5 Oct 2022 07:33:36 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 49FC574DEA; Wed, 5 Oct 2022 04:33:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id B49716164D; Wed, 5 Oct 2022 11:33:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id C31EAC433D6; Wed, 5 Oct 2022 11:33:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969601; bh=QMKnccYRFqomZwcUmCSmqJ4solcNMys4u6l5Ry4XS5k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fsRpkElHGo+qNzmEFwCMMzaQWmDdrDZGO4Nh9qHGZvkPfdTm7w751gkFioZLiRPXe YFmIeXVZfYp/+M/Ytt4jBrjxJI5MPTnmcVcNDOCW5o6CmWJS+NJhjqEIQh+7i44Nf6 MLQyiv/EOQcTaKxBCPizrCM1s6kMHmE0dt+OqKZk= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza 
Cascardo Subject: [PATCH 5.4 12/51] x86/entry: Add kernel IBRS implementation Date: Wed, 5 Oct 2022 13:32:00 +0200 Message-Id: <20221005113210.834274185@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit 2dbb887e875b1de3ca8f40ddf26bcfe55798c609 upstream. Implement Kernel IBRS - currently the only known option to mitigate RSB underflow speculation issues on Skylake hardware. Note: since IBRS_ENTER requires fuller context established than UNTRAIN_RET, it must be placed after it. However, since UNTRAIN_RET itself implies a RET, it must come after IBRS_ENTER. This means IBRS_ENTER needs to also move UNTRAIN_RET. Note 2: KERNEL_IBRS is sub-optimal for XenPV. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflict at arch/x86/entry/entry_64.S, skip_r11rcx] [cascardo: conflict at arch/x86/entry/entry_64_compat.S] [cascardo: conflict fixups, no ANNOTATE_NOENDBR] [cascardo: entry fixups because of missing UNTRAIN_RET] [cascardo: conflicts on fsgsbase] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/entry/calling.h | 58 ++++++++++++++++++++++++++++++++= +++++ arch/x86/entry/entry_64.S | 29 +++++++++++++++++- arch/x86/entry/entry_64_compat.S | 11 ++++++- arch/x86/include/asm/cpufeatures.h | 2 - 4 files changed, 97 insertions(+), 3 deletions(-) --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -6,6 +6,8 @@ #include #include #include +#include +#include =20 /* =20 @@ -309,6 +311,62 @@ For 32-bit we have the following convent #endif =20 /* + * IBRS kernel mitigation for Spectre_v2. + * + * Assumes full context is established (PUSH_REGS, CR3 and GS) and it clob= bers + * the regs it uses (AX, CX, DX). Must be called before the first RET + * instruction (NOTE! UNTRAIN_RET includes a RET instruction) + * + * The optional argument is used to save/restore the current value, + * which is used on the paranoid paths. + * + * Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set. + */ +.macro IBRS_ENTER save_reg + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS + movl $MSR_IA32_SPEC_CTRL, %ecx + +.ifnb \save_reg + rdmsr + shl $32, %rdx + or %rdx, %rax + mov %rax, \save_reg + test $SPEC_CTRL_IBRS, %eax + jz .Ldo_wrmsr_\@ + lfence + jmp .Lend_\@ +.Ldo_wrmsr_\@: +.endif + + movq PER_CPU_VAR(x86_spec_ctrl_current), %rdx + movl %edx, %eax + shr $32, %rdx + wrmsr +.Lend_\@: +.endm + +/* + * Similar to IBRS_ENTER, requires KERNEL GS,CR3 and clobbers (AX, CX, DX) + * regs. Must be called after the last RET. + */ +.macro IBRS_EXIT save_reg + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS + movl $MSR_IA32_SPEC_CTRL, %ecx + +.ifnb \save_reg + mov \save_reg, %rdx +.else + movq PER_CPU_VAR(x86_spec_ctrl_current), %rdx + andl $(~SPEC_CTRL_IBRS), %edx +.endif + + movl %edx, %eax + shr $32, %rdx + wrmsr +.Lend_\@: +.endm + +/* * Mitigate Spectre v1 for conditional swapgs code paths. 
* * FENCE_SWAPGS_USER_ENTRY is used in the user entry swapgs code path, to --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -172,6 +172,10 @@ GLOBAL(entry_SYSCALL_64_after_hwframe) /* IRQs are off. */ movq %rax, %rdi movq %rsp, %rsi + + /* clobbers %rax, make sure it is after saving the syscall nr */ + IBRS_ENTER + call do_syscall_64 /* returns with IRQs disabled */ =20 TRACE_IRQS_IRETQ /* we're about to change IF */ @@ -248,6 +252,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe) * perf profiles. Nothing jumps here. */ syscall_return_via_sysret: + IBRS_EXIT POP_REGS pop_rdi=3D0 =20 /* @@ -621,6 +626,7 @@ GLOBAL(retint_user) TRACE_IRQS_IRETQ =20 GLOBAL(swapgs_restore_regs_and_return_to_usermode) + IBRS_EXIT #ifdef CONFIG_DEBUG_ENTRY /* Assert that pt_regs indicates user mode. */ testb $3, CS(%rsp) @@ -1247,7 +1253,13 @@ ENTRY(paranoid_entry) */ FENCE_SWAPGS_KERNEL_ENTRY =20 - ret + /* + * Once we have CR3 and %GS setup save and set SPEC_CTRL. Just like + * CR3 above, keep the old value in a callee saved register. + */ + IBRS_ENTER save_reg=3D%r15 + + RET END(paranoid_entry) =20 /* @@ -1275,12 +1287,20 @@ ENTRY(paranoid_exit) jmp .Lparanoid_exit_restore .Lparanoid_exit_no_swapgs: TRACE_IRQS_IRETQ_DEBUG + + /* + * Must restore IBRS state before both CR3 and %GS since we need access + * to the per-CPU x86_spec_ctrl_shadow variable. + */ + IBRS_EXIT save_reg=3D%r15 + /* Always restore stashed CR3 value (see paranoid_entry) */ RESTORE_CR3 scratch_reg=3D%rbx save_reg=3D%r14 .Lparanoid_exit_restore: jmp restore_regs_and_return_to_kernel END(paranoid_exit) =20 + /* * Save all registers in pt_regs, and switch GS if needed. */ @@ -1300,6 +1320,7 @@ ENTRY(error_entry) FENCE_SWAPGS_USER_ENTRY /* We have user CR3. Change to kernel CR3. */ SWITCH_TO_KERNEL_CR3 scratch_reg=3D%rax + IBRS_ENTER =20 .Lerror_entry_from_usermode_after_swapgs: /* Put us onto the real thread stack. */ @@ -1355,6 +1376,7 @@ ENTRY(error_entry) SWAPGS FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=3D%rax + IBRS_ENTER =20 /* * Pretend that the exception came from user mode: set up pt_regs @@ -1460,6 +1482,8 @@ ENTRY(nmi) PUSH_AND_CLEAR_REGS rdx=3D(%rdx) ENCODE_FRAME_POINTER =20 + IBRS_ENTER + /* * At this point we no longer need to worry about stack damage * due to nesting -- we're on the normal thread stack and we're @@ -1683,6 +1707,9 @@ end_repeat_nmi: movq $-1, %rsi call do_nmi =20 + /* Always restore stashed SPEC_CTRL value (see paranoid_entry) */ + IBRS_EXIT save_reg=3D%r15 + /* Always restore stashed CR3 value (see paranoid_entry) */ RESTORE_CR3 scratch_reg=3D%r15 save_reg=3D%r14 =20 --- a/arch/x86/entry/entry_64_compat.S +++ b/arch/x86/entry/entry_64_compat.S @@ -4,7 +4,6 @@ * * Copyright 2000-2002 Andi Kleen, SuSE Labs. */ -#include "calling.h" #include #include #include @@ -17,6 +16,8 @@ #include #include =20 +#include "calling.h" + .section .entry.text, "ax" =20 /* @@ -106,6 +107,8 @@ ENTRY(entry_SYSENTER_compat) xorl %r15d, %r15d /* nospec r15 */ cld =20 + IBRS_ENTER + /* * SYSENTER doesn't filter flags, so we need to clear NT and AC * ourselves. To save a few cycles, we can check whether @@ -253,6 +256,8 @@ GLOBAL(entry_SYSCALL_compat_after_hwfram */ TRACE_IRQS_OFF =20 + IBRS_ENTER + movq %rsp, %rdi call do_fast_syscall_32 /* XEN PV guests always use IRET path */ @@ -267,6 +272,9 @@ sysret32_from_system_call: */ STACKLEAK_ERASE TRACE_IRQS_ON /* User mode traces as IRQs on. 
*/ + + IBRS_EXIT + movq RBX(%rsp), %rbx /* pt_regs->rbx */ movq RBP(%rsp), %rbp /* pt_regs->rbp */ movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */ @@ -408,6 +416,7 @@ ENTRY(entry_INT80_compat) * gate turned them off. */ TRACE_IRQS_OFF + IBRS_ENTER =20 movq %rsp, %rdi call do_int80_syscall_32 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -203,7 +203,7 @@ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface = */ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enable= d */ -/* FREE! ( 7*32+12) */ +#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel= entry/exit */ /* FREE! ( 7*32+13) */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Nu= mber */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 = */ From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3F31DC433F5 for ; Wed, 5 Oct 2022 11:35:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229969AbiJELf3 (ORCPT ); Wed, 5 Oct 2022 07:35:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36968 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229912AbiJELek (ORCPT ); Wed, 5 Oct 2022 07:34:40 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 39D2E78210; Wed, 5 Oct 2022 04:33:53 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 438E061644; Wed, 5 Oct 2022 11:33:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 53BF4C433D7; Wed, 5 Oct 2022 11:33:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969630; bh=jbG1Ybl3JuXXxmHTUGpeU2RNhI0WAwvWxKlYPQUTSOY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=xsTOVm+XrTkM8gBQgrLezdNl+Hht0peCV8tBzibrWbPpkHVKe10JE+4oa5KfPFNpb b+AlgI7JE051OGlniXx1abR48UMuaYGiVp2HT8+LEXnCWaMQfdNKeXd6qrEKBUlTPY jklo66OOKqq1Vj9CchSkhw8A5/jQ5N4RoxdtAbZY= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 13/51] x86/bugs: Optimize SPEC_CTRL MSR writes Date: Wed, 5 Oct 2022 13:32:01 +0200 Message-Id: <20221005113210.877158014@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit c779bc1a9002fa474175b80e72b85c9bf628abb0 upstream. When changing SPEC_CTRL for user control, the WRMSR can be delayed until return-to-user when KERNEL_IBRS has been enabled. This avoids an MSR write during context switch. 
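The mechanism is a shadow-and-defer pattern: compare against the cached per-CPU value, update the cache, and only touch the MSR when forced or when kernel IBRS will not rewrite it on return-to-user anyway. A reduced, compilable sketch of that pattern (names and the MSR write are stand-ins, not the kernel's):

#include <stdbool.h>
#include <stdint.h>

static uint64_t spec_ctrl_shadow;	/* stands in for x86_spec_ctrl_current */
static bool kernel_ibrs;		/* stands in for X86_FEATURE_KERNEL_IBRS */

static void msr_write(uint64_t val)	/* stands in for wrmsrl() */
{
	(void)val;
}

void set_spec_ctrl(uint64_t val, bool force)
{
	if (spec_ctrl_shadow == val)
		return;			/* unchanged: skip the write entirely */

	spec_ctrl_shadow = val;

	/*
	 * With kernel IBRS the MSR is rewritten on return-to-user, so a
	 * context-switch update can be deferred unless explicitly forced.
	 */
	if (force || !kernel_ibrs)
		msr_write(val);
}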
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 2 +- arch/x86/kernel/cpu/bugs.c | 18 ++++++++++++------ arch/x86/kernel/process.c | 2 +- 3 files changed, 14 insertions(+), 8 deletions(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -297,7 +297,7 @@ static inline void indirect_branch_predi =20 /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; -extern void write_spec_ctrl_current(u64 val); +extern void write_spec_ctrl_current(u64 val, bool force); =20 /* * With retpoline, we must use IBRS to restrict branch prediction --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -61,13 +61,19 @@ static DEFINE_MUTEX(spec_ctrl_mutex); * Keep track of the SPEC_CTRL MSR value for the current task, which may d= iffer * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update(). */ -void write_spec_ctrl_current(u64 val) +void write_spec_ctrl_current(u64 val, bool force) { if (this_cpu_read(x86_spec_ctrl_current) =3D=3D val) return; =20 this_cpu_write(x86_spec_ctrl_current, val); - wrmsrl(MSR_IA32_SPEC_CTRL, val); + + /* + * When KERNEL_IBRS this MSR is written on return-to-user, unless + * forced the update can be delayed until that time. + */ + if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS)) + wrmsrl(MSR_IA32_SPEC_CTRL, val); } =20 /* @@ -1195,7 +1201,7 @@ static void __init spectre_v2_select_mit if (spectre_v2_in_eibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |=3D SPEC_CTRL_IBRS; - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } =20 switch (mode) { @@ -1250,7 +1256,7 @@ static void __init spectre_v2_select_mit =20 static void update_stibp_msr(void * __unused) { - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } =20 /* Update x86_spec_ctrl_base in case SMT state changed. 
*/ @@ -1493,7 +1499,7 @@ static enum ssb_mitigation __init __ssb_ x86_amd_ssb_disable(); } else { x86_spec_ctrl_base |=3D SPEC_CTRL_SSBD; - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } } =20 @@ -1710,7 +1716,7 @@ int arch_prctl_spec_ctrl_get(struct task void x86_spec_ctrl_setup_ap(void) { if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); =20 if (ssb_mode =3D=3D SPEC_STORE_BYPASS_DISABLE) x86_amd_ssb_disable(); --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -449,7 +449,7 @@ static __always_inline void __speculatio } =20 if (updmsr) - write_spec_ctrl_current(msr); + write_spec_ctrl_current(msr, false); } =20 static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A8F2C433F5 for ; Wed, 5 Oct 2022 11:36:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230136AbiJELgk (ORCPT ); Wed, 5 Oct 2022 07:36:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35882 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229965AbiJELgE (ORCPT ); Wed, 5 Oct 2022 07:36:04 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E8E6E76969; Wed, 5 Oct 2022 04:34:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id C161A61668; Wed, 5 Oct 2022 11:34:20 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CB4C3C433C1; Wed, 5 Oct 2022 11:34:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969660; bh=Xfn+loM1qGjKqspSxcVOujpPvEEEGlLOmbdEjF93YkE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gqcr6pN/PoSiU7tOa4kM/j3lUCO10K042DrrCBmlSfohnAN35L0gnqtotsa+z/EE0 Vh2yZ2KRTanlg6nlJhXfZKwhPjBVntTQUviB692iFzIZ0IdgtwbWYtLUlCDfydNLyc PBR7VibroBuh4yafNw++nnPC1k8DZPkAIp0Xw6jE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Pawan Gupta , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 14/51] x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS Date: Wed, 5 Oct 2022 13:32:02 +0200 Message-Id: <20221005113210.929116747@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Pawan Gupta commit 7c693f54c873691a4b7da05c7e0f74e67745d144 upstream. Extend spectre_v2=3D boot option with Kernel IBRS. 
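The new value plugs into the existing mitigation_options[] string table that spectre_v2_parse_cmdline() walks. Schematically, the lookup is as below; this is a simplified sketch, not the exact kernel code -- the real version also rejects "ibrs" on non-Intel CPUs, on CPUs without X86_FEATURE_IBRS, and under XenPV, as the diff that follows shows:

#include <stdio.h>
#include <string.h>

enum v2_cmd { CMD_AUTO, CMD_OFF, CMD_RETPOLINE, CMD_EIBRS, CMD_IBRS };

static const struct {
	const char *option;
	enum v2_cmd cmd;
} mitigation_options[] = {
	{ "auto",      CMD_AUTO },
	{ "off",       CMD_OFF },
	{ "retpoline", CMD_RETPOLINE },
	{ "eibrs",     CMD_EIBRS },
	{ "ibrs",      CMD_IBRS },	/* the entry this patch adds */
};

enum v2_cmd parse_spectre_v2(const char *arg)
{
	size_t i;

	for (i = 0; i < sizeof(mitigation_options) / sizeof(mitigation_options[0]); i++)
		if (!strcmp(arg, mitigation_options[i].option))
			return mitigation_options[i].cmd;

	fprintf(stderr, "unknown spectre_v2 option (%s), using auto\n", arg);
	return CMD_AUTO;
}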
[jpoimboe: no STIBP with IBRS] Signed-off-by: Pawan Gupta Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- Documentation/admin-guide/kernel-parameters.txt | 1=20 arch/x86/include/asm/nospec-branch.h | 1=20 arch/x86/kernel/cpu/bugs.c | 66 ++++++++++++++++++-= ----- 3 files changed, 54 insertions(+), 14 deletions(-) --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4553,6 +4553,7 @@ eibrs - enhanced IBRS eibrs,retpoline - enhanced IBRS + Retpolines eibrs,lfence - enhanced IBRS + LFENCE + ibrs - use IBRS to protect kernel =20 Not specifying this option is equivalent to spectre_v2=3Dauto. --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -234,6 +234,7 @@ enum spectre_v2_mitigation { SPECTRE_V2_EIBRS, SPECTRE_V2_EIBRS_RETPOLINE, SPECTRE_V2_EIBRS_LFENCE, + SPECTRE_V2_IBRS, }; =20 /* The indirect branch speculation control variants */ --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -877,6 +877,7 @@ enum spectre_v2_mitigation_cmd { SPECTRE_V2_CMD_EIBRS, SPECTRE_V2_CMD_EIBRS_RETPOLINE, SPECTRE_V2_CMD_EIBRS_LFENCE, + SPECTRE_V2_CMD_IBRS, }; =20 enum spectre_v2_user_cmd { @@ -949,11 +950,12 @@ spectre_v2_parse_user_cmdline(enum spect return SPECTRE_V2_USER_CMD_AUTO; } =20 -static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mod= e) +static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode) { - return (mode =3D=3D SPECTRE_V2_EIBRS || - mode =3D=3D SPECTRE_V2_EIBRS_RETPOLINE || - mode =3D=3D SPECTRE_V2_EIBRS_LFENCE); + return mode =3D=3D SPECTRE_V2_IBRS || + mode =3D=3D SPECTRE_V2_EIBRS || + mode =3D=3D SPECTRE_V2_EIBRS_RETPOLINE || + mode =3D=3D SPECTRE_V2_EIBRS_LFENCE; } =20 static void __init @@ -1018,12 +1020,12 @@ spectre_v2_user_select_mitigation(enum s } =20 /* - * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not - * required. + * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible, + * STIBP is not required. */ if (!boot_cpu_has(X86_FEATURE_STIBP) || !smt_possible || - spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + spectre_v2_in_ibrs_mode(spectre_v2_enabled)) return; =20 /* @@ -1048,6 +1050,7 @@ static const char * const spectre_v2_str [SPECTRE_V2_EIBRS] =3D "Mitigation: Enhanced IBRS", [SPECTRE_V2_EIBRS_LFENCE] =3D "Mitigation: Enhanced IBRS + LFENCE", [SPECTRE_V2_EIBRS_RETPOLINE] =3D "Mitigation: Enhanced IBRS + Retpolines= ", + [SPECTRE_V2_IBRS] =3D "Mitigation: IBRS", }; =20 static const struct { @@ -1065,6 +1068,7 @@ static const struct { { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false }, { "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false }, { "auto", SPECTRE_V2_CMD_AUTO, false }, + { "ibrs", SPECTRE_V2_CMD_IBRS, false }, }; =20 static void __init spec_v2_print_cond(const char *reason, bool secure) @@ -1127,6 +1131,24 @@ static enum spectre_v2_mitigation_cmd __ return SPECTRE_V2_CMD_AUTO; } =20 + if (cmd =3D=3D SPECTRE_V2_CMD_IBRS && boot_cpu_data.x86_vendor !=3D X86_V= ENDOR_INTEL) { + pr_err("%s selected but not Intel CPU. 
Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + + if (cmd =3D=3D SPECTRE_V2_CMD_IBRS && !boot_cpu_has(X86_FEATURE_IBRS)) { + pr_err("%s selected but CPU doesn't have IBRS. Switching to AUTO select\= n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + + if (cmd =3D=3D SPECTRE_V2_CMD_IBRS && boot_cpu_has(X86_FEATURE_XENPV)) { + pr_err("%s selected but running as XenPV guest. Switching to AUTO select= \n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + spec_v2_print_cond(mitigation_options[i].option, mitigation_options[i].secure); return cmd; @@ -1166,6 +1188,14 @@ static void __init spectre_v2_select_mit break; } =20 + if (boot_cpu_has_bug(X86_BUG_RETBLEED) && + retbleed_cmd !=3D RETBLEED_CMD_OFF && + boot_cpu_has(X86_FEATURE_IBRS) && + boot_cpu_data.x86_vendor =3D=3D X86_VENDOR_INTEL) { + mode =3D SPECTRE_V2_IBRS; + break; + } + mode =3D spectre_v2_select_retpoline(); break; =20 @@ -1182,6 +1212,10 @@ static void __init spectre_v2_select_mit mode =3D spectre_v2_select_retpoline(); break; =20 + case SPECTRE_V2_CMD_IBRS: + mode =3D SPECTRE_V2_IBRS; + break; + case SPECTRE_V2_CMD_EIBRS: mode =3D SPECTRE_V2_EIBRS; break; @@ -1198,7 +1232,7 @@ static void __init spectre_v2_select_mit if (mode =3D=3D SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); =20 - if (spectre_v2_in_eibrs_mode(mode)) { + if (spectre_v2_in_ibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |=3D SPEC_CTRL_IBRS; write_spec_ctrl_current(x86_spec_ctrl_base, true); @@ -1209,6 +1243,10 @@ static void __init spectre_v2_select_mit case SPECTRE_V2_EIBRS: break; =20 + case SPECTRE_V2_IBRS: + setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS); + break; + case SPECTRE_V2_LFENCE: case SPECTRE_V2_EIBRS_LFENCE: setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE); @@ -1235,17 +1273,17 @@ static void __init spectre_v2_select_mit pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switc= h\n"); =20 /* - * Retpoline means the kernel is safe because it has no indirect - * branches. Enhanced IBRS protects firmware too, so, enable restricted - * speculation around firmware calls only when Enhanced IBRS isn't - * supported. + * Retpoline protects the kernel, but doesn't protect firmware. IBRS + * and Enhanced IBRS protect firmware too, so enable IBRS around + * firmware calls only when IBRS / Enhanced IBRS aren't otherwise + * enabled. * * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because * the user might select retpoline on the kernel command line and if * the CPU supports Enhanced IBRS, kernel might un-intentionally not * enable IBRS around firmware calls. 
*/ - if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) { + if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) { setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); pr_info("Enabling Restricted Speculation for firmware calls\n"); } @@ -1951,7 +1989,7 @@ static ssize_t mmio_stale_data_show_stat =20 static char *stibp_state(void) { - if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) return ""; =20 switch (spectre_v2_user_stibp) { From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 787E9C433FE for ; Wed, 5 Oct 2022 11:38:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230246AbiJELi0 (ORCPT ); Wed, 5 Oct 2022 07:38:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230242AbiJELhe (ORCPT ); Wed, 5 Oct 2022 07:37:34 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EEBB769F7E; Wed, 5 Oct 2022 04:35:02 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 16D73B81DB2; Wed, 5 Oct 2022 11:34:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 81D99C433D6; Wed, 5 Oct 2022 11:34:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969670; bh=gBiAvbUZKoEUICInTYMxi4aPA3uBRlGwkmB4wKU5hak=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=w2krV6qPh4/dlAE7yLARbhlBUnvIP8BDmPfXq3ryuHnjnfF9RcX0ppDKewQ6BEEy4 qMzqe2DhKHV3b3q/gwpmzGNsEMrrvNiCkCoG4xsWICNyAunXmnK9oFajnDiNAJ+8Yb 5sKPbC7IK57OlELhZe36TGhYi+5R1Pyca+Ebgxvc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 15/51] x86/bugs: Split spectre_v2_select_mitigation() and spectre_v2_user_select_mitigation() Date: Wed, 5 Oct 2022 13:32:03 +0200 Message-Id: <20221005113210.972035316@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit 166115c08a9b0b846b783088808a27d739be6e8d upstream. retbleed will depend on spectre_v2, while spectre_v2_user depends on retbleed. Break this cycle. 
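The fix is the usual way to break such an ordering cycle: split the second selection into two phases and park the intermediate result in a file-scope variable so the late phase can pick it up. In outline, the control flow becomes the following; this is a sketch with stub helpers, not the kernel code:

static int spectre_v2_cmd;	/* parked between the two phases */

static int parse_cmdline(void)	/* stub for the real cmdline parser */
{
	return 0;
}

static void spectre_v2_select(void)
{
	spectre_v2_cmd = parse_cmdline();
	/* ... choose the base spectre_v2 mode ... */
}

static void retbleed_select(void)
{
	/* may consult the spectre_v2 mode chosen above (spectre_v2=ibrs) */
}

static void spectre_v2_user_select(void)
{
	/* consumes spectre_v2_cmd plus the STIBP choice forced by retbleed */
}

void check_bugs_ordering(void)
{
	spectre_v2_select();		/* phase 1: base mode */
	retbleed_select();		/* depends on phase 1 */
	spectre_v2_user_select();	/* phase 2: depends on retbleed */
}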
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/bugs.c | 25 +++++++++++++++++-------- 1 file changed, 17 insertions(+), 8 deletions(-) --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -36,8 +36,9 @@ #include "cpu.h" =20 static void __init spectre_v1_select_mitigation(void); -static void __init retbleed_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); +static void __init retbleed_select_mitigation(void); +static void __init spectre_v2_user_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); @@ -136,13 +137,19 @@ void __init check_bugs(void) =20 /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); + spectre_v2_select_mitigation(); + /* + * retbleed_select_mitigation() relies on the state set by + * spectre_v2_select_mitigation(); specifically it wants to know about + * spectre_v2=3Dibrs. + */ retbleed_select_mitigation(); /* - * spectre_v2_select_mitigation() relies on the state set by + * spectre_v2_user_select_mitigation() relies on the state set by * retbleed_select_mitigation(); specifically the STIBP selection is * forced for UNRET. */ - spectre_v2_select_mitigation(); + spectre_v2_user_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); md_clear_select_mitigation(); @@ -918,13 +925,15 @@ static void __init spec_v2_user_print_co pr_info("spectre_v2_user=3D%s forced on command line.\n", reason); } =20 +static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd; + static enum spectre_v2_user_cmd __init -spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) +spectre_v2_parse_user_cmdline(void) { char arg[20]; int ret, i; =20 - switch (v2_cmd) { + switch (spectre_v2_cmd) { case SPECTRE_V2_CMD_NONE: return SPECTRE_V2_USER_CMD_NONE; case SPECTRE_V2_CMD_FORCE: @@ -959,7 +968,7 @@ static inline bool spectre_v2_in_ibrs_mo } =20 static void __init -spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) +spectre_v2_user_select_mitigation(void) { enum spectre_v2_user_mitigation mode =3D SPECTRE_V2_USER_NONE; bool smt_possible =3D IS_ENABLED(CONFIG_SMP); @@ -972,7 +981,7 @@ spectre_v2_user_select_mitigation(enum s cpu_smt_control =3D=3D CPU_SMT_NOT_SUPPORTED) smt_possible =3D false; =20 - cmd =3D spectre_v2_parse_user_cmdline(v2_cmd); + cmd =3D spectre_v2_parse_user_cmdline(); switch (cmd) { case SPECTRE_V2_USER_CMD_NONE: goto set_mode; @@ -1289,7 +1298,7 @@ static void __init spectre_v2_select_mit } =20 /* Set up IBPB and STIBP depending on the general spectre V2 command */ - spectre_v2_user_select_mitigation(cmd); + spectre_v2_cmd =3D cmd; } =20 static void update_stibp_msr(void * __unused) From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 067D4C43219 for ; Wed, 5 Oct 2022 11:38:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230148AbiJELit (ORCPT ); 
Wed, 5 Oct 2022 07:38:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37244 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230115AbiJELhh (ORCPT ); Wed, 5 Oct 2022 07:37:37 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 42037753A2; Wed, 5 Oct 2022 04:35:05 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id BDF70B81DB7; Wed, 5 Oct 2022 11:34:34 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1B57BC433D6; Wed, 5 Oct 2022 11:34:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969673; bh=TWQQPF2VMlo6ZXLojZa9kCcPFYvFrvU4d9wUG0EMQws=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mfDkY9nPYSSxoplEWYPXeouyCQY1tNYJH6IRaC5hXmr5hHESjMogWv3vi5YffmxS6 w6lfos5OnjtDDIesbq/9efZmLB1SbBwDNUIC3pHWt1rfm7EYtM/zQWvxGn8h6GNgAi 90tQcRAqIrmgkN8mo+xhBhaz2s07183Mu0xzyJJ0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 16/51] x86/bugs: Report Intel retbleed vulnerability Date: Wed, 5 Oct 2022 13:32:04 +0200 Message-Id: <20221005113211.010261288@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit 6ad0ad2bf8a67e27d1f9d006a1dabb0e1c360cc3 upstream. Skylake suffers from RSB underflow speculation issues; report this vulnerability and its mitigation (spectre_v2=3Dibrs).
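Detection-wise the change in cpu_set_bug_bits() reduces to "model is on the blacklist, or the CPU admits RSBA (RET may use the alternative predictor)". As a self-contained sketch -- the bit position matches the ARCH_CAP_RSBA definition in the diff below, while the blacklist helper stands in for cpu_matches():

#include <stdbool.h>
#include <stdint.h>

#define ARCH_CAP_RSBA (1ULL << 2)	/* MSR_IA32_ARCH_CAPABILITIES bit 2 */

static bool on_retbleed_blacklist(void)	/* stands in for cpu_matches() */
{
	return false;
}

bool cpu_has_retbleed_bug(uint64_t ia32_cap)
{
	/* Listed steppings, or any part whose RETs may fall back to the
	 * alternative (RSB) predictor, get X86_BUG_RETBLEED set. */
	return on_retbleed_blacklist() || (ia32_cap & ARCH_CAP_RSBA);
}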
[jpoimboe: cleanups, eibrs] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/kernel/cpu/bugs.c | 36 +++++++++++++++++++++++++++++++---= -- arch/x86/kernel/cpu/common.c | 24 ++++++++++++------------ 3 files changed, 44 insertions(+), 17 deletions(-) --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -82,6 +82,7 @@ #define MSR_IA32_ARCH_CAPABILITIES 0x0000010a #define ARCH_CAP_RDCL_NO BIT(0) /* Not susceptible to Meltdown */ #define ARCH_CAP_IBRS_ALL BIT(1) /* Enhanced IBRS support */ +#define ARCH_CAP_RSBA BIT(2) /* RET may use alternative branch predictor= s */ #define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH BIT(3) /* Skip L1D flush on vmentry= */ #define ARCH_CAP_SSB_NO BIT(4) /* * Not susceptible to Speculative Store Bypass --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -743,11 +743,16 @@ static int __init nospectre_v1_cmdline(c } early_param("nospectre_v1", nospectre_v1_cmdline); =20 +static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =3D + SPECTRE_V2_NONE; + #undef pr_fmt #define pr_fmt(fmt) "RETBleed: " fmt =20 enum retbleed_mitigation { RETBLEED_MITIGATION_NONE, + RETBLEED_MITIGATION_IBRS, + RETBLEED_MITIGATION_EIBRS, }; =20 enum retbleed_mitigation_cmd { @@ -757,6 +762,8 @@ enum retbleed_mitigation_cmd { =20 const char * const retbleed_strings[] =3D { [RETBLEED_MITIGATION_NONE] =3D "Vulnerable", + [RETBLEED_MITIGATION_IBRS] =3D "Mitigation: IBRS", + [RETBLEED_MITIGATION_EIBRS] =3D "Mitigation: Enhanced IBRS", }; =20 static enum retbleed_mitigation retbleed_mitigation __ro_after_init =3D @@ -782,6 +789,7 @@ early_param("retbleed", retbleed_parse_c =20 #define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigati= on is only effective on AMD/Hygon!\n" #define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE= or -mfunction-return capable compiler!\n" +#define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vuln= erable to RETBleed attacks, data leaks possible!\n" =20 static void __init retbleed_select_mitigation(void) { @@ -794,8 +802,10 @@ static void __init retbleed_select_mitig =20 case RETBLEED_CMD_AUTO: default: - if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) - break; + /* + * The Intel mitigation (IBRS) was already selected in + * spectre_v2_select_mitigation(). + */ =20 break; } @@ -805,15 +815,31 @@ static void __init retbleed_select_mitig break; } =20 + /* + * Let IBRS trump all on Intel without affecting the effects of the + * retbleed=3D cmdline option. 
+ */ + if (boot_cpu_data.x86_vendor =3D=3D X86_VENDOR_INTEL) { + switch (spectre_v2_enabled) { + case SPECTRE_V2_IBRS: + retbleed_mitigation =3D RETBLEED_MITIGATION_IBRS; + break; + case SPECTRE_V2_EIBRS: + case SPECTRE_V2_EIBRS_RETPOLINE: + case SPECTRE_V2_EIBRS_LFENCE: + retbleed_mitigation =3D RETBLEED_MITIGATION_EIBRS; + break; + default: + pr_err(RETBLEED_INTEL_MSG); + } + } + pr_info("%s\n", retbleed_strings[retbleed_mitigation]); } =20 #undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt =20 -static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =3D - SPECTRE_V2_NONE; - static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_in= it =3D SPECTRE_V2_USER_NONE; static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_ini= t =3D --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1131,24 +1131,24 @@ static const struct x86_cpu_id cpu_vuln_ VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO |= RETBLEED), VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | - BIT(7) | BIT(0xB), MMIO), - VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), + BIT(7) | BIT(0xB), MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | = RETBLEED), VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO = | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO | = RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SB= DS), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SB= DS | RETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x1, 0x1), MMIO), VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_S= BDS), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_= SBDS), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SB= DS), - VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_S= BDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_= SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBL= EED), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SB= DS | RETBLEED), + VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO | RETBLE= ED), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO= _SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MM= IO_SBDS), @@ -1264,7 +1264,7 @@ static void __init 
cpu_set_bug_bits(stru setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN); } =20 - if (cpu_matches(cpu_vuln_blacklist, RETBLEED)) + if ((cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RS= BA))) setup_force_cpu_bug(X86_BUG_RETBLEED); =20 if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66CC1C433F5 for ; Wed, 5 Oct 2022 11:39:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230250AbiJELi6 (ORCPT ); Wed, 5 Oct 2022 07:38:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36556 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230159AbiJELhi (ORCPT ); Wed, 5 Oct 2022 07:37:38 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEF3B754AA; Wed, 5 Oct 2022 04:35:06 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 6BA29B81DB9; Wed, 5 Oct 2022 11:34:37 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id CF1F8C433D6; Wed, 5 Oct 2022 11:34:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969676; bh=xm2Rn13H63GtVpbO2o7PmBE4U37Apjse9AqAqx3GU8I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HaShqkuo+wYA4DPnant4sSvAqgrKnDSviRjVVmUxgDlg+PnF50PlkT7RLmLw1P/NY xlH6/hpZZ+KUTA+IsigQFM8OURzVNReOnfjcdVpkEKqMHmHdGnOV0IU/NJQNeCsAJH jIc30Bf+JxwPMjIGqend2eZpPMKM0XcXVU6qon5U= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Tim Chen , "Peter Zijlstra (Intel)" , Borislav Petkov , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 17/51] intel_idle: Disable IBRS during long idle Date: Wed, 5 Oct 2022 13:32:05 +0200 Message-Id: <20221005113211.059678801@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit bf5835bcdb9635c97f85120dba9bfa21e111130f upstream. Having IBRS enabled while the SMT sibling is idle unnecessarily slows down the running sibling. OTOH, disabling IBRS around idle takes two MSR writes, which will increase the idle latency. Therefore, only disable IBRS around deeper idle states. Shallow idle states are bounded by the tick in duration, since NOHZ is not allowed for them by virtue of their short target residency. Only do this for mwait-driven idle, since that keeps interrupts disabled across idle, which makes disabling IBRS vs IRQ-entry a non-issue. Note: C6 is a random threshold, most importantly C1 probably shouldn't disable IBRS, benchmarking needed. 
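The wrapper this patch introduces is small enough to restate in full. The following is the intel_idle_ibrs() from the diff below, with explanatory comments added; sched_smt_active(), spec_ctrl_current() and wrmsrl() are the kernel helpers the real code uses.

static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev,
				     struct cpuidle_driver *drv, int index)
{
	bool smt_active = sched_smt_active();	/* is there a sibling to speed up? */
	u64 spec_ctrl = spec_ctrl_current();	/* cached host SPEC_CTRL value */
	int ret;

	/* First MSR write: drop IBRS so the sibling runs unrestricted. */
	if (smt_active)
		wrmsrl(MSR_IA32_SPEC_CTRL, 0);

	ret = intel_idle(dev, drv, index);	/* the ordinary MWAIT idle entry */

	/* Second MSR write: re-enable IBRS before running kernel code. */
	if (smt_active)
		wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl);

	return ret;
}

Those two wrmsrl() calls are the "two MSR writes" weighed against the sibling slowdown above, which is why only the deeper C-states get the new flag.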
Suggested-by: Tim Chen Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: no CPUIDLE_FLAG_IRQ_ENABLE] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [cascardo: context adjustments] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 1=20 arch/x86/kernel/cpu/bugs.c | 6 ++++ drivers/idle/intel_idle.c | 43 ++++++++++++++++++++++++++++++= ----- 3 files changed, 44 insertions(+), 6 deletions(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -299,6 +299,7 @@ static inline void indirect_branch_predi /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; extern void write_spec_ctrl_current(u64 val, bool force); +extern u64 spec_ctrl_current(void); =20 /* * With retpoline, we must use IBRS to restrict branch prediction --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -77,6 +77,12 @@ void write_spec_ctrl_current(u64 val, bo wrmsrl(MSR_IA32_SPEC_CTRL, val); } =20 +u64 spec_ctrl_current(void) +{ + return this_cpu_read(x86_spec_ctrl_current); +} +EXPORT_SYMBOL_GPL(spec_ctrl_current); + /* * The vendor and possibly platform specific bits which can be modified in * x86_spec_ctrl_base. --- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -46,11 +46,13 @@ #include #include #include +#include #include #include #include #include #include +#include #include #include =20 @@ -98,6 +100,12 @@ static struct cpuidle_state *cpuidle_sta #define CPUIDLE_FLAG_TLB_FLUSHED 0x10000 =20 /* + * Disable IBRS across idle (when KERNEL_IBRS), is exclusive vs IRQ_ENABLE + * above. + */ +#define CPUIDLE_FLAG_IBRS BIT(16) + +/* * MWAIT takes an 8-bit "hint" in EAX "suggesting" * the C-state (top nibble) and sub-state (bottom nibble) * 0x00 means "MWAIT(C1)", 0x10 means "MWAIT(C2)" etc. @@ -107,6 +115,24 @@ static struct cpuidle_state *cpuidle_sta #define flg2MWAIT(flags) (((flags) >> 24) & 0xFF) #define MWAIT2flg(eax) ((eax & 0xFF) << 24) =20 +static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) +{ + bool smt_active =3D sched_smt_active(); + u64 spec_ctrl =3D spec_ctrl_current(); + int ret; + + if (smt_active) + wrmsrl(MSR_IA32_SPEC_CTRL, 0); + + ret =3D intel_idle(dev, drv, index); + + if (smt_active) + wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl); + + return ret; +} + /* * States are indexed by the cstate number, * which is also the index into the MWAIT hint array. 
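As a quick aside, the hint packing used by these state tables can be checked with a standalone sketch; the two macro bodies are copied verbatim from above, and 0x20 is the C6 hint from the skl_cstates entries that follow.

#include <stdio.h>

#define MWAIT2flg(eax)		((eax & 0xFF) << 24)
#define flg2MWAIT(flags)	(((flags) >> 24) & 0xFF)

int main(void)
{
	unsigned int flags = MWAIT2flg(0x20);	/* skl C6: "MWAIT 0x20" */

	/*
	 * The hint occupies the top byte of .flags, leaving the low bits
	 * free for CPUIDLE_FLAG_TLB_FLUSHED, the new CPUIDLE_FLAG_IBRS
	 * (BIT(16)), and friends.
	 */
	printf("flags = %#010x, hint = %#04x\n", flags, flg2MWAIT(flags));
	/* prints: flags = 0x20000000, hint = 0x20 */
	return 0;
}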
@@ -605,7 +631,7 @@ static struct cpuidle_state skl_cstates[ { .name =3D "C6", .desc =3D "MWAIT 0x20", - .flags =3D MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags =3D MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBR= S, .exit_latency =3D 85, .target_residency =3D 200, .enter =3D &intel_idle, @@ -613,7 +639,7 @@ static struct cpuidle_state skl_cstates[ { .name =3D "C7s", .desc =3D "MWAIT 0x33", - .flags =3D MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags =3D MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBR= S, .exit_latency =3D 124, .target_residency =3D 800, .enter =3D &intel_idle, @@ -621,7 +647,7 @@ static struct cpuidle_state skl_cstates[ { .name =3D "C8", .desc =3D "MWAIT 0x40", - .flags =3D MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags =3D MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBR= S, .exit_latency =3D 200, .target_residency =3D 800, .enter =3D &intel_idle, @@ -629,7 +655,7 @@ static struct cpuidle_state skl_cstates[ { .name =3D "C9", .desc =3D "MWAIT 0x50", - .flags =3D MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags =3D MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBR= S, .exit_latency =3D 480, .target_residency =3D 5000, .enter =3D &intel_idle, @@ -637,7 +663,7 @@ static struct cpuidle_state skl_cstates[ { .name =3D "C10", .desc =3D "MWAIT 0x60", - .flags =3D MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags =3D MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBR= S, .exit_latency =3D 890, .target_residency =3D 5000, .enter =3D &intel_idle, @@ -666,7 +692,7 @@ static struct cpuidle_state skx_cstates[ { .name =3D "C6", .desc =3D "MWAIT 0x20", - .flags =3D MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags =3D MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBR= S, .exit_latency =3D 133, .target_residency =3D 600, .enter =3D &intel_idle, @@ -1370,6 +1396,11 @@ static void __init intel_idle_cpuidle_dr drv->states[drv->state_count] =3D /* structure copy */ cpuidle_state_table[cstate]; =20 + if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) && + cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IBRS) { + drv->states[drv->state_count].enter =3D intel_idle_ibrs; + } + drv->state_count +=3D 1; } From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 697C8C43217 for ; Wed, 5 Oct 2022 11:37:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230232AbiJELhN (ORCPT ); Wed, 5 Oct 2022 07:37:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36602 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230209AbiJELgo (ORCPT ); Wed, 5 Oct 2022 07:36:44 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE60C64F9; Wed, 5 Oct 2022 04:34:39 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 740CA6164C; Wed, 5 Oct 2022 11:34:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 82E76C433D6; Wed, 5 Oct 2022 11:34:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; 
t=1664969678; bh=IkNFUpN+oIlLLes/MvBToZUHdVJXZqhbvojCrc7rbHY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=vyAsjQCbO2zbLTEWYN8ynXGLIl4Rgfjb3qhJFxAA2adLr1PjHZtumERjKIGhBGGH+ iMMDJtCubAbDjZBHOg2n9fDuKzfvQcjwwMtBQmwaPRzauy7EYMV2sPGmgWNNeNsiNK 8NoMxawe9ClMbj0qW2FzMPPfRmb3TgqLVyi4zsT4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Alexandre Chartre , Josh Poimboeuf , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 18/51] x86/speculation: Change FILL_RETURN_BUFFER to work with objtool Date: Wed, 5 Oct 2022 13:32:06 +0200 Message-Id: <20221005113211.104615605@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit 089dd8e53126ebaf506e2dc0bf89d652c36bfc12 upstream. Change FILL_RETURN_BUFFER so that objtool groks it and can generate correct ORC unwind information. - Since ORC is alternative invariant; that is, all alternatives should have the same ORC entries, the __FILL_RETURN_BUFFER body can not be part of an alternative. Therefore, move it out of the alternative and keep the alternative as a sort of jump_label around it. - Use the ANNOTATE_INTRA_FUNCTION_CALL annotation to white-list these 'funny' call instructions to nowhere. - Use UNWIND_HINT_EMPTY to 'fill' the speculation traps, otherwise objtool will consider them unreachable. - Move the RSP adjustment into the loop, such that the loop has a deterministic stack layout. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Alexandre Chartre Acked-by: Josh Poimboeuf Link: https://lkml.kernel.org/r/20200428191700.032079304@infradead.org [cascardo: fixup because of backport of ba6e31af2be96c4d0536f2152ed6f7b6c11= bca47 ("x86/speculation: Add LFENCE to RSB fill sequence")] [cascardo: no intra-function call validation support] [cascardo: avoid UNWIND_HINT_EMPTY because of svm] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -4,11 +4,13 @@ #define _ASM_X86_NOSPEC_BRANCH_H_ =20 #include +#include =20 #include #include #include #include +#include =20 /* * This should be used immediately before a retpoline alternative. 
It tells @@ -60,9 +62,9 @@ lfence; \ jmp 775b; \ 774: \ + add $(BITS_PER_LONG/8) * 2, sp; \ dec reg; \ jnz 771b; \ - add $(BITS_PER_LONG/8) * nr, sp; \ /* barrier for jnz misprediction */ \ lfence; #else @@ -154,10 +156,8 @@ */ .macro FILL_RETURN_BUFFER reg:req nr:req ftr:req #ifdef CONFIG_RETPOLINE - ANNOTATE_NOSPEC_ALTERNATIVE - ALTERNATIVE "jmp .Lskip_rsb_\@", \ - __stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)) \ - \ftr + ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr + __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP) .Lskip_rsb_\@: #endif .endm From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70F04C433F5 for ; Wed, 5 Oct 2022 11:39:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230105AbiJELjC (ORCPT ); Wed, 5 Oct 2022 07:39:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35830 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230264AbiJELiH (ORCPT ); Wed, 5 Oct 2022 07:38:07 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 447F97756F; Wed, 5 Oct 2022 04:35:09 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id B758EB81DB3; Wed, 5 Oct 2022 11:34:42 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 20223C433D6; Wed, 5 Oct 2022 11:34:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969681; bh=Ty9DMkp2Gc+DjuIrWflWxWL4ZC368ftDbmoQi/qCS/8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GcS7EAaN0Yfj/ldQHm2x91+qeWeYsIjG1uyn+uM7cpdpUwPjiWUxaFEvFSIpEWUuJ 9C/dlb0EwovQWA4bux1Txf8cf0TjnoYuHQS4rkiZfSVD/k+f+WaoQl5yolxtp05AQO ubqS2o+7ybEgZy3I3ecrH8Q3+Qz8akEexN+iGoa8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 19/51] x86/speculation: Fix RSB filling with CONFIG_RETPOLINE=n Date: Wed, 5 Oct 2022 13:32:07 +0200 Message-Id: <20221005113211.151979510@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit b2620facef4889fefcbf2e87284f34dcd4189bce upstream. If a kernel is built with CONFIG_RETPOLINE=3Dn, but the user still wants to mitigate Spectre v2 using IBRS or eIBRS, the RSB filling will be silently disabled. There's nothing retpoline-specific about RSB buffer filling. Remove the CONFIG_RETPOLINE guards around it. 
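A minimal model of the bug class being fixed here, not the kernel code itself: the mitigation was emitted under a compile-time guard even though whether it is needed is a runtime property, which the patched-in alternative already checks.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the X86_FEATURE_RSB_CTXSW alternative patched at boot;
 * IBRS and eIBRS kernels set it just as retpoline kernels do */
static bool rsb_ctxsw_enabled = true;

static void switch_to_asm(bool built_with_retpoline)
{
	/* old: FILL_RETURN_BUFFER only existed #ifdef CONFIG_RETPOLINE,
	 * so a CONFIG_RETPOLINE=n kernel using IBRS silently lost it */
	if (built_with_retpoline && rsb_ctxsw_enabled)
		puts("old kernel: RSB filled");

	/* new: always emitted; the runtime feature bit alone decides */
	if (rsb_ctxsw_enabled)
		puts("fixed kernel: RSB filled");
}

int main(void)
{
	switch_to_asm(false);	/* CONFIG_RETPOLINE=n: only "fixed" prints */
	return 0;
}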
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/entry/entry_32.S | 2 -- arch/x86/entry/entry_64.S | 2 -- arch/x86/include/asm/nospec-branch.h | 2 -- 3 files changed, 6 deletions(-) --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -750,7 +750,6 @@ ENTRY(__switch_to_asm) movl %ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset #endif =20 -#ifdef CONFIG_RETPOLINE /* * When switching from a shallower to a deeper call stack * the RSB may either underflow or use entries populated @@ -759,7 +758,6 @@ ENTRY(__switch_to_asm) * speculative execution to prevent attack. */ FILL_RETURN_BUFFER %ebx, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW -#endif =20 /* restore callee-saved registers */ popfl --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -305,7 +305,6 @@ ENTRY(__switch_to_asm) movq %rbx, PER_CPU_VAR(fixed_percpu_data) + stack_canary_offset #endif =20 -#ifdef CONFIG_RETPOLINE /* * When switching from a shallower to a deeper call stack * the RSB may either underflow or use entries populated @@ -314,7 +313,6 @@ ENTRY(__switch_to_asm) * speculative execution to prevent attack. */ FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW -#endif =20 /* restore callee-saved registers */ popq %r15 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -155,11 +155,9 @@ * monstrosity above, manually. */ .macro FILL_RETURN_BUFFER reg:req nr:req ftr:req -#ifdef CONFIG_RETPOLINE ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP) .Lskip_rsb_\@: -#endif .endm =20 #else /* __ASSEMBLY__ */ From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62094C433FE for ; Wed, 5 Oct 2022 11:39:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230345AbiJELjK (ORCPT ); Wed, 5 Oct 2022 07:39:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36414 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230283AbiJELiR (ORCPT ); Wed, 5 Oct 2022 07:38:17 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2A62478BE0; Wed, 5 Oct 2022 04:35:14 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 51BC0B81DBB; Wed, 5 Oct 2022 11:34:45 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BEAF2C433C1; Wed, 5 Oct 2022 11:34:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969684; bh=MV7dQaKSYp4Sg+MpEy05kXdmNhrUL4TtOkIUK7Xvac0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=x3CsxNQ2G8/m1lhPlr+Pi+kO/j8TNB6jQv1FvHvaZvCvAwtthHNoQ0KWJXv+U4y+m vBHu7u6lNNQEfmuHGBHVa0938trenYm+uK4NmUuF1d0DLFO5sZVL6u9PexGdDFG0NK lonXeh9mF2uuV5PyUn3p2fYMRm7LA/Ip+EV/CHaA= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, 
stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 20/51] x86/speculation: Fix firmware entry SPEC_CTRL handling Date: Wed, 5 Oct 2022 13:32:08 +0200 Message-Id: <20221005113211.201886619@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit e6aa13622ea8283cc699cac5d018cc40a2ba2010 upstream. The firmware entry code may accidentally clear STIBP or SSBD. Fix that. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -307,18 +307,16 @@ extern u64 spec_ctrl_current(void); */ #define firmware_restrict_branch_speculation_start() \ do { \ - u64 val =3D x86_spec_ctrl_base | SPEC_CTRL_IBRS; \ - \ preempt_disable(); \ - alternative_msr_write(MSR_IA32_SPEC_CTRL, val, \ + alternative_msr_write(MSR_IA32_SPEC_CTRL, \ + spec_ctrl_current() | SPEC_CTRL_IBRS, \ X86_FEATURE_USE_IBRS_FW); \ } while (0) =20 #define firmware_restrict_branch_speculation_end() \ do { \ - u64 val =3D x86_spec_ctrl_base; \ - \ - alternative_msr_write(MSR_IA32_SPEC_CTRL, val, \ + alternative_msr_write(MSR_IA32_SPEC_CTRL, \ + spec_ctrl_current(), \ X86_FEATURE_USE_IBRS_FW); \ preempt_enable(); \ } while (0) From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2512C433F5 for ; Wed, 5 Oct 2022 11:34:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230071AbiJELeE (ORCPT ); Wed, 5 Oct 2022 07:34:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35510 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229980AbiJELdh (ORCPT ); Wed, 5 Oct 2022 07:33:37 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7D22574377; Wed, 5 Oct 2022 04:33:25 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 7831E61645; Wed, 5 Oct 2022 11:33:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 832A2C433B5; Wed, 5 Oct 2022 11:33:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969603; bh=f1rxL/ZU4hrwCN+ID9VPHxiCs0f7piVCOV+3m7m4f8Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=veJq4rzwxmAsAiY+O326SUk8gpeLIr8Kt3C9BLf9+1npIClVVRMHhhcnM/BguV7TH KWBrX1GutPp2UvEId3IFr71nreU/b+zRiHeVny7NOeiBC1hZFLUqSms07oEoW/vkHB 
1qXoyOIELpmNFl3xT+2teB6+OQ/m8VF+1rq7/XIs= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 21/51] x86/speculation: Fix SPEC_CTRL write on SMT state change Date: Wed, 5 Oct 2022 13:32:09 +0200 Message-Id: <20221005113211.251461820@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit 56aa4d221f1ee2c3a49b45b800778ec6e0ab73c5 upstream. If the SMT state changes, SSBD might get accidentally disabled. Fix that. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/bugs.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1335,7 +1335,8 @@ static void __init spectre_v2_select_mit =20 static void update_stibp_msr(void * __unused) { - write_spec_ctrl_current(x86_spec_ctrl_base, true); + u64 val =3D spec_ctrl_current() | (x86_spec_ctrl_base & SPEC_CTRL_STIBP); + write_spec_ctrl_current(val, true); } =20 /* Update x86_spec_ctrl_base in case SMT state changed. */ From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E10EC4332F for ; Wed, 5 Oct 2022 11:34:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230087AbiJELeX (ORCPT ); Wed, 5 Oct 2022 07:34:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230017AbiJELdj (ORCPT ); Wed, 5 Oct 2022 07:33:39 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6D51E7675D; Wed, 5 Oct 2022 04:33:28 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 330C66164C; Wed, 5 Oct 2022 11:33:27 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3DD0AC433C1; Wed, 5 Oct 2022 11:33:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969606; bh=6zn1xqzWU79cWaiBvCW1z5cFjLrT4QBXgu2Asg4Mn2c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=xZMpaZU/pnPI20Rh5ffZcBuUhxWhHn8okq0tH3DHL6YbiXaVZilhyqt1cCfKp0tu0 OnkYGKZooEs78lATez92CfolRa0zJaqcL0GhBEKv8YMHm5iiKn1q51AA2sex7E8ao+ f1Hy+4XevsMwoHqSm206B58gWYMc3bQahhyMuuO4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 
5.4 22/51] x86/speculation: Use cached host SPEC_CTRL value for guest entry/exit Date: Wed, 5 Oct 2022 13:32:10 +0200 Message-Id: <20221005113211.292737780@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit bbb69e8bee1bd882784947095ffb2bfe0f7c9470 upstream. There's no need to recalculate the host value for every entry/exit. Just use the cached value in spec_ctrl_current(). Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/bugs.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -198,7 +198,7 @@ void __init check_bugs(void) void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool set= guest) { - u64 msrval, guestval, hostval =3D x86_spec_ctrl_base; + u64 msrval, guestval, hostval =3D spec_ctrl_current(); struct thread_info *ti =3D current_thread_info(); =20 /* Is MSR_SPEC_CTRL implemented ? */ @@ -211,15 +211,6 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, guestval =3D hostval & ~x86_spec_ctrl_mask; guestval |=3D guest_spec_ctrl & x86_spec_ctrl_mask; =20 - /* SSBD controlled in MSR_SPEC_CTRL */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || - static_cpu_has(X86_FEATURE_AMD_SSBD)) - hostval |=3D ssbd_tif_to_spec_ctrl(ti->flags); - - /* Conditional STIBP enabled? */ - if (static_branch_unlikely(&switch_to_cond_stibp)) - hostval |=3D stibp_tif_to_spec_ctrl(ti->flags); - if (hostval !=3D guestval) { msrval =3D setguest ? 
guestval : hostval; wrmsrl(MSR_IA32_SPEC_CTRL, msrval); @@ -1274,7 +1265,6 @@ static void __init spectre_v2_select_mit pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); =20 if (spectre_v2_in_ibrs_mode(mode)) { - /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |=3D SPEC_CTRL_IBRS; write_spec_ctrl_current(x86_spec_ctrl_base, true); } From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 112C7C4332F for ; Wed, 5 Oct 2022 11:34:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230094AbiJELe0 (ORCPT ); Wed, 5 Oct 2022 07:34:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36054 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230030AbiJELdl (ORCPT ); Wed, 5 Oct 2022 07:33:41 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 78C6774343; Wed, 5 Oct 2022 04:33:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 82F55B81DB9; Wed, 5 Oct 2022 11:33:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC1F1C433D7; Wed, 5 Oct 2022 11:33:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969609; bh=u3+cFIHV4V6qO9LaZjBFtnapRIMjXNPYg5c943scA5Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JsxubWNaclzqjbj+kSqN4cyW0pbv3rUqOB7OgqKUbKOdUJdU8qAXjLHhhFxe8gzMy wjHRf32T4RScCzWIVzV7T0d4yizdi2zJgt1k7U7nfq5ROuib1Mu1BlwugwBMd8DJlD Nm+mU/xhOhPCH83uI90b/oBCEGGSy9xBIEnVc128= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , Borislav Petkov , Paolo Bonzini , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 23/51] x86/speculation: Remove x86_spec_ctrl_mask Date: Wed, 5 Oct 2022 13:32:11 +0200 Message-Id: <20221005113211.338588897@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit acac5e98ef8d638a411cfa2ee676c87e1973f126 upstream. This mask has been made redundant by kvm_spec_ctrl_test_value(). And it doesn't even work when MSR interception is disabled, as the guest can just write to SPEC_CTRL directly. 
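What the deleted clamp did, and why it could surprise a guest, can be shown with a self-contained sketch. The bit positions are the architectural SPEC_CTRL ones (IBRS is bit 0, STIBP bit 1, SSBD bit 2); bit 7 (PSFD on newer CPUs) is used purely as an example of a bit this kernel's mask never learned about.

#include <stdint.h>
#include <stdio.h>

#define SPEC_CTRL_IBRS	(1ULL << 0)
#define SPEC_CTRL_STIBP	(1ULL << 1)
#define SPEC_CTRL_SSBD	(1ULL << 2)

int main(void)
{
	/* mask as built by the removed code: IBRS always, STIBP/SSBD
	 * added when the CPU advertises them */
	uint64_t mask = SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD;
	uint64_t hostval = SPEC_CTRL_IBRS;
	uint64_t guest_spec_ctrl = SPEC_CTRL_SSBD | (1ULL << 7);

	/* old: clamp the guest value to the "known" bits -- the bit 7
	 * request is silently dropped */
	uint64_t guestval = (hostval & ~mask) | (guest_spec_ctrl & mask);
	printf("old guestval = %#llx\n", (unsigned long long)guestval);

	/* new: trust guest_spec_ctrl as validated elsewhere; the clamp
	 * was unenforceable anyway once the MSR is pass-through */
	guestval = guest_spec_ctrl;
	printf("new guestval = %#llx\n", (unsigned long long)guestval);
	return 0;
}

Output: old guestval = 0x4, new guestval = 0x84.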
Signed-off-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Reviewed-by: Paolo Bonzini Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/bugs.c | 31 +------------------------------ 1 file changed, 1 insertion(+), 30 deletions(-) --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -84,12 +84,6 @@ u64 spec_ctrl_current(void) EXPORT_SYMBOL_GPL(spec_ctrl_current); =20 /* - * The vendor and possibly platform specific bits which can be modified in - * x86_spec_ctrl_base. - */ -static u64 __ro_after_init x86_spec_ctrl_mask =3D SPEC_CTRL_IBRS; - -/* * AMD specific MSR info for Speculative Store Bypass control. * x86_amd_ls_cfg_ssbd_mask is initialized in identify_boot_cpu(). */ @@ -137,10 +131,6 @@ void __init check_bugs(void) if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); =20 - /* Allow STIBP in MSR_SPEC_CTRL if supported */ - if (boot_cpu_has(X86_FEATURE_STIBP)) - x86_spec_ctrl_mask |=3D SPEC_CTRL_STIBP; - /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); spectre_v2_select_mitigation(); @@ -198,19 +188,10 @@ void __init check_bugs(void) void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool set= guest) { - u64 msrval, guestval, hostval =3D spec_ctrl_current(); + u64 msrval, guestval =3D guest_spec_ctrl, hostval =3D spec_ctrl_current(); struct thread_info *ti =3D current_thread_info(); =20 - /* Is MSR_SPEC_CTRL implemented ? */ if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) { - /* - * Restrict guest_spec_ctrl to supported values. Clear the - * modifiable bits in the host base value and or the - * modifiable bits from the guest value. - */ - guestval =3D hostval & ~x86_spec_ctrl_mask; - guestval |=3D guest_spec_ctrl & x86_spec_ctrl_mask; - if (hostval !=3D guestval) { msrval =3D setguest ? guestval : hostval; wrmsrl(MSR_IA32_SPEC_CTRL, msrval); @@ -1543,16 +1524,6 @@ static enum ssb_mitigation __init __ssb_ } =20 /* - * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper - * bit in the mask to allow guests to use the mitigation even in the - * case where the host does not enable it. - */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || - static_cpu_has(X86_FEATURE_AMD_SSBD)) { - x86_spec_ctrl_mask |=3D SPEC_CTRL_SSBD; - } - - /* * We have three CPU feature flags that are in play here: * - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible. 
* - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8DC9DC433FE for ; Wed, 5 Oct 2022 11:34:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230096AbiJELe2 (ORCPT ); Wed, 5 Oct 2022 07:34:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229832AbiJELdo (ORCPT ); Wed, 5 Oct 2022 07:33:44 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BDE577756C; Wed, 5 Oct 2022 04:33:34 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 9D5F461644; Wed, 5 Oct 2022 11:33:32 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id AD88BC433D6; Wed, 5 Oct 2022 11:33:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969612; bh=Pk6Suo2T1eUyONR7+gvbv9SwmbnhbztlxPqRFvQ9FHE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UOtrO7tURoxB0qpqOrmSyoeSp9b3ROI1o8x4iJnaDMKBBA9HDOLBP12VRvfyL09tu QvWChyjJ2OxQzQl6L5EBpVuYPBfjw6aAtTotUrNX7GyQ7sQzSkn1lI8ML3w6f7FUjO 1J2tK6YOGFUTcuWD4+eboa8ElgcWODFiYV5aGVHc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Paolo Bonzini , Sean Christopherson , Uros Bizjak , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 24/51] KVM/VMX: Use TEST %REG,%REG instead of CMP $0,%REG in vmenter.S Date: Wed, 5 Oct 2022 13:32:12 +0200 Message-Id: <20221005113211.383989348@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Uros Bizjak commit 6c44221b05236cc65d76cb5dc2463f738edff39d upstream. Saves one byte in __vmx_vcpu_run for the same functionality. Cc: Paolo Bonzini Cc: Sean Christopherson Signed-off-by: Uros Bizjak Message-Id: <20201029140457.126965-1-ubizjak@gmail.com> Signed-off-by: Paolo Bonzini Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kvm/vmx/vmenter.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -135,7 +135,7 @@ ENTRY(__vmx_vcpu_run) mov (%_ASM_SP), %_ASM_AX =20 /* Check if vmlaunch or vmresume is needed */ - cmpb $0, %bl + testb %bl, %bl =20 /* Load guest registers. Don't clobber flags. 
*/ mov VCPU_RBX(%_ASM_AX), %_ASM_BX From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07EB3C433FE for ; Wed, 5 Oct 2022 11:34:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230059AbiJELet (ORCPT ); Wed, 5 Oct 2022 07:34:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230074AbiJELeR (ORCPT ); Wed, 5 Oct 2022 07:34:17 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A682374E17; Wed, 5 Oct 2022 04:33:42 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 0DEA6B81DB5; Wed, 5 Oct 2022 11:33:36 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 66FA2C433D6; Wed, 5 Oct 2022 11:33:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969614; bh=ScDVN5Mov0PXX86mL4ZFMgi0u+MXwkhDVt0IxU/ikUA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=xxeAs4pNbzFE3ntTy/p+cgosXcnigQtRr7vU25W63Zr+trwBmOM2bdxOdLneSY4uK XMAogaXDuIh07ZN7zUUqWeg2Vmb8NNwxoQvP4ruhzSW3sxLjT6QB83/hwGPi9C04PV 9B4wH01d7qy+qjdaBu5hTsQL+a6roKJSAeu9iEKg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Paolo Bonzini , Sean Christopherson , Uros Bizjak , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 25/51] KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw Date: Wed, 5 Oct 2022 13:32:13 +0200 Message-Id: <20221005113211.431319766@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Uros Bizjak commit 150f17bfab37e981ba03b37440638138ff2aa9ec upstream. Replace inline assembly in nested_vmx_check_vmentry_hw with a call to __vmx_vcpu_run. The function is not performance critical, so (double) GPR save/restore in __vmx_vcpu_run can be tolerated, as far as performance effects are concerned. 
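For orientation, the entire asm statement being deleted collapses to this one call, quoted from the diff below; per __vmx_vcpu_run()'s header comment the return contract is 0 on VM-Exit, 1 on VM-Fail, so the CC_SET(be)/CC_OUT(be) RFLAGS capture goes away with the asm:

	vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
				 vmx->loaded_vmcs->launched);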
Cc: Paolo Bonzini Cc: Sean Christopherson Reviewed-and-tested-by: Sean Christopherson Signed-off-by: Uros Bizjak [sean: dropped versioning info from changelog] Signed-off-by: Sean Christopherson Message-Id: <20201231002702.2223707-5-seanjc@google.com> Signed-off-by: Paolo Bonzini [cascardo: small fixups] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kvm/vmx/nested.c | 32 +++----------------------------- arch/x86/kvm/vmx/vmx.c | 2 -- arch/x86/kvm/vmx/vmx.h | 1 + 3 files changed, 4 insertions(+), 31 deletions(-) --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -11,6 +11,7 @@ #include "mmu.h" #include "nested.h" #include "trace.h" +#include "vmx.h" #include "x86.h" =20 static bool __read_mostly enable_shadow_vmcs =3D 1; @@ -2863,35 +2864,8 @@ static int nested_vmx_check_vmentry_hw(s vmx->loaded_vmcs->host_state.cr4 =3D cr4; } =20 - asm( - "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CAL= L */ - "cmp %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t" - "je 1f \n\t" - __ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t" - "mov %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t" - "1: \n\t" - "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ - - /* Check if vmlaunch or vmresume is needed */ - "cmpb $0, %c[launched](%[loaded_vmcs])\n\t" - - /* - * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set - * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail - * Valid. vmx_vmenter() directly "returns" RFLAGS, and so the - * results of VM-Enter is captured via CC_{SET,OUT} to vm_fail. - */ - "call vmx_vmenter\n\t" - - CC_SET(be) - : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail) - : [HOST_RSP]"r"((unsigned long)HOST_RSP), - [loaded_vmcs]"r"(vmx->loaded_vmcs), - [launched]"i"(offsetof(struct loaded_vmcs, launched)), - [host_state_rsp]"i"(offsetof(struct loaded_vmcs, host_state.rsp)), - [wordsize]"i"(sizeof(ulong)) - : "memory" - ); + vm_fail =3D __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, + vmx->loaded_vmcs->launched); =20 if (vmx->msr_autoload.host.nr) vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6540,8 +6540,6 @@ void vmx_update_host_rsp(struct vcpu_vmx } } =20 -bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launch= ed); - static void vmx_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -336,6 +336,7 @@ void vmx_set_virtual_apic_mode(struct kv struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr); void pt_update_intercept_for_msr(struct vcpu_vmx *vmx); void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp); +bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launch= ed); =20 #define POSTED_INTR_ON 0 #define POSTED_INTR_SN 1 From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1534BC4332F for ; Wed, 5 Oct 2022 11:35:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230111AbiJELe6 (ORCPT ); Wed, 5 Oct 2022 07:34:58 -0400 Received: from lindbergh.monkeyblade.net 
([23.128.96.19]:36026 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230011AbiJELeU (ORCPT ); Wed, 5 Oct 2022 07:34:20 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DEED77696C; Wed, 5 Oct 2022 04:33:44 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 2AC02CE1251; Wed, 5 Oct 2022 11:33:39 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1E89FC433D6; Wed, 5 Oct 2022 11:33:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969617; bh=yly4OAdLW9Kq7E4+xiBpUMXQ6TEGk+RkuJ53Z7Gg0mQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rogLfHsgF1JS52jBmxxsHAqgTKhFuVfiB+8PSwMoelycrrve/0GAQBL4FGzamdZqp Ukna9fZKwIQ1Eag861MoKM0KhBRQIl3gExFjiqsDeTjZg9Z0pxuEIhlJv9/16ZvEmx l7P9m0Sg+EYNE2kOsDpofhp/1ToPaZ3o3iSk/7jY= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo , Ben Hutchings Subject: [PATCH 5.4 26/51] KVM: VMX: Flatten __vmx_vcpu_run() Date: Wed, 5 Oct 2022 13:32:14 +0200 Message-Id: <20221005113211.476736046@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit 8bd200d23ec42d66ccd517a72dd0b9cc6132d2fd upstream. Move the vmx_vm{enter,exit}() functionality into __vmx_vcpu_run(). This will make it easier to do the spec_ctrl handling before the first RET. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov [cascardo: remove ENDBR] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman [cascardo: no unwinding save/restore] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kvm/vmx/vmenter.S | 114 ++++++++++++++--------------------------= ----- 1 file changed, 37 insertions(+), 77 deletions(-) --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -30,72 +30,6 @@ .text =20 /** - * vmx_vmenter - VM-Enter the current loaded VMCS - * - * %RFLAGS.ZF: !VMCS.LAUNCHED, i.e. controls VMLAUNCH vs. VMRESUME - * - * Returns: - * %RFLAGS.CF is set on VM-Fail Invalid - * %RFLAGS.ZF is set on VM-Fail Valid - * %RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit - * - * Note that VMRESUME/VMLAUNCH fall-through and return directly if - * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump - * to vmx_vmexit. 
- */ -ENTRY(vmx_vmenter) - /* EFLAGS.ZF is set if VMCS.LAUNCHED =3D=3D 0 */ - je 2f - -1: vmresume - ret - -2: vmlaunch - ret - -3: cmpb $0, kvm_rebooting - je 4f - ret -4: ud2 - - .pushsection .fixup, "ax" -5: jmp 3b - .popsection - - _ASM_EXTABLE(1b, 5b) - _ASM_EXTABLE(2b, 5b) - -ENDPROC(vmx_vmenter) - -/** - * vmx_vmexit - Handle a VMX VM-Exit - * - * Returns: - * %RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit - * - * This is vmx_vmenter's partner in crime. On a VM-Exit, control will jump - * here after hardware loads the host's state, i.e. this is the destination - * referred to by VMCS.HOST_RIP. - */ -ENTRY(vmx_vmexit) -#ifdef CONFIG_RETPOLINE - ALTERNATIVE "jmp .Lvmexit_skip_rsb", "", X86_FEATURE_RETPOLINE - /* Preserve guest's RAX, it's used to stuff the RSB. */ - push %_ASM_AX - - /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ - FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE - - /* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */ - or $1, %_ASM_AX - - pop %_ASM_AX -.Lvmexit_skip_rsb: -#endif - ret -ENDPROC(vmx_vmexit) - -/** * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode * @vmx: struct vcpu_vmx * (forwarded to vmx_update_host_rsp) * @regs: unsigned long * (to guest registers) @@ -127,8 +61,7 @@ ENTRY(__vmx_vcpu_run) /* Copy @launched to BL, _ASM_ARG3 is volatile. */ mov %_ASM_ARG3B, %bl =20 - /* Adjust RSP to account for the CALL to vmx_vmenter(). */ - lea -WORD_SIZE(%_ASM_SP), %_ASM_ARG2 + lea (%_ASM_SP), %_ASM_ARG2 call vmx_update_host_rsp =20 /* Load @regs to RAX. */ @@ -157,11 +90,25 @@ ENTRY(__vmx_vcpu_run) /* Load guest RAX. This kills the @regs pointer! */ mov VCPU_RAX(%_ASM_AX), %_ASM_AX =20 - /* Enter guest mode */ - call vmx_vmenter + /* Check EFLAGS.ZF from 'testb' above */ + je .Lvmlaunch =20 - /* Jump on VM-Fail. */ - jbe 2f +/* + * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution resume= s at + * the 'vmx_vmexit' label below. + */ +.Lvmresume: + vmresume + jmp .Lvmfail + +.Lvmlaunch: + vmlaunch + jmp .Lvmfail + + _ASM_EXTABLE(.Lvmresume, .Lfixup) + _ASM_EXTABLE(.Lvmlaunch, .Lfixup) + +SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) =20 /* Temporarily save guest's RAX. */ push %_ASM_AX @@ -188,9 +135,13 @@ ENTRY(__vmx_vcpu_run) mov %r15, VCPU_R15(%_ASM_AX) #endif =20 + /* IMPORTANT: RSB must be stuffed before the first return. */ + FILL_RETURN_BUFFER %_ASM_BX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE + /* Clear RAX to indicate VM-Exit (as opposed to VM-Fail). */ xor %eax, %eax =20 +.Lclear_regs: /* * Clear all general purpose registers except RSP and RAX to prevent * speculative use of the guest's values, even those that are reloaded @@ -200,7 +151,7 @@ ENTRY(__vmx_vcpu_run) * free. RSP and RAX are exempt as RSP is restored by hardware during * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail. */ -1: xor %ebx, %ebx + xor %ebx, %ebx xor %ecx, %ecx xor %edx, %edx xor %esi, %esi @@ -219,8 +170,8 @@ ENTRY(__vmx_vcpu_run) =20 /* "POP" @regs. */ add $WORD_SIZE, %_ASM_SP - pop %_ASM_BX =20 + pop %_ASM_BX #ifdef CONFIG_X86_64 pop %r12 pop %r13 @@ -233,11 +184,20 @@ ENTRY(__vmx_vcpu_run) pop %_ASM_BP ret =20 - /* VM-Fail. Out-of-line to avoid a taken Jcc after VM-Exit. 
*/ -2: mov $1, %eax - jmp 1b +.Lfixup: + cmpb $0, kvm_rebooting + jne .Lvmfail + ud2 +.Lvmfail: + /* VM-Fail: set return value to 1 */ + mov $1, %eax + jmp .Lclear_regs + ENDPROC(__vmx_vcpu_run) =20 + +.section .text, "ax" + /** * vmread_error_trampoline - Trampoline from inline asm to vmread_error() * @field: VMCS field encoding that failed From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C8F2C433FE for ; Wed, 5 Oct 2022 11:35:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230129AbiJELfP (ORCPT ); Wed, 5 Oct 2022 07:35:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230090AbiJELeY (ORCPT ); Wed, 5 Oct 2022 07:34:24 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F263E75382; Wed, 5 Oct 2022 04:33:46 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 7C98AB81D49; Wed, 5 Oct 2022 11:33:41 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B27C5C433C1; Wed, 5 Oct 2022 11:33:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969620; bh=7M23J4fV8jXLrfsiVX3ag2XZNuw765APqSW6hUGCSQg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=xUflmJk25t+IvovQ+MpjMdE0swzFegqDQldy0lqNFPS5tXPl7EgAQEu3ITCM2zeAK lgUj2pf2Y3qrQ0chh7NYgnuq0c85p6GjmI8liR4/bgVd01o62ptv3RZgnzsXK9fel8 ZDcHONbFU5gShjTwOzaOcbjm1Gww8tqLM9XWoi4c= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 27/51] KVM: VMX: Convert launched argument to flags Date: Wed, 5 Oct 2022 13:32:15 +0200 Message-Id: <20221005113211.524153712@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Thadeu Lima de Souza Cascardo commit bb06650634d3552c0f8557e9d16aa1a408040e28 upstream. Convert __vmx_vcpu_run()'s 'launched' argument to 'flags', in preparation for doing SPEC_CTRL handling immediately after vmexit, which will need another flag. This is much easier than adding a fourth argument, because this code supports both 32-bit and 64-bit, and the fourth argument on 32-bit would have to be pushed on the stack. Note that __vmx_vcpu_run_flags() is called outside of the noinstr critical section because it will soon start calling potentially traceable functions. 
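A standalone sketch of the new calling convention; VMX_RUN_VMRESUME comes from the new run_flags.h in the diff below, and struct vcpu_model is a stand-in for struct vcpu_vmx.

#include <stdio.h>

#define VMX_RUN_VMRESUME	(1 << 0)
/* the next patch in this series adds VMX_RUN_SAVE_SPEC_CTRL as (1 << 1) */

struct vcpu_model { int launched; };	/* stand-in for struct vcpu_vmx */

static unsigned int run_flags(const struct vcpu_model *v)
{
	unsigned int flags = 0;

	if (v->launched)
		flags |= VMX_RUN_VMRESUME;	/* VMRESUME instead of VMLAUNCH */
	return flags;
}

int main(void)
{
	struct vcpu_model v = { .launched = 1 };

	/* the low byte still fits in %bl, so the asm side merely changes
	 * "testb %bl, %bl" into "testb $VMX_RUN_VMRESUME, %bl" */
	printf("flags = %#x\n", run_flags(&v));	/* prints: flags = 0x1 */
	return 0;
}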
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kvm/vmx/nested.c | 2 +- arch/x86/kvm/vmx/run_flags.h | 7 +++++++ arch/x86/kvm/vmx/vmenter.S | 9 +++++---- arch/x86/kvm/vmx/vmx.c | 12 +++++++++++- arch/x86/kvm/vmx/vmx.h | 5 ++++- 5 files changed, 28 insertions(+), 7 deletions(-) create mode 100644 arch/x86/kvm/vmx/run_flags.h --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -2865,7 +2865,7 @@ static int nested_vmx_check_vmentry_hw(s } =20 vm_fail =3D __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, - vmx->loaded_vmcs->launched); + __vmx_vcpu_run_flags(vmx)); =20 if (vmx->msr_autoload.host.nr) vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); --- /dev/null +++ b/arch/x86/kvm/vmx/run_flags.h @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __KVM_X86_VMX_RUN_FLAGS_H +#define __KVM_X86_VMX_RUN_FLAGS_H + +#define VMX_RUN_VMRESUME (1 << 0) + +#endif /* __KVM_X86_VMX_RUN_FLAGS_H */ --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -4,6 +4,7 @@ #include #include #include +#include "run_flags.h" =20 #define WORD_SIZE (BITS_PER_LONG / 8) =20 @@ -33,7 +34,7 @@ * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode * @vmx: struct vcpu_vmx * (forwarded to vmx_update_host_rsp) * @regs: unsigned long * (to guest registers) - * @launched: %true if the VMCS has been launched + * @flags: VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH * * Returns: * 0 on VM-Exit, 1 on VM-Fail @@ -58,7 +59,7 @@ ENTRY(__vmx_vcpu_run) */ push %_ASM_ARG2 =20 - /* Copy @launched to BL, _ASM_ARG3 is volatile. */ + /* Copy @flags to BL, _ASM_ARG3 is volatile. */ mov %_ASM_ARG3B, %bl =20 lea (%_ASM_SP), %_ASM_ARG2 @@ -68,7 +69,7 @@ ENTRY(__vmx_vcpu_run) mov (%_ASM_SP), %_ASM_AX =20 /* Check if vmlaunch or vmresume is needed */ - testb %bl, %bl + testb $VMX_RUN_VMRESUME, %bl =20 /* Load guest registers. Don't clobber flags. 
*/ mov VCPU_RBX(%_ASM_AX), %_ASM_BX @@ -91,7 +92,7 @@ ENTRY(__vmx_vcpu_run) mov VCPU_RAX(%_ASM_AX), %_ASM_AX =20 /* Check EFLAGS.ZF from 'testb' above */ - je .Lvmlaunch + jz .Lvmlaunch =20 /* * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution resume= s at --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -863,6 +863,16 @@ static bool msr_write_intercepted(struct return true; } =20 +unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx) +{ + unsigned int flags =3D 0; + + if (vmx->loaded_vmcs->launched) + flags |=3D VMX_RUN_VMRESUME; + + return flags; +} + static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx, unsigned long entry, unsigned long exit) { @@ -6627,7 +6637,7 @@ static void vmx_vcpu_run(struct kvm_vcpu write_cr2(vcpu->arch.cr2); =20 vmx->fail =3D __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, - vmx->loaded_vmcs->launched); + __vmx_vcpu_run_flags(vmx)); =20 vcpu->arch.cr2 =3D read_cr2(); =20 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -10,6 +10,7 @@ #include "capabilities.h" #include "ops.h" #include "vmcs.h" +#include "run_flags.h" =20 extern const u32 vmx_msr_index[]; extern u64 host_efer; @@ -336,7 +337,9 @@ void vmx_set_virtual_apic_mode(struct kv struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr); void pt_update_intercept_for_msr(struct vcpu_vmx *vmx); void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp); -bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launch= ed); +unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx); +bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, + unsigned int flags); =20 #define POSTED_INTR_ON 0 #define POSTED_INTR_SN 1 From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 02366C433F5 for ; Wed, 5 Oct 2022 11:35:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230118AbiJELfE (ORCPT ); Wed, 5 Oct 2022 07:35:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230029AbiJELeW (ORCPT ); Wed, 5 Oct 2022 07:34:22 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A2EF7675C; Wed, 5 Oct 2022 04:33:46 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 5E35C61656; Wed, 5 Oct 2022 11:33:43 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B04FC433C1; Wed, 5 Oct 2022 11:33:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969622; bh=rBbbjsior6InncscIOPQLA0VuFXpVQNZOsohWs9XViM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=U35aw44fTeDRCZpZ5YSg3QIJde5sbrugl2JyU/WCtsI9SdcluhTuOp/kIA94GV9bF mTIaiWSA8n63/pwwiKXM6qiRXyVh+7qaXkXJlaV+WPFX1g/sz8x0ngPHgA69v0jVd0 WQzbOoTtv2euD+rZLUMiAQ+wxbTHdr4vVDbQxMMo= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 
5.4 28/51] KVM: VMX: Prevent guest RSB poisoning attacks with eIBRS Date: Wed, 5 Oct 2022 13:32:16 +0200 Message-Id: <20221005113211.571486799@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit fc02735b14fff8c6678b521d324ade27b1a3d4cf upstream. On eIBRS systems, the returns in the vmexit return path from __vmx_vcpu_run() to vmx_vcpu_run() are exposed to RSB poisoning attacks. Fix that by moving the post-vmexit spec_ctrl handling to immediately after the vmexit. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 1=20 arch/x86/kernel/cpu/bugs.c | 4 ++ arch/x86/kvm/vmx/run_flags.h | 1=20 arch/x86/kvm/vmx/vmenter.S | 49 +++++++++++++++++++++++++++---= ----- arch/x86/kvm/vmx/vmx.c | 48 ++++++++++++++++++++----------= ---- arch/x86/kvm/vmx/vmx.h | 1=20 6 files changed, 73 insertions(+), 31 deletions(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -296,6 +296,7 @@ static inline void indirect_branch_predi =20 /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; +extern u64 x86_spec_ctrl_current; extern void write_spec_ctrl_current(u64 val, bool force); extern u64 spec_ctrl_current(void); =20 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -185,6 +185,10 @@ void __init check_bugs(void) #endif } =20 +/* + * NOTE: For VMX, this function is not called in the vmexit path. + * It uses vmx_spec_ctrl_restore_host() instead. + */ void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool set= guest) { --- a/arch/x86/kvm/vmx/run_flags.h +++ b/arch/x86/kvm/vmx/run_flags.h @@ -3,5 +3,6 @@ #define __KVM_X86_VMX_RUN_FLAGS_H =20 #define VMX_RUN_VMRESUME (1 << 0) +#define VMX_RUN_SAVE_SPEC_CTRL (1 << 1) =20 #endif /* __KVM_X86_VMX_RUN_FLAGS_H */ --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -32,9 +32,10 @@ =20 /** * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode - * @vmx: struct vcpu_vmx * (forwarded to vmx_update_host_rsp) + * @vmx: struct vcpu_vmx * * @regs: unsigned long * (to guest registers) - * @flags: VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH + * @flags: VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH + * VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl * * Returns: * 0 on VM-Exit, 1 on VM-Fail @@ -53,6 +54,12 @@ ENTRY(__vmx_vcpu_run) #endif push %_ASM_BX =20 + /* Save @vmx for SPEC_CTRL handling */ + push %_ASM_ARG1 + + /* Save @flags for SPEC_CTRL handling */ + push %_ASM_ARG3 + /* * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and * @regs is needed after VM-Exit to save the guest's register values. @@ -136,23 +143,21 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL mov %r15, VCPU_R15(%_ASM_AX) #endif =20 - /* IMPORTANT: RSB must be stuffed before the first return. 
*/ - FILL_RETURN_BUFFER %_ASM_BX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE - - /* Clear RAX to indicate VM-Exit (as opposed to VM-Fail). */ - xor %eax, %eax + /* Clear return value to indicate VM-Exit (as opposed to VM-Fail). */ + xor %ebx, %ebx =20 .Lclear_regs: /* - * Clear all general purpose registers except RSP and RAX to prevent + * Clear all general purpose registers except RSP and RBX to prevent * speculative use of the guest's values, even those that are reloaded * via the stack. In theory, an L1 cache miss when restoring registers * could lead to speculative execution with the guest's values. * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially * free. RSP and RAX are exempt as RSP is restored by hardware during - * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail. + * VM-Exit and RBX is explicitly loaded with 0 or 1 to hold the return + * value. */ - xor %ebx, %ebx + xor %eax, %eax xor %ecx, %ecx xor %edx, %edx xor %esi, %esi @@ -172,6 +177,28 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL /* "POP" @regs. */ add $WORD_SIZE, %_ASM_SP =20 + /* + * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before + * the first unbalanced RET after vmexit! + * + * For retpoline, RSB filling is needed to prevent poisoned RSB entries + * and (in some cases) RSB underflow. + * + * eIBRS has its own protection against poisoned RSB, so it doesn't + * need the RSB filling sequence. But it does need to be enabled + * before the first unbalanced RET. + */ + + FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE + + pop %_ASM_ARG2 /* @flags */ + pop %_ASM_ARG1 /* @vmx */ + + call vmx_spec_ctrl_restore_host + + /* Put return value in AX */ + mov %_ASM_BX, %_ASM_AX + pop %_ASM_BX #ifdef CONFIG_X86_64 pop %r12 @@ -191,7 +218,7 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL ud2 .Lvmfail: /* VM-Fail: set return value to 1 */ - mov $1, %eax + mov $1, %_ASM_BX jmp .Lclear_regs =20 ENDPROC(__vmx_vcpu_run) --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -870,6 +870,14 @@ unsigned int __vmx_vcpu_run_flags(struct if (vmx->loaded_vmcs->launched) flags |=3D VMX_RUN_VMRESUME; =20 + /* + * If writes to the SPEC_CTRL MSR aren't intercepted, the guest is free + * to change it directly without causing a vmexit. In that case read + * it after vmexit and store it in vmx->spec_ctrl. + */ + if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))) + flags |=3D VMX_RUN_SAVE_SPEC_CTRL; + return flags; } =20 @@ -6550,6 +6558,26 @@ void vmx_update_host_rsp(struct vcpu_vmx } } =20 +void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, + unsigned int flags) +{ + u64 hostval =3D this_cpu_read(x86_spec_ctrl_current); + + if (!cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) + return; + + if (flags & VMX_RUN_SAVE_SPEC_CTRL) + vmx->spec_ctrl =3D __rdmsr(MSR_IA32_SPEC_CTRL); + + /* + * If the guest/host SPEC_CTRL values differ, restore the host value. + */ + if (vmx->spec_ctrl !=3D hostval) + native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval); + + barrier_nospec(); +} + static void vmx_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx =3D to_vmx(vcpu); @@ -6643,26 +6671,6 @@ static void vmx_vcpu_run(struct kvm_vcpu =20 vmx_enable_fb_clear(vmx); =20 - /* - * We do not use IBRS in the kernel. If this vCPU has used the - * SPEC_CTRL MSR it may have left it on; save the value and - * turn it off. This is much more efficient than blindly adding - * it to the atomic save/restore list. 
Especially as the former - * (Saving guest MSRs on vmexit) doesn't even exist in KVM. - * - * For non-nested case: - * If the L01 MSR bitmap does not intercept the MSR, then we need to - * save it. - * - * For nested case: - * If the L02 MSR bitmap does not intercept the MSR, then we need to - * save it. - */ - if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))) - vmx->spec_ctrl =3D native_read_msr(MSR_IA32_SPEC_CTRL); - - x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0); - /* All fields are clean at this point */ if (static_branch_unlikely(&enable_evmcs)) current_evmcs->hv_clean_fields |=3D --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -337,6 +337,7 @@ void vmx_set_virtual_apic_mode(struct kv struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr); void pt_update_intercept_for_msr(struct vcpu_vmx *vmx); void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp); +void vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, unsigned int flags); unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx); bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, unsigned int flags); From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78C02C433F5 for ; Wed, 5 Oct 2022 11:35:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230123AbiJELfH (ORCPT ); Wed, 5 Oct 2022 07:35:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230084AbiJELeW (ORCPT ); Wed, 5 Oct 2022 07:34:22 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9894674B82; Wed, 5 Oct 2022 04:33:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1460361645; Wed, 5 Oct 2022 11:33:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1FF47C433D7; Wed, 5 Oct 2022 11:33:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969625; bh=XkPzfNyPowbMi3mS7dxk8n8qdgd2wl7+MFrrgQeIweo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ua7qo0WLoo4tQ5V4gsDgs13s2Lb/wdPaYR4R4Hnllmd5ZHyeqF1kVzPL/FR6QxclW 1FDoq2icwSrdi4R37nkDOtJici0ayto2Tzb5issBSfctwGjsX4ljAQ3n5zr4euitZM akgLzGS7XkFiTkGT8sPmfSDSkQcAihYrc4MOVBS4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 29/51] KVM: VMX: Fix IBRS handling after vmexit Date: Wed, 5 Oct 2022 13:32:17 +0200 Message-Id: <20221005113211.618592607@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit bea7e31a5caccb6fe8ed989c065072354f0ecb52 upstream. 
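Before the one-line fix that follows, it may help to recap what the previous patch set up: vmx_spec_ctrl_restore_host() re-reads the guest's SPEC_CTRL when the MSR is passed through, then conditionally restores the host value. The sketch below is a hedged, stand-alone user-space model of that pre-fix decision logic; the *_model helpers and the MSR shadow variable are invented stubs for illustration, not kernel APIs.

  #include <stdint.h>

  #define MSR_IA32_SPEC_CTRL     0x48
  #define VMX_RUN_SAVE_SPEC_CTRL (1u << 1)  /* mirrors run_flags.h */

  static uint64_t msr_shadow;  /* stand-in for the hardware MSR */

  static uint64_t rdmsr_model(uint32_t msr)
  {
          (void)msr;
          return msr_shadow;
  }

  static void wrmsr_model(uint32_t msr, uint64_t val)
  {
          (void)msr;
          msr_shadow = val;
  }

  static void spec_ctrl_restore_host_model(uint64_t *guest_spec_ctrl,
                                           uint64_t hostval,
                                           unsigned int flags)
  {
          /* Guest may write SPEC_CTRL without vmexiting; re-read on request. */
          if (flags & VMX_RUN_SAVE_SPEC_CTRL)
                  *guest_spec_ctrl = rdmsr_model(MSR_IA32_SPEC_CTRL);

          /* Pre-fix behaviour: restore the host value only when it differs. */
          if (*guest_spec_ctrl != hostval)
                  wrmsr_model(MSR_IA32_SPEC_CTRL, hostval);
  }

  int main(void)
  {
          uint64_t guest_spec_ctrl = 1, hostval = 0;

          msr_shadow = guest_spec_ctrl;
          spec_ctrl_restore_host_model(&guest_spec_ctrl, hostval,
                                       VMX_RUN_SAVE_SPEC_CTRL);
          return (int)msr_shadow;  /* 0 on success: host value restored */
  }

The fix below tightens the final comparison for legacy-IBRS parts.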
For legacy IBRS to work, the IBRS bit needs to be always re-written after vmexit, even if it's already on. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kvm/vmx/vmx.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6571,8 +6571,13 @@ void noinstr vmx_spec_ctrl_restore_host( =20 /* * If the guest/host SPEC_CTRL values differ, restore the host value. + * + * For legacy IBRS, the IBRS bit always needs to be written after + * transitioning from a less privileged predictor mode, regardless of + * whether the guest/host values differ. */ - if (vmx->spec_ctrl !=3D hostval) + if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) || + vmx->spec_ctrl !=3D hostval) native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval); =20 barrier_nospec(); From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D431CC433FE for ; Wed, 5 Oct 2022 11:35:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229993AbiJELfe (ORCPT ); Wed, 5 Oct 2022 07:35:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36966 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229946AbiJELej (ORCPT ); Wed, 5 Oct 2022 07:34:39 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DBE8C77EB5; Wed, 5 Oct 2022 04:33:51 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 560ADB81DB4; Wed, 5 Oct 2022 11:33:49 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B2D8CC433B5; Wed, 5 Oct 2022 11:33:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969628; bh=pGY8N5cnw5W2MSd2Ss6RWRO2PsKqVwuydphJSVRqXhE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HFeeqD8FNMAz2do+mu9EHivJl4us40485w9RLLIixWHrcXxW8FKu6moWKh0OmK8Z6 j/d0Zc5ULozh66FPybh5w/hOptoUKz3/rJnQ9dtiTGOeW7NvgMyx1GsjREOuSIv1ZU Frdnis71WKn7p8TjH9lExP53unJyymgWAfD3+NfI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Josh Poimboeuf , "Peter Zijlstra (Intel)" , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 30/51] x86/speculation: Fill RSB on vmexit for IBRS Date: Wed, 5 Oct 2022 13:32:18 +0200 Message-Id: <20221005113211.669773975@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Josh Poimboeuf commit 9756bba28470722dacb79ffce554336dd1f6a6cd upstream. Prevent RSB underflow/poisoning attacks with RSB. 
While at it, add a bunch of comments to attempt to document the current state of tribal knowledge about RSB attacks and what exactly is being mitigated. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/cpufeatures.h | 2 - arch/x86/kernel/cpu/bugs.c | 63 ++++++++++++++++++++++++++++++++= ++--- arch/x86/kvm/vmx/vmenter.S | 6 +-- 3 files changed, 62 insertions(+), 9 deletions(-) --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -204,7 +204,7 @@ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enable= d */ #define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel= entry/exit */ -/* FREE! ( 7*32+13) */ +#define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Nu= mber */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 = */ #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implem= ented */ --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1278,17 +1278,70 @@ static void __init spectre_v2_select_mit pr_info("%s\n", spectre_v2_strings[mode]); =20 /* - * If spectre v2 protection has been enabled, unconditionally fill - * RSB during a context switch; this protects against two independent - * issues: + * If Spectre v2 protection has been enabled, fill the RSB during a + * context switch. In general there are two types of RSB attacks + * across context switches, for which the CALLs/RETs may be unbalanced. * - * - RSB underflow (and switch to BTB) on Skylake+ - * - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs + * 1) RSB underflow + * + * Some Intel parts have "bottomless RSB". When the RSB is empty, + * speculated return targets may come from the branch predictor, + * which could have a user-poisoned BTB or BHB entry. + * + * AMD has it even worse: *all* returns are speculated from the BTB, + * regardless of the state of the RSB. + * + * When IBRS or eIBRS is enabled, the "user -> kernel" attack + * scenario is mitigated by the IBRS branch prediction isolation + * properties, so the RSB buffer filling wouldn't be necessary to + * protect against this type of attack. + * + * The "user -> user" attack scenario is mitigated by RSB filling. + * + * 2) Poisoned RSB entry + * + * If the 'next' in-kernel return stack is shorter than 'prev', + * 'next' could be tricked into speculating with a user-poisoned RSB + * entry. + * + * The "user -> kernel" attack scenario is mitigated by SMEP and + * eIBRS. + * + * The "user -> user" scenario, also known as SpectreBHB, requires + * RSB clearing. + * + * So to mitigate all cases, unconditionally fill RSB on context + * switches. + * + * FIXME: Is this pointless for retbleed-affected AMD? */ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switc= h\n"); =20 /* + * Similar to context switches, there are two types of RSB attacks + * after vmexit: + * + * 1) RSB underflow + * + * 2) Poisoned RSB entry + * + * When retpoline is enabled, both are mitigated by filling/clearing + * the RSB. 
+ * + * When IBRS is enabled, while #1 would be mitigated by the IBRS branch + * prediction isolation protections, RSB still needs to be cleared + * because of #2. Note that SMEP provides no protection here, unlike + * user-space-poisoned RSB entries. + * + * eIBRS, on the other hand, has RSB-poisoning protections, so it + * doesn't need RSB clearing after vmexit. + */ + if (boot_cpu_has(X86_FEATURE_RETPOLINE) || + boot_cpu_has(X86_FEATURE_KERNEL_IBRS)) + setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT); + + /* * Retpoline protects the kernel, but doesn't protect firmware. IBRS * and Enhanced IBRS protect firmware too, so enable IBRS around * firmware calls only when IBRS / Enhanced IBRS aren't otherwise --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -181,15 +181,15 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before * the first unbalanced RET after vmexit! * - * For retpoline, RSB filling is needed to prevent poisoned RSB entries - * and (in some cases) RSB underflow. + * For retpoline or IBRS, RSB filling is needed to prevent poisoned RSB + * entries and (in some cases) RSB underflow. * * eIBRS has its own protection against poisoned RSB, so it doesn't * need the RSB filling sequence. But it does need to be enabled * before the first unbalanced RET. */ =20 - FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE + FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT =20 pop %_ASM_ARG2 /* @flags */ pop %_ASM_ARG1 /* @vmx */ From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1A6FBC433FE for ; Wed, 5 Oct 2022 11:35:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230154AbiJELfh (ORCPT ); Wed, 5 Oct 2022 07:35:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36570 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229977AbiJELen (ORCPT ); Wed, 5 Oct 2022 07:34:43 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AB0278221; Wed, 5 Oct 2022 04:33:55 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id DD41561639; Wed, 5 Oct 2022 11:33:53 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id EA284C433B5; Wed, 5 Oct 2022 11:33:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969633; bh=SgML4ERX6LPoyE2me/6XSp22j98OujAnFt9cQ9QEdL4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BsK3DTFGcIgd/ka9iMGH86xnmCftO0BSp+Wc6zpfwc3UJ7rlHok4TVKOEQcRO0qUU +57UQNIRRqWHgE2dMqG0TMWHM44W80XUxNYp8huqZiQTttHE72f1XzxK795PN4fQws s8gf8qfWIzr/Rzbu4OYDcYbArhLLOkFBZ1WK6ags= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , "Peter Zijlstra (Intel)" , Borislav Petkov , Dave Hansen , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 31/51] x86/common: Stamp out the stepping madness Date: Wed, 5 Oct 2022 13:32:19 +0200 Message-Id: <20221005113211.719262757@linuxfoundation.org> X-Mailer: git-send-email 
2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Peter Zijlstra commit 7a05bc95ed1c5a59e47aaade9fb4083c27de9e62 upstream. The whole MMIO/RETBLEED enumeration went overboard on steppings. Get rid of all that and simply use ANY. If a future stepping of these models would not be affected, it had better set the relevant ARCH_CAP_$FOO_NO bit in IA32_ARCH_CAPABILITIES. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Acked-by: Dave Hansen Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/common.c | 37 ++++++++++++++++--------------------- 1 file changed, 16 insertions(+), 21 deletions(-) --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1126,32 +1126,27 @@ static const struct x86_cpu_id cpu_vuln_ VULNBL_INTEL_STEPPINGS(HASWELL, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_L, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_G, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(HASWELL_X, BIT(2) | BIT(4), MMIO), - VULNBL_INTEL_STEPPINGS(BROADWELL_D, X86_STEPPINGS(0x3, 0x5), MMIO), + VULNBL_INTEL_STEPPINGS(HASWELL_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(BROADWELL_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO |= RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | - BIT(7) | BIT(0xB), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | = RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO = | RETBLEED), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO | = RETBLEED), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SB= DS | RETBLEED), - VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x1, 0x1), MMIO), - VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_S= BDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_= SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETBL= EED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPING_ANY, MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLE= ED), + VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETB= LEED), + VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLE= ED), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | R= ETBLEED), + VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO), + 
VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, X86_STEPPING_ANY, MMIO | MMIO_SBDS | R= ETBLEED), VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBL= EED), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SB= DS | RETBLEED), - VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO | RETBLE= ED), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO= _SBDS), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS |= RETBLEED), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPING_ANY, MMIO | MMIO_SBDS | R= ETBLEED), + VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPING_ANY, MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPING_ANY, MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MM= IO_SBDS), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPING_ANY, MMIO | MMIO_SBD= S), =20 VULNBL_AMD(0x15, RETBLEED), VULNBL_AMD(0x16, RETBLEED), From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5E6DC433FE for ; Wed, 5 Oct 2022 11:36:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230167AbiJELgH (ORCPT ); Wed, 5 Oct 2022 07:36:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229912AbiJELfa (ORCPT ); Wed, 5 Oct 2022 07:35:30 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4C835786CE; Wed, 5 Oct 2022 04:34:09 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 2A59CB81DB2; Wed, 5 Oct 2022 11:33:57 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8CADBC433C1; Wed, 5 Oct 2022 11:33:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969635; bh=46VLJu9YyPc9gZ9l+X0jmicASSTF3QF8dJGDhZN+hIw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KjFwDRb6p+8gs/k4OqjjzhfJPPVIpCB1PtCwKQ96fu76CeM9iMjh/igoXCaCjsLm9 /40DLbe6FuvhDrju63q368zqmu3+rUq+pxBpjy7VN8G0mhYtV+o6QWrEhHjA6egnjm K78eC9s3E0ftkYk4A5+EmedyVihyG9zRa+CHB0Sg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Andrew Cooper , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 32/51] x86/cpu/amd: Enumerate BTC_NO Date: Wed, 5 Oct 2022 13:32:20 +0200 Message-Id: <20221005113211.771085542@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Andrew Cooper commit 26aae8ccbc1972233afd08fb3f368947c0314265 upstream. BTC_NO indicates that hardware is not susceptible to Branch Type Confusion. Zen3 CPUs don't suffer BTC. 
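As a short aside before the details: the new flag lives in kernel feature word 13, which corresponds to CPUID leaf 0x80000008 EBX, so BTC_NO is bit 29 of that register. A minimal user-space probe, hedged in that it assumes GCC/Clang's <cpuid.h> helper and that the CPU exposes the extended leaf, could read the raw bit like this:

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          /* Leaf 0x80000008 EBX carries the AMD speculation-control bits. */
          if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
                  puts("CPUID leaf 0x80000008 not available");
                  return 1;
          }

          printf("BTC_NO: %s\n", (ebx & (1u << 29)) ? "set" : "clear");
          return 0;
  }

On bare-metal Zen3 parts the raw bit may read as clear even though the core is unaffected, which is why the patch synthesises the feature in software below.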
Hypervisors are expected to synthesise BTC_NO when it is appropriate given the migration pool, to prevent kernels using heuristics. [ bp: Massage. ] Signed-off-by: Andrew Cooper Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/kernel/cpu/amd.c | 21 +++++++++++++++------ arch/x86/kernel/cpu/common.c | 6 ++++-- 3 files changed, 20 insertions(+), 8 deletions(-) --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -304,6 +304,7 @@ #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Di= sable */ #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store= Bypass Disable */ #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass = is fixed in hardware. */ +#define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type= Confusion */ =20 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 1= 4 */ #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */ --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -894,12 +894,21 @@ static void init_amd_zn(struct cpuinfo_x node_reclaim_distance =3D 32; #endif =20 - /* - * Fix erratum 1076: CPB feature bit not being set in CPUID. - * Always set it, except when running under a hypervisor. - */ - if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB)) - set_cpu_cap(c, X86_FEATURE_CPB); + /* Fix up CPUID bits, but only if not virtualised. */ + if (!cpu_has(c, X86_FEATURE_HYPERVISOR)) { + + /* Erratum 1076: CPB feature bit not being set in CPUID. */ + if (!cpu_has(c, X86_FEATURE_CPB)) + set_cpu_cap(c, X86_FEATURE_CPB); + + /* + * Zen3 (Fam19 model < 0x10) parts are not susceptible to + * Branch Type Confusion, but predate the allocation of the + * BTC_NO bit. 
+ */ + if (c->x86 =3D=3D 0x19 && !cpu_has(c, X86_FEATURE_BTC_NO)) + set_cpu_cap(c, X86_FEATURE_BTC_NO); + } } =20 static void init_amd(struct cpuinfo_x86 *c) --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1259,8 +1259,10 @@ static void __init cpu_set_bug_bits(stru setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN); } =20 - if ((cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RS= BA))) - setup_force_cpu_bug(X86_BUG_RETBLEED); + if (!cpu_has(c, X86_FEATURE_BTC_NO)) { + if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RS= BA)) + setup_force_cpu_bug(X86_BUG_RETBLEED); + } =20 if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99617C433FE for ; Wed, 5 Oct 2022 11:36:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230056AbiJELgP (ORCPT ); Wed, 5 Oct 2022 07:36:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36052 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230159AbiJELfp (ORCPT ); Wed, 5 Oct 2022 07:35:45 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7ED1786D9; Wed, 5 Oct 2022 04:34:10 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1C70E61646; Wed, 5 Oct 2022 11:33:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 32819C433C1; Wed, 5 Oct 2022 11:33:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969638; bh=OZmRwyCICynEbQ4jPcY7WzVABYtiTSKSIS4PsXbufSE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Yv+HmrKeVWZrYNMk8+8w8B9dNwFk+R60YGiPO07MW13Zm57yg27JbD2/gJRNTtv0m i7N/8ObFTY3muDbMQwQ+X8xdHlIlK+lZ16HAoW/1CvHIEXfhdXT47ekXtEFR3OXTl/ I6vvVTBmN9tBXzQLs7X5F1Nt+/FdnYtZWL0XmdQk= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Pawan Gupta , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 33/51] x86/bugs: Add Cannon lake to RETBleed affected CPU list Date: Wed, 5 Oct 2022 13:32:21 +0200 Message-Id: <20221005113211.826455270@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Pawan Gupta commit f54d45372c6ac9c993451de5e51312485f7d10bc upstream. Cannon lake is also affected by RETBleed, add it to the list. 
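To make the table semantics concrete, here is a hedged, self-contained toy model of how an X86_STEPPING_ANY entry matches; the struct layout and constant names are invented for illustration and only approximate the kernel's real x86_cpu_id matching (0x66 is CANNONLAKE_L's family-6 model number):

  #include <stdio.h>
  #include <stddef.h>

  #define TOY_STEPPING_ANY 0xffffu  /* toy mask: steppings 0-15 all match */
  #define TOY_RETBLEED     (1u << 0)

  struct toy_vuln_entry {
          unsigned int model;
          unsigned int stepping_mask;  /* bit n set => stepping n affected */
          unsigned int vulns;
  };

  static const struct toy_vuln_entry blacklist[] = {
          { 0x66 /* CANNONLAKE_L */, TOY_STEPPING_ANY, TOY_RETBLEED },
  };

  static int toy_cpu_matches(unsigned int model, unsigned int stepping,
                             unsigned int vuln)
  {
          for (size_t i = 0; i < sizeof(blacklist) / sizeof(blacklist[0]); i++)
                  if (blacklist[i].model == model &&
                      (blacklist[i].stepping_mask & (1u << stepping)) &&
                      (blacklist[i].vulns & vuln))
                          return 1;
          return 0;
  }

  int main(void)
  {
          /* Every stepping of the model now reports as RETBleed-affected. */
          for (unsigned int s = 0; s < 4; s++)
                  printf("model 0x66 stepping %u: %d\n", s,
                         toy_cpu_matches(0x66, s, TOY_RETBLEED));
          return 0;
  }

Run against the toy table, every stepping matches, which is precisely the effect the one-line addition below has on the real blacklist.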
Fixes: 6ad0ad2bf8a6 ("x86/bugs: Report Intel retbleed vulnerability") Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/common.c | 1 + 1 file changed, 1 insertion(+) --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1136,6 +1136,7 @@ static const struct x86_cpu_id cpu_vuln_ VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLE= ED), VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETB= LEED), VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLE= ED), + VULNBL_INTEL_STEPPINGS(CANNONLAKE_L, X86_STEPPING_ANY, RETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | R= ETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO), From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23066C433F5 for ; Wed, 5 Oct 2022 11:35:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229888AbiJELfz (ORCPT ); Wed, 5 Oct 2022 07:35:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35856 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229678AbiJELfE (ORCPT ); Wed, 5 Oct 2022 07:35:04 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 156DB7858D; Wed, 5 Oct 2022 04:34:03 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 8E9D8B81DB7; Wed, 5 Oct 2022 11:34:02 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DFDA9C433D6; Wed, 5 Oct 2022 11:34:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969641; bh=feMDUVj9ZKy4SEiv4+FTL0RHqr2n8lEuSdXvgtrlVRA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PknBcldNLhefgzZ5DNpPkJ6SzobTub4dT7+VuHsnncDFDXubKGCFI/SfuZWi1Jf2n HQSICeKWRzV2zOpifb3I3i4VKPBZGdtx14KMktlxRWr9WuvNy80Nvv86TxAbxosA0b A9mR+rNu1lIRBrIS6WoRz1b6TJBQT6kzEFeJFQ6o= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Pawan Gupta , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 34/51] x86/speculation: Disable RRSBA behavior Date: Wed, 5 Oct 2022 13:32:22 +0200 Message-Id: <20221005113211.871526433@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Pawan Gupta commit 4ad3278df6fe2b0852b00d5757fc2ccd8e92c26e upstream. Some Intel processors may use alternate predictors for RETs on RSB-underflow. 
This condition may be vulnerable to Branch History Injection (BHI) and intramode-BTI. Kernel earlier added spectre_v2 mitigation modes (eIBRS+Retpolines, eIBRS+LFENCE, Retpolines) which protect indirect CALLs and JMPs against such attacks. However, on RSB-underflow, RET target prediction may fallback to alternate predictors. As a result, RET's predicted target may get influenced by branch history. A new MSR_IA32_SPEC_CTRL bit (RRSBA_DIS_S) controls this fallback behavior when in kernel mode. When set, RETs will not take predictions from alternate predictors, hence mitigating RETs as well. Support for this is enumerated by CPUID.7.2.EDX[RRSBA_CTRL] (bit2). For spectre v2 mitigation, when a user selects a mitigation that protects indirect CALLs and JMPs against BHI and intramode-BTI, set RRSBA_DIS_S also to protect RETs for RSB-underflow case. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov [cascardo: no tools/arch/x86/include/asm/msr-index.h] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/msr-index.h | 9 +++++++++ arch/x86/kernel/cpu/bugs.c | 26 ++++++++++++++++++++++++++ arch/x86/kernel/cpu/scattered.c | 1 + 4 files changed, 37 insertions(+) --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -286,6 +286,7 @@ #define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* LLC Local MBM monitoring */ #define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entr= y SWAPGS path */ #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel = entry SWAPGS path */ +#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigati= on for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spect= re variant 2 */ =20 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -47,6 +47,8 @@ #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */ #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit= */ #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store By= pass Disable */ +#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */ +#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT) =20 #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */ @@ -130,6 +132,13 @@ * bit available to control VERW * behavior. */ +#define ARCH_CAP_RRSBA BIT(19) /* + * Indicates RET may use predictors + * other than the RSB. With eIBRS + * enabled predictions in kernel mode + * are restricted to targets in + * kernel. 
+ */ =20 #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1181,6 +1181,22 @@ static enum spectre_v2_mitigation __init return SPECTRE_V2_RETPOLINE; } =20 +/* Disable in-kernel use of non-RSB RET predictors */ +static void __init spec_ctrl_disable_kernel_rrsba(void) +{ + u64 ia32_cap; + + if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL)) + return; + + ia32_cap =3D x86_read_arch_cap_msr(); + + if (ia32_cap & ARCH_CAP_RRSBA) { + x86_spec_ctrl_base |=3D SPEC_CTRL_RRSBA_DIS_S; + write_spec_ctrl_current(x86_spec_ctrl_base, true); + } +} + static void __init spectre_v2_select_mitigation(void) { enum spectre_v2_mitigation_cmd cmd =3D spectre_v2_parse_cmdline(); @@ -1274,6 +1290,16 @@ static void __init spectre_v2_select_mit break; } =20 + /* + * Disable alternate RSB predictions in kernel when indirect CALLs and + * JMPs gets protection against BHI and Intramode-BTI, but RET + * prediction from a non-RSB predictor is still a risk. + */ + if (mode =3D=3D SPECTRE_V2_EIBRS_LFENCE || + mode =3D=3D SPECTRE_V2_EIBRS_RETPOLINE || + mode =3D=3D SPECTRE_V2_RETPOLINE) + spec_ctrl_disable_kernel_rrsba(); + spectre_v2_enabled =3D mode; pr_info("%s\n", spectre_v2_strings[mode]); =20 --- a/arch/x86/kernel/cpu/scattered.c +++ b/arch/x86/kernel/cpu/scattered.c @@ -26,6 +26,7 @@ struct cpuid_bit { static const struct cpuid_bit cpuid_bits[] =3D { { X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 }, { X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 }, + { X86_FEATURE_RRSBA_CTRL, CPUID_EDX, 2, 0x00000007, 2 }, { X86_FEATURE_CQM_LLC, CPUID_EDX, 1, 0x0000000f, 0 }, { X86_FEATURE_CQM_OCCUP_LLC, CPUID_EDX, 0, 0x0000000f, 1 }, { X86_FEATURE_CQM_MBM_TOTAL, CPUID_EDX, 1, 0x0000000f, 1 }, From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BDE34C433FE for ; Wed, 5 Oct 2022 11:36:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230175AbiJELgM (ORCPT ); Wed, 5 Oct 2022 07:36:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36624 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230149AbiJELfb (ORCPT ); Wed, 5 Oct 2022 07:35:31 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7AF5785BD; Wed, 5 Oct 2022 04:34:08 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 111D2B81DB4; Wed, 5 Oct 2022 11:34:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 808ABC433C1; Wed, 5 Oct 2022 11:34:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969643; bh=dNJutloN59HRkmZgpyaMxTd1V6kkzB7sPBtf61h92S0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=j8rMZGnCkk/AKL3zpB83BF6t4GZYe34gBuGHrsy0ZKdDHAXm8sONMAaY1LimqNlPU cLS4VWlEWRrKxTcuZBDKndZk7Qy0H2RhuPobFSVAahTG0dbPxlj0oiVuUKt4oRTmb0 wWJAzXQpeBo4rMSyYAGAlFIyZVeXo47JVctQPVUo= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , kernel test robot , Nathan Chancellor , Linus Torvalds , Thadeu Lima de 
Souza Cascardo Subject: [PATCH 5.4 35/51] x86/speculation: Use DECLARE_PER_CPU for x86_spec_ctrl_current Date: Wed, 5 Oct 2022 13:32:23 +0200 Message-Id: <20221005113211.923225409@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Nathan Chancellor commit db886979683a8360ced9b24ab1125ad0c4d2cf76 upstream. Clang warns: arch/x86/kernel/cpu/bugs.c:58:21: error: section attribute is specified o= n redeclared variable [-Werror,-Wsection] DEFINE_PER_CPU(u64, x86_spec_ctrl_current); ^ arch/x86/include/asm/nospec-branch.h:283:12: note: previous declaration i= s here extern u64 x86_spec_ctrl_current; ^ 1 error generated. The declaration should be using DECLARE_PER_CPU instead so all attributes stay in sync. Cc: stable@vger.kernel.org Fixes: fc02735b14ff ("KVM: VMX: Prevent guest RSB poisoning attacks with eI= BRS") Reported-by: kernel test robot Signed-off-by: Nathan Chancellor Signed-off-by: Linus Torvalds Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/include/asm/nospec-branch.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -11,6 +11,7 @@ #include #include #include +#include =20 /* * This should be used immediately before a retpoline alternative. It tells @@ -296,7 +297,7 @@ static inline void indirect_branch_predi =20 /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; -extern u64 x86_spec_ctrl_current; +DECLARE_PER_CPU(u64, x86_spec_ctrl_current); extern void write_spec_ctrl_current(u64 val, bool force); extern u64 spec_ctrl_current(void); From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0615CC433FE for ; Wed, 5 Oct 2022 11:37:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230223AbiJELhI (ORCPT ); Wed, 5 Oct 2022 07:37:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36534 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230205AbiJELge (ORCPT ); Wed, 5 Oct 2022 07:36:34 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 128D878BD1; Wed, 5 Oct 2022 04:34:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1047661645; Wed, 5 Oct 2022 11:34:07 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 214D1C433C1; Wed, 5 Oct 2022 11:34:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969646; bh=t65oprr7S3lYsoRHe2IrBosQUoaTsvjs7jgWRldnO9Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=FMTZnXKGYhAfGQ1BXyRTjqHizGZOXq+XnNiIx9x8k8sEbUfCncVcT2aTpMFkNOD1G Gu11+GO/tr3B1UwLGiK24k0UA0CWzra4FP1v8s8rVelQgXzoMv3dIexXrHvEhj9DyL 2a7aF3oMA1V8wIWRTDdPpYiIKke+3Kafy8ZH8xNc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Pawan Gupta , "Peter Zijlstra (Intel)" , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 36/51] x86/bugs: Warn when "ibrs" mitigation is selected on Enhanced IBRS parts Date: Wed, 5 Oct 2022 13:32:24 +0200 Message-Id: <20221005113211.975317660@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Pawan Gupta commit eb23b5ef9131e6d65011de349a4d25ef1b3d4314 upstream. IBRS mitigation for spectre_v2 forces write to MSR_IA32_SPEC_CTRL at every kernel entry/exit. On Enhanced IBRS parts setting MSR_IA32_SPEC_CTRL[IBRS] only once at boot is sufficient. MSR writes at every kernel entry/exit incur unnecessary performance loss. When Enhanced IBRS feature is present, print a warning about this unnecessary performance loss. Signed-off-by: Pawan Gupta Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Thadeu Lima de Souza Cascardo Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/2a5eaf54583c2bfe0edc4fea64006656256cca17.16= 57814857.git.pawan.kumar.gupta@linux.intel.com Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- arch/x86/kernel/cpu/bugs.c | 3 +++ 1 file changed, 3 insertions(+) --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -851,6 +851,7 @@ static inline const char *spectre_v2_mod #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommend= ed for this CPU, data leaks possible!\n" #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled w= ith eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n" #define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF i= s enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spe= ctre v2 BHB attacks!\n" +#define SPECTRE_V2_IBRS_PERF_MSG "WARNING: IBRS mitigation selected on Enh= anced IBRS CPU, this may cause unnecessary performance loss\n" =20 #ifdef CONFIG_BPF_SYSCALL void unpriv_ebpf_notify(int new_state) @@ -1277,6 +1278,8 @@ static void __init spectre_v2_select_mit =20 case SPECTRE_V2_IBRS: setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS); + if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) + pr_warn(SPECTRE_V2_IBRS_PERF_MSG); break; =20 case SPECTRE_V2_LFENCE: From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6154EC433F5 for ; Wed, 5 Oct 2022 11:37:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230214AbiJELhR (ORCPT ); Wed, 5 Oct 2022 07:37:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230145AbiJELgs (ORCPT ); Wed, 5 Oct 2022 
07:36:48 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 35592AE7A; Wed, 5 Oct 2022 04:34:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id A06B7B81DB6; Wed, 5 Oct 2022 11:34:10 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F0B6FC433C1; Wed, 5 Oct 2022 11:34:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969649; bh=B4sZEX166mWT5BaNYP1vbS3IZSZpdO5+//a7udEGz5Q=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VkXnbzrHcI0ogCsb3HhjXYCbyV3bhhgA0nkbkDkB8lyHsba1h8zQsVOdGeEEHclZF jEfJmxbzbmGLX+T7FdEvwjpzHAMhLidZ4qv4KzCJs2ZLixsnymyyvbU3o1HAcPcwjg bqv8ElF3VrFHNDQg696a1Hy8c36EDXZsHb4rK1U4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Daniel Sneddon , Pawan Gupta , Borislav Petkov , Thadeu Lima de Souza Cascardo Subject: [PATCH 5.4 37/51] x86/speculation: Add RSB VM Exit protections Date: Wed, 5 Oct 2022 13:32:25 +0200 Message-Id: <20221005113212.024639083@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Daniel Sneddon commit 2b1299322016731d56807aa49254a5ea3080b6b3 upstream. tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as documented for RET instructions after VM exits. Mitigate it with a new one-entry RSB stuffing mechanism and a new LFENCE. =3D=3D Background =3D=3D Indirect Branch Restricted Speculation (IBRS) was designed to help mitigate Branch Target Injection and Speculative Store Bypass, i.e. Spectre, attacks. IBRS prevents software run in less privileged modes from affecting branch prediction in more privileged modes. IBRS requires the MSR to be written on every privilege level change. To overcome some of the performance issues of IBRS, Enhanced IBRS was introduced. eIBRS is an "always on" IBRS, in other words, just turn it on once instead of writing the MSR on every privilege level change. When eIBRS is enabled, more privileged modes should be protected from less privileged modes, including protecting VMMs from guests. =3D=3D Problem =3D=3D Here's a simplification of how guests are run on Linux' KVM: void run_kvm_guest(void) { // Prepare to run guest VMRESUME(); // Clean up after guest runs } The execution flow for that would look something like this to the processor: 1. Host-side: call run_kvm_guest() 2. Host-side: VMRESUME 3. Guest runs, does "CALL guest_function" 4. VM exit, host runs again 5. Host might make some "cleanup" function calls 6. Host-side: RET from run_kvm_guest() Now, when back on the host, there are a couple of possible scenarios of post-guest activity the host needs to do before executing host code: * on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not touched and Linux has to do a 32-entry stuffing. 
* on eIBRS hardware, VM exit with IBRS enabled, or restoring the host IBRS=3D1 shortly after VM exit, has a documented side effect of flushing the RSB except in this PBRSB situation where the software needs to stuff the last RSB entry "by hand". IOW, with eIBRS supported, host RET instructions should no longer be influenced by guest behavior after the host retires a single CALL instruction. However, if the RET instructions are "unbalanced" with CALLs after a VM exit as is the RET in #6, it might speculatively use the address for the instruction after the CALL in #3 as an RSB prediction. This is a problem since the (untrusted) guest controls this address. Balanced CALL/RET instruction pairs such as in step #5 are not affected. =3D=3D Solution =3D=3D The PBRSB issue affects a wide variety of Intel processors which support eIBRS. But not all of them need mitigation. Today, X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e., eIBRS systems which enable legacy IBRS explicitly. However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT and most of them need a new mitigation. Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT. The lighter-weight mitigation performs a CALL instruction which is immediately followed by a speculative execution barrier (INT3). This steers speculative execution to the barrier -- just like a retpoline -- which ensures that speculation can never reach an unbalanced RET. Then, ensure this CALL is retired before continuing execution with an LFENCE. In other words, the window of exposure is opened at VM exit where RET behavior is troublesome. While the window is open, force RSB predictions sampling for RET targets to a dead end at the INT3. Close the window with the LFENCE. There is a subset of eIBRS systems which are not vulnerable to PBRSB. Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO. [ bp: Massage, incorporate review comments from Andy Cooper. 
] Signed-off-by: Daniel Sneddon Co-developed-by: Pawan Gupta Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov [cascardo: no intra-function validation] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- Documentation/admin-guide/hw-vuln/spectre.rst | 8 ++ arch/x86/include/asm/cpufeatures.h | 2=20 arch/x86/include/asm/msr-index.h | 4 + arch/x86/include/asm/nospec-branch.h | 16 ++++ arch/x86/kernel/cpu/bugs.c | 86 +++++++++++++++++++--= ----- arch/x86/kernel/cpu/common.c | 12 +++ arch/x86/kvm/vmx/vmenter.S | 8 +- tools/arch/x86/include/asm/cpufeatures.h | 1=20 8 files changed, 108 insertions(+), 29 deletions(-) --- a/Documentation/admin-guide/hw-vuln/spectre.rst +++ b/Documentation/admin-guide/hw-vuln/spectre.rst @@ -422,6 +422,14 @@ The possible values in this file are: 'RSB filling' Protection of RSB on context switch enabled =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D =20 + - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status: + + =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D + 'PBRSB-eIBRS: SW sequence' CPU is affected and protection of RSB on VM= EXIT enabled + 'PBRSB-eIBRS: Vulnerable' CPU is vulnerable + 'PBRSB-eIBRS: Not affected' CPU is not affected by PBRSB + =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D + Full mitigation might require a microcode update from the CPU vendor. When the necessary microcode is not available, the kernel will report vulnerability. --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -289,6 +289,7 @@ #define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigati= on for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spect= re variant 2 */ +#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit w= hen EIBRS is enabled */ =20 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instruction= s */ @@ -411,6 +412,7 @@ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitiga= ted */ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Process= or MMIO Stale Data vulnerabilities */ #define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */ +#define X86_BUG_EIBRS_PBRSB X86_BUG(27) /* EIBRS is vulnerable to Post Ba= rrier RSB Predictions */ #define X86_BUG_MMIO_UNKNOWN X86_BUG(28) /* CPU is too old and its MMIO S= tale Data status is unknown */ =20 #endif /* _ASM_X86_CPUFEATURES_H */ --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -139,6 +139,10 @@ * are restricted to targets in * kernel. */ +#define ARCH_CAP_PBRSB_NO BIT(24) /* + * Not susceptible to Post-Barrier + * Return Stack Buffer Predictions. 
+					 */
 
 #define MSR_IA32_FLUSH_CMD	0x0000010b
 #define L1D_FLUSH		BIT(0)	/*

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -151,13 +151,27 @@
 #endif
 .endm
 
+.macro ISSUE_UNBALANCED_RET_GUARD
+	call .Lunbalanced_ret_guard_\@
+	int3
+.Lunbalanced_ret_guard_\@:
+	add $(BITS_PER_LONG/8), %_ASM_SP
+	lfence
+.endm
+
 /*
  * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
  * monstrosity above, manually.
  */
-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2
+.ifb \ftr2
 	ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr
+.else
+	ALTERNATIVE_2 "jmp .Lskip_rsb_\@", "", \ftr, "jmp .Lunbalanced_\@", \ftr2
+.endif
 	__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)
+.Lunbalanced_\@:
+	ISSUE_UNBALANCED_RET_GUARD
 .Lskip_rsb_\@:
 .endm
 
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1198,6 +1198,53 @@ static void __init spec_ctrl_disable_ker
 	}
 }
 
+static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
+{
+	/*
+	 * Similar to context switches, there are two types of RSB attacks
+	 * after VM exit:
+	 *
+	 * 1) RSB underflow
+	 *
+	 * 2) Poisoned RSB entry
+	 *
+	 * When retpoline is enabled, both are mitigated by filling/clearing
+	 * the RSB.
+	 *
+	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
+	 * prediction isolation protections, RSB still needs to be cleared
+	 * because of #2. Note that SMEP provides no protection here, unlike
+	 * user-space-poisoned RSB entries.
+	 *
+	 * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
+	 * bug is present then a LITE version of RSB protection is required,
+	 * just a single call needs to retire before a RET is executed.
+	 */
+	switch (mode) {
+	case SPECTRE_V2_NONE:
+		return;
+
+	case SPECTRE_V2_EIBRS_LFENCE:
+	case SPECTRE_V2_EIBRS:
+		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
+			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
+			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
+		}
+		return;
+
+	case SPECTRE_V2_EIBRS_RETPOLINE:
+	case SPECTRE_V2_RETPOLINE:
+	case SPECTRE_V2_LFENCE:
+	case SPECTRE_V2_IBRS:
+		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+		pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
+		return;
+	}
+
+	pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
+	dump_stack();
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -1347,28 +1394,7 @@ static void __init spectre_v2_select_mit
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/*
-	 * Similar to context switches, there are two types of RSB attacks
-	 * after vmexit:
-	 *
-	 * 1) RSB underflow
-	 *
-	 * 2) Poisoned RSB entry
-	 *
-	 * When retpoline is enabled, both are mitigated by filling/clearing
-	 * the RSB.
-	 *
-	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
-	 * prediction isolation protections, RSB still needs to be cleared
-	 * because of #2. Note that SMEP provides no protection here, unlike
-	 * user-space-poisoned RSB entries.
-	 *
-	 * eIBRS, on the other hand, has RSB-poisoning protections, so it
-	 * doesn't need RSB clearing after vmexit.
-	 */
-	if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
-	    boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
-		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
 
 	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
@@ -2108,6 +2134,19 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static char *pbrsb_eibrs_state(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
+		if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) ||
+		    boot_cpu_has(X86_FEATURE_RSB_VMEXIT))
+			return ", PBRSB-eIBRS: SW sequence";
+		else
+			return ", PBRSB-eIBRS: Vulnerable";
+	} else {
+		return ", PBRSB-eIBRS: Not affected";
+	}
+}
+
 static ssize_t spectre_v2_show_state(char *buf)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
@@ -2120,12 +2159,13 @@ static ssize_t spectre_v2_show_state(cha
 	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
 		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
 
-	return sprintf(buf, "%s%s%s%s%s%s\n",
+	return sprintf(buf, "%s%s%s%s%s%s%s\n",
 		       spectre_v2_strings[spectre_v2_enabled],
 		       ibpb_state(),
 		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 		       stibp_state(),
 		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+		       pbrsb_eibrs_state(),
 		       spectre_v2_module_string());
 }
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1025,6 +1025,7 @@ static void identify_cpu_without_cpuid(s
 #define NO_SWAPGS		BIT(6)
 #define NO_ITLB_MULTIHIT	BIT(7)
 #define NO_SPECTRE_V2		BIT(8)
+#define NO_EIBRS_PBRSB		BIT(9)
 #define NO_MMIO			BIT(10)
 
 #define VULNWL(_vendor, _family, _model, _whitelist)	\
@@ -1071,7 +1072,7 @@ static const __initconst struct x86_cpu_
 
 	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
 	VULNWL_INTEL(ATOM_GOLDMONT_D,		NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
-	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
+	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB),
 
 	/*
 	 * Technically, swapgs isn't serializing on AMD (despite it previously
@@ -1081,7 +1082,9 @@ static const __initconst struct x86_cpu_
 	 * good enough for our purposes.
 	 */
 
-	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT),
+	VULNWL_INTEL(ATOM_TREMONT,		NO_EIBRS_PBRSB),
+	VULNWL_INTEL(ATOM_TREMONT_L,		NO_EIBRS_PBRSB),
+	VULNWL_INTEL(ATOM_TREMONT_D,		NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB),
 
 	/* AMD Family 0xf - 0x12 */
 	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO),
@@ -1265,6 +1268,11 @@ static void __init cpu_set_bug_bits(stru
 		setup_force_cpu_bug(X86_BUG_RETBLEED);
 	}
 
+	if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) &&
+	    !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
+	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
+		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
 
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -185,11 +185,13 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL
 	 * entries and (in some cases) RSB underflow.
 	 *
 	 * eIBRS has its own protection against poisoned RSB, so it doesn't
-	 * need the RSB filling sequence.  But it does need to be enabled
-	 * before the first unbalanced RET.
+	 * need the RSB filling sequence.  But it does need to be enabled, and a
+	 * single call to retire, before the first unbalanced RET.
	 */
 
-	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
+	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
+		X86_FEATURE_RSB_VMEXIT_LITE
+
 
 	pop %_ASM_ARG2	/* @flags */
 	pop %_ASM_ARG1	/* @vmx */
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -284,6 +284,7 @@
 #define X86_FEATURE_CQM_MBM_LOCAL	(11*32+ 3) /* LLC Local MBM monitoring */
 #define X86_FEATURE_FENCE_SWAPGS_USER	(11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
 #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+#define X86_FEATURE_RSB_VMEXIT_LITE	(11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
 #define X86_FEATURE_AVX512_BF16		(12*32+ 5) /* AVX512 BFLOAT16 instructions */
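Once applied, the new state surfaces through the standard sysfs
vulnerabilities interface documented in the spectre.rst hunk above. A
minimal sketch of reading it from userspace:

  #include <stdio.h>

  int main(void)
  {
          /* Prints something like "... RSB filling, PBRSB-eIBRS: SW sequence". */
          char line[512];
          FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          if (fgets(line, sizeof(line), f))
                  fputs(line, stdout);
          fclose(f);
          return 0;
  }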
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 38/51] xfs: fix misuse of the XFS_ATTR_INCOMPLETE flag
Date: Wed, 5 Oct 2022 13:32:26 +0200
Message-Id: <20221005113212.064213076@linuxfoundation.org>

From: Christoph Hellwig

commit 780d29057781d986cd87dbbe232cd02876ad430f upstream.

XFS_ATTR_INCOMPLETE is a flag in the on-disk attribute format, and thus
in a different namespace than the ATTR_* flags in xfs_da_args.flags.
Switch to using an XFS_DA_OP_INCOMPLETE flag in op_flags instead.
Without this users might be able to inject this flag into operations
using the attr by handle ioctl.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/libxfs/xfs_attr.c      | 2 +-
 fs/xfs/libxfs/xfs_attr_leaf.c | 4 ++--
 fs/xfs/libxfs/xfs_da_btree.h  | 4 +++-
 fs/xfs/libxfs/xfs_da_format.h | 2 --
 4 files changed, 6 insertions(+), 6 deletions(-)

--- a/fs/xfs/libxfs/xfs_attr.c
+++ b/fs/xfs/libxfs/xfs_attr.c
@@ -1007,7 +1007,7 @@ restart:
 		 * The INCOMPLETE flag means that we will find the "old"
 		 * attr, not the "new" one.
 		 */
-		args->flags |= XFS_ATTR_INCOMPLETE;
+		args->op_flags |= XFS_DA_OP_INCOMPLETE;
 		state = xfs_da_state_alloc();
 		state->args = args;
 		state->mp = mp;
--- a/fs/xfs/libxfs/xfs_attr_leaf.c
+++ b/fs/xfs/libxfs/xfs_attr_leaf.c
@@ -2345,8 +2345,8 @@ xfs_attr3_leaf_lookup_int(
 		 * If we are looking for INCOMPLETE entries, show only those.
 		 * If we are looking for complete entries, show only those.
 		 */
-		if ((args->flags & XFS_ATTR_INCOMPLETE) !=
-		    (entry->flags & XFS_ATTR_INCOMPLETE)) {
+		if (!!(args->op_flags & XFS_DA_OP_INCOMPLETE) !=
+		    !!(entry->flags & XFS_ATTR_INCOMPLETE)) {
 			continue;
 		}
 		if (entry->flags & XFS_ATTR_LOCAL) {
--- a/fs/xfs/libxfs/xfs_da_btree.h
+++ b/fs/xfs/libxfs/xfs_da_btree.h
@@ -82,6 +82,7 @@ typedef struct xfs_da_args {
 #define XFS_DA_OP_OKNOENT	0x0008	/* lookup/add op, ENOENT ok, else die */
 #define XFS_DA_OP_CILOOKUP	0x0010	/* lookup to return CI name if found */
 #define XFS_DA_OP_ALLOCVAL	0x0020	/* lookup to alloc buffer if found */
+#define XFS_DA_OP_INCOMPLETE	0x0040	/* lookup INCOMPLETE attr keys */
 
 #define XFS_DA_OP_FLAGS \
 	{ XFS_DA_OP_JUSTCHECK,	"JUSTCHECK" }, \
@@ -89,7 +90,8 @@ typedef struct xfs_da_args {
 	{ XFS_DA_OP_ADDNAME,	"ADDNAME" }, \
 	{ XFS_DA_OP_OKNOENT,	"OKNOENT" }, \
 	{ XFS_DA_OP_CILOOKUP,	"CILOOKUP" }, \
-	{ XFS_DA_OP_ALLOCVAL,	"ALLOCVAL" }
+	{ XFS_DA_OP_ALLOCVAL,	"ALLOCVAL" }, \
+	{ XFS_DA_OP_INCOMPLETE,	"INCOMPLETE" }
 
 /*
  * Storage for holding state during Btree searches and split/join ops.
--- a/fs/xfs/libxfs/xfs_da_format.h
+++ b/fs/xfs/libxfs/xfs_da_format.h
@@ -740,8 +740,6 @@ struct xfs_attr3_icleaf_hdr {
 
 /*
  * Flags used in the leaf_entry[i].flags field.
- * NOTE: the INCOMPLETE bit must not collide with the flags bits specified
- * on the system call, they are "or"ed together for various operations.
 */
 #define XFS_ATTR_LOCAL_BIT	0	/* attr is stored locally */
 #define XFS_ATTR_ROOT_BIT	1	/* limit access to trusted attrs */
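An editorial note on the !! double negation in the xfs_attr_leaf.c hunk:
it is what makes comparing flags from two different namespaces safe,
because the two INCOMPLETE bits sit at different bit positions. A
stand-alone sketch with made-up flag values:

  #include <stdio.h>

  #define OP_INCOMPLETE   0x0040  /* hypothetical caller-side flag, bit 6 */
  #define DISK_INCOMPLETE 0x0080  /* hypothetical on-disk flag, bit 7 */

  int main(void)
  {
          unsigned op_flags = OP_INCOMPLETE;      /* caller wants incomplete entries */
          unsigned entry_flags = DISK_INCOMPLETE; /* entry is marked incomplete */

          /* Raw masked values: 0x40 != 0x80, a bogus "mismatch" even
           * though both sides mean "incomplete". */
          printf("raw:        %s\n",
                 (op_flags & OP_INCOMPLETE) != (entry_flags & DISK_INCOMPLETE) ?
                 "mismatch" : "match");

          /* Normalizing with !! compares the meaning (0 or 1), not the
           * bit position -- which is what the patched code does. */
          printf("normalized: %s\n",
                 !!(op_flags & OP_INCOMPLETE) != !!(entry_flags & DISK_INCOMPLETE) ?
                 "mismatch" : "match");
          return 0;
  }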
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 39/51] xfs: introduce XFS_MAX_FILEOFF
Date: Wed, 5 Oct 2022 13:32:27 +0200
Message-Id: <20221005113212.104481774@linuxfoundation.org>

From: "Darrick J. Wong"

commit a5084865524dee1fe8ea1fee17c60b4369ad4f5e upstream.

Introduce a new #define for the maximum supported file block offset.
We'll use this in the next patch to make it more obvious that we're
doing some operation for all possible inode fork mappings after a given
offset. We can't use ULLONG_MAX here because bunmapi uses that to
detect when it's done.

Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/libxfs/xfs_format.h | 7 +++++++
 fs/xfs/xfs_reflink.c       | 3 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

--- a/fs/xfs/libxfs/xfs_format.h
+++ b/fs/xfs/libxfs/xfs_format.h
@@ -1540,6 +1540,13 @@ typedef struct xfs_bmdr_block {
 #define BMBT_BLOCKCOUNT_BITLEN	21
 
 #define BMBT_STARTOFF_MASK	((1ULL << BMBT_STARTOFF_BITLEN) - 1)
+#define BMBT_BLOCKCOUNT_MASK	((1ULL << BMBT_BLOCKCOUNT_BITLEN) - 1)
+
+/*
+ * bmbt records have a file offset (block) field that is 54 bits wide, so this
+ * is the largest xfs_fileoff_t that we ever expect to see.
+ */
+#define XFS_MAX_FILEOFF		(BMBT_STARTOFF_MASK + BMBT_BLOCKCOUNT_MASK)
 
 typedef struct xfs_bmbt_rec {
 	__be64			l0, l1;
--- a/fs/xfs/xfs_reflink.c
+++ b/fs/xfs/xfs_reflink.c
@@ -1544,7 +1544,8 @@ xfs_reflink_clear_inode_flag(
 	 * We didn't find any shared blocks so turn off the reflink flag.
 	 * First, get rid of any leftover CoW mappings.
 	 */
-	error = xfs_reflink_cancel_cow_blocks(ip, tpp, 0, NULLFILEOFF, true);
+	error = xfs_reflink_cancel_cow_blocks(ip, tpp, 0, XFS_MAX_FILEOFF,
+			true);
 	if (error)
 		return error;
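For concreteness, the new constant evaluates as follows (a sketch using
the BITLEN values from the hunk above; the startoff field is 54 bits per
its comment):

  #include <stdio.h>

  #define BMBT_STARTOFF_BITLEN    54
  #define BMBT_BLOCKCOUNT_BITLEN  21
  #define BMBT_STARTOFF_MASK      ((1ULL << BMBT_STARTOFF_BITLEN) - 1)
  #define BMBT_BLOCKCOUNT_MASK    ((1ULL << BMBT_BLOCKCOUNT_BITLEN) - 1)
  #define XFS_MAX_FILEOFF         (BMBT_STARTOFF_MASK + BMBT_BLOCKCOUNT_MASK)

  int main(void)
  {
          /* Upper bound on any block offset a single bmbt mapping can
           * describe: largest startoff plus largest blockcount. */
          printf("XFS_MAX_FILEOFF = 0x%llx\n", XFS_MAX_FILEOFF);
          /* ~0ULL stays reserved as the NULLFILEOFF/"done" sentinel that
           * bunmapi depends on, so it cannot double as a limit. */
          printf("NULLFILEOFF     = 0x%llx\n", ~0ULL);
          return 0;
  }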
Wong" , Chandan Babu R Subject: [PATCH 5.4 40/51] xfs: truncate should remove all blocks, not just to the end of the page cache Date: Wed, 5 Oct 2022 13:32:28 +0200 Message-Id: <20221005113212.143617510@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Darrick J. Wong" commit 4bbb04abb4ee2e1f7d65e52557ba1c4038ea43ed upstream. xfs_itruncate_extents_flags() is supposed to unmap every block in a file from EOF onwards. Oddly, it uses s_maxbytes as the upper limit to the bunmapi range, even though s_maxbytes reflects the highest offset the pagecache can support, not the highest offset that XFS supports. The result of this confusion is that if you create a 20T file on a 64-bit machine, mount the filesystem on a 32-bit machine, and remove the file, we leak everything above 16T. Fix this by capping the bunmapi request at the maximum possible block offset, not s_maxbytes. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Acked-by: Darrick J. Wong Signed-off-by: Chandan Babu R Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- fs/xfs/xfs_inode.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -1513,7 +1513,6 @@ xfs_itruncate_extents_flags( struct xfs_mount *mp =3D ip->i_mount; struct xfs_trans *tp =3D *tpp; xfs_fileoff_t first_unmap_block; - xfs_fileoff_t last_block; xfs_filblks_t unmap_len; int error =3D 0; int done =3D 0; @@ -1536,21 +1535,22 @@ xfs_itruncate_extents_flags( * the end of the file (in a crash where the space is allocated * but the inode size is not yet updated), simply remove any * blocks which show up between the new EOF and the maximum - * possible file size. If the first block to be removed is - * beyond the maximum file size (ie it is the same as last_block), - * then there is nothing to do. + * possible file size. + * + * We have to free all the blocks to the bmbt maximum offset, even if + * the page cache can't scale that far. */ first_unmap_block =3D XFS_B_TO_FSB(mp, (xfs_ufsize_t)new_size); - last_block =3D XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes); - if (first_unmap_block =3D=3D last_block) + if (first_unmap_block >=3D XFS_MAX_FILEOFF) { + WARN_ON_ONCE(first_unmap_block > XFS_MAX_FILEOFF); return 0; + } =20 - ASSERT(first_unmap_block < last_block); - unmap_len =3D last_block - first_unmap_block + 1; - while (!done) { + unmap_len =3D XFS_MAX_FILEOFF - first_unmap_block + 1; + while (unmap_len > 0) { ASSERT(tp->t_firstblock =3D=3D NULLFSBLOCK); - error =3D xfs_bunmapi(tp, ip, first_unmap_block, unmap_len, flags, - XFS_ITRUNC_MAX_EXTENTS, &done); + error =3D __xfs_bunmapi(tp, ip, first_unmap_block, &unmap_len, + flags, XFS_ITRUNC_MAX_EXTENTS); if (error) goto out; =20 @@ -1570,7 +1570,7 @@ xfs_itruncate_extents_flags( if (whichfork =3D=3D XFS_DATA_FORK) { /* Remove all pending CoW reservations. 
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 41/51] xfs: fix s_maxbytes computation on 32-bit kernels
Date: Wed, 5 Oct 2022 13:32:29 +0200
Message-Id: <20221005113212.190566979@linuxfoundation.org>

From: "Darrick J. Wong"

commit 932befe39ddea29cf47f4f1dc080d3dba668f0ca upstream.

I observed a hang in generic/308 while running fstests on an i686
kernel. The hang occurred when trying to purge the pagecache on a large
sparse file that had a page created past MAX_LFS_FILESIZE, which caused
an integer overflow in the pagecache xarray and resulted in an infinite
loop.

I then noticed that Linus changed the definition of MAX_LFS_FILESIZE in
commit 0cc3b0ec23ce ("Clarify (and fix) MAX_LFS_FILESIZE macros") so
that it is now one page short of the maximum page index on 32-bit
kernels. Because the XFS function to compute max offset open-codes the
2005-era MAX_LFS_FILESIZE computation and neither the vfs nor mm
perform any sanity checking of s_maxbytes, the code in generic/308 can
create a page above the pagecache's limit and kaboom.

Fix all this by setting s_maxbytes to MAX_LFS_FILESIZE directly and
aborting the mount with a warning if our assumptions ever break. I have
no answer for why this seems to have been broken for years and nobody
noticed.

Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/xfs_super.c | 48 +++++++++++++++++++---------------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -512,32 +512,6 @@ xfs_showargs(
 		seq_puts(m, ",noquota");
 }
 
-static uint64_t
-xfs_max_file_offset(
-	unsigned int		blockshift)
-{
-	unsigned int		pagefactor = 1;
-	unsigned int		bitshift = BITS_PER_LONG - 1;
-
-	/* Figure out maximum filesize, on Linux this can depend on
-	 * the filesystem blocksize (on 32 bit platforms).
-	 * __block_write_begin does this in an [unsigned] long long...
-	 *      page->index << (PAGE_SHIFT - bbits)
-	 * So, for page sized blocks (4K on 32 bit platforms),
-	 * this wraps at around 8Tb (hence MAX_LFS_FILESIZE which is
-	 *      (((u64)PAGE_SIZE << (BITS_PER_LONG-1))-1)
-	 * but for smaller blocksizes it is less (bbits = log2 bsize).
-	 */
-
-#if BITS_PER_LONG == 32
-	ASSERT(sizeof(sector_t) == 8);
-	pagefactor = PAGE_SIZE;
-	bitshift = BITS_PER_LONG;
-#endif
-
-	return (((uint64_t)pagefactor) << bitshift) - 1;
-}
-
 /*
  * Set parameters for inode allocation heuristics, taking into account
  * filesystem size and inode32/inode64 mount options; i.e. specifically
@@ -1650,6 +1624,26 @@ xfs_fs_fill_super(
 	if (error)
 		goto out_free_sb;
 
+	/*
+	 * XFS block mappings use 54 bits to store the logical block offset.
+	 * This should suffice to handle the maximum file size that the VFS
+	 * supports (currently 2^63 bytes on 64-bit and ULONG_MAX << PAGE_SHIFT
+	 * bytes on 32-bit), but as XFS and VFS have gotten the s_maxbytes
+	 * calculation wrong on 32-bit kernels in the past, we'll add a WARN_ON
+	 * to check this assertion.
+	 *
+	 * Avoid integer overflow by comparing the maximum bmbt offset to the
+	 * maximum pagecache offset in units of fs blocks.
+	 */
+	if (XFS_B_TO_FSBT(mp, MAX_LFS_FILESIZE) > XFS_MAX_FILEOFF) {
+		xfs_warn(mp,
+"MAX_LFS_FILESIZE block offset (%llu) exceeds extent map maximum (%llu)!",
+			 XFS_B_TO_FSBT(mp, MAX_LFS_FILESIZE),
+			 XFS_MAX_FILEOFF);
+		error = -EINVAL;
+		goto out_free_sb;
+	}
+
 	error = xfs_filestream_mount(mp);
 	if (error)
 		goto out_free_sb;
@@ -1661,7 +1655,7 @@ xfs_fs_fill_super(
 	sb->s_magic = XFS_SUPER_MAGIC;
 	sb->s_blocksize = mp->m_sb.sb_blocksize;
 	sb->s_blocksize_bits = ffs(sb->s_blocksize) - 1;
-	sb->s_maxbytes = xfs_max_file_offset(sb->s_blocksize_bits);
+	sb->s_maxbytes = MAX_LFS_FILESIZE;
 	sb->s_max_links = XFS_MAXLINK;
 	sb->s_time_gran = 1;
 	sb->s_time_min = S32_MIN;
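The mount-time check compares the two limits in units of filesystem
blocks precisely because converting XFS_MAX_FILEOFF to bytes could
overflow a 64-bit byte count. A sketch with hypothetical 4k-block,
32-bit numbers:

  #include <stdio.h>

  int main(void)
  {
          unsigned long long blocklog = 12;                          /* 4k blocks */
          unsigned long long max_lfs_filesize = 0xFFFFFFFFULL << 12; /* 32-bit pagecache limit */
          unsigned long long xfs_max_fileoff =
                  ((1ULL << 54) - 1) + ((1ULL << 21) - 1);           /* bmbt limit, in blocks */

          unsigned long long pagecache_blocks = max_lfs_filesize >> blocklog;

          printf("pagecache limit: %llu blocks\n", pagecache_blocks);
          printf("bmbt limit:      %llu blocks\n", xfs_max_fileoff);
          printf("mount would %s\n",
                 pagecache_blocks > xfs_max_fileoff ? "fail" : "proceed");
          return 0;
  }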
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 42/51] xfs: fix IOCB_NOWAIT handling in xfs_file_dio_aio_read
Date: Wed, 5 Oct 2022 13:32:30 +0200
Message-Id: <20221005113212.237973105@linuxfoundation.org>

From: Christoph Hellwig

commit 7b53b868a1812a9a6ab5e69249394bd37f29ce2c upstream.

Direct I/O reads can also be used with RWF_NOWAIT & co. Fix the inode
locking in xfs_file_dio_aio_read to take IOCB_NOWAIT into account.

Signed-off-by: Christoph Hellwig
Reviewed-by: Carlos Maiolino
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/xfs_file.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -187,7 +187,12 @@ xfs_file_dio_aio_read(
 
 	file_accessed(iocb->ki_filp);
 
-	xfs_ilock(ip, XFS_IOLOCK_SHARED);
+	if (iocb->ki_flags & IOCB_NOWAIT) {
+		if (!xfs_ilock_nowait(ip, XFS_IOLOCK_SHARED))
+			return -EAGAIN;
+	} else {
+		xfs_ilock(ip, XFS_IOLOCK_SHARED);
+	}
 	ret = iomap_dio_rw(iocb, to, &xfs_iomap_ops, NULL);
 	xfs_iunlock(ip, XFS_IOLOCK_SHARED);
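From userspace, the path this fixes is a preadv2() direct-I/O read with
RWF_NOWAIT, which should now fail fast with EAGAIN instead of blocking
on the inode lock. A minimal sketch (hypothetical file name, error
handling trimmed):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/uio.h>

  int main(void)
  {
          char buf[4096] __attribute__((aligned(4096)));
          struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
          int fd = open("testfile", O_RDONLY | O_DIRECT);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          ssize_t ret = preadv2(fd, &iov, 1, 0, RWF_NOWAIT);
          if (ret < 0 && errno == EAGAIN)
                  printf("would block; try again later\n");
          else
                  printf("read %zd bytes\n", ret);
          return 0;
  }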
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 43/51] xfs: refactor remote attr value buffer invalidation
Date: Wed, 5 Oct 2022 13:32:31 +0200
Message-Id: <20221005113212.274250332@linuxfoundation.org>

From: "Darrick J. Wong"

commit 8edbb26b06023de31ad7d4c9b984d99f66577929 upstream.

[Replaced XFS_IS_CORRUPT() calls with ASSERT() for 5.4.y backport]

Hoist the code that invalidates remote extended attribute value buffers
into a separate helper function. This prepares us for a memory
corruption fix in the next patch.

Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/libxfs/xfs_attr_remote.c | 48 ++++++++++++++++++++++++----------------
 fs/xfs/libxfs/xfs_attr_remote.h |  2 +
 2 files changed, 31 insertions(+), 19 deletions(-)

--- a/fs/xfs/libxfs/xfs_attr_remote.c
+++ b/fs/xfs/libxfs/xfs_attr_remote.c
@@ -551,6 +551,32 @@ xfs_attr_rmtval_set(
 	return 0;
 }
 
+/* Mark stale any incore buffers for the remote value. */
+int
+xfs_attr_rmtval_stale(
+	struct xfs_inode	*ip,
+	struct xfs_bmbt_irec	*map,
+	xfs_buf_flags_t		incore_flags)
+{
+	struct xfs_mount	*mp = ip->i_mount;
+	struct xfs_buf		*bp;
+
+	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+
+	ASSERT((map->br_startblock != DELAYSTARTBLOCK) &&
+	       (map->br_startblock != HOLESTARTBLOCK));
+
+	bp = xfs_buf_incore(mp->m_ddev_targp,
+			XFS_FSB_TO_DADDR(mp, map->br_startblock),
+			XFS_FSB_TO_BB(mp, map->br_blockcount), incore_flags);
+	if (bp) {
+		xfs_buf_stale(bp);
+		xfs_buf_relse(bp);
+	}
+
+	return 0;
+}
+
 /*
  * Remove the value associated with an attribute by deleting the
  * out-of-line buffer that it is stored on.
@@ -559,7 +585,6 @@ int
 xfs_attr_rmtval_remove(
 	struct xfs_da_args	*args)
 {
-	struct xfs_mount	*mp = args->dp->i_mount;
 	xfs_dablk_t		lblkno;
 	int			blkcnt;
 	int			error;
@@ -574,9 +599,6 @@ xfs_attr_rmtval_remove(
 	blkcnt = args->rmtblkcnt;
 	while (blkcnt > 0) {
 		struct xfs_bmbt_irec	map;
-		struct xfs_buf		*bp;
-		xfs_daddr_t		dblkno;
-		int			dblkcnt;
 		int			nmap;
 
 		/*
@@ -588,21 +610,9 @@ xfs_attr_rmtval_remove(
 		if (error)
 			return error;
 		ASSERT(nmap == 1);
-		ASSERT((map.br_startblock != DELAYSTARTBLOCK) &&
-		       (map.br_startblock != HOLESTARTBLOCK));
-
-		dblkno = XFS_FSB_TO_DADDR(mp, map.br_startblock),
-		dblkcnt = XFS_FSB_TO_BB(mp, map.br_blockcount);
-
-		/*
-		 * If the "remote" value is in the cache, remove it.
-		 */
-		bp = xfs_buf_incore(mp->m_ddev_targp, dblkno, dblkcnt, XBF_TRYLOCK);
-		if (bp) {
-			xfs_buf_stale(bp);
-			xfs_buf_relse(bp);
-			bp = NULL;
-		}
+		error = xfs_attr_rmtval_stale(args->dp, &map, XBF_TRYLOCK);
+		if (error)
+			return error;
 
 		lblkno += map.br_blockcount;
 		blkcnt -= map.br_blockcount;
--- a/fs/xfs/libxfs/xfs_attr_remote.h
+++ b/fs/xfs/libxfs/xfs_attr_remote.h
@@ -11,5 +11,7 @@ int xfs_attr3_rmt_blocks(struct xfs_moun
 int xfs_attr_rmtval_get(struct xfs_da_args *args);
 int xfs_attr_rmtval_set(struct xfs_da_args *args);
 int xfs_attr_rmtval_remove(struct xfs_da_args *args);
+int xfs_attr_rmtval_stale(struct xfs_inode *ip, struct xfs_bmbt_irec *map,
+		xfs_buf_flags_t incore_flags);
 
 #endif /* __XFS_ATTR_REMOTE_H__ */
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 44/51] xfs: fix memory corruption during remote attr value buffer invalidation
Date: Wed, 5 Oct 2022 13:32:32 +0200
Message-Id: <20221005113212.314687809@linuxfoundation.org>

From: "Darrick J. Wong"

commit e8db2aafcedb7d88320ab83f1000f1606b26d4d7 upstream.

[Replaced XFS_IS_CORRUPT() calls with ASSERT() for 5.4.y backport]

While running generic/103, I observed what looks like memory corruption
and (with slub debugging turned on) a slub redzone warning on i386 when
inactivating an inode with a 64k remote attr value.
On a v5 filesystem, maximally sized remote attr values require one
block more than 64k worth of space, to hold both the remote attribute
value and the per-block value headers (64 bytes each). On a 4k block
filesystem this results in a 68k buffer; on a 64k block filesystem,
this would be a 128k buffer. Note that even though we'll never use more
than 65,600 bytes of this buffer, XFS_MAX_BLOCKSIZE is 64k.

This is a problem because the definition of struct xfs_buf_log_format
allows for XFS_MAX_BLOCKSIZE worth of dirty bitmap (64k). On i386 when
we invalidate a remote attribute, xfs_trans_binval zeroes all 68k worth
of the dirty map, writing right off the end of the log item and
corrupting memory. We've gotten away with this on x86_64 for years
because the compiler inserts a u32 padding on the end of struct
xfs_buf_log_format.

Fortunately for us, remote attribute values are written to disk with
xfs_bwrite(), which is to say that they are not logged. Fix the problem
by removing all places where we could end up creating a buffer log item
for a remote attribute value and leave a note explaining why. Next,
replace the open-coded buffer invalidation with a call to the helper we
created in the previous patch that does better checking for bad
metadata before marking the buffer stale.

Signed-off-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
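The 68k/128k figures check out arithmetically (a sketch, taking the
64-byte per-block header size quoted above):

  #include <stdio.h>

  static unsigned long long rmt_buf_bytes(unsigned long long blksz)
  {
          unsigned long long value = 65536;           /* 64k attr value */
          unsigned long long per_block = blksz - 64;  /* payload per block */
          unsigned long long blocks = (value + per_block - 1) / per_block;

          return blocks * blksz;
  }

  int main(void)
  {
          /* 17 x 4k = 68k and 2 x 64k = 128k -- both larger than the 64k
           * dirty bitmap that struct xfs_buf_log_format can describe. */
          printf("4k blocks:  %lluk\n", rmt_buf_bytes(4096) >> 10);
          printf("64k blocks: %lluk\n", rmt_buf_bytes(65536) >> 10);
          return 0;
  }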
 fs/xfs/libxfs/xfs_attr_remote.c | 37 ++++++++++++++++++++++++-----
 fs/xfs/xfs_attr_inactive.c      | 47 +++++++++++-----------------------------
 2 files changed, 44 insertions(+), 40 deletions(-)

--- a/fs/xfs/libxfs/xfs_attr_remote.c
+++ b/fs/xfs/libxfs/xfs_attr_remote.c
@@ -25,6 +25,23 @@
 #define ATTR_RMTVALUE_MAPSIZE	1	/* # of map entries at once */
 
 /*
+ * Remote Attribute Values
+ * =======================
+ *
+ * Remote extended attribute values are conceptually simple -- they're written
+ * to data blocks mapped by an inode's attribute fork, and they have an upper
+ * size limit of 64k. Setting a value does not involve the XFS log.
+ *
+ * However, on a v5 filesystem, maximally sized remote attr values require one
+ * block more than 64k worth of space to hold both the remote attribute value
+ * header (64 bytes).  On a 4k block filesystem this results in a 68k buffer;
+ * on a 64k block filesystem, this would be a 128k buffer.  Note that the log
+ * format can only handle a dirty buffer of XFS_MAX_BLOCKSIZE length (64k).
+ * Therefore, we /must/ ensure that remote attribute value buffers never touch
+ * the logging system and therefore never have a log item.
+ */
+
+/*
  * Each contiguous block has a header, so it is not just a simple attribute
  * length to FSB conversion.
 */
@@ -400,17 +417,25 @@ xfs_attr_rmtval_get(
 			       (map[i].br_startblock != HOLESTARTBLOCK));
 			dblkno = XFS_FSB_TO_DADDR(mp, map[i].br_startblock);
 			dblkcnt = XFS_FSB_TO_BB(mp, map[i].br_blockcount);
-			error = xfs_trans_read_buf(mp, args->trans,
-						   mp->m_ddev_targp,
-						   dblkno, dblkcnt, 0, &bp,
-						   &xfs_attr3_rmt_buf_ops);
-			if (error)
+			bp = xfs_buf_read(mp->m_ddev_targp, dblkno, dblkcnt, 0,
+					&xfs_attr3_rmt_buf_ops);
+			if (!bp)
+				return -ENOMEM;
+			error = bp->b_error;
+			if (error) {
+				xfs_buf_ioerror_alert(bp, __func__);
+				xfs_buf_relse(bp);
+
+				/* bad CRC means corrupted metadata */
+				if (error == -EFSBADCRC)
+					error = -EFSCORRUPTED;
 				return error;
+			}
 
 			error = xfs_attr_rmtval_copyout(mp, bp, args->dp->i_ino,
 							&offset, &valuelen,
 							&dst);
-			xfs_trans_brelse(args->trans, bp);
+			xfs_buf_relse(bp);
 			if (error)
 				return error;
 
--- a/fs/xfs/xfs_attr_inactive.c
+++ b/fs/xfs/xfs_attr_inactive.c
@@ -25,22 +25,20 @@
 #include "xfs_error.h"
 
 /*
- * Look at all the extents for this logical region,
- * invalidate any buffers that are incore/in transactions.
+ * Invalidate any incore buffers associated with this remote attribute value
+ * extent.  We never log remote attribute value buffers, which means that they
+ * won't be attached to a transaction and are therefore safe to mark stale.
+ * The actual bunmapi will be taken care of later.
 */
 STATIC int
-xfs_attr3_leaf_freextent(
-	struct xfs_trans	**trans,
+xfs_attr3_rmt_stale(
 	struct xfs_inode	*dp,
 	xfs_dablk_t		blkno,
 	int			blkcnt)
 {
 	struct xfs_bmbt_irec	map;
-	struct xfs_buf		*bp;
 	xfs_dablk_t		tblkno;
-	xfs_daddr_t		dblkno;
 	int			tblkcnt;
-	int			dblkcnt;
 	int			nmap;
 	int			error;
 
@@ -57,35 +55,18 @@ xfs_attr3_leaf_freextent(
 		nmap = 1;
 		error = xfs_bmapi_read(dp, (xfs_fileoff_t)tblkno, tblkcnt,
 				       &map, &nmap, XFS_BMAPI_ATTRFORK);
-		if (error) {
+		if (error)
 			return error;
-		}
 		ASSERT(nmap == 1);
-		ASSERT(map.br_startblock != DELAYSTARTBLOCK);
 
 		/*
-		 * If it's a hole, these are already unmapped
-		 * so there's nothing to invalidate.
+		 * Mark any incore buffers for the remote value as stale.  We
+		 * never log remote attr value buffers, so the buffer should be
+		 * easy to kill.
 		 */
-		if (map.br_startblock != HOLESTARTBLOCK) {
-
-			dblkno = XFS_FSB_TO_DADDR(dp->i_mount,
-						  map.br_startblock);
-			dblkcnt = XFS_FSB_TO_BB(dp->i_mount,
-						map.br_blockcount);
-			bp = xfs_trans_get_buf(*trans,
-					dp->i_mount->m_ddev_targp,
-					dblkno, dblkcnt, 0);
-			if (!bp)
-				return -ENOMEM;
-			xfs_trans_binval(*trans, bp);
-			/*
-			 * Roll to next transaction.
-			 */
-			error = xfs_trans_roll_inode(trans, dp);
-			if (error)
-				return error;
-		}
+		error = xfs_attr_rmtval_stale(dp, &map, 0);
+		if (error)
+			return error;
 
 		tblkno += map.br_blockcount;
 		tblkcnt -= map.br_blockcount;
@@ -174,9 +155,7 @@ xfs_attr3_leaf_inactive(
 	 */
 	error = 0;
 	for (lp = list, i = 0; i < count; i++, lp++) {
-		tmp = xfs_attr3_leaf_freextent(trans, dp,
-				lp->valueblk, lp->valuelen);
-
+		tmp = xfs_attr3_rmt_stale(dp, lp->valueblk, lp->valuelen);
 		if (error == 0)
 			error = tmp;	/* save only the 1st errno */
 	}
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
Subject: [PATCH 5.4 45/51] xfs: move incore structures out of xfs_da_format.h
Date: Wed, 5 Oct 2022 13:32:33 +0200
Message-Id: <20221005113212.349623056@linuxfoundation.org>

From: Christoph Hellwig

commit a39f089a25e75c3d17b955d8eb8bc781f23364f3 upstream.

Move the abstract in-memory version of various btree block headers out
of xfs_da_format.h as they aren't on-disk formats.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/libxfs/xfs_attr_leaf.h | 23 ++++++++++++++
 fs/xfs/libxfs/xfs_da_btree.h  | 13 +++++++++
 fs/xfs/libxfs/xfs_da_format.c |  1 
 fs/xfs/libxfs/xfs_da_format.h | 57 ------------------------------------------
 fs/xfs/libxfs/xfs_dir2.h      |  2 +
 fs/xfs/libxfs/xfs_dir2_priv.h | 19 ++++++++++++
 6 files changed, 58 insertions(+), 57 deletions(-)

--- a/fs/xfs/libxfs/xfs_attr_leaf.h
+++ b/fs/xfs/libxfs/xfs_attr_leaf.h
@@ -17,6 +17,29 @@ struct xfs_inode;
 struct xfs_trans;
 
 /*
+ * Incore version of the attribute leaf header.
+ */
+struct xfs_attr3_icleaf_hdr {
+	uint32_t	forw;
+	uint32_t	back;
+	uint16_t	magic;
+	uint16_t	count;
+	uint16_t	usedbytes;
+	/*
+	 * Firstused is 32-bit here instead of 16-bit like the on-disk variant
+	 * to support maximum fsb size of 64k without overflow issues throughout
+	 * the attr code. Instead, the overflow condition is handled on
+	 * conversion to/from disk.
+	 */
+	uint32_t	firstused;
+	__u8		holes;
+	struct {
+		uint16_t	base;
+		uint16_t	size;
+	} freemap[XFS_ATTR_LEAF_MAPSIZE];
+};
+
+/*
  * Used to keep a list of "remote value" extents when unlinking an inode.
 */
 typedef struct xfs_attr_inactive_list {
--- a/fs/xfs/libxfs/xfs_da_btree.h
+++ b/fs/xfs/libxfs/xfs_da_btree.h
@@ -127,6 +127,19 @@ typedef struct xfs_da_state {
 } xfs_da_state_t;
 
 /*
+ * In-core version of the node header to abstract the differences in the v2 and
+ * v3 disk format of the headers. Callers need to convert to/from disk format as
+ * appropriate.
+ */
+struct xfs_da3_icnode_hdr {
+	uint32_t	forw;
+	uint32_t	back;
+	uint16_t	magic;
+	uint16_t	count;
+	uint16_t	level;
+};
+
+/*
  * Utility macros to aid in logging changed structure fields.
 */
 #define XFS_DA_LOGOFF(BASE, ADDR)	((char *)(ADDR) - (char *)(BASE))
--- a/fs/xfs/libxfs/xfs_da_format.c
+++ b/fs/xfs/libxfs/xfs_da_format.c
@@ -13,6 +13,7 @@
 #include "xfs_mount.h"
 #include "xfs_inode.h"
 #include "xfs_dir2.h"
+#include "xfs_dir2_priv.h"
 
 /*
  * Shortform directory ops
--- a/fs/xfs/libxfs/xfs_da_format.h
+++ b/fs/xfs/libxfs/xfs_da_format.h
@@ -94,19 +94,6 @@
 };
 
 /*
- * In-core version of the node header to abstract the differences in the v2 and
- * v3 disk format of the headers. Callers need to convert to/from disk format as
- * appropriate.
- */
-struct xfs_da3_icnode_hdr {
-	uint32_t	forw;
-	uint32_t	back;
-	uint16_t	magic;
-	uint16_t	count;
-	uint16_t	level;
-};
-
-/*
  * Directory version 2.
 *
 * There are 4 possible formats:
@@ -434,14 +421,6 @@ struct xfs_dir3_leaf_hdr {
 	__be32			pad;		/* 64 bit alignment */
 };
 
-struct xfs_dir3_icleaf_hdr {
-	uint32_t	forw;
-	uint32_t	back;
-	uint16_t	magic;
-	uint16_t	count;
-	uint16_t	stale;
-};
-
 /*
  * Leaf block entry.
 */
@@ -521,19 +500,6 @@ struct xfs_dir3_free {
 #define XFS_DIR3_FREE_CRC_OFF  offsetof(struct xfs_dir3_free, hdr.hdr.crc)
 
 /*
- * In core version of the free block header, abstracted away from on-disk format
- * differences. Use this in the code, and convert to/from the disk version using
- * xfs_dir3_free_hdr_from_disk/xfs_dir3_free_hdr_to_disk.
- */
-struct xfs_dir3_icfree_hdr {
-	uint32_t	magic;
-	uint32_t	firstdb;
-	uint32_t	nvalid;
-	uint32_t	nused;
-
-};
-
-/*
  * Single block format.
 *
 * The single block format looks like the following drawing on disk:
@@ -710,29 +676,6 @@ struct xfs_attr3_leafblock {
 };
 
 /*
- * incore, neutral version of the attribute leaf header
- */
-struct xfs_attr3_icleaf_hdr {
-	uint32_t	forw;
-	uint32_t	back;
-	uint16_t	magic;
-	uint16_t	count;
-	uint16_t	usedbytes;
-	/*
-	 * firstused is 32-bit here instead of 16-bit like the on-disk variant
-	 * to support maximum fsb size of 64k without overflow issues throughout
-	 * the attr code. Instead, the overflow condition is handled on
-	 * conversion to/from disk.
-	 */
-	uint32_t	firstused;
-	__u8		holes;
-	struct {
-		uint16_t	base;
-		uint16_t	size;
-	} freemap[XFS_ATTR_LEAF_MAPSIZE];
-};
-
-/*
  * Special value to represent fs block size in the leaf header firstused field.
  * Only used when block size overflows the 2-bytes available on disk.
 */
--- a/fs/xfs/libxfs/xfs_dir2.h
+++ b/fs/xfs/libxfs/xfs_dir2.h
@@ -18,6 +18,8 @@ struct xfs_dir2_sf_entry;
 struct xfs_dir2_data_hdr;
 struct xfs_dir2_data_entry;
 struct xfs_dir2_data_unused;
+struct xfs_dir3_icfree_hdr;
+struct xfs_dir3_icleaf_hdr;
 
 extern struct xfs_name xfs_name_dotdot;
 
--- a/fs/xfs/libxfs/xfs_dir2_priv.h
+++ b/fs/xfs/libxfs/xfs_dir2_priv.h
@@ -8,6 +8,25 @@
 
 struct dir_context;
 
+/*
+ * In-core version of the leaf and free block headers to abstract the
+ * differences in the v2 and v3 disk format of the headers.
+ */
+struct xfs_dir3_icleaf_hdr {
+	uint32_t	forw;
+	uint32_t	back;
+	uint16_t	magic;
+	uint16_t	count;
+	uint16_t	stale;
+};
+
+struct xfs_dir3_icfree_hdr {
+	uint32_t	magic;
+	uint32_t	firstdb;
+	uint32_t	nvalid;
+	uint32_t	nused;
+};
+
 /* xfs_dir2.c */
 extern int xfs_dir2_grow_inode(struct xfs_da_args *args, int space,
 				xfs_dir2_db_t *dbp);
Wong" , Chandan Babu R Subject: [PATCH 5.4 46/51] xfs: streamline xfs_attr3_leaf_inactive Date: Wed, 5 Oct 2022 13:32:34 +0200 Message-Id: <20221005113212.390334261@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Darrick J. Wong" commit 0bb9d159bd018b271e783d3b2d3bc82fa0727321 upstream. Now that we know we don't have to take a transaction to stale the incore buffers for a remote value, get rid of the unnecessary memory allocation in the leaf walker and call the rmt_stale function directly. Flatten the loop while we're at it. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Acked-by: Darrick J. Wong Signed-off-by: Chandan Babu R Signed-off-by: Greg Kroah-Hartman Reviewed-by: Guenter Roeck Tested-by: Allen Pais Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Slade Watkins --- fs/xfs/libxfs/xfs_attr_leaf.h | 9 --- fs/xfs/xfs_attr_inactive.c | 103 ++++++++++++-------------------------= ----- 2 files changed, 30 insertions(+), 82 deletions(-) --- a/fs/xfs/libxfs/xfs_attr_leaf.h +++ b/fs/xfs/libxfs/xfs_attr_leaf.h @@ -39,15 +39,6 @@ struct xfs_attr3_icleaf_hdr { } freemap[XFS_ATTR_LEAF_MAPSIZE]; }; =20 -/* - * Used to keep a list of "remote value" extents when unlinking an inode. - */ -typedef struct xfs_attr_inactive_list { - xfs_dablk_t valueblk; /* block number of value bytes */ - int valuelen; /* number of bytes in value */ -} xfs_attr_inactive_list_t; - - /*=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D * Function prototypes for the kernel. *=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D*/ --- a/fs/xfs/xfs_attr_inactive.c +++ b/fs/xfs/xfs_attr_inactive.c @@ -37,8 +37,6 @@ xfs_attr3_rmt_stale( int blkcnt) { struct xfs_bmbt_irec map; - xfs_dablk_t tblkno; - int tblkcnt; int nmap; int error; =20 @@ -46,14 +44,12 @@ xfs_attr3_rmt_stale( * Roll through the "value", invalidating the attribute value's * blocks. */ - tblkno =3D blkno; - tblkcnt =3D blkcnt; - while (tblkcnt > 0) { + while (blkcnt > 0) { /* * Try to remember where we decided to put the value. 
*/ nmap =3D 1; - error =3D xfs_bmapi_read(dp, (xfs_fileoff_t)tblkno, tblkcnt, + error =3D xfs_bmapi_read(dp, (xfs_fileoff_t)blkno, blkcnt, &map, &nmap, XFS_BMAPI_ATTRFORK); if (error) return error; @@ -68,8 +64,8 @@ xfs_attr3_rmt_stale( if (error) return error; =20 - tblkno +=3D map.br_blockcount; - tblkcnt -=3D map.br_blockcount; + blkno +=3D map.br_blockcount; + blkcnt -=3D map.br_blockcount; } =20 return 0; @@ -83,84 +79,45 @@ xfs_attr3_rmt_stale( */ STATIC int xfs_attr3_leaf_inactive( - struct xfs_trans **trans, - struct xfs_inode *dp, - struct xfs_buf *bp) + struct xfs_trans **trans, + struct xfs_inode *dp, + struct xfs_buf *bp) { - struct xfs_attr_leafblock *leaf; - struct xfs_attr3_icleaf_hdr ichdr; - struct xfs_attr_leaf_entry *entry; + struct xfs_attr3_icleaf_hdr ichdr; + struct xfs_mount *mp =3D bp->b_mount; + struct xfs_attr_leafblock *leaf =3D bp->b_addr; + struct xfs_attr_leaf_entry *entry; struct xfs_attr_leaf_name_remote *name_rmt; - struct xfs_attr_inactive_list *list; - struct xfs_attr_inactive_list *lp; - int error; - int count; - int size; - int tmp; - int i; - struct xfs_mount *mp =3D bp->b_mount; + int error; + int i; =20 - leaf =3D bp->b_addr; xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &ichdr, leaf); =20 /* - * Count the number of "remote" value extents. + * Find the remote value extents for this leaf and invalidate their + * incore buffers. */ - count =3D 0; entry =3D xfs_attr3_leaf_entryp(leaf); for (i =3D 0; i < ichdr.count; entry++, i++) { - if (be16_to_cpu(entry->nameidx) && - ((entry->flags & XFS_ATTR_LOCAL) =3D=3D 0)) { - name_rmt =3D xfs_attr3_leaf_name_remote(leaf, i); - if (name_rmt->valueblk) - count++; - } - } + int blkcnt; =20 - /* - * If there are no "remote" values, we're done. - */ - if (count =3D=3D 0) { - xfs_trans_brelse(*trans, bp); - return 0; - } + if (!entry->nameidx || (entry->flags & XFS_ATTR_LOCAL)) + continue; =20 - /* - * Allocate storage for a list of all the "remote" value extents. - */ - size =3D count * sizeof(xfs_attr_inactive_list_t); - list =3D kmem_alloc(size, 0); - - /* - * Identify each of the "remote" value extents. - */ - lp =3D list; - entry =3D xfs_attr3_leaf_entryp(leaf); - for (i =3D 0; i < ichdr.count; entry++, i++) { - if (be16_to_cpu(entry->nameidx) && - ((entry->flags & XFS_ATTR_LOCAL) =3D=3D 0)) { - name_rmt =3D xfs_attr3_leaf_name_remote(leaf, i); - if (name_rmt->valueblk) { - lp->valueblk =3D be32_to_cpu(name_rmt->valueblk); - lp->valuelen =3D xfs_attr3_rmt_blocks(dp->i_mount, - be32_to_cpu(name_rmt->valuelen)); - lp++; - } - } - } - xfs_trans_brelse(*trans, bp); /* unlock for trans. in freextent() */ - - /* - * Invalidate each of the "remote" value extents. 
- */ - error =3D 0; - for (lp =3D list, i =3D 0; i < count; i++, lp++) { - tmp =3D xfs_attr3_rmt_stale(dp, lp->valueblk, lp->valuelen); - if (error =3D=3D 0) - error =3D tmp; /* save only the 1st errno */ + name_rmt =3D xfs_attr3_leaf_name_remote(leaf, i); + if (!name_rmt->valueblk) + continue; + + blkcnt =3D xfs_attr3_rmt_blocks(dp->i_mount, + be32_to_cpu(name_rmt->valuelen)); + error =3D xfs_attr3_rmt_stale(dp, + be32_to_cpu(name_rmt->valueblk), blkcnt); + if (error) + goto err; } =20 - kmem_free(list); + xfs_trans_brelse(*trans, bp); +err: return error; } From nobody Sun Dec 14 02:06:17 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 816EBC433F5 for ; Wed, 5 Oct 2022 11:38:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229882AbiJELiS (ORCPT ); Wed, 5 Oct 2022 07:38:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36322 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230000AbiJELha (ORCPT ); Wed, 5 Oct 2022 07:37:30 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0827C5F7D2; Wed, 5 Oct 2022 04:35:00 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 12039B81DBC; Wed, 5 Oct 2022 11:34:56 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 66D02C433D6; Wed, 5 Oct 2022 11:34:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1664969694; bh=6xJcFc9dGQ5BNpTfJqZ6QZWR7Ia24f+OL1DRDtn1inU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eiVm6vuE2JUEaVuBx/W7McbEkrCSrmofiF484Q03gJUn+S+fXwEhTi1RuBL5EyF/b cWBbxAxVoNdu3pxCI9sn9bqL7MktQltC00PS6ghm+QgaaiQSC1/fFKGarFo9Pv5vfG kbEAdrJNmRsuoweEhl62wES5a6HFutptHDKYW9wU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dan Carpenter , "Darrick J. Wong" , Allison Collins , Eric Sandeen , "Darrick J. Wong" , Chandan Babu R Subject: [PATCH 5.4 47/51] xfs: fix uninitialized variable in xfs_attr3_leaf_inactive Date: Wed, 5 Oct 2022 13:32:35 +0200 Message-Id: <20221005113212.438169634@linuxfoundation.org> X-Mailer: git-send-email 2.38.0 In-Reply-To: <20221005113210.255710920@linuxfoundation.org> References: <20221005113210.255710920@linuxfoundation.org> User-Agent: quilt/0.67 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: "Darrick J. Wong" commit 54027a49938bbee1af62fad191139b14d4ee5cd2 upstream. Dan Carpenter pointed out that error is uninitialized. While there never should be an attr leaf block with zero entries, let's not leave that logic bomb there. Fixes: 0bb9d159bd01 ("xfs: streamline xfs_attr3_leaf_inactive") Reported-by: Dan Carpenter Signed-off-by: Darrick J. Wong Reviewed-by: Allison Collins Reviewed-by: Eric Sandeen Acked-by: Darrick J. 
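Reduced to a sketch, the shape of this rewrite: the old code counted
matching entries, allocated a temporary list, filled it, released the
buffer, and then walked the copy, saving only the first errno; the new
code acts on each remote extent as it is found and stops at the first
error. The names below (struct item, wanted(), act()) are invented stand-ins
for illustration; the real walk visits xfs_attr_leaf_entry records:

	/* Sketch only: the single-pass pattern the patch adopts. */
	struct item {
		int	remote;		/* stand-in for the XFS_ATTR_LOCAL test */
	};

	static int wanted(const struct item *it)
	{
		return it->remote;
	}

	static int act(const struct item *it)
	{
		return 0;		/* stand-in for xfs_attr3_rmt_stale() */
	}

	static int one_pass(const struct item *items, int nitems)
	{
		int error = 0;		/* see note below */
		int i;

		for (i = 0; i < nitems; i++) {
			if (!wanted(&items[i]))
				continue;	/* local value, nothing to stale */
			error = act(&items[i]);
			if (error)
				break;		/* stop at the first error */
		}
		return error;
	}

One detail worth flagging: the sketch initializes error to 0, but the
kernel function above declares it without an initializer, so a leaf whose
entries are all local returns an indeterminate value. That is exactly the
bug the next patch in this series fixes.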
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dan Carpenter,
	"Darrick J. Wong", Allison Collins, Eric Sandeen, "Darrick J. Wong",
	Chandan Babu R
Subject: [PATCH 5.4 47/51] xfs: fix uninitialized variable in xfs_attr3_leaf_inactive
Date: Wed, 5 Oct 2022 13:32:35 +0200
Message-Id: <20221005113212.438169634@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: "Darrick J. Wong"

commit 54027a49938bbee1af62fad191139b14d4ee5cd2 upstream.

Dan Carpenter pointed out that error is uninitialized.  While there
never should be an attr leaf block with zero entries, let's not leave
that logic bomb there.

Fixes: 0bb9d159bd01 ("xfs: streamline xfs_attr3_leaf_inactive")
Reported-by: Dan Carpenter
Signed-off-by: Darrick J. Wong
Reviewed-by: Allison Collins
Reviewed-by: Eric Sandeen
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/xfs_attr_inactive.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/xfs/xfs_attr_inactive.c
+++ b/fs/xfs/xfs_attr_inactive.c
@@ -88,7 +88,7 @@ xfs_attr3_leaf_inactive(
 	struct xfs_attr_leafblock *leaf = bp->b_addr;
 	struct xfs_attr_leaf_entry *entry;
 	struct xfs_attr_leaf_name_remote *name_rmt;
-	int				error;
+	int				error = 0;
 	int				i;
 
 	xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &ichdr, leaf);
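The bug class, in a self-contained form (illustrative C, not the kernel
code): when the loop body can execute zero times, returning the accumulator
without an initializer returns whatever happened to be on the stack.

	#include <stddef.h>

	/* Sketch only: mirrors the flaw fixed above. */
	static int process(const int *vals, size_t n)
	{
		int error;		/* bug: no initializer */
		size_t i;

		for (i = 0; i < n; i++)
			error = (vals[i] < 0) ? -1 : 0;

		return error;		/* indeterminate when n == 0 */
	}

The one-character-class fix above, int error = 0;, makes the empty-walk
case return success instead of stack garbage.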
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Hulk Robot,
	Stephen Rothwell, YueHaibing, "Darrick J. Wong", "Darrick J. Wong",
	Chandan Babu R
Subject: [PATCH 5.4 48/51] xfs: remove unused variable done
Date: Wed, 5 Oct 2022 13:32:36 +0200
Message-Id: <20221005113212.479111487@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: YueHaibing

commit b3531f5fc16d4df2b12567bce48cd9f3ab5f9131 upstream.

fs/xfs/xfs_inode.c: In function 'xfs_itruncate_extents_flags':
fs/xfs/xfs_inode.c:1523:8: warning: unused variable 'done' [-Wunused-variable]

Commit 4bbb04abb4ee ("xfs: truncate should remove all blocks, not just
to the end of the page cache") left this variable behind, so remove it.

Fixes: 4bbb04abb4ee ("xfs: truncate should remove all blocks, not just to the end of the page cache")
Reported-by: Hulk Robot
Reported-by: Stephen Rothwell
Signed-off-by: YueHaibing
Reviewed-by: Darrick J. Wong
Signed-off-by: Darrick J. Wong
Acked-by: Darrick J. Wong
Signed-off-by: Chandan Babu R
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 fs/xfs/xfs_inode.c | 1 -
 1 file changed, 1 deletion(-)

--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -1515,7 +1515,6 @@ xfs_itruncate_extents_flags(
 	xfs_fileoff_t		first_unmap_block;
 	xfs_filblks_t		unmap_len;
 	int			error = 0;
-	int			done = 0;
 
 	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
 	ASSERT(!atomic_read(&VFS_I(ip)->i_count) ||

From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shuah Khan,
	Hamza Mahfooz, Alex Deucher, Sasha Levin
Subject: [PATCH 5.4 49/51] Revert "drm/amdgpu: use dirty framebuffer helper"
Date: Wed, 5 Oct 2022 13:32:37 +0200
Message-Id: <20221005113212.519468645@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Greg Kroah-Hartman

This reverts commit c89849ecfd2e10838b31c519c2a6607266b58f02 which is
commit 66f99628eb24409cb8feb5061f78283c8b65f820 upstream.

It is reported to cause problems on 5.4.y, so it should be reverted
for now.
Reported-by: Shuah Khan
Link: https://lore.kernel.org/r/7af02bc3-c0f2-7326-e467-02549e88c9ce@linuxfoundation.org
Cc: Hamza Mahfooz
Cc: Alex Deucher
Cc: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 2 --
 1 file changed, 2 deletions(-)

--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -35,7 +35,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -496,7 +495,6 @@ bool amdgpu_display_ddc_probe(struct amd
 static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
 	.destroy = drm_gem_fb_destroy,
 	.create_handle = drm_gem_fb_create_handle,
-	.dirty = drm_atomic_helper_dirtyfb,
 };
 
 uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,

From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Nathan Chancellor,
	Sami Tolvanen, Kees Cook
Subject: [PATCH 5.4 50/51] Makefile.extrawarn: Move -Wcast-function-type-strict to W=1
Date: Wed, 5 Oct 2022 13:32:38 +0200
Message-Id: <20221005113212.556753745@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Sami Tolvanen

commit 2120635108b35ecad9c59c8b44f6cbdf4f98214e upstream.

We enable -Wcast-function-type globally in the kernel to warn about
mismatching types in function pointer casts.
Compilers currently warn only about ABI incompatibility with this flag,
but Clang 16 will enable a stricter version of the check by default
that requires an exact type match.  This will be very noisy in the
kernel, so disable -Wcast-function-type-strict unless W=1 is set, until
the new warnings have been addressed.

Cc: stable@vger.kernel.org
Link: https://reviews.llvm.org/D134831
Link: https://github.com/ClangBuiltLinux/linux/issues/1724
Suggested-by: Nathan Chancellor
Signed-off-by: Sami Tolvanen
Signed-off-by: Kees Cook
Link: https://lore.kernel.org/r/20220930203310.4010564-1-samitolvanen@google.com
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 scripts/Makefile.extrawarn | 1 +
 1 file changed, 1 insertion(+)

--- a/scripts/Makefile.extrawarn
+++ b/scripts/Makefile.extrawarn
@@ -50,6 +50,7 @@ KBUILD_CFLAGS += -Wno-sign-compare
 KBUILD_CFLAGS += -Wno-format-zero-length
 KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
 KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+KBUILD_CFLAGS += $(call cc-disable-warning, cast-function-type-strict)
 endif
 
 endif
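For a concrete picture of what the stricter check flags, here is an
illustrative example, not taken from the patch: the two prototypes below
have the same ABI (one pointer argument in, nothing out), so the existing
-Wcast-function-type stays quiet, but the parameter types are not an exact
match, so the strict variant warns. Exact-match casts matter in particular
under control-flow integrity schemes, where an indirect call is checked
against the precise function type.

	/* Sketch only: compile with clang -Wcast-function-type-strict. */
	struct foo {
		int x;
	};

	static void handle_foo(struct foo *f)
	{
		f->x = 0;
	}

	/* Same ABI as handle_foo: one pointer argument, void return. */
	typedef void (*generic_cb_t)(void *data);

	int main(void)
	{
		struct foo f = { 1 };

		/*
		 * -Wcast-function-type accepts this cast; the strict
		 * variant warns because struct foo * != void *.
		 */
		generic_cb_t cb = (generic_cb_t)handle_foo;

		cb(&f);	/* type-punned indirect call; kCFI would trap here */
		return 0;
	}

Note the mechanism the one-line patch relies on: cc-disable-warning adds
-Wno-cast-function-type-strict only when the compiler recognizes that
warning, so compilers that predate the strict check are unaffected.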
From nobody Sun Dec 14 02:06:17 2025
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Shuah Khan, Jonathan Corbet
Subject: [PATCH 5.4 51/51] docs: update mediator information in CoC docs
Date: Wed, 5 Oct 2022 13:32:39 +0200
Message-Id: <20221005113212.603825522@linuxfoundation.org>
In-Reply-To: <20221005113210.255710920@linuxfoundation.org>
References: <20221005113210.255710920@linuxfoundation.org>

From: Shuah Khan

commit 8bfdfa0d6b929ede7b6189e0e546ceb6a124d05d upstream.

Update mediator information in the CoC interpretation document.

Signed-off-by: Shuah Khan
Link: https://lore.kernel.org/r/20220901212319.56644-1-skhan@linuxfoundation.org
Cc: stable@vger.kernel.org
Signed-off-by: Jonathan Corbet
Signed-off-by: Greg Kroah-Hartman
Reviewed-by: Guenter Roeck
Tested-by: Allen Pais
Tested-by: Hulk Robot
Tested-by: Linux Kernel Functional Testing
Tested-by: Slade Watkins
---
 Documentation/process/code-of-conduct-interpretation.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/Documentation/process/code-of-conduct-interpretation.rst
+++ b/Documentation/process/code-of-conduct-interpretation.rst
@@ -51,7 +51,7 @@ the Technical Advisory Board (TAB) or ot
 uncertain how to handle situations that come up.  It will not be
 considered a violation report unless you want it to be.  If you are
 uncertain about approaching the TAB or any other maintainers, please
-reach out to our conflict mediator, Mishi Choudhary.
+reach out to our conflict mediator, Joanna Lee.
 
 In the end, "be kind to each other" is really what the end goal is for
 everybody.  We know everyone is human and we all fail at times, but the