From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, amit@kernel.org, kvm@vger.kernel.org,
    amit.shah@amd.com, thomas.lendacky@amd.com, bp@alien8.de,
    tglx@linutronix.de, peterz@infradead.org,
    pawan.kumar.gupta@linux.intel.com, corbet@lwn.net, mingo@redhat.com,
    dave.hansen@linux.intel.com, hpa@zytor.com, seanjc@google.com,
    pbonzini@redhat.com, daniel.sneddon@linux.intel.com,
    kai.huang@intel.com, sandipan.das@amd.com, boris.ostrovsky@oracle.com,
    Babu.Moger@amd.com, david.kaplan@amd.com, dwmw@amazon.co.uk,
    andrew.cooper3@citrix.com
Subject: [PATCH v3 1/6] x86/bugs: Rename entry_ibpb()
Date: Wed, 2 Apr 2025 11:19:18 -0700

There's nothing entry-specific about entry_ibpb().  In preparation for
calling it from elsewhere, rename it to __write_ibpb().

Signed-off-by: Josh Poimboeuf
---
 arch/x86/entry/entry.S               | 7 ++++---
 arch/x86/include/asm/nospec-branch.h | 6 +++---
 arch/x86/kernel/cpu/bugs.c           | 6 +++---
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index d3caa31240ed..3a53319988b9 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -17,7 +17,8 @@
 
 .pushsection .noinstr.text, "ax"
 
-SYM_FUNC_START(entry_ibpb)
+// Clobbers AX, CX, DX
+SYM_FUNC_START(__write_ibpb)
 	ANNOTATE_NOENDBR
 	movl	$MSR_IA32_PRED_CMD, %ecx
 	movl	$PRED_CMD_IBPB, %eax
@@ -27,9 +28,9 @@ SYM_FUNC_START(entry_ibpb)
 	/* Make sure IBPB clears return stack preductions too. */
 	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_BUG_IBPB_NO_RET
 	RET
-SYM_FUNC_END(entry_ibpb)
+SYM_FUNC_END(__write_ibpb)
 
 /* For KVM */
-EXPORT_SYMBOL_GPL(entry_ibpb);
+EXPORT_SYMBOL_GPL(__write_ibpb);
 
 .popsection
 
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 8a5cc8e70439..bbac79cad04c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -269,7 +269,7 @@
  * typically has NO_MELTDOWN).
  *
  * While retbleed_untrain_ret() doesn't clobber anything but requires stack,
- * entry_ibpb() will clobber AX, CX, DX.
+ * __write_ibpb() will clobber AX, CX, DX.
  *
  * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
  * where we have a stack but before any RET instruction.
@@ -279,7 +279,7 @@
 	VALIDATE_UNRET_END
 	CALL_UNTRAIN_RET
 	ALTERNATIVE_2 "",						\
-		      "call entry_ibpb", \ibpb_feature,			\
+		      "call __write_ibpb", \ibpb_feature,		\
		      __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
@@ -368,7 +368,7 @@ extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
 
 extern void entry_untrain_ret(void);
-extern void entry_ibpb(void);
+extern void __write_ibpb(void);
 
 #ifdef CONFIG_X86_64
 extern void clear_bhb_loop(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4386aa6c69e1..310cb3f7139c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1142,7 +1142,7 @@ static void __init retbleed_select_mitigation(void)
 		setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
 
 		/*
-		 * There is no need for RSB filling: entry_ibpb() ensures
+		 * There is no need for RSB filling: __write_ibpb() ensures
 		 * all predictions, including the RSB, are invalidated,
 		 * regardless of IBPB implementation.
 		 */
@@ -2676,7 +2676,7 @@ static void __init srso_select_mitigation(void)
 		setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
 
 		/*
-		 * There is no need for RSB filling: entry_ibpb() ensures
+		 * There is no need for RSB filling: __write_ibpb() ensures
 		 * all predictions, including the RSB, are invalidated,
 		 * regardless of IBPB implementation.
 		 */
@@ -2701,7 +2701,7 @@ static void __init srso_select_mitigation(void)
 		srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 
 		/*
-		 * There is no need for RSB filling: entry_ibpb() ensures
+		 * There is no need for RSB filling: __write_ibpb() ensures
 		 * all predictions, including the RSB, are invalidated,
 		 * regardless of IBPB implementation.
 		 */
-- 
2.48.1
From: Josh Poimboeuf
To: x86@kernel.org
Subject: [PATCH v3 2/6] x86/bugs: Use SBPB in __write_ibpb() if applicable
Date: Wed, 2 Apr 2025 11:19:19 -0700

__write_ibpb() does IBPB, which (among other things) flushes branch type
predictions on AMD.  If the CPU has SRSO_NO, or if the SRSO mitigation
has been disabled, branch type flushing isn't needed, in which case the
lighter-weight SBPB can be used.

Signed-off-by: Josh Poimboeuf
---
 arch/x86/entry/entry.S     | 2 +-
 arch/x86/kernel/cpu/bugs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index 3a53319988b9..a5b421ec19c0 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -21,7 +21,7 @@
 SYM_FUNC_START(__write_ibpb)
 	ANNOTATE_NOENDBR
 	movl	$MSR_IA32_PRED_CMD, %ecx
-	movl	$PRED_CMD_IBPB, %eax
+	movl	_ASM_RIP(x86_pred_cmd), %eax
 	xorl	%edx, %edx
 	wrmsr
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 310cb3f7139c..c8b8dc829046 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -58,7 +58,7 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
 DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 
-u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
+u32 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
 EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static u64 __ro_after_init x86_arch_cap_msr;
-- 
2.48.1
From: Josh Poimboeuf
To: x86@kernel.org
Subject: [PATCH v3 3/6] x86/bugs: Fix RSB clearing in indirect_branch_prediction_barrier()
Date: Wed, 2 Apr 2025 11:19:20 -0700
Message-ID: <27fe2029a2ef8bc0909e53e7e4c3f5b437242627.1743617897.git.jpoimboe@kernel.org>

IBPB is expected to clear the RSB.  However, if X86_BUG_IBPB_NO_RET is
set, that doesn't happen.  Make indirect_branch_prediction_barrier()
take that into account by calling __write_ibpb() which already does the
right thing.

Fixes: 50e4b3b94090 ("x86/entry: Have entry_ibpb() invalidate return predictions")
Signed-off-by: Josh Poimboeuf
---
 arch/x86/include/asm/nospec-branch.h | 6 +++---
 arch/x86/kernel/cpu/bugs.c           | 1 -
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index bbac79cad04c..f99b32f014ec 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -514,11 +514,11 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
 		: "memory");
 }
 
-extern u64 x86_pred_cmd;
-
 static inline void indirect_branch_prediction_barrier(void)
 {
-	alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_IBPB);
+	asm_inline volatile(ALTERNATIVE("", "call __write_ibpb", X86_FEATURE_IBPB)
+			    : ASM_CALL_CONSTRAINT
+			    :: "rax", "rcx", "rdx", "memory");
 }
 
 /* The Intel SPEC CTRL MSR base value cache */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c8b8dc829046..9f9637cff7a3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -59,7 +59,6 @@ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 
 u32 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;
-EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static u64 __ro_after_init x86_arch_cap_msr;
-- 
2.48.1
From: Josh Poimboeuf
To: x86@kernel.org
Subject: [PATCH v3 4/6] x86/bugs: Don't fill RSB on VMEXIT with eIBRS+retpoline
Date: Wed, 2 Apr 2025 11:19:21 -0700

eIBRS protects against guest->host RSB underflow/poisoning attacks.
Adding retpoline to the mix doesn't change that.  Retpoline has a
balanced CALL/RET anyway.

So the current full RSB filling on VMEXIT with eIBRS+retpoline is
overkill.  Disable it or do the VMEXIT_LITE mitigation if needed.

Suggested-by: Pawan Gupta
Reviewed-by: Pawan Gupta
Reviewed-by: Amit Shah
Signed-off-by: Josh Poimboeuf
---
 arch/x86/kernel/cpu/bugs.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9f9637cff7a3..354411fd4800 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1617,20 +1617,20 @@ static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_
 	case SPECTRE_V2_NONE:
 		return;
 
-	case SPECTRE_V2_EIBRS_LFENCE:
 	case SPECTRE_V2_EIBRS:
+	case SPECTRE_V2_EIBRS_LFENCE:
+	case SPECTRE_V2_EIBRS_RETPOLINE:
 		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
-			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
 			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
+			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
 		}
 		return;
 
-	case SPECTRE_V2_EIBRS_RETPOLINE:
 	case SPECTRE_V2_RETPOLINE:
 	case SPECTRE_V2_LFENCE:
 	case SPECTRE_V2_IBRS:
-		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
 		pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
+		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
 		return;
 	}
 
-- 
2.48.1
From: Josh Poimboeuf
To: x86@kernel.org
Subject: [PATCH v3 5/6] x86/bugs: Don't fill RSB on context switch with eIBRS
Date: Wed, 2 Apr 2025 11:19:22 -0700
Message-ID: <8979e2e1d9f48aa4480c2ebd5ea0f9e31f9707e5.1743617897.git.jpoimboe@kernel.org>

User->user Spectre v2 attacks (including RSB) across context switches
are already mitigated by IBPB in cond_mitigation(), if enabled globally
or if either the prev or the next task has opted in to protection.  RSB
filling without IBPB serves no purpose for protecting user space, as
indirect branches are still vulnerable.

User->kernel RSB attacks are mitigated by eIBRS.  In which case the RSB
filling on context switch isn't needed, so remove it.

Suggested-by: Pawan Gupta
Reviewed-by: Pawan Gupta
Reviewed-by: Amit Shah
Signed-off-by: Josh Poimboeuf
---
 arch/x86/kernel/cpu/bugs.c | 24 ++++++++++++------------
 arch/x86/mm/tlb.c          |  6 +++---
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 354411fd4800..680c779e9711 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1591,7 +1591,7 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
 	rrsba_disabled = true;
 }
 
-static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode)
+static void __init spectre_v2_select_rsb_mitigation(enum spectre_v2_mitigation mode)
 {
 	/*
 	 * Similar to context switches, there are two types of RSB attacks
@@ -1615,7 +1615,7 @@
 	 */
 	switch (mode) {
 	case SPECTRE_V2_NONE:
-		return;
+		break;
 
 	case SPECTRE_V2_EIBRS:
 	case SPECTRE_V2_EIBRS_LFENCE:
@@ -1624,18 +1624,21 @@
 			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
 			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
 		}
-		return;
+		break;
 
 	case SPECTRE_V2_RETPOLINE:
 	case SPECTRE_V2_LFENCE:
 	case SPECTRE_V2_IBRS:
-		pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
+		pr_info("Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT\n");
+		setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
-		return;
-	}
+		break;
 
-	pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
-	dump_stack();
+	default:
+		pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation\n");
+		dump_stack();
+		break;
+	}
 }
 
 /*
@@ -1867,10 +1870,7 @@ static void __init spectre_v2_select_mitigation(void)
 	 *
 	 * FIXME: Is this pointless for retbleed-affected AMD?
 	 */
-	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
-	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
-
-	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+	spectre_v2_select_rsb_mitigation(mode);
 
 	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e459d97ef397..eb83348f9305 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -667,9 +667,9 @@ static void cond_mitigation(struct task_struct *next)
 	prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);
 
 	/*
-	 * Avoid user/user BTB poisoning by flushing the branch predictor
-	 * when switching between processes. This stops one process from
-	 * doing Spectre-v2 attacks on another.
+	 * Avoid user->user BTB/RSB poisoning by flushing them when switching
+	 * between processes. This stops one process from doing Spectre-v2
+	 * attacks on another.
 	 *
 	 * Both, the conditional and the always IBPB mode use the mm
 	 * pointer to avoid the IBPB when switching between tasks of the
-- 
2.48.1
zfwupFEb6TjL7a+Pwpom+LSPEsreB+NU6EzAJ1JdOxWx11C50qHkYPzI0GFwPxC6D+ 3lAQv46qhUteGM8LsK4cMRTvYn7UdAfAu2AhJw1zdvt+n/jMEqwxioI4hgs6yh21// WH8oj0mDt81N5vFdfmhZkYDktPL7iBsOKsnZt1C8O4WHm+OMEhSjwJzUYqJSkGTRsc U3POoY4wgLAeg== From: Josh Poimboeuf To: x86@kernel.org Cc: linux-kernel@vger.kernel.org, amit@kernel.org, kvm@vger.kernel.org, amit.shah@amd.com, thomas.lendacky@amd.com, bp@alien8.de, tglx@linutronix.de, peterz@infradead.org, pawan.kumar.gupta@linux.intel.com, corbet@lwn.net, mingo@redhat.com, dave.hansen@linux.intel.com, hpa@zytor.com, seanjc@google.com, pbonzini@redhat.com, daniel.sneddon@linux.intel.com, kai.huang@intel.com, sandipan.das@amd.com, boris.ostrovsky@oracle.com, Babu.Moger@amd.com, david.kaplan@amd.com, dwmw@amazon.co.uk, andrew.cooper3@citrix.com Subject: [PATCH v3 6/6] x86/bugs: Add RSB mitigation document Date: Wed, 2 Apr 2025 11:19:23 -0700 Message-ID: X-Mailer: git-send-email 2.48.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Create a document to summarize hard-earned knowledge about RSB-related mitigations, with references, and replace the overly verbose yet incomplete comments with a reference to the document. Signed-off-by: Josh Poimboeuf --- Documentation/admin-guide/hw-vuln/index.rst | 1 + Documentation/admin-guide/hw-vuln/rsb.rst | 241 ++++++++++++++++++++ arch/x86/kernel/cpu/bugs.c | 64 ++---- 3 files changed, 255 insertions(+), 51 deletions(-) create mode 100644 Documentation/admin-guide/hw-vuln/rsb.rst diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/ad= min-guide/hw-vuln/index.rst index ff0b440ef2dc..451874b8135d 100644 --- a/Documentation/admin-guide/hw-vuln/index.rst +++ b/Documentation/admin-guide/hw-vuln/index.rst @@ -22,3 +22,4 @@ are configurable at compile, boot or run time. 
srso gather_data_sampling reg-file-data-sampling + rsb diff --git a/Documentation/admin-guide/hw-vuln/rsb.rst b/Documentation/admi= n-guide/hw-vuln/rsb.rst new file mode 100644 index 000000000000..97bf75993d5d --- /dev/null +++ b/Documentation/admin-guide/hw-vuln/rsb.rst @@ -0,0 +1,241 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D +RSB-related mitigations +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D + +.. warning:: + Please keep this document up-to-date, otherwise you will be + volunteered to update it and convert it to a very long comment in + bugs.c! + +Since 2018 there have been many Spectre CVEs related to the Return Stack +Buffer (RSB). Information about these CVEs and how to mitigate them is +scattered amongst a myriad of microarchitecture-specific documents. + +This document attempts to consolidate all the relevant information in +once place and clarify the reasoning behind the current RSB-related +mitigations. + +At a high level, there are two classes of RSB attacks: RSB poisoning +(Intel and AMD) and RSB underflow (Intel only). They must each be +considered individually for each attack vector (and microarchitecture +where applicable). + +---- + +RSB poisoning (Intel and AMD) +=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D + +SpectreRSB +~~~~~~~~~~ + +RSB poisoning is a technique used by Spectre-RSB [#spectre-rsb]_ where +an attacker poisons an RSB entry to cause a victim's return instruction +to speculate to an attacker-controlled address. This can happen when +there are unbalanced CALLs/RETs after a context switch or VMEXIT. + +* All attack vectors can potentially be mitigated by flushing out any + poisoned RSB entries using an RSB filling sequence + [#intel-rsb-filling]_ [#amd-rsb-filling]_ when transitioning between + untrusted and trusted domains. 
+  But this has a performance impact and
+  should be avoided whenever possible.
+
+* On context switch, the user->user mitigation requires ensuring the
+  RSB gets filled or cleared whenever IBPB gets written [#cond-ibpb]_
+  during a context switch:
+
+  * AMD:
+      IBPB (or SBPB [#amd-sbpb]_ if used) automatically clears the RSB
+      if IBPB_RET is set in CPUID [#amd-ibpb-rsb]_.  Otherwise the RSB
+      filling sequence [#amd-rsb-filling]_ must always be done in
+      addition to IBPB.
+
+  * Intel:
+      IBPB automatically clears the RSB:
+
+      "Software that executed before the IBPB command cannot control
+      the predicted targets of indirect branches executed after the
+      command on the same logical processor.  The term indirect branch
+      in this context includes near return instructions, so these
+      predicted targets may come from the RSB."  [#intel-ibpb-rsb]_
+
+* On context switch, user->kernel attacks are mitigated by SMEP, as user
+  space can only insert its own return addresses into the RSB:
+
+  * AMD:
+      "Finally, branches that are predicted as 'ret' instructions get
+      their predicted targets from the Return Address Predictor (RAP).
+      AMD recommends software use a RAP stuffing sequence (mitigation
+      V2-3 in [2]) and/or Supervisor Mode Execution Protection (SMEP)
+      to ensure that the addresses in the RAP are safe for
+      speculation.  Collectively, we refer to these mitigations as "RAP
+      Protection"."  [#amd-smep-rsb]_
+
+  * Intel:
+      "On processors with enhanced IBRS, an RSB overwrite sequence may
+      not suffice to prevent the predicted target of a near return
+      from using an RSB entry created in a less privileged predictor
+      mode.  Software can prevent this by enabling SMEP (for
+      transitions from user mode to supervisor mode) and by having
+      IA32_SPEC_CTRL.IBRS set during VM exits."
+      [#intel-smep-rsb]_
+
+* On VMEXIT, guest->host attacks are mitigated by eIBRS (and PBRSB
+  mitigation if needed):
+
+  * AMD:
+      "When Automatic IBRS is enabled, the internal return address
+      stack used for return address predictions is cleared on VMEXIT."
+      [#amd-eibrs-vmexit]_
+
+  * Intel:
+      "On processors with enhanced IBRS, an RSB overwrite sequence may
+      not suffice to prevent the predicted target of a near return
+      from using an RSB entry created in a less privileged predictor
+      mode.  Software can prevent this by enabling SMEP (for
+      transitions from user mode to supervisor mode) and by having
+      IA32_SPEC_CTRL.IBRS set during VM exits.  Processors with
+      enhanced IBRS still support the usage model where IBRS is set
+      only in the OS/VMM for OSes that enable SMEP.  To do this, such
+      processors will ensure that guest behavior cannot control the
+      RSB after a VM exit once IBRS is set, even if IBRS was not set
+      at the time of the VM exit."  [#intel-eibrs-vmexit]_
+
+  Note that some Intel CPUs are susceptible to Post-barrier Return
+  Stack Buffer Predictions (PBRSB) [#intel-pbrsb]_, where the last CALL
+  from the guest can be used to predict the first unbalanced RET.  In
+  this case the PBRSB mitigation is needed in addition to eIBRS.
+
+AMD Retbleed / SRSO / Branch Type Confusion
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On AMD, poisoned RSB entries can also be created by the AMD Retbleed
+variant [#retbleed-paper]_ and/or Speculative Return Stack Overflow
+[#amd-srso]_ (Inception [#inception-paper]_).  These attacks are made
+possible by Branch Type Confusion [#amd-btc]_.  The kernel protects
+itself by replacing every RET in the kernel with a branch to a single
+safe RET.
+
+----
+
+RSB underflow (Intel only)
+==========================
+
+Intel Retbleed
+~~~~~~~~~~~~~~
+
+Some Intel Skylake-generation CPUs are susceptible to the Intel variant
+of Retbleed [#retbleed-paper]_ (Return Stack Buffer Underflow
+[#intel-rsbu]_).  If a RET is executed when the RSB is empty due to
+mismatched CALLs/RETs or returning from a deep call stack, the branch
+predictor can fall back to using the Branch Target Buffer (BTB).  If a
+user forces a BTB collision then the RET can speculatively branch to a
+user-controlled address.
+
+* Note that RSB filling doesn't fully mitigate this issue.  If there
+  are enough unbalanced RETs, the RSB may still underflow and fall back
+  to using a poisoned BTB entry.
+
+* On context switch, user->user underflow attacks are mitigated by the
+  conditional IBPB [#cond-ibpb]_ on context switch which clears the BTB:
+
+  * "The indirect branch predictor barrier (IBPB) is an indirect branch
+    control mechanism that establishes a barrier, preventing software
+    that executed before the barrier from controlling the predicted
+    targets of indirect branches executed after the barrier on the same
+    logical processor."  [#intel-ibpb-btb]_
+
+  .. note::
+     I wasn't able to find any official documentation from Intel
+     explicitly stating that IBPB clears the BTB.  However, it's
+     broadly known to be true and relied upon in several mitigations.
+
+* On context switch and VMEXIT, user->kernel and guest->host underflows
+  are mitigated by IBRS or eIBRS:
+
+  * "Enabling IBRS (including enhanced IBRS) will mitigate the "RSBU"
+    attack demonstrated by the researchers.  As previously documented,
+    Intel recommends the use of enhanced IBRS, where supported.  This
+    includes any processor that enumerates RRSBA but not RRSBA_DIS_S."
+    [#intel-rsbu]_
+
+  As an alternative to classic IBRS, call depth tracking can be used to
+  track kernel returns and fill the RSB when it gets close to being
+  empty.
+
+Restricted RSB Alternate (RRSBA)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some newer Intel CPUs have Restricted RSB Alternate (RRSBA) behavior,
+which, similar to the Intel variant of Retbleed described above, also
+falls back to using the BTB on RSB underflow.  The only difference is
+that the predicted targets are restricted to the current domain.
+
+* "Restricted RSB Alternate (RRSBA) behavior allows alternate branch
+  predictors to be used by near RET instructions when the RSB is
+  empty.  When eIBRS is enabled, the predicted targets of these
+  alternate predictors are restricted to those belonging to the
+  indirect branch predictor entries of the current prediction domain."
+  [#intel-eibrs-rrsba]_
+
+When a CPU with RRSBA is vulnerable to Branch History Injection
+[#bhi-paper]_ [#intel-bhi]_, an RSB underflow could be used for an
+intra-mode BTI attack.  This is mitigated by clearing the BHB on
+kernel entry.
+
+However, if the kernel uses retpolines instead of eIBRS, it needs to
+disable RRSBA:
+
+* "Where software is using retpoline as a mitigation for BHI or
+  intra-mode BTI, and the processor both enumerates RRSBA and
+  enumerates RRSBA_DIS controls, it should disable this behavior."
+  [#intel-retpoline-rrsba]_
+
+----
+
+References
+==========
+
+.. [#spectre-rsb] `Spectre Returns! Speculation Attacks using the Return Stack Buffer `_
+
+.. [#intel-rsb-filling] "Empty RSB Mitigation on Skylake-generation" in `Retpoline: A Branch Target Injection Mitigation `_
+
+.. [#amd-rsb-filling] "Mitigation V2-3" in `Software Techniques for Managing Speculation `_
+
+.. [#cond-ibpb] Whether IBPB is written depends on whether the prev and/or next task is protected from Spectre attacks.  It typically requires opting in per task or system-wide.
+  For more details see the documentation for the ``spectre_v2_user`` cmdline option in Documentation/admin-guide/kernel-parameters.txt.
+
+.. [#amd-sbpb] IBPB without flushing of branch type predictions.  Only exists for AMD.
+
+.. [#amd-ibpb-rsb] "Function 8000_0008h -- Processor Capacity Parameters and Extended Feature Identification" in `AMD64 Architecture Programmer's Manual Volume 3: General-Purpose and System Instructions `_.  SBPB behaves the same way according to `this email `_.
+
+.. [#intel-ibpb-rsb] "Introduction" in `Post-barrier Return Stack Buffer Predictions / CVE-2022-26373 / INTEL-SA-00706 `_
+
+.. [#amd-smep-rsb] "Existing Mitigations" in `Technical Guidance for Mitigating Branch Type Confusion `_
+
+.. [#intel-smep-rsb] "Enhanced IBRS" in `Indirect Branch Restricted Speculation `_
+
+.. [#amd-eibrs-vmexit] "Extended Feature Enable Register (EFER)" in `AMD64 Architecture Programmer's Manual Volume 2: System Programming `_
+
+.. [#intel-eibrs-vmexit] "Enhanced IBRS" in `Indirect Branch Restricted Speculation `_
+
+.. [#intel-pbrsb] `Post-barrier Return Stack Buffer Predictions / CVE-2022-26373 / INTEL-SA-00706 `_
+
+.. [#retbleed-paper] `Retbleed: Arbitrary Speculative Code Execution with Return Instructions `_
+
+.. [#amd-btc] `Technical Guidance for Mitigating Branch Type Confusion `_
+
+.. [#amd-srso] `Technical Update Regarding Speculative Return Stack Overflow `_
+
+.. [#inception-paper] `Inception: Exposing New Attack Surfaces with Training in Transient Execution `_
+
+.. [#intel-rsbu] `Return Stack Buffer Underflow / Return Stack Buffer Underflow / CVE-2022-29901, CVE-2022-28693 / INTEL-SA-00702 `_
+
+.. [#intel-ibpb-btb] `Indirect Branch Predictor Barrier `_
+
+.. [#intel-eibrs-rrsba] "Guidance for RSBU" in `Return Stack Buffer Underflow / Return Stack Buffer Underflow / CVE-2022-29901, CVE-2022-28693 / INTEL-SA-00702 `_
+
+.. [#bhi-paper] `Branch History Injection: On the Effectiveness of Hardware Mitigations Against Cross-Privilege Spectre-v2 Attacks `_
+
+.. [#intel-bhi] `Branch History Injection and Intra-mode Branch Target Injection / CVE-2022-0001, CVE-2022-0002 / INTEL-SA-00598 `_
+
+.. [#intel-retpoline-rrsba] "Retpoline" in `Branch History Injection and Intra-mode Branch Target Injection / CVE-2022-0001, CVE-2022-0002 / INTEL-SA-00598 `_

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 680c779e9711..e78bb781c091 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1594,25 +1594,25 @@ static void __init spec_ctrl_disable_kernel_rrsba(void)
 static void __init spectre_v2_select_rsb_mitigation(enum spectre_v2_mitigation mode)
 {
 	/*
-	 * Similar to context switches, there are two types of RSB attacks
-	 * after VM exit:
+	 * WARNING! There are many subtleties to consider when changing *any*
+	 * code related to RSB-related mitigations.  Before doing so, carefully
+	 * read the following document, and update it if necessary:
 	 *
-	 * 1) RSB underflow
+	 * Documentation/admin-guide/hw-vuln/rsb.rst
 	 *
-	 * 2) Poisoned RSB entry
+	 * In an overly simplified nutshell:
 	 *
-	 * When retpoline is enabled, both are mitigated by filling/clearing
-	 * the RSB.
+	 * - User->user RSB attacks are conditionally mitigated during
+	 *   context switch by cond_mitigation -> __write_ibpb().
 	 *
-	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
-	 * prediction isolation protections, RSB still needs to be cleared
-	 * because of #2.  Note that SMEP provides no protection here, unlike
-	 * user-space-poisoned RSB entries.
+	 * - User->kernel and guest->host attacks are mitigated by eIBRS or
+	 *   RSB filling.
 	 *
-	 * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
-	 * bug is present then a LITE version of RSB protection is required,
-	 * just a single call needs to retire before a RET is executed.
+	 * Note that depending on config, other alternative mitigations
+	 * may end up being used instead, e.g., IBPB on entry/vmexit,
+	 * call depth tracking, or return thunks.
 	 */
+
 	switch (mode) {
 	case SPECTRE_V2_NONE:
 		break;
@@ -1832,44 +1832,6 @@ static void __init spectre_v2_select_mitigation(void)
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
-	/*
-	 * If Spectre v2 protection has been enabled, fill the RSB during a
-	 * context switch.  In general there are two types of RSB attacks
-	 * across context switches, for which the CALLs/RETs may be unbalanced.
-	 *
-	 * 1) RSB underflow
-	 *
-	 *    Some Intel parts have "bottomless RSB".  When the RSB is empty,
-	 *    speculated return targets may come from the branch predictor,
-	 *    which could have a user-poisoned BTB or BHB entry.
-	 *
-	 *    AMD has it even worse: *all* returns are speculated from the BTB,
-	 *    regardless of the state of the RSB.
-	 *
-	 *    When IBRS or eIBRS is enabled, the "user -> kernel" attack
-	 *    scenario is mitigated by the IBRS branch prediction isolation
-	 *    properties, so the RSB buffer filling wouldn't be necessary to
-	 *    protect against this type of attack.
-	 *
-	 *    The "user -> user" attack scenario is mitigated by RSB filling.
-	 *
-	 * 2) Poisoned RSB entry
-	 *
-	 *    If the 'next' in-kernel return stack is shorter than 'prev',
-	 *    'next' could be tricked into speculating with a user-poisoned RSB
-	 *    entry.
-	 *
-	 *    The "user -> kernel" attack scenario is mitigated by SMEP and
-	 *    eIBRS.
-	 *
-	 *    The "user -> user" scenario, also known as SpectreBHB, requires
-	 *    RSB clearing.
-	 *
-	 *    So to mitigate all cases, unconditionally fill RSB on context
-	 *    switches.
-	 *
-	 * FIXME: Is this pointless for retbleed-affected AMD?
-	 */
 	spectre_v2_select_rsb_mitigation(mode);
 
 	/*
-- 
2.48.1