[PATCH] x86/retbleed: Add __x86_return_thunk alignment checks
Posted by Borislav Petkov 2 years, 7 months ago
From: "Borislav Petkov (AMD)" <bp@alien8.de>

Add a linker assertion and compute the 0xcc padding dynamically so that
__x86_return_thunk is always cacheline-aligned. Leave the SYM_START()
macro in as the untraining doesn't need ENDBR annotations anyway.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
---
 arch/x86/kernel/vmlinux.lds.S | 4 ++++
 arch/x86/lib/retpoline.S      | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 25f155205770..03c885d3640f 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -508,4 +508,8 @@ INIT_PER_CPU(irq_stack_backing_store);
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
 
+#ifdef CONFIG_RETHUNK
+. = ASSERT((__x86_return_thunk & 0x3f) == 0, "__x86_return_thunk not cacheline-aligned");
+#endif
+
 #endif /* CONFIG_X86_64 */
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index b3b1e376dce8..3fd066d42ec0 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -143,7 +143,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 *    from re-poisoning the BTB prediction.
  */
 	.align 64
-	.skip 63, 0xcc
+	.skip 64 - (__x86_return_thunk - zen_untrain_ret), 0xcc
 SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	/*
-- 
2.35.1
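
The arithmetic behind the new .skip: after ".align 64", the 0xcc padding plus
the head of zen_untrain_ret must fill exactly one 64-byte line, so that
__x86_return_thunk begins on the next 64-byte boundary. The old
".skip 63, 0xcc" hard-coded the assumption that exactly one byte of
zen_untrain_ret precedes __x86_return_thunk (in the kernel that byte is 0xf6,
which together with the thunk's ret/int3 decodes as "test $0xcc, %bl" when
execution enters at zen_untrain_ret); the new expression derives the padding
from the symbols themselves. A minimal standalone sketch of the construct,
with hypothetical symbol names untrain_sym/ret_sym standing in for the real
ones:

	/*
	 * demo.S: assemble with "as demo.S -o demo.o" and inspect with
	 * "objdump -d demo.o" -- ret_sym must land at offset 64, i.e. on
	 * a 64-byte boundary, no matter how large the head grows.
	 */
	.text
	.align 64			/* start of a 64-byte line */
	/* pad with int3 (0xcc) up to the head byte(s) */
	.skip 64 - (ret_sym - untrain_sym), 0xcc
untrain_sym:
	.byte 0xf6			/* stand-in for the one-byte head */
ret_sym:				/* = line start + 64 */
	ret
	int3

Had the head grown to two bytes under the old hard-coded padding, the thunk
would have landed at offset 65; with the dynamic .skip the padding simply
shrinks to 62 bytes, and the new linker ASSERT catches any remaining
misalignment at build time instead of silently weakening the mitigation.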
Re: [PATCH] x86/retbleed: Add __x86_return_thunk alignment checks
Posted by Andrew Cooper 2 years, 7 months ago
On 15/05/2023 3:07 pm, Borislav Petkov wrote:
> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
> index 25f155205770..03c885d3640f 100644
> --- a/arch/x86/kernel/vmlinux.lds.S
> +++ b/arch/x86/kernel/vmlinux.lds.S
> @@ -508,4 +508,8 @@ INIT_PER_CPU(irq_stack_backing_store);
>             "fixed_percpu_data is not at start of per-cpu area");
>  #endif
>  
> +#ifdef CONFIG_RETHUNK
> +. = ASSERT((__x86_return_thunk & 0x3f) == 0, "__x86_return_thunk not cacheline-aligned");

Probably best to say 64-byte aligned.  The safety property is to do with the
layout of the BTB, not of a cacheline.

FWIW,

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
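
Taken literally, the suggestion would only reword the assertion string, along
these lines (a sketch; note that the version merged below kept the original
"not cacheline-aligned" message):

	#ifdef CONFIG_RETHUNK
	. = ASSERT((__x86_return_thunk & 0x3f) == 0, "__x86_return_thunk not 64B aligned");
	#endif

The 0x3f mask is 64 - 1: for a power-of-two boundary N, (addr & (N - 1)) == 0
holds exactly when addr is N-byte aligned, so the check itself already encodes
the 64-byte BTB granularity regardless of what the message calls it.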
[tip: x86/cpu] x86/retbleed: Add __x86_return_thunk alignment checks
Posted by tip-bot2 for Borislav Petkov (AMD) 2 years, 7 months ago
The following commit has been merged into the x86/cpu branch of tip:

Commit-ID:     f220125b999b2c9694149c6bda2798d8096f47ed
Gitweb:        https://git.kernel.org/tip/f220125b999b2c9694149c6bda2798d8096f47ed
Author:        Borislav Petkov (AMD) <bp@alien8.de>
AuthorDate:    Mon, 15 May 2023 16:07:26 +02:00
Committer:     Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Wed, 17 May 2023 12:14:21 +02:00

x86/retbleed: Add __x86_return_thunk alignment checks

Add a linker assertion and compute the 0xcc padding dynamically so that
__x86_return_thunk is always cacheline-aligned. Leave the SYM_START()
macro in as the untraining doesn't need ENDBR annotations anyway.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Link: https://lore.kernel.org/r/20230515140726.28689-1-bp@alien8.de
---
 arch/x86/kernel/vmlinux.lds.S | 4 ++++
 arch/x86/lib/retpoline.S      | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 25f1552..03c885d 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -508,4 +508,8 @@ INIT_PER_CPU(irq_stack_backing_store);
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
 
+#ifdef CONFIG_RETHUNK
+. = ASSERT((__x86_return_thunk & 0x3f) == 0, "__x86_return_thunk not cacheline-aligned");
+#endif
+
 #endif /* CONFIG_X86_64 */
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index b3b1e37..3fd066d 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -143,7 +143,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 *    from re-poisoning the BTB prediction.
  */
 	.align 64
-	.skip 63, 0xcc
+	.skip 64 - (__x86_return_thunk - zen_untrain_ret), 0xcc
 SYM_START(zen_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	/*