[PATCH v2] um: Fix misaligned stack in stub_exe

Posted by David Gow 1 month ago
The stub_exe could segfault when built with some compilers (e.g. gcc
13.2.0): SSE instructions relying on 16-byte stack alignment could be
generated, but the stack was misaligned.

This seems to be because the _start entry point is run with a 16-byte
aligned stack, while the x86_64 SysV ABI requires the stack to be so
aligned _before_ a function call (so it is misaligned on function
entry, the return address having been pushed); the function prologue
then realigns it. Because the entry point is never _called_, there is
no return address, so the prologue actually misaligns the stack and
the generated movaps instructions SIGSEGV. This results in the
following error:
start_userspace : expected SIGSTOP, got status = 139
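
For illustration only (not part of the patch; the file and symbol
names are made up), a minimal freestanding sketch of the failure mode:

/*
 * align_demo.c -- build freestanding:
 *	gcc -O2 -nostdlib -static align_demo.c -o align_demo
 *
 * The kernel enters _start with %rsp 16-byte aligned. The compiler,
 * assuming _start was *called* (i.e. %rsp % 16 == 8 on entry), lays
 * out the frame accordingly, so a 16-byte SSE store (movaps) for
 * 'buf' below can fault, mirroring the stub_exe crash. Whether it
 * actually does depends on the compiler version and flags.
 */
static long sys_exit(long code)
{
	register long rax asm("rax") = 60;	/* __NR_exit on x86_64 */
	register long rdi asm("rdi") = code;

	asm volatile("syscall" : "+r"(rax) : "r"(rdi)
		     : "rcx", "r11", "memory");
	return rax;
}

void _start(void)
{
	/* 16-byte aligned object the compiler may access with movaps */
	unsigned long buf[4] __attribute__((aligned(16))) = { 0, 1, 2, 3 };

	sys_exit(buf[0] + buf[3]);
}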

Force the compiler to emit code to re-align the stack in real_init(),
so that the generated SSE code doesn't crash. This isn't necessarily
the _correct_ way of solving the problem, but it avoids the need to
rewrite _start in assembly for each architecture for now.
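
As a usage sketch (GCC and clang on x86; entry_body() is a
hypothetical stand-in for real_init()), the attribute makes the
prologue realign the stack explicitly, roughly "and $-16, %rsp", so
SSE spills inside the function are safe however it was reached:

__attribute__((force_align_arg_pointer, noinline))
static void entry_body(void)
{
	/* locals the compiler may spill with movaps */
	volatile unsigned long pad[4] = { 0, 1, 2, 3 };

	(void)pad;
}

int main(void)
{
	entry_body();
	return 0;
}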

Fixes: 32e8eaf263d9 ("um: use execveat to create userspace MMs")
Signed-off-by: David Gow <davidgow@google.com>
---

Changes since v1:
https://lore.kernel.org/linux-um/20241017231007.1500497-2-davidgow@google.com/
- Use force_align_arg_pointer on real_init() instead of naked on
  _start, which works with clang.

 arch/um/kernel/skas/stub_exe.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

---
diff --git a/arch/um/kernel/skas/stub_exe.c b/arch/um/kernel/skas/stub_exe.c
index 04f75c577f1a..50fded2384e6 100644
--- a/arch/um/kernel/skas/stub_exe.c
+++ b/arch/um/kernel/skas/stub_exe.c
@@ -6,7 +6,7 @@
 
 void _start(void);
 
-noinline static void real_init(void)
+noinline __attribute__((force_align_arg_pointer)) static void real_init(void)
 {
 	struct stub_init_data init_data;
 	unsigned long res;
-- 
2.47.0.105.g07ac214952-goog
Re: [PATCH v2] um: Fix misaligned stack in stub_exe
Posted by Johannes Berg 1 month ago
Thanks :)
> 
> Changes since v1:
> https://lore.kernel.org/linux-um/20241017231007.1500497-2-davidgow@google.com/
> - Use force_align_arg_pointer on real_init() instead of naked on
>   _start, which works with clang.

I already applied it, so we need to fix things on top of it now, not
replace it.

However, I was just playing with the below - I was only looking at
the size so far, but what do you think?

johannes


From 57c5a80a4db2de33a11a5a20fcbea8f3643844f5 Mon Sep 17 00:00:00 2001
From: Johannes Berg <johannes.berg@intel.com>
Date: Tue, 22 Oct 2024 11:48:21 +0200
Subject: [PATCH] um: make stub_exe _start() pure inline asm

Since __attribute__((naked)) cannot be used with functions
containing C statements, just generate the few instructions
it needs in assembly directly.
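
For reference (not part of this patch; the called symbol is
hypothetical), the restriction being worked around: a naked function
body may safely contain only asm, which is why the alloca-based C
body has to go.

__attribute__((naked)) void _start(void)
{
	/* OK: asm only */
	asm volatile("jmp real_init_thunk");

	/*
	 * NOT OK: a C statement such as
	 *	char *p = __builtin_alloca(4096);
	 * is an error with clang and unsupported/undefined with GCC
	 * inside a naked function.
	 */
}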

Fixes: 8508a5e0e9db ("um: Fix misaligned stack in stub_exe")
Link: https://lore.kernel.org/linux-um/CABVgOSntH-uoOFMP5HwMXjx_f1osMnVdhgKRKm4uz6DFm2Lb8Q@mail.gmail.com/
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
 arch/um/kernel/skas/stub_exe.c      | 8 +-------
 arch/x86/um/shared/sysdep/stub_32.h | 8 ++++++++
 arch/x86/um/shared/sysdep/stub_64.h | 8 ++++++++
 3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/um/kernel/skas/stub_exe.c b/arch/um/kernel/skas/stub_exe.c
index 722ce6267476..a61f9c008233 100644
--- a/arch/um/kernel/skas/stub_exe.c
+++ b/arch/um/kernel/skas/stub_exe.c
@@ -81,11 +81,5 @@ noinline static void real_init(void)
 
 __attribute__((naked)) void _start(void)
 {
-	char *alloc;
-
-	/* Make enough space for the stub (including space for alignment) */
-	alloc = __builtin_alloca((1 + 2 * STUB_DATA_PAGES - 1) * UM_KERN_PAGE_SIZE);
-	asm volatile("" : "+r,m"(alloc) : : "memory");
-
-	real_init();
+	stub_start(real_init);
 }
diff --git a/arch/x86/um/shared/sysdep/stub_32.h b/arch/x86/um/shared/sysdep/stub_32.h
index 631a18d0ff44..760e8ce8093f 100644
--- a/arch/x86/um/shared/sysdep/stub_32.h
+++ b/arch/x86/um/shared/sysdep/stub_32.h
@@ -123,4 +123,12 @@ static __always_inline void *get_stub_data(void)
 
 	return (void *)ret;
 }
+
+#define stub_start(fn)							\
+	asm volatile (							\
+		"subl %0,%%esp ;"					\
+		"movl %1, %%eax ; "					\
+		"call *%%eax ;"						\
+		:: "i" ((STUB_DATA_PAGES + 1) * UM_KERN_PAGE_SIZE),	\
+		   "i" (&fn))
 #endif
diff --git a/arch/x86/um/shared/sysdep/stub_64.h b/arch/x86/um/shared/sysdep/stub_64.h
index 17153dfd780a..148bf423289e 100644
--- a/arch/x86/um/shared/sysdep/stub_64.h
+++ b/arch/x86/um/shared/sysdep/stub_64.h
@@ -126,4 +126,12 @@ static __always_inline void *get_stub_data(void)
 
 	return (void *)ret;
 }
+
+#define stub_start(fn)							\
+	asm volatile (							\
+		"subq %0,%%rsp ;"					\
+		"movq %1,%%rax ;"					\
+		"call *%%rax ;"						\
+		:: "i" ((STUB_DATA_PAGES + 1) * UM_KERN_PAGE_SIZE),	\
+		   "i" (&fn))
 #endif
-- 
2.47.0