[tip: x86/asm] x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>

Posted by tip-bot2 for Uros Bizjak 11 months, 1 week ago
The following commit has been merged into the x86/asm branch of tip:

Commit-ID:     aa3942d4d12ef57f031faa2772fe410c24191e36
Gitweb:        https://git.kernel.org/tip/aa3942d4d12ef57f031faa2772fe410c24191e36
Author:        Uros Bizjak <ubizjak@gmail.com>
AuthorDate:    Thu, 06 Mar 2025 15:52:11 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 06 Mar 2025 22:04:48 +01:00

x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>

Merge common x86_32 and x86_64 code in crash_setup_regs()
using macros from <asm/asm.h>.

The compiled object files before and after the patch are unchanged.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20250306145227.55819-1-ubizjak@gmail.com
---
 arch/x86/include/asm/kexec.h | 58 +++++++++++++++--------------------
 1 file changed, 25 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 8ad1874..e3589d6 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,7 @@
 #include <linux/string.h>
 #include <linux/kernel.h>
 
+#include <asm/asm.h>
 #include <asm/page.h>
 #include <asm/ptrace.h>
 
@@ -71,41 +72,32 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
 	if (oldregs) {
 		memcpy(newregs, oldregs, sizeof(*newregs));
 	} else {
+		asm volatile("mov %%" _ASM_BX ",%0" : "=m"(newregs->bx));
+		asm volatile("mov %%" _ASM_CX ",%0" : "=m"(newregs->cx));
+		asm volatile("mov %%" _ASM_DX ",%0" : "=m"(newregs->dx));
+		asm volatile("mov %%" _ASM_SI ",%0" : "=m"(newregs->si));
+		asm volatile("mov %%" _ASM_DI ",%0" : "=m"(newregs->di));
+		asm volatile("mov %%" _ASM_BP ",%0" : "=m"(newregs->bp));
+		asm volatile("mov %%" _ASM_AX ",%0" : "=m"(newregs->ax));
+		asm volatile("mov %%" _ASM_SP ",%0" : "=m"(newregs->sp));
+#ifdef CONFIG_X86_64
+		asm volatile("mov %%r8,%0" : "=m"(newregs->r8));
+		asm volatile("mov %%r9,%0" : "=m"(newregs->r9));
+		asm volatile("mov %%r10,%0" : "=m"(newregs->r10));
+		asm volatile("mov %%r11,%0" : "=m"(newregs->r11));
+		asm volatile("mov %%r12,%0" : "=m"(newregs->r12));
+		asm volatile("mov %%r13,%0" : "=m"(newregs->r13));
+		asm volatile("mov %%r14,%0" : "=m"(newregs->r14));
+		asm volatile("mov %%r15,%0" : "=m"(newregs->r15));
+#endif
+		asm volatile("mov %%ss,%k0" : "=a"(newregs->ss));
+		asm volatile("mov %%cs,%k0" : "=a"(newregs->cs));
 #ifdef CONFIG_X86_32
-		asm volatile("movl %%ebx,%0" : "=m"(newregs->bx));
-		asm volatile("movl %%ecx,%0" : "=m"(newregs->cx));
-		asm volatile("movl %%edx,%0" : "=m"(newregs->dx));
-		asm volatile("movl %%esi,%0" : "=m"(newregs->si));
-		asm volatile("movl %%edi,%0" : "=m"(newregs->di));
-		asm volatile("movl %%ebp,%0" : "=m"(newregs->bp));
-		asm volatile("movl %%eax,%0" : "=m"(newregs->ax));
-		asm volatile("movl %%esp,%0" : "=m"(newregs->sp));
-		asm volatile("movl %%ss, %%eax;" :"=a"(newregs->ss));
-		asm volatile("movl %%cs, %%eax;" :"=a"(newregs->cs));
-		asm volatile("movl %%ds, %%eax;" :"=a"(newregs->ds));
-		asm volatile("movl %%es, %%eax;" :"=a"(newregs->es));
-		asm volatile("pushfl; popl %0" :"=m"(newregs->flags));
-#else
-		asm volatile("movq %%rbx,%0" : "=m"(newregs->bx));
-		asm volatile("movq %%rcx,%0" : "=m"(newregs->cx));
-		asm volatile("movq %%rdx,%0" : "=m"(newregs->dx));
-		asm volatile("movq %%rsi,%0" : "=m"(newregs->si));
-		asm volatile("movq %%rdi,%0" : "=m"(newregs->di));
-		asm volatile("movq %%rbp,%0" : "=m"(newregs->bp));
-		asm volatile("movq %%rax,%0" : "=m"(newregs->ax));
-		asm volatile("movq %%rsp,%0" : "=m"(newregs->sp));
-		asm volatile("movq %%r8,%0" : "=m"(newregs->r8));
-		asm volatile("movq %%r9,%0" : "=m"(newregs->r9));
-		asm volatile("movq %%r10,%0" : "=m"(newregs->r10));
-		asm volatile("movq %%r11,%0" : "=m"(newregs->r11));
-		asm volatile("movq %%r12,%0" : "=m"(newregs->r12));
-		asm volatile("movq %%r13,%0" : "=m"(newregs->r13));
-		asm volatile("movq %%r14,%0" : "=m"(newregs->r14));
-		asm volatile("movq %%r15,%0" : "=m"(newregs->r15));
-		asm volatile("movl %%ss, %%eax;" :"=a"(newregs->ss));
-		asm volatile("movl %%cs, %%eax;" :"=a"(newregs->cs));
-		asm volatile("pushfq; popq %0" :"=m"(newregs->flags));
+		asm volatile("mov %%ds,%k0" : "=a"(newregs->ds));
+		asm volatile("mov %%es,%k0" : "=a"(newregs->es));
 #endif
+		asm volatile("pushf\n\t"
+			     "pop %0" : "=m"(newregs->flags));
 		newregs->ip = _THIS_IP_;
 	}
 }
Re: [tip: x86/asm] x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>
Posted by H. Peter Anvin 11 months, 1 week ago
On March 6, 2025 1:33:43 PM PST, tip-bot2 for Uros Bizjak <tip-bot2@linutronix.de> wrote:

Incidentally, doing this in C code is obviously completely broken, especially doing it in multiple statements. You have no idea what the compiler has messed with before you get there.
Re: [tip: x86/asm] x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>
Posted by Uros Bizjak 11 months, 1 week ago
On Fri, Mar 7, 2025 at 4:00 AM H. Peter Anvin <hpa@zytor.com> wrote:
> Incidentally, doing this in C code is obviously completely broken, especially doing it in multiple statements. You have no idea what the compiler has messed with before you get there.

These are "asm volatile" statements, so at least they won't be
scheduled differently relative to each other. OTOH, please note that
the patch is very carefully written not to change the code flow; use
of hard registers in inline asm is usually a sign of fragile code.

Uros.
Re: [tip: x86/asm] x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>
Posted by H. Peter Anvin 11 months, 1 week ago
On March 7, 2025 1:20:17 AM PST, Uros Bizjak <ubizjak@gmail.com> wrote:
>> Incidentally, doing this in C code is obviously completely broken, especially doing it in multiple statements. You have no idea what the compiler has messed with before you get there.
>
>These are "asm volatile" statements, so at least they won't be
>scheduled differently relative to each other. OTOH, please note that
>the patch is very carefully written not to change the code flow; use
>of hard registers in inline asm is usually a sign of fragile code.
>
>Uros.
>

That doesn't matter, though; that only means they can't be moved relative to each other, but the compiler is perfectly capable of inserting code before them, or in between.

Your patch might be a functional null, but the code is broken on a deep and fundamental basis.