The following commit has been merged into the locking/core branch of tip:

Commit-ID: 8b64db9733c2e4d30fd068d0b9dcef7b4424b035
Gitweb: https://git.kernel.org/tip/8b64db9733c2e4d30fd068d0b9dcef7b4424b035
Author: Uros Bizjak <ubizjak@gmail.com>
AuthorDate: Sun, 03 Nov 2024 17:09:31 +01:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 05 Nov 2024 12:55:34 +01:00

locking/atomic/x86: Use ALT_OUTPUT_SP() for __alternative_atomic64()

The CONFIG_X86_CMPXCHG64 variant of the x86_32 __alternative_atomic64()
macro uses a CALL instruction inside an asm statement. Use the
ALT_OUTPUT_SP() macro to add the required dependence on the %esp
register.

Fixes: 819165fb34b9 ("x86: Adjust asm constraints in atomic64 wrappers")
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20241103160954.3329-1-ubizjak@gmail.com
---
arch/x86/include/asm/atomic64_32.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 1f650b4..6c6e9b9 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -51,7 +51,8 @@ static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v)
#ifdef CONFIG_X86_CMPXCHG64
#define __alternative_atomic64(f, g, out, in...) \
asm volatile("call %c[func]" \
- : out : [func] "i" (atomic64_##g##_cx8), ## in)
+ : ALT_OUTPUT_SP(out) \
+ : [func] "i" (atomic64_##g##_cx8), ## in)

#define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8)
#else
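
For context, ALT_OUTPUT_SP() prepends the kernel's ASM_CALL_CONSTRAINT, a
"+r" output operand on the current_stack_pointer register variable, so the
compiler sees the asm statement as reading and writing the stack pointer
and cannot place the CALL before the frame is set up. Below is a minimal,
self-contained sketch of that idea for a 32-bit x86 target; the macro and
helper names mirror the kernel's, but the definitions are simplified
assumptions, not the actual headers in arch/x86/include/asm/asm.h and
arch/x86/include/asm/alternative.h.

/* Sketch only: simplified stand-ins for the kernel definitions. */
register unsigned long current_stack_pointer asm("esp");

/*
 * A fake read/write ("+r") operand on the stack pointer: any asm that
 * lists it depends on %esp, so the compiler must finish frame setup
 * before the asm statement and cannot reorder it earlier.
 */
#define ASM_CALL_CONSTRAINT	"+r" (current_stack_pointer)
#define ALT_OUTPUT_SP(...)	ASM_CALL_CONSTRAINT, ## __VA_ARGS__

/* Hypothetical out-of-line helper standing in for atomic64_*_cx8. */
extern void my_atomic64_helper(void);

static inline void call_helper_example(void)
{
	/*
	 * Same shape as __alternative_atomic64(): a CALL inside an asm
	 * statement, with the %esp dependence added via ALT_OUTPUT_SP().
	 */
	asm volatile("call %c[func]"
		     : ALT_OUTPUT_SP()
		     : [func] "i" (my_atomic64_helper)
		     : "memory");
}

Without the extra output operand, the compiler only sees the "i" input and
nothing ties the CALL to the stack pointer; with it, the asm is ordered
after frame setup, which is what the one-line change in the diff above
adds for the CONFIG_X86_CMPXCHG64 case.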