When enabling Clang's Context Analysis (a.k.a. Thread Safety Analysis) on
kernel/futex/core.o (see Peter's changes at [1]), arm64 LTO builds show:
| kernel/futex/core.c:982:1: warning: spinlock 'atomic ? __u.__val : q->lock_ptr' is still held at the end of function [-Wthread-safety-analysis]
| 982 | }
| | ^
| kernel/futex/core.c:976:2: note: spinlock acquired here
| 976 | spin_lock(lock_ptr);
| | ^
| kernel/futex/core.c:982:1: warning: expecting spinlock 'q->lock_ptr' to be held at the end of function [-Wthread-safety-analysis]
| 982 | }
| | ^
| kernel/futex/core.c:966:6: note: spinlock acquired here
| 966 | void futex_q_lockptr_lock(struct futex_q *q)
| | ^
| 2 warnings generated.
Where we have:

  extern void futex_q_lockptr_lock(struct futex_q *q) __acquires(q->lock_ptr);
  ..
  void futex_q_lockptr_lock(struct futex_q *q)
  {
        spinlock_t *lock_ptr;

        /*
         * See futex_unqueue() why lock_ptr can change.
         */
        guard(rcu)();
  retry:
  >>    lock_ptr = READ_ONCE(q->lock_ptr);
        spin_lock(lock_ptr);
        ...
  }
The READ_ONCE() above expands to arm64's LTO __READ_ONCE(). Here, Clang
Thread Safety Analysis's alias analysis resolves 'lock_ptr' to
'atomic ? __u.__val : q->lock_ptr' and considers this the identity of
the context lock, given it cannot see through the inline assembly;
however, we simply want 'q->lock_ptr' as the canonical context lock.
While for code generation the compiler simplifies this to __u.__val for
pointers (the 8-byte, i.e. 'atomic', case), TSA's analysis (a) happens
much earlier, on the AST, and (b) would arrive at the wrong deduction
either way.
Now that we have gotten rid of the 'atomic' ternary, we can return
'__u.__val' through a pointer that we initialize with '&x' but then
redirect through a pointer-to-pointer. When READ_ONCE()'ing a context
lock pointer, TSA's alias analysis does not invalidate the initial alias
when it is updated through the pointer-to-pointer, which effectively
makes it "see through" the __READ_ONCE().
Code generation is unchanged.
Link: https://lkml.kernel.org/r/20260121110704.221498346@infradead.org [1]
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202601221040.TeM0ihff-lkp@intel.com/
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* Rebase.
---
arch/arm64/include/asm/rwonce.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
index 712de3238f9a..3a50a1d0d17e 100644
--- a/arch/arm64/include/asm/rwonce.h
+++ b/arch/arm64/include/asm/rwonce.h
@@ -48,8 +48,11 @@
  */
 #define __READ_ONCE(x)						\
 ({								\
-	typeof(&(x)) __x = &(x);				\
+	auto __x = &(x);					\
+	auto __ret = (__rwonce_typeof_unqual(*__x) *)__x;	\
+	auto __retp = &__ret;					\
 	union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u;	\
+	*__retp = &__u.__val;					\
 	switch (sizeof(x)) {					\
 	case 1:							\
 		asm volatile(__LOAD_RCPC(b, %w0, %1)		\
@@ -74,7 +77,7 @@
 	default:						\
 		__u.__val = *(volatile typeof(*__x) *)__x;	\
 	}							\
-	__u.__val;						\
+	*__ret;							\
 })
#endif /* !BUILD_VDSO */
--
2.53.0.rc1.217.geba53bf80e-goog
On Thu, 29 Jan 2026 01:52:34 +0100
Marco Elver <elver@google.com> wrote:
> When enabling Clang's Context Analysis (a.k.a. Thread Safety Analysis) on
> kernel/futex/core.o (see Peter's changes at [1]), arm64 LTO builds show:
> [...]
>
> The READ_ONCE() above expands to arm64's LTO __READ_ONCE(). Here, Clang
> Thread Safety Analysis's alias analysis resolves 'lock_ptr' to
> 'atomic ? __u.__val : q->lock_ptr',
Doesn't the previous patch remove that conditional?
This description should really refer to the code before this patch.
> [...]
>
> Now that we have gotten rid of the 'atomic' ternary, we can return
> '__u.__val' through a pointer that we initialize with '&x' but then
> redirect through a pointer-to-pointer. When READ_ONCE()'ing a context
> lock pointer, TSA's alias analysis does not invalidate the initial alias
> when it is updated through the pointer-to-pointer, which effectively
> makes it "see through" the __READ_ONCE().
Some of that needs to be a comment in the code.
I also suspect you've just found a bug in the TSA logic.
> [...]
> @@ -48,8 +48,11 @@
>   */
>  #define __READ_ONCE(x)					\
>  ({								\
> -	typeof(&(x)) __x = &(x);				\
> +	auto __x = &(x);					\
> +	auto __ret = (__rwonce_typeof_unqual(*__x) *)__x;	\
> +	auto __retp = &__ret;					\
>  	union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u;	\
Can you define __val using typeof(__ret)?
Saves expanding the macro twice (although it isn't the horrid
__unqual_scalar_typeof() any more).
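
For reference, the suggestion presumably amounts to the following
untested sketch, reusing __ret's already-computed pointee type so that
__rwonce_typeof_unqual() need only be expanded once:

	union { typeof(*__ret) __val; char __c[1]; } __u;	\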
David
On Thu, 29 Jan 2026 at 12:31, David Laight <david.laight.linux@gmail.com> wrote:
>
> On Thu, 29 Jan 2026 01:52:34 +0100
> Marco Elver <elver@google.com> wrote:
>
> > [...]
> >
> > The READ_ONCE() above expands to arm64's LTO __READ_ONCE(). Here, Clang
> > Clang Thread Safety Analysis's alias analysis resolves 'lock_ptr' to
> > 'atomic ? __u.__val : q->lock_ptr',
>
> Doesn't the previous patch remove that conditional?
> This description should really refer to the code before this patch.
Will word-smith this a bit. But this refers to the state of the code
where the original issue was found, which spawned all this.
> > [...]
>
> Some of that needs to be a comment in the code.
> I also suspect you've just found a bug in the TSA logic.
Adding a comment. From a soundness POV, yes, it's a bug, but I think
reassigning a pointer via a pointer-to-pointer in the same scope is
just pointless, so I'm willing to keep this as a deliberate escape
hatch (might need to add a test to Clang to capture this, and to
discuss if someone wants to change it).
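
Such a test might look roughly like the following sketch (hypothetical,
using Clang's C capability attributes; not an actual test from the
Clang tree):

	struct __attribute__((capability("mutex"))) Mutex { int dummy; };

	void mu_lock(struct Mutex *mu) __attribute__((acquire_capability(*mu)));
	void mu_unlock(struct Mutex *mu) __attribute__((release_capability(*mu)));

	struct Obj { struct Mutex *mu; };

	void ptr_to_ptr_reassign(struct Obj *o)
	{
		struct Mutex *tmp;
		struct Mutex **ret = &o->mu;	/* alias recorded: o->mu */
		struct Mutex ***retp = &ret;

		*retp = &tmp;		/* must not invalidate ret's alias */
		tmp = o->mu;		/* stands in for the asm load */
		mu_lock(*ret);		/* TSA should treat *ret as o->mu... */
		mu_unlock(o->mu);	/* ...expecting no warning here */
	}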
On Thu, Jan 29, 2026 at 01:52:34AM +0100, Marco Elver wrote:
> When enabling Clang's Context Analysis (a.k.a. Thread Safety Analysis) on
> kernel/futex/core.o (see Peter's changes at [1]), arm64 LTO builds show:
> [...]
>
Seems reasonable to me, but I don't have the compiler knowledge to do a
full review, so:
Tested-by: Boqun Feng <boqun@kernel.org>
We also have similar issues for the asm-based smp_load_acquire(); to
trigger them, just replace `READ_ONCE(q->lock_ptr)` with
`smp_load_acquire(&q->lock_ptr)`, as sketched below.
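
For illustration, the trigger would presumably be (a hypothetical
change to futex_q_lockptr_lock(), not part of this patch):

	guard(rcu)();
retry:
	/* Same TSA aliasing issue, via arm64's asm-based acquire load: */
	lock_ptr = smp_load_acquire(&q->lock_ptr);
	spin_lock(lock_ptr);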
Regards,
Boqun