Arnd reported an x86 randconfig using gcc-15 tripped over
__scoped_seqlock_bug(). Turns out GCC chose not to inline the
scoped_seqlock helper functions and as such was not able to optimize
properly.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
For tip/locking/urgent
include/linux/seqlock.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index a8a8661839b6..221123660e71 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -1224,7 +1224,7 @@ struct ss_tmp {
spinlock_t *lock_irqsave;
};
-static inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
+static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
{
if (sst->lock)
spin_unlock(sst->lock);
@@ -1252,7 +1252,7 @@ static inline void __scoped_seqlock_bug(void) { }
extern void __scoped_seqlock_bug(void);
#endif
-static inline void
+static __always_inline void
__scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
{
switch (sst->state) {
* Peter Zijlstra <peterz@infradead.org> wrote: > Arnd reported an x86 randconfig using gcc-15 tripped over > __scoped_seqlock_bug(). Turns out GCC chose not to inline the > scoped_seqlock helper functions and as such was not able to optimize > properly. BTW., I found a Clang randconfig too that fails the build, so it's not limited to GCC. Thanks, Ingo
On Thu, Dec 4, 2025, at 11:43, Peter Zijlstra wrote:
> Arnd reported an x86 randconfig using gcc-15 tripped over
> __scoped_seqlock_bug(). Turns out GCC chose not to inline the
> scoped_seqlock helper functions and as such was not able to optimize
> properly.
>
> Reported-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Arnd Bergmann <arnd@arndb.de>
>
> -static inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
> +static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
> {
> if (sst->lock)
> spin_unlock(sst->lock);
> @@ -1252,7 +1252,7 @@ static inline void __scoped_seqlock_bug(void) { }
> extern void __scoped_seqlock_bug(void);
> #endif
>
> -static inline void
> +static __always_inline void
> __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
> {
> switch (sst->state) {
It looks like I got close: I had tried the __always_inline on
__scoped_seqlock_next but missed the one on __scoped_seqlock_cleanup,
so that was not enough.
Your version addresses the issue for me, thanks a lot for the fix!
Arnd
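A plausible reading of why the cleanup helper matters too: the scoped
section runs it via __attribute__((cleanup)), which passes the address
of the scope variable. If that call stays out of line, the variable's
address escapes and the optimizer can no longer track its state field
as a constant, keeping the __scoped_seqlock_bug() call alive. A
simplified, hypothetical sketch of the pattern (not the actual
seqlock.h code):

	#define __cleanup(fn)	__attribute__((__cleanup__(fn)))

	struct scope_tmp {
		int state;
	};

	static inline __attribute__((__always_inline__))
	void scope_cleanup(struct scope_tmp *tmp)
	{
		/*
		 * Runs automatically when 'tmp' goes out of scope.
		 * Because it receives &tmp, leaving it out of line lets
		 * the address escape; forcing it inline keeps the whole
		 * state machine visible to the optimizer.
		 */
	}

	void example(void)
	{
		struct scope_tmp tmp __cleanup(scope_cleanup) = { .state = 0 };
		/* body of the scoped critical section */
	}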
* Arnd Bergmann <arnd@arndb.de> wrote:
> On Thu, Dec 4, 2025, at 11:43, Peter Zijlstra wrote:
> > Arnd reported an x86 randconfig using gcc-15 tripped over
> > __scoped_seqlock_bug(). Turns out GCC chose not to inline the
> > scoped_seqlock helper functions and as such was not able to optimize
> > properly.
> >
> > Reported-by: Arnd Bergmann <arnd@arndb.de>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> Tested-by: Arnd Bergmann <arnd@arndb.de>
>
> >
> > -static inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
> > +static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
> > {
> > if (sst->lock)
> > spin_unlock(sst->lock);
> > @@ -1252,7 +1252,7 @@ static inline void __scoped_seqlock_bug(void) { }
> > extern void __scoped_seqlock_bug(void);
> > #endif
> >
> > -static inline void
> > +static __always_inline void
> > __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
> > {
> > switch (sst->state) {
>
> It looks like I got close: I had tried the __always_inline on
> __scoped_seqlock_next but missed the one on __scoped_seqlock_cleanup,
> so that was not enough.
Same here: I ran into that build failure and ended up finding this
as a side-effect:
24bc5ea5c01a ("seqlock, procfs: Match scoped_seqlock_read() critical section vs. RCU ordering in do_task_stat() to do_io_accounting()")
And like you I was trying to work around the compiler failure
via forced-inlining of __scoped_seqlock_next(), but missed
__scoped_seqlock_cleanup() ... :-)
>
> Your version addresses the issue for me, thanks a lot for the fix!
Works for me too, and I've applied the fix to tip:locking/urgent.
Thanks,
Ingo
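For reference, plain 'inline' in the kernel is only a hint that the
inliner's cost model may ignore (as happened here), whereas
__always_inline compels inlining. Its kernel definition is roughly:

	/* approximate; see include/linux/compiler_types.h */
	#define __always_inline	inline __attribute__((__always_inline__))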
The following commit has been merged into the locking/urgent branch of tip:
Commit-ID: 90dfeef1cd38dff19f8b3a752d13bfd79f0f7694
Gitweb: https://git.kernel.org/tip/90dfeef1cd38dff19f8b3a752d13bfd79f0f7694
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu, 04 Dec 2025 11:43:32 +01:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Sat, 06 Dec 2025 09:53:05 +01:00
seqlock: Cure some more scoped_seqlock() optimization fails
Arnd reported an x86 randconfig using gcc-15 tripped over
__scoped_seqlock_bug(). Turns out GCC chose not to inline the
scoped_seqlock helper functions and as such was not able to optimize
properly.
[ mingo: Clang fails the build too in some circumstances. ]
Reported-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: https://patch.msgid.link/20251204104332.GG2528459@noisy.programming.kicks-ass.net
---
include/linux/seqlock.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index a8a8661..2211236 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -1224,7 +1224,7 @@ struct ss_tmp {
spinlock_t *lock_irqsave;
};
-static inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
+static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst)
{
if (sst->lock)
spin_unlock(sst->lock);
@@ -1252,7 +1252,7 @@ static inline void __scoped_seqlock_bug(void) { }
extern void __scoped_seqlock_bug(void);
#endif
-static inline void
+static __always_inline void
__scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target)
{
switch (sst->state) {