This series adds waited variants of the smp_cond_load() primitives:
smp_cond_load_relaxed_timeout() and smp_cond_load_acquire_timeout().
As the name suggests, the new interfaces are meant for contexts where
you want to wait on a condition variable for a finite duration. This is
easy enough to do with a loop around cpu_relax(). However, some
architectures (e.g. arm64) also allow waiting on a cacheline. So, these
interfaces handle a mixture of spinning and waiting, with an
smp_cond_load() thrown in.
The interfaces are:
smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr)
smp_cond_load_acquire_timeout(ptr, cond_expr, time_check_expr)
The added parameter, time_check_expr, determines the bail out condition.
Also add the ancillary interfaces atomic_cond_read_*_timeout() and
atomic64_cond_read_*_timeout(), both of which are wrappers around
smp_cond_load_*_timeout().
Update poll_idle() and resilient queued spinlocks to use these
interfaces.
Changelog:
v5 [1]:
- use cpu_poll_relax() instead of cpu_relax().
- instead of defining an arm64 specific
smp_cond_load_relaxed_timeout(), just define the appropriate
cpu_poll_relax().
- re-read the target pointer when we exit due to the time-check.
- s/SMP_TIMEOUT_SPIN_COUNT/SMP_TIMEOUT_POLL_COUNT/
(Suggested by Will Deacon)
- add atomic_cond_read_*_timeout() and atomic64_cond_read_*_timeout()
interfaces.
- rqspinlock: use atomic_cond_read_acquire_timeout().
- cpuidle: use smp_cond_load_relaxed_timeout() for polling.
(Suggested by Catalin Marinas)
- rqspinlock: define SMP_TIMEOUT_POLL_COUNT to be 16k for non arm64
v4 [2]:
- naming change 's/timewait/timeout/'
- resilient spinlocks: get rid of res_smp_cond_load_acquire_waiting()
and fixup use of RES_CHECK_TIMEOUT().
(Both suggested by Catalin Marinas)
v3 [3]:
- further interface simplifications (suggested by Catalin Marinas)
v2 [4]:
- simplified the interface (suggested by Catalin Marinas)
- get rid of wait_policy, and a multitude of constants
- adds a slack parameter
This helped remove a fair amount of code duplication and, in
hindsight, unnecessary constants.
v1 [5]:
- add wait_policy (coarse and fine)
- derive spin-count etc at runtime instead of using arbitrary
constants.
Haris Okanovic tested v4 of this series with poll_idle()/haltpoll patches. [6]
Any comments appreciated!
Thanks!
Ankur
[1] https://lore.kernel.org/lkml/20250911034655.3916002-1-ankur.a.arora@oracle.com/
[2] https://lore.kernel.org/lkml/20250829080735.3598416-1-ankur.a.arora@oracle.com/
[3] https://lore.kernel.org/lkml/20250627044805.945491-1-ankur.a.arora@oracle.com/
[4] https://lore.kernel.org/lkml/20250502085223.1316925-1-ankur.a.arora@oracle.com/
[5] https://lore.kernel.org/lkml/20250203214911.898276-1-ankur.a.arora@oracle.com/
[6] https://lore.kernel.org/lkml/2cecbf7fb23ee83a4ce027e1be3f46f97efd585c.camel@amazon.com/
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: linux-arch@vger.kernel.org
Ankur Arora (7):
asm-generic: barrier: Add smp_cond_load_relaxed_timeout()
arm64: barrier: Add smp_cond_load_relaxed_timeout()
arm64: rqspinlock: Remove private copy of
smp_cond_load_acquire_timewait
asm-generic: barrier: Add smp_cond_load_acquire_timeout()
atomic: add atomic_cond_read_*_timeout()
rqspinlock: use smp_cond_load_acquire_timeout()
cpuidle/poll_state: poll via smp_cond_load_relaxed_timeout()
arch/arm64/include/asm/barrier.h | 13 +++++
arch/arm64/include/asm/rqspinlock.h | 85 -----------------------------
drivers/cpuidle/poll_state.c | 31 +++--------
include/asm-generic/barrier.h | 63 +++++++++++++++++++++
include/linux/atomic.h | 8 +++
kernel/bpf/rqspinlock.c | 29 ++++------
6 files changed, 105 insertions(+), 124 deletions(-)
--
2.43.5
An "AI" review bot flagged a couple of errors in the series: a missing
parameter (patch 5), and a possible race in poll_idle() (patch 7). Let
me quickly resend with those fixed.

Thanks
Ankur

Ankur Arora <ankur.a.arora@oracle.com> writes:

> [ full cover letter quoted above; trimmed ]

--
ankur