Hi,
This series adds timed-wait variants of the smp_cond_load() primitives:
smp_cond_load_relaxed_timewait() and smp_cond_load_acquire_timewait().

Why? As the names suggest, the new interfaces are meant for contexts
where you want to wait on a condition for a finite duration. This is
easy enough to do with a loop around cpu_relax(). However, some
architectures (e.g. arm64) also allow waiting on a cacheline. So, these
interfaces handle a mix of spinning and waiting, with an
smp_cond_load() thrown in.
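For reference, the open-coded spinning version looks roughly like the
sketch below. This is illustrative only, not code from this series; the
helper name and the batching of the clock check every 200 iterations
are made up:

  #include <linux/types.h>
  #include <linux/compiler.h>
  #include <linux/ktime.h>
  #include <linux/processor.h>

  /*
   * Spin on *ptr until (*ptr & mask) becomes true or @deadline passes.
   * The clock is only consulted every 200 iterations to keep the
   * timeout check off the hot path.
   */
  static inline u32 spin_until_mask_or_timeout(u32 *ptr, u32 mask,
                                               ktime_t deadline)
  {
          u32 val;
          unsigned int spin = 0;

          while (!((val = READ_ONCE(*ptr)) & mask)) {
                  cpu_relax();
                  if (!(++spin % 200) && ktime_after(ktime_get(), deadline))
                          break;
          }
          return val;
  }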
There are two known users for these interfaces:
- poll_idle() [1]
- resilient queued spinlocks [2]
The interfaces are:

  smp_cond_load_relaxed_timewait(ptr, cond_expr,
                                 time_expr, time_limit, slack)

  smp_cond_load_acquire_timewait(ptr, cond_expr,
                                 time_expr, time_limit, slack)
The added parameters pertain to the timeout check, plus a measure of
how much slack the caller can tolerate in the timeout. The slack
matters mostly while in the wait state, where waking up depends on an
asynchronous event. A sketch of a possible caller follows.
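This is not from the series; it assumes that time_expr is re-evaluated
inside the loop and compared against time_limit in the same units, and
the wait_for_flags() wrapper and its parameters are made up:

  #include <linux/types.h>
  #include <linux/sched/clock.h>
  #include <asm/barrier.h>

  /*
   * Wait for any bit in @mask to show up in *@flags. Give up once
   * local_clock() passes @deadline_ns, tolerating up to @slack_ns of
   * overshoot while in the wait state (e.g. the arm64 event-stream
   * period).
   */
  static u32 wait_for_flags(u32 *flags, u32 mask,
                            u64 deadline_ns, u64 slack_ns)
  {
          return smp_cond_load_relaxed_timewait(flags, VAL & mask,
                                                local_clock(), deadline_ns,
                                                slack_ns);
  }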
Changelog:
v2 [3]:
- simplified the interface (suggested by Catalin Marinas)
- got rid of wait_policy and a multitude of constants
- added a slack parameter

This helped remove a fair amount of code duplication and, in hindsight,
unnecessary constants.
v1 [4]:
- add wait_policy (coarse and fine)
- derive spin-count etc at runtime instead of using arbitrary
constants.
Haris Okanovic tested an earlier version of this series with the
poll_idle()/haltpoll patches [5].
Any comments appreciated!
Ankur
[1] https://lore.kernel.org/lkml/20241107190818.522639-3-ankur.a.arora@oracle.com/
[2] Uses the smp_cond_load_acquire_timewait() from v1
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/rqspinlock.h
[3] https://lore.kernel.org/lkml/20250502085223.1316925-1-ankur.a.arora@oracle.com/
[4] https://lore.kernel.org/lkml/20250203214911.898276-1-ankur.a.arora@oracle.com/
[5] https://lore.kernel.org/lkml/f2f5d09e79539754ced085ed89865787fa668695.camel@amazon.com
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: linux-arch@vger.kernel.org
Ankur Arora (5):
asm-generic: barrier: Add smp_cond_load_relaxed_timewait()
asm-generic: barrier: Handle spin-wait in
smp_cond_load_relaxed_timewait()
asm-generic: barrier: Add smp_cond_load_acquire_timewait()
arm64: barrier: Support waiting in smp_cond_load_relaxed_timewait()
arm64: barrier: Handle waiting in smp_cond_load_relaxed_timewait()
arch/arm64/include/asm/barrier.h | 54 +++++++++++
arch/arm64/include/asm/rqspinlock.h | 2 +-
include/asm-generic/barrier.h | 137 ++++++++++++++++++++++++++++
3 files changed, 192 insertions(+), 1 deletion(-)
--
2.43.5
Gentle ping for review.

Ankur