[PATCH v7 2/7] arm64: barrier: Support smp_cond_load_relaxed_timeout()

Ankur Arora posted 7 patches 3 months, 3 weeks ago
Support waiting in smp_cond_load_relaxed_timeout() via
__cmpwait_relaxed(). Limit this to when the event-stream is enabled,
to ensure that we wake from WFE periodically and don't block forever
if there are no stores to the cacheline.

In the unlikely event that the event-stream is unavailable, fall back
to spin-waiting.

Also set SMP_TIMEOUT_POLL_COUNT to 1 so we do the time-check for each
iteration in smp_cond_load_relaxed_timeout().

Cc: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
 arch/arm64/include/asm/barrier.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index f5801b0ba9e9..92c16dfb8ca6 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -219,6 +219,19 @@ do {									\
 	(typeof(*ptr))VAL;						\
 })
 
+#define SMP_TIMEOUT_POLL_COUNT	1
+
+/* Re-declared here to avoid include dependency. */
+extern bool arch_timer_evtstrm_available(void);
+
+#define cpu_poll_relax(ptr, val)					\
+do {									\
+	if (arch_timer_evtstrm_available())				\
+		__cmpwait_relaxed(ptr, val);				\
+	else								\
+		cpu_relax();						\
+} while (0)
+
 #include <asm-generic/barrier.h>
 
 #endif	/* __ASSEMBLY__ */
-- 
2.43.5
Re: [PATCH v7 2/7] arm64: barrier: Support smp_cond_load_relaxed_timeout()
Posted by Christoph Lameter (Ampere) 3 months, 2 weeks ago
On Thu, 16 Oct 2025, Ankur Arora wrote:

> +#define SMP_TIMEOUT_POLL_COUNT	1

A way to disable the spinning in the core code, so arm64 won't spin
there anymore. Good.

Spinning is bad and a waste of CPU resources. If this is done, then I
would like the arch code to implement the spinning rather than the core,
so that there is a motivation for the arch maintainer to come up with a
way to avoid the spinning at some point.

The patch is ok as is.

Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Re: [PATCH v7 2/7] arm64: barrier: Support smp_cond_load_relaxed_timeout()
Posted by Ankur Arora 3 months, 2 weeks ago
Christoph Lameter (Ampere) <cl@gentwo.org> writes:

> On Thu, 16 Oct 2025, Ankur Arora wrote:
>
>> +#define SMP_TIMEOUT_POLL_COUNT	1
>
> A way to disable the spinning in the core code, so arm64 won't spin
> there anymore. Good.
>
> Spinning is bad and a waste of cpu resources. If this is done then I
> would like the arch code to implement the spinning and not the core so

Agreed.

> that there is a motivation for the arch maintainer to
> come up with a way to avoid the spinning at some point.
>
> The patch is ok as is.
>
> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org>

Thanks for all the reviews!

--
ankur