[PATCH 1/5] clocksource/drivers/arm_arch_timer: Add a static key indicating the need for a runtime workaround

Posted by Marc Zyngier 1 month, 1 week ago
In order to decide whether we can read the architected counter without
disabling preemption to look up a workaround, introduce a static key
that denotes whether a workaround is required at all.

The behaviour of this new static key is a bit unusual:

- it starts as 'true', indicating that workarounds are required

- each time a new CPU boots, it is added to a cpumask

- when all possible CPUs have booted at least once, and it has
  been established that none of them requires a workaround,
  the key flips to 'false'

Of course, as long as not all the CPUs have booted once, you
may end up with slow accessors, but that's what you get for not
sharing your toys.

Things are made a bit complicated because static keys cannot be
flipped from a CPUHP callback. Instead, schedule a deferred work
from there. Yes, this is fun.

Nothing is making use of this stuff yet, but watch this space.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 drivers/clocksource/arm_arch_timer.c | 33 ++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index 90aeff44a2764..c5b42001c9282 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -90,6 +90,8 @@ static int arch_counter_get_width(void)
 /*
  * Architected system timer support.
  */
+static inline bool arch_counter_broken_accessors(void);
+
 static noinstr u64 raw_counter_get_cntpct_stable(void)
 {
 	return __arch_counter_get_cntpct_stable();
@@ -555,10 +557,40 @@ static bool arch_timer_counter_has_wa(void)
 {
 	return atomic_read(&timer_unstable_counter_workaround_in_use);
 }
+
+static DEFINE_STATIC_KEY_TRUE(broken_cnt_accessors);
+
+static inline bool arch_counter_broken_accessors(void)
+{
+	return static_branch_unlikely(&broken_cnt_accessors);
+}
+
+static void enable_direct_accessors(struct work_struct *wk)
+{
+	pr_info("Enabling direct accessors\n");
+	static_branch_disable(&broken_cnt_accessors);
+}
+
+static int arch_timer_set_direct_accessors(unsigned int cpu)
+{
+	static DECLARE_WORK(enable_accessors_wk, enable_direct_accessors);
+	static cpumask_t seen_cpus;
+
+	cpumask_set_cpu(cpu, &seen_cpus);
+
+	if (arch_counter_broken_accessors()	&&
+	    !arch_timer_counter_has_wa()	&&
+	    cpumask_equal(&seen_cpus, cpu_possible_mask))
+		schedule_work(&enable_accessors_wk);
+
+	return 0;
+}
 #else
 #define arch_timer_check_ool_workaround(t,a)		do { } while(0)
 #define arch_timer_this_cpu_has_cntvct_wa()		({false;})
 #define arch_timer_counter_has_wa()			({false;})
+static inline bool arch_counter_broken_accessors(void)	{ return false; }
+#define arch_timer_set_direct_accessors(c)		do { } while(0)
 #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */
 
 static __always_inline irqreturn_t timer_handler(const int access,
@@ -840,6 +872,7 @@ static int arch_timer_starting_cpu(unsigned int cpu)
 	}
 
 	arch_counter_set_user_access();
+	arch_timer_set_direct_accessors(cpu);
 
 	return 0;
 }
-- 
2.47.3