Timers started using add_timer_on() can get stuck if the
specified CPU is offline. If the user of add_timer_on()
cannot guarantee that the specified CPU is online and ends
up starting a timer on an offline CPU, that timer may not
fire as expected.

Such users can use the new interface timer_try_add_on_cpu(),
which starts a timer on a given CPU only after ensuring that
the CPU is and remains online. If it sees that the specified
CPU is offline, or if it cannot ensure that the CPU stays
online, it does not start the timer on any CPU and leaves the
decision of selecting another CPU to the caller.

Thus it ensures that a started timer does not get lost because
it was started on an offlined CPU.
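A hypothetical caller (not part of this patch; the function name and fallback policy below are made up for illustration) could then fall back gracefully when the target CPU cannot be pinned online:

```c
/* Hypothetical caller sketch: try the preferred CPU first, and fall
 * back to a plain add_timer() (default placement) rather than losing
 * the timer when the CPU cannot be confirmed online. */
static void start_tick_timer(struct timer_list *timer, int preferred_cpu)
{
	if (!timer_try_add_on_cpu(timer, preferred_cpu))
		add_timer(timer);
}
```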
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Imran Khan <imran.f.khan@oracle.com>
---
include/linux/timer.h | 1 +
kernel/time/timer.c | 33 +++++++++++++++++++++++++++++++++
2 files changed, 34 insertions(+)
diff --git a/include/linux/timer.h b/include/linux/timer.h
index e67ecd1cbc97d..210c15527b325 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -148,6 +148,7 @@ static inline int timer_pending(const struct timer_list * timer)
}
extern void add_timer_on(struct timer_list *timer, int cpu);
+extern bool timer_try_add_on_cpu(struct timer_list *timer, int cpu);
extern int mod_timer(struct timer_list *timer, unsigned long expires);
extern int mod_timer_pending(struct timer_list *timer, unsigned long expires);
extern int timer_reduce(struct timer_list *timer, unsigned long expires);
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index ec9eb58e45241..800ed9b4dea7a 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1394,6 +1394,39 @@ void add_timer_on(struct timer_list *timer, int cpu)
}
EXPORT_SYMBOL_GPL(add_timer_on);
+/**
+ * timer_try_add_on_cpu - Try to start a timer on a particular CPU,
+ * after ensuring that it is and remains online.
+ * @timer: The timer to be started
+ * @cpu: The CPU to start it on
+ *
+ * Check that the specified CPU is online, and ensure that it stays
+ * online, before starting a timer on it.
+ *
+ * Return:
+ * * %true - If timer was started on an online cpu
+ * * %false - If the specified cpu was offline or if its online status
+ * could not be ensured due to unavailability of hotplug lock.
+ */
+bool timer_try_add_on_cpu(struct timer_list *timer, int cpu)
+{
+ bool ret = true;
+
+ if (unlikely(!cpu_online(cpu)))
+ ret = false;
+ else if (cpus_read_trylock()) {
+ if (likely(cpu_online(cpu)))
+ add_timer_on(timer, cpu);
+ else
+ ret = false;
+ cpus_read_unlock();
+ } else
+ ret = false;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(timer_try_add_on_cpu);
+
/**
* __timer_delete - Internal function: Deactivate a timer
* @timer: The timer to be deactivated
--
2.34.1
On Thu, Jan 16 2025 at 00:41, Imran Khan wrote:
> + * Return:
> + * * %true - If timer was started on an online cpu
> + * * %false - If the specified cpu was offline or if its online status
> + * could not be ensured due to unavailability of hotplug lock.
> + */
> +bool timer_try_add_on_cpu(struct timer_list *timer, int cpu)
> +{
> + bool ret = true;
> +
> + if (unlikely(!cpu_online(cpu)))
> + ret = false;
> + else if (cpus_read_trylock()) {
> + if (likely(cpu_online(cpu)))
> + add_timer_on(timer, cpu);
> + else
> + ret = false;
> + cpus_read_unlock();
> + } else
> + ret = false;
> +
> + return ret;
Aside from the horrible coding style, the cpus_read_trylock() part does
not make any sense.
It's perfectly valid to queue a timer on an online CPU while the CPU
hotplug lock is write-held, which can happen for tons of reasons even
unrelated to an actual CPU hotplug operation.
Even during a hotplug operation, adding a timer on a particular CPU is
valid; whether that is the CPU which is actually being plugged or not is
irrelevant.
So if we add such a function, then it needs to have very precisely
defined semantics, which have to be independent of the CPU hotplug lock.
The only way I can imagine is that the state becomes part of the per-CPU
timer base, but then I have to ask what this is actually trying to
solve.
As far as I understood, there is an issue in the RDS code queueing a
delayed work on an offline CPU, but that should have triggered at least
the warning in __queue_delayed_work(), right?
So the question is whether this try() interface actually solves any of
this, rather than papering over the CPU-hotplug-related issues in the
RDS code in some way.
Thanks,
tglx
Hello Thomas,
Thanks for taking a look and your feedback.
On 16/1/2025 3:04 am, Thomas Gleixner wrote:
> On Thu, Jan 16 2025 at 00:41, Imran Khan wrote:
>> + * Return:
>> + * * %true - If timer was started on an online cpu
>> + * * %false - If the specified cpu was offline or if its online status
>> + * could not be ensured due to unavailability of hotplug lock.
>> + */
>> +bool timer_try_add_on_cpu(struct timer_list *timer, int cpu)
>> +{
>> + bool ret = true;
>> +
>> + if (unlikely(!cpu_online(cpu)))
>> + ret = false;
>> + else if (cpus_read_trylock()) {
>> + if (likely(cpu_online(cpu)))
>> + add_timer_on(timer, cpu);
>> + else
>> + ret = false;
>> + cpus_read_unlock();
>> + } else
>> + ret = false;
>> +
>> + return ret;
>
> Aside of the horrible coding style, that cpus_read_trylock() part does
> not make any sense.
>
> It's perfectly valid to queue a timer on a online CPU when the CPU
> hotplug lock is held write, which can have tons of reasons even
> unrelated to an actual CPU hotplug operation.
>
> Even during a hotplug operation adding a timer on a particular CPU is
> valid, whether that's the CPU which is actually plugged or not is
> irrelevant.
>
> So if we add such a function, then it needs to have very precisely
> defined semantics, which have to be independent of the CPU hotplug lock.
>
The hotplug lock is used to avoid the scenario where cpu_online()
reports @cpu as online but @cpu goes offline before add_timer_on()
can actually add the timer to @cpu's timer base.
Are you saying that this cannot happen, or do you mean by "defined
semantics" that a @cpu reported as online by cpu_online() should not
go offline in the middle of this function?
> The only way I can imagine is that the state is part of the per CPU
> timer base, but then I have to ask the question what is actually tried
> to solve here.
>
> As far as I understood that there is an issue in the RDS code, queueing
> a delayed work on a offline CPU, but that should have triggered at least
> the warning in __queue_delayed_work(), right?
>
I guess you are referring to the warning in [1]. That was added only a
few days ago, but the timer of a delayed_work can still end up on an
offlined CPU.
> So the question is whether this try() interface is solving any of this
> and not papering over the CPU hotplug related issues in the RDS code in
> some way.
>
The RDS code that I referred to in my query is an in-house change, and
there may be some scope for updating the cached-CPU information there
via CPU hotplug callbacks. But we also wanted to see if something could
be done on the timer side to address the possibility of a timer ending
up on an offlined CPU. That is why I asked earlier whether you see any
merit in having a try() interface.
As of now I do not have any more cases running into this problem
(putting timer-wheel timers on an offlined CPU). Maybe with the warning
in __queue_delayed_work(), and (if it gets added) in add_timer_on(), we
will see more such cases.
But if you agree, the try() interface could still be added, albeit
without the hotplug lock.
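For what it is worth, here is a sketch of what such a hotplug-lock-free
variant might look like, along the lines of your per-CPU timer base
suggestion. This is only an illustration: it assumes a hypothetical
"online" flag added to struct timer_base (no such field exists today),
cleared by the hotplug path under base->lock:

```c
/*
 * Hypothetical sketch, not a real implementation: assumes
 * timers_dead_cpu() clears a new base->online flag under base->lock,
 * so that holding base->lock pins the online state of @cpu's base.
 */
bool timer_try_add_on_cpu(struct timer_list *timer, int cpu)
{
	struct timer_base *base = per_cpu_ptr(&timer_bases[BASE_LOCAL], cpu);
	unsigned long flags;
	bool ret = false;

	raw_spin_lock_irqsave(&base->lock, flags);
	if (base->online) {
		/* enqueue @timer on @cpu's base here, the way
		 * add_timer_on() does after taking the base lock */
		ret = true;
	}
	raw_spin_unlock_irqrestore(&base->lock, flags);

	return ret;
}
```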
Thanks,
Imran
[1]: https://github.com/torvalds/linux/blob/master/kernel/workqueue.c#L2511
> Thanks,
>
> tglx
>
>
Hello Thomas,
Below, I have tried to explain further the reasoning behind using
cpus_read_trylock() in timer_try_add_on_cpu().
Say CPU X is being offlined and CPU Y is checking its online status
(before issuing add_timer_on() for X). CPU Y is not the bootstrap
processor (BP) here; it is executing something else that does:

	if (cpu_online(X))
		add_timer_on(timer, X);

If, at the time of checking cpu_online(X), the hotplug thread of CPU X,
i.e. cpuhp/X, has not yet done __cpu_disable(), CPU Y will see CPU X as
online and issue add_timer_on() (in the above snippet).
In this case, whether the timer ends up on an offlined CPU or not
depends on who gets the per-CPU timer_base.lock first.

If the bootstrap processor, offlining CPU X, gets this lock first (in
timers_dead_cpu()), it migrates all the timers from CPU X and then
releases timer_base.lock. CPU Y (add_timer_on()) then gets the lock and
adds the timer to CPU X's timer_base, but since CPU X's timers have
already been migrated, this newly added timer is left on an offlined
CPU.
On the other hand, if CPU Y (add_timer_on()) wins the race, it will
already have added the timer to CPU X's timer_base before the BP
(timers_dead_cpu()) gets timer_base.lock and migrates all the timers
(including the one just added) to the bootstrap processor, and hence
the timer will not be left on an offlined CPU.
Could you please let me know if you see any problems or mistakes in
the above reasoning?
From your previous reply I could not tell whether you are against using
cpus_read_trylock() at all (because it may not be needed here and I am
wrongly seeing a need for it), or whether you are only against using
cpus_read_trylock() inside timer_try_add_on_cpu() (i.e. the caller of
timer_try_add_on_cpu() should take this lock).
So I have tried to explain my reasoning further to get your thoughts.
Thanks,
Imran
On 16/1/2025 4:00 am, imran.f.khan@oracle.com wrote:
> Hello Thomas,
> Thanks for taking a look and your feedback.
> On 16/1/2025 3:04 am, Thomas Gleixner wrote:
>> On Thu, Jan 16 2025 at 00:41, Imran Khan wrote:
>>> + * Return:
>>> + * * %true - If timer was started on an online cpu
>>> + * * %false - If the specified cpu was offline or if its online status
>>> + * could not be ensured due to unavailability of hotplug lock.
>>> + */
>>> +bool timer_try_add_on_cpu(struct timer_list *timer, int cpu)
>>> +{
>>> + bool ret = true;
>>> +
>>> + if (unlikely(!cpu_online(cpu)))
>>> + ret = false;
>>> + else if (cpus_read_trylock()) {
>>> + if (likely(cpu_online(cpu)))
>>> + add_timer_on(timer, cpu);
>>> + else
>>> + ret = false;
>>> + cpus_read_unlock();
>>> + } else
>>> + ret = false;
>>> +
>>> + return ret;
>>
>> Aside of the horrible coding style, that cpus_read_trylock() part does
>> not make any sense.
>>
>> It's perfectly valid to queue a timer on a online CPU when the CPU
>> hotplug lock is held write, which can have tons of reasons even
>> unrelated to an actual CPU hotplug operation.
>>
>> Even during a hotplug operation adding a timer on a particular CPU is
>> valid, whether that's the CPU which is actually plugged or not is
>> irrelevant.
>>
>> So if we add such a function, then it needs to have very precisely
>> defined semantics, which have to be independent of the CPU hotplug lock.
>>
>
> The hotplug lock is being used to avoid the scenario where cpu_online tells
> that @cpu is online but @cpu gets offline before add_timer_on could
> actually add the timer to that @cpu's timer base.
> Are you saying that this can't happen or by "defined semantics"
> you mean that @cpu indicated as online by cpu_online should not get
> offline in the middle of this function.
>
>> The only way I can imagine is that the state is part of the per CPU
>> timer base, but then I have to ask the question what is actually tried
>> to solve here.
>>
>> As far as I understood that there is an issue in the RDS code, queueing
>> a delayed work on a offline CPU, but that should have triggered at least
>> the warning in __queue_delayed_work(), right?
>>
>
> I guess you are referring to warning of [1]. This was just added few days
> back but the timer of delayed_work can still end up on offlined cpu.
>
>> So the question is whether this try() interface is solving any of this
>> and not papering over the CPU hotplug related issues in the RDS code in
>> some way.
>>
>
> The RDS code that I referred to in my query, is an in-house change and there
> may be some scope of updating the cached-cpu information there with cpu hotplug
> callbacks. But we also wanted to see if something could be done on timer
> side to address the possibilty of timer ending up on an offlined cpu. That's
> why I asked earlier if you see any merit in having a try() interface.
>
> As of now I don't have any more cases, running into this problem (putting
> timer-wheel timers on offlined cpu). May be with warning in
> __queue_delayed_work and (if gets added) in add_timer_on we may see
> more such cases.
>
> But if you agree, try() interface could still be added albeit without
> hotplug lock.
>
> Thanks,
> Imran
>
> [1]: https://github.com/torvalds/linux/blob/master/kernel/workqueue.c#L2511
>> Thanks,
>>
>> tglx
>>
>>
>