smp_processor_id() behaves as follows:

- With CONFIG_DEBUG_PREEMPT=y, if preemption/irqs are disabled, it does
  not print any warning.
- With CONFIG_DEBUG_PREEMPT=n, it does nothing beyond calling
  __smp_processor_id().

So with either CONFIG_DEBUG_PREEMPT setting, in a preemption-disabled
section it is better to cache the value. Each call saves only a few
cycles, but repeated calls add up.

timer_clear_idle() is called with interrupts disabled, so cache the
value once.
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
kernel/time/timer.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 7e1e3bde6b8b..04d928c21aba 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2319,6 +2319,7 @@ u64 timer_base_try_to_set_idle(unsigned long basej, u64 basem, bool *idle)
*/
void timer_clear_idle(void)
{
+ int this_cpu = smp_processor_id();
/*
* We do this unlocked. The worst outcome is a remote pinned timer
* enqueue sending a pointless IPI, but taking the lock would just
@@ -2327,9 +2328,9 @@ void timer_clear_idle(void)
* path. Required for BASE_LOCAL only.
*/
__this_cpu_write(timer_bases[BASE_LOCAL].is_idle, false);
- if (tick_nohz_full_cpu(smp_processor_id()))
+ if (tick_nohz_full_cpu(this_cpu))
__this_cpu_write(timer_bases[BASE_GLOBAL].is_idle, false);
- trace_timer_base_idle(false, smp_processor_id());
+ trace_timer_base_idle(false, this_cpu);
 
/* Activate without holding the timer_base->lock */
tmigr_cpu_activate();
--
2.47.3
On Tue, Mar 24, 2026 at 01:06:30AM +0530, Shrikanth Hegde wrote:
> [ full patch quoted above trimmed ]
Reviewed-by: Mukesh Kumar Chaurasiya (IBM) <mkchauras@gmail.com>
The following commit has been merged into the timers/core branch of tip:
Commit-ID: 551e49beb1752387aed09eb2a0ea4c82ed1f3a35
Gitweb: https://git.kernel.org/tip/551e49beb1752387aed09eb2a0ea4c82ed1f3a35
Author: Shrikanth Hegde <sshegde@linux.ibm.com>
AuthorDate: Tue, 24 Mar 2026 01:06:30 +05:30
Committer: Thomas Gleixner <tglx@kernel.org>
CommitterDate: Tue, 24 Mar 2026 23:21:30 +01:00
timers: Get this_cpu once while clearing the idle state

smp_processor_id() behaves as follows:

- With CONFIG_DEBUG_PREEMPT=y, if preemption/irqs are disabled, it does
  not print any warning.
- With CONFIG_DEBUG_PREEMPT=n, it does nothing beyond calling
  __smp_processor_id().

So with either CONFIG_DEBUG_PREEMPT setting, in a preemption-disabled
section it is better to cache the value. The savings per call are tiny,
but repeated calls add up.

timer_clear_idle() is called with interrupts disabled, so cache the
value once.
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Reviewed-by: Mukesh Kumar Chaurasiya (IBM) <mkchauras@gmail.com>
Link: https://patch.msgid.link/20260323193630.640311-5-sshegde@linux.ibm.com
---
kernel/time/timer.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 7e1e3bd..04d928c 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -2319,6 +2319,7 @@ u64 timer_base_try_to_set_idle(unsigned long basej, u64 basem, bool *idle)
*/
void timer_clear_idle(void)
{
+ int this_cpu = smp_processor_id();
/*
* We do this unlocked. The worst outcome is a remote pinned timer
* enqueue sending a pointless IPI, but taking the lock would just
@@ -2327,9 +2328,9 @@ void timer_clear_idle(void)
* path. Required for BASE_LOCAL only.
*/
__this_cpu_write(timer_bases[BASE_LOCAL].is_idle, false);
- if (tick_nohz_full_cpu(smp_processor_id()))
+ if (tick_nohz_full_cpu(this_cpu))
__this_cpu_write(timer_bases[BASE_GLOBAL].is_idle, false);
- trace_timer_base_idle(false, smp_processor_id());
+ trace_timer_base_idle(false, this_cpu);
 
/* Activate without holding the timer_base->lock */
tmigr_cpu_activate();