When offlining and onlining CPUs the overall reported idle and iowait
times as reported by /proc/stat jump backward and forward:
> cat /proc/stat
cpu 132 0 176 225249 47 6 6 21 0 0
cpu0 80 0 115 112575 33 3 4 18 0 0
cpu1 52 0 60 112673 13 3 1 2 0 0

> chcpu -d 1
> cat /proc/stat
cpu 133 0 177 226681 47 6 6 21 0 0
cpu0 80 0 116 113387 33 3 4 18 0 0

> chcpu -e 1
> cat /proc/stat
cpu 133 0 178 114431 33 6 6 21 0 0   <---- jump backward
cpu0 80 0 116 114247 33 3 4 18 0 0
cpu1 52 0 61 183 0 3 1 2 0 0         <---- idle + iowait start with 0

> chcpu -d 1
> cat /proc/stat
cpu 133 0 178 228956 47 6 6 21 0 0   <---- jump forward
cpu0 81 0 117 114929 33 3 4 18 0 0
The reason is that get_idle_time() in fs/proc/stat.c uses different
sources for both values depending on whether a CPU is online or offline:
- if a CPU is online the values may be taken from its per cpu
tick_cpu_sched structure
- if a CPU is offline the values are taken from its per cpu cpustat
structure
The problem is that the per cpu tick_cpu_sched structure is set to zero on
CPU offline. See tick_cancel_sched_timer() in kernel/time/tick-sched.c.
Therefore when a CPU is brought offline and online afterwards both its idle
and iowait sleeptime will be zero, causing a jump backward in total system
idle and iowait sleeptime. In a similar way if a CPU is then brought
offline again the total idle and iowait sleeptimes will jump forward.
It looks like this behavior was introduced with commit 4b0c0f294f60
("tick: Cleanup NOHZ per cpu data on cpu down").
This was only noticed now on s390, since we switched to generic idle time
reporting with commit be76ea614460 ("s390/idle: remove arch_cpu_idle_time()
and corresponding code").
Fix this by preserving the values of idle_sleeptime and iowait_sleeptime
members of the per-cpu tick_sched structure on CPU hotplug.
Fixes: 4b0c0f294f60 ("tick: Cleanup NOHZ per cpu data on cpu down")
Reported-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
---
kernel/time/tick-sched.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index a17d26002831..d2501673028d 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -1576,13 +1576,18 @@ void tick_setup_sched_timer(void)
 void tick_cancel_sched_timer(int cpu)
 {
 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
+	ktime_t idle_sleeptime, iowait_sleeptime;
 
 # ifdef CONFIG_HIGH_RES_TIMERS
 	if (ts->sched_timer.base)
 		hrtimer_cancel(&ts->sched_timer);
 # endif
 
+	idle_sleeptime = ts->idle_sleeptime;
+	iowait_sleeptime = ts->iowait_sleeptime;
 	memset(ts, 0, sizeof(*ts));
+	ts->idle_sleeptime = idle_sleeptime;
+	ts->iowait_sleeptime = iowait_sleeptime;
 }
 #endif
--
2.40.1
On Mon, 2024-01-15 at 17:35 +0100, Heiko Carstens wrote:
>
>
> +	idle_sleeptime = ts->idle_sleeptime;
> +	iowait_sleeptime = ts->iowait_sleeptime;
> 	memset(ts, 0, sizeof(*ts));
> +	ts->idle_sleeptime = idle_sleeptime;
> +	ts->iowait_sleeptime = iowait_sleeptime;
> }

Should idle_calls and idle_sleeps be preserved and restored too?

Seems like if we preserve the idle_sleeptime, and wish to compute the
average sleep time per sleep, we will need to know the value of
idle_sleeps that's also preserved across CPU offline/online.

Tim
Le Mon, Jan 22, 2024 at 10:19:30AM -0800, Tim Chen a écrit :
> On Mon, 2024-01-15 at 17:35 +0100, Heiko Carstens wrote:
> >
> >
> > +	idle_sleeptime = ts->idle_sleeptime;
> > +	iowait_sleeptime = ts->iowait_sleeptime;
> > 	memset(ts, 0, sizeof(*ts));
> > +	ts->idle_sleeptime = idle_sleeptime;
> > +	ts->iowait_sleeptime = iowait_sleeptime;
> > }
>
> Should idle_calls and idle_sleeps be preserved and restored too?
>
> Seems like if we preserve the idle_sleeptime, and wish to compute the
> average sleep time per sleep, we will need to know the value of
> idle_sleeps that's also preserved across CPU offline/online.

I guess those can be saved as well. Would you like to send the patch?

Thanks.

> Tim
On Mon, 2024-01-22 at 23:31 +0100, Frederic Weisbecker wrote:
> Le Mon, Jan 22, 2024 at 10:19:30AM -0800, Tim Chen a écrit :
> > On Mon, 2024-01-15 at 17:35 +0100, Heiko Carstens wrote:
> > >
> > >
> > > +	idle_sleeptime = ts->idle_sleeptime;
> > > +	iowait_sleeptime = ts->iowait_sleeptime;
> > > 	memset(ts, 0, sizeof(*ts));
> > > +	ts->idle_sleeptime = idle_sleeptime;
> > > +	ts->iowait_sleeptime = iowait_sleeptime;
> > > }
> >
> > Should idle_calls and idle_sleeps be preserved and restored too?
> >
> > Seems like if we preserve the idle_sleeptime, and wish to compute the
> > average sleep time per sleep, we will need to know the value of
> > idle_sleeps that's also preserved across CPU offline/online.
>
> I guess those can be saved as well. Would you like to send the patch?

Okay, sent the patch in a separate email.

Tim
Le Mon, Jan 15, 2024 at 05:35:55PM +0100, Heiko Carstens a écrit :
> [...]
>
> + idle_sleeptime = ts->idle_sleeptime;
> + iowait_sleeptime = ts->iowait_sleeptime;
> memset(ts, 0, sizeof(*ts));
> + ts->idle_sleeptime = idle_sleeptime;
> + ts->iowait_sleeptime = iowait_sleeptime;
And this is safe because it is in global stop machine. So we are
guaranteed that nobody sees the transitionning state. In the worst
case ts->idle_sleeptime_seq is observed as changed to 0 in read_seqcount_retry()
and the values are simply fetched again.
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
This makes me think that we should always use cpustat[CPUTIME_IDLE] instead of
maintaining this separate ts->idle_sleeptime field. kcpustat even has a seqcount
that would make ts->idle_sleeptime_seq obsolete. Then the tick based idle accounting
could disappear on nohz, along with a few hacks. Instead of that we are
currently maintaining two different idle accounting that are roughly the same.
But anyway this is all a different story, just mumbling to myself for the next
nohz cleanups.
Thanks!
The following commit has been merged into the timers/core branch of tip:
Commit-ID: 71fee48fb772ac4f6cfa63dbebc5629de8b4cc09
Gitweb: https://git.kernel.org/tip/71fee48fb772ac4f6cfa63dbebc5629de8b4cc09
Author: Heiko Carstens <hca@linux.ibm.com>
AuthorDate: Mon, 15 Jan 2024 17:35:55 +01:00
Committer: Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Fri, 19 Jan 2024 16:40:38 +01:00
tick-sched: Fix idle and iowait sleeptime accounting vs CPU hotplug
Fixes: 4b0c0f294f60 ("tick: Cleanup NOHZ per cpu data on cpu down")
Reported-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20240115163555.1004144-1-hca@linux.ibm.com