This reverts commit 77baa5bafcbe1b2a15ef9c37232c21279c95481c.
After commit b29a62d87cc0 ("mul_u64_u64_div_u64: make it precise always")
it is no longer necessary to have this workaround.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
---
kernel/sched/cputime.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 0bed0fa1ac..a5e00293ae 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -582,12 +582,6 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
 	}
 
 	stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
-	/*
-	 * Because mul_u64_u64_div_u64() can approximate on some
-	 * achitectures; enforce the constraint that: a*b/(b+c) <= a.
-	 */
-	if (unlikely(stime > rtime))
-		stime = rtime;
 
 update:
 	/*
--
2.46.1
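
The clamp removed above enforced the identity stime * rtime / (stime + utime) <= rtime, which an exact division cannot violate as long as the divisor does not wrap, since stime / (stime + utime) is at most 1. A minimal user-space sketch of that property (illustration only, not part of the patch; mul_u64_u64_div_u64_exact() is a hypothetical stand-in for the kernel helper, using the GCC/Clang 128-bit integer extension):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Hypothetical stand-in for the kernel's mul_u64_u64_div_u64():
	 * an exact a * b / c computed via the compiler's 128-bit type.
	 */
	static uint64_t mul_u64_u64_div_u64_exact(uint64_t a, uint64_t b, uint64_t c)
	{
		return (uint64_t)(((unsigned __int128)a * b) / c);
	}

	int main(void)
	{
		/* Arbitrary sample times; stime + utime does not overflow here. */
		uint64_t stime = 123456789012ULL;
		uint64_t utime = 987654321098ULL;
		uint64_t rtime = 1500000000000ULL;

		uint64_t scaled = mul_u64_u64_div_u64_exact(stime, rtime, stime + utime);

		/*
		 * stime / (stime + utime) <= 1, so an exact
		 * stime * rtime / (stime + utime) can never exceed rtime;
		 * only a division that rounds the result up could.
		 */
		assert(scaled <= rtime);
		printf("scaled stime = %llu (rtime = %llu)\n",
		       (unsigned long long)scaled, (unsigned long long)rtime);
		return 0;
	}
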
Hello,

On Fri, Oct 04, 2024 at 08:19:54PM -0400, Nicolas Pitre wrote:
> This reverts commit 77baa5bafcbe1b2a15ef9c37232c21279c95481c.
> 
> After commit b29a62d87cc0 ("mul_u64_u64_div_u64: make it precise always")
> it is no longer necessary to have this workaround.
> 
> Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> ---
>  kernel/sched/cputime.c | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index 0bed0fa1ac..a5e00293ae 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -582,12 +582,6 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
>  	}
>  
>  	stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
> -	/*
> -	 * Because mul_u64_u64_div_u64() can approximate on some
> -	 * achitectures; enforce the constraint that: a*b/(b+c) <= a.
> -	 */
> -	if (unlikely(stime > rtime))
> -		stime = rtime;

I didn't look in detail, but even with mul_u64_u64_div_u64() being exact
now, stime > rtime can still be hit if stime + utime overflows. Can this
happen? (Can stime + utime become 0?)

The example from the commit log of 77baa5bafcbe ("sched/cputime: Fix
mul_u64_u64_div_u64() precision for cputime") however won't occur any
more. IMHO that is good enough to justify this patch.

Acked-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com>

Thanks for following up your improvement to mul_u64_u64_div_u64() with
this change.

Best regards
Uwe
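
To make the overflow concern above concrete: if stime + utime wraps past 2^64, the divisor becomes small and even an exact division can yield a result larger than rtime, so exactness alone does not restore the old clamp's guarantee in that case. A rough user-space illustration with deliberately extreme, hypothetical values (the helper is the same 128-bit stand-in sketched earlier, not the kernel implementation):

	#include <stdint.h>
	#include <stdio.h>

	/* Exact a * b / c via 128-bit arithmetic, standing in for mul_u64_u64_div_u64(). */
	static uint64_t mul_u64_u64_div_u64_exact(uint64_t a, uint64_t b, uint64_t c)
	{
		return (uint64_t)(((unsigned __int128)a * b) / c);
	}

	int main(void)
	{
		/* Hypothetical extreme values: stime + utime wraps around 2^64. */
		uint64_t stime = 0xf000000000000000ULL;
		uint64_t utime = 0x2000000000000000ULL;
		uint64_t rtime = 0x1000000000000000ULL;

		uint64_t divisor = stime + utime;	/* wraps to 0x1000000000000000 */
		uint64_t scaled = mul_u64_u64_div_u64_exact(stime, rtime, divisor);

		/* The wrapped divisor lets scaled exceed rtime despite exact division. */
		printf("divisor = %#llx, scaled = %#llx, rtime = %#llx, scaled > rtime: %d\n",
		       (unsigned long long)divisor, (unsigned long long)scaled,
		       (unsigned long long)rtime, scaled > rtime);
		return 0;
	}
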