The timer migration mechanism allows active CPUs to pull timers from
idle ones to improve the overall idle time. This is however undesired
when CPU intensive workloads run on isolated cores, as the algorithm
would move the timers from housekeeping to isolated cores, negatively
affecting the isolation.
Exclude isolated cores from the timer migration algorithm by extending
the concept of unavailable cores, currently used for offline ones, to
isolated ones:
* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;
A core is considered unavailable as isolated if it belongs to:
* the isolcpus (domain) list
* an isolated cpuset
Except if it is:
* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (must be available to handle global timers)
CPUs are added to the hierarchy during late boot, excluding isolated
ones; the hierarchy is also adapted when the cpuset isolation changes.
Due to how the timer migration algorithm works, any CPU that is part of
the hierarchy can have its global timers pulled by remote CPUs and has
to pull remote timers itself; skipping only the latter would break the
logic.
For this reason, prevent isolated CPUs from pulling remote global
timers, but also the other way around: any global timer started on an
isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).
This effect was noticed on a 128-core machine running oslat on the
isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
and the CPU with the lowest count in a timer migration hierarchy (here 1
and 65) appears as always active and continuously pulls global timers
from the housekeeping CPUs. This ends up moving driver work (e.g.
delayed work) to isolated CPUs and causes latency spikes:
before the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 1203 10 3 4 ... 5 (us)
after the change:
# oslat -c 1-31,33-63,65-95,97-127 -D 62s
...
Maximum: 10 4 3 4 3 ... 5 (us)
The same behaviour was observed on a machine with as few as 20 cores /
40 threads, with isolcpus set to 1-9,11-39, using rtla-osnoise-top.
Tested-by: John B. Wyatt IV <jwyatt@redhat.com>
Tested-by: John B. Wyatt IV <sageofredondo@gmail.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
---
include/linux/timer.h | 9 ++
kernel/cgroup/cpuset.c | 3 +
kernel/time/timer_migration.c | 156 ++++++++++++++++++++++++++++++++--
3 files changed, 163 insertions(+), 5 deletions(-)
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 0414d9e6b4fc..62e1cea71125 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu);
#define timers_dead_cpu NULL
#endif
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
+#else
+static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ return 0;
+}
+#endif
+
#endif
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index cf34623fe66f..bfc3b319e1c0 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1350,6 +1350,9 @@ static void update_isolation_cpumasks(bool isolcpus_updated)
ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
WARN_ON_ONCE(ret < 0);
+
+ ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+ WARN_ON_ONCE(ret < 0);
}
/**
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index d3eb9714e692..0e275d526d50 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -10,6 +10,7 @@
#include <linux/spinlock.h>
#include <linux/timerqueue.h>
#include <trace/events/ipi.h>
+#include <linux/sched/isolation.h>
#include "timer_migration.h"
#include "tick-internal.h"
@@ -430,6 +431,9 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
*/
static cpumask_var_t tmigr_available_cpumask;
+/* Enabled during late initcall */
+static DEFINE_STATIC_KEY_FALSE(tmigr_exclude_isolated);
+
#define TMIGR_NONE 0xFF
#define BIT_CNT 8
@@ -438,6 +442,33 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
return !(tmc->tmgroup && tmc->available);
}
+/*
+ * Returns true if @cpu should be excluded from the hierarchy as isolated.
+ * Domain isolated CPUs don't participate in timer migration, nohz_full CPUs
+ * are still part of the hierarchy but become idle (from a tick and timer
+ * migration perspective) when they stop their tick. This lets the timekeeping
+ * CPU handle their global timers. Marking also isolated CPUs as idle would be
+ * too costly, hence they are completely excluded from the hierarchy.
+ * This check is necessary, for instance, to prevent offline isolated CPUs from
+ * being incorrectly marked as available once getting back online.
+ *
+ * This function returns false during early boot and the isolation logic is
+ * enabled only after isolated CPUs are marked as unavailable at late boot.
+ * The tick CPU can be isolated at boot, however we cannot mark it as
+ * unavailable to avoid having no global migrator for the nohz_full CPUs. This
+ * should be ensured by the callers of this function: implicitly from hotplug
+ * callbacks and explicitly in tmigr_init_isolation and
+ * tmigr_isolated_exclude_cpumask.
+ */
+static inline bool tmigr_is_isolated(int cpu)
+{
+ if (static_branch_unlikely(&tmigr_exclude_isolated))
+ return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
+ cpuset_cpu_is_isolated(cpu)) &&
+ housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
+ return false;
+}
+
/*
* Returns true, when @childmask corresponds to the group migrator or when the
* group is not active - so no migrator is set.
@@ -1439,8 +1470,9 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
int migrator;
u64 firstexp;
- cpumask_clear_cpu(cpu, tmigr_available_cpumask);
scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ if (!tmc->available)
+ return 0;
tmc->available = false;
WRITE_ONCE(tmc->wakeup, KTIME_MAX);
@@ -1453,11 +1485,11 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
}
if (firstexp != KTIME_MAX) {
- migrator = cpumask_any(tmigr_available_cpumask);
+ migrator = cpumask_any_but(tmigr_available_cpumask, cpu);
work_on_cpu(migrator, tmigr_trigger_active, NULL);
}
- return 0;
+ return 1;
}
static int tmigr_set_cpu_available(unsigned int cpu)
@@ -1468,17 +1500,130 @@ static int tmigr_set_cpu_available(unsigned int cpu)
if (WARN_ON_ONCE(!tmc->tmgroup))
return -EINVAL;
- cpumask_set_cpu(cpu, tmigr_available_cpumask);
+ if (tmigr_is_isolated(cpu))
+ return 0;
+
scoped_guard(raw_spinlock_irq, &tmc->lock) {
+ if (tmc->available)
+ return 0;
trace_tmigr_cpu_available(tmc);
tmc->idle = timer_base_is_idle();
if (!tmc->idle)
__tmigr_cpu_activate(tmc);
tmc->available = true;
}
+ return 1;
+}
+
+static int tmigr_online_cpu(unsigned int cpu)
+{
+ if (tmigr_set_cpu_available(cpu) > 0)
+ cpumask_set_cpu(cpu, tmigr_available_cpumask);
+ return 0;
+}
+
+static int tmigr_offline_cpu(unsigned int cpu)
+{
+ if (tmigr_clear_cpu_available(cpu) > 0)
+ cpumask_clear_cpu(cpu, tmigr_available_cpumask);
+ return 0;
+}
+
+static void tmigr_cpu_isolate(struct work_struct *ignored)
+{
+ tmigr_clear_cpu_available(smp_processor_id());
+}
+
+static void tmigr_cpu_unisolate(struct work_struct *ignored)
+{
+ tmigr_set_cpu_available(smp_processor_id());
+}
+
+static int __tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ struct work_struct __percpu *works __free(free_percpu) =
+ alloc_percpu(struct work_struct);
+ cpumask_var_t cpumask_unisol __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+ cpumask_var_t cpumask_isol __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+ int cpu;
+
+ if (!alloc_cpumask_var(&cpumask_isol, GFP_KERNEL))
+ return -ENOMEM;
+ if (!alloc_cpumask_var(&cpumask_unisol, GFP_KERNEL))
+ return -ENOMEM;
+ if (!works)
+ return -ENOMEM;
+
+ cpumask_andnot(cpumask_unisol, cpu_online_mask, exclude_cpumask);
+ cpumask_andnot(cpumask_unisol, cpumask_unisol, tmigr_available_cpumask);
+ /* Set up the mask earlier to avoid races with the migrator CPU */
+ cpumask_or(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_unisol);
+ for_each_cpu(cpu, cpumask_unisol) {
+ struct work_struct *work = per_cpu_ptr(works, cpu);
+
+ INIT_WORK(work, tmigr_cpu_unisolate);
+ schedule_work_on(cpu, work);
+ }
+
+ cpumask_and(cpumask_isol, exclude_cpumask, tmigr_available_cpumask);
+ cpumask_and(cpumask_isol, cpumask_isol, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+ /*
+ * Handle this here and not in the cpuset code because exclude_cpumask
+ * might include also the tick CPU if included in isolcpus.
+ */
+ for_each_cpu(cpu, cpumask_isol) {
+ if (!tick_nohz_cpu_hotpluggable(cpu)) {
+ cpumask_clear_cpu(cpu, cpumask_isol);
+ break;
+ }
+ }
+ /* Set up the mask earlier to avoid races with the migrator CPU */
+ cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol);
+ for_each_cpu(cpu, cpumask_isol) {
+ struct work_struct *work = per_cpu_ptr(works, cpu);
+
+ INIT_WORK(work, tmigr_cpu_isolate);
+ schedule_work_on(cpu, work);
+ }
+
+ for_each_cpu_or(cpu, cpumask_isol, cpumask_unisol)
+ flush_work(per_cpu_ptr(works, cpu));
+
return 0;
}
+/**
+ * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
+ * @exclude_cpumask: the cpumask to be excluded from timer migration hierarchy
+ *
+ * This function can be called from cpuset code to provide the new set of
+ * isolated CPUs that should be excluded from the hierarchy.
+ * Online CPUs not present in exclude_cpumask but already excluded are brought
+ * back to the hierarchy.
+ * Functions to isolate/unisolate need to be called locally and can sleep.
+ */
+int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+ lockdep_assert_cpus_held();
+ return __tmigr_isolated_exclude_cpumask(exclude_cpumask);
+}
+
+static int __init tmigr_init_isolation(void)
+{
+ cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+
+ static_branch_enable(&tmigr_exclude_isolated);
+
+ if (!housekeeping_enabled(HK_TYPE_DOMAIN))
+ return 0;
+ if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+ return -ENOMEM;
+
+ cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
+
+ return __tmigr_isolated_exclude_cpumask(cpumask);
+}
+
static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
int node)
{
@@ -1867,7 +2012,7 @@ static int __init tmigr_init(void)
goto err;
ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
- tmigr_set_cpu_available, tmigr_clear_cpu_available);
+ tmigr_online_cpu, tmigr_offline_cpu);
if (ret)
goto err;
@@ -1878,3 +2023,4 @@ static int __init tmigr_init(void)
return ret;
}
early_initcall(tmigr_init);
+late_initcall(tmigr_init_isolation);
--
2.51.1
On 11/13/25 3:33 AM, Gabriele Monaco wrote:
> The timer migration mechanism allows active CPUs to pull timers from
> idle ones to improve the overall idle time. This is however undesired
> when CPU intensive workloads run on isolated cores, as the algorithm
> would move the timers from housekeeping to isolated cores, negatively
> affecting the isolation.
>
> Exclude isolated cores from the timer migration algorithm, extend the
> concept of unavailable cores, currently used for offline ones, to
> isolated ones:
> * A core is unavailable if isolated or offline;
> * A core is available if non isolated and online;
>
> A core is considered unavailable as isolated if it belongs to:
> * the isolcpus (domain) list
> * an isolated cpuset
> Except if it is:
> * in the nohz_full list (already idle for the hierarchy)
> * the nohz timekeeper core (must be available to handle global timers)
>
> CPUs are added to the hierarchy during late boot, excluding isolated
> ones, the hierarchy is also adapted when the cpuset isolation changes.
>
> Due to how the timer migration algorithm works, any CPU part of the
> hierarchy can have their global timers pulled by remote CPUs and have to
> pull remote timers, only skipping pulling remote timers would break the
> logic.
> For this reason, prevent isolated CPUs from pulling remote global
> timers, but also the other way around: any global timer started on an
> isolated CPU will run there. This does not break the concept of
> isolation (global timers don't come from outside the CPU) and, if
> considered inappropriate, can usually be mitigated with other isolation
> techniques (e.g. IRQ pinning).
>
> This effect was noticed on a 128 cores machine running oslat on the
> isolated cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs,
> and the CPU with lowest count in a timer migration hierarchy (here 1
> and 65) appears as always active and continuously pulls global timers,
> from the housekeeping CPUs. This ends up moving driver work (e.g.
> delayed work) to isolated CPUs and causes latency spikes:
>
> before the change:
>
> # oslat -c 1-31,33-63,65-95,97-127 -D 62s
> ...
> Maximum: 1203 10 3 4 ... 5 (us)
>
> after the change:
>
> # oslat -c 1-31,33-63,65-95,97-127 -D 62s
> ...
> Maximum: 10 4 3 4 3 ... 5 (us)
>
> The same behaviour was observed on a machine with as few as 20 cores /
> 40 threads with isocpus set to: 1-9,11-39 with rtla-osnoise-top.
>
> Tested-by: John B. Wyatt IV <jwyatt@redhat.com>
> Tested-by: John B. Wyatt IV <sageofredondo@gmail.com>
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Gabriele Monaco <gmonaco@redhat.com>
> ---
> include/linux/timer.h | 9 ++
> kernel/cgroup/cpuset.c | 3 +
> kernel/time/timer_migration.c | 156 ++++++++++++++++++++++++++++++++--
> 3 files changed, 163 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/timer.h b/include/linux/timer.h
> index 0414d9e6b4fc..62e1cea71125 100644
> --- a/include/linux/timer.h
> +++ b/include/linux/timer.h
> @@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu);
> #define timers_dead_cpu NULL
> #endif
>
> +#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
> +extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
> +#else
> +static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + return 0;
> +}
> +#endif
> +
> #endif
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index cf34623fe66f..bfc3b319e1c0 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1350,6 +1350,9 @@ static void update_isolation_cpumasks(bool isolcpus_updated)
>
> ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
> WARN_ON_ONCE(ret < 0);
> +
> + ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
> + WARN_ON_ONCE(ret < 0);
> }
>
> /**
> diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
> index d3eb9714e692..0e275d526d50 100644
> --- a/kernel/time/timer_migration.c
> +++ b/kernel/time/timer_migration.c
> @@ -10,6 +10,7 @@
> #include <linux/spinlock.h>
> #include <linux/timerqueue.h>
> #include <trace/events/ipi.h>
> +#include <linux/sched/isolation.h>
>
> #include "timer_migration.h"
> #include "tick-internal.h"
> @@ -430,6 +431,9 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
> */
> static cpumask_var_t tmigr_available_cpumask;
>
> +/* Enabled during late initcall */
> +static DEFINE_STATIC_KEY_FALSE(tmigr_exclude_isolated);
> +
> #define TMIGR_NONE 0xFF
> #define BIT_CNT 8
>
> @@ -438,6 +442,33 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
> return !(tmc->tmgroup && tmc->available);
> }
>
> +/*
> + * Returns true if @cpu should be excluded from the hierarchy as isolated.
> + * Domain isolated CPUs don't participate in timer migration, nohz_full CPUs
> + * are still part of the hierarchy but become idle (from a tick and timer
> + * migration perspective) when they stop their tick. This lets the timekeeping
> + * CPU handle their global timers. Marking also isolated CPUs as idle would be
> + * too costly, hence they are completely excluded from the hierarchy.
> + * This check is necessary, for instance, to prevent offline isolated CPUs from
> + * being incorrectly marked as available once getting back online.
> + *
> + * This function returns false during early boot and the isolation logic is
> + * enabled only after isolated CPUs are marked as unavailable at late boot.
> + * The tick CPU can be isolated at boot, however we cannot mark it as
> + * unavailable to avoid having no global migrator for the nohz_full CPUs. This
> + * should be ensured by the callers of this function: implicitly from hotplug
> + * callbacs and explicitly in tmigr_init_isolation and
> + * tmigr_isolated_exclude_cpumask.
> + */
> +static inline bool tmigr_is_isolated(int cpu)
> +{
> + if (static_branch_unlikely(&tmigr_exclude_isolated))
> + return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
> + cpuset_cpu_is_isolated(cpu)) &&
> + housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
> + return false;
> +}
> +
> /*
> * Returns true, when @childmask corresponds to the group migrator or when the
> * group is not active - so no migrator is set.
> @@ -1439,8 +1470,9 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
> int migrator;
> u64 firstexp;
>
> - cpumask_clear_cpu(cpu, tmigr_available_cpumask);
> scoped_guard(raw_spinlock_irq, &tmc->lock) {
> + if (!tmc->available)
> + return 0;
> tmc->available = false;
> WRITE_ONCE(tmc->wakeup, KTIME_MAX);
>
> @@ -1453,11 +1485,11 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
> }
>
> if (firstexp != KTIME_MAX) {
> - migrator = cpumask_any(tmigr_available_cpumask);
> + migrator = cpumask_any_but(tmigr_available_cpumask, cpu);
> work_on_cpu(migrator, tmigr_trigger_active, NULL);
> }
>
> - return 0;
> + return 1;
> }
>
> static int tmigr_set_cpu_available(unsigned int cpu)
> @@ -1468,17 +1500,130 @@ static int tmigr_set_cpu_available(unsigned int cpu)
> if (WARN_ON_ONCE(!tmc->tmgroup))
> return -EINVAL;
>
> - cpumask_set_cpu(cpu, tmigr_available_cpumask);
> + if (tmigr_is_isolated(cpu))
> + return 0;
> +
> scoped_guard(raw_spinlock_irq, &tmc->lock) {
> + if (tmc->available)
> + return 0;
> trace_tmigr_cpu_available(tmc);
> tmc->idle = timer_base_is_idle();
> if (!tmc->idle)
> __tmigr_cpu_activate(tmc);
> tmc->available = true;
> }
> + return 1;
> +}
> +
> +static int tmigr_online_cpu(unsigned int cpu)
> +{
> + if (tmigr_set_cpu_available(cpu) > 0)
> + cpumask_set_cpu(cpu, tmigr_available_cpumask);
> + return 0;
> +}
> +
> +static int tmigr_offline_cpu(unsigned int cpu)
> +{
> + if (tmigr_clear_cpu_available(cpu) > 0)
> + cpumask_clear_cpu(cpu, tmigr_available_cpumask);
> + return 0;
> +}
> +
> +static void tmigr_cpu_isolate(struct work_struct *ignored)
> +{
> + tmigr_clear_cpu_available(smp_processor_id());
> +}
> +
> +static void tmigr_cpu_unisolate(struct work_struct *ignored)
> +{
> + tmigr_set_cpu_available(smp_processor_id());
> +}
> +
> +static int __tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + struct work_struct __percpu *works __free(free_percpu) =
> + alloc_percpu(struct work_struct);
> + cpumask_var_t cpumask_unisol __free(free_cpumask_var) = CPUMASK_VAR_NULL;
> + cpumask_var_t cpumask_isol __free(free_cpumask_var) = CPUMASK_VAR_NULL;
> + int cpu;
There are currently only 2 callers for this function - from late_init
call and from cpuset. Concurrent call is not possible. Maybe we can just
pre-allocate these cpumask_var_t and percpu work structures once and
reuse it instead of doing an allocation and free each time it is called.
The pre-allocation can be done in tmigr_init_isolation().
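For illustration, that pre-allocation could look roughly like the sketch below; the names (tmigr_isol_works and the two static masks) and the helper are hypothetical, and it relies on the two callers never running concurrently, as noted above.

/* Hypothetical once-only allocation, reused for every isolation update */
static cpumask_var_t tmigr_isol_cpumask;
static cpumask_var_t tmigr_unisol_cpumask;
static struct work_struct __percpu *tmigr_isol_works;

static int tmigr_isolation_prealloc(void)
{
	if (!zalloc_cpumask_var(&tmigr_isol_cpumask, GFP_KERNEL))
		return -ENOMEM;
	if (!zalloc_cpumask_var(&tmigr_unisol_cpumask, GFP_KERNEL))
		goto free_isol;
	tmigr_isol_works = alloc_percpu(struct work_struct);
	if (!tmigr_isol_works)
		goto free_unisol;
	return 0;

free_unisol:
	free_cpumask_var(tmigr_unisol_cpumask);
free_isol:
	free_cpumask_var(tmigr_isol_cpumask);
	return -ENOMEM;
}

__tmigr_isolated_exclude_cpumask() would then drop its allocations and -ENOMEM paths and simply reuse these buffers.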
> +
> + if (!alloc_cpumask_var(&cpumask_isol, GFP_KERNEL))
> + return -ENOMEM;
> + if (!alloc_cpumask_var(&cpumask_unisol, GFP_KERNEL))
> + return -ENOMEM;
> + if (!works)
> + return -ENOMEM;
> +
> + cpumask_andnot(cpumask_unisol, cpu_online_mask, exclude_cpumask);
> + cpumask_andnot(cpumask_unisol, cpumask_unisol, tmigr_available_cpumask);
> + /* Set up the mask earlier to avoid races with the migrator CPU */
> + cpumask_or(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_unisol);
> + for_each_cpu(cpu, cpumask_unisol) {
> + struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> + INIT_WORK(work, tmigr_cpu_unisolate);
> + schedule_work_on(cpu, work);
> + }
> +
> + cpumask_and(cpumask_isol, exclude_cpumask, tmigr_available_cpumask);
> + cpumask_and(cpumask_isol, cpumask_isol, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
> + /*
> + * Handle this here and not in the cpuset code because exclude_cpumask
> + * might include also the tick CPU if included in isolcpus.
> + */
> + for_each_cpu(cpu, cpumask_isol) {
> + if (!tick_nohz_cpu_hotpluggable(cpu)) {
> + cpumask_clear_cpu(cpu, cpumask_isol);
> + break;
> + }
> + }
> + /* Set up the mask earlier to avoid races with the migrator CPU */
> + cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol);
> + for_each_cpu(cpu, cpumask_isol) {
> + struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> + INIT_WORK(work, tmigr_cpu_isolate);
> + schedule_work_on(cpu, work);
> + }
> +
> + for_each_cpu_or(cpu, cpumask_isol, cpumask_unisol)
> + flush_work(per_cpu_ptr(works, cpu));
> +
> return 0;
> }
>
> +/**
> + * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
> + * @exclude_cpumask: the cpumask to be excluded from timer migration hierarchy
> + *
> + * This function can be called from cpuset code to provide the new set of
> + * isolated CPUs that should be excluded from the hierarchy.
> + * Online CPUs not present in exclude_cpumask but already excluded are brought
> + * back to the hierarchy.
> + * Functions to isolate/unisolate need to be called locally and can sleep.
> + */
> +int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + lockdep_assert_cpus_held();
> + return __tmigr_isolated_exclude_cpumask(exclude_cpumask);
> +}
> +
> +static int __init tmigr_init_isolation(void)
> +{
> + cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
> +
> + static_branch_enable(&tmigr_exclude_isolated);
> +
> + if (!housekeeping_enabled(HK_TYPE_DOMAIN))
> + return 0;
> + if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
> + return -ENOMEM;
> +
> + cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
> +
> + return __tmigr_isolated_exclude_cpumask(cpumask);
> +}
Should we put all these functions under "#if defined(CONFIG_SMP) &&
defined(CONFIG_NO_HZ_COMMON)" like in the timer.h header file?
Cheers,
Longman
> +
> static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
> int node)
> {
> @@ -1867,7 +2012,7 @@ static int __init tmigr_init(void)
> goto err;
>
> ret = cpuhp_setup_state(CPUHP_AP_TMIGR_ONLINE, "tmigr:online",
> - tmigr_set_cpu_available, tmigr_clear_cpu_available);
> + tmigr_online_cpu, tmigr_offline_cpu);
> if (ret)
> goto err;
>
> @@ -1878,3 +2023,4 @@ static int __init tmigr_init(void)
> return ret;
> }
> early_initcall(tmigr_init);
> +late_initcall(tmigr_init_isolation);
On Wed, 2025-11-19 at 15:43 -0500, Waiman Long wrote:
> On 11/13/25 3:33 AM, Gabriele Monaco wrote:
> >
> > +static int __tmigr_isolated_exclude_cpumask(struct cpumask
> > *exclude_cpumask)
> > +{
> > + struct work_struct __percpu *works __free(free_percpu) =
> > + alloc_percpu(struct work_struct);
> > + cpumask_var_t cpumask_unisol __free(free_cpumask_var) =
> > CPUMASK_VAR_NULL;
> > + cpumask_var_t cpumask_isol __free(free_cpumask_var) =
> > CPUMASK_VAR_NULL;
> > + int cpu;
>
> There are currently only 2 callers for this function - from late_init
> call and from cpuset. Concurrent call is not possible. Maybe we can just
> pre-allocate these cpumask_var_t and percpu work structures once and
> reuse it instead of doing an allocation and free each time it is called.
> The pre-allocation can be done in tmigr_init_isolation().
>
I have no strong opinion on this, but after changes suggested by Thomas it gets
superfluous to allocate 2 cpumasks (after flushing what is now cpumask_unisol
it's no longer needed and we can re-use it).
Considering this only runs at boot and every time a cpuset changes isolation, is
it worth the extra steps to pre-allocate?
> >
> > +/**
> > + * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
> > + * @exclude_cpumask: the cpumask to be excluded from timer migration
> > hierarchy
> > + *
> > + * This function can be called from cpuset code to provide the new set of
> > + * isolated CPUs that should be excluded from the hierarchy.
> > + * Online CPUs not present in exclude_cpumask but already excluded are
> > brought
> > + * back to the hierarchy.
> > + * Functions to isolate/unisolate need to be called locally and can sleep.
> > + */
> > +int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> > +{
> > + lockdep_assert_cpus_held();
> > + return __tmigr_isolated_exclude_cpumask(exclude_cpumask);
> > +}
> >
>
> Should we put all these functions under "#if defined(CONFIG_SMP) &&
> defined(CONFIG_NO_HZ_COMMON)" like in the timer.h header file?
I think that's implied in the build condition of timer_migration.o
https://elixir.bootlin.com/linux/v6.17.8/source/kernel/time/Makefile#L27
At least I got these ifdefs from timer_migration.h and none of those functions
are ifdeffed in timer_migration.c
Thanks,
Gabriele
On 11/20/25 5:48 AM, Gabriele Monaco wrote:
> On Wed, 2025-11-19 at 15:43 -0500, Waiman Long wrote:
>> On 11/13/25 3:33 AM, Gabriele Monaco wrote:
>>> +static int __tmigr_isolated_exclude_cpumask(struct cpumask
>>> *exclude_cpumask)
>>> +{
>>> + struct work_struct __percpu *works __free(free_percpu) =
>>> + alloc_percpu(struct work_struct);
>>> + cpumask_var_t cpumask_unisol __free(free_cpumask_var) =
>>> CPUMASK_VAR_NULL;
>>> + cpumask_var_t cpumask_isol __free(free_cpumask_var) =
>>> CPUMASK_VAR_NULL;
>>> + int cpu;
>> There are currently only 2 callers for this function - from late_init
>> call and from cpuset. Concurrent call is not possible. Maybe we can just
>> pre-allocate these cpumask_var_t and percpu work structures once and
>> reuse it instead of doing an allocation and free each time it is called.
>> The pre-allocation can be done in tmigr_init_isolation().
>>
> I have no strong opinion on this, but after changes suggested by Thomas it gets
> superfluous to allocate 2 cpumasks (after flushing what is now cpumask_unisol
> it's no longer needed and we can re-use it).
>
> Considering this only runs at boot and every time a cpuset changes isolation, is
> it worth the extra steps to pre-allocate?
It is just a suggestion. We can see how it goes and decide if this
change is needed or not.
>
>>>
>>> +/**
>>> + * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
>>> + * @exclude_cpumask: the cpumask to be excluded from timer migration
>>> hierarchy
>>> + *
>>> + * This function can be called from cpuset code to provide the new set of
>>> + * isolated CPUs that should be excluded from the hierarchy.
>>> + * Online CPUs not present in exclude_cpumask but already excluded are
>>> brought
>>> + * back to the hierarchy.
>>> + * Functions to isolate/unisolate need to be called locally and can sleep.
>>> + */
>>> +int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
>>> +{
>>> + lockdep_assert_cpus_held();
>>> + return __tmigr_isolated_exclude_cpumask(exclude_cpumask);
>>> +}
>>>
>> Should we put all these functions under "#if defined(CONFIG_SMP) &&
>> defined(CONFIG_NO_HZ_COMMON)" like in the timer.h header file?
> I think that's implied in the build condition of timer_migration.o
> https://elixir.bootlin.com/linux/v6.17.8/source/kernel/time/Makefile#L27
>
> At least I got these ifdefs from timer_migration.h and none of those functions
> are ifdeffed in timer_migration.c
You are right. I haven't checked the condition for building timer_migration.o.
Cheers,
Longman
On Thu, Nov 13 2025 at 09:33, Gabriele Monaco wrote:
> +/* Enabled during late initcall */
> +static DEFINE_STATIC_KEY_FALSE(tmigr_exclude_isolated);
> +
> #define TMIGR_NONE 0xFF
> #define BIT_CNT 8
>
> @@ -438,6 +442,33 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
> return !(tmc->tmgroup && tmc->available);
> }
>
> +/*
> + * Returns true if @cpu should be excluded from the hierarchy as isolated.
> + * Domain isolated CPUs don't participate in timer migration, nohz_full CPUs
> + * are still part of the hierarchy but become idle (from a tick and timer
> + * migration perspective) when they stop their tick. This lets the timekeeping
> + * CPU handle their global timers. Marking also isolated CPUs as idle would be
> + * too costly, hence they are completely excluded from the hierarchy.
> + * This check is necessary, for instance, to prevent offline isolated CPUs from
> + * being incorrectly marked as available once getting back online.
> + *
> + * This function returns false during early boot and the isolation logic is
> + * enabled only after isolated CPUs are marked as unavailable at late boot.
> + * The tick CPU can be isolated at boot, however we cannot mark it as
> + * unavailable to avoid having no global migrator for the nohz_full CPUs. This
> + * should be ensured by the callers of this function: implicitly from hotplug
> + * callbacs and explicitly in tmigr_init_isolation and
callbacks tmigr_init_isolation()
> + * tmigr_isolated_exclude_cpumask.
tmigr_isolated_exclude_cpumask()
It's documented how functions should be
denoted in comments and change logs, no?
> + */
> +static inline bool tmigr_is_isolated(int cpu)
> +{
> + if (static_branch_unlikely(&tmigr_exclude_isolated))
> + return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
> + cpuset_cpu_is_isolated(cpu)) &&
> + housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
Lacks brackets on the if ()
https://www.kernel.org/doc/html/latest/process/maintainer-tip.html#bracket-rules
Also you can make this way more readable by inverting the condition:
if (!static_branch_unlikely(&tmigr_exclude_isolated))
return false;
return .....;
No?
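Spelled out, the inverted form would look roughly like this (a sketch reusing the helpers from the posted patch; semantics unchanged):

static inline bool tmigr_is_isolated(int cpu)
{
	if (!static_branch_unlikely(&tmigr_exclude_isolated))
		return false;

	/*
	 * nohz_full CPUs stay in the hierarchy and simply go idle from the
	 * timer migration point of view, so they are never excluded here.
	 */
	if (!housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
		return false;

	/* Domain isolated via isolcpus= or via an isolated cpuset partition */
	return !housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
	       cpuset_cpu_is_isolated(cpu);
}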
> + return false;
> +}
> +
> /*
> * Returns true, when @childmask corresponds to the group migrator or when the
> * group is not active - so no migrator is set.
> @@ -1439,8 +1470,9 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
> int migrator;
> u64 firstexp;
>
> - cpumask_clear_cpu(cpu, tmigr_available_cpumask);
By removing this the function name does not make any sense any
more. Splitting the cpumask_clear_set() out, renaming the function
> scoped_guard(raw_spinlock_irq, &tmc->lock) {
> + if (!tmc->available)
> + return 0;
and adding this
> tmc->available = false;
> WRITE_ONCE(tmc->wakeup, KTIME_MAX);
>
> @@ -1453,11 +1485,11 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
> }
>
> if (firstexp != KTIME_MAX) {
> - migrator = cpumask_any(tmigr_available_cpumask);
> + migrator = cpumask_any_but(tmigr_available_cpumask, cpu);
and this should be done in a preparatory patch along with a
reasonable explanation in the change log.
> work_on_cpu(migrator, tmigr_trigger_active, NULL);
> }
>
> - return 0;
> + return 1;
But thinking more about it. What's the actual point of moving this 'clear'
out instead of just moving it further down?
It does not matter at all whether the isol/unisol muck clears an already
cleared bit or not. But it would keep the function name comprehensible
and avoid all this online/offline wrapper nonsense.
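Concretely, "moving it further down" could keep the function self-contained along these lines; the tmc lookup and the body under the lock are assumed from the existing code, so this is only a sketch:

static int tmigr_clear_cpu_available(unsigned int cpu)
{
	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
	u64 firstexp;
	int migrator;

	scoped_guard(raw_spinlock_irq, &tmc->lock) {
		if (!tmc->available)
			return 0;
		tmc->available = false;
		WRITE_ONCE(tmc->wakeup, KTIME_MAX);
		/* ... deactivate the CPU in the hierarchy, compute firstexp ... */
	}

	/* Clearing an already cleared bit (isolated CPU going offline) is harmless */
	cpumask_clear_cpu(cpu, tmigr_available_cpumask);

	if (firstexp != KTIME_MAX) {
		migrator = cpumask_any(tmigr_available_cpumask);
		work_on_cpu(migrator, tmigr_trigger_active, NULL);
	}

	return 0;
}

With the clear done after the early return, cpumask_any() already excludes the outgoing CPU and the hotplug callbacks can keep using these functions directly.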
> }
>
> static int tmigr_set_cpu_available(unsigned int cpu)
> @@ -1468,17 +1500,130 @@ static int tmigr_set_cpu_available(unsigned int cpu)
> if (WARN_ON_ONCE(!tmc->tmgroup))
> return -EINVAL;
>
> - cpumask_set_cpu(cpu, tmigr_available_cpumask);
Ditto.
> +static int __tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + struct work_struct __percpu *works __free(free_percpu) =
> + alloc_percpu(struct work_struct);
> + cpumask_var_t cpumask_unisol __free(free_cpumask_var) = CPUMASK_VAR_NULL;
> + cpumask_var_t cpumask_isol __free(free_cpumask_var) = CPUMASK_VAR_NULL;
> + int cpu;
> +
> + if (!alloc_cpumask_var(&cpumask_isol, GFP_KERNEL))
> + return -ENOMEM;
> + if (!alloc_cpumask_var(&cpumask_unisol, GFP_KERNEL))
> + return -ENOMEM;
> + if (!works)
> + return -ENOMEM;
Checking the first allocation after trying to allocate other stuff makes
a lot of sense - NOT.
> + cpumask_andnot(cpumask_unisol, cpu_online_mask, exclude_cpumask);
> + cpumask_andnot(cpumask_unisol, cpumask_unisol, tmigr_available_cpumask);
> + /* Set up the mask earlier to avoid races with the migrator CPU */
> + cpumask_or(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_unisol);
Your new line key is broken. This comment is barely noticeable. What's
worse is that it completely fails to explain what the actual race is.
> + for_each_cpu(cpu, cpumask_unisol) {
> + struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> + INIT_WORK(work, tmigr_cpu_unisolate);
> + schedule_work_on(cpu, work);
> + }
> +
> + cpumask_and(cpumask_isol, exclude_cpumask, tmigr_available_cpumask);
> + cpumask_and(cpumask_isol, cpumask_isol, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
> + /*
> + * Handle this here and not in the cpuset code because exclude_cpumask
> + * might include also the tick CPU if included in isolcpus.
> + */
> + for_each_cpu(cpu, cpumask_isol) {
> + if (!tick_nohz_cpu_hotpluggable(cpu)) {
> + cpumask_clear_cpu(cpu, cpumask_isol);
> + break;
> + }
> + }
> + /* Set up the mask earlier to avoid races with the migrator CPU */
> + cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol);
> + for_each_cpu(cpu, cpumask_isol) {
> + struct work_struct *work = per_cpu_ptr(works, cpu);
This lacks a comment explaining that cpumask_unisol and _isol are not
overlapping. I had to stare at this five times to convince myself that
it's correct.
> +
> + INIT_WORK(work, tmigr_cpu_isolate);
> + schedule_work_on(cpu, work);
> + }
> +
> + for_each_cpu_or(cpu, cpumask_isol, cpumask_unisol)
> + flush_work(per_cpu_ptr(works, cpu));
> +
> return 0;
> }
>
> +/**
> + * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
> + * @exclude_cpumask: the cpumask to be excluded from timer migration hierarchy
> + *
> + * This function can be called from cpuset code to provide the new set of
> + * isolated CPUs that should be excluded from the hierarchy.
> + * Online CPUs not present in exclude_cpumask but already excluded are brought
> + * back to the hierarchy.
> + * Functions to isolate/unisolate need to be called locally and can sleep.
> + */
> +int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
> +{
> + lockdep_assert_cpus_held();
> + return __tmigr_isolated_exclude_cpumask(exclude_cpumask);
This wrapper is required because...
> +}
> +
> +static int __init tmigr_init_isolation(void)
> +{
> + cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
> +
> + static_branch_enable(&tmigr_exclude_isolated);
> +
> + if (!housekeeping_enabled(HK_TYPE_DOMAIN))
> + return 0;
> + if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
> + return -ENOMEM;
> +
> + cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
... it would be too sensible to guard this with:
guard(cpus_read_lock)();
for consistency sake _AND_ what's more important to protect it against
the RCU torture test code which plays with CPU hotplug starting in
device_initcall(), which runs before
> +late_initcall(tmigr_init_isolation);
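For illustration, the guard would slot into the posted function like so (sketch only):

static int __init tmigr_init_isolation(void)
{
	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;

	/* Keep CPU hotplug (e.g. rcutorture's onoff testing) away while this runs */
	guard(cpus_read_lock)();

	static_branch_enable(&tmigr_exclude_isolated);

	if (!housekeeping_enabled(HK_TYPE_DOMAIN))
		return 0;
	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));

	return __tmigr_isolated_exclude_cpumask(cpumask);
}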
Thanks,
tglx
On Wed, Nov 19, 2025 at 05:50:15PM +0100, Thomas Gleixner wrote:
> On Thu, Nov 13 2025 at 09:33, Gabriele Monaco wrote:
> > - cpumask_clear_cpu(cpu, tmigr_available_cpumask);
>
> By removing this the function name does not make any sense any
> more. Splitting the cpumask_clear_set() out, renaming the function
>
> > scoped_guard(raw_spinlock_irq, &tmc->lock) {
> > + if (!tmc->available)
> > + return 0;
>
> and adding this
>
> > tmc->available = false;
> > WRITE_ONCE(tmc->wakeup, KTIME_MAX);
> >
> > @@ -1453,11 +1485,11 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
> > }
> >
> > if (firstexp != KTIME_MAX) {
> > - migrator = cpumask_any(tmigr_available_cpumask);
> > + migrator = cpumask_any_but(tmigr_available_cpumask, cpu);
>
> and this should be done in a preparatory patch along with a
> reasonable explanation in the change log.
>
> > work_on_cpu(migrator, tmigr_trigger_active, NULL);
> > }
> >
> > - return 0;
> > + return 1;
>
> But thinking more about it. What's the actual point of moving this 'clear'
> out instead of just moving it further down?
>
> It does not matter at all whether the isol/unisol muck clears an already
> cleared bit or not. But it would keep the function name comprehensible
> and avoid all this online/offline wrapper nonsense.
That was my suggestion.
It's because tmigr_clear_cpu_available() and tmigr_set_cpu_available()
can now all be called concurrently through the workqueues and race and
mess up the cpumask if they all try to clear/set at the same time...
And I couldn't find a saner way to order things...
Thanks.
--
Frederic Weisbecker
SUSE Labs
On Wed, Nov 19 2025 at 18:14, Frederic Weisbecker wrote:
> On Wed, Nov 19, 2025 at 05:50:15PM +0100, Thomas Gleixner wrote:
>> But thinking more about it. What's the actual point of moving this 'clear'
>> out instead of just moving it further down?
>>
>> It does not matter at all whether the isol/unisol muck clears an already
>> cleared bit or not. But it would keep the function name comprehensible
>> and avoid all this online/offline wrapper nonsense.
>
> That was my suggestion.
>
> It's because tmigr_clear_cpu_available() and tmigr_set_cpu_available()
> can now all be called concurrently through the workqueues and race and
> mess up the cpumask if they all try to clear/set at the same time...
Huch?
cpumask_set_cpu() uses set_bit() and cpumask_clear_cpu() uses
clear_bit(). Both are atomic and nothing gets messed up.
The only undefined case would be if you end up setting/clearing the same
bit, which would require that the unisol and isol maps overlap. But that
would be a bug on it's own, no?
Thanks,
tglx
On Wed, Nov 19, 2025 at 07:15:42PM +0100, Thomas Gleixner wrote:
> On Wed, Nov 19 2025 at 18:14, Frederic Weisbecker wrote:
> > On Wed, Nov 19, 2025 at 05:50:15PM +0100, Thomas Gleixner wrote:
> >> But thinking more about it. What's the actual point of moving this 'clear'
> >> out instead of just moving it further down?
> >>
> >> It does not matter at all whether the isol/unisol muck clears an already
> >> cleared bit or not. But it would keep the function name comprehensible
> >> and avoid all this online/offline wrapper nonsense.
> >
> > That was my suggestion.
> >
> > It's because tmigr_clear_cpu_available() and tmigr_set_cpu_available()
> > can now all be called concurrently through the workqueues and race and
> > mess up the cpumask if they all try to clear/set at the same time...
>
> Huch?
>
> cpumask_set_cpu() uses set_bit() and cpumask_clear_cpu() uses
> clear_bit(). Both are atomic and nothing gets messed up.
Urgh, right...
>
> The only undefined case would be if you end up setting/clearing the same
> bit, which would require that the unisol and isol maps overlap. But that
> would be a bug on it's own, no?
Ok.
But then the "unisolate" works must be flushed before this line:
+ /* Set up the mask earlier to avoid races with the migrator CPU */
+ cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol);
Because that is non-atomic and can race with the cpumask_set_cpu() from the
works, right?
Thanks.
--
Frederic Weisbecker
SUSE Labs
On Wed, Nov 19 2025 at 21:13, Frederic Weisbecker wrote:
> On Wed, Nov 19, 2025 at 07:15:42PM +0100, Thomas Gleixner wrote:
>> On Wed, Nov 19 2025 at 18:14, Frederic Weisbecker wrote:
>> > On Wed, Nov 19, 2025 at 05:50:15PM +0100, Thomas Gleixner wrote:
>> >> But thinking more about it. What's the actual point of moving this 'clear'
>> >> out instead of just moving it further down?
>> >>
>> >> It does not matter at all whether the isol/unisol muck clears an already
>> >> cleared bit or not. But it would keep the function name comprehensible
>> >> and avoid all this online/offline wrapper nonsense.
>> >
>> > That was my suggestion.
>> >
>> > It's because tmigr_clear_cpu_available() and tmigr_set_cpu_available()
>> > can now all be called concurrently through the workqueues and race and
>> > mess up the cpumask if they all try to clear/set at the same time...
>>
>> Huch?
>>
>> cpumask_set_cpu() uses set_bit() and cpumask_clear_cpu() uses
>> clear_bit(). Both are atomic and nothing gets messed up.
>
> Urgh, right...
>
>>
>> The only undefined case would be if you end up setting/clearing the same
>> bit, which would require that the unisol and isol maps overlap. But that
>> would be a bug on it's own, no?
>
> Ok.
>
> But then the "unisolate" works must be flushed before this line:
>
> + /* Set up the mask earlier to avoid races with the migrator CPU */
> + cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol);
>
> Because that is non-atomic and can race with the cpumask_set_cpu() from the
> works, right?
Correct, but I still have to understand why this has to happen
upfront. As I said before this comment is useless:
> + /* Set up the mask earlier to avoid races with the migrator CPU */
which decodes to:
Set up the mask earlier than something unspecified to avoid
unspecified races with the migrator CPU.
Seriously?
But I finally understood what this is about after staring at it with
less disgust again for five times in a row:
tmigr_clear_cpu_available() requires a stable cpumask to find a
migrator.
So what this is about is to avoid touching the cpumask in those worker
functions so that tmigr_isolated_exclude_cpumask() can fiddle with them
asynchronously.
Right?
That's voodoo programming which nobody - even those involved - will
understand anymore three months down the road.
The idea that updating the masks upfront will provide stable state is
flawed to begin with. Let's assume the following scenario:
1) The isol/unisol mask is just flipped around
2) The newly available CPUs are marked in the mask
3) The work is scheduled on those CPUs
4) The newly unavailable CPUs are cleared in the mask
5) The work is scheduled on those CPUs
6) The newly available CPU workers are delayed so that the newly
unavailable workers get there first. They find a stable CPU mask,
but none of these now "available" CPUs are actually brought into
active state yet.
That's just a perfect recipe for some completely undecodable bug to
happen anytime soon even it it supposed to "work" by chance.
The way how this was designed in the first place is that changing the
"availability" is fully serialized by the CPU hotplug machinery. Which
means zero surprises.
Now you create a side channel, which lacks these serialization
guarantees for absolutely no reason. Changing this isolation muck at
run-time is not a hotpath operation and trying to optimize it for no
good reason is just stupid.
The most trivial solution is to schedule the works one by one walking
through the newly available and then through the unavailable maps.
If you want to be a bit smarter, then you can just use a global mutex,
which is taken inside the set/clear_available() functions which
serializes the related functionality and bulk schedule/flush the unisol
(make available) first and then proceed to the isol (make unavailable).
Then nothing has to change vs. the set/clear operations and everything
just works.
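A rough sketch of that shape, assuming a hypothetical tmigr_isolation_mutex is taken inside tmigr_set/clear_cpu_available() (not shown) and the cpumask updates stay in those functions; the point is only the ordering and the flushes:

static int __tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
{
	struct work_struct __percpu *works = alloc_percpu(struct work_struct);
	cpumask_var_t tmp;
	int cpu;

	if (!works)
		return -ENOMEM;
	if (!alloc_cpumask_var(&tmp, GFP_KERNEL)) {
		free_percpu(works);
		return -ENOMEM;
	}

	/* 1) Bring newly unisolated online CPUs back and wait for them */
	cpumask_andnot(tmp, cpu_online_mask, exclude_cpumask);
	cpumask_andnot(tmp, tmp, tmigr_available_cpumask);
	for_each_cpu(cpu, tmp) {
		INIT_WORK(per_cpu_ptr(works, cpu), tmigr_cpu_unisolate);
		schedule_work_on(cpu, per_cpu_ptr(works, cpu));
	}
	for_each_cpu(cpu, tmp)
		flush_work(per_cpu_ptr(works, cpu));

	/* 2) Only then take newly isolated CPUs out, so a migrator always exists */
	cpumask_and(tmp, exclude_cpumask, tmigr_available_cpumask);
	cpumask_and(tmp, tmp, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
	for_each_cpu(cpu, tmp) {
		/* The tick CPU has to stay available for the nohz_full CPUs */
		if (!tick_nohz_cpu_hotpluggable(cpu)) {
			cpumask_clear_cpu(cpu, tmp);
			continue;
		}
		INIT_WORK(per_cpu_ptr(works, cpu), tmigr_cpu_isolate);
		schedule_work_on(cpu, per_cpu_ptr(works, cpu));
	}
	for_each_cpu(cpu, tmp)
		flush_work(per_cpu_ptr(works, cpu));

	free_cpumask_var(tmp);
	free_percpu(works);
	return 0;
}

Because the unisolate works are flushed before any CPU is taken out, the inverted-partition case discussed below cannot leave tmigr_available_cpumask empty.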
That mutex does not do any harm in the CPU hotplug case and the
serialization vs. the workers is not going to be the end of the world.
I'm willing to bet that no real-world use-case will ever notice the
existance of this mutex. The microbenchmark which shows off the "I'm so
smart" metric is completely irrelevant especially when the result is
fragile, incomprehensible and therefore unmaintainable.
Keep it correct and simple is still the most important engineering
principle. Premature optimization is a guaranteed path to failure.
If there is a compelling use case which justifies the resulting
complexity, then it can be built on top. I'm not holding my breath. See
above...
Thanks,
tglx
On Wed, Nov 19, 2025 at 10:23:11PM +0100, Thomas Gleixner wrote:
> On Wed, Nov 19 2025 at 21:13, Frederic Weisbecker wrote:
> > On Wed, Nov 19, 2025 at 07:15:42PM +0100, Thomas Gleixner wrote:
> >> On Wed, Nov 19 2025 at 18:14, Frederic Weisbecker wrote:
> >> > On Wed, Nov 19, 2025 at 05:50:15PM +0100, Thomas Gleixner wrote:
> >> >> But thinking more about it. What's the actual point of moving this 'clear'
> >> >> out instead of just moving it further down?
> >> >>
> >> >> It does not matter at all whether the isol/unisol muck clears an already
> >> >> cleared bit or not. But it would keep the function name comprehensible
> >> >> and avoid all this online/offline wrapper nonsense.
> >> >
> >> > That was my suggestion.
> >> >
> >> > It's because tmigr_clear_cpu_available() and tmigr_set_cpu_available()
> >> > can now all be called concurrently through the workqueues and race and
> >> > mess up the cpumask if they all try to clear/set at the same time...
> >>
> >> Huch?
> >>
> >> cpumask_set_cpu() uses set_bit() and cpumask_clear_cpu() uses
> >> clear_bit(). Both are atomic and nothing gets messed up.
> >
> > Urgh, right...
> >
> >>
> >> The only undefined case would be if you end up setting/clearing the same
> >> bit, which would require that the unisol and isol maps overlap. But that
> >> would be a bug on it's own, no?
> >
> > Ok.
> >
> > But then the "unisolate" works must be flushed before this line:
> >
> > + /* Set up the mask earlier to avoid races with the migrator CPU */
> > + cpumask_andnot(tmigr_available_cpumask, tmigr_available_cpumask, cpumask_isol);
> >
> > Because that is non-atomic and can race with the cpumask_set_cpu() from the
> > works, right?
>
> Correct, but I still have to understand why this has to happen
> upfront. As I said before this comment is useless:
>
> > + /* Set up the mask earlier to avoid races with the migrator CPU */
>
> which decodes to:
>
> Set up the mask earlier than something unspecified to avoid
> unspecified races with the migrator CPU.
>
> Seriously?
>
> But I finally understood what this is about after staring at it with
> less disgust again for five times in a row:
>
> tmigr_clear_cpu_available() requires a stable cpumask to find a
> migrator.
>
> So what this is about is to avoid touching the cpumask in those worker
> functions so that tmigr_isolated_exclude_cpumask() can fiddle with them
> asynchronously.
>
> Right?
>
> That's voodoo programming which nobody - even those involved - will
> understand anymore three months down the road.
>
> The idea that updating the masks upfront will provide stable state is
> flawed to begin with. Let's assume the following scenario:
>
> 1) The isol/unisol mask is just flipped around
>
> 2) The newly available CPUs are marked in the mask
>
> 3) The work is scheduled on those CPUs
>
> 4) The newly unavailable CPUs are cleared in the mask
>
> 5) The work is scheduled on those CPUs
>
> 6) The newly available CPU workers are delayed so that the newly
> unavailable workers get there first. They find a stable CPU mask,
> but none of these now "available" CPUs are actually brought into
> active state yet.
>
> That's just a perfect recipe for some completely undecodable bug to
> happen anytime soon even it it supposed to "work" by chance.
Urgh, indeed I completely missed that...
>
> The way how this was designed in the first place is that changing the
> "availability" is fully serialized by the CPU hotplug machinery. Which
> means zero surprises.
>
> Now you create a side channel, which lacks these serialization
> guarantees for absolutely no reason. Changing this isolation muck at
> run-time is not a hotpath operation and trying to optimize it for no
> good reason is just stupid.
>
> The most trivial solution is to schedule the works one by one walking
> through the newly available and then through the unavailable maps.
Right.
>
> If you want to be a bit smarter, then you can just use a global mutex,
> which is taken inside the set/clear_available() functions which
> serializes the related functionality and bulk schedule/flush the unisol
> (make available) first and then proceed to the isol (make unavailable).
>
> Then nothing has to change vs. the set/clear operations and everything
> just works.
>
> That mutex does not do any harm in the CPU hotplug case and the
> serialization vs. the workers is not going to be the end of the world.
>
> I'm willing to bet that no real-world use-case will ever notice the
> existance of this mutex. The microbenchmark which shows off the "I'm so
> smart" metric is completely irrelevant especially when the result is
> fragile, incomprehensible and therefore unmaintainable.
>
> Keep it correct and simple is still the most important engineering
> principle. Premature optimization is a guaranteed path to failure.
>
> If there is a compelling use case which justifies the resulting
> complexity, then it can be built on top. I'm not holding my breath. See
> above...
Perhaps the only thing that worries me is if an isolated partition
is inverted. Say 0-3 is non isolated and 4-7 is isolated. And then
cpuset is overwritten so that the reverse is applied: 0-3 is isolated
and 4-7 is not isolated. If all isol works reach before unisol works,
then tmigr_clear_cpu_available() -> cpumask_any(tmigr_available_mask)
won't find any CPU left on the last call.
Now last time I tried to invert an isolated cpumask in cpuset, I got
-EINVAL so something must be preventing from that. Just in case let's
have a WARN_ON_ONCE(cpumask_any(tmigr_available_mask) >= nr_cpu_ids).
But other than this detail, your solution looks good!
Thanks.
--
Frederic Weisbecker
SUSE Labs
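For reference, the safety net Frederic mentions would be a one-liner where the migrator is picked in tmigr_clear_cpu_available() (illustrative placement only):

	if (firstexp != KTIME_MAX) {
		migrator = cpumask_any(tmigr_available_cpumask);
		/* Every CPU isolated or offline should be impossible, don't pull from nowhere */
		if (!WARN_ON_ONCE(migrator >= nr_cpu_ids))
			work_on_cpu(migrator, tmigr_trigger_active, NULL);
	}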
On Wed, Nov 19 2025 at 23:02, Frederic Weisbecker wrote:
> On Wed, Nov 19, 2025 at 10:23:11PM +0100, Thomas Gleixner wrote:
>> If you want to be a bit smarter, then you can just use a global mutex,
>> which is taken inside the set/clear_available() functions which
>> serializes the related functionality and bulk schedule/flush the unisol
>> (make available) first and then proceed to the isol (make unavailable).
>>
>> Then nothing has to change vs. the set/clear operations and everything
>> just works.
>>
>> That mutex does not do any harm in the CPU hotplug case and the
>> serialization vs. the workers is not going to be the end of the world.
>>
>> I'm willing to bet that no real-world use-case will ever notice the
>> existance of this mutex. The microbenchmark which shows off the "I'm so
>> smart" metric is completely irrelevant especially when the result is
>> fragile, incomprehensible and therefore unmaintainable.
>>
>> Keep it correct and simple is still the most important engineering
>> principle. Premature optimization is a guaranteed path to failure.
>>
>> If there is a compelling use case which justifies the resulting
>> complexity, then it can be built on top. I'm not holding my breath. See
>> above...
>
> Perhaps the only thing that worries me is if an isolated partition
> is inverted. Say 0-3 is non isolated and 4-7 is isolated. And then
> cpuset is overwritten so that the reverse is applied: 0-3 is isolated
> and 4-7 is not isolated. If all isol works reach before unisol works,
> then tmigr_clear_cpu_available() -> cpumask_any(tmigr_available_mask)
> won't find any CPU left on the last call.
schedule the newly available (now unisolated) ones first and flush that
work. After that you can safely mark the others unavailable, no?
Thanks
tglx
On Wed, Nov 19, 2025 at 11:10:58PM +0100, Thomas Gleixner wrote:
> On Wed, Nov 19 2025 at 23:02, Frederic Weisbecker wrote:
> > On Wed, Nov 19, 2025 at 10:23:11PM +0100, Thomas Gleixner wrote:
> >> If you want to be a bit smarter, then you can just use a global mutex,
> >> which is taken inside the set/clear_available() functions which
> >> serializes the related functionality and bulk schedule/flush the unisol
> >> (make available) first and then proceed to the isol (make unavailable).
> >>
> >> Then nothing has to change vs. the set/clear operations and everything
> >> just works.
> >>
> >> That mutex does not do any harm in the CPU hotplug case and the
> >> serialization vs. the workers is not going to be the end of the world.
> >>
> >> I'm willing to bet that no real-world use-case will ever notice the
> >> existance of this mutex. The microbenchmark which shows off the "I'm so
> >> smart" metric is completely irrelevant especially when the result is
> >> fragile, incomprehensible and therefore unmaintainable.
> >>
> >> Keep it correct and simple is still the most important engineering
> >> principle. Premature optimization is a guaranteed path to failure.
> >>
> >> If there is a compelling use case which justifies the resulting
> >> complexity, then it can be built on top. I'm not holding my breath. See
> >> above...
> >
> > Perhaps the only thing that worries me is if an isolated partition
> > is inverted. Say 0-3 is non isolated and 4-7 is isolated. And then
> > cpuset is overwritten so that the reverse is applied: 0-3 is isolated
> > and 4-7 is not isolated. If all isol works reach before unisol works,
> > then tmigr_clear_cpu_available() -> cpumask_any(tmigr_available_mask)
> > won't find any CPU left on the last call.
>
> schedule the newly available (now unisolated) ones first and flush that
> work. After that you can safely mark the others unavailable, no?
Yes!
--
Frederic Weisbecker
SUSE Labs