From: Gabriele Monaco
To: linux-kernel@vger.kernel.org, Anna-Maria Behnsen, Frederic Weisbecker,
	Thomas Gleixner, Waiman Long
Cc: Gabriele Monaco, "John B. Wyatt IV", "John B. Wyatt IV"
Subject: [PATCH v16 7/7] timers/migration: Exclude isolated cpus from hierarchy
Date: Thu, 20 Nov 2025 15:56:53 +0100
Message-ID: <20251120145653.296659-8-gmonaco@redhat.com>
In-Reply-To: <20251120145653.296659-1-gmonaco@redhat.com>
References: <20251120145653.296659-1-gmonaco@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The timer migration mechanism allows active CPUs to pull timers from idle
ones to improve the overall idle time. This is however undesired when
CPU-intensive workloads run on isolated cores, as the algorithm would move
the timers from housekeeping to isolated cores, negatively affecting the
isolation.

Exclude isolated cores from the timer migration algorithm by extending the
concept of unavailable cores, currently used for offline ones, to isolated
ones:

* A core is unavailable if isolated or offline;
* A core is available if non-isolated and online;

A core is considered unavailable as isolated if it belongs to:

* the isolcpus (domain) list
* an isolated cpuset

Except if it is:

* in the nohz_full list (already idle for the hierarchy)
* the nohz timekeeper core (must be available to handle global timers)

CPUs are added to the hierarchy during late boot, excluding isolated ones;
the hierarchy is also adapted when the cpuset isolation changes.

Due to how the timer migration algorithm works, any CPU that is part of the
hierarchy can have its global timers pulled by remote CPUs and has to pull
remote timers as well; skipping only the pulling of remote timers would
break the logic. For this reason, prevent isolated CPUs from pulling remote
global timers, but also the other way around: any global timer started on
an isolated CPU will run there. This does not break the concept of
isolation (global timers don't come from outside the CPU) and, if
considered inappropriate, can usually be mitigated with other isolation
techniques (e.g. IRQ pinning).

This effect was noticed on a 128-core machine running oslat on the isolated
cores (1-31,33-63,65-95,97-127). The tool monopolises CPUs, and the CPU
with the lowest count in a timer migration hierarchy (here 1 and 65)
appears as always active and continuously pulls global timers from the
housekeeping CPUs. This ends up moving driver work (e.g. delayed work) to
isolated CPUs and causes latency spikes:

before the change:

  # oslat -c 1-31,33-63,65-95,97-127 -D 62s
  ...
  Maximum: 1203 10 3 4 ... 5 (us)

after the change:

  # oslat -c 1-31,33-63,65-95,97-127 -D 62s
  ...
  Maximum: 10 4 3 4 3 ... 5 (us)

The same behaviour was observed on a machine with as few as 20 cores /
40 threads, with isolcpus set to 1-9,11-39, using rtla-osnoise-top.

Tested-by: John B. Wyatt IV
Tested-by: John B. Wyatt IV
Signed-off-by: Gabriele Monaco
Reviewed-by: Frederic Weisbecker
Reviewed-by: Thomas Gleixner
---
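Note for reviewers, not meant for the changelog: the availability rule
described above can be summarised with the stand-alone sketch below. It is
plain user-space C meant only as an illustration; the struct and booleans
(cpu_state, domain_isolated, nohz_full, tick_cpu) are made up for this note
and merely stand in for the kernel's housekeeping_cpu() /
cpuset_cpu_is_isolated() checks. The real implementation is
tmigr_is_isolated() and its callers further down in this patch.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel-side state of one CPU. */
struct cpu_state {
	bool online;
	bool domain_isolated;	/* isolcpus= (domain) or isolated cpuset */
	bool nohz_full;		/* in the nohz_full= list */
	bool tick_cpu;		/* nohz timekeeper */
};

/* A CPU stays in the timer migration hierarchy only if this returns true. */
static bool tmigr_available(const struct cpu_state *c)
{
	if (!c->online)
		return false;
	/* nohz_full CPUs stay in the hierarchy; they simply go idle */
	if (c->nohz_full)
		return true;
	/* the timekeeper must remain available to handle global timers */
	if (c->tick_cpu)
		return true;
	/* everything else: available unless domain isolated */
	return !c->domain_isolated;
}

int main(void)
{
	struct cpu_state housekeeping = { .online = true };
	struct cpu_state isolated = { .online = true, .domain_isolated = true };

	printf("housekeeping available: %d\n", tmigr_available(&housekeeping));
	printf("isolated available:     %d\n", tmigr_available(&isolated));
	return 0;
}

In the kernel the two exceptions are handled in different places: nohz_full
CPUs are filtered through HK_TYPE_KERNEL_NOISE directly in
tmigr_is_isolated(), while the tick CPU is skipped by the callers (see the
tick_nohz_cpu_hotpluggable() check in tmigr_isolated_exclude_cpumask()).
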
 include/linux/timer.h         |   9 +++
 kernel/cgroup/cpuset.c        |   3 +
 kernel/time/timer_migration.c | 142 ++++++++++++++++++++++++++++++++++
 3 files changed, 154 insertions(+)

diff --git a/include/linux/timer.h b/include/linux/timer.h
index 0414d9e6b4fc..62e1cea71125 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -188,4 +188,13 @@ int timers_dead_cpu(unsigned int cpu);
 #define timers_dead_cpu NULL
 #endif
 
+#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+extern int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask);
+#else
+static inline int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+	return 0;
+}
+#endif
+
 #endif
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index cf34623fe66f..bfc3b319e1c0 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1350,6 +1350,9 @@ static void update_isolation_cpumasks(bool isolcpus_updated)
 
 	ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
 	WARN_ON_ONCE(ret < 0);
+
+	ret = tmigr_isolated_exclude_cpumask(isolated_cpus);
+	WARN_ON_ONCE(ret < 0);
 }
 
 /**
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index a01c7f8bdf52..4c5f2a576e13 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 #include "timer_migration.h"
 #include "tick-internal.h"
@@ -427,8 +428,13 @@ static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
 /*
  * CPUs available for timer migration.
  * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
+ * Additionally tmigr_available_mutex serialises set/clear operations with each other.
  */
 static cpumask_var_t tmigr_available_cpumask;
+static DEFINE_MUTEX(tmigr_available_mutex);
+
+/* Enabled during late initcall */
+static DEFINE_STATIC_KEY_FALSE(tmigr_exclude_isolated);
 
 #define TMIGR_NONE	0xFF
 #define BIT_CNT		8
@@ -438,6 +444,33 @@ static inline bool tmigr_is_not_available(struct tmigr_cpu *tmc)
 	return !(tmc->tmgroup && tmc->available);
 }
 
+/*
+ * Returns true if @cpu should be excluded from the hierarchy as isolated.
+ * Domain isolated CPUs don't participate in timer migration, nohz_full CPUs
+ * are still part of the hierarchy but become idle (from a tick and timer
+ * migration perspective) when they stop their tick. This lets the timekeeping
+ * CPU handle their global timers. Marking also isolated CPUs as idle would be
+ * too costly, hence they are completely excluded from the hierarchy.
+ * This check is necessary, for instance, to prevent offline isolated CPUs from
+ * being incorrectly marked as available once getting back online.
+ *
+ * This function returns false during early boot and the isolation logic is
+ * enabled only after isolated CPUs are marked as unavailable at late boot.
+ * The tick CPU can be isolated at boot, however we cannot mark it as
+ * unavailable to avoid having no global migrator for the nohz_full CPUs. This
+ * should be ensured by the callers of this function: implicitly from hotplug
+ * callbacks and explicitly in tmigr_init_isolation() and
+ * tmigr_isolated_exclude_cpumask().
+ */
+static inline bool tmigr_is_isolated(int cpu)
+{
+	if (!static_branch_unlikely(&tmigr_exclude_isolated))
+		return false;
+	return (!housekeeping_cpu(cpu, HK_TYPE_DOMAIN) ||
+		cpuset_cpu_is_isolated(cpu)) &&
+	       housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE);
+}
+
 /*
  * Returns true, when @childmask corresponds to the group migrator or when the
  * group is not active - so no migrator is set.
@@ -1439,8 +1472,12 @@ static int tmigr_clear_cpu_available(unsigned int cpu)
 	int migrator;
 	u64 firstexp;
 
+	guard(mutex)(&tmigr_available_mutex);
+
 	cpumask_clear_cpu(cpu, tmigr_available_cpumask);
 	scoped_guard(raw_spinlock_irq, &tmc->lock) {
+		if (!tmc->available)
+			return 0;
 		tmc->available = false;
 		WRITE_ONCE(tmc->wakeup, KTIME_MAX);
 
@@ -1468,8 +1505,15 @@ static int tmigr_set_cpu_available(unsigned int cpu)
 	if (WARN_ON_ONCE(!tmc->tmgroup))
 		return -EINVAL;
 
+	if (tmigr_is_isolated(cpu))
+		return 0;
+
+	guard(mutex)(&tmigr_available_mutex);
+
 	cpumask_set_cpu(cpu, tmigr_available_cpumask);
 	scoped_guard(raw_spinlock_irq, &tmc->lock) {
+		if (tmc->available)
+			return 0;
 		trace_tmigr_cpu_available(tmc);
 		tmc->idle = timer_base_is_idle();
 		if (!tmc->idle)
@@ -1479,6 +1523,103 @@ static int tmigr_set_cpu_available(unsigned int cpu)
 	return 0;
 }
 
+static void tmigr_cpu_isolate(struct work_struct *ignored)
+{
+	tmigr_clear_cpu_available(smp_processor_id());
+}
+
+static void tmigr_cpu_unisolate(struct work_struct *ignored)
+{
+	tmigr_set_cpu_available(smp_processor_id());
+}
+
+/**
+ * tmigr_isolated_exclude_cpumask - Exclude given CPUs from hierarchy
+ * @exclude_cpumask: the cpumask to be excluded from timer migration hierarchy
+ *
+ * This function can be called from cpuset code to provide the new set of
+ * isolated CPUs that should be excluded from the hierarchy.
+ * Online CPUs not present in exclude_cpumask but already excluded are brought
+ * back to the hierarchy.
+ * Functions to isolate/unisolate need to be called locally and can sleep.
+ */
+int tmigr_isolated_exclude_cpumask(struct cpumask *exclude_cpumask)
+{
+	struct work_struct __percpu *works __free(free_percpu) =
+		alloc_percpu(struct work_struct);
+	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+	int cpu;
+
+	lockdep_assert_cpus_held();
+
+	if (!works)
+		return -ENOMEM;
+	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+		return -ENOMEM;
+
+	/*
+	 * First set previously isolated CPUs as available (unisolate).
+	 * This cpumask contains only CPUs that switched to available now.
+	 */
+	cpumask_andnot(cpumask, cpu_online_mask, exclude_cpumask);
+	cpumask_andnot(cpumask, cpumask, tmigr_available_cpumask);
+
+	for_each_cpu(cpu, cpumask) {
+		struct work_struct *work = per_cpu_ptr(works, cpu);
+
+		INIT_WORK(work, tmigr_cpu_unisolate);
+		schedule_work_on(cpu, work);
+	}
+	for_each_cpu(cpu, cpumask)
+		flush_work(per_cpu_ptr(works, cpu));
+
+	/*
+	 * Then clear previously available CPUs (isolate).
+	 * This cpumask contains only CPUs that switched to not available now.
+	 * There cannot be overlap with the newly available ones.
+	 */
+	cpumask_and(cpumask, exclude_cpumask, tmigr_available_cpumask);
+	cpumask_and(cpumask, cpumask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE));
+	/*
+	 * Handle this here and not in the cpuset code because exclude_cpumask
+	 * might include also the tick CPU if included in isolcpus.
+	 */
+	for_each_cpu(cpu, cpumask) {
+		if (!tick_nohz_cpu_hotpluggable(cpu)) {
+			cpumask_clear_cpu(cpu, cpumask);
+			break;
+		}
+	}
+
+	for_each_cpu(cpu, cpumask) {
+		struct work_struct *work = per_cpu_ptr(works, cpu);
+
+		INIT_WORK(work, tmigr_cpu_isolate);
+		schedule_work_on(cpu, work);
+	}
+	for_each_cpu(cpu, cpumask)
+		flush_work(per_cpu_ptr(works, cpu));
+
+	return 0;
+}
+
+static int __init tmigr_init_isolation(void)
+{
+	cpumask_var_t cpumask __free(free_cpumask_var) = CPUMASK_VAR_NULL;
+
+	static_branch_enable(&tmigr_exclude_isolated);
+
+	if (!housekeeping_enabled(HK_TYPE_DOMAIN))
+		return 0;
+	if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
+		return -ENOMEM;
+
+	cpumask_andnot(cpumask, cpu_possible_mask, housekeeping_cpumask(HK_TYPE_DOMAIN));
+
+	guard(cpus_read_lock)();
+	return tmigr_isolated_exclude_cpumask(cpumask);
+}
+
 static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
			     int node)
 {
@@ -1878,3 +2019,4 @@ static int __init tmigr_init(void)
 	return ret;
 }
 early_initcall(tmigr_init);
+late_initcall(tmigr_init_isolation);
-- 
2.51.1