From: Mel Gorman
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, K Prateek Nayak,
    Aubrey Li, Ying Huang, LKML, Mel Gorman
Subject: [PATCH 1/4] sched/numa: Initialise numa_migrate_retry
Date: Fri, 20 May 2022 11:35:16 +0100
Message-Id: <20220520103519.1863-2-mgorman@techsingularity.net>
In-Reply-To: <20220520103519.1863-1-mgorman@techsingularity.net>
References: <20220520103519.1863-1-mgorman@techsingularity.net>

On clone, numa_migrate_retry is inherited from the parent, which means
that the first NUMA placement of a task is non-deterministic. This
affects when load balancing recognises NUMA tasks and whether to migrate
"regular", "remote" or "all" tasks between NUMA scheduler domains.

Signed-off-by: Mel Gorman
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299d67ab..867806a57119 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2873,6 +2873,7 @@ void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
         p->node_stamp                   = 0;
         p->numa_scan_seq                = mm ? mm->numa_scan_seq : 0;
         p->numa_scan_period             = sysctl_numa_balancing_scan_delay;
+        p->numa_migrate_retry           = 0;
         /* Protect against double add, see task_tick_numa and task_numa_work */
         p->numa_work.next               = &p->numa_work;
         p->numa_faults                  = NULL;
-- 
2.34.1
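Why the inherited value matters: the NUMA fault path only attempts a new
placement once the retry timestamp has expired (roughly, a
time_after(jiffies, p->numa_migrate_retry) check in kernel/sched/fair.c),
and numa_migrate_preferred() re-arms the timestamp after each attempt. The
toy program below is illustrative userspace code, not kernel code, and the
timestamps are made-up numbers; it only models how a stamp copied from the
parent can defer a child's first placement attempt while a zeroed stamp
allows it immediately:

/* Toy model (userspace, illustrative only): an inherited retry stamp can
 * push a child's first NUMA placement attempt out; a zeroed one cannot. */
#include <stdio.h>
#include <stdbool.h>

static bool placement_allowed(unsigned long now, unsigned long migrate_retry)
{
        /* mirrors a time_after(now, migrate_retry) style gate */
        return (long)(now - migrate_retry) > 0;
}

int main(void)
{
        unsigned long now = 1000;       /* pretend jiffies */
        unsigned long inherited = 1800; /* copied from the parent, still in the future */
        unsigned long reset = 0;        /* what this patch initialises the field to */

        printf("inherited stamp: placement %s\n",
               placement_allowed(now, inherited) ? "attempted" : "deferred");
        printf("reset stamp:     placement %s\n",
               placement_allowed(now, reset) ? "attempted" : "deferred");
        return 0;
}

With the patch, every freshly cloned task starts from the "reset" case.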
From: Mel Gorman
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, K Prateek Nayak,
    Aubrey Li, Ying Huang, LKML, Mel Gorman
Subject: [PATCH 2/4] sched/numa: Do not swap tasks between nodes when spare capacity is available
Date: Fri, 20 May 2022 11:35:17 +0100
Message-Id: <20220520103519.1863-3-mgorman@techsingularity.net>
In-Reply-To: <20220520103519.1863-1-mgorman@techsingularity.net>
References: <20220520103519.1863-1-mgorman@techsingularity.net>

If a destination node has spare capacity but there is an imbalance then
two tasks are selected for swapping. If the tasks have no numa group or
are within the same NUMA group, it's simply shuffling tasks around
without having any impact on the compute imbalance. Instead, it's just
punishing one task to help another.

Signed-off-by: Mel Gorman
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 867806a57119..03b1ad79d47d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1778,6 +1778,15 @@ static bool task_numa_compare(struct task_numa_env *env,
          */
         cur_ng = rcu_dereference(cur->numa_group);
         if (cur_ng == p_ng) {
+                /*
+                 * Do not swap within a group or between tasks that have
+                 * no group if there is spare capacity. Swapping does
+                 * not address the load imbalance and helps one task at
+                 * the cost of punishing another.
+                 */
+                if (env->dst_stats.node_type == node_has_spare)
+                        goto unlock;
+
                 imp = taskimp + task_weight(cur, env->src_nid, dist) -
                       task_weight(cur, env->dst_nid, dist);
                 /*
-- 
2.34.1
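The new bail-out keys off env->dst_stats.node_type being node_has_spare,
i.e. the destination node can still take a task without becoming
overloaded. In that case a swap is pure churn: one task moves in each
direction, so the per-node running counts, and with them the compute
imbalance, end up exactly where they started. The snippet below is a
stand-alone illustration with made-up task counts, not kernel code:

/* Illustrative only: why a swap cannot fix an imbalance when the
 * destination still has spare capacity. */
#include <stdio.h>

int main(void)
{
        int src_running = 6, dst_running = 2;   /* hypothetical per-node counts */

        /* swap: one task moves each way, the counts cannot change */
        int swap_src = src_running - 1 + 1, swap_dst = dst_running + 1 - 1;

        /* plain migration to spare capacity: the gap shrinks by two */
        int move_src = src_running - 1, move_dst = dst_running + 1;

        printf("swap: src=%d dst=%d imbalance=%d\n",
               swap_src, swap_dst, swap_src - swap_dst);
        printf("move: src=%d dst=%d imbalance=%d\n",
               move_src, move_dst, move_src - move_dst);
        return 0;
}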
From: Mel Gorman
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, K Prateek Nayak,
    Aubrey Li, Ying Huang, LKML, Mel Gorman
Subject: [PATCH 3/4] sched/numa: Apply imbalance limitations consistently
Date: Fri, 20 May 2022 11:35:18 +0100
Message-Id: <20220520103519.1863-4-mgorman@techsingularity.net>
In-Reply-To: <20220520103519.1863-1-mgorman@techsingularity.net>
References: <20220520103519.1863-1-mgorman@techsingularity.net>

The imbalance limitations are applied inconsistently at fork time and at
runtime. At fork, a new task can remain local until there are too many
running tasks, even if the degree of imbalance is larger than
NUMA_IMBALANCE_MIN, which is different to runtime. Secondly, the
imbalance figure used during load balancing is different to the one used
at NUMA placement: load balancing uses the number of tasks that must move
to restore balance, whereas NUMA balancing uses the total imbalance. In
combination, a parallel workload that uses a small number of CPUs without
applying scheduler policies can show very variable run-to-run
performance.
[lkp@intel.com: Fix build breakage for arc-allyesconfig]
Signed-off-by: Mel Gorman
Reported-by: kernel test robot
Tested-by: K Prateek Nayak
---
 kernel/sched/fair.c | 81 +++++++++++++++++++++++++--------------------
 1 file changed, 45 insertions(+), 36 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 03b1ad79d47d..0b3646be88b3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1043,6 +1043,33 @@ update_stats_curr_start(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * Scheduling class queueing methods:
  */
 
+#ifdef CONFIG_NUMA
+#define NUMA_IMBALANCE_MIN 2
+
+static inline long
+adjust_numa_imbalance(int imbalance, int dst_running, int imb_numa_nr)
+{
+        /*
+         * Allow a NUMA imbalance if busy CPUs is less than the maximum
+         * threshold. Above this threshold, individual tasks may be contending
+         * for both memory bandwidth and any shared HT resources. This is an
+         * approximation as the number of running tasks may not be related to
+         * the number of busy CPUs due to sched_setaffinity.
+         */
+        if (dst_running > imb_numa_nr)
+                return imbalance;
+
+        /*
+         * Allow a small imbalance based on a simple pair of communicating
+         * tasks that remain local when the destination is lightly loaded.
+         */
+        if (imbalance <= NUMA_IMBALANCE_MIN)
+                return 0;
+
+        return imbalance;
+}
+#endif /* CONFIG_NUMA */
+
 #ifdef CONFIG_NUMA_BALANCING
 /*
  * Approximate time to scan a full NUMA task in ms. The task scan period is
@@ -1536,8 +1563,6 @@ struct task_numa_env {
 
 static unsigned long cpu_load(struct rq *rq);
 static unsigned long cpu_runnable(struct rq *rq);
-static inline long adjust_numa_imbalance(int imbalance,
-                                        int dst_running, int imb_numa_nr);
 
 static inline enum
 numa_type numa_classify(unsigned int imbalance_pct,
@@ -9098,16 +9123,6 @@ static bool update_pick_idlest(struct sched_group *idlest,
         return true;
 }
 
-/*
- * Allow a NUMA imbalance if busy CPUs is less than 25% of the domain.
- * This is an approximation as the number of running tasks may not be
- * related to the number of busy CPUs due to sched_setaffinity.
- */
-static inline bool allow_numa_imbalance(int running, int imb_numa_nr)
-{
-        return running <= imb_numa_nr;
-}
-
 /*
  * find_idlest_group() finds and returns the least busy CPU group within the
  * domain.
@@ -9224,6 +9239,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
                 break;
 
         case group_has_spare:
+#ifdef CONFIG_NUMA
                 if (sd->flags & SD_NUMA) {
 #ifdef CONFIG_NUMA_BALANCING
                         int idlest_cpu;
@@ -9237,7 +9253,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
                         idlest_cpu = cpumask_first(sched_group_span(idlest));
                         if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
                                 return idlest;
-#endif
+#endif /* CONFIG_NUMA_BALANCING */
                         /*
                          * Otherwise, keep the task close to the wakeup source
                          * and improve locality if the number of running tasks
@@ -9245,9 +9261,14 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
                          * allowed. If there is a real need of migration,
                          * periodic load balance will take care of it.
                          */
-                        if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, sd->imb_numa_nr))
+                        imbalance = abs(local_sgs.idle_cpus - idlest_sgs.idle_cpus);
+                        if (!adjust_numa_imbalance(imbalance,
+                                                   local_sgs.sum_nr_running + 1,
+                                                   sd->imb_numa_nr)) {
                                 return NULL;
+                        }
                 }
+#endif /* CONFIG_NUMA */
 
                 /*
                  * Select group with highest number of idle CPUs. We could also
@@ -9334,24 +9355,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
         }
 }
 
-#define NUMA_IMBALANCE_MIN 2
-
-static inline long adjust_numa_imbalance(int imbalance,
-                                int dst_running, int imb_numa_nr)
-{
-        if (!allow_numa_imbalance(dst_running, imb_numa_nr))
-                return imbalance;
-
-        /*
-         * Allow a small imbalance based on a simple pair of communicating
-         * tasks that remain local when the destination is lightly loaded.
-         */
-        if (imbalance <= NUMA_IMBALANCE_MIN)
-                return 0;
-
-        return imbalance;
-}
-
 /**
  * calculate_imbalance - Calculate the amount of imbalance present within the
  * groups of a given sched_domain during load balance.
@@ -9436,7 +9439,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
                  */
                 env->migration_type = migrate_task;
                 lsub_positive(&nr_diff, local->sum_nr_running);
-                env->imbalance = nr_diff >> 1;
+                env->imbalance = nr_diff;
         } else {
 
                 /*
@@ -9444,15 +9447,21 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
                  * idle cpus.
                  */
                 env->migration_type = migrate_task;
-                env->imbalance = max_t(long, 0, (local->idle_cpus -
-                                         busiest->idle_cpus) >> 1);
+                env->imbalance = max_t(long, 0,
+                                       (local->idle_cpus - busiest->idle_cpus));
         }
 
+#ifdef CONFIG_NUMA
         /* Consider allowing a small imbalance between NUMA groups */
         if (env->sd->flags & SD_NUMA) {
                 env->imbalance = adjust_numa_imbalance(env->imbalance,
-                        local->sum_nr_running + 1, env->sd->imb_numa_nr);
+                                                       local->sum_nr_running + 1,
+                                                       env->sd->imb_numa_nr);
         }
+#endif
+
+        /* Number of tasks to move to restore balance */
+        env->imbalance >>= 1;
 
         return;
 }
-- 
2.34.1
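Both call sites now go through the adjust_numa_imbalance() helper added in
the first hunk. The stand-alone sketch below copies that helper's
arithmetic and feeds it made-up numbers for the two callers; the
imb_numa_nr value and the CPU/task counts are hypothetical, and the
program models the policy rather than the kernel code paths themselves:

/* Userspace rendering of the adjust_numa_imbalance() policy from this
 * patch, exercised with hypothetical numbers for both call sites. */
#include <stdio.h>
#include <stdlib.h>

#define NUMA_IMBALANCE_MIN 2

static long adjust_numa_imbalance(int imbalance, int dst_running, int imb_numa_nr)
{
        if (dst_running > imb_numa_nr)
                return imbalance;
        if (imbalance <= NUMA_IMBALANCE_MIN)
                return 0;
        return imbalance;
}

int main(void)
{
        int imb_numa_nr = 4;    /* hypothetical sd->imb_numa_nr */

        /* fork time: local group has 7 idle CPUs, the idlest has 9,
         * and one task is already running locally */
        int fork_imb = abs(7 - 9);
        printf("fork: stay local? %s\n",
               adjust_numa_imbalance(fork_imb, 1 + 1, imb_numa_nr) ? "no" : "yes");

        /* load balance: local has 8 idle CPUs, busiest has 2,
         * with 3 tasks running locally */
        long lb_imb = 8 - 2;
        lb_imb = adjust_numa_imbalance(lb_imb, 3 + 1, imb_numa_nr);
        printf("load balance: tasks to move = %ld\n", lb_imb >> 1);
        return 0;
}

Note that the halving which converts an imbalance into a number of tasks
to move is now applied after the NUMA adjustment, so fork-time placement
and load balancing reason about the same total imbalance.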
From: Mel Gorman
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, K Prateek Nayak,
    Aubrey Li, Ying Huang, LKML, Mel Gorman
Subject: [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels
Date: Fri, 20 May 2022 11:35:19 +0100
Message-Id: <20220520103519.1863-5-mgorman@techsingularity.net>
In-Reply-To: <20220520103519.1863-1-mgorman@techsingularity.net>
References: <20220520103519.1863-1-mgorman@techsingularity.net>
For a single LLC per node, a NUMA imbalance is allowed up until 25% of
the CPUs sharing a node could be active. One intent of the cut-off is to
avoid an imbalance of memory channels, but there is no topological
information on active memory channels, and there can be differences
between nodes depending on the number of populated DIMMs. A cut-off of
25% was arbitrary but generally worked. It does have a severe corner case
though: a parallel workload using 25% of all available CPUs can
over-saturate memory channels. This can happen due to the initial forking
of tasks that get pulled more to one node after early wakeups (e.g. a
barrier synchronisation) that is not quickly corrected by the load
balancer. The load balancer may fail to act quickly as the parallel tasks
are considered to be poor migrate candidates due to locality or cache
hotness. On a range of modern Intel CPUs, 12.5% appears to be a better
cut-off assuming all memory channels are populated and is used as the new
cut-off point. A minimum of 1 is specified to allow a communicating pair
to remain local even for CPUs with a low number of cores. Modern AMD CPUs
have multiple LLCs per node and are not affected.

Signed-off-by: Mel Gorman
Tested-by: K Prateek Nayak
---
 kernel/sched/topology.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 810750e62118..2740e245cb37 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2295,23 +2295,30 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
 
                         /*
                          * For a single LLC per node, allow an
-                         * imbalance up to 25% of the node. This is an
-                         * arbitrary cutoff based on SMT-2 to balance
-                         * between memory bandwidth and avoiding
-                         * premature sharing of HT resources and SMT-4
-                         * or SMT-8 *may* benefit from a different
-                         * cutoff.
+                         * imbalance up to 12.5% of the node. This is
+                         * an arbitrary cutoff based on two factors -- SMT and
+                         * memory channels. For SMT-2, the intent is to
+                         * avoid premature sharing of HT resources but
+                         * SMT-4 or SMT-8 *may* benefit from a different
+                         * cutoff. For memory channels, this is a very
+                         * rough estimate of how many channels may be
+                         * active and is based on recent CPUs with
+                         * many cores.
                          *
                          * For multiple LLCs, allow an imbalance
                          * until multiple tasks would share an LLC
                          * on one node while LLCs on another node
-                         * remain idle.
+                         * remain idle. This assumes that there are
+                         * enough logical CPUs per LLC to avoid SMT
+                         * factors and that there is a correlation
+                         * between LLCs and memory channels.
                          */
                         nr_llcs = sd->span_weight / child->span_weight;
                         if (nr_llcs == 1)
-                                imb = sd->span_weight >> 2;
+                                imb = sd->span_weight >> 3;
                         else
                                 imb = nr_llcs;
+                        imb = max(1U, imb);
                         sd->imb_numa_nr = imb;
 
                         /* Set span based on the first NUMA domain. */
-- 
2.34.1
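The arithmetic change is small enough to check by hand. The program below
mirrors the nr_llcs/imb calculation from the hunk above for a few
hypothetical topologies (the node and LLC sizes are made up):

/* Mirror of the imb_numa_nr calculation from this patch, applied to a
 * few hypothetical topologies. */
#include <stdio.h>

static unsigned int imb_numa_nr(unsigned int span_weight, unsigned int child_span)
{
        unsigned int nr_llcs = span_weight / child_span;
        unsigned int imb;

        if (nr_llcs == 1)
                imb = span_weight >> 3;   /* 12.5%, previously >> 2 (25%) */
        else
                imb = nr_llcs;
        return imb > 1 ? imb : 1;         /* the new max(1U, imb) clamp */
}

int main(void)
{
        printf("80-CPU node, single LLC:   imb=%u (was %u)\n",
               imb_numa_nr(80, 80), 80 >> 2);
        printf("8-CPU node, single LLC:    imb=%u (was %u)\n",
               imb_numa_nr(8, 8), 8 >> 2);
        printf("128-CPU node, 16-CPU LLCs: imb=%u\n", imb_numa_nr(128, 16));
        return 0;
}

The small single-LLC node shows why the clamp is needed: 8 >> 3 would be 1
anyway, but even smaller spans would otherwise round down to 0 and forbid
a communicating pair from staying local.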