From: Qiliang Yuan
To: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org
Cc: dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, linux-kernel@vger.kernel.org, yuanql9@chinatelecom.cn, Qiliang Yuan
Subject: [PATCH] sched/numa: Optimize NUMA placement algorithm complexity from O(Nodes) to O(Active_Nodes)
Date: Thu, 22 Jan 2026 11:16:47 -0500
Message-ID: <20260122161647.142704-2-realwujing@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260122161647.142704-1-realwujing@gmail.com>
References: <20260122161647.142704-1-realwujing@gmail.com>

On systems with a large number of NUMA nodes, periodically scanning every
online node for fault decay and task placement becomes a bottleneck.

Introduce 'numa_faults_nodes_mask' in task_struct to track the nodes on
which the task has actually incurred faults. Replacing
for_each_online_node() with for_each_node_mask() over this mask reduces
the search space and the decay overhead, especially for tasks whose
memory footprint is localized to a few nodes.
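For intuition, here is a minimal userspace sketch (not part of the patch;
node count and fault values are made up for illustration) of the same idea:
record faults in a per-task bitmask analogous to numa_faults_nodes_mask and
walk only the set bits, so the placement scan costs O(active nodes) rather
than O(all nodes):

	/*
	 * Illustrative userspace analogue only: a bitmask of nodes that
	 * have seen faults, and a scan that visits only those nodes.
	 */
	#include <stdio.h>

	#define MAX_NODES 64

	static unsigned long faults_mask;        /* analogue of numa_faults_nodes_mask */
	static unsigned long faults[MAX_NODES];  /* per-node fault counters */

	static void record_fault(int node, unsigned long pages)
	{
		faults[node] += pages;
		faults_mask |= 1UL << node;      /* analogue of node_set() */
	}

	static int find_busiest_node(void)
	{
		unsigned long mask = faults_mask;
		unsigned long max = 0;
		int best = -1;

		/* Visit only nodes with recorded faults: O(active nodes). */
		while (mask) {
			int node = __builtin_ctzl(mask);

			mask &= mask - 1;        /* clear lowest set bit */
			if (faults[node] > max) {
				max = faults[node];
				best = node;
			}
		}
		return best;
	}

	int main(void)
	{
		record_fault(3, 128);
		record_fault(17, 512);
		printf("preferred node: %d\n", find_busiest_node());
		return 0;
	}

On a 64-node machine where a task only ever faults on two nodes, the scan
above touches two entries instead of sixty-four, which is the effect the
patch aims for in task_numa_placement().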
Signed-off-by: Qiliang Yuan
Signed-off-by: Qiliang Yuan
---
 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d395f2810fac..2c426e10c9d5 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1397,6 +1397,7 @@ struct task_struct {
 	 */
 	unsigned long			*numa_faults;
 	unsigned long			total_numa_faults;
+	nodemask_t			numa_faults_nodes_mask;
 
 	/*
 	 * numa_faults_locality tracks if faults recorded during the last
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e71302282671..44cf35c43684 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2538,8 +2538,9 @@ static int task_numa_migrate(struct task_struct *p)
 	 */
 	ng = deref_curr_numa_group(p);
 	if (env.best_cpu == -1 || (ng && ng->active_nodes > 1)) {
-		for_each_node_state(nid, N_CPU) {
-			if (nid == env.src_nid || nid == p->numa_preferred_nid)
+		for_each_node_mask(nid, p->numa_faults_nodes_mask) {
+			if (nid == env.src_nid || nid == p->numa_preferred_nid ||
+			    !node_state(nid, N_CPU))
 				continue;
 
 			dist = node_distance(env.src_nid, env.dst_nid);
@@ -2892,11 +2893,12 @@ static void task_numa_placement(struct task_struct *p)
 	}
 
 	/* Find the node with the highest number of faults */
-	for_each_online_node(nid) {
+	for_each_node_mask(nid, p->numa_faults_nodes_mask) {
 		/* Keep track of the offsets in numa_faults array */
 		int mem_idx, membuf_idx, cpu_idx, cpubuf_idx;
 		unsigned long faults = 0, group_faults = 0;
 		int priv;
+		bool node_has_faults = false;
 
 		for (priv = 0; priv < NR_NUMA_HINT_FAULT_TYPES; priv++) {
 			long diff, f_diff, f_weight;
@@ -2928,6 +2930,10 @@ static void task_numa_placement(struct task_struct *p)
 			p->numa_faults[cpu_idx] += f_diff;
 			faults += p->numa_faults[mem_idx];
 			p->total_numa_faults += diff;
+
+			if (p->numa_faults[mem_idx] || p->numa_faults[cpu_idx])
+				node_has_faults = true;
+
 			if (ng) {
 				/*
 				 * safe because we can only change our own group
@@ -2952,6 +2958,9 @@ static void task_numa_placement(struct task_struct *p)
 			max_faults = group_faults;
 			max_nid = nid;
 		}
+
+		if (!node_has_faults)
+			node_clear(nid, p->numa_faults_nodes_mask);
 	}
 
 	/* Cannot migrate task to CPU-less node */
@@ -3209,6 +3218,8 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 
 	p->numa_faults[task_faults_idx(NUMA_MEMBUF, mem_node, priv)] += pages;
 	p->numa_faults[task_faults_idx(NUMA_CPUBUF, cpu_node, priv)] += pages;
+	node_set(mem_node, p->numa_faults_nodes_mask);
+	node_set(cpu_node, p->numa_faults_nodes_mask);
 	p->numa_faults_locality[local] += pages;
 }
 
@@ -3545,6 +3556,7 @@ void init_numa_balancing(u64 clone_flags, struct task_struct *p)
 	/* Protect against double add, see task_tick_numa and task_numa_work */
 	p->numa_work.next	= &p->numa_work;
 	p->numa_faults		= NULL;
+	nodes_clear(p->numa_faults_nodes_mask);
 	p->numa_pages_migrated	= 0;
 	p->total_numa_faults	= 0;
 	RCU_INIT_POINTER(p->numa_group, NULL);
-- 
2.51.0