From nobody Sun Apr 12 21:00:58 2026
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy,
    Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
    Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
    Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu,
    Josh Don, Gavin Guo, Qais Yousef, Libo Chen,
    linux-kernel@vger.kernel.org
Subject: [Patch v4 13/22] sched/cache: Add migrate_llc_task migration type for cache-aware balancing
Date: Wed, 1 Apr 2026 14:52:25 -0700
X-Mailer: git-send-email 2.32.0

Introduce a new migration type, migrate_llc_task, to support cache-aware
load balancing.

After identifying the busiest sched_group (the one with the most tasks
preferring the destination LLC), mark the migration with this type.
During load balancing, each runqueue in that busiest sched_group is then
examined, and the runqueue with the highest number of tasks preferring
the destination CPU's LLC is selected as the busiest runqueue.

Co-developed-by: Chen Yu
Signed-off-by: Chen Yu
Signed-off-by: Tim Chen
---
Notes:
    v3->v4: No change.
 kernel/sched/fair.c | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c032eeebe191..e0e618cd4e15 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9768,7 +9768,8 @@ enum migration_type {
 	migrate_load = 0,
 	migrate_util,
 	migrate_task,
-	migrate_misfit
+	migrate_misfit,
+	migrate_llc_task
 };
 
 #define LBF_ALL_PINNED	0x01
@@ -10382,6 +10383,10 @@ static int detach_tasks(struct lb_env *env)
 
 			env->imbalance = 0;
 			break;
+
+		case migrate_llc_task:
+			env->imbalance--;
+			break;
 		}
 
 		detach_task(p, env);
@@ -12022,6 +12027,15 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return;
 	}
 
+#ifdef CONFIG_SCHED_CACHE
+	if (busiest->group_type == group_llc_balance) {
+		/* Move a task that prefers the local LLC */
+		env->migration_type = migrate_llc_task;
+		env->imbalance = 1;
+		return;
+	}
+#endif
+
 	if (busiest->group_type == group_imbalanced) {
 		/*
 		 * In the group_imb case we cannot rely on group-wide averages
@@ -12328,7 +12342,10 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 {
 	struct rq *busiest = NULL, *rq;
 	unsigned long busiest_util = 0, busiest_load = 0, busiest_capacity = 1;
+	unsigned int __maybe_unused busiest_pref_llc = 0;
+	struct sched_domain __maybe_unused *sd_tmp;
 	unsigned int busiest_nr = 0;
+	int __maybe_unused dst_llc;
 	int i;
 
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
@@ -12456,6 +12473,23 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 
 			break;
 
+		case migrate_llc_task:
+#ifdef CONFIG_SCHED_CACHE
+			sd_tmp = rcu_dereference_all(rq->sd);
+			dst_llc = llc_id(env->dst_cpu);
+
+			if (valid_llc_buf(sd_tmp, dst_llc)) {
+				unsigned int this_pref_llc =
+					sd_tmp->llc_counts[dst_llc];
+
+				if (busiest_pref_llc < this_pref_llc) {
+					busiest_pref_llc = this_pref_llc;
+					busiest = rq;
+				}
+			}
+#endif
+			break;
+
 		}
 	}
 
@@ -12619,6 +12653,8 @@ static void update_lb_imbalance_stat(struct lb_env *env, struct sched_domain *sd
 	case migrate_misfit:
 		__schedstat_add(sd->lb_imbalance_misfit[idle], env->imbalance);
 		break;
+	case migrate_llc_task:
+		break;
 	}
 }
 
-- 
2.32.0