From nobody Thu Apr 2 17:10:30 2026
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, "Gautham R.
Shenoy", Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
    Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
    Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu,
    Josh Don, Gavin Guo, Qais Yousef, Libo Chen, linux-kernel@vger.kernel.org
Subject: [PATCH v3 12/21] sched/cache: Add migrate_llc_task migration type for cache-aware balancing
Date: Tue, 10 Feb 2026 14:18:52 -0800
Message-Id: <9038c2e0d40b744d5db19138c384819717eb03e6.1770760558.git.tim.c.chen@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Introduce a new migration type, migrate_llc_task, to support cache-aware
load balancing.

After the busiest sched_group has been identified (the group with the
most tasks preferring the destination LLC), mark migrations from it with
this type. During load balancing, each runqueue in the busiest
sched_group is then examined, and the runqueue with the highest number
of tasks preferring the destination CPU's LLC is selected as the busiest
runqueue.

Signed-off-by: Tim Chen
---
Notes:
    v2->v3: Keep the enum and switch statements in the same order.
    (Peter Zijlstra)

 kernel/sched/fair.c | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 43dcf2827298..1697791ef11c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9665,7 +9665,8 @@ enum migration_type {
 	migrate_load = 0,
 	migrate_util,
 	migrate_task,
-	migrate_misfit
+	migrate_misfit,
+	migrate_llc_task
 };
 
 #define LBF_ALL_PINNED	0x01
@@ -10266,6 +10267,10 @@ static int detach_tasks(struct lb_env *env)
 
 			env->imbalance = 0;
 			break;
+
+		case migrate_llc_task:
+			env->imbalance--;
+			break;
 		}
 
 		detach_task(p, env);
@@ -11902,6 +11907,15 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return;
 	}
 
+#ifdef CONFIG_SCHED_CACHE
+	if (busiest->group_type == group_llc_balance) {
+		/* Move a task that prefers the local LLC */
+		env->migration_type = migrate_llc_task;
+		env->imbalance = 1;
+		return;
+	}
+#endif
+
 	if (busiest->group_type == group_imbalanced) {
 		/*
 		 * In the group_imb case we cannot rely on group-wide averages
@@ -12209,6 +12223,11 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 	struct rq *busiest = NULL, *rq;
 	unsigned long busiest_util = 0, busiest_load = 0, busiest_capacity = 1;
 	unsigned int busiest_nr = 0;
+#ifdef CONFIG_SCHED_CACHE
+	unsigned int busiest_pref_llc = 0;
+	struct sched_domain *sd_tmp;
+	int dst_llc;
+#endif
 	int i;
 
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
@@ -12336,6 +12355,21 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 
 			break;
 
+		case migrate_llc_task:
+#ifdef CONFIG_SCHED_CACHE
+			sd_tmp = rcu_dereference(rq->sd);
+			dst_llc = llc_id(env->dst_cpu);
+			if (valid_llc_buf(sd_tmp, dst_llc)) {
+				unsigned int this_pref_llc = sd_tmp->pf[dst_llc];
+
+				if (busiest_pref_llc < this_pref_llc) {
+					busiest_pref_llc = this_pref_llc;
+					busiest = rq;
+				}
+			}
+#endif
+			break;
+
 		}
 	}
 
@@ -12499,6 +12533,8 @@ static void update_lb_imbalance_stat(struct lb_env *env, struct sched_domain *sd
 	case migrate_misfit:
 		__schedstat_add(sd->lb_imbalance_misfit[idle], env->imbalance);
 		break;
+	case migrate_llc_task:
+		break;
 	}
 }
 
-- 
2.32.0