From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy, Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton, Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao, Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu, Josh Don, Gavin Guo, Qais Yousef, Libo Chen, linux-kernel@vger.kernel.org
Subject: [Patch v4 14/22] sched/cache: Handle moving single tasks to/from their preferred LLC
Date: Wed, 1 Apr 2026 14:52:26 -0700
Message-Id: <9b816d8c27fabf2a9c0e1f61a6b90afe8ec4ad52.1775065312.git.tim.c.chen@linux.intel.com>

Cache aware scheduling mainly does two things:

1. Prevent a task from migrating out of its preferred LLC when not
   necessary.
2. Migrate a task to its preferred LLC when necessary.

For 1: in the generic load balance, if the busiest runqueue has only
one task, active balancing may be invoked to move it away. However,
this migration might break LLC locality, so prevent regular load
balance from migrating a task that prefers the current LLC. In this
case the load level and imbalance do not warrant breaking LLC
preference per the can_migrate_llc() policy: the benefit of LLC
locality outweighs the power efficiency gained from migrating the only
runnable task away. Before migration, check whether the task is
running on its preferred LLC, and do not move a lone task to another
LLC if doing so would pull the task away from its preferred LLC or
cause excessive imbalance between LLCs.

For 2: on the other hand, if the migration type is migrate_llc_task,
there are tasks on env->src_cpu that want to be migrated to their
preferred LLC, so launch the active load balance anyway.
Co-developed-by: Chen Yu
Signed-off-by: Chen Yu
Signed-off-by: Tim Chen
---
Notes:
    v3->v4:
        Use rcu_dereference_all() in alb_break_llc().
        Add comments to explain the scenario where active balancing
        is inhibited for cache aware aggregation.

 kernel/sched/fair.c | 54 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e0e618cd4e15..fef916afa1d5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10115,12 +10115,60 @@ static __maybe_unused enum llc_mig can_migrate_llc_task(int src_cpu, int dst_cpu
 			   task_util(p), to_pref);
 }
 
+/*
+ * Check if active load balance breaks LLC locality in
+ * terms of cache aware load balance. The load level and
+ * imbalance do not warrant breaking LLC preference per
+ * the can_migrate_llc() policy. Here, the benefit of
+ * LLC locality outweighs the power efficiency gained from
+ * migrating the only runnable task away.
+ */
+static inline bool
+alb_break_llc(struct lb_env *env)
+{
+	if (!sched_cache_enabled())
+		return false;
+
+	if (cpus_share_cache(env->src_cpu, env->dst_cpu))
+		return false;
+	/*
+	 * All tasks prefer to stay on their current CPU.
+	 * Do not pull a task from its preferred CPU if:
+	 * 1. It is the only task running there (not too imbalanced); OR
+	 * 2. Migrating it away from its preferred LLC would violate
+	 *    the cache-aware scheduling policy.
+	 */
+	if (env->src_rq->nr_pref_llc_running &&
+	    env->src_rq->nr_pref_llc_running == env->src_rq->cfs.h_nr_runnable) {
+		unsigned long util = 0;
+		struct task_struct *cur;
+
+		if (env->src_rq->nr_running <= 1)
+			return true;
+
+		cur = rcu_dereference_all(env->src_rq->curr);
+		if (cur)
+			util = task_util(cur);
+
+		if (can_migrate_llc(env->src_cpu, env->dst_cpu,
+				    util, false) == mig_forbid)
+			return true;
+	}
+
+	return false;
+}
 #else
 static inline bool get_llc_stats(int cpu, unsigned long *util,
 				 unsigned long *cap)
 {
 	return false;
 }
+
+static inline bool
+alb_break_llc(struct lb_env *env)
+{
+	return false;
+}
 #endif
 
 /*
  * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
@@ -12541,6 +12589,9 @@ static int need_active_balance(struct lb_env *env)
 {
 	struct sched_domain *sd = env->sd;
 
+	if (alb_break_llc(env))
+		return 0;
+
 	if (asym_active_balance(env))
 		return 1;
 
@@ -12560,7 +12611,8 @@ static int need_active_balance(struct lb_env *env)
 		return 1;
 	}
 
-	if (env->migration_type == migrate_misfit)
+	if (env->migration_type == migrate_misfit ||
+	    env->migration_type == migrate_llc_task)
 		return 1;
 
 	return 0;
-- 
2.32.0