From nobody Fri Dec 19 21:09:59 2025
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy,
    Vincent Guittot
Shenoy" , Vincent Guittot Cc: Tim Chen , Juri Lelli , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Madadi Vineeth Reddy , Hillf Danton , Shrikanth Hegde , Jianyong Wu , Yangyu Chen , Tingyin Duan , Vern Hao , Vern Hao , Len Brown , Aubrey Li , Zhao Liu , Chen Yu , Chen Yu , Adam Li , Aaron Lu , Tim Chen , linux-kernel@vger.kernel.org Subject: [PATCH v2 14/23] sched/cache: Consider LLC preference when selecting tasks for load balancing Date: Wed, 3 Dec 2025 15:07:33 -0800 Message-Id: <048601436d24f19e84c0a002e1c5897f95853276.1764801860.git.tim.c.chen@linux.intel.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Currently, task selection from the busiest runqueue ignores LLC preferences. Reorder tasks in the busiest queue to prioritize selection as follows: 1. Tasks preferring the destination CPU's LLC 2. Tasks with no LLC preference 3. Tasks preferring an LLC different from their current one 4. Tasks preferring the LLC they are currently on This improves the likelihood that tasks are migrated to their preferred LLC. Signed-off-by: Tim Chen --- Notes: v1->v2: No change. kernel/sched/fair.c | 66 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 65 insertions(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index aed3fab98d7c..dd09a816670e 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -10092,6 +10092,68 @@ static struct task_struct *detach_one_task(struct = lb_env *env) return NULL; } =20 +#ifdef CONFIG_SCHED_CACHE +/* + * Prepare lists to detach tasks in the following order: + * 1. tasks that prefer dst cpu's LLC + * 2. tasks that have no preference in LLC + * 3. tasks that prefer LLC other than the ones they are on + * 4. tasks that prefer the LLC that they are currently on. + */ +static struct list_head +*order_tasks_by_llc(struct lb_env *env, struct list_head *tasks) +{ + struct task_struct *p; + LIST_HEAD(pref_old_llc); + LIST_HEAD(pref_new_llc); + LIST_HEAD(no_pref_llc); + LIST_HEAD(pref_other_llc); + + if (!sched_cache_enabled()) + return tasks; + + if (cpus_share_cache(env->dst_cpu, env->src_cpu)) + return tasks; + + while (!list_empty(tasks)) { + p =3D list_last_entry(tasks, struct task_struct, se.group_node); + + if (p->preferred_llc =3D=3D llc_id(env->dst_cpu)) { + list_move(&p->se.group_node, &pref_new_llc); + continue; + } + + if (p->preferred_llc =3D=3D llc_id(env->src_cpu)) { + list_move(&p->se.group_node, &pref_old_llc); + continue; + } + + if (p->preferred_llc =3D=3D -1) { + list_move(&p->se.group_node, &no_pref_llc); + continue; + } + + list_move(&p->se.group_node, &pref_other_llc); + } + + /* + * We detach tasks from list tail in detach tasks. Put tasks + * to be chosen first at end of list. + */ + list_splice(&pref_new_llc, tasks); + list_splice(&no_pref_llc, tasks); + list_splice(&pref_other_llc, tasks); + list_splice(&pref_old_llc, tasks); + return tasks; +} +#else +static inline struct list_head +*order_tasks_by_llc(struct lb_env *env, struct list_head *tasks) +{ + return tasks; +} +#endif + /* * detach_tasks() -- tries to detach up to imbalance load/util/tasks from * busiest_rq, as part of a balancing operation within domain "sd". 
 kernel/sched/fair.c | 66 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 65 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aed3fab98d7c..dd09a816670e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10092,6 +10092,68 @@ static struct task_struct *detach_one_task(struct lb_env *env)
 	return NULL;
 }
 
+#ifdef CONFIG_SCHED_CACHE
+/*
+ * Prepare lists to detach tasks in the following order:
+ * 1. tasks that prefer dst cpu's LLC
+ * 2. tasks that have no preference in LLC
+ * 3. tasks that prefer LLC other than the ones they are on
+ * 4. tasks that prefer the LLC that they are currently on.
+ */
+static struct list_head
+*order_tasks_by_llc(struct lb_env *env, struct list_head *tasks)
+{
+	struct task_struct *p;
+	LIST_HEAD(pref_old_llc);
+	LIST_HEAD(pref_new_llc);
+	LIST_HEAD(no_pref_llc);
+	LIST_HEAD(pref_other_llc);
+
+	if (!sched_cache_enabled())
+		return tasks;
+
+	if (cpus_share_cache(env->dst_cpu, env->src_cpu))
+		return tasks;
+
+	while (!list_empty(tasks)) {
+		p = list_last_entry(tasks, struct task_struct, se.group_node);
+
+		if (p->preferred_llc == llc_id(env->dst_cpu)) {
+			list_move(&p->se.group_node, &pref_new_llc);
+			continue;
+		}
+
+		if (p->preferred_llc == llc_id(env->src_cpu)) {
+			list_move(&p->se.group_node, &pref_old_llc);
+			continue;
+		}
+
+		if (p->preferred_llc == -1) {
+			list_move(&p->se.group_node, &no_pref_llc);
+			continue;
+		}
+
+		list_move(&p->se.group_node, &pref_other_llc);
+	}
+
+	/*
+	 * We detach tasks from the list tail in detach_tasks(). Put the
+	 * tasks to be chosen first at the end of the list.
+	 */
+	list_splice(&pref_new_llc, tasks);
+	list_splice(&no_pref_llc, tasks);
+	list_splice(&pref_other_llc, tasks);
+	list_splice(&pref_old_llc, tasks);
+	return tasks;
+}
+#else
+static inline struct list_head
+*order_tasks_by_llc(struct lb_env *env, struct list_head *tasks)
+{
+	return tasks;
+}
+#endif
+
 /*
  * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
  * busiest_rq, as part of a balancing operation within domain "sd".
@@ -10100,7 +10162,7 @@ static struct task_struct *detach_one_task(struct lb_env *env)
  */
 static int detach_tasks(struct lb_env *env)
 {
-	struct list_head *tasks = &env->src_rq->cfs_tasks;
+	struct list_head *tasks;
 	unsigned long util, load;
 	struct task_struct *p;
 	int detached = 0;
@@ -10119,6 +10181,8 @@ static int detach_tasks(struct lb_env *env)
 	if (env->imbalance <= 0)
 		return 0;
 
+	tasks = order_tasks_by_llc(env, &env->src_rq->cfs_tasks);
+
 	while (!list_empty(tasks)) {
 		/*
 		 * We don't want to steal all, otherwise we may be treated likewise,
-- 
2.32.0
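
A subtlety in order_tasks_by_llc() above: list_splice() inserts at the
head of the list, while detach_tasks() pops from the tail, so the list
spliced first (pref_new_llc) is the one detached first. The following
self-contained sketch demonstrates that inversion with a simplified
stand-in for the kernel's list helpers (list_init(), list_add_tail()
and list_splice_head() here are hypothetical re-implementations, not
<linux/list.h>).

#include <stdio.h>

struct node {
	const char *label;
	struct node *prev, *next;
};

static void list_init(struct node *head)
{
	head->prev = head->next = head;
}

static void list_add_tail(struct node *n, struct node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* Splice "src" at the head of "dst", as list_splice() does. */
static void list_splice_head(struct node *src, struct node *dst)
{
	if (src->next == src)
		return;			/* nothing to splice */
	src->next->prev = dst;
	src->prev->next = dst->next;
	dst->next->prev = src->prev;
	dst->next = src->next;
	list_init(src);
}

int main(void)
{
	struct node tasks, pref_new, no_pref, pref_other, pref_old;
	struct node a = { "pref_new" }, b = { "no_pref" },
		    c = { "pref_other" }, d = { "pref_old" };

	list_init(&tasks);
	list_init(&pref_new);	list_add_tail(&a, &pref_new);
	list_init(&no_pref);	list_add_tail(&b, &no_pref);
	list_init(&pref_other);	list_add_tail(&c, &pref_other);
	list_init(&pref_old);	list_add_tail(&d, &pref_old);

	/* Same splice order as order_tasks_by_llc(). */
	list_splice_head(&pref_new, &tasks);
	list_splice_head(&no_pref, &tasks);
	list_splice_head(&pref_other, &tasks);
	list_splice_head(&pref_old, &tasks);

	/* Walk from the tail, as detach_tasks() does. */
	for (struct node *n = tasks.prev; n != &tasks; n = n->prev)
		printf("%s\n", n->label);
	return 0;
}

The walk prints pref_new, no_pref, pref_other, pref_old: the
most-preferred bucket is consumed first even though it was spliced in
at the head.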