From nobody Fri Dec 19 20:51:05 2025
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy,
    Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
    Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
    Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/23] sched/cache: Respect LLC preference in task migration and detach
Date: Wed, 3 Dec 2025 15:07:34 -0800
Message-Id: <1c75f54a2e259737eb9b15c98a5c1d1f142fdef6.1764801860.git.tim.c.chen@linux.intel.com>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

During the final step of load balancing, make can_migrate_task()
consider a task's LLC preference before moving the task out of its
preferred LLC. Additionally, add checks in detach_tasks() to avoid
selecting tasks that prefer their current LLC.

Co-developed-by: Chen Yu
Signed-off-by: Chen Yu
Signed-off-by: Tim Chen
---
Notes:
    v1->v2:

    Leave out tasks under core scheduling from the cache aware
    load balance. (K Prateek Nayak)

    Reduce the degree of honoring preferred_llc in detach_tasks().
    If certain conditions are met, stop migrating tasks that prefer
    their current LLC and instead continue load balancing from
    other busiest runqueues. (K Prateek Nayak)
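To illustrate the effect of the gate added to can_migrate_task(), the
decision can be modeled in user space. The sketch below is illustrative
only, not the kernel code: llc_of(), the fixed four-CPUs-per-LLC
topology, and the trimmed-down enum llc_mig and struct task are
stand-ins for the kernel's helpers and types.

/*
 * Stand-alone model of the gate added to can_migrate_task(); a sketch
 * under simplified assumptions, not the kernel implementation.
 * Build: cc -o llc_gate llc_gate.c
 */
#include <stdbool.h>
#include <stdio.h>

/* The kernel enum has more states; only the one tested here is kept. */
enum llc_mig { mig_allow, mig_forbid };

struct task {
	int preferred_llc;		/* -1: no LLC preference recorded */
	unsigned long core_cookie;	/* non-zero: task is core scheduled */
};

/* Toy topology stand-in for llc_id(): four CPUs per LLC. */
static int llc_of(int cpu)
{
	return cpu / 4;
}

/* Simplified can_migrate_llc_task(): forbid leaving the preferred LLC. */
static enum llc_mig can_migrate_llc_task(int src_cpu, int dst_cpu,
					 struct task *p)
{
	if (p->preferred_llc == -1 || llc_of(src_cpu) == llc_of(dst_cpu))
		return mig_allow;

	if (llc_of(src_cpu) == p->preferred_llc)
		return mig_forbid;	/* would pull p off its preferred LLC */

	return mig_allow;
}

/* The early return added to can_migrate_task(), minus kernel plumbing. */
static bool migration_vetoed(int src_cpu, int dst_cpu, struct task *p)
{
	/* Core scheduled tasks are left out of cache aware balance (v2). */
	return can_migrate_llc_task(src_cpu, dst_cpu, p) == mig_forbid &&
	       !p->core_cookie;
}

int main(void)
{
	struct task plain = { .preferred_llc = 0, .core_cookie = 0 };
	struct task cored = { .preferred_llc = 0, .core_cookie = 1 };

	printf("plain task, LLC0 -> LLC1: vetoed=%d\n", migration_vetoed(0, 4, &plain));
	printf("cored task, LLC0 -> LLC1: vetoed=%d\n", migration_vetoed(0, 4, &cored));
	printf("plain task, within LLC0:  vetoed=%d\n", migration_vetoed(0, 1, &plain));
	return 0;
}

The !core_cookie check mirrors the v2 change above: tasks under core
scheduling are left out of cache aware load balance, so the veto never
applies to them.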
Shenoy" , Vincent Guittot Cc: Tim Chen , Juri Lelli , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Madadi Vineeth Reddy , Hillf Danton , Shrikanth Hegde , Jianyong Wu , Yangyu Chen , Tingyin Duan , Vern Hao , Vern Hao , Len Brown , Aubrey Li , Zhao Liu , Chen Yu , Chen Yu , Adam Li , Aaron Lu , Tim Chen , linux-kernel@vger.kernel.org Subject: [PATCH v2 15/23] sched/cache: Respect LLC preference in task migration and detach Date: Wed, 3 Dec 2025 15:07:34 -0800 Message-Id: <1c75f54a2e259737eb9b15c98a5c1d1f142fdef6.1764801860.git.tim.c.chen@linux.intel.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" During the final step of load balancing, can_migrate_task() now considers a task's LLC preference before moving it out of its preferred LLC. Additionally, add checks in detach_tasks() to prevent selecting tasks that prefer their current LLC. Co-developed-by: Chen Yu Signed-off-by: Chen Yu Signed-off-by: Tim Chen --- Notes: v1->v2: Leave out tasks under core scheduling from the cache aware load balance. (K Prateek Nayak) =20 Reduce the degree of honoring preferred_llc in detach_tasks(). If certain conditions are met, stop migrating tasks that prefer their current LLC and instead continue load balancing from other busiest runqueues. (K Prateek Nayak) kernel/sched/fair.c | 63 ++++++++++++++++++++++++++++++++++++++++++-- kernel/sched/sched.h | 13 +++++++++ 2 files changed, 74 insertions(+), 2 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index dd09a816670e..580a967efdac 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -9852,8 +9852,8 @@ static enum llc_mig can_migrate_llc(int src_cpu, int = dst_cpu, * Check if task p can migrate from source LLC to * destination LLC in terms of cache aware load balance. */ -static __maybe_unused enum llc_mig can_migrate_llc_task(int src_cpu, int d= st_cpu, - struct task_struct *p) +static enum llc_mig can_migrate_llc_task(int src_cpu, int dst_cpu, + struct task_struct *p) { struct mm_struct *mm; bool to_pref; @@ -10025,6 +10025,13 @@ int can_migrate_task(struct task_struct *p, struct= lb_env *env) if (env->flags & LBF_ACTIVE_LB) return 1; =20 +#ifdef CONFIG_SCHED_CACHE + if (sched_cache_enabled() && + can_migrate_llc_task(env->src_cpu, env->dst_cpu, p) =3D=3D mig_forbid= && + !task_has_sched_core(p)) + return 0; +#endif + degrades =3D migrate_degrades_locality(p, env); if (!degrades) hot =3D task_hot(p, env); @@ -10146,12 +10153,55 @@ static struct list_head list_splice(&pref_old_llc, tasks); return tasks; } + +static bool stop_migrate_src_rq(struct task_struct *p, + struct lb_env *env, + int detached) +{ + if (!sched_cache_enabled() || p->preferred_llc =3D=3D -1 || + cpus_share_cache(env->src_cpu, env->dst_cpu) || + env->sd->nr_balance_failed) + return false; + + /* + * Stop migration for the src_rq and pull from a + * different busy runqueue in the following cases: + * + * 1. Trying to migrate task to its preferred + * LLC, but the chosen task does not prefer dest + * LLC - case 3 in order_tasks_by_llc(). This violates + * the goal of migrate_llc_task. However, we should + * stop detaching only if some tasks have been detached + * and the imbalance has been mitigated. + * + * 2. Don't detach more tasks if the remaining tasks want + * to stay. 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 8f2a779825e4..40798a06e058 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1485,6 +1485,14 @@ extern void sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags);
 extern void sched_core_get(void);
 extern void sched_core_put(void);
 
+static inline bool task_has_sched_core(struct task_struct *p)
+{
+	if (sched_core_disabled())
+		return false;
+
+	return !!p->core_cookie;
+}
+
 #else /* !CONFIG_SCHED_CORE: */
 
 static inline bool sched_core_enabled(struct rq *rq)
@@ -1524,6 +1532,11 @@ static inline bool sched_group_cookie_match(struct rq *rq,
 	return true;
 }
 
+static inline bool task_has_sched_core(struct task_struct *p)
+{
+	return false;
+}
+
 #endif /* !CONFIG_SCHED_CORE */
 
 #ifdef CONFIG_RT_GROUP_SCHED
-- 
2.32.0
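The behavior of the new task_has_sched_core() helper under both
configurations can be checked with a minimal user-space model.
MODEL_SCHED_CORE stands in for CONFIG_SCHED_CORE and struct task for
task_struct; both names are illustrative, not kernel definitions.

/*
 * Minimal model of the new task_has_sched_core() helper under both
 * configurations; MODEL_SCHED_CORE stands in for CONFIG_SCHED_CORE and
 * struct task for task_struct.  Build: cc [-DMODEL_SCHED_CORE] model.c
 */
#include <stdbool.h>
#include <stdio.h>

struct task {
	unsigned long core_cookie;	/* non-zero for core scheduled tasks */
};

#ifdef MODEL_SCHED_CORE
static bool sched_core_disabled(void)
{
	return false;			/* pretend core scheduling is active */
}

static inline bool task_has_sched_core(struct task *p)
{
	if (sched_core_disabled())
		return false;

	return !!p->core_cookie;
}
#else
/* Without CONFIG_SCHED_CORE the helper collapses to a false stub. */
static inline bool task_has_sched_core(struct task *p)
{
	(void)p;
	return false;
}
#endif

int main(void)
{
	struct task cookied = { .core_cookie = 1 };
	struct task plain = { .core_cookie = 0 };

	printf("cookied=%d plain=%d\n",
	       task_has_sched_core(&cookied), task_has_sched_core(&plain));
	return 0;
}

Built with cc -DMODEL_SCHED_CORE this prints cookied=1 plain=0; without
the define both report 0, matching the constant-false stub.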