From: Tim Chen <tim.c.chen@linux.intel.com>
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy,
	Vincent Guittot
Cc: Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
	Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
	Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu,
	Josh Don, Gavin Guo, Qais Yousef, Libo Chen,
	linux-kernel@vger.kernel.org
Subject: [PATCH v3 05/21] sched/cache: Assign preferred LLC ID to processes
Date: Tue, 10 Feb 2026 14:18:45 -0800
Message-Id: <4a92b93edb669845e3bdca24c3ae3354b317c3eb.1770760558.git.tim.c.chen@linux.intel.com>
X-Mailer: git-send-email 2.32.0

With cache-aware scheduling enabled, each task is assigned a preferred
LLC ID. This allows quick identification of the LLC domain where the
task prefers to run, similar to numa_preferred_nid in NUMA balancing.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
Notes:
    v2->v3: Add comments around code handling NUMA balance conflict
            with cache-aware scheduling.
            (Peter Zijlstra)

            Check if NUMA balancing is disabled before checking
            numa_preferred_nid. (Jianyong Wu)

 include/linux/sched.h |  1 +
 init/init_task.c      |  3 +++
 kernel/sched/fair.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 46 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2817a21ee055..c98bd1c46088 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1411,6 +1411,7 @@ struct task_struct {

 #ifdef CONFIG_SCHED_CACHE
 	struct callback_head		cache_work;
+	int				preferred_llc;
 #endif

 	struct rseq_data		rseq;
diff --git a/init/init_task.c b/init/init_task.c
index 49b13d7c3985..baa420de2644 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -218,6 +218,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 	.numa_group	= NULL,
 	.numa_faults	= NULL,
 #endif
+#ifdef CONFIG_SCHED_CACHE
+	.preferred_llc	= -1,
+#endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	.kasan_depth	= 1,
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bf5f39a01017..0b4ed0f2809d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1273,11 +1273,43 @@ static unsigned long fraction_mm_sched(struct rq *rq,
 	return div64_u64(NICE_0_LOAD * pcpu_sched->runtime, rq->cpu_runtime + 1);
 }

+static int get_pref_llc(struct task_struct *p, struct mm_struct *mm)
+{
+	int mm_sched_llc = -1;
+
+	if (!mm)
+		return -1;
+
+	if (mm->sc_stat.cpu != -1) {
+		mm_sched_llc = llc_id(mm->sc_stat.cpu);
+
+#ifdef CONFIG_NUMA_BALANCING
+		/*
+		 * Don't assign a preferred LLC if it conflicts with
+		 * NUMA balancing. This can happen when sched_setnuma()
+		 * gets called; however, it is not much of an issue
+		 * because we expect account_mm_sched() to get called
+		 * fairly regularly -- at a higher rate than
+		 * sched_setnuma() at least -- and thus the conflict
+		 * only exists for a short period of time.
+		 */
+		if (static_branch_likely(&sched_numa_balancing) &&
+		    p->numa_preferred_nid >= 0 &&
+		    cpu_to_node(mm->sc_stat.cpu) != p->numa_preferred_nid)
+			mm_sched_llc = -1;
+#endif
+	}
+
+	return mm_sched_llc;
+}
+
 static inline
 void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
 {
 	struct sched_cache_time *pcpu_sched;
 	struct mm_struct *mm = p->mm;
+	int mm_sched_llc = -1;
 	unsigned long epoch;

 	if (!sched_cache_enabled())
@@ -1311,6 +1343,11 @@ void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
 		if (mm->sc_stat.cpu != -1)
 			mm->sc_stat.cpu = -1;
 	}
+
+	mm_sched_llc = get_pref_llc(p, mm);
+
+	if (p->preferred_llc != mm_sched_llc)
+		p->preferred_llc = mm_sched_llc;
 }

 static void task_tick_cache(struct rq *rq, struct task_struct *p)
@@ -1440,6 +1477,11 @@ void init_sched_mm(struct task_struct *p) { }

 static void task_tick_cache(struct rq *rq, struct task_struct *p) { }

+static inline int get_pref_llc(struct task_struct *p,
+			       struct mm_struct *mm)
+{
+	return -1;
+}
 #endif

 /*
-- 
2.32.0