From nobody Sun Apr 12 21:01:32 2026
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy,
	Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
	Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
	Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu,
	Josh Don, Gavin Guo, Qais Yousef, Libo Chen,
	linux-kernel@vger.kernel.org
Subject: [Patch v4 06/22] sched/cache: Assign preferred LLC ID to processes
Date: Wed, 1 Apr 2026 14:52:18 -0700
X-Mailer: git-send-email 2.32.0
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

With cache-aware scheduling enabled, each task is assigned a preferred
LLC ID. This allows quick identification of the LLC domain where the
task prefers to run, similar to numa_preferred_nid in NUMA balancing.
Co-developed-by: Chen Yu
Signed-off-by: Chen Yu
Signed-off-by: Tim Chen
---
Notes:
    v3->v4:
        Use WRITE_ONCE()/READ_ONCE() on p->preferred_llc
        (Madadi Vineeth Reddy)

 include/linux/sched.h |  1 +
 init/init_task.c      |  3 +++
 kernel/sched/fair.c   | 43 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index bd33f5b9096b..526108acc483 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1408,6 +1408,7 @@ struct task_struct {
 
 #ifdef CONFIG_SCHED_CACHE
 	struct callback_head		cache_work;
+	int				preferred_llc;
 #endif
 
 	struct rseq_data		rseq;
diff --git a/init/init_task.c b/init/init_task.c
index 5c838757fc10..9f964898d40e 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -214,6 +214,9 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
 	.numa_group	= NULL,
 	.numa_faults	= NULL,
 #endif
+#ifdef CONFIG_SCHED_CACHE
+	.preferred_llc	= -1,
+#endif
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	.kasan_depth	= 1,
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6244443ecdc0..1eda689e0136 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1366,11 +1366,43 @@ static unsigned long fraction_mm_sched(struct rq *rq,
 	return div64_u64(NICE_0_LOAD * pcpu_sched->runtime, rq->cpu_runtime + 1);
 }
 
+static int get_pref_llc(struct task_struct *p, struct mm_struct *mm)
+{
+	int mm_sched_llc = -1;
+
+	if (!mm)
+		return -1;
+
+	if (mm->sc_stat.cpu != -1) {
+		mm_sched_llc = llc_id(mm->sc_stat.cpu);
+
+#ifdef CONFIG_NUMA_BALANCING
+		/*
+		 * Don't assign a preferred LLC if it conflicts with
+		 * NUMA balancing. This can happen when sched_setnuma()
+		 * gets called; however, it is not much of an issue
+		 * because we expect account_mm_sched() to get called
+		 * fairly regularly -- at a higher rate than
+		 * sched_setnuma() at least -- and thus the conflict
+		 * only exists for a short period of time.
+		 */
+		if (static_branch_likely(&sched_numa_balancing) &&
+		    p->numa_preferred_nid >= 0 &&
+		    cpu_to_node(mm->sc_stat.cpu) != p->numa_preferred_nid)
+			mm_sched_llc = -1;
+#endif
+	}
+
+	return mm_sched_llc;
+}
+
 static inline
 void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
 {
 	struct sched_cache_time *pcpu_sched;
 	struct mm_struct *mm = p->mm;
+	int mm_sched_llc = -1;
 	unsigned long epoch;
 
 	if (!sched_cache_enabled())
@@ -1404,6 +1436,11 @@ void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
 		if (mm->sc_stat.cpu != -1)
 			mm->sc_stat.cpu = -1;
 	}
+
+	mm_sched_llc = get_pref_llc(p, mm);
+
+	if (READ_ONCE(p->preferred_llc) != mm_sched_llc)
+		WRITE_ONCE(p->preferred_llc, mm_sched_llc);
 }
 
 static void task_tick_cache(struct rq *rq, struct task_struct *p)
@@ -1577,6 +1614,12 @@ void init_sched_mm(struct task_struct *p) { }
 
 static void task_tick_cache(struct rq *rq, struct task_struct *p) { }
 
+static inline int get_pref_llc(struct task_struct *p,
+			       struct mm_struct *mm)
+{
+	return -1;
+}
+
 #endif /* CONFIG_SCHED_CACHE */
 
 /*
-- 
2.32.0