From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, "Gautham R. Shenoy",
	Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
	Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
	Vern Hao, Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Chen Yu, Adam Li,
	Aaron Lu, Tim Chen, Josh Don, Gavin Guo, Qais Yousef, Libo Chen,
	linux-kernel@vger.kernel.org
Subject: [PATCH v3 07/21] sched/cache: Introduce per CPU's tasks LLC preference counter
Date: Tue, 10 Feb 2026 14:18:47 -0800

Assign each CPU's lowest-level sched domain an array in which each
element, indexed from 0 to max_llcs - 1, tracks the number of tasks
preferring the corresponding LLC. Since each CPU has its own dedicated
sd, each CPU gets a dedicated set of task LLC preference counters. For
example, sd->pf[3] = 2 means that 2 tasks on this runqueue prefer to
run within LLC3. The load balancer can use this information to identify
busy runqueues and migrate tasks to their preferred LLC domains.

The array is reallocated at runtime on every sched domain rebuild.
Introduce the buffer allocation mechanism here; the statistics
themselves are computed in the subsequent patch.

Note: each CPU's LLC preference statistics are reset on sched domain
rebuild and may temporarily undercount, until the CPU becomes idle and
the count is cleared. This is a trade-off to avoid complex data
synchronization across sched domain rebuilds.

Suggested-by: Peter Zijlstra (Intel)
Suggested-by: K Prateek Nayak
Co-developed-by: Chen Yu
Signed-off-by: Chen Yu
Signed-off-by: Tim Chen
---

Notes:
    v2->v3: Allocate preferred LLC buffer in rq->sd rather than the rq.
            That way it automagically gets reallocated and the old buffer
            gets recycled during sched domain rebuild. (Peter Zijlstra)

 include/linux/sched/topology.h |  4 +++
 kernel/sched/sched.h           |  2 ++
 kernel/sched/topology.c        | 64 +++++++++++++++++++++++++++++++++-
 3 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index a4e2fb31f2fd..3aa6c101b2e4 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -102,6 +102,10 @@ struct sched_domain {
 	u64 max_newidle_lb_cost;
 	unsigned long last_decay_max_lb_cost;
 
+#ifdef CONFIG_SCHED_CACHE
+	unsigned int *pf;
+#endif
+
 #ifdef CONFIG_SCHEDSTATS
 	/* sched_balance_rq() stats */
 	unsigned int lb_count[CPU_MAX_IDLE_TYPES];
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 35cea6aa32a4..ac8c7ac1ac0d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3903,6 +3903,8 @@ static inline void mm_cid_switch_to(struct task_struct *prev, struct task_struct
 #endif /* !CONFIG_SCHED_MM_CID */
 
 #ifdef CONFIG_SCHED_CACHE
+extern int max_llcs;
+
 static inline bool sched_cache_enabled(void)
 {
 	return false;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index ca46b5cf7f78..dae78b5915a7 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -21,6 +21,7 @@ void sched_domains_mutex_unlock(void)
 static cpumask_var_t sched_domains_tmpmask;
 static cpumask_var_t sched_domains_tmpmask2;
 static int tl_max_llcs;
+int max_llcs;
 
 static int __init sched_debug_setup(char *str)
 {
@@ -628,6 +629,11 @@ static void destroy_sched_domain(struct sched_domain *sd)
 
 	if (sd->shared && atomic_dec_and_test(&sd->shared->ref))
 		kfree(sd->shared);
+
+#ifdef CONFIG_SCHED_CACHE
+	/* only the bottom sd has pref_llc array */
+	kfree(sd->pf);
+#endif
 	kfree(sd);
 }
 
@@ -747,10 +753,15 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 	if (sd && sd_degenerate(sd)) {
 		tmp = sd;
 		sd = sd->parent;
-		destroy_sched_domain(tmp);
+
 		if (sd) {
 			struct sched_group *sg = sd->groups;
 
+#ifdef CONFIG_SCHED_CACHE
+			/* move pf to parent as child is being destroyed */
+			sd->pf = tmp->pf;
+			tmp->pf = NULL;
+#endif
 			/*
 			 * sched groups hold the flags of the child sched
 			 * domain for convenience. Clear such flags since
@@ -762,6 +773,8 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 
 			sd->child = NULL;
 		}
+
+		destroy_sched_domain(tmp);
 	}
 
 	sched_domain_debug(sd, cpu);
@@ -787,6 +800,46 @@ enum s_alloc {
 	sa_none,
 };
 
+#ifdef CONFIG_SCHED_CACHE
+static bool alloc_sd_pref(const struct cpumask *cpu_map,
+			  struct s_data *d)
+{
+	struct sched_domain *sd;
+	unsigned int *pf;
+	int i;
+
+	for_each_cpu(i, cpu_map) {
+		sd = *per_cpu_ptr(d->sd, i);
+		if (!sd)
+			goto err;
+
+		pf = kcalloc(tl_max_llcs, sizeof(unsigned int), GFP_KERNEL);
+		if (!pf)
+			goto err;
+
+		sd->pf = pf;
+	}
+
+	return true;
+err:
+	for_each_cpu(i, cpu_map) {
+		sd = *per_cpu_ptr(d->sd, i);
+		if (sd) {
+			kfree(sd->pf);
+			sd->pf = NULL;
+		}
+	}
+
+	return false;
+}
+#else
+static bool alloc_sd_pref(const struct cpumask *cpu_map,
+			  struct s_data *d)
+{
+	return false;
+}
+#endif
+
 /*
  * Return the canonical balance CPU for this group, this is the first CPU
  * of this group that's also in the balance mask.
@@ -2710,6 +2763,8 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
 		}
 	}
 
+	alloc_sd_pref(cpu_map, &d);
+
 	/* Attach the domains */
 	rcu_read_lock();
 	for_each_cpu(i, cpu_map) {
@@ -2723,6 +2778,13 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
 	}
 	rcu_read_unlock();
 
+	/*
+	 * Ensure we see enlarged sd->pf when we use new llc_ids and
+	 * bigger max_llcs.
+	 */
+	smp_mb();
+	max_llcs = tl_max_llcs;
+
 	if (has_asym)
 		static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
 
-- 
2.32.0