From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Gautham R. Shenoy, Vincent Guittot
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton, Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao, Len Brown, Aubrey Li, Zhao Liu, Chen Yu, Adam Li, Aaron Lu, linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/23] sched/cache: Add migrate_llc_task migration type for cache-aware balancing
Date: Wed, 3 Dec 2025 15:07:31 -0800
X-Mailer: git-send-email 2.32.0

Introduce a new migration type, migrate_llc_task, to support cache-aware
load balancing.
After identifying the busiest sched_group (the one with the most tasks
preferring the destination LLC), mark migrations with this type. During
load balancing, each runqueue in the busiest sched_group is examined,
and the runqueue with the highest number of tasks preferring the
destination CPU's LLC is selected as the busiest runqueue.

Signed-off-by: Tim Chen
---
Notes:
    v1->v2:
    Remove unnecessary cpus_share_cache() check in
    sched_balance_find_src_rq() (K Prateek Nayak)

 kernel/sched/fair.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index db555c11b5b8..529adf342ce0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9547,7 +9547,8 @@ enum migration_type {
 	migrate_load = 0,
 	migrate_util,
 	migrate_task,
-	migrate_misfit
+	migrate_misfit,
+	migrate_llc_task
 };
 
 #define LBF_ALL_PINNED	0x01
@@ -10134,6 +10135,10 @@ static int detach_tasks(struct lb_env *env)
 			env->imbalance -= util;
 			break;
 
+		case migrate_llc_task:
+			env->imbalance--;
+			break;
+
 		case migrate_task:
 			env->imbalance--;
 			break;
@@ -11766,6 +11771,15 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return;
 	}
 
+#ifdef CONFIG_SCHED_CACHE
+	if (busiest->group_type == group_llc_balance) {
+		/* Move a task that prefers the local LLC */
+		env->migration_type = migrate_llc_task;
+		env->imbalance = 1;
+		return;
+	}
+#endif
+
 	if (busiest->group_type == group_imbalanced) {
 		/*
 		 * In the group_imb case we cannot rely on group-wide averages
@@ -12073,6 +12087,10 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 	struct rq *busiest = NULL, *rq;
 	unsigned long busiest_util = 0, busiest_load = 0, busiest_capacity = 1;
 	unsigned int busiest_nr = 0;
+#ifdef CONFIG_SCHED_CACHE
+	unsigned int busiest_pref_llc = 0;
+	int dst_llc;
+#endif
 	int i;
 
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
@@ -12181,6 +12199,16 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env,
 			}
 			break;
 
+		case migrate_llc_task:
+#ifdef CONFIG_SCHED_CACHE
+			dst_llc = llc_id(env->dst_cpu);
+			if (dst_llc >= 0 &&
+			    busiest_pref_llc < rq->nr_pref_llc[dst_llc]) {
+				busiest_pref_llc = rq->nr_pref_llc[dst_llc];
+				busiest = rq;
+			}
+#endif
+			break;
 		case migrate_task:
 			if (busiest_nr < nr_running) {
 				busiest_nr = nr_running;
@@ -12363,6 +12391,8 @@ static void update_lb_imbalance_stat(struct lb_env *env, struct sched_domain *sd
 	case migrate_misfit:
 		__schedstat_add(sd->lb_imbalance_misfit[idle], env->imbalance);
 		break;
+	case migrate_llc_task:
+		break;
 	}
 }
 
-- 
2.32.0