From: Tim Chen
To: Peter Zijlstra
Cc: Tim C Chen, Juri Lelli, Vincent Guittot, Ricardo Neri, "Ravi V. Shankar", Ben Segall, Daniel Bristot de Oliveira, Dietmar Eggemann, Len Brown, Mel Gorman, "Rafael J. Wysocki", Srinivas Pandruvada, Steven Rostedt, Valentin Schneider, Ionela Voinescu, x86@kernel.org, linux-kernel@vger.kernel.org, Shrikanth Hegde, Srikar Dronamraju, naveen.n.rao@linux.vnet.ibm.com, Yicong Yang, Barry Song, Chen Yu, Hillf Danton
Subject: [Patch v2 2/6] sched/topology: Record number of cores in sched group
Date: Thu, 8 Jun 2023 15:32:28 -0700
Message-Id: 
X-Mailer: git-send-email 2.32.0
In-Reply-To: 
References: 

From: Tim C Chen

When balancing sibling domains that have different numbers of cores,
the number of tasks in each sibling domain should be proportional to
the number of cores in that domain. In preparation for implementing
such a policy, record the number of cores in a scheduling group.
Signed-off-by: Tim Chen
---
 kernel/sched/sched.h    |  1 +
 kernel/sched/topology.c | 10 +++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3d0eb36350d2..5f7f36e45b87 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1860,6 +1860,7 @@ struct sched_group {
 	atomic_t		ref;
 
 	unsigned int		group_weight;
+	unsigned int		cores;
 	struct sched_group_capacity *sgc;
 	int			asym_prefer_cpu;	/* CPU of highest priority in group */
 	int			flags;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 6d5628fcebcf..6b099dbdfb39 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 {
 	struct sched_group *sg = sd->groups;
+	struct cpumask *mask = sched_domains_tmpmask2;
 
 	WARN_ON(!sg);
 
 	do {
-		int cpu, max_cpu = -1;
+		int cpu, cores = 0, max_cpu = -1;
 
 		sg->group_weight = cpumask_weight(sched_group_span(sg));
 
+		cpumask_copy(mask, sched_group_span(sg));
+		for_each_cpu(cpu, mask) {
+			cores++;
+			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
+		}
+		sg->cores = cores;
+
 		if (!(sd->flags & SD_ASYM_PACKING))
 			goto next;
 
-- 
2.32.0
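
[Editorial note, not part of the patch: the new loop in
init_sched_groups_capacity() counts one core per visit and then strips
that CPU's SMT siblings from the working mask, so each physical core is
counted exactly once no matter how many hyperthreads it exposes in the
group span. The following standalone userspace sketch mirrors that walk
with plain bitmasks; the 8-CPU, 2-way-SMT topology and the helper names
are made up for illustration and are not kernel code.]

/*
 * Sketch of the core-counting walk: visit each CPU still left in the
 * group mask, count one core, then clear that CPU's SMT siblings from
 * the mask so the core is not counted again.
 */
#include <stdio.h>

#define NR_CPUS 8

/* Hypothetical SMT sibling masks: CPUs {0,4}, {1,5}, {2,6}, {3,7} pair up. */
static const unsigned int smt_mask[NR_CPUS] = {
	0x11, 0x22, 0x44, 0x88, 0x11, 0x22, 0x44, 0x88,
};

static int count_cores(unsigned int group_span)
{
	unsigned int mask = group_span;		/* like cpumask_copy() */
	int cpu, cores = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(mask & (1u << cpu)))
			continue;		/* not (or no longer) in the mask */
		cores++;
		mask &= ~smt_mask[cpu];		/* like cpumask_andnot() */
	}
	return cores;
}

int main(void)
{
	/* Group spanning CPUs 0-7 (both siblings of each core): 4 cores. */
	printf("cores = %d\n", count_cores(0xff));
	/* Group spanning only CPUs 0-3 (one sibling per core): still 4 cores. */
	printf("cores = %d\n", count_cores(0x0f));
	return 0;
}

[Both calls print "cores = 4", which is the point of the new sg->cores
field: it reflects physical cores rather than the SMT-inflated
group_weight.]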