From: Tim Chen
To: Peter Zijlstra
Cc: Tim C Chen, Juri Lelli, Vincent Guittot, Ricardo Neri, "Ravi V. Shankar", Ben Segall, Daniel Bristot de Oliveira, Dietmar Eggemann, Len Brown, Mel Gorman, "Rafael J. Wysocki", Srinivas Pandruvada, Steven Rostedt, Valentin Schneider, Ionela Voinescu, x86@kernel.org, linux-kernel@vger.kernel.org, Shrikanth Hegde, Srikar Dronamraju, naveen.n.rao@linux.vnet.ibm.com, Yicong Yang, Barry Song, Chen Yu, Hillf Danton
Subject: [Patch v3 2/6] sched/topology: Record number of cores in sched group
Date: Fri, 7 Jul 2023 15:57:01 -0700
Message-Id: <04641eeb0e95c21224352f5743ecb93dfac44654.1688770494.git.tim.c.chen@linux.intel.com>

From: Tim C Chen

When balancing sibling domains that have different numbers of cores,
the number of tasks in each sibling domain should be proportional to
the number of cores in that domain. In preparation for implementing
such a policy, record the number of cores in a scheduling group.
Signed-off-by: Tim Chen
Reviewed-by: Valentin Schneider
---
 kernel/sched/sched.h    |  1 +
 kernel/sched/topology.c | 10 +++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3d0eb36350d2..5f7f36e45b87 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1860,6 +1860,7 @@ struct sched_group {
 	atomic_t		ref;

 	unsigned int		group_weight;
+	unsigned int		cores;
 	struct sched_group_capacity *sgc;
 	int			asym_prefer_cpu;	/* CPU of highest priority in group */
 	int			flags;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 6d5628fcebcf..6b099dbdfb39 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 {
 	struct sched_group *sg = sd->groups;
+	struct cpumask *mask = sched_domains_tmpmask2;

 	WARN_ON(!sg);

 	do {
-		int cpu, max_cpu = -1;
+		int cpu, cores = 0, max_cpu = -1;

 		sg->group_weight = cpumask_weight(sched_group_span(sg));

+		cpumask_copy(mask, sched_group_span(sg));
+		for_each_cpu(cpu, mask) {
+			cores++;
+			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
+		}
+		sg->cores = cores;
+
 		if (!(sd->flags & SD_ASYM_PACKING))
			goto next;

-- 
2.32.0
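
For readers following along outside the scheduler code, below is a minimal
userspace sketch (not kernel code, and not part of this patch) of the
core-counting loop added to init_sched_groups_capacity(): walk the group's
CPU span, count one core per CPU visited, and strip that CPU's SMT siblings
from the scratch mask so they are never visited again. The 8-CPU, 2-way-SMT
topology and the smt_sibling[] table are made up for illustration; plain
bitmask operations stand in for the kernel's cpumask helpers.

	/* Hypothetical 8-CPU topology: CPUs 2i and 2i+1 are SMT siblings. */
	#include <stdio.h>

	#define NR_CPUS 8

	int main(void)
	{
		unsigned int smt_sibling[NR_CPUS][2] = {
			{0, 1}, {0, 1}, {2, 3}, {2, 3},
			{4, 5}, {4, 5}, {6, 7}, {6, 7},
		};
		unsigned long span = 0xffUL;	/* group spans CPUs 0-7 */
		unsigned long mask = span;	/* scratch copy, like sched_domains_tmpmask2 */
		int cpu, cores = 0;

		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!(mask & (1UL << cpu)))
				continue;	/* already cleared as an SMT sibling */
			cores++;
			/* equivalent of cpumask_andnot(mask, mask, cpu_smt_mask(cpu)) */
			mask &= ~((1UL << smt_sibling[cpu][0]) |
				  (1UL << smt_sibling[cpu][1]));
		}

		printf("group_weight=%d cores=%d\n", NR_CPUS, cores);	/* prints 8 and 4 */
		return 0;
	}

Consuming the scratch mask as the loop goes is what keeps each physical core
from being counted once per SMT thread, so the whole thing stays a single
O(group_weight) pass.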