From nobody Sat Feb 7 18:20:28 2026
From: Wangyang Guo <wangyang.guo@intel.com>
To: Andrew Morton, Radu Rendec, Ming Lei
Cc: Thomas Gleixner, Keith Busch, Jens Axboe, Christoph Hellwig,
    Sagi Grimberg, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org,
    virtualization@lists.linux-foundation.org, linux-block@vger.kernel.org,
    Wangyang Guo, Tianyou Li, Tim Chen, Dan Liang
Subject: [PATCH v2] lib/group_cpus: make group CPU cluster aware
Date: Tue, 13 Jan 2026 10:29:58 +0800
Message-ID: <20260113022958.3379650-1-wangyang.guo@intel.com>
As CPU core counts increase, the number of NVMe IRQs may be smaller than
the total number of CPUs, which forces multiple CPUs to share the same
IRQ. If the IRQ affinity and the CPU's cluster do not align, a
performance penalty can be observed on some platforms.

This patch improves IRQ affinity by grouping CPUs by cluster within each
NUMA domain, ensuring better locality between CPUs and their assigned
NVMe IRQs.

Details:
The Intel Xeon E platform packs 4 CPU cores into 1 module (cluster) that
shares an L2 cache. Suppose there are 40 CPUs in 1 NUMA domain and 11
IRQs to dispatch. The existing algorithm maps the first 7 IRQs to 4 CPUs
each and the remaining 4 IRQs to 3 CPUs each. The last 4 IRQs can then
cross cluster boundaries: for example, if the 9th IRQ is pinned to CPU32,
then CPU31, which belongs to the same IRQ group but a different cluster,
ends up with cross-L2 memory accesses.

CPU |28 29 30 31|32 33 34 35|36 ...
     -------- -------- --------
IRQ      8        9        10

With this patch applied, the first 2 IRQs are each mapped to 2 CPUs and
the remaining 9 IRQs are each mapped to 4 CPUs, which avoids the
cross-cluster memory accesses.

CPU |00 01 02 03|04 05 06 07|08 09 10 11| ...
     ----- ----- ----------- -----------
IRQ    1     2        3           4

As a result, a 15%+ performance difference is observed with FIO
libaio/randread/bs=8k.

Changes since V1:
- Add more performance details to the commit message.
- Fix an endless loop when topology_cluster_cpumask() returns an invalid
  mask.

History:
v1: https://lore.kernel.org/all/20251024023038.872616-1-wangyang.guo@intel.com/
v1 [RESEND]: https://lore.kernel.org/all/20251111020608.1501543-1-wangyang.guo@intel.com/

Reviewed-by: Tianyou Li
Reviewed-by: Tim Chen
Tested-by: Dan Liang
Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
---
 lib/group_cpus.c | 271 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 206 insertions(+), 65 deletions(-)
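For reference, the spreading arithmetic described above can be reproduced
with a small standalone userspace sketch. This is not part of the patch:
spread_flat() and spread_clustered() are made-up names, and the sketch
assumes ngroups >= nclusters and equal-sized 4-CPU clusters that divide
evenly, rather than the general remainder handling done by
alloc_groups_to_nodes() in the patch below.

#include <stdio.h>

/* Old scheme: split ncpus into ngroups, ignoring cluster boundaries. */
static void spread_flat(unsigned int ncpus, unsigned int ngroups)
{
	unsigned int extra = ncpus % ngroups;	/* groups that get one extra CPU */

	for (unsigned int g = 0; g < ngroups; g++)
		printf("group %2u: %u CPUs\n", g,
		       ncpus / ngroups + (g < extra ? 1 : 0));
}

/*
 * Cluster-aware scheme: first decide how many groups each cluster hosts
 * (at least one), then split each cluster among its own groups only, so
 * no group straddles a cluster boundary.  Assumes ngroups >= nclusters
 * and that cluster_size divides evenly among the groups of a cluster.
 */
static void spread_clustered(unsigned int ncpus, unsigned int ngroups,
			     unsigned int cluster_size)
{
	unsigned int nclusters = ncpus / cluster_size;
	unsigned int spare = ngroups - nclusters;	/* groups beyond one per cluster */

	for (unsigned int c = 0, g = 0; c < nclusters; c++) {
		/* the first 'spare' clusters host one extra group */
		unsigned int grps = 1 + (c < spare ? 1 : 0);

		for (unsigned int i = 0; i < grps; i++, g++)
			printf("group %2u: %u CPUs (cluster %u)\n", g,
			       cluster_size / grps, c);
	}
}

int main(void)
{
	spread_flat(40, 11);		/* 7 groups of 4 CPUs, 4 groups of 3 */
	spread_clustered(40, 11, 4);	/* 2 groups of 2 CPUs, 9 groups of 4 */
	return 0;
}

Running it reproduces the 7x4 + 4x3 split of the existing code and the
2x2 + 9x4 split shown in the diagrams above.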
diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index 6d08ac05f371..a93df70919df 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -114,48 +114,15 @@ static int ncpus_cmp_func(const void *l, const void *r)
         return ln->ncpus - rn->ncpus;
 }
 
-/*
- * Allocate group number for each node, so that for each node:
- *
- * 1) the allocated number is >= 1
- *
- * 2) the allocated number is <= active CPU number of this node
- *
- * The actual allocated total groups may be less than @numgrps when
- * active total CPU number is less than @numgrps.
- *
- * Active CPUs means the CPUs in '@cpu_mask AND @node_to_cpumask[]'
- * for each node.
- */
-static void alloc_nodes_groups(unsigned int numgrps,
-                               cpumask_var_t *node_to_cpumask,
-                               const struct cpumask *cpu_mask,
-                               const nodemask_t nodemsk,
-                               struct cpumask *nmsk,
-                               struct node_groups *node_groups)
+static void alloc_groups_to_nodes(unsigned int numgrps,
+                                  unsigned int numcpus,
+                                  struct node_groups *node_groups,
+                                  unsigned int num_nodes)
 {
-        unsigned n, remaining_ncpus = 0;
-
-        for (n = 0; n < nr_node_ids; n++) {
-                node_groups[n].id = n;
-                node_groups[n].ncpus = UINT_MAX;
-        }
-
-        for_each_node_mask(n, nodemsk) {
-                unsigned ncpus;
-
-                cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
-                ncpus = cpumask_weight(nmsk);
-
-                if (!ncpus)
-                        continue;
-                remaining_ncpus += ncpus;
-                node_groups[n].ncpus = ncpus;
-        }
+        unsigned int n, remaining_ncpus = numcpus;
+        unsigned int ngroups, ncpus;
 
-        numgrps = min_t(unsigned, remaining_ncpus, numgrps);
-
-        sort(node_groups, nr_node_ids, sizeof(node_groups[0]),
+        sort(node_groups, num_nodes, sizeof(node_groups[0]),
              ncpus_cmp_func, NULL);
 
         /*
@@ -226,9 +193,8 @@ static void alloc_nodes_groups(unsigned int numgrps,
          * finally for each node X: grps(X) <= ncpu(X).
          *
          */
-        for (n = 0; n < nr_node_ids; n++) {
-                unsigned ngroups, ncpus;
 
+        for (n = 0; n < num_nodes; n++) {
                 if (node_groups[n].ncpus == UINT_MAX)
                         continue;
 
@@ -246,12 +212,201 @@ static void alloc_nodes_groups(unsigned int numgrps,
         }
 }
 
+/*
+ * Allocate group number for each node, so that for each node:
+ *
+ * 1) the allocated number is >= 1
+ *
+ * 2) the allocated number is <= active CPU number of this node
+ *
+ * The actual allocated total groups may be less than @numgrps when
+ * active total CPU number is less than @numgrps.
+ *
+ * Active CPUs means the CPUs in '@cpu_mask AND @node_to_cpumask[]'
+ * for each node.
+ */
+static void alloc_nodes_groups(unsigned int numgrps,
+                               cpumask_var_t *node_to_cpumask,
+                               const struct cpumask *cpu_mask,
+                               const nodemask_t nodemsk,
+                               struct cpumask *nmsk,
+                               struct node_groups *node_groups)
+{
+        unsigned int n, numcpus = 0;
+
+        for (n = 0; n < nr_node_ids; n++) {
+                node_groups[n].id = n;
+                node_groups[n].ncpus = UINT_MAX;
+        }
+
+        for_each_node_mask(n, nodemsk) {
+                unsigned int ncpus;
+
+                cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
+                ncpus = cpumask_weight(nmsk);
+
+                if (!ncpus)
+                        continue;
+                numcpus += ncpus;
+                node_groups[n].ncpus = ncpus;
+        }
+
+        numgrps = min_t(unsigned int, numcpus, numgrps);
+        alloc_groups_to_nodes(numgrps, numcpus, node_groups, nr_node_ids);
+}
+
+static void assign_cpus_to_groups(unsigned int ncpus,
+                                  struct cpumask *nmsk,
+                                  struct node_groups *nv,
+                                  struct cpumask *masks,
+                                  unsigned int *curgrp,
+                                  unsigned int last_grp)
+{
+        unsigned int v, cpus_per_grp, extra_grps;
+        /* Account for rounding errors */
+        extra_grps = ncpus - nv->ngroups * (ncpus / nv->ngroups);
+
+        /* Spread allocated groups on CPUs of the current node */
+        for (v = 0; v < nv->ngroups; v++, *curgrp += 1) {
+                cpus_per_grp = ncpus / nv->ngroups;
+
+                /* Account for extra groups to compensate rounding errors */
+                if (extra_grps) {
+                        cpus_per_grp++;
+                        --extra_grps;
+                }
+
+                /*
+                 * wrapping has to be considered given 'startgrp'
+                 * may start anywhere
+                 */
+                if (*curgrp >= last_grp)
+                        *curgrp = 0;
+                grp_spread_init_one(&masks[*curgrp], nmsk, cpus_per_grp);
+        }
+}
+
+static int alloc_cluster_groups(unsigned int ncpus,
+                                unsigned int ngroups,
+                                struct cpumask *node_cpumask,
+                                cpumask_var_t msk,
+                                const struct cpumask ***clusters_ptr,
+                                struct node_groups **cluster_groups_ptr)
+{
+        unsigned int ncluster = 0;
+        unsigned int cpu, nc, n;
+        const struct cpumask *cluster_mask;
+        const struct cpumask **clusters;
+        struct node_groups *cluster_groups;
+
+        cpumask_copy(msk, node_cpumask);
+
+        /* Probe how many clusters in this node. */
+        while (1) {
+                cpu = cpumask_first(msk);
+                if (cpu >= nr_cpu_ids)
+                        break;
+
+                cluster_mask = topology_cluster_cpumask(cpu);
+                if (!cpumask_weight(cluster_mask))
+                        goto no_cluster;
+                /* Clean out CPUs on the same cluster. */
+                cpumask_andnot(msk, msk, cluster_mask);
+                ncluster++;
+        }
+
+        /* If ngroups < ncluster, cross cluster is inevitable, skip. */
+        if (ncluster == 0 || ncluster > ngroups)
+                goto no_cluster;
+
+        /* Allocate memory based on cluster number. */
+        clusters = kcalloc(ncluster, sizeof(struct cpumask *), GFP_KERNEL);
+        if (!clusters)
+                goto no_cluster;
+        cluster_groups = kcalloc(ncluster, sizeof(struct node_groups), GFP_KERNEL);
+        if (!cluster_groups)
+                goto fail_cluster_groups;
+
+        /* Filling cluster info for later process. */
+        cpumask_copy(msk, node_cpumask);
+        for (n = 0; n < ncluster; n++) {
+                cpu = cpumask_first(msk);
+                cluster_mask = topology_cluster_cpumask(cpu);
+                nc = cpumask_weight_and(cluster_mask, node_cpumask);
+                clusters[n] = cluster_mask;
+                cluster_groups[n].id = n;
+                cluster_groups[n].ncpus = nc;
+                cpumask_andnot(msk, msk, cluster_mask);
+        }
+
+        alloc_groups_to_nodes(ngroups, ncpus, cluster_groups, ncluster);
+
+        *clusters_ptr = clusters;
+        *cluster_groups_ptr = cluster_groups;
+        return ncluster;
+
+ fail_cluster_groups:
+        kfree(clusters);
+ no_cluster:
+        return 0;
+}
+
+/*
+ * Try group CPUs evenly for cluster locality within a NUMA node.
+ *
+ * Return: true if success, false otherwise.
+ */
+static bool __try_group_cluster_cpus(unsigned int ncpus,
+                                     unsigned int ngroups,
+                                     struct cpumask *node_cpumask,
+                                     struct cpumask *masks,
+                                     unsigned int *curgrp,
+                                     unsigned int last_grp)
+{
+        struct node_groups *cluster_groups;
+        const struct cpumask **clusters;
+        unsigned int ncluster;
+        bool ret = false;
+        cpumask_var_t nmsk;
+        unsigned int i, nc;
+
+        if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+                goto fail_nmsk_alloc;
+
+        ncluster = alloc_cluster_groups(ncpus, ngroups, node_cpumask, nmsk,
+                                        &clusters, &cluster_groups);
+
+        if (ncluster == 0)
+                goto fail_no_clusters;
+
+        for (i = 0; i < ncluster; i++) {
+                struct node_groups *nv = &cluster_groups[i];
+
+                /* Get the cpus on this cluster. */
+                cpumask_and(nmsk, node_cpumask, clusters[nv->id]);
+                nc = cpumask_weight(nmsk);
+                if (!nc)
+                        continue;
+                WARN_ON_ONCE(nv->ngroups > nc);
+
+                assign_cpus_to_groups(nc, nmsk, nv, masks, curgrp, last_grp);
+        }
+
+        ret = true;
+        kfree(cluster_groups);
+        kfree(clusters);
+ fail_no_clusters:
+        free_cpumask_var(nmsk);
+ fail_nmsk_alloc:
+        return ret;
+}
+
 static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
                                cpumask_var_t *node_to_cpumask,
                                const struct cpumask *cpu_mask,
                                struct cpumask *nmsk, struct cpumask *masks)
 {
-        unsigned int i, n, nodes, cpus_per_grp, extra_grps, done = 0;
+        unsigned int i, n, nodes, done = 0;
         unsigned int last_grp = numgrps;
         unsigned int curgrp = startgrp;
         nodemask_t nodemsk = NODE_MASK_NONE;
@@ -287,7 +442,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
         alloc_nodes_groups(numgrps, node_to_cpumask, cpu_mask,
                            nodemsk, nmsk, node_groups);
         for (i = 0; i < nr_node_ids; i++) {
-                unsigned int ncpus, v;
+                unsigned int ncpus;
                 struct node_groups *nv = &node_groups[i];
 
                 if (nv->ngroups == UINT_MAX)
@@ -301,28 +456,14 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
 
                 WARN_ON_ONCE(nv->ngroups > ncpus);
 
-                /* Account for rounding errors */
-                extra_grps = ncpus - nv->ngroups * (ncpus / nv->ngroups);
-
-                /* Spread allocated groups on CPUs of the current node */
-                for (v = 0; v < nv->ngroups; v++, curgrp++) {
-                        cpus_per_grp = ncpus / nv->ngroups;
-
-                        /* Account for extra groups to compensate rounding errors */
-                        if (extra_grps) {
-                                cpus_per_grp++;
-                                --extra_grps;
-                        }
-
-                        /*
-                         * wrapping has to be considered given 'startgrp'
-                         * may start anywhere
-                         */
-                        if (curgrp >= last_grp)
-                                curgrp = 0;
-                        grp_spread_init_one(&masks[curgrp], nmsk,
-                                            cpus_per_grp);
+                if (__try_group_cluster_cpus(ncpus, nv->ngroups, nmsk,
+                                             masks, &curgrp, last_grp)) {
+                        done += nv->ngroups;
+                        continue;
                 }
+
+                assign_cpus_to_groups(ncpus, nmsk, nv, masks, &curgrp,
+                                      last_grp);
                 done += nv->ngroups;
         }
         kfree(node_groups);
-- 
2.47.3