From nobody Wed Apr 1 20:37:31 2026
From: Breno Leitao
Date: Wed, 01 Apr 2026 06:03:52 -0700
Subject: [PATCH v3 1/6] workqueue: fix typo in WQ_AFFN_SMT comment
Message-Id: <20260401-workqueue_sharded-v3-1-ab0b9336bf0b@debian.org>
To: Tejun Heo, Lai Jiangshan, Andrew Morton
Cc: linux-kernel@vger.kernel.org, puranjay@kernel.org,
    linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Michael van der Westhuizen,
    kernel-team@meta.com, Chuck Lever, Breno Leitao
Fix "poer" -> "per" in the WQ_AFFN_SMT enum comment.

Signed-off-by: Breno Leitao
---
 include/linux/workqueue.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a4749f56398f..17543aec2a6e 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -131,7 +131,7 @@ struct rcu_work {
 enum wq_affn_scope {
 	WQ_AFFN_DFL,			/* use system default */
 	WQ_AFFN_CPU,			/* one pod per CPU */
-	WQ_AFFN_SMT,			/* one pod poer SMT */
+	WQ_AFFN_SMT,			/* one pod per SMT */
 	WQ_AFFN_CACHE,			/* one pod per LLC */
 	WQ_AFFN_NUMA,			/* one pod per NUMA node */
 	WQ_AFFN_SYSTEM,			/* one pod across the whole system */
-- 
2.52.0

From nobody Wed Apr 1 20:37:31 2026
From: Breno Leitao
Date: Wed, 01 Apr 2026 06:03:53 -0700
Subject: [PATCH v3 2/6] workqueue: add WQ_AFFN_CACHE_SHARD affinity scope
Message-Id: <20260401-workqueue_sharded-v3-2-ab0b9336bf0b@debian.org>
To: Tejun Heo, Lai Jiangshan, Andrew Morton
Cc: linux-kernel@vger.kernel.org, puranjay@kernel.org,
    linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Michael van der Westhuizen,
    kernel-team@meta.com, Chuck Lever, Breno Leitao

On systems where many CPUs share one LLC, unbound workqueues using
WQ_AFFN_CACHE collapse to a single worker pool, causing heavy spinlock
contention on
pool->lock. For example, Chuck Lever measured 39% of cycles lost to
native_queued_spin_lock_slowpath on a 12-core shared-L3 NFS-over-RDMA
system. The existing affinity hierarchy (cpu, smt, cache, numa, system)
offers no intermediate option between per-LLC and per-SMT-core
granularity.

Add WQ_AFFN_CACHE_SHARD, which subdivides each LLC into groups of at
most wq_cache_shard_size cores (default 8, tunable via boot parameter).
Shards are always split on core (SMT group) boundaries so that
Hyper-Threading siblings are never placed in different pods. Cores are
distributed across shards as evenly as possible -- for example, 36
cores in a single LLC with max shard size 8 produce 5 shards of
8+7+7+7+7 cores.

The implementation follows the same comparator pattern as the other
affinity scopes: precompute_cache_shard_ids() pre-fills the
cpu_shard_id[] array from the already-initialized WQ_AFFN_CACHE and
WQ_AFFN_SMT topology, and cpus_share_cache_shard() is passed to
init_pod_type().

A benchmark on NVIDIA Grace (72 CPUs, single LLC, 50k items/thread)
shows cache_shard delivering ~5x the throughput and ~6.5x lower p50
latency than the cache scope on this single-LLC system.
Suggested-by: Tejun Heo
Signed-off-by: Breno Leitao
---
 include/linux/workqueue.h |   1 +
 kernel/workqueue.c        | 183 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 184 insertions(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 17543aec2a6e..50bdb7e30d35 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -133,6 +133,7 @@ enum wq_affn_scope {
 	WQ_AFFN_CPU,			/* one pod per CPU */
 	WQ_AFFN_SMT,			/* one pod per SMT */
 	WQ_AFFN_CACHE,			/* one pod per LLC */
+	WQ_AFFN_CACHE_SHARD,		/* synthetic sub-LLC shards */
 	WQ_AFFN_NUMA,			/* one pod per NUMA node */
 	WQ_AFFN_SYSTEM,			/* one pod across the whole system */
 
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b77119d71641..5b1d42115e20 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -130,6 +130,14 @@ enum wq_internal_consts {
 	WORKER_ID_LEN		= 10 + WQ_NAME_LEN, /* "kworker/R-" + WQ_NAME_LEN */
 };
 
+/* Layout of shards within one LLC pod */
+struct llc_shard_layout {
+	int nr_large_shards;	/* number of large shards (cores_per_shard + 1) */
+	int cores_per_shard;	/* base number of cores per default shard */
+	int nr_shards;		/* total number of shards */
+	/* nr_default_shards = (nr_shards - nr_large_shards) */
+};
+
 /*
  * We don't want to trap softirq for too long. See MAX_SOFTIRQ_TIME and
  * MAX_SOFTIRQ_RESTART in kernel/softirq.c.
 * These are macros because
@@ -409,6 +417,7 @@ static const char *wq_affn_names[WQ_AFFN_NR_TYPES] = {
 	[WQ_AFFN_CPU]			= "cpu",
 	[WQ_AFFN_SMT]			= "smt",
 	[WQ_AFFN_CACHE]			= "cache",
+	[WQ_AFFN_CACHE_SHARD]		= "cache_shard",
 	[WQ_AFFN_NUMA]			= "numa",
 	[WQ_AFFN_SYSTEM]		= "system",
 };
@@ -431,6 +440,9 @@ module_param_named(cpu_intensive_warning_thresh, wq_cpu_intensive_warning_thresh
 static bool wq_power_efficient = IS_ENABLED(CONFIG_WQ_POWER_EFFICIENT_DEFAULT);
 module_param_named(power_efficient, wq_power_efficient, bool, 0444);
 
+static unsigned int wq_cache_shard_size = 8;
+module_param_named(cache_shard_size, wq_cache_shard_size, uint, 0444);
+
 static bool wq_online;			/* can kworkers be created yet? */
 static bool wq_topo_initialized __read_mostly = false;
 
@@ -8113,6 +8125,175 @@ static bool __init cpus_share_numa(int cpu0, int cpu1)
 	return cpu_to_node(cpu0) == cpu_to_node(cpu1);
 }
 
+/* Maps each CPU to its shard index within the LLC pod it belongs to */
+static int cpu_shard_id[NR_CPUS] __initdata;
+
+/**
+ * llc_count_cores - count distinct cores (SMT groups) within an LLC pod
+ * @pod_cpus: the cpumask of CPUs in the LLC pod
+ * @smt_pods: the SMT pod type, used to identify sibling groups
+ *
+ * A core is represented by the lowest-numbered CPU in its SMT group. Returns
+ * the number of distinct cores found in @pod_cpus.
+ */
+static int __init llc_count_cores(const struct cpumask *pod_cpus,
+				  struct wq_pod_type *smt_pods)
+{
+	const struct cpumask *sibling_cpus;
+	int nr_cores = 0, c;
+
+	/*
+	 * Count distinct cores by only counting the first CPU in each
+	 * SMT sibling group.
+	 */
+	for_each_cpu(c, pod_cpus) {
+		sibling_cpus = smt_pods->pod_cpus[smt_pods->cpu_pod[c]];
+		if (cpumask_first(sibling_cpus) == c)
+			nr_cores++;
+	}
+
+	return nr_cores;
+}
+
+/*
+ * llc_shard_size - number of cores in a given shard
+ *
+ * Cores are spread as evenly as possible.
+ * The first @nr_large_shards shards are
+ * "large shards" with (cores_per_shard + 1) cores; the rest are "default
+ * shards" with cores_per_shard cores.
+ */
+static int __init llc_shard_size(int shard_id, int cores_per_shard, int nr_large_shards)
+{
+	/* The first @nr_large_shards shards are large shards */
+	if (shard_id < nr_large_shards)
+		return cores_per_shard + 1;
+
+	/* The remaining shards are default shards */
+	return cores_per_shard;
+}
+
+/*
+ * llc_calc_shard_layout - compute the shard layout for an LLC pod
+ * @nr_cores: number of distinct cores in the LLC pod
+ *
+ * Chooses the number of shards that keeps average shard size closest to
+ * wq_cache_shard_size. Returns a struct describing the total number of shards,
+ * the base size of each, and how many are large shards.
+ */
+static struct llc_shard_layout __init llc_calc_shard_layout(int nr_cores)
+{
+	struct llc_shard_layout layout;
+
+	/* Ensure at least one shard; pick the count closest to the target size */
+	layout.nr_shards = max(1, DIV_ROUND_CLOSEST(nr_cores, wq_cache_shard_size));
+	layout.cores_per_shard = nr_cores / layout.nr_shards;
+	layout.nr_large_shards = nr_cores % layout.nr_shards;
+
+	return layout;
+}
+
+/*
+ * llc_shard_is_full - check whether a shard has reached its core capacity
+ * @cores_in_shard: number of cores already assigned to this shard
+ * @shard_id: index of the shard being checked
+ * @layout: the shard layout computed by llc_calc_shard_layout()
+ *
+ * Returns true if @cores_in_shard equals the expected size for @shard_id.
+ */
+static bool __init llc_shard_is_full(int cores_in_shard, int shard_id,
+				     const struct llc_shard_layout *layout)
+{
+	return cores_in_shard == llc_shard_size(shard_id, layout->cores_per_shard,
+						layout->nr_large_shards);
+}
+
+/**
+ * llc_populate_cpu_shard_id - populate cpu_shard_id[] for each CPU in an LLC pod
+ * @pod_cpus: the cpumask of CPUs in the LLC pod
+ * @smt_pods: the SMT pod type, used to identify sibling groups
+ * @nr_cores: number of distinct cores in @pod_cpus (from llc_count_cores())
+ *
+ * Walks @pod_cpus in order. At each SMT group leader, advances to the next
+ * shard once the current shard is full. Results are written to cpu_shard_id[].
+ */
+static void __init llc_populate_cpu_shard_id(const struct cpumask *pod_cpus,
+					     struct wq_pod_type *smt_pods,
+					     int nr_cores)
+{
+	struct llc_shard_layout layout = llc_calc_shard_layout(nr_cores);
+	const struct cpumask *sibling_cpus;
+	/* Number of cores assigned to the current shard so far */
+	int cores_in_shard = 0;
+	/* Cursor over the shards; goes from zero to nr_shards - 1 */
+	int shard_id = 0;
+	int c;
+
+	/* Iterate over every CPU in this LLC pod and assign it a shard */
+	for_each_cpu(c, pod_cpus) {
+		sibling_cpus = smt_pods->pod_cpus[smt_pods->cpu_pod[c]];
+		if (cpumask_first(sibling_cpus) == c) {
+			/* This CPU is the leader of its SMT sibling group */
+			if (llc_shard_is_full(cores_in_shard, shard_id, &layout)) {
+				shard_id++;
+				cores_in_shard = 0;
+			}
+			cores_in_shard++;
+			cpu_shard_id[c] = shard_id;
+		} else {
+			/*
+			 * Siblings MUST get the same shard as their leader:
+			 * never split threads of the same core across shards.
+			 */
+			cpu_shard_id[c] = cpu_shard_id[cpumask_first(sibling_cpus)];
+		}
+	}
+
+	WARN_ON_ONCE(shard_id != (layout.nr_shards - 1));
+}
+
+/**
+ * precompute_cache_shard_ids - assign each CPU its shard index within its LLC
+ *
+ * Iterates over all LLC pods. For each pod, counts distinct cores then assigns
+ * shard indices to all CPUs in the pod.
+ * Must be called after WQ_AFFN_CACHE and
+ * WQ_AFFN_SMT have been initialized.
+ */
+static void __init precompute_cache_shard_ids(void)
+{
+	struct wq_pod_type *llc_pods = &wq_pod_types[WQ_AFFN_CACHE];
+	struct wq_pod_type *smt_pods = &wq_pod_types[WQ_AFFN_SMT];
+	const struct cpumask *cpus_sharing_llc;
+	int nr_cores;
+	int pod;
+
+	if (!wq_cache_shard_size) {
+		pr_warn("workqueue: cache_shard_size must be > 0, setting to 1\n");
+		wq_cache_shard_size = 1;
+	}
+
+	for (pod = 0; pod < llc_pods->nr_pods; pod++) {
+		cpus_sharing_llc = llc_pods->pod_cpus[pod];
+
+		/* Number of cores in this LLC */
+		nr_cores = llc_count_cores(cpus_sharing_llc, smt_pods);
+		llc_populate_cpu_shard_id(cpus_sharing_llc, smt_pods, nr_cores);
+	}
+}
+
+/*
+ * cpus_share_cache_shard - test whether two CPUs belong to the same cache shard
+ *
+ * Two CPUs share a cache shard if they are in the same LLC and have the same
+ * shard index. Used as the pod affinity callback for WQ_AFFN_CACHE_SHARD.
+ */
+static bool __init cpus_share_cache_shard(int cpu0, int cpu1)
+{
+	if (!cpus_share_cache(cpu0, cpu1))
+		return false;
+
+	return cpu_shard_id[cpu0] == cpu_shard_id[cpu1];
+}
+
 /**
  * workqueue_init_topology - initialize CPU pods for unbound workqueues
  *
@@ -8128,6 +8309,8 @@ void __init workqueue_init_topology(void)
 	init_pod_type(&wq_pod_types[WQ_AFFN_CPU], cpus_dont_share);
 	init_pod_type(&wq_pod_types[WQ_AFFN_SMT], cpus_share_smt);
 	init_pod_type(&wq_pod_types[WQ_AFFN_CACHE], cpus_share_cache);
+	precompute_cache_shard_ids();
+	init_pod_type(&wq_pod_types[WQ_AFFN_CACHE_SHARD], cpus_share_cache_shard);
 	init_pod_type(&wq_pod_types[WQ_AFFN_NUMA], cpus_share_numa);
 
 	wq_topo_initialized = true;
-- 
2.52.0

From nobody Wed Apr 1 20:37:31 2026
From: Breno Leitao
Date: Wed, 01 Apr 2026 06:03:54 -0700
Subject: [PATCH v3 3/6] workqueue: set WQ_AFFN_CACHE_SHARD as the default affinity scope
Message-Id: <20260401-workqueue_sharded-v3-3-ab0b9336bf0b@debian.org>
To: Tejun Heo, Lai Jiangshan, Andrew Morton
Cc: linux-kernel@vger.kernel.org, puranjay@kernel.org,
    linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Michael van der Westhuizen,
    kernel-team@meta.com, Chuck Lever, Breno Leitao

Set WQ_AFFN_CACHE_SHARD as the default affinity scope for unbound
workqueues.

On systems where many CPUs share one LLC, the previous default
(WQ_AFFN_CACHE) collapses all CPUs into a single worker pool, causing
heavy spinlock contention on pool->lock. WQ_AFFN_CACHE_SHARD subdivides
each LLC into smaller groups, providing a better balance between
locality and contention. Users can revert to the previous behavior with
workqueue.default_affinity_scope=cache.

On systems with 8 or fewer cores per LLC, CACHE_SHARD produces a single
shard covering the entire LLC, making it functionally identical to the
previous CACHE default.
The sharding only activates when an LLC has more than 8 cores.

Signed-off-by: Breno Leitao
---
 kernel/workqueue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5b1d42115e20..3b5b21136414 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -449,7 +449,7 @@ static bool wq_topo_initialized __read_mostly = false;
 static struct kmem_cache *pwq_cache;
 
 static struct wq_pod_type wq_pod_types[WQ_AFFN_NR_TYPES];
-static enum wq_affn_scope wq_affn_dfl = WQ_AFFN_CACHE;
+static enum wq_affn_scope wq_affn_dfl = WQ_AFFN_CACHE_SHARD;
 
 /* buf for wq_update_unbound_pod_attrs(), protected by CPU hotplug exclusion */
 static struct workqueue_attrs *unbound_wq_update_pwq_attrs_buf;
-- 
2.52.0

From nobody Wed Apr 1 20:37:31 2026
From: Breno Leitao
Date: Wed, 01 Apr 2026 06:03:55 -0700
Subject: [PATCH v3 4/6] tools/workqueue: add CACHE_SHARD support to wq_dump.py
Message-Id: <20260401-workqueue_sharded-v3-4-ab0b9336bf0b@debian.org>
To: Tejun Heo, Lai Jiangshan, Andrew Morton
Cc: linux-kernel@vger.kernel.org,
    puranjay@kernel.org, linux-crypto@vger.kernel.org,
    linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Michael van der Westhuizen, kernel-team@meta.com, Chuck Lever,
    Breno Leitao

The WQ_AFFN_CACHE_SHARD affinity scope was added to the kernel, but
wq_dump.py was not updated to enumerate it. Add the missing constant
lookup and include it in the affinity-scope iteration so that drgn
output shows the CACHE_SHARD pod topology alongside the other scopes.
Signed-off-by: Breno Leitao
---
 tools/workqueue/wq_dump.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/workqueue/wq_dump.py b/tools/workqueue/wq_dump.py
index d29b918306b4..06948ffcfc4b 100644
--- a/tools/workqueue/wq_dump.py
+++ b/tools/workqueue/wq_dump.py
@@ -107,6 +107,7 @@ WQ_MEM_RECLAIM = prog['WQ_MEM_RECLAIM']
 WQ_AFFN_CPU = prog['WQ_AFFN_CPU']
 WQ_AFFN_SMT = prog['WQ_AFFN_SMT']
 WQ_AFFN_CACHE = prog['WQ_AFFN_CACHE']
+WQ_AFFN_CACHE_SHARD = prog['WQ_AFFN_CACHE_SHARD']
 WQ_AFFN_NUMA = prog['WQ_AFFN_NUMA']
 WQ_AFFN_SYSTEM = prog['WQ_AFFN_SYSTEM']
 
@@ -138,7 +139,7 @@ def print_pod_type(pt):
         print(f'  [{cpu}]={pt.cpu_pod[cpu].value_()}', end='')
     print('')
 
-for affn in [WQ_AFFN_CPU, WQ_AFFN_SMT, WQ_AFFN_CACHE, WQ_AFFN_NUMA, WQ_AFFN_SYSTEM]:
+for affn in [WQ_AFFN_CPU, WQ_AFFN_SMT, WQ_AFFN_CACHE, WQ_AFFN_CACHE_SHARD, WQ_AFFN_NUMA, WQ_AFFN_SYSTEM]:
     print('')
     print(f'{wq_affn_names[affn].string_().decode().upper()}{" (default)" if affn == wq_affn_dfl else ""}')
     print_pod_type(wq_pod_types[affn])
-- 
2.52.0

From nobody Wed Apr 1 20:37:31 2026
In-Reply-To:To:Cc; b=OslhfR+rq7k2eJrvwAwmqX5Ni+CJwthQANvvP/8j+n1kDRMnVcaSOrLhnjkS4jDEn+myCoJmssSKQX3ySVzNNQYuyNxnJCQA9tZp5urvM+zElSu5kEjNcs0ZkX8jI9u5qcVj9tCu41gHK+Ckw0cTTcB3JPH/6IPeNmUTunTNRSQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=debian.org; spf=none smtp.mailfrom=debian.org; dkim=pass (2048-bit key) header.d=debian.org header.i=@debian.org header.b=b9x9X/iu; arc=none smtp.client-ip=82.195.75.108 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=debian.org Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=debian.org Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=debian.org header.i=@debian.org header.b="b9x9X/iu" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=debian.org; s=smtpauto.stravinsky; h=X-Debian-User:Cc:To:In-Reply-To:References: Message-Id:Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:Date: From:Reply-To:Content-ID:Content-Description; bh=uA6NwbSAM5Stt0CSfKD6RNJYCyIOL5imWVO8WuD/wrc=; b=b9x9X/iuhZnwSOJY39LbAZCD5w 3qNJpxdSryFOP/m5kIZgMbqZgo4UtLhNEQpL7P0nL2v7yAsj6+sZ/NzL7H3kaRz8KlYz2lequhDhP XwatpTXwgG9lu4x/tIyXpbFpYzkQbgrk1ey2+RFhgAlo1AHqDe+jufwvihn9EElq0fmm7x364TuVO 3ToldNKTZAX5825RWheAWWiGgl81K93FoBTTfjfQk9oKNRoAKwYPAaWfYv717ZJ8DobfznSGdyrWq 8qQ1uU8xHoX9cJY6fqlLQo69pjoqicGiaRuub34NUiL8Ko0kYFHFcwKPg3sUCis5GXC3bRm2YfPxH SA4ERthQ==; Received: from authenticated user by stravinsky.debian.org with esmtpsa (TLS1.3:ECDHE_X25519__RSA_PSS_RSAE_SHA256__AES_256_GCM:256) (Exim 4.96) (envelope-from ) id 1w7vFJ-0030QD-2i; Wed, 01 Apr 2026 13:04:32 +0000 From: Breno Leitao Date: Wed, 01 Apr 2026 06:03:56 -0700 Subject: [PATCH v3 5/6] workqueue: add test_workqueue benchmark module Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 
quoted-printable Message-Id: <20260401-workqueue_sharded-v3-5-ab0b9336bf0b@debian.org> References: <20260401-workqueue_sharded-v3-0-ab0b9336bf0b@debian.org> In-Reply-To: <20260401-workqueue_sharded-v3-0-ab0b9336bf0b@debian.org> To: Tejun Heo , Lai Jiangshan , Andrew Morton Cc: linux-kernel@vger.kernel.org, puranjay@kernel.org, linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, Michael van der Westhuizen , kernel-team@meta.com, Chuck Lever , Breno Leitao X-Mailer: b4 0.16-dev-453a6 X-Developer-Signature: v=1; a=openpgp-sha256; l=10462; i=leitao@debian.org; h=from:subject:message-id; bh=lzyk9Q32hb8/ESHa+z9bZun2ttyufg2hEeyWdc20Lqc=; b=owEBbQKS/ZANAwAIATWjk5/8eHdtAcsmYgBpzRfIiZwOV4kL5EYS1u6FVvrMq51T1CG6uBQas GHWOMITlUuJAjMEAAEIAB0WIQSshTmm6PRnAspKQ5s1o5Of/Hh3bQUCac0XyAAKCRA1o5Of/Hh3 bY30EAClSbNrzqv6st731jFECxyynBfdcuZYoari30VGPoqb0XLL/uhZUNDWxb9nlzddRG4zA21 wZCVFUXAOhbjowXFVOWZ4N78I/Cn2Ue5oqjTuy0k70ACDzF2qhOZmlE6zJuSyG37IayO2irzvDT cPzof1ek2iUVSzvk5QoxhEmbJjqtzKstMjLr1nPd8hm9ge7ROWlXpX2ztmuZEM+BJ4jl7vD7EEt YRyu8kDFApcc2D/JY+QylGf/4r5nhBQhoIT6x+jjwzxVU0hPKeoNj/gFqF5pS4sIAx/vYR4YUuf vlV3JitZrfI90Yk7JGoZkSaSPFsAY4FsHYb/rG9fnmnYJpAe6C22x7oMVC74hF53mhS1RmJcWfx Ny0zTFJnIsGmte45SVbQvbsCRN7vY2wO3FPNW94IVQZi4+rSjV3G9JvRlmN1YK4fzWu2ycrb3rm CNrSlVAvY+cupg5wWLOzo0MKxkkM7/9GfWqtMD9fkVeB8nmibtk1vf0kaYukdcNbL9Qs9XiYvau 4Ejwe5UkGq8xYzJ6bicRX7IlFRJgYJ0vw2rbUjmMFCdd9zUFAo+RjvZjn8h0RMUpUsjZuGgcr5y EnjiKfLoau+faUaKBUZZ0fPJBuCp3ZARnCVGaRH5ntvazCMP4QD05TyNrp0dUWrfa5d3BXic9Jt HSXSUxs62nxs7iQ== X-Developer-Key: i=leitao@debian.org; a=openpgp; fpr=AC8539A6E8F46702CA4A439B35A3939FFC78776D X-Debian-User: leitao Add a kernel module that benchmarks queue_work() throughput on an unbound workqueue to measure pool->lock contention under different affinity scope configurations (cache vs cache_shard). The module spawns N kthreads (default: num_online_cpus()), each bound to a different CPU. 
All threads start simultaneously and queue work items, measuring the
latency of each queue_work() call. Results are reported as p50/p90/p95
latencies for each affinity scope.

The affinity scope is switched between runs via the workqueue's sysfs
affinity_scope attribute (WQ_SYSFS), avoiding the need for any new
exported symbols. The module runs as __init-only, returning -EAGAIN to
auto-unload, and can be re-run via insmod.

Example of the output:

  running 50 threads, 50000 items/thread
  cpu          6806017 items/sec   p50=2574    p90=5068    p95=5818 ns
  smt          6821040 items/sec   p50=2624    p90=5168    p95=5949 ns
  cache_shard  1633653 items/sec   p50=5337    p90=9694    p95=11207 ns
  cache         286069 items/sec   p50=72509   p90=82304   p95=85009 ns
  numa          319403 items/sec   p50=63745   p90=73480   p95=76505 ns
  system        308461 items/sec   p50=66561   p90=75714   p95=78048 ns

Signed-off-by: Breno Leitao <leitao@debian.org>
---
 lib/Kconfig.debug    |  10 ++
 lib/Makefile         |   1 +
 lib/test_workqueue.c | 294 +++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 305 insertions(+)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 93f356d2b3d9..38bee649697f 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2628,6 +2628,16 @@ config TEST_VMALLOC
 
 	  If unsure, say N.
 
+config TEST_WORKQUEUE
+	tristate "Test module for stress/performance analysis of workqueue"
+	default n
+	help
+	  This builds the "test_workqueue" module for benchmarking
+	  workqueue throughput under contention. Useful for evaluating
+	  affinity scope changes (e.g., cache_shard vs cache).
+
+	  If unsure, say N.
+
 config TEST_BPF
 	tristate "Test BPF filter functionality"
 	depends on m && NET
diff --git a/lib/Makefile b/lib/Makefile
index 1b9ee167517f..ea660cca04f4 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -79,6 +79,7 @@ UBSAN_SANITIZE_test_ubsan.o := y
 obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
 obj-$(CONFIG_TEST_LKM) += test_module.o
 obj-$(CONFIG_TEST_VMALLOC) += test_vmalloc.o
+obj-$(CONFIG_TEST_WORKQUEUE) += test_workqueue.o
 obj-$(CONFIG_TEST_RHASHTABLE) += test_rhashtable.o
 obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
 obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
diff --git a/lib/test_workqueue.c b/lib/test_workqueue.c
new file mode 100644
index 000000000000..f2ae1ac4bd93
--- /dev/null
+++ b/lib/test_workqueue.c
@@ -0,0 +1,294 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for stress and performance analysis of workqueue.
+ *
+ * Benchmarks queue_work() throughput on an unbound workqueue to measure
+ * pool->lock contention under different affinity scope configurations
+ * (e.g., cache vs cache_shard).
+ *
+ * The affinity scope is changed between runs via the workqueue's sysfs
+ * affinity_scope attribute (WQ_SYSFS).
+ *
+ * Copyright (c) 2026 Meta Platforms, Inc. and affiliates
+ * Copyright (c) 2026 Breno Leitao <leitao@debian.org>
+ */
+#include <linux/atomic.h>
+#include <linux/completion.h>
+#include <linux/cpumask.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/ktime.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/sort.h>
+#include <linux/workqueue.h>
+
+#define WQ_NAME "bench_wq"
+#define SCOPE_PATH "/sys/bus/workqueue/devices/" WQ_NAME "/affinity_scope"
+
+static int nr_threads;
+module_param(nr_threads, int, 0444);
+MODULE_PARM_DESC(nr_threads,
+		 "Number of threads to spawn (default: 0 = num_online_cpus())");
+
+static int wq_items = 50000;
+module_param(wq_items, int, 0444);
+MODULE_PARM_DESC(wq_items,
+		 "Number of work items each thread queues (default: 50000)");
+
+static struct workqueue_struct *bench_wq;
+static atomic_t threads_done;
+static DECLARE_COMPLETION(start_comp);
+static DECLARE_COMPLETION(all_done_comp);
+
+struct thread_ctx {
+	struct completion work_done;
+	struct work_struct work;
+	u64 *latencies;
+	int cpu;
+	int items;
+};
+
+static void bench_work_fn(struct work_struct *work)
+{
+	struct thread_ctx *ctx = container_of(work, struct thread_ctx, work);
+
+	complete(&ctx->work_done);
+}
+
+static int bench_kthread_fn(void *data)
+{
+	struct thread_ctx *ctx = data;
+	ktime_t t_start, t_end;
+	int i;
+
+	/* Wait for all threads to be ready */
+	wait_for_completion(&start_comp);
+
+	if (kthread_should_stop())
+		return 0;
+
+	for (i = 0; i < ctx->items; i++) {
+		reinit_completion(&ctx->work_done);
+		INIT_WORK(&ctx->work, bench_work_fn);
+
+		t_start = ktime_get();
+		queue_work(bench_wq, &ctx->work);
+		t_end = ktime_get();
+
+		ctx->latencies[i] = ktime_to_ns(ktime_sub(t_end, t_start));
+		wait_for_completion(&ctx->work_done);
+	}
+
+	if (atomic_dec_and_test(&threads_done))
+		complete(&all_done_comp);
+
+	/*
+	 * Wait for kthread_stop() so the module text isn't freed
+	 * while we're still executing.
+	 */
+	while (!kthread_should_stop())
+		schedule();
+
+	return 0;
+}
+
+static int cmp_u64(const void *a, const void *b)
+{
+	u64 va = *(const u64 *)a;
+	u64 vb = *(const u64 *)b;
+
+	if (va < vb)
+		return -1;
+	if (va > vb)
+		return 1;
+	return 0;
+}
+
+static int __init set_affn_scope(const char *scope)
+{
+	struct file *f;
+	loff_t pos = 0;
+	ssize_t ret;
+
+	f = filp_open(SCOPE_PATH, O_WRONLY, 0);
+	if (IS_ERR(f)) {
+		pr_err("test_workqueue: open %s failed: %ld\n",
+		       SCOPE_PATH, PTR_ERR(f));
+		return PTR_ERR(f);
+	}
+
+	ret = kernel_write(f, scope, strlen(scope), &pos);
+	filp_close(f, NULL);
+
+	if (ret < 0) {
+		pr_err("test_workqueue: write '%s' failed: %zd\n", scope, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int __init run_bench(int n_threads, const char *scope, const char *label)
+{
+	struct task_struct **tasks;
+	unsigned long total_items;
+	struct thread_ctx *ctxs;
+	u64 *all_latencies;
+	ktime_t start, end;
+	int cpu, i, j, ret;
+	s64 elapsed_us;
+
+	ret = set_affn_scope(scope);
+	if (ret)
+		return ret;
+
+	ctxs = kcalloc(n_threads, sizeof(*ctxs), GFP_KERNEL);
+	if (!ctxs)
+		return -ENOMEM;
+
+	tasks = kcalloc(n_threads, sizeof(*tasks), GFP_KERNEL);
+	if (!tasks) {
+		kfree(ctxs);
+		return -ENOMEM;
+	}
+
+	total_items = (unsigned long)n_threads * wq_items;
+	all_latencies = kvmalloc_array(total_items, sizeof(u64), GFP_KERNEL);
+	if (!all_latencies) {
+		kfree(tasks);
+		kfree(ctxs);
+		return -ENOMEM;
+	}
+
+	/* Allocate per-thread latency arrays */
+	for (i = 0; i < n_threads; i++) {
+		ctxs[i].latencies = kvmalloc_array(wq_items, sizeof(u64),
+						   GFP_KERNEL);
+		if (!ctxs[i].latencies) {
+			while (--i >= 0)
+				kvfree(ctxs[i].latencies);
+			kvfree(all_latencies);
+			kfree(tasks);
+			kfree(ctxs);
+			return -ENOMEM;
+		}
+	}
+
+	atomic_set(&threads_done, n_threads);
+	reinit_completion(&all_done_comp);
+	reinit_completion(&start_comp);
+
+	/* Create kthreads, each bound to a different online CPU */
+	i = 0;
+	for_each_online_cpu(cpu) {
+		if (i >= n_threads)
+			break;
+
+		ctxs[i].cpu = cpu;
+		ctxs[i].items = wq_items;
+		init_completion(&ctxs[i].work_done);
+
+		tasks[i] = kthread_create(bench_kthread_fn, &ctxs[i],
+					  "wq_bench/%d", cpu);
+		if (IS_ERR(tasks[i])) {
+			ret = PTR_ERR(tasks[i]);
+			pr_err("test_workqueue: failed to create kthread %d: %d\n",
+			       i, ret);
+			/* Unblock threads waiting on start_comp before stopping them */
+			complete_all(&start_comp);
+			while (--i >= 0)
+				kthread_stop(tasks[i]);
+			goto out_free;
+		}
+
+		kthread_bind(tasks[i], cpu);
+		wake_up_process(tasks[i]);
+		i++;
+	}
+
+	/* Start timing and release all threads */
+	start = ktime_get();
+	complete_all(&start_comp);
+
+	/* Wait for all threads to finish the benchmark */
+	wait_for_completion(&all_done_comp);
+
+	/* Drain any remaining work */
+	flush_workqueue(bench_wq);
+
+	/* Ensure all kthreads have fully exited before module memory is freed */
+	for (i = 0; i < n_threads; i++)
+		kthread_stop(tasks[i]);
+
+	end = ktime_get();
+	elapsed_us = ktime_us_delta(end, start);
+
+	/* Merge all per-thread latencies and sort for percentile calculation */
+	j = 0;
+	for (i = 0; i < n_threads; i++) {
+		memcpy(&all_latencies[j], ctxs[i].latencies,
+		       wq_items * sizeof(u64));
+		j += wq_items;
+	}
+
+	sort(all_latencies, total_items, sizeof(u64), cmp_u64, NULL);
+
+	pr_info("test_workqueue: %-16s %llu items/sec\tp50=%llu\tp90=%llu\tp95=%llu ns\n",
+		label,
+		elapsed_us ? total_items * 1000000ULL / elapsed_us : 0,
+		all_latencies[total_items * 50 / 100],
+		all_latencies[total_items * 90 / 100],
+		all_latencies[total_items * 95 / 100]);
+
+	ret = 0;
+out_free:
+	for (i = 0; i < n_threads; i++)
+		kvfree(ctxs[i].latencies);
+	kvfree(all_latencies);
+	kfree(tasks);
+	kfree(ctxs);
+
+	return ret;
+}
+
+static const char * const bench_scopes[] = {
+	"cpu", "smt", "cache_shard", "cache", "numa", "system",
+};
+
+static int __init test_workqueue_init(void)
+{
+	int n_threads = min(nr_threads ?: num_online_cpus(), num_online_cpus());
+	int i;
+
+	if (wq_items <= 0) {
+		pr_err("test_workqueue: wq_items must be > 0\n");
+		return -EINVAL;
+	}
+
+	bench_wq = alloc_workqueue(WQ_NAME, WQ_UNBOUND | WQ_SYSFS, 0);
+	if (!bench_wq)
+		return -ENOMEM;
+
+	pr_info("test_workqueue: running %d threads, %d items/thread\n",
+		n_threads, wq_items);
+
+	for (i = 0; i < ARRAY_SIZE(bench_scopes); i++)
+		run_bench(n_threads, bench_scopes[i], bench_scopes[i]);
+
+	destroy_workqueue(bench_wq);
+
+	/* Return -EAGAIN so the module doesn't stay loaded after the benchmark */
+	return -EAGAIN;
+}
+
+module_init(test_workqueue_init);
+MODULE_AUTHOR("Breno Leitao <leitao@debian.org>");
+MODULE_DESCRIPTION("Stress/performance benchmark for workqueue subsystem");
+MODULE_LICENSE("GPL");
-- 
2.52.0

From nobody Wed Apr 1 20:37:31 2026
From: Breno Leitao <leitao@debian.org>
Date: Wed, 01 Apr 2026 06:03:57 -0700
Subject: [PATCH v3 6/6] docs: workqueue: document WQ_AFFN_CACHE_SHARD affinity scope
Message-Id: <20260401-workqueue_sharded-v3-6-ab0b9336bf0b@debian.org>
References: <20260401-workqueue_sharded-v3-0-ab0b9336bf0b@debian.org>
In-Reply-To: <20260401-workqueue_sharded-v3-0-ab0b9336bf0b@debian.org>

Update kernel-parameters.txt and workqueue.rst to reflect the new
cache_shard affinity scope and the
default change from cache to cache_shard.

Signed-off-by: Breno Leitao <leitao@debian.org>
---
 Documentation/admin-guide/kernel-parameters.txt |  3 ++-
 Documentation/core-api/workqueue.rst            | 14 ++++++++++----
 2 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 03a550630644..b2558f76b7bd 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -8535,7 +8535,8 @@
 	workqueue.default_affinity_scope=
 			Select the default affinity scope to use for unbound
 			workqueues. Can be one of "cpu", "smt", "cache",
-			"numa" and "system". Default is "cache". For more
+			"cache_shard", "numa" and "system". Default is
+			"cache_shard". For more
 			information, see the Affinity Scopes section in
 			Documentation/core-api/workqueue.rst.
 
diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index 165ca73e8351..411e1b28b8de 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -378,9 +378,9 @@ Affinity Scopes
 
 An unbound workqueue groups CPUs according to its affinity scope to improve
 cache locality. For example, if a workqueue is using the default affinity
-scope of "cache", it will group CPUs according to last level cache
-boundaries. A work item queued on the workqueue will be assigned to a worker
-on one of the CPUs which share the last level cache with the issuing CPU.
+scope of "cache_shard", it will group CPUs into sub-LLC shards. A work item
+queued on the workqueue will be assigned to a worker on one of the CPUs
+within the same shard as the issuing CPU.
 Once started, the worker may or may not be allowed to move outside the scope
 depending on the ``affinity_strict`` setting of the scope.
 
@@ -402,7 +402,13 @@ Workqueue currently supports the following affinity scopes.
 
 ``cache``
 	CPUs are grouped according to cache boundaries. Which specific cache
 	boundary is used is determined by the arch code. L3 is used in a lot of
-	cases. This is the default affinity scope.
+	cases.
+
+``cache_shard``
+	CPUs are grouped into sub-LLC shards of at most ``wq_cache_shard_size``
+	cores (default 8, tunable via the ``workqueue.cache_shard_size`` boot
+	parameter). Shards are always split on core (SMT group) boundaries.
+	This is the default affinity scope.
 
 ``numa``
 	CPUs are grouped according to NUMA boundaries.
-- 
2.52.0
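The cache_shard grouping described in the docs update can be modeled with a short sketch. This is an illustration of the documented behavior, not kernel code; `split_llc_into_shards` and its inputs are made-up names: one LLC's cores (each core listed as its SMT sibling CPU ids) are split into shards of at most `shard_size` cores, on core boundaries so SMT siblings never end up in different shards.

```python
def split_llc_into_shards(cores, shard_size=8):
    """Model of the documented cache_shard grouping.

    `cores` is one LLC's core list, each entry a list of that core's SMT
    sibling CPU ids. Shards hold at most `shard_size` cores and are split
    on core boundaries, so SMT siblings share a shard.
    """
    shards = []
    for i in range(0, len(cores), shard_size):
        # Flatten this shard's cores into a plain list of CPU ids.
        shards.append([cpu for core in cores[i:i + shard_size] for cpu in core])
    return shards

# A 12-core LLC with 2-way SMT (CPU n and n+12 are siblings) splits into
# one 8-core shard (16 CPUs) and one 4-core shard (8 CPUs).
llc = [[n, n + 12] for n in range(12)]
shards = split_llc_into_shards(llc)
```

This mirrors why the benchmark in patch 5/6 sees less pool->lock contention with cache_shard than with cache: each shard's worker pool serves at most 8 cores instead of the whole LLC.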