From nobody Sat Feb 7 22:48:02 2026
From: James Morse
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, D Scott Phillips OS, carl@os.amperecomputing.com,
	lcherian@marvell.com, bobo.shaobowang@huawei.com,
	tan.shaopeng@fujitsu.com, baolin.wang@linux.alibaba.com,
	Jamie Iles, Xin Hao,
	peternewman@google.com, dfustini@baylibre.com, amitsinght@marvell.com,
	David Hildenbrand, Dave Martin, Koba Ko, Shanker Donthineni,
	fenghuay@nvidia.com, baisheng.gao@unisoc.com, Jonathan Cameron,
	Gavin Shan, Ben Horgan, rohit.mathew@arm.com,
	reinette.chatre@intel.com, Punit Agrawal, Zeng Heng, Dave Martin
Subject: [RFC PATCH 16/38] arm_mpam: resctrl: Add support for 'MB' resource
Date: Fri, 5 Dec 2025 21:58:39 +0000
Message-Id: <20251205215901.17772-17-james.morse@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20251205215901.17772-1-james.morse@arm.com>
References: <20251205215901.17772-1-james.morse@arm.com>

resctrl supports 'MB', a percentage throttling of traffic somewhere
after the L3. This is the control that mba_sc uses, so ideally the
class chosen should be as close as possible to the counters used for
mbm_local.

MB's percentage control is backed by the fixed-point fraction MBW_MAX.
The bandwidth portion bitmap is not used, as it is tricky to pick which
bits to use to avoid contention, and it may be possible to expose it as
something other than a percentage in the future.
CC: Zeng Heng
Co-developed-by: Dave Martin
Signed-off-by: Dave Martin
Signed-off-by: James Morse
---
 drivers/resctrl/mpam_resctrl.c | 212 ++++++++++++++++++++++++++++++++-
 1 file changed, 211 insertions(+), 1 deletion(-)

diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 55576d0caf12..b9f3f00d8cad 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -247,6 +247,33 @@ static bool cache_has_usable_cpor(struct mpam_class *class)
 	return (class->props.cpbm_wd <= 32);
 }
 
+static bool mba_class_use_mbw_max(struct mpam_props *cprops)
+{
+	return (mpam_has_feature(mpam_feat_mbw_max, cprops) &&
+		cprops->bwa_wd);
+}
+
+static bool class_has_usable_mba(struct mpam_props *cprops)
+{
+	return mba_class_use_mbw_max(cprops);
+}
+
+/*
+ * Calculate the worst-case percentage change from each implemented step
+ * in the control.
+ */
+static u32 get_mba_granularity(struct mpam_props *cprops)
+{
+	if (!mba_class_use_mbw_max(cprops))
+		return 0;
+
+	/*
+	 * bwa_wd is the number of bits implemented in the 0.xxx
+	 * fixed point fraction. 1 bit is 50%, 2 is 25% etc.
+	 */
+	return DIV_ROUND_UP(MAX_MBA_BW, 1 << cprops->bwa_wd);
+}
+
 /*
  * Each fixed-point hardware value architecturally represents a range
  * of values: the full range 0% - 100% is split contiguously into
@@ -287,6 +314,96 @@ static u16 percent_to_mbw_max(u8 pc, struct mpam_props *cprops)
 	return val;
 }
 
+static u32 get_mba_min(struct mpam_props *cprops)
+{
+	u32 val = 0;
+
+	if (mba_class_use_mbw_max(cprops))
+		val = mbw_max_to_percent(val, cprops);
+	else
+		WARN_ON_ONCE(1);
+
+	return val;
+}
+
+/* Find the L3 cache that has affinity with this CPU */
+static int find_l3_equivalent_bitmask(int cpu, cpumask_var_t tmp_cpumask)
+{
+	u32 cache_id = get_cpu_cacheinfo_id(cpu, 3);
+
+	lockdep_assert_cpus_held();
+
+	return mpam_get_cpumask_from_cache_id(cache_id, 3, tmp_cpumask);
+}
+
+/*
+ * topology_matches_l3() - Is the provided class the same shape as L3
+ * @victim: The class we'd like to pretend is L3.
+ *
+ * resctrl expects all the world's a Xeon, and all counters are on the
+ * L3. We play fast and loose with this, mapping counters on other
+ * classes - provided the CPU->domain mapping is the same kind of shape.
+ *
+ * Using cacheinfo directly would make this work even if resctrl can't
+ * use the L3 - but cacheinfo can't tell us anything about offline CPUs.
+ * Using the L3 resctrl domain list also depends on CPUs being online.
+ * Using the mpam_class we picked for L3 so we can use its domain list
+ * assumes that there are MPAM controls on the L3.
+ * Instead, this path eventually uses the mpam_get_cpumask_from_cache_id()
+ * helper which can tell us about offline CPUs ... but getting the cache_id
+ * to start with relies on at least one CPU per L3 cache being online at
+ * boot.
+ *
+ * Walk the victim component list and compare the affinity mask with the
+ * corresponding L3. The topology matches if each victim:component's
+ * affinity mask is the same as the CPU's corresponding L3's.
+ * These lists/masks are computed from firmware tables so don't change
+ * at runtime.
+ */
+static bool topology_matches_l3(struct mpam_class *victim)
+{
+	int cpu, err;
+	struct mpam_component *victim_iter;
+	cpumask_var_t __free(free_cpumask_var) tmp_cpumask;
+
+	if (!alloc_cpumask_var(&tmp_cpumask, GFP_KERNEL))
+		return false;
+
+	guard(srcu)(&mpam_srcu);
+	list_for_each_entry_srcu(victim_iter, &victim->components, class_list,
+				 srcu_read_lock_held(&mpam_srcu)) {
+		if (cpumask_empty(&victim_iter->affinity)) {
+			pr_debug("class %u has CPU-less component %u - can't match L3!\n",
+				 victim->level, victim_iter->comp_id);
+			return false;
+		}
+
+		cpu = cpumask_any(&victim_iter->affinity);
+		if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
+			return false;
+
+		cpumask_clear(tmp_cpumask);
+		err = find_l3_equivalent_bitmask(cpu, tmp_cpumask);
+		if (err) {
+			pr_debug("Failed to find L3's equivalent component to class %u component %u\n",
+				 victim->level, victim_iter->comp_id);
+			return false;
+		}
+
+		/* Any differing bits in the affinity mask? */
+		if (!cpumask_equal(tmp_cpumask, &victim_iter->affinity)) {
+			pr_debug("class %u component %u has mismatched CPU mask with L3 equivalent\n"
+				 "L3:%*pbl != victim:%*pbl\n",
+				 victim->level, victim_iter->comp_id,
+				 cpumask_pr_args(tmp_cpumask),
+				 cpumask_pr_args(&victim_iter->affinity));
+
+			return false;
+		}
+	}
+
+	return true;
+}
+
 /* Test whether we can export MPAM_CLASS_CACHE:{2,3}?
  */
 static void mpam_resctrl_pick_caches(void)
 {
@@ -330,10 +447,63 @@ static void mpam_resctrl_pick_caches(void)
 	}
 }
 
+static void mpam_resctrl_pick_mba(void)
+{
+	struct mpam_class *class, *candidate_class = NULL;
+	struct mpam_resctrl_res *res;
+
+	lockdep_assert_cpus_held();
+
+	guard(srcu)(&mpam_srcu);
+	list_for_each_entry_srcu(class, &mpam_classes, classes_list,
+				 srcu_read_lock_held(&mpam_srcu)) {
+		struct mpam_props *cprops = &class->props;
+
+		if (class->level < 3) {
+			pr_debug("class %u is before L3\n", class->level);
+			continue;
+		}
+
+		if (!class_has_usable_mba(cprops)) {
+			pr_debug("class %u has no bandwidth control\n",
+				 class->level);
+			continue;
+		}
+
+		if (!cpumask_equal(&class->affinity, cpu_possible_mask)) {
+			pr_debug("class %u has missing CPUs\n", class->level);
+			continue;
+		}
+
+		if (!topology_matches_l3(class)) {
+			pr_debug("class %u topology doesn't match L3\n",
+				 class->level);
+			continue;
+		}
+
+		/*
+		 * mba_sc reads the mbm_local counter, and waggles the MBA
+		 * controls. mbm_local is implicitly part of the L3, so pick a
+		 * resource to be MBA that is as close as possible to the L3.
+		 */
+		if (!candidate_class || class->level < candidate_class->level)
+			candidate_class = class;
+	}
+
+	if (candidate_class) {
+		pr_debug("selected class %u to back MBA\n",
+			 candidate_class->level);
+		res = &mpam_resctrl_controls[RDT_RESOURCE_MBA];
+		res->class = candidate_class;
+		exposed_alloc_capable = true;
+	}
+}
+
 static int mpam_resctrl_control_init(struct mpam_resctrl_res *res,
 				     enum resctrl_res_level type)
 {
 	struct mpam_class *class = res->class;
+	struct mpam_props *cprops = &class->props;
 	struct rdt_resource *r = &res->resctrl_res;
 
 	switch (res->resctrl_res.rid) {
@@ -362,6 +532,20 @@ static int mpam_resctrl_control_init(struct mpam_resctrl_res *res,
 	 * 'all the bits' is the correct answer here.
 	 */
 	r->cache.shareable_bits = resctrl_get_default_ctrl(r);
+		break;
+	case RDT_RESOURCE_MBA:
+		r->alloc_capable = true;
+		r->schema_fmt = RESCTRL_SCHEMA_RANGE;
+		r->ctrl_scope = RESCTRL_L3_CACHE;
+
+		r->membw.delay_linear = true;
+		r->membw.throttle_mode = THREAD_THROTTLE_UNDEFINED;
+		r->membw.min_bw = get_mba_min(cprops);
+		r->membw.max_bw = MAX_MBA_BW;
+		r->membw.bw_gran = get_mba_granularity(cprops);
+
+		r->name = "MB";
+		break;
 	default:
 		break;
@@ -377,7 +561,17 @@ static int mpam_resctrl_pick_domain_id(int cpu, struct mpam_component *comp)
 	if (class->type == MPAM_CLASS_CACHE)
 		return comp->comp_id;
 
-	/* TODO: repaint domain ids to match the L3 domain ids */
+	if (topology_matches_l3(class)) {
+		/* Use the corresponding L3 component ID as the domain ID */
+		int id = get_cpu_cacheinfo_id(cpu, 3);
+
+		/* Implies topology_matches_l3() made a mistake */
+		if (WARN_ON_ONCE(id == -1))
+			return comp->comp_id;
+
+		return id;
+	}
+
 	/*
 	 * Otherwise, expose the ID used by the firmware table code.
 	 */
@@ -419,6 +613,12 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
 	case RDT_RESOURCE_L3:
 		configured_by = mpam_feat_cpor_part;
 		break;
+	case RDT_RESOURCE_MBA:
+		if (mpam_has_feature(mpam_feat_mbw_max, cprops)) {
+			configured_by = mpam_feat_mbw_max;
+			break;
+		}
+		fallthrough;
 	default:
 		return resctrl_get_default_ctrl(r);
 	}
@@ -430,6 +630,8 @@ u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
 	switch (configured_by) {
 	case mpam_feat_cpor_part:
 		return cfg->cpbm;
+	case mpam_feat_mbw_max:
+		return mbw_max_to_percent(cfg->mbw_max, cprops);
 	default:
 		return resctrl_get_default_ctrl(r);
 	}
@@ -474,6 +676,13 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
 		cfg.cpbm = cfg_val;
 		mpam_set_feature(mpam_feat_cpor_part, &cfg);
 		break;
+	case RDT_RESOURCE_MBA:
+		if (mpam_has_feature(mpam_feat_mbw_max, cprops)) {
+			cfg.mbw_max = percent_to_mbw_max(cfg_val, cprops);
+			mpam_set_feature(mpam_feat_mbw_max, &cfg);
+			break;
+		}
+		fallthrough;
 	default:
 		return -EINVAL;
 	}
@@ -743,6 +952,7 @@ int mpam_resctrl_setup(void)
 
 	/* Find some classes to use for controls */
 	mpam_resctrl_pick_caches();
+	mpam_resctrl_pick_mba();
 
 	/* Initialise the resctrl structures from the classes */
 	for (i = 0; i < RDT_NUM_RESOURCES; i++) {
-- 
2.39.5