From: Ben Horgan
To: ben.horgan@arm.com
Cc: amitsinght@marvell.com, baisheng.gao@unisoc.com, baolin.wang@linux.alibaba.com,
    carl@os.amperecomputing.com, dave.martin@arm.com, david@kernel.org,
    dfustini@baylibre.com, fenghuay@nvidia.com, gshan@redhat.com,
    james.morse@arm.com, jonathan.cameron@huawei.com, kobak@nvidia.com,
    lcherian@marvell.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, peternewman@google.com,
    punit.agrawal@oss.qualcomm.com, quic_jiles@quicinc.com,
    reinette.chatre@intel.com, rohit.mathew@arm.com,
    scott@os.amperecomputing.com, sdonthineni@nvidia.com,
    tan.shaopeng@fujitsu.com, xhao@linux.alibaba.com, catalin.marinas@arm.com,
    will@kernel.org, corbet@lwn.net, maz@kernel.org, oupton@kernel.org,
    joey.gouly@arm.com, suzuki.poulose@arm.com, kvmarm@lists.linux.dev
Subject: [PATCH v3 37/47] arm_mpam: resctrl: Update the rmid reallocation limit
Date: Mon, 12 Jan 2026 16:59:04 +0000
Message-ID: <20260112165914.4086692-38-ben.horgan@arm.com>
In-Reply-To: <20260112165914.4086692-1-ben.horgan@arm.com>
References: <20260112165914.4086692-1-ben.horgan@arm.com>

From: James Morse

resctrl's limbo code needs to be told when the data left in a cache is
small enough for the partid+pmg value to be re-allocated.

x86 uses the cache size divided by the number of rmid users the cache
may have. Do the same, but for the smallest cache, and with the number
of partid-and-pmg users.
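As a rough worked example of the rule above (illustrative only: the numbers
are made up and example_rmid_threshold() is a hypothetical helper, not code
from this series), a smallest exposed cache of 32 MB shared between 2048
unique partid+pmg values gives a reallocation threshold of 16 KB:

/* Illustrative sketch of the calculation described above; not part of the patch. */
static inline unsigned int example_rmid_threshold(unsigned int cache_bytes,
						  unsigned int num_unique_pmg)
{
	/* e.g. 32 MB across 2048 partid+pmg users: 33554432 / 2048 == 16384 bytes */
	return cache_bytes / num_unique_pmg;
}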
Reviewed-by: Jonathan Cameron
Signed-off-by: James Morse
Signed-off-by: Ben Horgan
---
Changes since v2:
Move waiting for cache info into its own patch
---
 drivers/resctrl/mpam_resctrl.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
index 5adc78f9c96f..a6be3ce84241 100644
--- a/drivers/resctrl/mpam_resctrl.c
+++ b/drivers/resctrl/mpam_resctrl.c
@@ -561,6 +561,38 @@ void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain *d,
 	reset_mon_cdp_safe(mon, mon_comp, USE_PRE_ALLOCATED, closid, rmid);
 }
 
+/*
+ * The rmid realloc threshold should be for the smallest cache exposed to
+ * resctrl.
+ */
+static int update_rmid_limits(struct mpam_class *class)
+{
+	u32 num_unique_pmg = resctrl_arch_system_num_rmid_idx();
+	struct mpam_props *cprops = &class->props;
+	struct cacheinfo *ci;
+
+	lockdep_assert_cpus_held();
+
+	/* Assume cache levels are the same size for all CPUs... */
+	ci = get_cpu_cacheinfo_level(smp_processor_id(), class->level);
+	if (!ci || ci->size == 0) {
+		pr_debug("Could not read cache size for class %u\n",
+			 class->level);
+		return -EINVAL;
+	}
+
+	if (!mpam_has_feature(mpam_feat_msmon_csu, cprops))
+		return 0;
+
+	if (!resctrl_rmid_realloc_limit ||
+	    ci->size < resctrl_rmid_realloc_limit) {
+		resctrl_rmid_realloc_limit = ci->size;
+		resctrl_rmid_realloc_threshold = ci->size / num_unique_pmg;
+	}
+
+	return 0;
+}
+
 static bool cache_has_usable_cpor(struct mpam_class *class)
 {
 	struct mpam_props *cprops = &class->props;
@@ -1006,6 +1038,9 @@ static void mpam_resctrl_pick_counters(void)
 		/* CSU counters only make sense on a cache. */
 		switch (class->type) {
 		case MPAM_CLASS_CACHE:
+			if (update_rmid_limits(class))
+				continue;
+
 			counter_update_class(QOS_L3_OCCUP_EVENT_ID, class);
 			break;
 		default:
-- 
2.43.0
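For context on how the value set here is consumed, a simplified sketch under
the assumption that the filesystem side keeps the x86-style limbo rule;
example_rmid_is_reusable() is a hypothetical name, not the real limbo code. A
dirty partid+pmg stays parked in limbo until the CSU occupancy reported for it
drops to or below the threshold derived in this patch, after which it may be
handed out again:

#include <linux/resctrl.h>

/*
 * Assumed shape of the limbo check; the real logic lives in the resctrl
 * filesystem code. resctrl_rmid_realloc_threshold is the value this patch
 * derives from the smallest cache size and the number of partid+pmg users.
 */
static bool example_rmid_is_reusable(u64 csu_occupancy_bytes)
{
	return csu_occupancy_bytes <= resctrl_rmid_realloc_threshold;
}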