From nobody Fri Dec 19 18:54:00 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
 James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Tony Luck
Subject: [PATCH v15 01/32] x86,fs/resctrl: Improve domain type checking
Date: Thu, 4 Dec 2025 12:53:31 -0800
Message-ID: <20251204205404.12763-2-tony.luck@intel.com>
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Every resctrl resource has a list of domain structures. struct rdt_ctrl_dom= ain and struct rdt_mon_domain both begin with struct rdt_domain_hdr with rdt_domain_hdr::type used in validity checks before accessing the domain of= a particular type. Add the resource id to struct rdt_domain_hdr in preparation for a new monit= oring domain structure that will be associated with a new monitoring resource. Im= prove existing domain validity checks with a new helper domain_header_is_valid() that checks both domain type and resource id. domain_header_is_valid() sho= uld be used before every call to container_of() that accesses a domain structur= e. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 9 +++++++++ arch/x86/kernel/cpu/resctrl/core.c | 10 ++++++---- fs/resctrl/ctrlmondata.c | 2 +- 3 files changed, 16 insertions(+), 5 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 54701668b3df..e7c218f8d4f7 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -131,15 +131,24 @@ enum resctrl_domain_type { * @list: all instances of this resource * @id: unique id for this instance * @type: type of this instance + * @rid: resource id for this instance * @cpu_mask: which CPUs share this resource */ struct rdt_domain_hdr { struct list_head list; int id; enum resctrl_domain_type type; + enum resctrl_res_level rid; struct cpumask cpu_mask; }; =20 +static inline bool domain_header_is_valid(struct rdt_domain_hdr *hdr, + enum resctrl_domain_type type, + enum resctrl_res_level rid) +{ + return !WARN_ON_ONCE(hdr->type !=3D type || hdr->rid !=3D rid); +} + /** * struct rdt_ctrl_domain - group of CPUs sharing a resctrl control resour= ce * @hdr: common header for different domain types diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 3792ab4819dc..0b8b7b8697a7 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -464,7 +464,7 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) =20 hdr =3D resctrl_find_domain(&r->ctrl_domains, id, &add_pos); if (hdr) { - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_CTRL_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid)) return; d =3D container_of(hdr, struct rdt_ctrl_domain, hdr); =20 @@ -481,6 +481,7 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) d =3D &hw_dom->d_resctrl; d->hdr.id =3D id; d->hdr.type =3D RESCTRL_CTRL_DOMAIN; + d->hdr.rid =3D r->rid; cpumask_set_cpu(cpu, &d->hdr.cpu_mask); =20 rdt_domain_reconfigure_cdp(r); @@ -520,7 +521,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_reso= urce *r) =20 hdr =3D resctrl_find_domain(&r->mon_domains, id, &add_pos); if (hdr) { - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_MON_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) return; d =3D container_of(hdr, struct rdt_mon_domain, hdr); =20 @@ -538,6 +539,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_reso= urce *r) d =3D &hw_dom->d_resctrl; d->hdr.id =3D id; d->hdr.type =3D RESCTRL_MON_DOMAIN; + d->hdr.rid =3D r->rid; ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); @@ -598,7 
+600,7 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_= resource *r) return; } =20 - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_CTRL_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid)) return; =20 d =3D container_of(hdr, struct rdt_ctrl_domain, hdr); @@ -644,7 +646,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) return; } =20 - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_MON_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) return; =20 d =3D container_of(hdr, struct rdt_mon_domain, hdr); diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index b2d178d3556e..905c310de573 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -653,7 +653,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) * the resource to find the domain with "domid". */ hdr =3D resctrl_find_domain(&r->mon_domains, domid, NULL); - if (!hdr || WARN_ON_ONCE(hdr->type !=3D RESCTRL_MON_DOMAIN)) { + if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) { ret =3D -ENOENT; goto out; } --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DC4E9302CBA for ; Thu, 4 Dec 2025 20:54:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881658; cv=none; b=e34VhshVkpB0645pswthzFg7uKfpmpPpfT3X2kfsUzAyiaITEG54n8R6edJpQ/z2bVWr4bdnDBwWpGK6B6NT4J5Voad+tRI0VyQioejgijK9ZDmvJjUlcg7CjLIYwpxBGaoIz08uZCQZW+Ockyvls5bnfSB3jda2OsG64HI6hlA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881658; c=relaxed/simple; bh=LlRdkmpBiMV2KJh/axg+W+2z6l7UPK5CIKtf/6Q8CGY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=B132wPW4b8JOYz7tFSHT8X3OWQ+qSPqCItapDWL+0OVYtbzaBC8eCurVNoJx8tkAUZLLRKrLf8rBHUEMXhi+hasQ86F9IY7sZXT36z3c0d/4Y26thYumBvjo+5xTFcvijytIiGB89KjBgNmgh1xPJ3xmBWpChkQxDz5NJ8Lj2IA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=E1kPaLEv; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="E1kPaLEv" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881656; x=1796417656; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=LlRdkmpBiMV2KJh/axg+W+2z6l7UPK5CIKtf/6Q8CGY=; b=E1kPaLEvR77C3WjKolpUI+ByiEg248Y/R+X7yCyZFE/QwJt0JOfc4rCw yXPXRjdz1PIqmQvgRQCQmN0mAhKT2EW4b2fL6mbDPQXv0bZ0btarGLWLj 9l66cNfU8bQxmwnX3B3i6ZoHMTiFrck0eT0ecbohWpmna1OUd9NIXmZBp +KbTOj5H84pmfEBdIbaGucrdayF7bm1hIc7dCTu+qAoFZ8iDb3VL5/b4x TKjCboQbV3PM9y8uPCx6aRpQNTLLwQZaILfuE+2VjBObE/vMFW84E5h0a KJYLBqH4Mo5MmhLf+H4AmrdMKwWB9oG8eFvM0O98wXreNGM180+2du4E6 g==; X-CSE-ConnectionGUID: dZWmfToBTMquDXc7VWQR/w== X-CSE-MsgGUID: 7m7jqcjJSMuxjaBtkKwiAg== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510842" 
X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510842" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:13 -0800 X-CSE-ConnectionGUID: 8xaxrYGQSD6HdnE2JwrCmQ== X-CSE-MsgGUID: 3tFIg/veTO6iyIzCJbXZvw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752715" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:13 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 02/32] x86/resctrl: Move L3 initialization into new helper function Date: Thu, 4 Dec 2025 12:53:32 -0800 Message-ID: <20251204205404.12763-3-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Carve out the resource monitoring domain init code into a separate helper in order to be able to initialize new types of monitoring domains besides the usual L3 ones. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/core.c | 64 ++++++++++++++++-------------- 1 file changed, 34 insertions(+), 30 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 0b8b7b8697a7..2a568b316711 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -501,37 +501,13 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_r= esource *r) } } =20 -static void domain_add_cpu_mon(int cpu, struct rdt_resource *r) +static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, s= truct list_head *add_pos) { - int id =3D get_domain_id_from_scope(cpu, r->mon_scope); - struct list_head *add_pos =3D NULL; struct rdt_hw_mon_domain *hw_dom; - struct rdt_domain_hdr *hdr; struct rdt_mon_domain *d; struct cacheinfo *ci; int err; =20 - lockdep_assert_held(&domain_list_lock); - - if (id < 0) { - pr_warn_once("Can't find monitor domain id for CPU:%d scope:%d for resou= rce %s\n", - cpu, r->mon_scope, r->name); - return; - } - - hdr =3D resctrl_find_domain(&r->mon_domains, id, &add_pos); - if (hdr) { - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) - return; - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - - cpumask_set_cpu(cpu, &d->hdr.cpu_mask); - /* Update the mbm_assign_mode state for the CPU if supported */ - if (r->mon.mbm_cntr_assignable) - resctrl_arch_mbm_cntr_assign_set_one(r); - return; - } - hw_dom =3D kzalloc_node(sizeof(*hw_dom), GFP_KERNEL, cpu_to_node(cpu)); if (!hw_dom) return; @@ -539,7 +515,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_reso= urce *r) d =3D &hw_dom->d_resctrl; d->hdr.id =3D id; d->hdr.type =3D RESCTRL_MON_DOMAIN; - d->hdr.rid =3D r->rid; + d->hdr.rid =3D RDT_RESOURCE_L3; ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); @@ -549,10 +525,6 @@ static void domain_add_cpu_mon(int cpu, 
struct rdt_res= ource *r) d->ci_id =3D ci->id; cpumask_set_cpu(cpu, &d->hdr.cpu_mask); =20 - /* Update the mbm_assign_mode state for the CPU if supported */ - if (r->mon.mbm_cntr_assignable) - resctrl_arch_mbm_cntr_assign_set_one(r); - arch_mon_domain_online(r, d); =20 if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { @@ -570,6 +542,38 @@ static void domain_add_cpu_mon(int cpu, struct rdt_res= ource *r) } } =20 +static void domain_add_cpu_mon(int cpu, struct rdt_resource *r) +{ + int id =3D get_domain_id_from_scope(cpu, r->mon_scope); + struct list_head *add_pos =3D NULL; + struct rdt_domain_hdr *hdr; + + lockdep_assert_held(&domain_list_lock); + + if (id < 0) { + pr_warn_once("Can't find monitor domain id for CPU:%d scope:%d for resou= rce %s\n", + cpu, r->mon_scope, r->name); + return; + } + + hdr =3D resctrl_find_domain(&r->mon_domains, id, &add_pos); + if (hdr) + cpumask_set_cpu(cpu, &hdr->cpu_mask); + + switch (r->rid) { + case RDT_RESOURCE_L3: + /* Update the mbm_assign_mode state for the CPU if supported */ + if (r->mon.mbm_cntr_assignable) + resctrl_arch_mbm_cntr_assign_set_one(r); + if (!hdr) + l3_mon_domain_setup(cpu, id, r, add_pos); + break; + default: + pr_warn_once("Unknown resource rid=3D%d\n", r->rid); + break; + } +} + static void domain_add_cpu(int cpu, struct rdt_resource *r) { if (r->alloc_capable) --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4CC2E2DCC13 for ; Thu, 4 Dec 2025 20:54:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881658; cv=none; b=JquLouffH+XYtDyyORxZZ4MaF9RdnKw90VAFUiMVRS/7ftP0vWbPqvGkcy+XiuDdwJHknQE0OmhSnVq4iBEoG3T7ckAYRhiRCZpMfnDgtkQ+JFKsriqDWVsDGvZ86EPQEKt0cdqeDXzrXtPW/6c5lo08zRQSh3ufIKKD6LNx/7c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881658; c=relaxed/simple; bh=4TavaMTu7axZkv0Du315ayr07cIT7myhftmXcK2c5+g=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Q9Gg27kYZu3kvWE8VSeJMl0xHruG9vTpKEjsuUwSOq13Rh+WG8jfwRoiaQmjLNGV+uAmUDYkTAE59IxjRaSLZhKEQEuB35ym8aX+4ZbEcNlgLjJXnQa7xEnAVka3wt2yiK/e5qzA9RAMHrl4UM/echw7TSe42SImE7UpZf+R1rQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=JrRlLjvj; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="JrRlLjvj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881656; x=1796417656; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4TavaMTu7axZkv0Du315ayr07cIT7myhftmXcK2c5+g=; b=JrRlLjvjRmj7McaUKYnsgdf0RLm1c4c+Gk7rhlGVR2z8bAUXZ9tpjjA5 ShYFZKyoW/XRbR//xQjZ0h+ifdHsh5UIYkB8nzcewUxAopKWCH51v2Lr5 6KqE6BBHH4vobMST8Ha0lVvN6Uis/CFJu8YqGaGJtYoO/dNwV0ai2+iK7 hBlxXamIS0j9sAcRO6HpfnYMRj4ecBd0wXMO7LpZeSWkSRkElOQNVHLCY 
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
 James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Tony Luck
Subject: [PATCH v15 03/32] x86/resctrl: Refactor domain_remove_cpu_mon() ready for new domain types
Date: Thu, 4 Dec 2025 12:53:33 -0800
Message-ID: <20251204205404.12763-4-tony.luck@intel.com>
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>

New telemetry events will be associated with a new package scoped resource
with a new domain structure. Refactor domain_remove_cpu_mon() so that all
the L3 domain processing is separated from the general domain action of
clearing the CPU bit in the mask.
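Editor's illustration (not part of the patch): a minimal sketch of the
control flow this refactor produces, with error paths trimmed and kernel
context assumed; the real change is in the diff below.

	static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
	{
		struct rdt_domain_hdr *hdr;

		hdr = resctrl_find_domain(&r->mon_domains,
					  get_domain_id_from_scope(cpu, r->mon_scope),
					  NULL);
		if (!hdr)
			return;

		/* Generic part: every domain type tracks CPUs in the header. */
		cpumask_clear_cpu(cpu, &hdr->cpu_mask);
		if (!cpumask_empty(&hdr->cpu_mask))
			return;

		/* Resource specific teardown only when the domain is empty. */
		switch (r->rid) {
		case RDT_RESOURCE_L3:
			/* container_of() back to the L3 domain and free it. */
			break;
		default:
			pr_warn_once("Unknown resource rid=%d\n", r->rid);
			break;
		}
	}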
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/core.c | 27 +++++++++++++++++---------- 1 file changed, 17 insertions(+), 10 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 2a568b316711..49b133e847d4 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -631,9 +631,7 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_= resource *r) static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r) { int id =3D get_domain_id_from_scope(cpu, r->mon_scope); - struct rdt_hw_mon_domain *hw_dom; struct rdt_domain_hdr *hdr; - struct rdt_mon_domain *d; =20 lockdep_assert_held(&domain_list_lock); =20 @@ -650,20 +648,29 @@ static void domain_remove_cpu_mon(int cpu, struct rdt= _resource *r) return; } =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) + cpumask_clear_cpu(cpu, &hdr->cpu_mask); + if (!cpumask_empty(&hdr->cpu_mask)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - hw_dom =3D resctrl_to_arch_mon_dom(d); + switch (r->rid) { + case RDT_RESOURCE_L3: { + struct rdt_hw_mon_domain *hw_dom; + struct rdt_mon_domain *d; =20 - cpumask_clear_cpu(cpu, &d->hdr.cpu_mask); - if (cpumask_empty(&d->hdr.cpu_mask)) { + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + hw_dom =3D resctrl_to_arch_mon_dom(d); resctrl_offline_mon_domain(r, d); - list_del_rcu(&d->hdr.list); + list_del_rcu(&hdr->list); synchronize_rcu(); mon_domain_free(hw_dom); - - return; + break; + } + default: + pr_warn_once("Unknown resource rid=3D%d\n", r->rid); + break; } } =20 --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A39EF2FBDEC for ; Thu, 4 Dec 2025 20:54:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881659; cv=none; b=BoUq9LP31G8ffoctO8l2F7r3HrOH3JL1tbTiV2d4tYrs+ErArSgYG7PvpUdybAxEvrgKhCEVb5uOaol9kna1N/eyqsZaHN7mT+EkC1XBXkFQaKsxblIwXq7tXm3vYY1eCwMRRljx5JWWZSXKaotVbyZAjMTa5ekpDaS63XFwpzg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881659; c=relaxed/simple; bh=u7RmMkbdyc5QHVMIPLc9Jc97zGTH0J2VNf3TiEGFH8k=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YjstYxubpmileymC5jtVt6UKs8zB2iCqR2sdzWyfZLbzdFrfA6z4NYXCikx7uUsKhRvCUOD7gZttQALWa+LgvcO3aDB6fD6OaNXsab949E8fpyc9kUtc7iQbD95meiYZaBSfPlRGKcYDDEDOhWV4r1MseDyhgYS4v83wwMZzKSw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=gdUxVhRF; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="gdUxVhRF" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881658; x=1796417658; 
h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=u7RmMkbdyc5QHVMIPLc9Jc97zGTH0J2VNf3TiEGFH8k=; b=gdUxVhRFecne04Lf7NII1CvxUmEy0HT6aD6cftdJnD6mE55iPXsn8Rfx w9p87wfZsgA26wmgbSM1d8N0HJabcBfr7zjR3Lqe48J1SWYOhYarAQmDk kLt9Wqgk19D3RwFUQJq5pFJJeOLhroVbChGsp563ntOeg0Tu38bqjLPaT H5qIxfEC8fJit7t69ldnzHumMqYZ6AfhWkXud431EkeGxDrhW8RvbV1Mb wWynTMXhoqqdfRE1vEVZS6UTSRHiylVaRQOovJusKr8sRONtBT8cYq84J d4HLT2v8ORIq7wyNyfUuaIWIJqjFiSYjavnndHI2ZFjNm9JNaJ+io3KQ7 g==; X-CSE-ConnectionGUID: TuGL1OLSSYSvZFfi47bg5g== X-CSE-MsgGUID: ua3Q0nFNQaOfWHVaHXwU9Q== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510869" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510869" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:15 -0800 X-CSE-ConnectionGUID: NcU3xXg5TH2Gzt2EMbxyYg== X-CSE-MsgGUID: 0QncwkP0RmyL5VrIa32hxg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752734" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:14 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 04/32] x86/resctrl: Clean up domain_remove_cpu_ctrl() Date: Thu, 4 Dec 2025 12:53:34 -0800 Message-ID: <20251204205404.12763-5-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" For symmetry with domain_remove_cpu_mon() refactor domain_remove_cpu_ctrl() to take an early return when removing a CPU does not empty the domain. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/core.c | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 49b133e847d4..64ed81cbf8bf 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -604,28 +604,27 @@ static void domain_remove_cpu_ctrl(int cpu, struct rd= t_resource *r) return; } =20 + cpumask_clear_cpu(cpu, &hdr->cpu_mask); + if (!cpumask_empty(&hdr->cpu_mask)) + return; + if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid)) return; =20 d =3D container_of(hdr, struct rdt_ctrl_domain, hdr); hw_dom =3D resctrl_to_arch_ctrl_dom(d); =20 - cpumask_clear_cpu(cpu, &d->hdr.cpu_mask); - if (cpumask_empty(&d->hdr.cpu_mask)) { - resctrl_offline_ctrl_domain(r, d); - list_del_rcu(&d->hdr.list); - synchronize_rcu(); - - /* - * rdt_ctrl_domain "d" is going to be freed below, so clear - * its pointer from pseudo_lock_region struct. - */ - if (d->plr) - d->plr->d =3D NULL; - ctrl_domain_free(hw_dom); + resctrl_offline_ctrl_domain(r, d); + list_del_rcu(&hdr->list); + synchronize_rcu(); =20 - return; - } + /* + * rdt_ctrl_domain "d" is going to be freed below, so clear + * its pointer from pseudo_lock_region struct. 
+	 */
+	if (d->plr)
+		d->plr->d = NULL;
+	ctrl_domain_free(hw_dom);
 }
 
 static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
-- 
2.51.1

From nobody Fri Dec 19 18:54:00 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
 James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Tony Luck
Subject: [PATCH v15 05/32] x86,fs/resctrl: Refactor domain create/remove using struct rdt_domain_hdr
Date: Thu, 4 Dec 2025 12:53:35 -0800 Message-ID: <20251204205404.12763-6-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Up until now, all monitoring events were associated with the L3 resource an= d it made sense to use the L3 specific "struct rdt_mon_domain *" argument to fun= ctions operating on domains. Telemetry events will be tied to a new resource with its instances represen= ted by a new domain structure that, just like struct rdt_mon_domain, starts with the generic struct rdt_domain_hdr. Prepare to support domains belonging to different resources by changing the calling convention of functions operating on domains. Pass the generic hea= der and use that to find the domain specific structure where needed. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 4 +- fs/resctrl/internal.h | 2 +- arch/x86/kernel/cpu/resctrl/core.c | 4 +- fs/resctrl/ctrlmondata.c | 14 ++++-- fs/resctrl/rdtgroup.c | 69 +++++++++++++++++++++--------- 5 files changed, 63 insertions(+), 30 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index e7c218f8d4f7..5db37c7e89c5 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -507,9 +507,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, str= uct rdt_ctrl_domain *d, u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain= *d, u32 closid, enum resctrl_conf_type type); int resctrl_online_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_dom= ain *d); -int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domai= n *d); +int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr); void resctrl_offline_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_d= omain *d); -void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_dom= ain *d); +void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr); void resctrl_online_cpu(unsigned int cpu); void resctrl_offline_cpu(unsigned int cpu); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index bff4a54ae333..5e52269b391e 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -362,7 +362,7 @@ void mon_event_count(void *info); int rdtgroup_mondata_show(struct seq_file *m, void *arg); =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, - struct rdt_mon_domain *d, struct rdtgroup *rdtgrp, + struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first); =20 int resctrl_mon_resource_init(void); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 64ed81cbf8bf..1fab4c67d273 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -534,7 +534,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct =20 list_add_tail_rcu(&d->hdr.list, add_pos); =20 - err =3D resctrl_online_mon_domain(r, d); + err =3D resctrl_online_mon_domain(r, &d->hdr); if (err) { list_del_rcu(&d->hdr.list); synchronize_rcu(); @@ -661,7 +661,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) =20 d =3D container_of(hdr, struct rdt_mon_domain, hdr); hw_dom =3D 
resctrl_to_arch_mon_dom(d); - resctrl_offline_mon_domain(r, d); + resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); synchronize_rcu(); mon_domain_free(hw_dom); diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 905c310de573..3154cdc98a31 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -551,14 +551,21 @@ struct rdt_domain_hdr *resctrl_find_domain(struct lis= t_head *h, int id, } =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, - struct rdt_mon_domain *d, struct rdtgroup *rdtgrp, + struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first) { + struct rdt_mon_domain *d =3D NULL; int cpu; =20 /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 + if (hdr) { + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + } + /* * Setup the parameters to pass to mon_event_count() to read the data. */ @@ -653,12 +660,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *a= rg) * the resource to find the domain with "domid". */ hdr =3D resctrl_find_domain(&r->mon_domains, domid, NULL); - if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) { + if (!hdr) { ret =3D -ENOENT; goto out; } - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - mon_event_read(&rr, r, d, rdtgrp, &d->hdr.cpu_mask, evtid, false); + mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false); } =20 checkresult: diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 8e39dfda56bc..89ffe54fb0fc 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3229,17 +3229,22 @@ static void mon_rmdir_one_subdir(struct kernfs_node= *pkn, char *name, char *subn * when last domain being summed is removed. */ static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, - struct rdt_mon_domain *d) + struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; + struct rdt_mon_domain *d; char subname[32]; bool snc_mode; char name[32]; =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); + sprintf(name, "mon_%s_%02d", r->name, snc_mode ? 
d->ci_id : hdr->id); if (snc_mode) - sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id); + sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mon_rmdir_one_subdir(prgrp->mon.mon_data_kn, name, subname); @@ -3249,15 +3254,20 @@ static void rmdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, } } =20 -static int mon_add_all_files(struct kernfs_node *kn, struct rdt_mon_domain= *d, +static int mon_add_all_files(struct kernfs_node *kn, struct rdt_domain_hdr= *hdr, struct rdt_resource *r, struct rdtgroup *prgrp, bool do_sum) { struct rmid_read rr =3D {0}; + struct rdt_mon_domain *d; struct mon_data *priv; struct mon_evt *mevt; int ret, domid; =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); for_each_mon_event(mevt) { if (mevt->rid !=3D r->rid || !mevt->enabled) continue; @@ -3271,23 +3281,28 @@ static int mon_add_all_files(struct kernfs_node *kn= , struct rdt_mon_domain *d, return ret; =20 if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) - mon_event_read(&rr, r, d, prgrp, &d->hdr.cpu_mask, mevt->evtid, true); + mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true); } =20 return 0; } =20 static int mkdir_mondata_subdir(struct kernfs_node *parent_kn, - struct rdt_mon_domain *d, + struct rdt_domain_hdr *hdr, struct rdt_resource *r, struct rdtgroup *prgrp) { struct kernfs_node *kn, *ckn; + struct rdt_mon_domain *d; char name[32]; bool snc_mode; int ret =3D 0; =20 lockdep_assert_held(&rdtgroup_mutex); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); kn =3D kernfs_find_and_get(parent_kn, name); @@ -3305,13 +3320,13 @@ static int mkdir_mondata_subdir(struct kernfs_node = *parent_kn, ret =3D rdtgroup_kn_set_ugid(kn); if (ret) goto out_destroy; - ret =3D mon_add_all_files(kn, d, r, prgrp, snc_mode); + ret =3D mon_add_all_files(kn, hdr, r, prgrp, snc_mode); if (ret) goto out_destroy; } =20 if (snc_mode) { - sprintf(name, "mon_sub_%s_%02d", r->name, d->hdr.id); + sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id); ckn =3D kernfs_create_dir(kn, name, parent_kn->mode, prgrp); if (IS_ERR(ckn)) { ret =3D -EINVAL; @@ -3322,7 +3337,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, if (ret) goto out_destroy; =20 - ret =3D mon_add_all_files(ckn, d, r, prgrp, false); + ret =3D mon_add_all_files(ckn, hdr, r, prgrp, false); if (ret) goto out_destroy; } @@ -3340,7 +3355,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, * and "monitor" groups with given domain id. 
*/ static void mkdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, - struct rdt_mon_domain *d) + struct rdt_domain_hdr *hdr) { struct kernfs_node *parent_kn; struct rdtgroup *prgrp, *crgrp; @@ -3348,12 +3363,12 @@ static void mkdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { parent_kn =3D prgrp->mon.mon_data_kn; - mkdir_mondata_subdir(parent_kn, d, r, prgrp); + mkdir_mondata_subdir(parent_kn, hdr, r, prgrp); =20 head =3D &prgrp->mon.crdtgrp_list; list_for_each_entry(crgrp, head, mon.crdtgrp_list) { parent_kn =3D crgrp->mon.mon_data_kn; - mkdir_mondata_subdir(parent_kn, d, r, crgrp); + mkdir_mondata_subdir(parent_kn, hdr, r, crgrp); } } } @@ -3362,14 +3377,14 @@ static int mkdir_mondata_subdir_alldom(struct kernf= s_node *parent_kn, struct rdt_resource *r, struct rdtgroup *prgrp) { - struct rdt_mon_domain *dom; + struct rdt_domain_hdr *hdr; int ret; =20 /* Walking r->domains, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 - list_for_each_entry(dom, &r->mon_domains, hdr.list) { - ret =3D mkdir_mondata_subdir(parent_kn, dom, r, prgrp); + list_for_each_entry(hdr, &r->mon_domains, list) { + ret =3D mkdir_mondata_subdir(parent_kn, hdr, r, prgrp); if (ret) return ret; } @@ -4253,16 +4268,23 @@ void resctrl_offline_ctrl_domain(struct rdt_resourc= e *r, struct rdt_ctrl_domain mutex_unlock(&rdtgroup_mutex); } =20 -void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_dom= ain *d) +void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr) { + struct rdt_mon_domain *d; + mutex_lock(&rdtgroup_mutex); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + goto out_unlock; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + /* * If resctrl is mounted, remove all the * per domain monitor data directories. */ if (resctrl_mounted && resctrl_arch_mon_capable()) - rmdir_mondata_subdir_allrdtgrp(r, d); + rmdir_mondata_subdir_allrdtgrp(r, hdr); =20 if (resctrl_is_mbm_enabled()) cancel_delayed_work(&d->mbm_over); @@ -4280,7 +4302,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_mon_domain *d } =20 domain_destroy_mon_state(d); - +out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 @@ -4353,12 +4375,17 @@ int resctrl_online_ctrl_domain(struct rdt_resource = *r, struct rdt_ctrl_domain *d return err; } =20 -int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domai= n *d) +int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr) { - int err; + struct rdt_mon_domain *d; + int err =3D -EINVAL; =20 mutex_lock(&rdtgroup_mutex); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + goto out_unlock; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); err =3D domain_setup_mon_state(r, d); if (err) goto out_unlock; @@ -4379,7 +4406,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_mon_domain *d) * If resctrl is mounted, add per domain monitor data directories. 
*/ if (resctrl_mounted && resctrl_arch_mon_capable()) - mkdir_mondata_subdir_allrdtgrp(r, d); + mkdir_mondata_subdir_allrdtgrp(r, hdr); =20 out_unlock: mutex_unlock(&rdtgroup_mutex); --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 54651307AC0 for ; Thu, 4 Dec 2025 20:54:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881661; cv=none; b=iZeRqiDv7IxgPLdgbamKae2kGqnKho4f+kFKQ8ZQ0YrmGmr/2e1G0GbtwfkJ5iLlYNsVEUYQspfBFEjJ7L9uJ5/J3gWKMfLkESD92zKPTIZJvxpKavniOHVmFzpc5+9cjXkJax45IdVQo/jcSkGas+A+rJ5/ZXpZvq26kcxbIUQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881661; c=relaxed/simple; bh=ZWm7hPc2pA5FN4dZKCDkSqVyPN+dNMQhermj0QSQ+fo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uO0WqjPde/dGiKNjQnm/Ne49WCXymgq8uOAGH2qSvFO9azuFEkf0s0uZnJBsffgdIpOzNlG1pxS4AjFbEAQUvIq5F2oqPcQIKqkc0MlnOJzOQ5R+PIhYtebMRDWLUdVpX5sXNgQJiMXvA3yN31TwCUCIzj28GC09w7DcKymeaLo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SRdEQtBD; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SRdEQtBD" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881658; x=1796417658; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ZWm7hPc2pA5FN4dZKCDkSqVyPN+dNMQhermj0QSQ+fo=; b=SRdEQtBDwdzQwHiZk3BHENOKB203pk7WXOlfv2vKj2DLeCBlHAWC5/Wx 8/L9GPMamKmTzYk+2iiJ19aKHGa6MLy6XGXc9JFKhxDab27t5JVAf5ZNx /rXypVADOOgsFP7cnB4qTPmgtomKXLt6QVxbJnl/30RX2/QlP8IwKFLL+ 9hHNA0QE8RAnXrdrcOApC8PQ2zh/PI77/FjYcVylgYrJ45Y1HhzMcmwJS cyJIUsElhIGG/pya/2xYcMmXarFmMZchODxL2BL5uPGpPAGUzVOlv0ypr iuuZVo//HKh6jVJS83Qmnt/g7wrAjUTiT1oyKncnJW8rZaEvpeFKA709i Q==; X-CSE-ConnectionGUID: RrmyOyC8TU+GeONeoEhWYQ== X-CSE-MsgGUID: m/nk7Sj2T8eH2S6ZgISW9A== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510886" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510886" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:16 -0800 X-CSE-ConnectionGUID: pyinPHR5SquCoeITzYp56w== X-CSE-MsgGUID: SYTCYtiMTyaDagPWLQMX8g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752743" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:16 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 06/32] fs/resctrl: Split L3 dependent 
parts out of __mon_event_count() Date: Thu, 4 Dec 2025 12:53:36 -0800 Message-ID: <20251204205404.12763-7-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Carve out the L3 resource specific event reading code into a separate helper to support reading event data from a new monitoring resource. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/monitor.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 572a9925bd6c..b5e0db38c8bf 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -413,7 +413,7 @@ static void mbm_cntr_free(struct rdt_mon_domain *d, int= cntr_id) memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg)); } =20 -static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr) +static int __l3_mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read = *rr) { int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; @@ -494,6 +494,17 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) return ret; } =20 +static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr) +{ + switch (rr->r->rid) { + case RDT_RESOURCE_L3: + return __l3_mon_event_count(rdtgrp, rr); + default: + rr->err =3D -EINVAL; + return -EINVAL; + } +} + /* * mbm_bw_count() - Update bw count from values previously read by * __mon_event_count(). --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE877306D2A for ; Thu, 4 Dec 2025 20:54:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881662; cv=none; b=Nhyjg3J7uQxDNJgS6kfD6JGi50C1AqwSkYvhwXPWSTIkz71jueb4u2KJ3aSKUfNavyY6TMKds500j7dl8lvppl1omSuJSjRO1/X2JaTfJU4fVGRRcEDO2ESm4rBebGy+4NJxNozffYnah458qL0lHmmF+X6gV+pSgmzJsVi0t1o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881662; c=relaxed/simple; bh=us6eImwWdXZuqZLqhkG/Vf1Um0pCNWlZ9qBUGyeULYE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=KULkn0zKfblapl16/09qLz0ccOc4yswt/Kfed6OvWpWoJBq2H5MW0+JBU++Ra7c2Tze4vqUtRtmP9cKoPetze/qhSHdJ/Q20uuiHJpo7vdivapJsUcch2G6xNBcUoafdxog55X2GS097Qn3U6tcd0es3RABlg0zeFrSUqfjHpVg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=OPYl2lWH; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="OPYl2lWH" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; 
t=1764881660; x=1796417660; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=us6eImwWdXZuqZLqhkG/Vf1Um0pCNWlZ9qBUGyeULYE=; b=OPYl2lWHXq/BiZ4fXImg0CJeaakuppew0P4M5tzO+PO41F9etVkng/Eu tzRonb8hpyh2XaEPvWlloR1vKNw3iX8BLC3zaGz0vYs8aZVQEkRHe57UH +oxis/wo2mykmb9jxmqn0wWu1hpcyI1/WZb+4ySZCOcgMWoBxkChgsopj cDwPgTBGMsmQY1hgjPofOwOPrxPyxQSQ4I/CTTNoTtd/rjetlcQFcXfVH FssW2XwxslR0lg8wTeV+qd317gsfvGO/tzgfJndk5vLnXZmRWujiQOIMs fuT1akluh4C0xxRj4OX7p1zt8xbkWC//Fzl9bHL+VZ2uCnwukgNSxO+lr Q==; X-CSE-ConnectionGUID: YYfIQoeQT1KlsSBOqbzJmQ== X-CSE-MsgGUID: D3u/WAnKRmiJeH7Tt/8N2g== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510896" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510896" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:17 -0800 X-CSE-ConnectionGUID: PTlEsrqcRYimqsXKR40EDg== X-CSE-MsgGUID: LDPvrV7bSpeS/dFShoEEWg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752747" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:16 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 07/32] x86,fs/resctrl: Use struct rdt_domain_hdr when reading counters Date: Thu, 4 Dec 2025 12:53:37 -0800 Message-ID: <20251204205404.12763-8-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert the whole call sequence from mon_event_read() to resctrl_arch_rmid_= read() to pass resource independent struct rdt_domain_hdr instead of an L3 specific domain structure to prepare for monitoring events in other resources. Signed-off-by: Tony Luck --- include/linux/resctrl.h | 4 +- fs/resctrl/internal.h | 18 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 12 +++- fs/resctrl/ctrlmondata.c | 9 +-- fs/resctrl/monitor.c | 85 ++++++++++++++++++--------- 5 files changed, 78 insertions(+), 50 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 5db37c7e89c5..9b9877fb3238 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -517,7 +517,7 @@ void resctrl_offline_cpu(unsigned int cpu); * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rm= id * for this resource and domain. * @r: resource that the counter should be read from. - * @d: domain that the counter should be read from. + * @hdr: Header of domain that the counter should be read from. * @closid: closid that matches the rmid. Depending on the architecture, = the * counter may match traffic of both @closid and @rmid, or @rmid * only. @@ -538,7 +538,7 @@ void resctrl_offline_cpu(unsigned int cpu); * Return: * 0 on success, or -EIO, -EINVAL etc on error. 
*/ -int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *arch_mon_ctx); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 5e52269b391e..9912b774a580 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -106,24 +106,26 @@ struct mon_data { * resource group then its event count is summed with the count from all * its child resource groups. * @r: Resource describing the properties of the event being read. - * @d: Domain that the counter should be read from. If NULL then sum all - * domains in @r sharing L3 @ci.id + * @hdr: Header of domain that the counter should be read from. If NULL = then + * sum all domains in @r sharing L3 @ci.id * @evtid: Which monitor event to read. * @first: Initialize MBM counter when true. - * @ci: Cacheinfo for L3. Only set when @d is NULL. Used when summing d= omains. + * @ci: Cacheinfo for L3. Only set when @hdr is NULL. Used when summing + * domains. * @is_mbm_cntr: true if "mbm_event" counter assignment mode is enabled an= d it * is an MBM event. * @err: Error encountered when reading counter. - * @val: Returned value of event counter. If @rgrp is a parent resource = group, - * @val includes the sum of event counts from its child resource groups. - * If @d is NULL, @val includes the sum of all domains in @r sharing @c= i.id, - * (summed across child resource groups if @rgrp is a parent resource g= roup). + * @val: Returned value of event counter. If @rgrp is a parent resource + * group, @val includes the sum of event counts from its child + * resource groups. If @hdr is NULL, @val includes the sum of all + * domains in @r sharing @ci.id, (summed across child resource groups + * if @rgrp is a parent resource group). * @arch_mon_ctx: Hardware monitor allocated for this read request (MPAM o= nly). 
*/ struct rmid_read { struct rdtgroup *rgrp; struct rdt_resource *r; - struct rdt_mon_domain *d; + struct rdt_domain_hdr *hdr; enum resctrl_event_id evtid; bool first; struct cacheinfo *ci; diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index dffcc8307500..3da970ea1903 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -238,19 +238,25 @@ static u64 get_corrected_val(struct rdt_resource *r, = struct rdt_mon_domain *d, return chunks * hw_res->mon_scale; } =20 -int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *ignored) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); - int cpu =3D cpumask_any(&d->hdr.cpu_mask); + struct rdt_hw_mon_domain *hw_dom; struct arch_mbm_state *am; + struct rdt_mon_domain *d; u64 msr_val; u32 prmid; + int cpu; int ret; =20 resctrl_arch_rmid_read_context_check(); + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; =20 + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + hw_dom =3D resctrl_to_arch_mon_dom(d); + cpu =3D cpumask_any(&hdr->cpu_mask); prmid =3D logical_rmid_to_physical_rmid(cpu, rmid); ret =3D __rmid_read_phys(prmid, eventid, &msr_val); =20 diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 3154cdc98a31..9242a2982e77 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -554,25 +554,18 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first) { - struct rdt_mon_domain *d =3D NULL; int cpu; =20 /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 - if (hdr) { - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return; - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - } - /* * Setup the parameters to pass to mon_event_count() to read the data. 
*/ rr->rgrp =3D rdtgrp; rr->evtid =3D evtid; rr->r =3D r; - rr->d =3D d; + rr->hdr =3D hdr; rr->first =3D first; if (resctrl_arch_mbm_cntr_assign_enabled(r) && resctrl_is_mbm_event(evtid)) { diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index b5e0db38c8bf..e1c12201388f 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -159,7 +159,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force= _free) break; =20 entry =3D __rmid_entry(idx); - if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid, + if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid, QOS_L3_OCCUP_EVENT_ID, &val, arch_mon_ctx)) { rmid_dirty =3D true; @@ -421,11 +421,16 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) struct rdt_mon_domain *d; int cntr_id =3D -ENOENT; struct mbm_state *m; - int err, ret; u64 tval =3D 0; =20 + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)= ) { + rr->err =3D -EIO; + return -EINVAL; + } + d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + if (rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, rr->d, rdtgrp, rr->evtid); + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); if (cntr_id < 0) { rr->err =3D -ENOENT; return -EINVAL; @@ -434,31 +439,50 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) =20 if (rr->first) { if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, rr->d, closid, rmid, cntr_id, rr->evtid); + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evtid); else - resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid); - m =3D get_mbm_state(rr->d, closid, rmid, rr->evtid); + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (m) memset(m, 0, sizeof(struct mbm_state)); return 0; } =20 - if (rr->d) { - /* Reading a single domain, must be on a CPU in that domain. */ - if (!cpumask_test_cpu(cpu, &rr->d->hdr.cpu_mask)) - return -EINVAL; - if (rr->is_mbm_cntr) - rr->err =3D resctrl_arch_cntr_read(rr->r, rr->d, closid, rmid, cntr_id, - rr->evtid, &tval); - else - rr->err =3D resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); - if (rr->err) - return rr->err; + /* Reading a single domain, must be on a CPU in that domain. */ + if (!cpumask_test_cpu(cpu, &d->hdr.cpu_mask)) + return -EINVAL; + if (rr->is_mbm_cntr) + rr->err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, + rr->evtid, &tval); + else + rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, + rr->evtid, &tval, rr->arch_mon_ctx); + if (rr->err) + return rr->err; =20 - rr->val +=3D tval; + rr->val +=3D tval; =20 - return 0; + return 0; +} + +static int __l3_mon_event_count_sum(struct rdtgroup *rdtgrp, struct rmid_r= ead *rr) +{ + int cpu =3D smp_processor_id(); + u32 closid =3D rdtgrp->closid; + u32 rmid =3D rdtgrp->mon.rmid; + struct rdt_mon_domain *d; + u64 tval =3D 0; + int err, ret; + + /* + * Summing across domains is only done for systems that implement + * Sub-NUMA Cluster. There is no overlap with systems that support + * assignable counters. + */ + if (rr->is_mbm_cntr) { + pr_warn_once("Summing domains using assignable counters is not supported= \n"); + rr->err =3D -EINVAL; + return -EINVAL; } =20 /* Summing domains that share a cache, must be on a CPU for that cache. 
*/ @@ -476,12 +500,8 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgr= p, struct rmid_read *rr) list_for_each_entry(d, &rr->r->mon_domains, hdr.list) { if (d->ci_id !=3D rr->ci->id) continue; - if (rr->is_mbm_cntr) - err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, - rr->evtid, &tval); - else - err =3D resctrl_arch_rmid_read(rr->r, d, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, + rr->evtid, &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -498,7 +518,10 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) { switch (rr->r->rid) { case RDT_RESOURCE_L3: - return __l3_mon_event_count(rdtgrp, rr); + if (rr->hdr) + return __l3_mon_event_count(rdtgrp, rr); + else + return __l3_mon_event_count_sum(rdtgrp, rr); default: rr->err =3D -EINVAL; return -EINVAL; @@ -522,9 +545,13 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, stru= ct rmid_read *rr) u64 cur_bw, bytes, cur_bytes; u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; + struct rdt_mon_domain *d; struct mbm_state *m; =20 - m =3D get_mbm_state(rr->d, closid, rmid, rr->evtid); + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (WARN_ON_ONCE(!m)) return; =20 @@ -697,7 +724,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_mon_domain * struct rmid_read rr =3D {0}; =20 rr.r =3D r; - rr.d =3D d; + rr.hdr =3D &d->hdr; rr.evtid =3D evtid; if (resctrl_arch_mbm_cntr_assign_enabled(r)) { rr.is_mbm_cntr =3D true; --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EE08F3064AF for ; Thu, 4 Dec 2025 20:54:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881665; cv=none; b=eVPK44oHJ4LclNSTIJOLxNQM4ssOUlIoGdj4xyJp4JT14/Dh8fahD3dg+w+TpPLD+tok6GM/WuYfNmu3+H/OKOQOQy4+zSoa00ukVFWvj9QFkd+AyMEMZFZv48LMLQAbuYQwYFth8zeDuzMtgD4uK1p33P5o2f7Z96oEaq+abA0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881665; c=relaxed/simple; bh=7mCt0VWpdiplhTx0pZzTzHwWXbyemZjX9Z5KPpXP09o=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=DyEuw9x9qfQhOiXEXzWmONISNOY3GWL772Oe1PZuiogUzLVc9pvsOW38Spl0nLCk+duFBK1VKm42fpRe7oYlA5iLAkDiZ3f/oLsh1ksVv9cPHdLhX2n7F0Ub8Xz7FSZXTi6IS/dt9MXza7mK+Xgr4gcZMmeSht3oDXQ+wiI3bqM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Mh6UpV6b; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Mh6UpV6b" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881661; x=1796417661; 
h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7mCt0VWpdiplhTx0pZzTzHwWXbyemZjX9Z5KPpXP09o=; b=Mh6UpV6bvokRR7kG2sx9lq3of40upCKsd8Qw6yd9QUswfVf0Q+11vSqI K23VdNj7m2zsxb3pjfJGXNuaHTAqty6EdejrPAlO8pjel6KgQNB8yiTHB bIMKxOp+EKretD+DIEPzbuUTE8g1nMeZvBA2juhdaEEpYWxhJP54YaARZ s67NrB7j6vkW26XuX4wtNdzrzuURmaSSimLRcTGLz997lAM2jkNfbP8pq FqQRIiipQwF5Y8YXVDfXoaOw3MiqEHsOi8KDOVsWrYqaow5SqMohj6OMF iLIIGnBu5NS1YFSG5i1aOgXl3K4Zxje6gpxBRxkHisW/lg4U7bXb+2Hdk A==; X-CSE-ConnectionGUID: 2YH7Ce+jSt6ER3Oy6GwjcA== X-CSE-MsgGUID: hT24EQxPT/Cra2+OQBtKLQ== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510906" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510906" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:18 -0800 X-CSE-ConnectionGUID: E1MX82lnT+63RDOt4NStRA== X-CSE-MsgGUID: z8qPLXXZSMKYbP8r1tDWKw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752751" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:17 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 08/32] x86,fs/resctrl: Rename struct rdt_mon_domain and rdt_hw_mon_domain Date: Thu, 4 Dec 2025 12:53:38 -0800 Message-ID: <20251204205404.12763-9-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The upcoming telemetry event monitoring is not tied to the L3 resource and will have a new domain structure. Rename the L3 resource specific domain data structures to include "l3_" in their names to avoid confusion between the different resource specific domain structures: rdt_mon_domain -> rdt_l3_mon_domain rdt_hw_mon_domain -> rdt_hw_l3_mon_domain No functional change. 
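(Illustrative sketch, not part of this patch: after the rename, callers that
hold a generic struct rdt_domain_hdr resolve it to the L3-specific type with
the existing domain_header_is_valid() + container_of() pattern. The wrapper
name below is hypothetical and assumes the declarations from
<linux/resctrl.h>; it only demonstrates the renamed type.)

	/* Hypothetical helper, shown only to illustrate the renamed type. */
	static struct rdt_l3_mon_domain *
	sketch_hdr_to_l3_mon_domain(struct rdt_domain_hdr *hdr)
	{
		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
			return NULL;	/* Not an L3 monitoring domain. */

		return container_of(hdr, struct rdt_l3_mon_domain, hdr);
	}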
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 22 ++++---- arch/x86/kernel/cpu/resctrl/internal.h | 16 +++--- fs/resctrl/internal.h | 8 +-- arch/x86/kernel/cpu/resctrl/core.c | 14 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 36 ++++++------- fs/resctrl/ctrlmondata.c | 2 +- fs/resctrl/monitor.c | 70 +++++++++++++------------- fs/resctrl/rdtgroup.c | 40 +++++++-------- 8 files changed, 104 insertions(+), 104 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 9b9877fb3238..79aaaabcdd3f 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -178,7 +178,7 @@ struct mbm_cntr_cfg { }; =20 /** - * struct rdt_mon_domain - group of CPUs sharing a resctrl monitor resource + * struct rdt_l3_mon_domain - group of CPUs sharing RDT_RESOURCE_L3 monito= ring * @hdr: common header for different domain types * @ci_id: cache info id for this domain * @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold @@ -192,7 +192,7 @@ struct mbm_cntr_cfg { * @cntr_cfg: array of assignable counters' configuration (indexed * by counter ID) */ -struct rdt_mon_domain { +struct rdt_l3_mon_domain { struct rdt_domain_hdr hdr; unsigned int ci_id; unsigned long *rmid_busy_llc; @@ -367,10 +367,10 @@ struct resctrl_cpu_defaults { }; =20 struct resctrl_mon_config_info { - struct rdt_resource *r; - struct rdt_mon_domain *d; - u32 evtid; - u32 mon_config; + struct rdt_resource *r; + struct rdt_l3_mon_domain *d; + u32 evtid; + u32 mon_config; }; =20 /** @@ -585,7 +585,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_= head *h, int id, * * This can be called from any CPU. */ -void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 closid, u32 rmid, enum resctrl_event_id eventid); =20 @@ -598,7 +598,7 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, st= ruct rdt_mon_domain *d, * * This can be called from any CPU. */ -void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_do= main *d); +void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon= _domain *d); =20 /** * resctrl_arch_reset_all_ctrls() - Reset the control for each CLOSID to i= ts @@ -624,7 +624,7 @@ void resctrl_arch_reset_all_ctrls(struct rdt_resource *= r); * * This can be called from any CPU. */ -void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domai= n *d, +void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_do= main *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign); =20 @@ -647,7 +647,7 @@ void resctrl_arch_config_cntr(struct rdt_resource *r, s= truct rdt_mon_domain *d, * Return: * 0 on success, or -EIO, -EINVAL etc on error. */ -int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_l3_mon_domai= n *d, u32 closid, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val); =20 @@ -662,7 +662,7 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, stru= ct rdt_mon_domain *d, * * This can be called from any CPU. 
*/ -void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 closid, u32 rmid, int cntr_id, enum resctrl_event_id eventid); =20 diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 4a916c84a322..d73c0adf1026 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -63,17 +63,17 @@ struct rdt_hw_ctrl_domain { }; =20 /** - * struct rdt_hw_mon_domain - Arch private attributes of a set of CPUs tha= t share - * a resource for a monitor function - * @d_resctrl: Properties exposed to the resctrl file system + * struct rdt_hw_l3_mon_domain - Arch private attributes of a set of CPUs = sharing + * RDT_RESOURCE_L3 monitoring + * @d_resctrl: Properties exposed to the resctrl file system * @arch_mbm_states: Per-event pointer to the MBM event's saved state. * An MBM event's state is an array of struct arch_mbm_state * indexed by RMID on x86. * * Members of this structure are accessed via helpers that provide abstrac= tion. */ -struct rdt_hw_mon_domain { - struct rdt_mon_domain d_resctrl; +struct rdt_hw_l3_mon_domain { + struct rdt_l3_mon_domain d_resctrl; struct arch_mbm_state *arch_mbm_states[QOS_NUM_L3_MBM_EVENTS]; }; =20 @@ -82,9 +82,9 @@ static inline struct rdt_hw_ctrl_domain *resctrl_to_arch_= ctrl_dom(struct rdt_ctr return container_of(r, struct rdt_hw_ctrl_domain, d_resctrl); } =20 -static inline struct rdt_hw_mon_domain *resctrl_to_arch_mon_dom(struct rdt= _mon_domain *r) +static inline struct rdt_hw_l3_mon_domain *resctrl_to_arch_mon_dom(struct = rdt_l3_mon_domain *r) { - return container_of(r, struct rdt_hw_mon_domain, d_resctrl); + return container_of(r, struct rdt_hw_l3_mon_domain, d_resctrl); } =20 /** @@ -140,7 +140,7 @@ static inline struct rdt_hw_resource *resctrl_to_arch_r= es(struct rdt_resource *r =20 extern struct rdt_hw_resource rdt_resources_all[]; =20 -void arch_mon_domain_online(struct rdt_resource *r, struct rdt_mon_domain = *d); +void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_doma= in *d); =20 /* CPUID.(EAX=3D10H, ECX=3DResID=3D1).EAX */ union cpuid_0x10_1_eax { diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 9912b774a580..af47b6ddef62 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -369,7 +369,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, =20 int resctrl_mon_resource_init(void); =20 -void mbm_setup_overflow_handler(struct rdt_mon_domain *dom, +void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, int exclude_cpu); =20 @@ -377,14 +377,14 @@ void mbm_handle_overflow(struct work_struct *work); =20 bool is_mba_sc(struct rdt_resource *r); =20 -void cqm_setup_limbo_handler(struct rdt_mon_domain *dom, unsigned long del= ay_ms, +void cqm_setup_limbo_handler(struct rdt_l3_mon_domain *dom, unsigned long = delay_ms, int exclude_cpu); =20 void cqm_handle_limbo(struct work_struct *work); =20 -bool has_busy_rmid(struct rdt_mon_domain *d); +bool has_busy_rmid(struct rdt_l3_mon_domain *d); =20 -void __check_limbo(struct rdt_mon_domain *d, bool force_free); +void __check_limbo(struct rdt_l3_mon_domain *d, bool force_free); =20 void resctrl_file_fflags_init(const char *config, unsigned long fflags); =20 diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 1fab4c67d273..cc1b846f9645 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ 
b/arch/x86/kernel/cpu/resctrl/core.c @@ -368,7 +368,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain = *hw_dom) kfree(hw_dom); } =20 -static void mon_domain_free(struct rdt_hw_mon_domain *hw_dom) +static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) { int idx; =20 @@ -405,7 +405,7 @@ static int domain_setup_ctrlval(struct rdt_resource *r,= struct rdt_ctrl_domain * * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays */ -static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_mon_domain *h= w_dom) +static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain= *hw_dom) { size_t tsize =3D sizeof(*hw_dom->arch_mbm_states[0]); enum resctrl_event_id eventid; @@ -503,8 +503,8 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) =20 static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, s= truct list_head *add_pos) { - struct rdt_hw_mon_domain *hw_dom; - struct rdt_mon_domain *d; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; struct cacheinfo *ci; int err; =20 @@ -653,13 +653,13 @@ static void domain_remove_cpu_mon(int cpu, struct rdt= _resource *r) =20 switch (r->rid) { case RDT_RESOURCE_L3: { - struct rdt_hw_mon_domain *hw_dom; - struct rdt_mon_domain *d; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); hw_dom =3D resctrl_to_arch_mon_dom(d); resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 3da970ea1903..04b8f1e1f314 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -109,7 +109,7 @@ static inline u64 get_corrected_mbm_count(u32 rmid, uns= igned long val) * * In RMID sharing mode there are fewer "logical RMID" values available * to accumulate data ("physical RMIDs" are divided evenly between SNC - * nodes that share an L3 cache). Linux creates an rdt_mon_domain for + * nodes that share an L3 cache). Linux creates an rdt_l3_mon_domain for * each SNC node. * * The value loaded into IA32_PQR_ASSOC is the "logical RMID". @@ -157,7 +157,7 @@ static int __rmid_read_phys(u32 prmid, enum resctrl_eve= nt_id eventid, u64 *val) return 0; } =20 -static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_mon_domain = *hw_dom, +static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_l3_mon_doma= in *hw_dom, u32 rmid, enum resctrl_event_id eventid) { @@ -171,11 +171,11 @@ static struct arch_mbm_state *get_arch_mbm_state(stru= ct rdt_hw_mon_domain *hw_do return state ? &state[rmid] : NULL; } =20 -void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 unused, u32 rmid, enum resctrl_event_id eventid) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); int cpu =3D cpumask_any(&d->hdr.cpu_mask); struct arch_mbm_state *am; u32 prmid; @@ -194,9 +194,9 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, st= ruct rdt_mon_domain *d, * Assumes that hardware counters are also reset and thus that there is * no need to record initial non-zero counts. 
*/ -void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_do= main *d) +void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon= _domain *d) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); enum resctrl_event_id eventid; int idx; =20 @@ -217,10 +217,10 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_m= sr, unsigned int width) return chunks >> shift; } =20 -static u64 get_corrected_val(struct rdt_resource *r, struct rdt_mon_domain= *d, +static u64 get_corrected_val(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 rmid, enum resctrl_event_id eventid, u64 msr_val) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); struct rdt_hw_resource *hw_res =3D resctrl_to_arch_res(r); struct arch_mbm_state *am; u64 chunks; @@ -242,9 +242,9 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *ignored) { - struct rdt_hw_mon_domain *hw_dom; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; struct arch_mbm_state *am; - struct rdt_mon_domain *d; u64 msr_val; u32 prmid; int cpu; @@ -254,7 +254,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); hw_dom =3D resctrl_to_arch_mon_dom(d); cpu =3D cpumask_any(&hdr->cpu_mask); prmid =3D logical_rmid_to_physical_rmid(cpu, rmid); @@ -308,11 +308,11 @@ static int __cntr_id_read(u32 cntr_id, u64 *val) return 0; } =20 -void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); struct arch_mbm_state *am; =20 am =3D get_arch_mbm_state(hw_dom, rmid, eventid); @@ -324,7 +324,7 @@ void resctrl_arch_reset_cntr(struct rdt_resource *r, st= ruct rdt_mon_domain *d, } } =20 -int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_l3_mon_domai= n *d, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val) { @@ -354,7 +354,7 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, stru= ct rdt_mon_domain *d, * must adjust RMID counter numbers based on SNC node. See * logical_rmid_to_physical_rmid() for code that does this. */ -void arch_mon_domain_online(struct rdt_resource *r, struct rdt_mon_domain = *d) +void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_doma= in *d) { if (snc_nodes_per_l3_cache > 1) msr_clear_bit(MSR_RMID_SNC_CONFIG, 0); @@ -516,7 +516,7 @@ static void resctrl_abmc_set_one_amd(void *arg) */ static void _resctrl_abmc_enable(struct rdt_resource *r, bool enable) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 lockdep_assert_cpus_held(); =20 @@ -555,11 +555,11 @@ static void resctrl_abmc_config_one_amd(void *info) /* * Send an IPI to the domain to assign the counter to RMID, event pair. 
*/ -void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domai= n *d, +void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_do= main *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); union l3_qos_abmc_cfg abmc_cfg =3D { 0 }; struct arch_mbm_state *am; =20 diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 9242a2982e77..a3c734fe656e 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -600,9 +600,9 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; enum resctrl_event_id evtid; + struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; - struct rdt_mon_domain *d; struct rdtgroup *rdtgrp; int domid, cpu, ret =3D 0; struct rdt_resource *r; diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index e1c12201388f..9edbe9805d33 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -130,7 +130,7 @@ static void limbo_release_entry(struct rmid_entry *entr= y) * decrement the count. If the busy count gets to zero on an RMID, we * free the RMID */ -void __check_limbo(struct rdt_mon_domain *d, bool force_free) +void __check_limbo(struct rdt_l3_mon_domain *d, bool force_free) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); @@ -188,7 +188,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force= _free) resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx); } =20 -bool has_busy_rmid(struct rdt_mon_domain *d) +bool has_busy_rmid(struct rdt_l3_mon_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); =20 @@ -289,7 +289,7 @@ int alloc_rmid(u32 closid) static void add_rmid_to_limbo(struct rmid_entry *entry) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u32 idx; =20 lockdep_assert_held(&rdtgroup_mutex); @@ -342,7 +342,7 @@ void free_rmid(u32 closid, u32 rmid) list_add_tail(&entry->list, &rmid_free_lru); } =20 -static struct mbm_state *get_mbm_state(struct rdt_mon_domain *d, u32 closi= d, +static struct mbm_state *get_mbm_state(struct rdt_l3_mon_domain *d, u32 cl= osid, u32 rmid, enum resctrl_event_id evtid) { u32 idx =3D resctrl_arch_rmid_idx_encode(closid, rmid); @@ -362,7 +362,7 @@ static struct mbm_state *get_mbm_state(struct rdt_mon_d= omain *d, u32 closid, * Return: * Valid counter ID on success, or -ENOENT on failure. */ -static int mbm_cntr_get(struct rdt_resource *r, struct rdt_mon_domain *d, +static int mbm_cntr_get(struct rdt_resource *r, struct rdt_l3_mon_domain *= d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { int cntr_id; @@ -389,7 +389,7 @@ static int mbm_cntr_get(struct rdt_resource *r, struct = rdt_mon_domain *d, * Return: * Valid counter ID on success, or -ENOSPC on failure. */ -static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_mon_domain *d, +static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_l3_mon_domain= *d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { int cntr_id; @@ -408,7 +408,7 @@ static int mbm_cntr_alloc(struct rdt_resource *r, struc= t rdt_mon_domain *d, /* * mbm_cntr_free() - Clear the counter ID configuration details in the dom= ain @d. 
*/ -static void mbm_cntr_free(struct rdt_mon_domain *d, int cntr_id) +static void mbm_cntr_free(struct rdt_l3_mon_domain *d, int cntr_id) { memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg)); } @@ -418,7 +418,7 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int cntr_id =3D -ENOENT; struct mbm_state *m; u64 tval =3D 0; @@ -427,7 +427,7 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) rr->err =3D -EIO; return -EINVAL; } - d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); =20 if (rr->is_mbm_cntr) { cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); @@ -470,7 +470,7 @@ static int __l3_mon_event_count_sum(struct rdtgroup *rd= tgrp, struct rmid_read *r int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u64 tval =3D 0; int err, ret; =20 @@ -545,12 +545,12 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, str= uct rmid_read *rr) u64 cur_bw, bytes, cur_bytes; u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct mbm_state *m; =20 if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; - d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (WARN_ON_ONCE(!m)) return; @@ -650,7 +650,7 @@ static struct rdt_ctrl_domain *get_ctrl_domain_from_cpu= (int cpu, * throttle MSRs already have low percentage values. To avoid * unnecessarily restricting such rdtgroups, we also increase the bandwidt= h. 
*/ -static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_mon_domain *do= m_mbm) +static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_l3_mon_domain = *dom_mbm) { u32 closid, rmid, cur_msr_val, new_msr_val; struct mbm_state *pmbm_data, *cmbm_data; @@ -718,7 +718,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct= rdt_mon_domain *dom_mbm) resctrl_arch_update_one(r_mba, dom_mba, closid, CDP_NONE, new_msr_val); } =20 -static void mbm_update_one_event(struct rdt_resource *r, struct rdt_mon_do= main *d, +static void mbm_update_one_event(struct rdt_resource *r, struct rdt_l3_mon= _domain *d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { struct rmid_read rr =3D {0}; @@ -750,7 +750,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_mon_domain * resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx); } =20 -static void mbm_update(struct rdt_resource *r, struct rdt_mon_domain *d, +static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d, struct rdtgroup *rdtgrp) { /* @@ -771,12 +771,12 @@ static void mbm_update(struct rdt_resource *r, struct= rdt_mon_domain *d, void cqm_handle_limbo(struct work_struct *work) { unsigned long delay =3D msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 cpus_read_lock(); mutex_lock(&rdtgroup_mutex); =20 - d =3D container_of(work, struct rdt_mon_domain, cqm_limbo.work); + d =3D container_of(work, struct rdt_l3_mon_domain, cqm_limbo.work); =20 __check_limbo(d, false); =20 @@ -799,7 +799,7 @@ void cqm_handle_limbo(struct work_struct *work) * @exclude_cpu: Which CPU the handler should not run on, * RESCTRL_PICK_ANY_CPU to pick any CPU. */ -void cqm_setup_limbo_handler(struct rdt_mon_domain *dom, unsigned long del= ay_ms, +void cqm_setup_limbo_handler(struct rdt_l3_mon_domain *dom, unsigned long = delay_ms, int exclude_cpu) { unsigned long delay =3D msecs_to_jiffies(delay_ms); @@ -816,7 +816,7 @@ void mbm_handle_overflow(struct work_struct *work) { unsigned long delay =3D msecs_to_jiffies(MBM_OVERFLOW_INTERVAL); struct rdtgroup *prgrp, *crgrp; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct list_head *head; struct rdt_resource *r; =20 @@ -831,7 +831,7 @@ void mbm_handle_overflow(struct work_struct *work) goto out_unlock; =20 r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - d =3D container_of(work, struct rdt_mon_domain, mbm_over.work); + d =3D container_of(work, struct rdt_l3_mon_domain, mbm_over.work); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mbm_update(r, d, prgrp); @@ -865,7 +865,7 @@ void mbm_handle_overflow(struct work_struct *work) * @exclude_cpu: Which CPU the handler should not run on, * RESCTRL_PICK_ANY_CPU to pick any CPU. */ -void mbm_setup_overflow_handler(struct rdt_mon_domain *dom, unsigned long = delay_ms, +void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned lo= ng delay_ms, int exclude_cpu) { unsigned long delay =3D msecs_to_jiffies(delay_ms); @@ -1120,7 +1120,7 @@ ssize_t resctrl_mbm_assign_on_mkdir_write(struct kern= fs_open_file *of, char *buf * mbm_cntr_free_all() - Clear all the counter ID configuration details in= the * domain @d. Called when mbm_assign_mode is changed. 
*/ -static void mbm_cntr_free_all(struct rdt_resource *r, struct rdt_mon_domai= n *d) +static void mbm_cntr_free_all(struct rdt_resource *r, struct rdt_l3_mon_do= main *d) { memset(d->cntr_cfg, 0, sizeof(*d->cntr_cfg) * r->mon.num_mbm_cntrs); } @@ -1129,7 +1129,7 @@ static void mbm_cntr_free_all(struct rdt_resource *r,= struct rdt_mon_domain *d) * resctrl_reset_rmid_all() - Reset all non-architecture states for all the * supported RMIDs. */ -static void resctrl_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_= domain *d) +static void resctrl_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_m= on_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); enum resctrl_event_id evt; @@ -1150,7 +1150,7 @@ static void resctrl_reset_rmid_all(struct rdt_resourc= e *r, struct rdt_mon_domain * Assign the counter if @assign is true else unassign the counter. Reset = the * associated non-architectural state. */ -static void rdtgroup_assign_cntr(struct rdt_resource *r, struct rdt_mon_do= main *d, +static void rdtgroup_assign_cntr(struct rdt_resource *r, struct rdt_l3_mon= _domain *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign) { @@ -1170,7 +1170,7 @@ static void rdtgroup_assign_cntr(struct rdt_resource = *r, struct rdt_mon_domain * * Return: * 0 on success, < 0 on failure. */ -static int rdtgroup_alloc_assign_cntr(struct rdt_resource *r, struct rdt_m= on_domain *d, +static int rdtgroup_alloc_assign_cntr(struct rdt_resource *r, struct rdt_l= 3_mon_domain *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int cntr_id; @@ -1205,7 +1205,7 @@ static int rdtgroup_alloc_assign_cntr(struct rdt_reso= urce *r, struct rdt_mon_dom * Return: * 0 on success, < 0 on failure. */ -static int rdtgroup_assign_cntr_event(struct rdt_mon_domain *d, struct rdt= group *rdtgrp, +static int rdtgroup_assign_cntr_event(struct rdt_l3_mon_domain *d, struct = rdtgroup *rdtgrp, struct mon_evt *mevt) { struct rdt_resource *r =3D resctrl_arch_get_resource(mevt->rid); @@ -1255,7 +1255,7 @@ void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp) * rdtgroup_free_unassign_cntr() - Unassign and reset the counter ID confi= guration * for the event pointed to by @mevt within the domain @d and resctrl grou= p @rdtgrp. */ -static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt= _mon_domain *d, +static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt= _l3_mon_domain *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int cntr_id; @@ -1276,7 +1276,7 @@ static void rdtgroup_free_unassign_cntr(struct rdt_re= source *r, struct rdt_mon_d * the event structure @mevt from the domain @d and the group @rdtgrp. Una= ssign * the counters from all the domains if @d is NULL else unassign from @d. 
*/ -static void rdtgroup_unassign_cntr_event(struct rdt_mon_domain *d, struct = rdtgroup *rdtgrp, +static void rdtgroup_unassign_cntr_event(struct rdt_l3_mon_domain *d, stru= ct rdtgroup *rdtgrp, struct mon_evt *mevt) { struct rdt_resource *r =3D resctrl_arch_get_resource(mevt->rid); @@ -1351,7 +1351,7 @@ static int resctrl_parse_mem_transactions(char *tok, = u32 *val) static void rdtgroup_update_cntr_event(struct rdt_resource *r, struct rdtg= roup *rdtgrp, enum resctrl_event_id evtid) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int cntr_id; =20 list_for_each_entry(d, &r->mon_domains, hdr.list) { @@ -1457,7 +1457,7 @@ ssize_t resctrl_mbm_assign_mode_write(struct kernfs_o= pen_file *of, char *buf, size_t nbytes, loff_t off) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int ret =3D 0; bool enable; =20 @@ -1530,7 +1530,7 @@ int resctrl_num_mbm_cntrs_show(struct kernfs_open_fil= e *of, struct seq_file *s, void *v) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; =20 cpus_read_lock(); @@ -1554,7 +1554,7 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_op= en_file *of, struct seq_file *s, void *v) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; u32 cntrs, i; int ret =3D 0; @@ -1595,7 +1595,7 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_op= en_file *of, int mbm_L3_assignments_show(struct kernfs_open_file *of, struct seq_file *= s, void *v) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct rdtgroup *rdtgrp; struct mon_evt *mevt; int ret =3D 0; @@ -1658,7 +1658,7 @@ static struct mon_evt *mbm_get_mon_event_by_name(stru= ct rdt_resource *r, char *n return NULL; } =20 -static int rdtgroup_modify_assign_state(char *assign, struct rdt_mon_domai= n *d, +static int rdtgroup_modify_assign_state(char *assign, struct rdt_l3_mon_do= main *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int ret =3D 0; @@ -1684,7 +1684,7 @@ static int rdtgroup_modify_assign_state(char *assign,= struct rdt_mon_domain *d, static int resctrl_parse_mbm_assignment(struct rdt_resource *r, struct rdt= group *rdtgrp, char *event, char *tok) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; unsigned long dom_id =3D 0; char *dom_str, *id_str; struct mon_evt *mevt; diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 89ffe54fb0fc..2ed435db1923 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -1640,7 +1640,7 @@ static void mondata_config_read(struct resctrl_mon_co= nfig_info *mon_info) static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32= evtid) { struct resctrl_mon_config_info mon_info; - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; =20 cpus_read_lock(); @@ -1688,7 +1688,7 @@ static int mbm_local_bytes_config_show(struct kernfs_= open_file *of, } =20 static void mbm_config_write_domain(struct rdt_resource *r, - struct rdt_mon_domain *d, u32 evtid, u32 val) + struct rdt_l3_mon_domain *d, u32 evtid, u32 val) { struct resctrl_mon_config_info mon_info =3D {0}; =20 @@ -1729,8 +1729,8 @@ static void mbm_config_write_domain(struct rdt_resour= ce *r, static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid) { char *dom_str =3D NULL, *id_str; + struct 
rdt_l3_mon_domain *d; unsigned long dom_id, val; - struct rdt_mon_domain *d; =20 /* Walking r->domains, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); @@ -2781,7 +2781,7 @@ static int rdt_get_tree(struct fs_context *fc) { struct rdt_fs_context *ctx =3D rdt_fc2context(fc); unsigned long flags =3D RFTYPE_CTRL_BASE; - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; struct rdt_resource *r; int ret; =20 @@ -3232,7 +3232,7 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt= _resource *r, struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; char subname[32]; bool snc_mode; char name[32]; @@ -3240,7 +3240,7 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt= _resource *r, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : hdr->id); if (snc_mode) @@ -3258,8 +3258,8 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, struct rdt_resource *r, struct rdtgroup *prgrp, bool do_sum) { + struct rdt_l3_mon_domain *d; struct rmid_read rr =3D {0}; - struct rdt_mon_domain *d; struct mon_data *priv; struct mon_evt *mevt; int ret, domid; @@ -3267,7 +3267,7 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); for_each_mon_event(mevt) { if (mevt->rid !=3D r->rid || !mevt->enabled) continue; @@ -3292,7 +3292,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, struct rdt_resource *r, struct rdtgroup *prgrp) { struct kernfs_node *kn, *ckn; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; char name[32]; bool snc_mode; int ret =3D 0; @@ -3302,7 +3302,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); kn =3D kernfs_find_and_get(parent_kn, name); @@ -4246,7 +4246,7 @@ static void rdtgroup_setup_default(void) mutex_unlock(&rdtgroup_mutex); } =20 -static void domain_destroy_mon_state(struct rdt_mon_domain *d) +static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d) { int idx; =20 @@ -4270,14 +4270,14 @@ void resctrl_offline_ctrl_domain(struct rdt_resourc= e *r, struct rdt_ctrl_domain =20 void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 mutex_lock(&rdtgroup_mutex); =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); =20 /* * If resctrl is mounted, remove all the @@ -4319,7 +4319,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_domain_hdr *h * * Returns 0 for success, or -ENOMEM. 
*/ -static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_mon_d= omain *d) +static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mo= n_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); size_t tsize =3D sizeof(*d->mbm_states[0]); @@ -4377,7 +4377,7 @@ int resctrl_online_ctrl_domain(struct rdt_resource *r= , struct rdt_ctrl_domain *d =20 int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int err =3D -EINVAL; =20 mutex_lock(&rdtgroup_mutex); @@ -4385,7 +4385,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); err =3D domain_setup_mon_state(r, d); if (err) goto out_unlock; @@ -4432,10 +4432,10 @@ static void clear_childcpus(struct rdtgroup *r, uns= igned int cpu) } } =20 -static struct rdt_mon_domain *get_mon_domain_from_cpu(int cpu, - struct rdt_resource *r) +static struct rdt_l3_mon_domain *get_mon_domain_from_cpu(int cpu, + struct rdt_resource *r) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 lockdep_assert_cpus_held(); =20 @@ -4451,7 +4451,7 @@ static struct rdt_mon_domain *get_mon_domain_from_cpu= (int cpu, void resctrl_offline_cpu(unsigned int cpu) { struct rdt_resource *l3 =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct rdtgroup *rdtgrp; =20 mutex_lock(&rdtgroup_mutex); --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 54780301460 for ; Thu, 4 Dec 2025 20:54:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881664; cv=none; b=BjDYomfxreaI/W3xacl+iqVKzciYRUdccVdThUv/rhsqFs40GbFsr+9G2a7la+BdcXzlB7vyIwxUaX+e60zZ19fYSZ0s8eSIeLrp+NdtYSm5vFDFieYm5lk4LrrEblFcg1pjccsKFVIExmPsFiI87LRlXRdG/A4Oi3g1PEGZUQA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881664; c=relaxed/simple; bh=EZOvwr7Di9aBiD+AAW7BdZ5lI4L43ZI0Bgoy+xyaqOE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PUx5GDS6uYnwOu5KJjehmtPk/XIkuxNpQjghk81EpFju+L8d58T+jtp2/7OfVzetbKPEIyVajwPtIy7Ve3Jhu5rTIuL2MKC+DkDi4BAQq61zBtKykrZM75TK/fSmAiik92jGBvY/DO5gqNQOT0K3NK3SfJIwBXVNOsO4N0esebE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=gN4BMwER; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="gN4BMwER" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881661; x=1796417661; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=EZOvwr7Di9aBiD+AAW7BdZ5lI4L43ZI0Bgoy+xyaqOE=; b=gN4BMwERtHehIbZVMHJEofYZ3RlT/pbBSuS97vPKD6IOIWA8PjoVMA3g Q1kp56RrI9mAVsQofO3eGtn1ElYFS3iA2DLy+rYSYJOdYmtIoJwXi8Oy6 wFYF3Dtz4viAOvL0COGGdFu+AvcSj7CEQ6cf//SKv4yBpInWPAOcloYrY GGUOrzbim54jwtwHvX3nhtEQCsryZG2O/f+Z+8B42TcsG4vAUvtwar3BE qOlCKf3kLoPYzbRSJLGOQV+SldFjBcefL5STFM9KBlYKCdjS2W4p3oj0a GUCg230BUbDnQa8htuW8LJXNIE3LH82RhRPaQTdKYCa6P/xRwogqzraZM A==; X-CSE-ConnectionGUID: EUDSRDtgT5OpDjboQhGzKw== X-CSE-MsgGUID: kl5apcrZQ4Sv0KI/Jy2/kg== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510914" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510914" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:18 -0800 X-CSE-ConnectionGUID: pcEoQLHURLWgchMeu5Etzw== X-CSE-MsgGUID: 8Pjh+f01TCuDptqjeoReTA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752756" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:18 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 09/32] x86,fs/resctrl: Rename some L3 specific functions Date: Thu, 4 Dec 2025 12:53:39 -0800 Message-ID: <20251204205404.12763-10-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" With the arrival of monitor events tied to new domains associated with a different resource it would be clearer if the L3 resource specific functions are more accurately named. Rename three groups of functions: Functions that allocate/free architecture per-RMID MBM state information: arch_domain_mbm_alloc() -> l3_mon_domain_mbm_alloc() mon_domain_free() -> l3_mon_domain_free() Functions that allocate/free filesystem per-RMID MBM state information: domain_setup_mon_state() -> domain_setup_l3_mon_state() domain_destroy_mon_state() -> domain_destroy_l3_mon_state() Initialization/exit: rdt_get_mon_l3_config() -> rdt_get_l3_mon_config() resctrl_mon_resource_init() -> resctrl_l3_mon_resource_init() resctrl_mon_resource_exit() -> resctrl_l3_mon_resource_exit() Ensure kernel-doc descriptions of these functions' return values are present and correctly formatted. 
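(For readability, this is the kernel-doc shape being referred to, mirroring
the core.c hunk below for the renamed l3_mon_domain_mbm_alloc(); reproduced
here only as an example of the "Return:" formatting, not as new code.)

	/**
	 * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM counters
	 * @num_rmid:	The size of the MBM counter array
	 * @hw_dom:	The domain that owns the allocated arrays
	 *
	 * Return: 0 for success, or -ENOMEM.
	 */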
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 2 +- fs/resctrl/internal.h | 6 +++--- arch/x86/kernel/cpu/resctrl/core.c | 20 +++++++++++--------- arch/x86/kernel/cpu/resctrl/monitor.c | 2 +- fs/resctrl/monitor.c | 8 ++++---- fs/resctrl/rdtgroup.c | 24 ++++++++++++------------ 6 files changed, 32 insertions(+), 30 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index d73c0adf1026..11d06995810e 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -213,7 +213,7 @@ union l3_qos_abmc_cfg { =20 void rdt_ctrl_update(void *arg); =20 -int rdt_get_mon_l3_config(struct rdt_resource *r); +int rdt_get_l3_mon_config(struct rdt_resource *r); =20 bool rdt_cpu_has(int flag); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index af47b6ddef62..9768341aa21c 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -357,7 +357,9 @@ int alloc_rmid(u32 closid); =20 void free_rmid(u32 closid, u32 rmid); =20 -void resctrl_mon_resource_exit(void); +int resctrl_l3_mon_resource_init(void); + +void resctrl_l3_mon_resource_exit(void); =20 void mon_event_count(void *info); =20 @@ -367,8 +369,6 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first); =20 -int resctrl_mon_resource_init(void); - void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, int exclude_cpu); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index cc1b846f9645..b3a2dc56155d 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -368,7 +368,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain = *hw_dom) kfree(hw_dom); } =20 -static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) +static void l3_mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) { int idx; =20 @@ -401,11 +401,13 @@ static int domain_setup_ctrlval(struct rdt_resource *= r, struct rdt_ctrl_domain * } =20 /** - * arch_domain_mbm_alloc() - Allocate arch private storage for the MBM cou= nters + * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM c= ounters * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays + * + * Return: 0 for success, or -ENOMEM. 
*/ -static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain= *hw_dom) +static int l3_mon_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_doma= in *hw_dom) { size_t tsize =3D sizeof(*hw_dom->arch_mbm_states[0]); enum resctrl_event_id eventid; @@ -519,7 +521,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); return; } d->ci_id =3D ci->id; @@ -527,8 +529,8 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct =20 arch_mon_domain_online(r, d); =20 - if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { - mon_domain_free(hw_dom); + if (l3_mon_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { + l3_mon_domain_free(hw_dom); return; } =20 @@ -538,7 +540,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct if (err) { list_del_rcu(&d->hdr.list); synchronize_rcu(); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); } } =20 @@ -664,7 +666,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); synchronize_rcu(); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); break; } default: @@ -917,7 +919,7 @@ static __init bool get_rdt_mon_resources(void) if (!ret) return false; =20 - return !rdt_get_mon_l3_config(r); + return !rdt_get_l3_mon_config(r); } =20 static __init void __check_quirks_intel(void) diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 04b8f1e1f314..20605212656c 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -424,7 +424,7 @@ static __init int snc_get_config(void) return ret; } =20 -int __init rdt_get_mon_l3_config(struct rdt_resource *r) +int __init rdt_get_l3_mon_config(struct rdt_resource *r) { unsigned int mbm_offset =3D boot_cpu_data.x86_cache_mbm_width_offset; struct rdt_hw_resource *hw_res =3D resctrl_to_arch_res(r); diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 9edbe9805d33..d5ae0ef4c947 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -1780,7 +1780,7 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_f= ile *of, char *buf, } =20 /** - * resctrl_mon_resource_init() - Initialise global monitoring structures. + * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * * Allocate and initialise global monitor resources that do not belong to a * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists. @@ -1789,9 +1789,9 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_f= ile *of, char *buf, * Resctrl's cpuhp callbacks may be called before this point to bring a do= main * online. * - * Returns 0 for success, or -ENOMEM. + * Return: 0 for success, or -ENOMEM. 
*/ -int resctrl_mon_resource_init(void) +int resctrl_l3_mon_resource_init(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); int ret; @@ -1841,7 +1841,7 @@ int resctrl_mon_resource_init(void) return 0; } =20 -void resctrl_mon_resource_exit(void) +void resctrl_l3_mon_resource_exit(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); =20 diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 2ed435db1923..b57e1e78bbc2 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4246,7 +4246,7 @@ static void rdtgroup_setup_default(void) mutex_unlock(&rdtgroup_mutex); } =20 -static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d) +static void domain_destroy_l3_mon_state(struct rdt_l3_mon_domain *d) { int idx; =20 @@ -4301,13 +4301,13 @@ void resctrl_offline_mon_domain(struct rdt_resource= *r, struct rdt_domain_hdr *h cancel_delayed_work(&d->cqm_limbo); } =20 - domain_destroy_mon_state(d); + domain_destroy_l3_mon_state(d); out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 /** - * domain_setup_mon_state() - Initialise domain monitoring structures. + * domain_setup_l3_mon_state() - Initialise domain monitoring structures. * @r: The resource for the newly online domain. * @d: The newly online domain. * @@ -4315,11 +4315,11 @@ void resctrl_offline_mon_domain(struct rdt_resource= *r, struct rdt_domain_hdr *h * Called when the first CPU of a domain comes online, regardless of wheth= er * the filesystem is mounted. * During boot this may be called before global allocations have been made= by - * resctrl_mon_resource_init(). + * resctrl_l3_mon_resource_init(). * - * Returns 0 for success, or -ENOMEM. + * Return: 0 for success, or -ENOMEM. */ -static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mo= n_domain *d) +static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3= _mon_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); size_t tsize =3D sizeof(*d->mbm_states[0]); @@ -4386,7 +4386,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr goto out_unlock; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - err =3D domain_setup_mon_state(r, d); + err =3D domain_setup_l3_mon_state(r, d); if (err) goto out_unlock; =20 @@ -4503,13 +4503,13 @@ int resctrl_init(void) =20 io_alloc_init(); =20 - ret =3D resctrl_mon_resource_init(); + ret =3D resctrl_l3_mon_resource_init(); if (ret) return ret; =20 ret =3D sysfs_create_mount_point(fs_kobj, "resctrl"); if (ret) { - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); return ret; } =20 @@ -4544,7 +4544,7 @@ int resctrl_init(void) =20 cleanup_mountpoint: sysfs_remove_mount_point(fs_kobj, "resctrl"); - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); =20 return ret; } @@ -4580,7 +4580,7 @@ static bool resctrl_online_domains_exist(void) * When called by the architecture code, all CPUs and resctrl domains must= be * offline. This ensures the limbo and overflow handlers are not scheduled= to * run, meaning the data structures they access can be freed by - * resctrl_mon_resource_exit(). + * resctrl_l3_mon_resource_exit(). * * After resctrl_exit() returns, the architecture code should return an * error from all resctrl_arch_ functions that can do this. @@ -4607,5 +4607,5 @@ void resctrl_exit(void) * it can be used to umount resctrl. 
*/ =20 - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); } --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 64AA3322DC2 for ; Thu, 4 Dec 2025 20:54:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881665; cv=none; b=nBYjWFHergLe1sxcHAX3NXxuZDUOzPcW+nf7710OpRxYZj/WufsvjWYusSaH8YIMIVrjGrNfWwQlo2VdgUep69q0KU5JGWAmjfLFFYmyQhWN82pIEaBOnrwMcf6JvcMjrHI1iI5NRNj0I+/QxgUW1zFQLAKjGUWAykQ8CmlIpQQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881665; c=relaxed/simple; bh=KDuv5tzC4bt6BhiOU+2suQA4WqgaGz4znZ7KQmLVwMA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=t9sNMdN6lWlDj0NSZBkwOD3PMm90boMHb3qt8XCZb8bDlQ8Eh4LjyDZwueDdGISrDVufG8r4coRapiuw7ZJ+Z6rcAy1TJuSqUDOBAAQLLcrcr9cFWQXTuwx4ZG9jHyTFFEq7AsVlH3PWnTg/BqFmwOfmP9DLmiaa6glnZV6mlUs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Wv3CNUsa; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Wv3CNUsa" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881662; x=1796417662; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KDuv5tzC4bt6BhiOU+2suQA4WqgaGz4znZ7KQmLVwMA=; b=Wv3CNUsaeT2BpFfLkuMEpBXLGSAMhaK2k51sJaGMJP9W1VW2GBUy5RHK GM/I48JMZtIg0xBQl+4OuZLHu64la1d0aajJaegpGby7IwoKw2sRv2m5L BhnX8XCI2p7lDHACIjuSJQFXs7qbCEsC5f7W7MaLqeWmz421nR4rJR5gb MZFQQR3ztd8vqlcAj4w3LxV5oJCkLl2Snjola+LTdB1HE6qUJxqB1mBUC PRYDnmjnt49DS51GbFVlSDlpH7NALE2R9gbZYdaTmRCyeIcgLpCL4gehW FQbqnqs+znzGg/U/DLT8kdvI+77pL2yiPkqjoLTgSIjn24tlKoovuR/XS w==; X-CSE-ConnectionGUID: Xj9tQ4u/Q/iqx7zMlz+WTg== X-CSE-MsgGUID: kkp+zChjSiaCjouEinI/6w== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510923" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510923" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:19 -0800 X-CSE-ConnectionGUID: iG6R6VDNTm6dAj+Exsxmeg== X-CSE-MsgGUID: h8FzPGmyRKOGK3lKaV5buA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752761" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:18 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 10/32] fs/resctrl: Make event details accessible to functions when reading events Date: Thu, 4 Dec 2025 12:53:40 -0800 Message-ID: 
<20251204205404.12763-11-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Reading monitoring event data from MMIO requires more context than the even= t id to be able to read the correct memory location. struct mon_evt is the appro= priate place for this event specific context. Prepare for addition of extra fields to struct mon_evt by changing the call= ing conventions to pass a pointer to the mon_evt structure instead of just the event id. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/internal.h | 10 +++++----- fs/resctrl/ctrlmondata.c | 18 +++++++++--------- fs/resctrl/monitor.c | 22 +++++++++++----------- fs/resctrl/rdtgroup.c | 6 +++--- 4 files changed, 28 insertions(+), 28 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 9768341aa21c..86cf38ab08a7 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -81,7 +81,7 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; * struct mon_data - Monitoring details for each event file. * @list: Member of the global @mon_data_kn_priv_list list. * @rid: Resource id associated with the event file. - * @evtid: Event id associated with the event file. + * @evt: Event structure associated with the event file. * @sum: Set when event must be summed across multiple * domains. * @domid: When @sum is zero this is the domain to which @@ -95,7 +95,7 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; struct mon_data { struct list_head list; enum resctrl_res_level rid; - enum resctrl_event_id evtid; + struct mon_evt *evt; int domid; bool sum; }; @@ -108,7 +108,7 @@ struct mon_data { * @r: Resource describing the properties of the event being read. * @hdr: Header of domain that the counter should be read from. If NULL = then * sum all domains in @r sharing L3 @ci.id - * @evtid: Which monitor event to read. + * @evt: Which monitor event to read. * @first: Initialize MBM counter when true. * @ci: Cacheinfo for L3. Only set when @hdr is NULL. Used when summing * domains. 
@@ -126,7 +126,7 @@ struct rmid_read { struct rdtgroup *rgrp; struct rdt_resource *r; struct rdt_domain_hdr *hdr; - enum resctrl_event_id evtid; + struct mon_evt *evt; bool first; struct cacheinfo *ci; bool is_mbm_cntr; @@ -367,7 +367,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg= ); =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, - cpumask_t *cpumask, int evtid, int first); + cpumask_t *cpumask, struct mon_evt *evt, int first); =20 void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index a3c734fe656e..7f9b2fed117a 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -552,7 +552,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_= head *h, int id, =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, - cpumask_t *cpumask, int evtid, int first) + cpumask_t *cpumask, struct mon_evt *evt, int first) { int cpu; =20 @@ -563,15 +563,15 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, * Setup the parameters to pass to mon_event_count() to read the data. */ rr->rgrp =3D rdtgrp; - rr->evtid =3D evtid; + rr->evt =3D evt; rr->r =3D r; rr->hdr =3D hdr; rr->first =3D first; if (resctrl_arch_mbm_cntr_assign_enabled(r) && - resctrl_is_mbm_event(evtid)) { + resctrl_is_mbm_event(evt->evtid)) { rr->is_mbm_cntr =3D true; } else { - rr->arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, evtid); + rr->arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, evt->evtid); if (IS_ERR(rr->arch_mon_ctx)) { rr->err =3D -EINVAL; return; @@ -592,14 +592,13 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, smp_call_on_cpu(cpu, smp_mon_event_count, rr, false); =20 if (rr->arch_mon_ctx) - resctrl_arch_mon_ctx_free(r, evtid, rr->arch_mon_ctx); + resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } =20 int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; - enum resctrl_event_id evtid; struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; @@ -607,6 +606,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) int domid, cpu, ret =3D 0; struct rdt_resource *r; struct cacheinfo *ci; + struct mon_evt *evt; struct mon_data *md; =20 rdtgrp =3D rdtgroup_kn_lock_live(of->kn); @@ -623,7 +623,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) =20 resid =3D md->rid; domid =3D md->domid; - evtid =3D md->evtid; + evt =3D md->evt; r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { @@ -641,7 +641,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) continue; rr.ci =3D ci; mon_event_read(&rr, r, NULL, rdtgrp, - &ci->shared_cpu_map, evtid, false); + &ci->shared_cpu_map, evt, false); goto checkresult; } } @@ -657,7 +657,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) ret =3D -ENOENT; goto out; } - mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false); + mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evt, false); } =20 checkresult: diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index d5ae0ef4c947..340b847ab397 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -430,7 +430,7 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); =20 if 
(rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evt->evtid); if (cntr_id < 0) { rr->err =3D -ENOENT; return -EINVAL; @@ -439,10 +439,10 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) =20 if (rr->first) { if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evtid); + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evt->evtid= ); else - resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid); - m =3D get_mbm_state(d, closid, rmid, rr->evtid); + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evt->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); if (m) memset(m, 0, sizeof(struct mbm_state)); return 0; @@ -453,10 +453,10 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) return -EINVAL; if (rr->is_mbm_cntr) rr->err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, - rr->evtid, &tval); + rr->evt->evtid, &tval); else rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; =20 @@ -501,7 +501,7 @@ static int __l3_mon_event_count_sum(struct rdtgroup *rd= tgrp, struct rmid_read *r if (d->ci_id !=3D rr->ci->id) continue; err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -551,7 +551,7 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, struc= t rmid_read *rr) if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); - m =3D get_mbm_state(d, closid, rmid, rr->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); if (WARN_ON_ONCE(!m)) return; =20 @@ -725,11 +725,11 @@ static void mbm_update_one_event(struct rdt_resource = *r, struct rdt_l3_mon_domai =20 rr.r =3D r; rr.hdr =3D &d->hdr; - rr.evtid =3D evtid; + rr.evt =3D &mon_event_all[evtid]; if (resctrl_arch_mbm_cntr_assign_enabled(r)) { rr.is_mbm_cntr =3D true; } else { - rr.arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid); + rr.arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(rr.r, evtid); if (IS_ERR(rr.arch_mon_ctx)) { pr_warn_ratelimited("Failed to allocate monitor context: %ld", PTR_ERR(rr.arch_mon_ctx)); @@ -747,7 +747,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_l3_mon_domai mbm_bw_count(rdtgrp, &rr); =20 if (rr.arch_mon_ctx) - resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx); + resctrl_arch_mon_ctx_free(rr.r, evtid, rr.arch_mon_ctx); } =20 static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d, diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index b57e1e78bbc2..771e40f02ba6 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3103,7 +3103,7 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, =20 list_for_each_entry(priv, &mon_data_kn_priv_list, list) { if (priv->rid =3D=3D rid && priv->domid =3D=3D domid && - priv->sum =3D=3D do_sum && priv->evtid =3D=3D mevt->evtid) + priv->sum =3D=3D do_sum && priv->evt =3D=3D mevt) return priv; } =20 @@ -3114,7 +3114,7 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, priv->rid =3D rid; priv->domid =3D domid; priv->sum =3D do_sum; - priv->evtid =3D mevt->evtid; + priv->evt =3D mevt; 
list_add_tail(&priv->list, &mon_data_kn_priv_list); =20 return priv; @@ -3281,7 +3281,7 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, return ret; =20 if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) - mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true); + mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt, true); } =20 return 0; --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 61E25326D77 for ; Thu, 4 Dec 2025 20:54:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881667; cv=none; b=kU/2kuwDftuxqD42wfhsv18Fa3G8o2pdOq8fpOjZDUxmS7PtVQImWCYSsDdrR43Q5Z8abvSrQNNgGQ+fFZRYPTmN1JEOxpH37dJN9D2yq5qdExiL2bl5LHem0E0H/njy5oyMnMSIl1u0sB9GihmJ4eFbDknvLAkbuzVE1549cx8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881667; c=relaxed/simple; bh=5vWaOQZcSEmnlAOkjv9XHLHkBufwrmqBLbyXGGLw3Eo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=oC+TMUdeQ21yvL6lKNR3Whk18cr5bMYZFc2MLKVFcfcAVMdtUaZgwOX83u2PDwQx9bZY/nb3jDIH10YsdDSDn0RDjrq9aTtWQmaxzpH415/Ijstf/xMWTQIFgyxz9+o1yaRVKjyDyIc/KA3LOEu6TNSAqJ5KfMYO6/lvpo2pBs0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=HA5OUcMT; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HA5OUcMT" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881664; x=1796417664; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=5vWaOQZcSEmnlAOkjv9XHLHkBufwrmqBLbyXGGLw3Eo=; b=HA5OUcMTUko9ZW2mw2vzqOwWMfj+qiTFwg4SzBGFP+eqbCNzu1GNI3Vb TA5BDH+Fe7T8EuuTjBeTRDdjTlnJLKTDap7C+smKoYBIdImvuiKOwMq8N QBvB9uX25OtsnmLWCEBLtqfSlCQPNwHiC7fg/KprIqFQYGcZUYIDmXy/s XskkFEHAJTo7U3fx5mLEMgRJxzRZKFoxgo0AoQdLTe+/xoqFv6xwXNWsU nv7OpT864FbRZU6dhgbN7WPTNe4Rc4D1hv+unUvOMrDYnlm51UzNytkMP aJRAlijF+bEJJiJWPJ+3lJ2/FQ//0puSQFjIZVPpdpd9CdpY5WK3jNkw4 w==; X-CSE-ConnectionGUID: Z8TVDdq8TYqAOLJrtD9TKQ== X-CSE-MsgGUID: UNjkRw9LTJ2ERMPhgeTKKw== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510934" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510934" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:20 -0800 X-CSE-ConnectionGUID: skGn2I/hQv+bUCqU2Cac2Q== X-CSE-MsgGUID: CAJmVeKoTJSF/6osynOdUQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752765" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:19 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman 
, James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 11/32] x86,fs/resctrl: Handle events that can be read from any CPU Date: Thu, 4 Dec 2025 12:53:41 -0800 Message-ID: <20251204205404.12763-12-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that monitor events can only be read from a CPU in the cpumask_t set of each domain. This is true for x86 events accessed with an MSR interface, but may not be true for other access methods such as MMIO. Introduce and use flag mon_evt::any_cpu, settable by architecture, that indicates there are no restrictions on which CPU can read that event. Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- fs/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 6 +++--- fs/resctrl/ctrlmondata.c | 6 ++++++ fs/resctrl/monitor.c | 9 ++++++--- 5 files changed, 18 insertions(+), 7 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 79aaaabcdd3f..22c5d07fe9ff 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -412,7 +412,7 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid); +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 86cf38ab08a7..fb0b6e40d022 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -61,6 +61,7 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * READS_TO_REMOTE_MEM) being tracked by @evtid. * Only valid if @evtid is an MBM event. 
* @configurable: true if the event is configurable + * @any_cpu: true if the event can be read from any CPU * @enabled: true if the event is enabled */ struct mon_evt { @@ -69,6 +70,7 @@ struct mon_evt { char *name; u32 evt_cfg; bool configurable; + bool any_cpu; bool enabled; }; =20 diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index b3a2dc56155d..bd4a98106153 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -902,15 +902,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 7f9b2fed117a..2c69fcd70eeb 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -578,6 +578,11 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, } } =20 + if (evt->any_cpu) { + mon_event_count(rr); + goto out_ctx_free; + } + cpu =3D cpumask_any_housekeeping(cpumask, RESCTRL_PICK_ANY_CPU); =20 /* @@ -591,6 +596,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, else smp_call_on_cpu(cpu, smp_mon_event_count, rr, false); =20 +out_ctx_free: if (rr->arch_mon_ctx) resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 340b847ab397..081ff659b52c 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -518,10 +518,12 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) { switch (rr->r->rid) { case RDT_RESOURCE_L3: - if (rr->hdr) + if (rr->hdr) { + WARN_ON_ONCE(rr->evt->any_cpu); return __l3_mon_event_count(rdtgrp, rr); - else + } else { return __l3_mon_event_count_sum(rdtgrp, rr); + } default: rr->err =3D -EINVAL; return -EINVAL; @@ -987,7 +989,7 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { }, }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= )) return; @@ -996,6 +998,7 @@ void resctrl_enable_mon_event(enum resctrl_event_id eve= ntid) return; } =20 + mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].enabled =3D true; } =20 --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8A6781FBC92 for ; Thu, 4 Dec 2025 20:54:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881668; cv=none; b=Fj8gccDDYmZvCHIyxy852dcTV593AYWVOStPXDIdkPYBPGGcnmdMD8LqpNzCsaw1hM9TXLT5P7QkkPQOr8vEo+qC+XgXnT/JtTsrSNiV/KzUJTysY26YYXQu2F3qWihGDy75XMF1wwCWPbuSv8xsdoHxU6WYMMEW8oZ1BzlxjCQ= ARC-Message-Signature: i=1; a=rsa-sha256; 
d=subspace.kernel.org; s=arc-20240116; t=1764881668; c=relaxed/simple; bh=z/zkC1kcUnrgNYc0NEIavcnMWkczaux6Xq3hbVdpHNg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Sy30tEMvo2FtGdNBdep22NNfm/wlxEIl206YFqjZYy2AL6+RBI/TeGXEacp2mjGE0cgm2EnFyh2fX++UpRzlVHyzzCFBJee4LzSKZVzx7Iz4y8E7YA8vYYteUjOyett+kiQkhQgQPnhTBqYABPjW2ZX5bLir9qRi+eiPG/P8sLc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SjuBOudA; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SjuBOudA" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881665; x=1796417665; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=z/zkC1kcUnrgNYc0NEIavcnMWkczaux6Xq3hbVdpHNg=; b=SjuBOudAF/u8drfQXX8ZwTqvHr+bhLOLYOlyxsYW3HqR7r+8Y20zwhiO d1ZKeHpR7mmvmxdEw/0PntBX0rtxVh3whVQSDvvrHHdm8HyzF1trs18tC SQiswPLv1lIx0yJtdu/46DwG3FET1HfX5ko6SR3OHRXsGDD9b1NrER8VV r+IbH9M5eP2ZtrV9js1WBj1UegOy6bP7HWV4HwSjfwHN41ztz22Ntjz9j WtUVPZFFV31cwHP5qUJ+it/MQ8FY7h429mKzPAIiK55WaIEDAG1QUkjRu D5KksJ7mjdsKV/URAhxx5IxX33lkWVCk1iA5WkCDr8jGAgV/r0Rt4qb11 w==; X-CSE-ConnectionGUID: OjDeXAFqQo2gpDIMFkmx6g== X-CSE-MsgGUID: 4z/+KCGvQeqdQIl2ucCL0w== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510942" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510942" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:21 -0800 X-CSE-ConnectionGUID: 6MBgBKrKTpKS3cdEY7CShw== X-CSE-MsgGUID: nt+2Byx4SougriDUSPDaaA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752770" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:20 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 12/32] x86,fs/resctrl: Support binary fixed point event counters Date: Thu, 4 Dec 2025 12:53:42 -0800 Message-ID: <20251204205404.12763-13-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that all monitor events can be displayed as unsigned decimal integers. Hardware architecture counters may provide some telemetry events with great= er precision where the event is not a simple count, but is a measurement of some sort (e.g. Joules for energy consumed). 
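For example (using an illustrative 18 fractional bits, not a value taken from any specific platform): an energy counter would report 2.5 Joules as the raw fixed-point value 2.5 * 2^18 =3D 655360, with the integer part of the measurement in the upper bits and the fractional part in the low 18 bits.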
Add a new argument to resctrl_enable_mon_event() for architecture code to inform the file system that the value for a counter is a fixed-point value with a specific number of binary places. Only allow architecture to use floating point format on events that the file system has marked with mon_evt::is_floating_point which reflects the contract with user space on how the event values are displayed. Display fixed point values with values rounded to ceil(binary_bits * log10(= 2)) decimal places. Special case for zero binary bits to print "{value}.0". Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 3 +- fs/resctrl/internal.h | 8 ++++ arch/x86/kernel/cpu/resctrl/core.c | 6 +-- fs/resctrl/ctrlmondata.c | 74 ++++++++++++++++++++++++++++++ fs/resctrl/monitor.c | 10 +++- 5 files changed, 95 insertions(+), 6 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 22c5d07fe9ff..c43526cdf304 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -412,7 +412,8 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu); +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, + unsigned int binary_bits); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index fb0b6e40d022..14e5a9ed1fbd 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -62,6 +62,9 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * Only valid if @evtid is an MBM event. * @configurable: true if the event is configurable * @any_cpu: true if the event can be read from any CPU + * @is_floating_point: event values are displayed in floating point format + * @binary_bits: number of fixed-point binary bits from architecture, + * only valid if @is_floating_point is true * @enabled: true if the event is enabled */ struct mon_evt { @@ -71,6 +74,8 @@ struct mon_evt { u32 evt_cfg; bool configurable; bool any_cpu; + bool is_floating_point; + unsigned int binary_bits; bool enabled; }; =20 @@ -79,6 +84,9 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; #define for_each_mon_event(mevt) for (mevt =3D &mon_event_all[QOS_FIRST_EV= ENT]; \ mevt < &mon_event_all[QOS_NUM_EVENTS]; mevt++) =20 +/* Limit for mon_evt::binary_bits */ +#define MAX_BINARY_BITS 27 + /** * struct mon_data - Monitoring details for each event file. * @list: Member of the global @mon_data_kn_priv_list list. 
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index bd4a98106153..9222eee7ce07 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -902,15 +902,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 2c69fcd70eeb..f319fd1a6de3 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -17,6 +17,7 @@ =20 #include #include +#include #include #include #include @@ -601,6 +602,77 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } =20 +/* + * Decimal place precision to use for each number of fixed-point + * binary bits computed from ceil(binary_bits * log10(2)) except + * binary_bits =3D=3D 0 which will print "value.0" + */ +static const unsigned int decplaces[MAX_BINARY_BITS + 1] =3D { + [0] =3D 1, + [1] =3D 1, + [2] =3D 1, + [3] =3D 1, + [4] =3D 2, + [5] =3D 2, + [6] =3D 2, + [7] =3D 3, + [8] =3D 3, + [9] =3D 3, + [10] =3D 4, + [11] =3D 4, + [12] =3D 4, + [13] =3D 4, + [14] =3D 5, + [15] =3D 5, + [16] =3D 5, + [17] =3D 6, + [18] =3D 6, + [19] =3D 6, + [20] =3D 7, + [21] =3D 7, + [22] =3D 7, + [23] =3D 7, + [24] =3D 8, + [25] =3D 8, + [26] =3D 8, + [27] =3D 9 +}; + +static void print_event_value(struct seq_file *m, unsigned int binary_bits= , u64 val) +{ + unsigned long long frac =3D 0; + + if (binary_bits) { + /* Mask off the integer part of the fixed-point value. */ + frac =3D val & GENMASK_ULL(binary_bits - 1, 0); + + /* + * Multiply by 10^{desired decimal places}. The integer part of + * the fixed point value is now almost what is needed. + */ + frac *=3D int_pow(10ull, decplaces[binary_bits]); + + /* + * Round to nearest by adding a value that would be a "1" in the + * binary_bits + 1 place. Integer part of fixed point value is + * now the needed value. + */ + frac +=3D 1ull << (binary_bits - 1); + + /* + * Extract the integer part of the value. This is the decimal + * representation of the original fixed-point fractional value. + */ + frac >>=3D binary_bits; + } + + /* + * "frac" is now in the range [0 .. 10^decplaces). I.e. string + * representation will fit into chosen number of decimal places. 
+ */ + seq_printf(m, "%llu.%0*llu\n", val >> binary_bits, decplaces[binary_bits]= , frac); +} + int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; @@ -678,6 +750,8 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) seq_puts(m, "Unavailable\n"); else if (rr.err =3D=3D -ENOENT) seq_puts(m, "Unassigned\n"); + else if (evt->is_floating_point) + print_event_value(m, evt->binary_bits, rr.val); else seq_printf(m, "%llu\n", rr.val); =20 diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 081ff659b52c..59736ab08213 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -989,16 +989,22 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { }, }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) { - if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= )) + if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || + binary_bits > MAX_BINARY_BITS)) return; if (mon_event_all[eventid].enabled) { pr_warn("Duplicate enable for event %d\n", eventid); return; } + if (binary_bits && !mon_event_all[eventid].is_floating_point) { + pr_warn("Event %d may not be floating point\n", eventid); + return; + } =20 mon_event_all[eventid].any_cpu =3D any_cpu; + mon_event_all[eventid].binary_bits =3D binary_bits; mon_event_all[eventid].enabled =3D true; } =20 --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4F8A12FBDEC for ; Thu, 4 Dec 2025 20:54:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881671; cv=none; b=SVZVFrs4H18xeLwG8qfDyBF5X5Gn+7QZcaj7EgsrPFimym5NgW1Isg/gntJfnop0AUMXlVzjpkzmpT0qUEfDWbTC78ucqcPdXwx69J95NF5n9mALhcByOG++IBHjO/9mI7NmapS47Q2Oefm2tTO+5o4X8paDBskguT3C2yp1ibM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881671; c=relaxed/simple; bh=vrL7JfY3s49csiMkieoQzxc27A+XOltjUvm7AN7IfJk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Fw8QexFbvoAwMapROj5xmIb6tePQd2WedXipuXVRxPFXEZWryxkaRLCckDaiEfaEUbbVZQUmWZmPzxd3aICCXJSy9FilwhA7+ed7SnGi9glpUtkYw4+E1Vjp4ZCkwV/M8JUsDem6QJcJqj3z17wBISA7tEeK1sSBZzyaGWUMSm8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SViZD0lX; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SViZD0lX" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881666; x=1796417666; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vrL7JfY3s49csiMkieoQzxc27A+XOltjUvm7AN7IfJk=; b=SViZD0lXDrg6RYN3f+NRnBXawpwIrVSrW3EtUjAYRAZ530TDFqSXFXWW 
pVsC2OSWuxQtNb//q7tWTmYh8gChuKOUZjeTrbkl+SEss7gdh8ssRTU/U GtL/LOZM2KbQTUBCi54jAwrgTtqul0dyvHQyMwDpOQou2bWMPCt0Q7VtE a84pKi7HddizfjZ+9sgvLUejwxKabkqb4EITF1RSLyO8tRz49XRZmz8yy xQ0QjCydlkERlXkW4pWIlTJDuuiXMAKWLp2TZYCVsvS9WkfQH4tFpNIaW TDCfLtfxs4oN+a7SbAgknYzO4c0+zrBnaswB0PWfjawy3ND0BrJwvIqVG w==; X-CSE-ConnectionGUID: 6VEEWLamSC2svm8SsDkt/A== X-CSE-MsgGUID: mcN7PAbYQj2I3eISk5r1Gg== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510951" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510951" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:21 -0800 X-CSE-ConnectionGUID: Pq08iDA/QrabcNT6uDg1Ug== X-CSE-MsgGUID: 9LocSSarRWuCYkCl+gROgA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752780" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:20 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 13/32] x86,fs/resctrl: Add an architectural hook called for each mount Date: Thu, 4 Dec 2025 12:53:43 -0800 Message-ID: <20251204205404.12763-14-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Enumeration of Intel telemetry events is an asynchronous process involving several mutually dependent drivers added as auxiliary devices during the device_initcall() phase of Linux boot. The process finishes after the probe functions of these drivers completes. But this happens after resctrl_arch_late_init() is executed. Tracing the enumeration process shows that it does complete a full seven seconds before the earliest possible mount of the resctrl file system (when included in /etc/fstab for automatic mount by systemd). Add a hook at the beginning of the mount code that will be used to check for telemetry events and initialize if any are found. Call the hook on every attempted mount. Expectations are that most actions (like enumeration) will only need to be performed on the first call. resctrl filesystem calls the hook with no locks held. Architecture code is responsible for any required locking. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 6 ++++++ arch/x86/kernel/cpu/resctrl/core.c | 9 +++++++++ fs/resctrl/rdtgroup.c | 2 ++ 3 files changed, 17 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index c43526cdf304..dc148b7feb71 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -514,6 +514,12 @@ void resctrl_offline_mon_domain(struct rdt_resource *r= , struct rdt_domain_hdr *h void resctrl_online_cpu(unsigned int cpu); void resctrl_offline_cpu(unsigned int cpu); =20 +/* + * Architecture hook called at beginning of each file system mount attempt. + * No locks are held. 
+ */ +void resctrl_arch_pre_mount(void); + /** * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rm= id * for this resource and domain. diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 9222eee7ce07..2dd48b59ba9b 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -726,6 +726,15 @@ static int resctrl_arch_offline_cpu(unsigned int cpu) return 0; } =20 +void resctrl_arch_pre_mount(void) +{ + static atomic_t only_once =3D ATOMIC_INIT(0); + int old =3D 0; + + if (!atomic_try_cmpxchg(&only_once, &old, 1)) + return; +} + enum { RDT_FLAG_CMT, RDT_FLAG_MBM_TOTAL, diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 771e40f02ba6..b20d104ea0c9 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2785,6 +2785,8 @@ static int rdt_get_tree(struct fs_context *fc) struct rdt_resource *r; int ret; =20 + resctrl_arch_pre_mount(); + cpus_read_lock(); mutex_lock(&rdtgroup_mutex); /* --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A86622F83C3 for ; Thu, 4 Dec 2025 20:54:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881670; cv=none; b=pSOSDasLXwEgjQE4so8t0yjF1Rou/aMhG1/+CNsLmKgK+4dzHHGIBWGa8ewG8Lg6L+hFSvY6yuJU7MPyNYz1sz/C+cV2Co86EOB5Yz6/MyBzjbFpJN6pVHEv8av+wuFdkiJZEM8K/5iutz3IbS7DJi1TMYDPUSISnk0l44a/teE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881670; c=relaxed/simple; bh=hyL2/X8OODb/MAa6mAq8coFobmArLr4dQvg/AGa1bOs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jKYC4s4mbCtBAAePa2Aj8T0CLqE0vr3zYoHXRThnwiPrRZm1XA7OCsGjki+HD/QKAuHvcyzqv7VFC5cQck/+z5fS9NdaDE2WnZiBKRnF52PE0LA3UhXnLV3f8abwTEwnV78+C8Vy35fozrF1e21S3Fnl3qdGP0l0/dqYhIY34Ok= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=FLzr60I5; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="FLzr60I5" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881668; x=1796417668; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hyL2/X8OODb/MAa6mAq8coFobmArLr4dQvg/AGa1bOs=; b=FLzr60I5NuqkVaMVlUOT9/E0HnzPN61ISHfXB9LdiveipSXG2qWiWNER CdijyyiPFNJXevnt587lJ4l5AVX9VXSmMsjRujLvrVWDCFMihI1laB/Z0 JHMKphV1iYOXCrgCMsSRadxcVT7oIYXzgEWi2w1tF2YUZL967pWvkl5K1 d6wqRkqUmapOwbsTx3FkWMot4yqtiRsCbyJYa9k4jBdvD7/1uByD8O48A L6jp/cOta949fc903RusTKs0660kSfV5IePZywO+r7yPFuhwYNdqOJlli d3ObZvZMFgxrmxrO70nszYaDfbbPMDM4ln1WL1dRuOoL64Dfy/2kEFVL8 w==; X-CSE-ConnectionGUID: MrHSXrowS9m//qIDTA+x6A== X-CSE-MsgGUID: 5eepJKBITbCEJnh7kbZvVQ== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510961" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510961" Received: 
from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:22 -0800 X-CSE-ConnectionGUID: Rpu/OjL7R9+c0PiuUT9naQ== X-CSE-MsgGUID: bv/03jmJTZSDap047yQPJw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752791" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:21 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 14/32] x86,fs/resctrl: Add and initialize a resource for package scope monitoring Date: Thu, 4 Dec 2025 12:53:44 -0800 Message-ID: <20251204205404.12763-15-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a new PERF_PKG resource and introduce package level scope for monitoring telemetry events so that CPU hot plug notifiers can build domains at the package granularity. Use the physical package ID available via topology_physical_package_id() to identify the monitoring domains with package level scope. This enables user space to use: /sys/devices/system/cpu/cpuX/topology/physical_package_id to identify the monitoring domain a CPU is associated with. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 2 ++ fs/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 10 ++++++++++ fs/resctrl/rdtgroup.c | 2 ++ 4 files changed, 16 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index dc148b7feb71..2a3613f27274 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -53,6 +53,7 @@ enum resctrl_res_level { RDT_RESOURCE_L2, RDT_RESOURCE_MBA, RDT_RESOURCE_SMBA, + RDT_RESOURCE_PERF_PKG, =20 /* Must be the last */ RDT_NUM_RESOURCES, @@ -270,6 +271,7 @@ enum resctrl_scope { RESCTRL_L2_CACHE =3D 2, RESCTRL_L3_CACHE =3D 3, RESCTRL_L3_NODE, + RESCTRL_PACKAGE, }; =20 /** diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 14e5a9ed1fbd..0110d1175398 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -255,6 +255,8 @@ struct rdtgroup { =20 #define RFTYPE_ASSIGN_CONFIG BIT(11) =20 +#define RFTYPE_RES_PERF_PKG BIT(12) + #define RFTYPE_CTRL_INFO (RFTYPE_INFO | RFTYPE_CTRL) =20 #define RFTYPE_MON_INFO (RFTYPE_INFO | RFTYPE_MON) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 2dd48b59ba9b..986b1303efb9 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -100,6 +100,14 @@ struct rdt_hw_resource rdt_resources_all[RDT_NUM_RESOU= RCES] =3D { .schema_fmt =3D RESCTRL_SCHEMA_RANGE, }, }, + [RDT_RESOURCE_PERF_PKG] =3D + { + .r_resctrl =3D { + .name =3D "PERF_PKG", + .mon_scope =3D RESCTRL_PACKAGE, + .mon_domains =3D mon_domain_init(RDT_RESOURCE_PERF_PKG), + }, + }, }; =20 u32 resctrl_arch_system_num_rmid_idx(void) @@ -440,6 +448,8 @@ static int get_domain_id_from_scope(int cpu, enum resct= rl_scope scope) return get_cpu_cacheinfo_id(cpu, 
scope); case RESCTRL_L3_NODE: return cpu_to_node(cpu); + case RESCTRL_PACKAGE: + return topology_physical_package_id(cpu); default: break; } diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index b20d104ea0c9..4952ba6b8609 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2395,6 +2395,8 @@ static unsigned long fflags_from_resource(struct rdt_= resource *r) case RDT_RESOURCE_MBA: case RDT_RESOURCE_SMBA: return RFTYPE_RES_MB; + case RDT_RESOURCE_PERF_PKG: + return RFTYPE_RES_PERF_PKG; } =20 return WARN_ON_ONCE(1); --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 87D3B2FBDFA for ; Thu, 4 Dec 2025 20:54:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881670; cv=none; b=A0q756KjpsCwiq9vK/wBa/eCG9AERaNHp09O5had2Xg2FKuBLhDuC7tqlPML/3eczEyeJ4n1j6xrPEabqdztvZMgYaEU2S61Lq4yiPg4E5hPVwUFeja49pRsZ5X3eHliUDoLMqRHnimOnLUwymvqYAquzVKCx5RxFtntGukWJl8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881670; c=relaxed/simple; bh=9KtQhAEuNacTIhDtiaux0qfjZKcSMZhDOZ14EYOTjwQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kQbd7Ae92SFTWald/56ybDaKhJZJ7k7XP3xlab2sQ9r64gLqcU+7wfSeud2uCk+bPbBK7LQ5tT0bpyJyNxvk3a/nqojvWslvOkF1lNu2/X7f1sIopfpKEZtrEb7Y9NVAn1ML4c8dwZAKF4w4O/SGIB804CCnIoBAMrZoRZrL1Io= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=npT6bRtl; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="npT6bRtl" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881668; x=1796417668; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9KtQhAEuNacTIhDtiaux0qfjZKcSMZhDOZ14EYOTjwQ=; b=npT6bRtlybWfnhAbk9w5yeAKcevSKglHWKX0JkItFmbzbXHBKDlUgaqy rjUdtEyc8JInBKUWauaNYu4C6xlSksVujPommdRABKDFh/HcBAejjaV7U dBjYLTtXuyQEhzbEbYR6XlXgZBbmO6RxX6kspuKvt3EoOksXcZQ9h70YK OFKRa0xFEsUge10bgwLY5fWTeU+5TeYsbIp4m+ujPq/wCZ2I2aMHWNbWI g+ITFLntVmZcmzGSrK5S1LbXT85Q78LKIY769hJzY/ILP2i+HYiYvvY8Q CrtiSVR2L+99OdIsboJRTKPltHEc0+dOFkUAZyUBcdFwAO/fVX8rNSF9n A==; X-CSE-ConnectionGUID: pXFMqXqqQwKmINC2bBXgIg== X-CSE-MsgGUID: ariGBpZrRuq2ID16v+0UOQ== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510970" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510970" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:23 -0800 X-CSE-ConnectionGUID: vTy3EsiSTKOCTi7fBreAUQ== X-CSE-MsgGUID: ZkyRC1M+Q66BoPFZshgLAw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752799" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:22 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 15/32] fs/resctrl: Emphasize that L3 monitoring resource is required for summing domains Date: Thu, 4 Dec 2025 12:53:45 -0800 Message-ID: <20251204205404.12763-16-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The feature to sum event data across multiple domains supports systems with Sub-NUMA Cluster (SNC) mode enabled. The top-level monitoring files in each "mon_L3_XX" directory provide the sum of data across all SNC nodes sharing an L3 cache instance while the "mon_sub_L3_YY" sub-directories provide the event data of the individual nodes. SNC is only associated with the L3 resource and domains and as a result the flow handling the sum of event data implicitly assumes it is working with the L3 resource and domains. Reading of telemetry events do not require to sum event data so this feature can remain dedicated to SNC and keep the implicit assumption of working with the L3 resource and domains. Add a WARN to where the implicit assumption of working with the L3 resource is made and add comments on how the structure controlling the event sum feature is used. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/internal.h | 4 ++-- fs/resctrl/ctrlmondata.c | 8 +++++++- fs/resctrl/rdtgroup.c | 3 ++- 3 files changed, 11 insertions(+), 4 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 0110d1175398..50d88e91e0da 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -92,8 +92,8 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; * @list: Member of the global @mon_data_kn_priv_list list. * @rid: Resource id associated with the event file. * @evt: Event structure associated with the event file. - * @sum: Set when event must be summed across multiple - * domains. + * @sum: Set for RDT_RESOURCE_L3 when event must be summed + * across multiple domains. * @domid: When @sum is zero this is the domain to which * the event file belongs. 
When @sum is one this * is the id of the L3 cache that all domains to be diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index f319fd1a6de3..cc4237c57cbe 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -677,7 +677,6 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; - struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; struct rdtgroup *rdtgrp; @@ -705,6 +704,13 @@ int rdtgroup_mondata_show(struct seq_file *m, void *ar= g) r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { + struct rdt_l3_mon_domain *d; + + if (WARN_ON_ONCE(resid !=3D RDT_RESOURCE_L3)) { + ret =3D -EINVAL; + goto out; + } + /* * This file requires summing across all domains that share * the L3 cache id that was provided in the "domid" field of the diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 4952ba6b8609..dce1f0a6d40b 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3095,7 +3095,8 @@ static void rmdir_all_sub(void) * @rid: The resource id for the event file being created. * @domid: The domain id for the event file being created. * @mevt: The type of event file being created. - * @do_sum: Whether SNC summing monitors are being created. + * @do_sum: Whether SNC summing monitors are being created. Only set + * when @rid =3D=3D RDT_RESOURCE_L3. */ static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int do= mid, struct mon_evt *mevt, --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5A1ED328243 for ; Thu, 4 Dec 2025 20:54:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881674; cv=none; b=WO1hhclIvkmtaOjP/srU1ZJY7nFNDW0ePAY0zZO/Chw3bWrAoYm/ItM5ZwtTU32kHPau8PrkR/OxxePOf/uIRQewf0bnZNgkb8nItySMSdti4lQkpm0JigOoNBwdQcfnYBOJ4bJJ3UVqZ2NDrvywGUFkQ5UsolfQFz8m+BCAifc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881674; c=relaxed/simple; bh=/U9yTGGRBUnxvbwPaXARH6IngCroh561p528+aY2s2g=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Nuwog3tisqfvHa3xPB2Ti/6S9+pU1Ktg+0S7lMXVS+ps0ovfhZXcTmAjC8Iw0DIMz9VjSAUd4T6hJ8lq2+VHtBbh+t8nbPaYaX59AVGO83S2LN7azdXrilbEBL3nhjxSXMzDErcPwkDF1ivXYcdnefQpwk4bzF9I4vMSOqattF0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=J8AKGQJd; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="J8AKGQJd" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881670; x=1796417670; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/U9yTGGRBUnxvbwPaXARH6IngCroh561p528+aY2s2g=; 
b=J8AKGQJdwiIGT1ntb103IkOb2LJOWGucRVpBCZfuU5919lfjG3FTQA3R tGjKrHrCjU2SmBUGg7PBjqTyQtaBsqvyruSSGzyvQX8wea4wF95K6ztN8 yU1hVK/hQOtwJc6al/vbtcV+zCP20XgewIvhv8maEF/+Ar0XcjKQDDnyw V0D9Wwq2QmG56WXlWiJeHXw4m2lk82pttpt2BBNv7ZqvOGD3nO1eqbxHC iNarpVmPWwAkt/LlO//b88+esG2XMlfkQu5blZuv8uMRhHKY+bfw0WSx7 7DLjmyxm0CcE5W6HPBdoNFkpIQtOqEs6tU9jOCsuOfWxGsmfHwEqo+2Qi g==; X-CSE-ConnectionGUID: kFFBWsV9RjerSaej/8O2oQ== X-CSE-MsgGUID: wWYoaWOWTCmggn0dknokYg== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510980" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510980" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:23 -0800 X-CSE-ConnectionGUID: xYdWA4OXQxS4d5pdR8RSIA== X-CSE-MsgGUID: ExMA0ClWToqRgM98KvqF7w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752804" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:23 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 16/32] x86/resctrl: Discover hardware telemetry events Date: Thu, 4 Dec 2025 12:53:46 -0800 Message-ID: <20251204205404.12763-17-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Each CPU collects data for telemetry events that it sends to the nearest telemetry event aggregator either when the value of MSR_IA32_PQR_ASSOC.RMID changes, or when a two millisecond timer expires. There is a feature type ("energy" or "perf"), guid, and MMIO region associa= ted with each aggregator. This combination links to an XML description of the set of telemetry events tracked by the aggregator. XML files are published by Intel in a GitHub repository [1]. The telemetry event aggregators maintain per-RMID per-event counts of the total seen for all the CPUs. There may be multiple telemetry event aggregat= ors per package. There are separate sets of aggregators for each feature type. Aggregators in a set may have different guids. All aggregators with the same feature type and guid are symmetric keeping counts for the same set of events for the CPUs that provide data to them. The XML file for each aggregator provides the following information: 0) Feature type of the events ("perf" or "energy") 1) Which telemetry events are tracked by the aggregator. 2) The order in which the event counters appear for each RMID. 3) The value type of each event counter (integer or fixed-point). 4) The number of RMIDs supported. 5) Which additional aggregator status registers are included. 6) The total size of the MMIO region for an aggregator. Introduce struct event_group that condenses the relevant information from an XML file. Hereafter an "event group" refers to a group of events of a particular feature type ("energy" or "perf") with a particular guid. The event_group::pfname field is used to choose the parameter to pass to intel_pmt_get_regions_by_feature(). 
It will later be used in console messages and with the rdt=3D boot parameter. The INTEL_PMT_TELEMETRY driver enumerates support for telemetry events. This driver provides intel_pmt_get_regions_by_feature() to list all availab= le telemetry event aggregators of a given feature type. The list includes the "guid", the base address in MMIO space for the region where the event count= ers are exposed, and the package id where the all the CPUs that report to this aggregator are located. Call INTEL_PMT_TELEMETRY's intel_pmt_get_regions_by_feature() for each event group to obtain a private copy of that event group's aggregator data. Dupli= cate the aggregator data between event groups that have the same feature type but different guid. Further processing on this private copy will be unique to the event group. Return the aggregator data to INTEL_PMT_TELEMETRY at resctrl exit time. resctrl will silently ignore unknown guid values. Add a new Kconfig option CONFIG_X86_CPU_RESCTRL_INTEL_AET for the Intel spe= cific parts of telemetry code. This depends on the INTEL_PMT_TELEMETRY and INTEL_= TPMI drivers being built-in to the kernel for enumeration of telemetry features. Signed-off-by: Tony Luck Link: https://github.com/intel/Intel-PMT # [1] --- arch/x86/kernel/cpu/resctrl/internal.h | 8 ++ arch/x86/kernel/cpu/resctrl/core.c | 5 ++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 110 ++++++++++++++++++++++++ arch/x86/Kconfig | 13 +++ arch/x86/kernel/cpu/resctrl/Makefile | 1 + 5 files changed, 137 insertions(+) create mode 100644 arch/x86/kernel/cpu/resctrl/intel_aet.c diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 11d06995810e..f2e6e3577df0 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -222,4 +222,12 @@ void __init intel_rdt_mbm_apply_quirk(void); void rdt_domain_reconfigure_cdp(struct rdt_resource *r); void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_resource *r); =20 +#ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET +bool intel_aet_get_events(void); +void __exit intel_aet_exit(void); +#else +static inline bool intel_aet_get_events(void) { return false; } +static inline void __exit intel_aet_exit(void) { } +#endif + #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 986b1303efb9..88be77d5d20d 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -743,6 +743,9 @@ void resctrl_arch_pre_mount(void) =20 if (!atomic_try_cmpxchg(&only_once, &old, 1)) return; + + if (!intel_aet_get_events()) + return; } =20 enum { @@ -1104,6 +1107,8 @@ late_initcall(resctrl_arch_late_init); =20 static void __exit resctrl_arch_exit(void) { + intel_aet_exit(); + cpuhp_remove_state(rdt_online); =20 resctrl_exit(); diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c new file mode 100644 index 000000000000..3cb79e30d284 --- /dev/null +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -0,0 +1,110 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Resource Director Technology(RDT) + * - Intel Application Energy Telemetry + * + * Copyright (C) 2025 Intel Corporation + * + * Author: + * Tony Luck + */ + +#define pr_fmt(fmt) "resctrl: " fmt + +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +/** + * struct event_group - Events with the same feature type ("energy" or "pe= rf") and guid. 
+ * @pfname: PMT feature name (energy or perf) of this event group + * used by boot rdt=3D option. + * @pfg: Points to the aggregated telemetry space information + * returned by the intel_pmt_get_regions_by_feature() + * call to the INTEL_PMT_TELEMETRY driver that contains + * data for all telemetry regions of type @pfname. + * Valid if the system supports the event group, + * NULL otherwise. + */ +struct event_group { + /* Data fields for additional structures to manage this group. */ + const char *pfname; + struct pmt_feature_group *pfg; +}; + +static struct event_group *known_event_groups[] =3D { +}; + +#define for_each_event_group(_peg) \ + for (_peg =3D known_event_groups; \ + _peg < &known_event_groups[ARRAY_SIZE(known_event_groups)]; \ + _peg++) + +/* Stub for now */ +static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +{ + return false; +} + +static enum pmt_feature_id lookup_pfid(const char *pfname) +{ + if (!strcmp(pfname, "energy")) + return FEATURE_PER_RMID_ENERGY_TELEM; + else if (!strcmp(pfname, "perf")) + return FEATURE_PER_RMID_PERF_TELEM; + + pr_warn("Unknown PMT feature name '%s'\n", pfname); + + return FEATURE_INVALID; +} + +/* + * Request a copy of struct pmt_feature_group for each event group. If the= re is + * one, the returned structure has an array of telemetry_region structures, + * each element of the array describes one telemetry aggregator. The + * telemetry aggregators may have different guids so obtain duplicate stru= ct + * pmt_feature_group for event groups with same feature type but different + * guid. Post-processing ensures an event group can only use the telemetry + * aggregators that match its guid. An event group keeps a pointer to its + * struct pmt_feature_group to indicate that its events are successfully + * enabled. + */ +bool intel_aet_get_events(void) +{ + struct pmt_feature_group *p; + enum pmt_feature_id pfid; + struct event_group **peg; + bool ret =3D false; + + for_each_event_group(peg) { + pfid =3D lookup_pfid((*peg)->pfname); + p =3D intel_pmt_get_regions_by_feature(pfid); + if (IS_ERR_OR_NULL(p)) + continue; + if (enable_events(*peg, p)) { + (*peg)->pfg =3D p; + ret =3D true; + } else { + intel_pmt_put_feature_group(p); + } + } + + return ret; +} + +void __exit intel_aet_exit(void) +{ + struct event_group **peg; + + for_each_event_group(peg) { + if ((*peg)->pfg) { + intel_pmt_put_feature_group((*peg)->pfg); + (*peg)->pfg =3D NULL; + } + } +} diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 34fb46d5341b..52dda19d584d 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -538,6 +538,19 @@ config X86_CPU_RESCTRL =20 Say N if unsure. =20 +config X86_CPU_RESCTRL_INTEL_AET + bool "Intel Application Energy Telemetry" + depends on X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEMETRY=3Dy &&= INTEL_TPMI=3Dy + help + Enable per-RMID telemetry events in resctrl. + + Intel feature that collects per-RMID execution data + about energy consumption, measure of frequency independent + activity and other performance metrics. Data is aggregated + per package. + + Say N if unsure. 
+ config X86_FRED bool "Flexible Return and Event Delivery" depends on X86_64 diff --git a/arch/x86/kernel/cpu/resctrl/Makefile b/arch/x86/kernel/cpu/res= ctrl/Makefile index d8a04b195da2..273ddfa30836 100644 --- a/arch/x86/kernel/cpu/resctrl/Makefile +++ b/arch/x86/kernel/cpu/resctrl/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_X86_CPU_RESCTRL) +=3D core.o rdtgroup.o monitor.o obj-$(CONFIG_X86_CPU_RESCTRL) +=3D ctrlmondata.o +obj-$(CONFIG_X86_CPU_RESCTRL_INTEL_AET) +=3D intel_aet.o obj-$(CONFIG_RESCTRL_FS_PSEUDO_LOCK) +=3D pseudo_lock.o =20 # To allow define_trace.h's recursive include: --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 19954328258 for ; Thu, 4 Dec 2025 20:54:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881673; cv=none; b=SMbPxDqn/RpKSMUm/lcDZ/tAHtsQClIPrAwXm7/zdhu0KzQEiNCQM72QD5iKHaU3dW1LJp0HXrQjr2/oN3B6XYFSZZYaDNHGBWv4/gxJMDULPQAjGqRj0diO51ztElraT4hKZYHudPAxjTcTdIdl8oDaSJKju7TKAKwDeSkOGuA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881673; c=relaxed/simple; bh=6E6jI55V7HFZ0gjUlFHzeYDtgx2rxi+akF/Ey9qovB0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=FHCknJnftlXBaw5EZzjCSQcbhpZwcyE8gBmrxK8Zzo+3Bb4LbWUsDxJ78vbDWsyBHIIFNECWLtGe9sN+UdCBOzvOjvmQ0oVi45BaeX0qw/oudDKERRQUgVlMPvN4M4J7wgB5ldx7KhzTCE6IWeNRnHDq2PZdfigq+j4xSBo/r9w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=KkSFgc0G; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KkSFgc0G" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881671; x=1796417671; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6E6jI55V7HFZ0gjUlFHzeYDtgx2rxi+akF/Ey9qovB0=; b=KkSFgc0GcPB7yWauC89BFupBOb+nhkCnhWelXoUmiQn2QA5i7aSC0w9H RYz0/zgRfw2oxG0aIiP1W3P8bsC7xsdu1eSV/gsabpLdOxNRcjku8IhCQ uvrOBLm9Y6NjzMD0FfO8lFlT5/upU5R9Pwg5eahSmDvk0GLznnDfVQmdw sJw1CZNBjvhrQG/BUZJFrEDs6Dt/ePOxO8raBEoi7rI+1V1fp96N9+f6E vq71LYPmiJFoxnLkYETv5a/rcgoHqQNkUVtHj4zlnkKF6J7ARdT2lb9kx EBiZASCaaRpP/3NEdJClgfT6B34ebOPyK2qzVQ0Br5vXmkbNN2JJVOnZR Q==; X-CSE-ConnectionGUID: RFjMo7luSvGyH8nSpbm1ig== X-CSE-MsgGUID: io7p6DiLSmKqgB4UwIuD8g== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510988" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510988" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:24 -0800 X-CSE-ConnectionGUID: ZKPgUIjNRByMnnzo0/FiLw== X-CSE-MsgGUID: G/bOLERSQiyYJrPjpMIfLA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752809" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO 
agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:23 -0800
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15 17/32] x86,fs/resctrl: Fill in details of events for guid 0x26696143 and 0x26557651
Date: Thu, 4 Dec 2025 12:53:47 -0800
Message-ID: <20251204205404.12763-18-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

The telemetry event aggregators of the Intel Clearwater Forest CPU support
two RMID-based feature types: "energy" with guid 0x26696143 [1], and "perf"
with guid 0x26557651 [2].

The event counter offsets in an aggregator's MMIO space are arranged in
groups for each RMID. E.g. the "energy" counters for guid 0x26696143 are
arranged like this:

MMIO offset:0x0000 Counter for RMID 0 PMT_EVENT_ENERGY
MMIO offset:0x0008 Counter for RMID 0 PMT_EVENT_ACTIVITY
MMIO offset:0x0010 Counter for RMID 1 PMT_EVENT_ENERGY
MMIO offset:0x0018 Counter for RMID 1 PMT_EVENT_ACTIVITY
...
MMIO offset:0x23F0 Counter for RMID 575 PMT_EVENT_ENERGY
MMIO offset:0x23F8 Counter for RMID 575 PMT_EVENT_ACTIVITY

After all the counters there are three status registers that report how many
times an aggregator was unable to process event counts, the time stamp of
the most recent loss of data, and the time stamp of the most recent
successful update.

MMIO offset:0x2400 AGG_DATA_LOSS_COUNT
MMIO offset:0x2408 AGG_DATA_LOSS_TIMESTAMP
MMIO offset:0x2410 LAST_UPDATE_TIMESTAMP

Define event_group structures for both of these aggregator types and define
the events tracked by the aggregators in the file system code.

PMT_EVENT_ENERGY and PMT_EVENT_ACTIVITY are produced in fixed-point format.
File system code must output them as floating point values.
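As a quick cross-check of the layout above, the byte offset of any counter
and the total region size follow directly from the per-RMID grouping. A
minimal user-space sketch (illustrative only, not part of the patch; the
constants 576, 2 and 3 come from the XML description for this guid):

/* Sketch: offsets for the guid 0x26696143 "energy" aggregator layout. */
#include <stdio.h>
#include <stddef.h>

#define NUM_RMIDS	576	/* RMIDs supported by this aggregator */
#define EVENTS_PER_RMID	2	/* ENERGY and ACTIVITY */
#define EXTRA_STATUS	3	/* trailing status registers */

static size_t counter_offset(unsigned int rmid, unsigned int evt_idx)
{
	/* Counters are u64 values grouped per RMID, event order from the XML. */
	return (rmid * EVENTS_PER_RMID + evt_idx) * sizeof(unsigned long long);
}

int main(void)
{
	/* RMID 575, PMT_EVENT_ACTIVITY: prints 0x23f8 as in the table above. */
	printf("0x%zx\n", counter_offset(575, 1));
	/* Whole region, counters plus status registers: prints 0x2418. */
	printf("0x%zx\n", (size_t)(NUM_RMIDS * EVENTS_PER_RMID + EXTRA_STATUS) *
			  sizeof(unsigned long long));
	return 0;
}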
Signed-off-by: Tony Luck Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-ENER= GY/cwf_aggregator.xml # [1] Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-PERF= /cwf_aggregator.xml # [2] Reviewed-by: Reinette Chatre --- include/linux/resctrl_types.h | 11 +++++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 66 +++++++++++++++++++++++++ fs/resctrl/monitor.c | 35 +++++++------ 3 files changed, 97 insertions(+), 15 deletions(-) diff --git a/include/linux/resctrl_types.h b/include/linux/resctrl_types.h index acfe07860b34..a5f56faa18d2 100644 --- a/include/linux/resctrl_types.h +++ b/include/linux/resctrl_types.h @@ -50,6 +50,17 @@ enum resctrl_event_id { QOS_L3_MBM_TOTAL_EVENT_ID =3D 0x02, QOS_L3_MBM_LOCAL_EVENT_ID =3D 0x03, =20 + /* Intel Telemetry Events */ + PMT_EVENT_ENERGY, + PMT_EVENT_ACTIVITY, + PMT_EVENT_STALLS_LLC_HIT, + PMT_EVENT_C1_RES, + PMT_EVENT_UNHALTED_CORE_CYCLES, + PMT_EVENT_STALLS_LLC_MISS, + PMT_EVENT_AUTO_C6_RES, + PMT_EVENT_UNHALTED_REF_CYCLES, + PMT_EVENT_UOPS_RETIRED, + /* Must be the last */ QOS_NUM_EVENTS, }; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 3cb79e30d284..33b7bb180582 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -11,15 +11,33 @@ =20 #define pr_fmt(fmt) "resctrl: " fmt =20 +#include #include #include #include #include #include +#include #include +#include =20 #include "internal.h" =20 +/** + * struct pmt_event - Telemetry event. + * @id: Resctrl event id. + * @idx: Counter index within each per-RMID block of counters. + * @bin_bits: Zero for integer valued events, else number bits in fraction + * part of fixed-point. + */ +struct pmt_event { + enum resctrl_event_id id; + unsigned int idx; + unsigned int bin_bits; +}; + +#define EVT(_id, _idx, _bits) { .id =3D _id, .idx =3D _idx, .bin_bits =3D = _bits } + /** * struct event_group - Events with the same feature type ("energy" or "pe= rf") and guid. * @pfname: PMT feature name (energy or perf) of this event group @@ -30,14 +48,62 @@ * data for all telemetry regions of type @pfname. * Valid if the system supports the event group, * NULL otherwise. + * @guid: Unique number per XML description file. + * @mmio_size: Number of bytes of MMIO registers for this group. + * @num_events: Number of events in this group. + * @evts: Array of event descriptors. */ struct event_group { /* Data fields for additional structures to manage this group. */ const char *pfname; struct pmt_feature_group *pfg; + + /* Remaining fields initialized from XML file. 
*/ + u32 guid; + size_t mmio_size; + unsigned int num_events; + struct pmt_event evts[] __counted_by(num_events); +}; + +#define XML_MMIO_SIZE(num_rmids, num_events, num_extra_status) \ + (((num_rmids) * (num_events) + (num_extra_status)) * sizeof(u64)) + +/* + * Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-= ENERGY/cwf_aggregator.xml + */ +static struct event_group energy_0x26696143 =3D { + .pfname =3D "energy", + .guid =3D 0x26696143, + .mmio_size =3D XML_MMIO_SIZE(576, 2, 3), + .num_events =3D 2, + .evts =3D { + EVT(PMT_EVENT_ENERGY, 0, 18), + EVT(PMT_EVENT_ACTIVITY, 1, 18), + } +}; + +/* + * Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-= PERF/cwf_aggregator.xml + */ +static struct event_group perf_0x26557651 =3D { + .pfname =3D "perf", + .guid =3D 0x26557651, + .mmio_size =3D XML_MMIO_SIZE(576, 7, 3), + .num_events =3D 7, + .evts =3D { + EVT(PMT_EVENT_STALLS_LLC_HIT, 0, 0), + EVT(PMT_EVENT_C1_RES, 1, 0), + EVT(PMT_EVENT_UNHALTED_CORE_CYCLES, 2, 0), + EVT(PMT_EVENT_STALLS_LLC_MISS, 3, 0), + EVT(PMT_EVENT_AUTO_C6_RES, 4, 0), + EVT(PMT_EVENT_UNHALTED_REF_CYCLES, 5, 0), + EVT(PMT_EVENT_UOPS_RETIRED, 6, 0), + } }; =20 static struct event_group *known_event_groups[] =3D { + &energy_0x26696143, + &perf_0x26557651, }; =20 #define for_each_event_group(_peg) \ diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 59736ab08213..acf2437c5b34 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -966,27 +966,32 @@ static void dom_data_exit(struct rdt_resource *r) mutex_unlock(&rdtgroup_mutex); } =20 +#define MON_EVENT(_eventid, _name, _res, _fp) \ + [_eventid] =3D { \ + .name =3D _name, \ + .evtid =3D _eventid, \ + .rid =3D _res, \ + .is_floating_point =3D _fp, \ +} + /* * All available events. Architecture code marks the ones that * are supported by a system using resctrl_enable_mon_event() * to set .enabled. 
*/ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { - [QOS_L3_OCCUP_EVENT_ID] =3D { - .name =3D "llc_occupancy", - .evtid =3D QOS_L3_OCCUP_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, - [QOS_L3_MBM_TOTAL_EVENT_ID] =3D { - .name =3D "mbm_total_bytes", - .evtid =3D QOS_L3_MBM_TOTAL_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, - [QOS_L3_MBM_LOCAL_EVENT_ID] =3D { - .name =3D "mbm_local_bytes", - .evtid =3D QOS_L3_MBM_LOCAL_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, + MON_EVENT(QOS_L3_OCCUP_EVENT_ID, "llc_occupancy", RDT_RESOURCE_L3, false= ), + MON_EVENT(QOS_L3_MBM_TOTAL_EVENT_ID, "mbm_total_bytes", RDT_RESOURCE_L3,= false), + MON_EVENT(QOS_L3_MBM_LOCAL_EVENT_ID, "mbm_local_bytes", RDT_RESOURCE_L3,= false), + MON_EVENT(PMT_EVENT_ENERGY, "core_energy", RDT_RESOURCE_PERF_PKG, true= ), + MON_EVENT(PMT_EVENT_ACTIVITY, "activity", RDT_RESOURCE_PERF_PKG, true), + MON_EVENT(PMT_EVENT_STALLS_LLC_HIT, "stalls_llc_hit", RDT_RESOURCE_PERF_= PKG, false), + MON_EVENT(PMT_EVENT_C1_RES, "c1_res", RDT_RESOURCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_UNHALTED_CORE_CYCLES, "unhalted_core_cycles", RDT_RES= OURCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_STALLS_LLC_MISS, "stalls_llc_miss", RDT_RESOURCE_PER= F_PKG, false), + MON_EVENT(PMT_EVENT_AUTO_C6_RES, "c6_res", RDT_RESOURCE_PERF_PKG, false= ), + MON_EVENT(PMT_EVENT_UNHALTED_REF_CYCLES, "unhalted_ref_cycles", RDT_RESOU= RCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C039F328268 for ; Thu, 4 Dec 2025 20:54:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881675; cv=none; b=da6azKy6IuOGOdV9RwZhHnqP1BIfC2kC49va0SUiqgUxhGgGQGoyGxSl5RBvn0iOww1GqjbwYSQPOIsPFoXWF4Rj1EReJUgYZw8Z8MN2K7HcdAp2CLxZSXnz/02Pc15DUfrP/FH8fS1za6apoark/Yas9Bvidkd2iErs+1n6pUU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881675; c=relaxed/simple; bh=ytNu7Mia9zHrNLmhPRX1P7r9x936HGc8s1qlRvsJvB8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dSDIAJJc9pgSDrnfWHsAaQuIpOhUmHKEpMr178bD+PMuLdc59jNeYP5n0YJoX1m2MqSQHzzMr4VP5L7aMLpFVGy6zM0CZS9KthXi4aeZuMf0Cpr2YVvuAyTbWcOMEQ7sWoK3g/wqfu8mGLwXFtp6TPDpdPmk+NNbgvgAIU+glcg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=WSCb/jNj; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WSCb/jNj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881672; x=1796417672; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=ytNu7Mia9zHrNLmhPRX1P7r9x936HGc8s1qlRvsJvB8=; 
b=WSCb/jNjgrBRCIFNe2g2VQeaAIv+nPasRpZBZSbpvkeXrLRvx+2ol/7U wawJfPqDcTORNlNlllvbHkh8Q4BW5u09L8EdNWw6loBZUZMQwxGAowedj 5xQ6c+IJn2LhlY+H2ieAYMzJ6JkgtQAPZEZJyBbtqhlWw2+H99vdlLGjb b1XU/IIZzMu3uXS1z3Y96ZfJvCHvRs6OFzY/DRVwzhZIFv1jPuZJnCN8o 6XdBabEh9Ji9xcUELsVABTewO5H6UD35oz785xOZHCigWt7QSfa5vw2ux IBlfW7v777dEY5GcZaszCojUKywrE3qjhc080w31t6uP+QlTkCRlEC9g2 g==; X-CSE-ConnectionGUID: XlE/QzhlRkyPbchIjEKFLw== X-CSE-MsgGUID: SVAqAyQPTRWGjLBISfyV8g== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69510998" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69510998" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:25 -0800 X-CSE-ConnectionGUID: Bz+y1eukQkSI+4IZv7pugw== X-CSE-MsgGUID: LaXxFxacRCGJfESuUGwEXA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752819" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:24 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 18/32] x86,fs/resctrl: Add architectural event pointer Date: Thu, 4 Dec 2025 12:53:48 -0800 Message-ID: <20251204205404.12763-19-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The resctrl file system layer passes the domain, RMID, and event id to the architecture to fetch an event counter. Fetching a telemetry event counter requires additional information that is private to the architecture, for example, the offset into MMIO space from where the counter should be read. Add mon_evt::arch_priv that architecture can use for any private data relat= ed to the event. resctrl filesystem initializes mon_evt::arch_priv when the architecture enables the event and passes it back to architecture when needing to fetch an event counter. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 7 +++++-- fs/resctrl/internal.h | 4 ++++ arch/x86/kernel/cpu/resctrl/core.c | 6 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 2 +- fs/resctrl/monitor.c | 14 ++++++++++---- 5 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 2a3613f27274..b30f99335bbe 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -415,7 +415,7 @@ u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, - unsigned int binary_bits); + unsigned int binary_bits, void *arch_priv); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 @@ -532,6 +532,9 @@ void resctrl_arch_pre_mount(void); * only. * @rmid: rmid of the counter to read. * @eventid: eventid to read, e.g. L3 occupancy. + * @arch_priv: Architecture private data for this event. 
+ * The @arch_priv provided by the architecture via + * resctrl_enable_mon_event(). * @val: result of the counter read in bytes. * @arch_mon_ctx: An architecture specific value from * resctrl_arch_mon_ctx_alloc(), for MPAM this identifies @@ -549,7 +552,7 @@ void resctrl_arch_pre_mount(void); */ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, enum resctrl_event_id eventid, - u64 *val, void *arch_mon_ctx); + void *arch_priv, u64 *val, void *arch_mon_ctx); =20 /** * resctrl_arch_rmid_read_context_check() - warn about invalid contexts diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 50d88e91e0da..399f625be67d 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -66,6 +66,9 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * @binary_bits: number of fixed-point binary bits from architecture, * only valid if @is_floating_point is true * @enabled: true if the event is enabled + * @arch_priv: Architecture private data for this event. + * The @arch_priv provided by the architecture via + * resctrl_enable_mon_event(). */ struct mon_evt { enum resctrl_event_id evtid; @@ -77,6 +80,7 @@ struct mon_evt { bool is_floating_point; unsigned int binary_bits; bool enabled; + void *arch_priv; }; =20 extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 88be77d5d20d..3c6946a5ff1b 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -924,15 +924,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 20605212656c..6929614ba6e6 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -240,7 +240,7 @@ static u64 get_corrected_val(struct rdt_resource *r, st= ruct rdt_l3_mon_domain *d =20 int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, - u64 *val, void *ignored) + void *arch_priv, u64 *val, void *ignored) { struct rdt_hw_l3_mon_domain *hw_dom; struct rdt_l3_mon_domain *d; diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index acf2437c5b34..251dd573ed5f 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -137,9 +137,11 @@ void __check_limbo(struct rdt_l3_mon_domain *d, bool f= orce_free) struct rmid_entry *entry; u32 idx, cur_idx =3D 1; void *arch_mon_ctx; + void *arch_priv; bool rmid_dirty; u64 val =3D 0; =20 + arch_priv =3D mon_event_all[QOS_L3_OCCUP_EVENT_ID].arch_priv; arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID); if (IS_ERR(arch_mon_ctx)) { pr_warn_ratelimited("Failed to allocate monitor context: %ld", @@ -160,7 +162,7 @@ void __check_limbo(struct rdt_l3_mon_domain *d, bool 
fo= rce_free) =20 entry =3D __rmid_entry(idx); if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid, - QOS_L3_OCCUP_EVENT_ID, &val, + QOS_L3_OCCUP_EVENT_ID, arch_priv, &val, arch_mon_ctx)) { rmid_dirty =3D true; } else { @@ -456,7 +458,8 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) rr->evt->evtid, &tval); else rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, - rr->evt->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; =20 @@ -501,7 +504,8 @@ static int __l3_mon_event_count_sum(struct rdtgroup *rd= tgrp, struct rmid_read *r if (d->ci_id !=3D rr->ci->id) continue; err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, - rr->evt->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -994,7 +998,8 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, + unsigned int binary_bits, void *arch_priv) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || binary_bits > MAX_BINARY_BITS)) @@ -1010,6 +1015,7 @@ void resctrl_enable_mon_event(enum resctrl_event_id e= ventid, bool any_cpu, unsig =20 mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].binary_bits =3D binary_bits; + mon_event_all[eventid].arch_priv =3D arch_priv; mon_event_all[eventid].enabled =3D true; } =20 --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1999532862C for ; Thu, 4 Dec 2025 20:54:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881676; cv=none; b=hmmwWiJ6bkW8cfMrPOzw9DdwroBzpJRCoWtodRJp4x0NdoXolbJbQwh7uNamY7RI3AsY2XhyM7QW5oU97CHP+nERFYKdC0HudPd0bsJ8kt3WZQZ23ebg8DpyKdzfBX9ckDy26VNfXR4kyxd8RHVkF6/UDF0YXa/KIjGdNryN9J8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881676; c=relaxed/simple; bh=1Fiy1Jt3SNN4uJBIF4OP/XRClRrKLBBagZLLc2DJoZg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pkpBrXuC4BIkI3UGqRqa4b5imu9VB+wXkSaTRD0lRGMEGRXwZr97NoIvDqWRwVg5rh6IV86+hRv/D6IU6YaRhtPoH9HDkogkYfASdHCSa4/RSS+OwGSgsjyq5XCOygATl7kuIlP/uJ5FwXkdHgrNT4Y9uVUmp4kQLbz4yM+RADM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=dMDrUY/W; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="dMDrUY/W" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881674; x=1796417674; 
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15 19/32] x86/resctrl: Find and enable usable telemetry events
Date: Thu, 4 Dec 2025 12:53:49 -0800
Message-ID: <20251204205404.12763-20-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Every event group has a private copy of the data of all telemetry event
aggregators (aka "telemetry regions") tracking its feature type. These may
include regions that have the same feature type but a different guid from
the event group's.

Traverse the event group's telemetry region data and mark all regions that
are not usable by the event group as unusable by clearing those regions'
MMIO addresses. A region is considered unusable if:

1) Its guid does not match the guid of the event group.
2) Its package ID is invalid.
3) The enumerated size of the MMIO region does not match the expected
   value from the XML description file.

Hereafter any telemetry region with an MMIO address is considered valid for
the event group it is associated with.

Enable all the event group's events as long as there is at least one usable
region from which data for its events can be read. Enabling of events can
fail. Warn the user if none of the events in an event group can be enabled.

Note that it is architecturally possible that some telemetry events are only
supported by a subset of the packages in the system. It is not expected that
systems will ever do this. If they do, the user will see event files in
resctrl that always return "Unavailable".
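The three checks reduce to a single predicate per region. A condensed
stand-alone sketch (simplified stand-in types, not the kernel structures;
the patch below implements the same rule in skip_telem_region()):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's telemetry_region and event_group. */
struct region {
	uint32_t guid;		/* from enumeration */
	uint32_t package_id;
	size_t size;		/* enumerated MMIO size */
};

struct group {
	uint32_t guid;		/* from the XML description */
	size_t mmio_size;	/* expected MMIO size */
};

bool region_usable(const struct region *tr, const struct group *e,
		   uint32_t max_packages)
{
	return tr->guid == e->guid &&			/* 1) guid matches   */
	       tr->package_id < max_packages &&		/* 2) valid package  */
	       tr->size == e->mmio_size;		/* 3) expected size  */
}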
Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- arch/x86/kernel/cpu/resctrl/intel_aet.c | 67 ++++++++++++++++++++++++- fs/resctrl/monitor.c | 10 ++-- 3 files changed, 72 insertions(+), 7 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index b30f99335bbe..14126d228e61 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -414,7 +414,7 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, +bool resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, unsigned int binary_bits, void *arch_priv); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 33b7bb180582..bc42df4498f8 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -16,9 +16,11 @@ #include #include #include +#include #include #include #include +#include #include =20 #include "internal.h" @@ -111,12 +113,73 @@ static struct event_group *known_event_groups[] =3D { _peg < &known_event_groups[ARRAY_SIZE(known_event_groups)]; \ _peg++) =20 -/* Stub for now */ -static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +/* + * Clear the address field of regions that did not pass the checks in + * skip_telem_region() so they will not be used by intel_aet_read_event(). + * This is safe to do because intel_pmt_get_regions_by_feature() allocates + * a new pmt_feature_group structure to return to each caller and only mak= es + * use of the pmt_feature_group::kref field when intel_pmt_put_feature_gro= up() + * returns the structure. + */ +static void mark_telem_region_unusable(struct telemetry_region *tr) { + tr->addr =3D NULL; +} + +static bool skip_telem_region(struct telemetry_region *tr, struct event_gr= oup *e) +{ + if (tr->guid !=3D e->guid) + return true; + if (tr->plat_info.package_id >=3D topology_max_packages()) { + pr_warn("Bad package %u in guid 0x%x\n", tr->plat_info.package_id, + tr->guid); + return true; + } + if (tr->size !=3D e->mmio_size) { + pr_warn("MMIO space wrong size (%zu bytes) for guid 0x%x. 
Expected %zu b= ytes.\n", + tr->size, e->guid, e->mmio_size); + return true; + } + return false; } =20 +static bool group_has_usable_regions(struct event_group *e, struct pmt_fea= ture_group *p) +{ + bool usable_regions =3D false; + + for (int i =3D 0; i < p->count; i++) { + if (skip_telem_region(&p->regions[i], e)) { + mark_telem_region_unusable(&p->regions[i]); + continue; + } + usable_regions =3D true; + } + + return usable_regions; +} + +static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +{ + struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; + int skipped_events =3D 0; + + if (!group_has_usable_regions(e, p)) + return false; + + for (int j =3D 0; j < e->num_events; j++) { + if (!resctrl_enable_mon_event(e->evts[j].id, true, + e->evts[j].bin_bits, &e->evts[j])) + skipped_events++; + } + if (e->num_events =3D=3D skipped_events) { + pr_info("No events enabled in %s %s:0x%x\n", r->name, e->pfname, e->guid= ); + return false; + } + + return skipped_events < e->num_events; +} + static enum pmt_feature_id lookup_pfid(const char *pfname) { if (!strcmp(pfname, "energy")) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 251dd573ed5f..83652130a8a6 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -998,25 +998,27 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, +bool resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, unsigned int binary_bits, void *arch_priv) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || binary_bits > MAX_BINARY_BITS)) - return; + return false; if (mon_event_all[eventid].enabled) { pr_warn("Duplicate enable for event %d\n", eventid); - return; + return false; } if (binary_bits && !mon_event_all[eventid].is_floating_point) { pr_warn("Event %d may not be floating point\n", eventid); - return; + return false; } =20 mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].binary_bits =3D binary_bits; mon_event_all[eventid].arch_priv =3D arch_priv; mon_event_all[eventid].enabled =3D true; + + return true; } =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid) --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6EA14328639 for ; Thu, 4 Dec 2025 20:54:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881677; cv=none; b=ojoMrRUZgxEFHlCAce19KW3YT047IJQga6IgD3Fl4izD2BIYP99EtE98cOM0SWpkUuNNZJ6TNjixBfqEbMwqZjUX+yE7YvoLvHDoV4wbfn1EeoRlIWSWoD4kggQepicwQVuzWa/asg0eGlQEZE+MQkBbHP2Bcd66cEF1AYssjU4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881677; c=relaxed/simple; bh=upL5fYFchnBxZeE3JP/gOFxFpTtLRTRKUexJZ+e8PrQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=S9ul4m4drxkMEpx6OMzoq+8wqkU78f3IUENbi9J/RnkPbG5Q/Oo8pC4ylZx7pIiY+iixz8v1UuhEUcrN7jHEObVeuI7XHN0L9Gq6EiXQZFbx/vn2i89q0eIlmzeUsYRdy3l/hCWMjey8h29BTBPdG/aTO+l0+0fPcz1e/ySNQhw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; 
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15 20/32] x86/resctrl: Read telemetry events
Date: Thu, 4 Dec 2025 12:53:50 -0800
Message-ID: <20251204205404.12763-21-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

Introduce intel_aet_read_event() to read telemetry events for resource
RDT_RESOURCE_PERF_PKG. There may be multiple aggregators tracking each
package, so scan all of them and add up all counters. An aggregator may
return an invalid data indication if it has received no records for a given
RMID. The user will see "Unavailable" if none of the aggregators on a
package provide valid counts.

Resctrl now uses readq(), so it depends on X86_64. Update Kconfig.
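The per-package summing and the "Unavailable" rule can be pictured with a
small stand-alone sketch. The bit layout matches the DATA_VALID/DATA_BITS
definitions added below; the helper name and surrounding code are purely
illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DATA_VALID (1ULL << 63)		/* set when the aggregator has data */
#define DATA_BITS  (DATA_VALID - 1)	/* bits 62:0 hold the count */

/* Sum raw per-aggregator reads for one RMID/event; fail if none are valid. */
static int sum_aggregators(const uint64_t *raw, int nr, uint64_t *total)
{
	bool valid = false;

	*total = 0;
	for (int i = 0; i < nr; i++) {
		if (!(raw[i] & DATA_VALID))
			continue;	/* no records seen for this RMID */
		*total += raw[i] & DATA_BITS;
		valid = true;
	}
	return valid ? 0 : -1;		/* -1 -> user sees "Unavailable" */
}

int main(void)
{
	uint64_t raw[] = { DATA_VALID | 100, 0, DATA_VALID | 23 };
	uint64_t total;

	if (!sum_aggregators(raw, 3, &total))
		printf("%llu\n", (unsigned long long)total);	/* prints 123 */
	return 0;
}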
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 5 +++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 51 +++++++++++++++++++++++++ arch/x86/kernel/cpu/resctrl/monitor.c | 4 ++ fs/resctrl/monitor.c | 14 +++++++ arch/x86/Kconfig | 2 +- 5 files changed, 75 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index f2e6e3577df0..10743f5d5fd4 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -225,9 +225,14 @@ void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_r= esource *r); #ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET bool intel_aet_get_events(void); void __exit intel_aet_exit(void); +int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } +static inline int intel_aet_read_event(int domid, u32 rmid, void *arch_pri= v, u64 *val) +{ + return -EINVAL; +} #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index bc42df4498f8..85ca24e42ec1 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -11,11 +11,15 @@ =20 #define pr_fmt(fmt) "resctrl: " fmt =20 +#include #include +#include #include +#include #include #include #include +#include #include #include #include @@ -237,3 +241,50 @@ void __exit intel_aet_exit(void) } } } + +#define DATA_VALID BIT_ULL(63) +#define DATA_BITS GENMASK_ULL(62, 0) + +/* + * Read counter for an event on a domain (summing all aggregators on the + * domain). If an aggregator hasn't received any data for a specific RMID, + * the MMIO read indicates that data is not valid. Return success if at + * least one aggregator has valid data. + */ +int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val) +{ + struct pmt_event *pevt =3D arch_priv; + struct event_group *e; + bool valid =3D false; + u64 total =3D 0; + u64 evtcount; + void *pevt0; + u32 idx; + + pevt0 =3D pevt - pevt->idx; + e =3D container_of(pevt0, struct event_group, evts); + idx =3D rmid * e->num_events; + idx +=3D pevt->idx; + + if (idx * sizeof(u64) + sizeof(u64) > e->mmio_size) { + pr_warn_once("MMIO index %u out of range\n", idx); + return -EIO; + } + + for (int i =3D 0; i < e->pfg->count; i++) { + if (!e->pfg->regions[i].addr) + continue; + if (e->pfg->regions[i].plat_info.package_id !=3D domid) + continue; + evtcount =3D readq(e->pfg->regions[i].addr + idx * sizeof(u64)); + if (!(evtcount & DATA_VALID)) + continue; + total +=3D evtcount & DATA_BITS; + valid =3D true; + } + + if (valid) + *val =3D total; + + return valid ? 
0 : -EINVAL; +} diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 6929614ba6e6..e6a154240b8d 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -251,6 +251,10 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, str= uct rdt_domain_hdr *hdr, int ret; =20 resctrl_arch_rmid_read_context_check(); + + if (r->rid =3D=3D RDT_RESOURCE_PERF_PKG) + return intel_aet_read_event(hdr->id, rmid, arch_priv, val); + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 83652130a8a6..8d2b0bb0bfc9 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -528,6 +528,20 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) } else { return __l3_mon_event_count_sum(rdtgrp, rr); } + case RDT_RESOURCE_PERF_PKG: { + u64 tval =3D 0; + + rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, rdtgrp->closid, + rdtgrp->mon.rmid, rr->evt->evtid, + rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); + if (rr->err) + return rr->err; + + rr->val +=3D tval; + + return 0; + } default: rr->err =3D -EINVAL; return -EINVAL; diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 52dda19d584d..4cf9de520baf 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -540,7 +540,7 @@ config X86_CPU_RESCTRL =20 config X86_CPU_RESCTRL_INTEL_AET bool "Intel Application Energy Telemetry" - depends on X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEMETRY=3Dy &&= INTEL_TPMI=3Dy + depends on X86_64 && X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEME= TRY=3Dy && INTEL_TPMI=3Dy help Enable per-RMID telemetry events in resctrl. =20 --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E2F7832937F for ; Thu, 4 Dec 2025 20:54:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881678; cv=none; b=bDH6vtdn3Z1Rt5Qof+w7pSeHO7SsGj6GuOYhHu5WznV8qDw7r0EuA4+/Q0sZyBkGryxvCQZOUZ/DbDFVJQ+C6AXpqtumec6t8sOU9Hh1mC6o/dnjx1n3AV81o3es3BHP5hK9qKMh5dYbuS5L32rqrYYpWwKAPJLPaA4Q4EVDn4I= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881678; c=relaxed/simple; bh=n+gg9cG6VstFpoCB6IkctOIGC7MzoledoWWzPFL/JkU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=tVT0rmCX0YAbLY6whJYpaV2ZSpHGKx6FyfyIFccxmDrz2OCIUxGx+/sTeLUi+8hmJW9YhN/bWw78QQ9X94CQBKNBeesxKlEUID3g852/gMFV6JRzhHkJMOnlNM0WPeJMa6hbFt/V5cJ9Ej5cQ6LW/ClON3R75ajLe3MjdsLLr8I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=JBtgx7sq; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="JBtgx7sq" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881676; x=1796417676; 
h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=n+gg9cG6VstFpoCB6IkctOIGC7MzoledoWWzPFL/JkU=; b=JBtgx7sqT6a1rQrSwkmwDyIXZvXjRgK+aicXXwEcn5Kax/a0LXPVGFqH BtYF1/z61Pvq1XSmGxVPk0VBVgZYeq2c8w3hzFn5l0Ea6lQ64aUIlRJXO nikUsulVQGUPeeewjlSIfgLZhUTmu8UF+Xevd6IRcOeFW7qXcJE1XN3rO E4hgzRMlHM7Gh7tj8cXsLNCSbCDWUJCCdJQ2rB6Nm00SDDhoxxpoVOovc 8S23my8d56GjzmfuuIW2cKwgDZYRfuY926nmc512ZKyFA9SRYnrRjgHNp 6+N/o5BqvzyMR2DSL9gnClrBkHHfmVmLLc2VnPY3hFl65uF9KIDxvE2Rt A==; X-CSE-ConnectionGUID: wTbNxq/ZSzuAYw2WjmhOkQ== X-CSE-MsgGUID: OOssZVXFRimU+54BBRBMIA== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511028" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511028" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:27 -0800 X-CSE-ConnectionGUID: UTyNweamRaudwtEUEM0IKw== X-CSE-MsgGUID: 6kWSzwW+QaeBwHXdq0CeOw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752833" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:26 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 21/32] fs/resctrl: Refactor mkdir_mondata_subdir() Date: Thu, 4 Dec 2025 12:53:51 -0800 Message-ID: <20251204205404.12763-22-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Population of a monitor group's mon_data directory is unreasonably complica= ted because of the support for Sub-NUMA Cluster (SNC) mode. Split out the SNC code into a helper function to make it easier to add supp= ort for a new telemetry resource. Move all the duplicated code to make and set owner of domain directories into the mon_add_all_files() helper and rename to _mkdir_mondata_subdir(). Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/rdtgroup.c | 108 +++++++++++++++++++++++------------------- 1 file changed, 58 insertions(+), 50 deletions(-) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index dce1f0a6d40b..c0db5d8999ee 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3259,57 +3259,65 @@ static void rmdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, } } =20 -static int mon_add_all_files(struct kernfs_node *kn, struct rdt_domain_hdr= *hdr, - struct rdt_resource *r, struct rdtgroup *prgrp, - bool do_sum) +/* + * Create a directory for a domain and populate it with monitor files. Cre= ate + * summing monitors when @hdr is NULL. No need to initialize summing monit= ors. 
+ */ +static struct kernfs_node *_mkdir_mondata_subdir(struct kernfs_node *paren= t_kn, char *name, + struct rdt_domain_hdr *hdr, + struct rdt_resource *r, + struct rdtgroup *prgrp, int domid) { - struct rdt_l3_mon_domain *d; struct rmid_read rr =3D {0}; + struct kernfs_node *kn; struct mon_data *priv; struct mon_evt *mevt; - int ret, domid; + int ret; =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return -EINVAL; + kn =3D kernfs_create_dir(parent_kn, name, parent_kn->mode, prgrp); + if (IS_ERR(kn)) + return kn; + + ret =3D rdtgroup_kn_set_ugid(kn); + if (ret) + goto out_destroy; =20 - d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); for_each_mon_event(mevt) { if (mevt->rid !=3D r->rid || !mevt->enabled) continue; - domid =3D do_sum ? d->ci_id : d->hdr.id; - priv =3D mon_get_kn_priv(r->rid, domid, mevt, do_sum); - if (WARN_ON_ONCE(!priv)) - return -EINVAL; + priv =3D mon_get_kn_priv(r->rid, domid, mevt, !hdr); + if (WARN_ON_ONCE(!priv)) { + ret =3D -EINVAL; + goto out_destroy; + } =20 ret =3D mon_addfile(kn, mevt->name, priv); if (ret) - return ret; + goto out_destroy; =20 - if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) + if (hdr && resctrl_is_mbm_event(mevt->evtid)) mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt, true); } =20 - return 0; + return kn; +out_destroy: + kernfs_remove(kn); + return ERR_PTR(ret); } =20 -static int mkdir_mondata_subdir(struct kernfs_node *parent_kn, - struct rdt_domain_hdr *hdr, - struct rdt_resource *r, struct rdtgroup *prgrp) +static int mkdir_mondata_subdir_snc(struct kernfs_node *parent_kn, + struct rdt_domain_hdr *hdr, + struct rdt_resource *r, struct rdtgroup *prgrp) { - struct kernfs_node *kn, *ckn; + struct kernfs_node *ckn, *kn; struct rdt_l3_mon_domain *d; char name[32]; - bool snc_mode; - int ret =3D 0; - - lockdep_assert_held(&rdtgroup_mutex); =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? 
d->ci_id : d->hdr.id); + sprintf(name, "mon_%s_%02d", r->name, d->ci_id); kn =3D kernfs_find_and_get(parent_kn, name); if (kn) { /* @@ -3318,41 +3326,41 @@ static int mkdir_mondata_subdir(struct kernfs_node = *parent_kn, */ kernfs_put(kn); } else { - kn =3D kernfs_create_dir(parent_kn, name, parent_kn->mode, prgrp); + kn =3D _mkdir_mondata_subdir(parent_kn, name, NULL, r, prgrp, d->ci_id); if (IS_ERR(kn)) return PTR_ERR(kn); + } =20 - ret =3D rdtgroup_kn_set_ugid(kn); - if (ret) - goto out_destroy; - ret =3D mon_add_all_files(kn, hdr, r, prgrp, snc_mode); - if (ret) - goto out_destroy; + sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id); + ckn =3D _mkdir_mondata_subdir(kn, name, hdr, r, prgrp, hdr->id); + if (IS_ERR(ckn)) { + kernfs_remove(kn); + return PTR_ERR(ckn); } =20 - if (snc_mode) { - sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id); - ckn =3D kernfs_create_dir(kn, name, parent_kn->mode, prgrp); - if (IS_ERR(ckn)) { - ret =3D -EINVAL; - goto out_destroy; - } + kernfs_activate(kn); + return 0; +} =20 - ret =3D rdtgroup_kn_set_ugid(ckn); - if (ret) - goto out_destroy; +static int mkdir_mondata_subdir(struct kernfs_node *parent_kn, + struct rdt_domain_hdr *hdr, + struct rdt_resource *r, struct rdtgroup *prgrp) +{ + struct kernfs_node *kn; + char name[32]; =20 - ret =3D mon_add_all_files(ckn, hdr, r, prgrp, false); - if (ret) - goto out_destroy; - } + lockdep_assert_held(&rdtgroup_mutex); + + if (r->rid =3D=3D RDT_RESOURCE_L3 && r->mon_scope =3D=3D RESCTRL_L3_NODE) + return mkdir_mondata_subdir_snc(parent_kn, hdr, r, prgrp); + + sprintf(name, "mon_%s_%02d", r->name, hdr->id); + kn =3D _mkdir_mondata_subdir(parent_kn, name, hdr, r, prgrp, hdr->id); + if (IS_ERR(kn)) + return PTR_ERR(kn); =20 kernfs_activate(kn); return 0; - -out_destroy: - kernfs_remove(kn); - return ret; } =20 /* --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B315232938C for ; Thu, 4 Dec 2025 20:54:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881679; cv=none; b=U5MVDIbYcjSLDIFxmY0wpd0PZkAAC7kM6kTYpSNNI9v1N/UZUeI0T85MPTkHq/LwzwzRCoG7by7fdIO35IrJZxBN6C6m5mbu6fsM01ucnbhCMYggzlszCedqBKwvVnW0530mJTIbERkyUc/2wYipBhRKmgdNTvrpAfu9VGutH9s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881679; c=relaxed/simple; bh=miRJzo1AWfZUKy+ImvgvbTXaJITzgINN7ynFnTS9zo4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=FZlZ3kR6S3lSqtWdO4PgQQPEDksr4dDXuH1DZcTbX1/IWQf2noIo+aaYJz5vgb2oBFGFdHYfuFOX1DItdTohUflIsX6R9ufNCtVs/mAlBZbWBytuAOqAIQoxzX7x/qpgc3WRN5xtOhMHg3irtNWk5Dok6z9iMQ9k/HscdCYmYlg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=WZ9LBOjl; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="WZ9LBOjl" DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881677; x=1796417677; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=miRJzo1AWfZUKy+ImvgvbTXaJITzgINN7ynFnTS9zo4=; b=WZ9LBOjlmnCrYN+P4rJYmuuKUzAxMTp+gs9MA8Nu9NhR3hiWbS8Hv48n M5tMHGTQQat6T48QwlM3MpM/TG9iQ75GvotBkvetSmG5gmo8ZFThkuba8 FlNYR10yZP1NhAK644RkVALiQflNmFFx8Pgjddu9wLYwtv9Ib+wf9+yMm zGBdqNUPj7l7bvtCOND4SU+o5TJ/Hxz7rwmWNiydj4E9pPEgJB59/RhyR Fqkw4teKkKyiWtGIv5w3nxtLtm8nkwk4JoKyHzMfTk+dBk83sH91Ne/qa K5AbO5xP3sK22gFNfxDOtozTXnH7ekfj8TUDcdq9O5O6q00tVL008OHyS A==; X-CSE-ConnectionGUID: N5IKI5ZFQ5GUNrpMeVYyJA== X-CSE-MsgGUID: 1mIcYC+cRxmA93bbryJcwg== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511036" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511036" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:28 -0800 X-CSE-ConnectionGUID: vqmTcI+hTran4Ekk1jDC4g== X-CSE-MsgGUID: rSTNl2dmRVKzDFfmhUeO/A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752839" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:27 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 22/32] fs/resctrl: Refactor rmdir_mondata_subdir_allrdtgrp() Date: Thu, 4 Dec 2025 12:53:52 -0800 Message-ID: <20251204205404.12763-23-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Clearing a monitor group's mon_data directory is complicated because of the support for Sub-NUMA Cluster (SNC) mode. Refactor the SNC case into a helper function to make it easier to add suppo= rt for a new telemetry resource. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/rdtgroup.c | 42 +++++++++++++++++++++++++++++++----------- 1 file changed, 31 insertions(+), 11 deletions(-) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index c0db5d8999ee..679247c43b1a 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3228,28 +3228,24 @@ static void mon_rmdir_one_subdir(struct kernfs_node= *pkn, char *name, char *subn } =20 /* - * Remove all subdirectories of mon_data of ctrl_mon groups - * and monitor groups for the given domain. - * Remove files and directories containing "sum" of domain data - * when last domain being summed is removed. + * Remove files and directories for one SNC node. If it is the last node + * sharing an L3 cache, then remove the upper level directory containing + * the "sum" files too. 
*/ -static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, - struct rdt_domain_hdr *hdr) +static void rmdir_mondata_subdir_allrdtgrp_snc(struct rdt_resource *r, + struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; struct rdt_l3_mon_domain *d; char subname[32]; - bool snc_mode; char name[32]; =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : hdr->id); - if (snc_mode) - sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id); + sprintf(name, "mon_%s_%02d", r->name, d->ci_id); + sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mon_rmdir_one_subdir(prgrp->mon.mon_data_kn, name, subname); @@ -3259,6 +3255,30 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rd= t_resource *r, } } =20 +/* + * Remove all subdirectories of mon_data of ctrl_mon groups + * and monitor groups for the given domain. + */ +static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, + struct rdt_domain_hdr *hdr) +{ + struct rdtgroup *prgrp, *crgrp; + char name[32]; + + if (r->rid =3D=3D RDT_RESOURCE_L3 && r->mon_scope =3D=3D RESCTRL_L3_NODE)= { + rmdir_mondata_subdir_allrdtgrp_snc(r, hdr); + return; + } + + sprintf(name, "mon_%s_%02d", r->name, hdr->id); + list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { + kernfs_remove_by_name(prgrp->mon.mon_data_kn, name); + + list_for_each_entry(crgrp, &prgrp->mon.crdtgrp_list, mon.crdtgrp_list) + kernfs_remove_by_name(crgrp->mon.mon_data_kn, name); + } +} + /* * Create a directory for a domain and populate it with monitor files. Cre= ate * summing monitors when @hdr is NULL. No need to initialize summing monit= ors. 
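To make the directory naming handled by the new SNC helper concrete, here is a minimal userspace sketch (not part of the patch; the resource name "L3" and the cache/node ids are illustrative values). It prints the per-cache parent directory that holds the "sum" files and the per-SNC-node "mon_sub" children below it, relative to a resource group's mon_data directory:

#include <stdio.h>

int main(void)
{
        const char *res = "L3";
        int ci_id = 0;                  /* L3 cache instance (d->ci_id), illustrative */
        int snc_node[] = { 0, 1 };      /* SNC node ids (hdr->id), illustrative */

        /* Parent directory, one per L3 cache, holds the "sum" files */
        printf("mon_%s_%02d/\n", res, ci_id);

        /* One child directory per SNC node sharing that cache */
        for (int i = 0; i < 2; i++)
                printf("mon_%s_%02d/mon_sub_%s_%02d/\n",
                       res, ci_id, res, snc_node[i]);

        return 0;
}

rmdir_mondata_subdir_allrdtgrp_snc() removes one "mon_sub" child when an SNC node goes offline, and removes the parent directory containing the "sum" files only when the last node sharing that L3 cache is removed.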
--=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0B7DC329C59 for ; Thu, 4 Dec 2025 20:54:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881680; cv=none; b=g6TZNa84p2ouQ7VYSbXnRj8k/oZTv1FuiiSomdCSvvFp7UEUt0rp5Hd0Y8/CAYnK0SPVgiQm4gTn/QnMvCuxSKSIiEtVhyM3xurbeaaa3xWWPilPLymNQbTEg/FBdFE0kUiGqZoCBEPxZn0j65tDOr207pFot5DL9QEJLah3sF4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881680; c=relaxed/simple; bh=AFFNg9aOyG6Er+cjCYljr0X19y2MaV9WG5onQ9ip9uA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=o5NSNFQy4GQK9mv75JaM+92+ZpI064ntWxM3wCAF5VAs6S0XP7iQiJd2WhXnAAHrLFmu1MaxTO7kMP1AN8vUsbDumtRYJSsmyqTjCbJbhsuStXY9rprugcu0SlkRfthbIgOZiJss0LlhadLx5+L59GLEt3VKPB9WH7UP6FuzRs4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=MtFa9eN+; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="MtFa9eN+" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881678; x=1796417678; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AFFNg9aOyG6Er+cjCYljr0X19y2MaV9WG5onQ9ip9uA=; b=MtFa9eN+1abfIORcoFUZKPwppr3KBncbIlVdqRjZI9t1qOvFUbHK/SE2 0021QlH+hpm1wmyEQAEeTVtirNT32s0FvSpeLirjVQ5t0xl+iZ/b23SLZ kDDYW2qk46iGXyn7K5nUg4G2nhQ+NJgLoSaD6p6+JwNQHXzPulrmB5JcA xdY8Ss5ksSnsM6/5zo15+NYfNZ44wJBFdd5fbCZ9AIf7gQ5lVlCM/8Orm FHYd09Jj2oqJzNPOXARKlnnrQPu2qe2e2kBNVrolbGK7G3YhLDSUSlSI/ n8Zs0hdnTRqVLytENkl6vgNx0gXb9LJCf3egwWwWt624dUNgk2OIgNM+u g==; X-CSE-ConnectionGUID: 1oYNkvzNQ6CBZZieLje54g== X-CSE-MsgGUID: yl2ipEjUQUuo+xGZJhb2HA== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511044" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511044" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:28 -0800 X-CSE-ConnectionGUID: PyLkiFihTJaJfuXxV3G8hg== X-CSE-MsgGUID: 7Nfv18a+RnyYMAuP6P3usA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752847" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:28 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 23/32] x86,fs/resctrl: Handle domain creation/deletion for RDT_RESOURCE_PERF_PKG Date: Thu, 4 Dec 2025 12:53:53 -0800 Message-ID: <20251204205404.12763-24-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: 
<20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The L3 resource has several requirements for domains. There are per-domain structures that hold the 64-bit values of counters, and elements to keep track of the overflow and limbo threads. None of these are needed for the PERF_PKG resource. The hardware counters are wide enough that they do not wrap around for decades. Define a new rdt_perf_pkg_mon_domain structure which just consists of the standard rdt_domain_hdr to keep track of domain id and CPU mask. Update resctrl_online_mon_domain() for RDT_RESOURCE_PERF_PKG. The only acti= on needed for this resource is to create and populate domain directories if a domain is added while resctrl is mounted. Similarly resctrl_offline_mon_domain() only needs to remove domain director= ies. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 13 +++++++++++ arch/x86/kernel/cpu/resctrl/core.c | 17 +++++++++++++++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 29 +++++++++++++++++++++++++ fs/resctrl/rdtgroup.c | 17 ++++++++++----- 4 files changed, 71 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 10743f5d5fd4..3b228b241fb2 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -87,6 +87,14 @@ static inline struct rdt_hw_l3_mon_domain *resctrl_to_ar= ch_mon_dom(struct rdt_l3 return container_of(r, struct rdt_hw_l3_mon_domain, d_resctrl); } =20 +/** + * struct rdt_perf_pkg_mon_domain - CPUs sharing an package scoped resctrl= monitor resource + * @hdr: common header for different domain types + */ +struct rdt_perf_pkg_mon_domain { + struct rdt_domain_hdr hdr; +}; + /** * struct msr_param - set a range of MSRs from a domain * @res: The resource to use @@ -226,6 +234,8 @@ void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_re= source *r); bool intel_aet_get_events(void); void __exit intel_aet_exit(void); int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val); +void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, + struct list_head *add_pos); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } @@ -233,6 +243,9 @@ static inline int intel_aet_read_event(int domid, u32 r= mid, void *arch_priv, u64 { return -EINVAL; } + +static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_= resource *r, + struct list_head *add_pos) { } #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 3c6946a5ff1b..283d653002a2 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -580,6 +580,10 @@ static void domain_add_cpu_mon(int cpu, struct rdt_res= ource *r) if (!hdr) l3_mon_domain_setup(cpu, id, r, add_pos); break; + case RDT_RESOURCE_PERF_PKG: + if (!hdr) + intel_aet_mon_domain_setup(cpu, id, r, add_pos); + break; default: pr_warn_once("Unknown resource rid=3D%d\n", r->rid); break; @@ -679,6 +683,19 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_= resource *r) l3_mon_domain_free(hw_dom); break; } + case RDT_RESOURCE_PERF_PKG: { + struct 
rdt_perf_pkg_mon_domain *pkgd; + + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_PERF_P= KG)) + return; + + pkgd =3D container_of(hdr, struct rdt_perf_pkg_mon_domain, hdr); + resctrl_offline_mon_domain(r, hdr); + list_del_rcu(&hdr->list); + synchronize_rcu(); + kfree(pkgd); + break; + } default: pr_warn_once("Unknown resource rid=3D%d\n", r->rid); break; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 85ca24e42ec1..8fcd72fca81f 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -14,15 +14,20 @@ #include #include #include +#include #include #include +#include #include #include #include #include #include +#include +#include #include #include +#include #include #include #include @@ -288,3 +293,27 @@ int intel_aet_read_event(int domid, u32 rmid, void *ar= ch_priv, u64 *val) =20 return valid ? 0 : -EINVAL; } + +void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, + struct list_head *add_pos) +{ + struct rdt_perf_pkg_mon_domain *d; + int err; + + d =3D kzalloc_node(sizeof(*d), GFP_KERNEL, cpu_to_node(cpu)); + if (!d) + return; + + d->hdr.id =3D id; + d->hdr.type =3D RESCTRL_MON_DOMAIN; + d->hdr.rid =3D RDT_RESOURCE_PERF_PKG; + cpumask_set_cpu(cpu, &d->hdr.cpu_mask); + list_add_tail_rcu(&d->hdr.list, add_pos); + + err =3D resctrl_online_mon_domain(r, &d->hdr); + if (err) { + list_del_rcu(&d->hdr.list); + synchronize_rcu(); + kfree(d); + } +} diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 679247c43b1a..ac3c6e44b7c5 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4307,11 +4307,6 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h =20 mutex_lock(&rdtgroup_mutex); =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - goto out_unlock; - - d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - /* * If resctrl is mounted, remove all the * per domain monitor data directories. @@ -4319,6 +4314,13 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h if (resctrl_mounted && resctrl_arch_mon_capable()) rmdir_mondata_subdir_allrdtgrp(r, hdr); =20 + if (r->rid !=3D RDT_RESOURCE_L3) + goto out_unlock; + + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + goto out_unlock; + + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); if (resctrl_is_mbm_enabled()) cancel_delayed_work(&d->mbm_over); if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID) && has_busy_rmid(= d)) { @@ -4415,6 +4417,9 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr =20 mutex_lock(&rdtgroup_mutex); =20 + if (r->rid !=3D RDT_RESOURCE_L3) + goto mkdir; + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 @@ -4432,6 +4437,8 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID)) INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo); =20 +mkdir: + err =3D 0; /* * If the filesystem is not mounted then only the default resource group * exists. 
Creation of its directories is deferred until mount time --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D8012325735 for ; Thu, 4 Dec 2025 20:54:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881681; cv=none; b=BPQOQIlX4FoZjMC4U8n+IDmpMTKS2HD5rIK0tbZOugF2ZnuPxY0PfWUeeSWj6UpVC6OEYO23WAO4jsG183D+dITVLN9gvrGkylwKkaiA3WwTxn1IPXCXijBile8iP/W/8UJpZLs+YXxL3hC6ZPTZvcg7+Afh2eJW/JmDGotp+Qw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881681; c=relaxed/simple; bh=b0DstXJmh/PBquGnFrytjf8CQVVS81XVAER4YDpRZ6M=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kEf/CI3cOD2J1ak/Ncj2iuQMoSJmkVEazaPYUp9K/n0272XWqDRnK9mY8rOoGWFptYHSm81JOoIrL5EsY+wpfPkp5fTyg4jl2lVrs6M0KtubT+64iELC4WlWOh29yqpNqpNA4waEWRUNVKFvIdUhGVnDKIqnEGhMRZ2G1V/Pdgs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=MWKiPtIC; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="MWKiPtIC" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881679; x=1796417679; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=b0DstXJmh/PBquGnFrytjf8CQVVS81XVAER4YDpRZ6M=; b=MWKiPtIC1Y3MyJY58aW+y4c8gMgC33VUweiHECoZup2BCKt/zL8SkJDE BQA0MAHgjqc8/StogwkB4FNIDqLhxEyIp3y6rkzC+Rb5TI8Izwzt6MER4 KtHLYLTN4L6EnbVDdu2g2pn1UXJ/j0+lxS1kEwetZYHNa8ZaPKkZ6ZNRS A7mkRU8hb1fNRdmll7ijBo2aUHyYsK4d1hUqfcR5uIMxqSCkUp19aZAsZ PaMbX79n41T1O04J7A7+zgfNxaEBoxdqMEiJjdz8EeMq44hU7Ux6wzavp /b86aqej/CsNvXfGc5E7JxAMyTSjOv87ye0NOaqV182zdXpp2lwJvWW25 w==; X-CSE-ConnectionGUID: SvR+2oeqSZ2lPU9er9mQbQ== X-CSE-MsgGUID: DDFtmLXFTfqzVCRrXlmRFg== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511052" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511052" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:29 -0800 X-CSE-ConnectionGUID: hmf6gR/RRPqtBzhPKCGA4w== X-CSE-MsgGUID: QBKjAOl2QtuKwOFscCiklg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752852" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:28 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 24/32] x86/resctrl: Add energy/perf choices to rdt boot option Date: Thu, 4 Dec 2025 12:53:54 -0800 Message-ID: <20251204205404.12763-25-tony.luck@intel.com> X-Mailer: 
git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Legacy resctrl features are enumerated by X86_FEATURE_* flags. These may be overridden by quirks to disable features in the case of errata. Users can use kernel command line options to either disable a feature, or to force enable a feature that was disabled by a quirk. A different approach is needed for hardware features that do not have an X86_FEATURE_* flag. Update the parse loop of the "rdt=3D" boot option with a call to intel_aet_= option() to handles "perf" and "energy" options. Prefixing an option with "!" force disables a feature. A ":guid" suffix allows for fine grain control per-guid. Signed-off-by: Tony Luck --- .../admin-guide/kernel-parameters.txt | 7 +++- arch/x86/kernel/cpu/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 2 ++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 34 +++++++++++++++++++ 4 files changed, 44 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentatio= n/admin-guide/kernel-parameters.txt index 2b465eab41a1..cc9d2800abeb 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -6217,9 +6217,14 @@ rdt=3D [HW,X86,RDT] Turn on/off individual RDT features. List is: cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp, - mba, smba, bmec, abmc, sdciae. + mba, smba, bmec, abmc, sdciae, energy[:guid], + perf[:guid]. E.g. to turn on cmt and turn off mba use: rdt=3Dcmt,!mba + To turn off all energy telemetry monitoring and ensure that + perf telemetry monitoring associated with guid 0x12345 + is enabled use: + rdt=3D!energy,perf:0x12345 =20 reboot=3D [KNL] Format (x86 or x86_64): diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 3b228b241fb2..df09091f7c6c 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -236,6 +236,7 @@ void __exit intel_aet_exit(void); int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val); void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct list_head *add_pos); +bool intel_aet_option(bool force_off, char *tok); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } @@ -246,6 +247,7 @@ static inline int intel_aet_read_event(int domid, u32 r= mid, void *arch_priv, u64 =20 static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_= resource *r, struct list_head *add_pos) { } +static inline bool intel_aet_option(bool force_off, char *tok) { return fa= lse; } #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 283d653002a2..960974ffa866 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -820,6 +820,8 @@ static int __init set_rdt_options(char *str) force_off =3D *tok =3D=3D '!'; if (force_off) tok++; + if (intel_aet_option(force_off, tok)) + continue; for (o =3D rdt_options; o < &rdt_options[NUM_RDT_OPTIONS]; o++) { if (strcmp(tok, o->name) =3D=3D 0) { if (force_off) diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= 
resctrl/intel_aet.c index 8fcd72fca81f..fec4bb781f82 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -59,6 +59,10 @@ struct pmt_event { * data for all telemetry regions of type @pfname. * Valid if the system supports the event group, * NULL otherwise. + * @force_off: True when "rdt" command line disables this @guid + * or architecture code disables this @guid. + * @force_on: True when "rdt" command line overrides disable of + * this @guid. * @guid: Unique number per XML description file. * @mmio_size: Number of bytes of MMIO registers for this group. * @num_events: Number of events in this group. @@ -68,6 +72,7 @@ struct event_group { /* Data fields for additional structures to manage this group. */ const char *pfname; struct pmt_feature_group *pfg; + bool force_off, force_on; =20 /* Remaining fields initialized from XML file. */ u32 guid; @@ -122,6 +127,32 @@ static struct event_group *known_event_groups[] =3D { _peg < &known_event_groups[ARRAY_SIZE(known_event_groups)]; \ _peg++) =20 +bool intel_aet_option(bool force_off, char *tok) +{ + struct event_group **peg; + bool ret =3D false; + u32 guid =3D 0; + char *name; + + name =3D strsep(&tok, ":"); + if (tok && kstrtou32(tok, 16, &guid)) + return false; + + for_each_event_group(peg) { + if (strcmp(name, (*peg)->pfname)) + continue; + if (guid && (*peg)->guid !=3D guid) + continue; + if (force_off) + (*peg)->force_off =3D true; + else + (*peg)->force_on =3D true; + ret =3D true; + } + + return ret; +} + /* * Clear the address field of regions that did not pass the checks in * skip_telem_region() so they will not be used by intel_aet_read_event(). @@ -173,6 +204,9 @@ static bool enable_events(struct event_group *e, struct= pmt_feature_group *p) struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; int skipped_events =3D 0; =20 + if (e->force_off) + return false; + if (!group_has_usable_regions(e, p)) return false; =20 --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1AD7E329E5B for ; Thu, 4 Dec 2025 20:54:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881682; cv=none; b=o0ZJGamAZgsEWhDWKhDb2HeqRSA7dUw/nVB97riwtFW1SbX/WV7AKC68sMYZ9TB32ZY1pBu1BPS82oUGsMz3r8G4E0yNuoc3sWae36xuc4q2Yntjy++B/+oqR7YUvKHLeNeEcHH9A5fIW4zQeX0crKKXEMM0N/JYqx71iiU/LAk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881682; c=relaxed/simple; bh=2CkZlubfNiCFZHcuKYyDw2SuTm0VcqomMrWdzKN4NNw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Cz0HbTmokAve80IrNuesJQV02B+sr1BZOehL73rZlRzFFqtnudFr2uCEOl/gH/CMORofKhqOBhx6euzvhh4+AqDOMPJBjgCYYAiFZNPzw/mGAv3gjum0VnIa8rXTxFd02vqtvgF25iPciurJ+72qSDcktk3k+Frfiait0VV7ngQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=iUcu6Was; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com 
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="iUcu6Was" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881680; x=1796417680; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2CkZlubfNiCFZHcuKYyDw2SuTm0VcqomMrWdzKN4NNw=; b=iUcu6WasgLIOXVZOH6N6QhwxXe/nSsmPBzazRm9VIuY8otf9lm6hW6RV ZvJ41BgtXCGPULLlKKUZ9fxBsg21QJn/U/MEjRsgGUdjwjWULKduwHp+q 6P4N1c8S0zmvXr5+DtICF2Th0hgJGkHbHbFi3qfFZvoMrzQ53TFaX9A94 YWWw+1wpIPzag+XJ3n1u9s/WAUgMWf1wDqcmd8ZSMMV2Tv6jkKxUCYGZE vpdSA5M1R1sCmdelkRRTRItH9WoDqzW8dJhRf+J4QfoefTLWSyxj/LE6F O20tJmXYx5kmn9QTfptHZgQVk9mu1UXg1bEVwbSqzpckWRIae2kTzFclk w==; X-CSE-ConnectionGUID: URFdWlnBSTujtkbtTqfAHQ== X-CSE-MsgGUID: 8hrZeL48RMCE0gfUWmVmFA== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511060" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511060" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:30 -0800 X-CSE-ConnectionGUID: d5b7cxoXT12nT2fAoPKGFg== X-CSE-MsgGUID: +0CPf8udTq+rp3xzfi8frA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752857" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:29 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 25/32] x86/resctrl: Handle number of RMIDs supported by RDT_RESOURCE_PERF_PKG Date: Thu, 4 Dec 2025 12:53:55 -0800 Message-ID: <20251204205404.12763-26-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" There are now three meanings for "number of RMIDs": 1) The number for legacy features enumerated by CPUID leaf 0xF. This is the maximum number of distinct values that can be loaded into MSR_IA32_PQR_ASSO= C. Note that systems with Sub-NUMA Cluster mode enabled will force scaling down the CPUID enumerated value by the number of SNC nodes per L3-cache. 2) The number of registers in MMIO space for each event. This is enumerated in the XML files and is the value initialized into event_group::num_rmid. 3) The number of "hardware counters" (this isn't a strictly accurate description of how things work, but serves as a useful analogy that does describe the limitations) feeding to those MMIO registers. This is enumerat= ed in telemetry_region::num_rmids returned by intel_pmt_get_regions_by_feature= () Event groups with insufficient "hardware counters" to track all RMIDs are difficult for users to use, since the system may reassign "hardware counter= s" at any time. This means that users cannot reliably collect two consecutive event counts to compute the rate at which events are occurring. By default such event groups are disabled. The user may override this with a command l= ine "rdt=3D" option. 
In this case limit an under-resourced event group's number= of possible monitor resource groups to the lowest number of "hardware counters= ". Scan all enabled event groups and assign the RDT_RESOURCE_PERF_PKG resource "num_rmid" value to the smallest of these values as this value will be used later to compare against the number of RMIDs supported by other resources to determine how many monitoring resource groups are supported. N.B. Change type of resctrl_mon::num_rmid to u32 to match its usage and the type of event_group::num_rmid so that min(r->num_rmid, e->num_rmid) won't complain about mixing signed and unsigned types. Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- arch/x86/kernel/cpu/resctrl/intel_aet.c | 57 ++++++++++++++++++++++++- fs/resctrl/rdtgroup.c | 2 +- 3 files changed, 57 insertions(+), 4 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 14126d228e61..8623e450619a 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -295,7 +295,7 @@ enum resctrl_schema_fmt { * events of monitor groups created via mkdir. */ struct resctrl_mon { - int num_rmid; + u32 num_rmid; unsigned int mbm_cfg_mask; int num_mbm_cntrs; bool mbm_cntr_assignable; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index fec4bb781f82..38fcddc72ed8 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -60,10 +61,15 @@ struct pmt_event { * Valid if the system supports the event group, * NULL otherwise. * @force_off: True when "rdt" command line disables this @guid - * or architecture code disables this @guid. + * or architecture code disables this @guid due to + * insufficient RMIDs. * @force_on: True when "rdt" command line overrides disable of * this @guid. * @guid: Unique number per XML description file. + * @num_rmid: Number of RMIDs supported by this group. May be + * adjusted downwards if enumeration from + * intel_pmt_get_regions_by_feature() indicates fewer + * RMIDs can be tracked simultaneously. * @mmio_size: Number of bytes of MMIO registers for this group. * @num_events: Number of events in this group. * @evts: Array of event descriptors. @@ -76,6 +82,7 @@ struct event_group { =20 /* Remaining fields initialized from XML file. 
*/ u32 guid; + u32 num_rmid; size_t mmio_size; unsigned int num_events; struct pmt_event evts[] __counted_by(num_events); @@ -90,6 +97,7 @@ struct event_group { static struct event_group energy_0x26696143 =3D { .pfname =3D "energy", .guid =3D 0x26696143, + .num_rmid =3D 576, .mmio_size =3D XML_MMIO_SIZE(576, 2, 3), .num_events =3D 2, .evts =3D { @@ -104,6 +112,7 @@ static struct event_group energy_0x26696143 =3D { static struct event_group perf_0x26557651 =3D { .pfname =3D "perf", .guid =3D 0x26557651, + .num_rmid =3D 576, .mmio_size =3D XML_MMIO_SIZE(576, 7, 3), .num_events =3D 7, .evts =3D { @@ -199,6 +208,24 @@ static bool group_has_usable_regions(struct event_grou= p *e, struct pmt_feature_g return usable_regions; } =20 +static bool all_regions_have_sufficient_rmid(struct event_group *e, struct= pmt_feature_group *p) +{ + struct telemetry_region *tr; + bool ret =3D true; + + for (int i =3D 0; i < p->count; i++) { + if (!p->regions[i].addr) + continue; + tr =3D &p->regions[i]; + if (tr->num_rmids < e->num_rmid) { + e->force_off =3D true; + ret =3D false; + } + } + + return ret; +} + static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) { struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; @@ -210,6 +237,27 @@ static bool enable_events(struct event_group *e, struc= t pmt_feature_group *p) if (!group_has_usable_regions(e, p)) return false; =20 + /* + * Only enable feature with insufficient RMIDs if the user requested + * it from the kernel command line. + */ + if (!all_regions_have_sufficient_rmid(e, p) && !e->force_on) { + pr_info("%s %s:0x%x monitoring not enabled due to insufficient RMIDs\n", + r->name, e->pfname, e->guid); + return false; + } + + for (int i =3D 0; i < p->count; i++) { + if (!p->regions[i].addr) + continue; + /* + * e->num_rmid only adjusted lower if user (via rdt=3D kernel + * parameter) forces an event group with insufficient RMID + * to be enabled. 
+ */ + e->num_rmid =3D min(e->num_rmid, p->regions[i].num_rmids); + } + for (int j =3D 0; j < e->num_events; j++) { if (!resctrl_enable_mon_event(e->evts[j].id, true, e->evts[j].bin_bits, &e->evts[j])) @@ -220,7 +268,12 @@ static bool enable_events(struct event_group *e, struc= t pmt_feature_group *p) return false; } =20 - return skipped_events < e->num_events; + if (r->mon.num_rmid) + r->mon.num_rmid =3D min(r->mon.num_rmid, e->num_rmid); + else + r->mon.num_rmid =3D e->num_rmid; + + return true; } =20 static enum pmt_feature_id lookup_pfid(const char *pfname) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index ac3c6e44b7c5..60ce2390723e 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -1157,7 +1157,7 @@ static int rdt_num_rmids_show(struct kernfs_open_file= *of, { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); =20 - seq_printf(seq, "%d\n", r->mon.num_rmid); + seq_printf(seq, "%u\n", r->mon.num_rmid); =20 return 0; } --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BA0E8329E69 for ; Thu, 4 Dec 2025 20:54:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881683; cv=none; b=mK1KsmmrVESNQv97+ZLxCcF6/6riJ6KrpnlzsttNxGMGVVN5ARjyjYLlp3WVolQB+TufVthi4o1LYsCUoRPtOdSrqx5lMWWpQul2x3LF8WAdJKh2eSQYe8BZGpM5XN0HSF7xjxIH+h9IqahXxBSgHFXhTxwBOZuLOu94J2o+l30= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881683; c=relaxed/simple; bh=eaBiDk8pJwVVuOapZWv87PM630xB23PA9jEkyuXZDw4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=G5Ekl13AD2KJCVvnCllGYveRMoYRdS9zf2i0VWoOQDYOYQHKhJqzhNbZNdkWCWxvLZ57YabTep3wUujRV/KlmGtMrSn+u1IoeHxaQUZCV0yf1ShRAVQpZ0NFvOX56Vp6lp/uHNKj+a/pCbYPjgBiP1M+yTtMgxCxpsLq+VrjsEc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=KgWbccKj; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KgWbccKj" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881681; x=1796417681; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=eaBiDk8pJwVVuOapZWv87PM630xB23PA9jEkyuXZDw4=; b=KgWbccKjbatl3EHAQK3Sjsf1mVSxH0HCJT2wNryFkMwlbQWQgcc5sgEX By4zStyuTlvoeQnDhjJM5ZUekGXmncF0A3ISp6Y8IRfj4zFnh9uzVSFJ3 y8FLg0VaYSXUJE5lGWotWbAbTX+kr6ITXRMhRy7EzTjduCRREIwcainmm /AjAjG2dr9buVqjB5VLl7FPjyREeGqRd9NKU6wQEdRX0wle0Epcd6p7vj BMgusCoHB9QHi4Lgfyxr13628OWZmFK0yedGdsUm3ekKpQ42gmBpRddU/ RZ8F0zeH8zO9V3nFYaYZEEpwb6FvE+QbbwX3EetLvAyAMv4GcsHBFVVm5 g==; X-CSE-ConnectionGUID: emUyVViPT4+cP5XNyhsx6g== X-CSE-MsgGUID: lVry/4jMTcmF8uI/HsMEtA== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511068" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511068" Received: from orviesa002.jf.intel.com 
([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:31 -0800 X-CSE-ConnectionGUID: 07klnEbpTWqPRqzj+vmyLQ== X-CSE-MsgGUID: MG3Aj8UQSQiTSNE/Is/UjQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752865" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:30 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 26/32] fs/resctrl: Move allocation/free of closid_num_dirty_rmid[] Date: Thu, 4 Dec 2025 12:53:56 -0800 Message-ID: <20251204205404.12763-27-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" closid_num_dirty_rmid[] and rmid_ptrs[] are allocated together during resct= rl initialization and freed together during resctrl exit. Telemetry events are enumerated on resctrl mount so only at resctrl mount will the number of RMID supported by all monitoring resources and needed as size for rmid_ptrs[] be known. Separate closid_num_dirty_rmid[] and rmid_ptrs[] allocation and free in preparation for rmid_ptrs[] to be allocated on resctrl mount. Keep the rdtgroup_mutex protection around the allocation and free of closid_num_dirty_rmid[] as ARM needs this to guarantee memory ordering. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/monitor.c | 79 ++++++++++++++++++++++++++++---------------- 1 file changed, 51 insertions(+), 28 deletions(-) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 8d2b0bb0bfc9..4cfddef45006 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -907,36 +907,14 @@ void mbm_setup_overflow_handler(struct rdt_l3_mon_dom= ain *dom, unsigned long del static int dom_data_init(struct rdt_resource *r) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); - u32 num_closid =3D resctrl_arch_get_num_closid(r); struct rmid_entry *entry =3D NULL; int err =3D 0, i; u32 idx; =20 mutex_lock(&rdtgroup_mutex); - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - u32 *tmp; - - /* - * If the architecture hasn't provided a sanitised value here, - * this may result in larger arrays than necessary. Resctrl will - * use a smaller system wide value based on the resources in - * use. 
- */ - tmp =3D kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL); - if (!tmp) { - err =3D -ENOMEM; - goto out_unlock; - } - - closid_num_dirty_rmid =3D tmp; - } =20 rmid_ptrs =3D kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL); if (!rmid_ptrs) { - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - kfree(closid_num_dirty_rmid); - closid_num_dirty_rmid =3D NULL; - } err =3D -ENOMEM; goto out_unlock; } @@ -972,11 +950,6 @@ static void dom_data_exit(struct rdt_resource *r) if (!r->mon_capable) goto out_unlock; =20 - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - kfree(closid_num_dirty_rmid); - closid_num_dirty_rmid =3D NULL; - } - kfree(rmid_ptrs); rmid_ptrs =3D NULL; =20 @@ -1815,6 +1788,45 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_= file *of, char *buf, return ret ?: nbytes; } =20 +static int closid_num_dirty_rmid_alloc(struct rdt_resource *r) +{ + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { + u32 num_closid =3D resctrl_arch_get_num_closid(r); + u32 *tmp; + + /* For ARM memory ordering access to closid_num_dirty_rmid */ + mutex_lock(&rdtgroup_mutex); + + /* + * If the architecture hasn't provided a sanitised value here, + * this may result in larger arrays than necessary. Resctrl will + * use a smaller system wide value based on the resources in + * use. + */ + tmp =3D kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL); + if (!tmp) { + mutex_unlock(&rdtgroup_mutex); + return -ENOMEM; + } + + closid_num_dirty_rmid =3D tmp; + + mutex_unlock(&rdtgroup_mutex); + } + + return 0; +} + +static void closid_num_dirty_rmid_free(void) +{ + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { + mutex_lock(&rdtgroup_mutex); + kfree(closid_num_dirty_rmid); + closid_num_dirty_rmid =3D NULL; + mutex_unlock(&rdtgroup_mutex); + } +} + /** * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. 
* @@ -1835,10 +1847,16 @@ int resctrl_l3_mon_resource_init(void) if (!r->mon_capable) return 0; =20 - ret =3D dom_data_init(r); + ret =3D closid_num_dirty_rmid_alloc(r); if (ret) return ret; =20 + ret =3D dom_data_init(r); + if (ret) { + closid_num_dirty_rmid_free(); + return ret; + } + if (resctrl_arch_is_evt_configurable(QOS_L3_MBM_TOTAL_EVENT_ID)) { mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].configurable =3D true; resctrl_file_fflags_init("mbm_total_bytes_config", @@ -1881,5 +1899,10 @@ void resctrl_l3_mon_resource_exit(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); =20 + if (!r->mon_capable) + return; + + closid_num_dirty_rmid_free(); + dom_data_exit(r); } --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8451C32A3EB for ; Thu, 4 Dec 2025 20:54:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881683; cv=none; b=BtghbSYdYIUnI6OvyLB5hwA0NU3X5Y086x0rxV9NS2/RDmliEfjUdidhb7gvaBFhXMOsx+A24vxqy4xtvqZSm0ybiVQf++mYmK2hZt69tCLKbOQjQJnMVav+HMIizqSNkGOJRAHgjCPf/burDfSX7PD59F36Qb/clP8WMeMV+PY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881683; c=relaxed/simple; bh=6P6/qdwIinWh3GhPxgZSngc0CYjUPfiSTLftQro4OAU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Jn68D4Pth30N7dYPlCUKzaS2dzxLoLkRw7aIvHyWzADJidpouwLijui6H2/tE9Eq68TdFlghnXnTj392Fbj8NkILIq9hw2AiUgLsDKwLKc/+hLW99yB1ngIJm7mHvvjpEfMNb39W+2HrtroGSjN+WnmGKexp4/+/drw1yabsUZI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SMLcT/Mz; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SMLcT/Mz" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881681; x=1796417681; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6P6/qdwIinWh3GhPxgZSngc0CYjUPfiSTLftQro4OAU=; b=SMLcT/MzJdg64np48A0OjEAOsZ65RKXQ4CqS5jagwGyg37axFO/MtIW/ LWBaLvdu6BOtK2fajmTZiX+2QZhY6wk7h0kNnmTlhiJpObTG0ClcssPYr pQtPG3DHKEPEHaMdoPnnc4Qj639H+GxndfhGlPFhiGcbcxuCKKtj2P43X s529FrjXsyO5BS5/VZ4VFVSqVRQpVI/VMA1w5VLrLu1DvSNUpOzjwF/GW AuwFvtPpVeYSiiDA5OdaMSzh8MAro6rG/ZNl3cx8OUSbMIqNqgfpgWymf sbBtN4rYNhp2bpYAiYlvRDu/JnWZW0I8U6cbRD19nM1bihfkx+HHYeAo3 A==; X-CSE-ConnectionGUID: ak5snzJDRIKNx2BkqLjVKg== X-CSE-MsgGUID: udYquAq8TyWCCmvQ2/AElA== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511076" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511076" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:31 -0800 X-CSE-ConnectionGUID: GQ1uEawjRES/qp8Qiju/qA== X-CSE-MsgGUID: GaErUmvAT5CEg7lurl7z5w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; 
d="scan'208";a="225752868" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:30 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 27/32] x86,fs/resctrl: Compute number of RMIDs as minimum across resources Date: Thu, 4 Dec 2025 12:53:57 -0800 Message-ID: <20251204205404.12763-28-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that only the L3 resource supports monitor events, so it simply takes the rdt_resource::num_rmid from RDT_RESOURCE_L3 as the system's number of RMIDs. The addition of telemetry events in a different resource breaks that assumption. Compute the number of available RMIDs as the minimum value across all mon_capable resources (analogous to how the number of CLOSIDs is computed across alloc_capable resources). Note that mount time enumeration of the telemetry resource means that this number can be reduced. If this happens, then some memory will be wasted as the allocations for rdt_l3_mon_domain::mbm_states[] and rdt_l3_mon_domain::rmid_busy_llc created during resctrl initialization will be larger than needed. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/core.c | 15 +++++++++++++-- fs/resctrl/rdtgroup.c | 6 ++++++ 2 files changed, 19 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 960974ffa866..bca65851d592 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -110,12 +110,23 @@ struct rdt_hw_resource rdt_resources_all[RDT_NUM_RESO= URCES] =3D { }, }; =20 +/** + * resctrl_arch_system_num_rmid_idx - Compute number of supported RMIDs + * (minimum across all mon_capable resource) + * + * Return: Number of supported RMIDs at time of call. Note that mount time + * enumeration of resources may reduce the number. + */ u32 resctrl_arch_system_num_rmid_idx(void) { - struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl; + u32 num_rmids =3D U32_MAX; + struct rdt_resource *r; + + for_each_mon_capable_rdt_resource(r) + num_rmids =3D min(num_rmids, r->mon.num_rmid); =20 /* RMID are independent numbers for x86. num_rmid_idx =3D=3D num_rmid */ - return r->mon.num_rmid; + return num_rmids =3D=3D U32_MAX ? 0 : num_rmids; } =20 struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 60ce2390723e..e95d3d0dc515 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4352,6 +4352,12 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h * During boot this may be called before global allocations have been made= by * resctrl_l3_mon_resource_init(). * + * Called during CPU online that may run as soon as CPU online callbacks + * are set up during resctrl initialization. 
The number of supported RMIDs + * may be reduced if additional mon_capable resources are enumerated + * at mount time. This means the rdt_l3_mon_domain::mbm_states[] and + * rdt_l3_mon_domain::rmid_busy_llc allocations may be larger than needed. + * * Return: 0 for success, or -ENOMEM. */ static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3= _mon_domain *d) --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3F25C32ABD0 for ; Thu, 4 Dec 2025 20:54:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881685; cv=none; b=P+RR6zC1YSMWi9h/MudARFrxUeDnN30AoUyprR1emR5sgWfQJwNmAHX4VaNFF+5VODHtagjyDlrZwdIziesqTtTXbnS0la06AHFq65MFWeETugne6F4XIOovq4dTJAvn5Lb5inGJtYat/AjQBZN7dBa2nkc12Qy3gzyrRzYmV7Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881685; c=relaxed/simple; bh=+n+JvokATZFeYc55f/ADAB5n8RX+tcT3RgtqGGFpAQU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=UHc2Uajphl52fr7NygalYVLBrrQQdT2uhPK49lH9jXDNJiG3HDNkA/S8jhnh4l3a/LnM/8wGvaVP9AkH6HK3JuyHg/4bg3BxXbkEEGshocexEBhKBFXlTQQ3IGgds6p3O2aaxUK372ysZ7bpHcUP1tZL8GNBq3uv7dVZiWky6Og= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=dY5P51CG; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="dY5P51CG" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881683; x=1796417683; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+n+JvokATZFeYc55f/ADAB5n8RX+tcT3RgtqGGFpAQU=; b=dY5P51CGa7J3R12Qfpbq/UuEWKJBCJgNYla3bazj0OPcWJYHFUuVywqj 9vZ6NJG9DVBLukAzMMl920awhgoaVgsJS/UAZNsVfOmVqlmw6Y91GFLFx aS4Yqz4FxMycHtIvylR3eGq6zRV76uRF1S6bQ30wg1w6r5cG2gH5kwNoU cqOIlFCuIsx6Jb50W+RRQaUYCNiucRplzH5DgwNYkDSUNyydHvQsH6DBQ ww2HDLOHBBnpuukcGnVdgLcLkFKt79mW5/Y6SCruloGJ6Dme+ROavP2SU eeoaI19jTCEInCKbeWwu+9oy0eyd9VjODGT1cO5e9IJb/tc67PLJh+g+R g==; X-CSE-ConnectionGUID: SGgOKibbRuqSEsHwQ+7gGg== X-CSE-MsgGUID: WWBJAqlIQu+WgRzsW7YqBw== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511084" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511084" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:32 -0800 X-CSE-ConnectionGUID: bwVfA3SfQ+OLk7Zojl9awg== X-CSE-MsgGUID: dkNvCY32TMyJ9KyTMTJxHA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752878" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:31 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse 
, Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 28/32] fs/resctrl: Move RMID initialization to first mount Date: Thu, 4 Dec 2025 12:53:58 -0800 Message-ID: <20251204205404.12763-29-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" L3 monitor features are enumerated during resctrl initialization and rmid_ptrs[] that tracks all RMIDs and depends on the number of supported RMIDs is allocated during this time. Telemetry monitor features are enumerated during first resctrl mount and may support a different number of RMIDs compared to L3 monitor features. Delay allocation and initialization of rmid_ptrs[] until first mount. Since the number of RMIDs cannot change on later mounts, keep the same set = of rmid_ptrs[] until resctrl_exit(). This is required because the limbo handler keeps running after resctrl is unmounted and needs to access rmid_ptrs[] as it keeps tracking busy RMIDs after unmount. Rename routines to match what they now do: dom_data_init() -> setup_rmid_lru_list() dom_data_exit() -> free_rmid_lru_list() Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/internal.h | 4 ++++ fs/resctrl/monitor.c | 54 ++++++++++++++++++++----------------------- fs/resctrl/rdtgroup.c | 5 ++++ 3 files changed, 34 insertions(+), 29 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 399f625be67d..1a9b29119f88 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -369,6 +369,10 @@ int closids_supported(void); =20 void closid_free(int closid); =20 +int setup_rmid_lru_list(void); + +void free_rmid_lru_list(void); + int alloc_rmid(u32 closid); =20 void free_rmid(u32 closid, u32 rmid); diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 4cfddef45006..0ba1b3fb6525 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -904,20 +904,29 @@ void mbm_setup_overflow_handler(struct rdt_l3_mon_dom= ain *dom, unsigned long del schedule_delayed_work_on(cpu, &dom->mbm_over, delay); } =20 -static int dom_data_init(struct rdt_resource *r) +int setup_rmid_lru_list(void) { - u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); struct rmid_entry *entry =3D NULL; - int err =3D 0, i; + u32 idx_limit; u32 idx; + int i; =20 - mutex_lock(&rdtgroup_mutex); + if (!resctrl_arch_mon_capable()) + return 0; =20 + /* + * Called on every mount, but the number of RMIDs cannot change + * after the first mount, so keep using the same set of rmid_ptrs[] + * until resctrl_exit(). Note that the limbo handler continues to + * access rmid_ptrs[] after resctrl is unmounted. + */ + if (rmid_ptrs) + return 0; + + idx_limit =3D resctrl_arch_system_num_rmid_idx(); rmid_ptrs =3D kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL); - if (!rmid_ptrs) { - err =3D -ENOMEM; - goto out_unlock; - } + if (!rmid_ptrs) + return -ENOMEM; =20 for (i =3D 0; i < idx_limit; i++) { entry =3D &rmid_ptrs[i]; @@ -930,30 +939,24 @@ static int dom_data_init(struct rdt_resource *r) /* * RESCTRL_RESERVED_CLOSID and RESCTRL_RESERVED_RMID are special and * are always allocated. 
These are used for the rdtgroup_default - * control group, which will be setup later in resctrl_init(). + * control group, which was setup earlier in rdtgroup_setup_default(). */ idx =3D resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID, RESCTRL_RESERVED_RMID); entry =3D __rmid_entry(idx); list_del(&entry->list); =20 -out_unlock: - mutex_unlock(&rdtgroup_mutex); - - return err; + return 0; } =20 -static void dom_data_exit(struct rdt_resource *r) +void free_rmid_lru_list(void) { - mutex_lock(&rdtgroup_mutex); - - if (!r->mon_capable) - goto out_unlock; + if (!resctrl_arch_mon_capable()) + return; =20 + mutex_lock(&rdtgroup_mutex); kfree(rmid_ptrs); rmid_ptrs =3D NULL; - -out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 @@ -1831,7 +1834,8 @@ static void closid_num_dirty_rmid_free(void) * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * * Allocate and initialise global monitor resources that do not belong to a - * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists. + * specific domain. i.e. the closid_num_dirty_rmid[] used to find the CLOS= ID + * with the cleanest set of RMIDs. * Called once during boot after the struct rdt_resource's have been confi= gured * but before the filesystem is mounted. * Resctrl's cpuhp callbacks may be called before this point to bring a do= main @@ -1851,12 +1855,6 @@ int resctrl_l3_mon_resource_init(void) if (ret) return ret; =20 - ret =3D dom_data_init(r); - if (ret) { - closid_num_dirty_rmid_free(); - return ret; - } - if (resctrl_arch_is_evt_configurable(QOS_L3_MBM_TOTAL_EVENT_ID)) { mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].configurable =3D true; resctrl_file_fflags_init("mbm_total_bytes_config", @@ -1903,6 +1901,4 @@ void resctrl_l3_mon_resource_exit(void) return; =20 closid_num_dirty_rmid_free(); - - dom_data_exit(r); } diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index e95d3d0dc515..d49ffc56ea61 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2799,6 +2799,10 @@ static int rdt_get_tree(struct fs_context *fc) goto out; } =20 + ret =3D setup_rmid_lru_list(); + if (ret) + goto out; + ret =3D rdtgroup_setup_root(ctx); if (ret) goto out; @@ -4654,4 +4658,5 @@ void resctrl_exit(void) */ =20 resctrl_l3_mon_resource_exit(); + free_rmid_lru_list(); } --=20 2.51.1 From nobody Fri Dec 19 18:54:00 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4616932ABF9 for ; Thu, 4 Dec 2025 20:54:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881685; cv=none; b=QSkBlIz8jlmE/sbDiJFy/kBxHIYbaFyEQS4XOrGo2VSMDW6R+c/mjYbWn073G1ayDUGo/vMKH3u0c7CDd62zC4jRw9Ys2E3Evz+ju7VTPUYmiKuPuaoKJUneYhU2rjcp7sez1ayv1xFimbmnBeXl/71r/1haaDnEBuxKqGhwzsQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1764881685; c=relaxed/simple; bh=XAfywqzIO9v8aMzwLwSad0Acq7REiDb7hYT15DhaUu8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GBn7dm1701EgjAZmGsytuG0NU7N1M5eG8909K9nbX13AMQPDVj/2k5OdvE6+zcIngrQrqNJ5xF1BuzAliHlT2tH6YIf7Q5lxFZkMwrm0z7fo9lQd3/6gPdeFTV6Yb8gfHMkw0cHipyuFIbMmB8lSHua9JO0lp2Rq72Ti5KKntVU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass 
(2048-bit key) header.d=intel.com header.i=@intel.com header.b=kMOvQ+JW; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="kMOvQ+JW" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1764881683; x=1796417683; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=XAfywqzIO9v8aMzwLwSad0Acq7REiDb7hYT15DhaUu8=; b=kMOvQ+JWXjBkbQiOi1S3JP0w0inFFB+DAFsuvHVb3AUGCcCSV59KUm0N lobZ0FUHvrUq9So4yfxkN4CJF3W+keEk8M3Bqt+BWgZ82fqKvGJsc/EhO b+ef0B6C90F90/HKhgKGhobMI77O3xPsgtrEm2co5D/hjUjesu9P94yfl aqZ0YhCmmubiuWtBsqly4p8PLqCcI6nFtASJCbD5LogxjHY5NucR5jKcK bjfo0nMJmzmN8vS/K0pi4tN53o4Y5xr3a72aBVywp+dYpDJlewMYzudUk lGvNUQZTcKCBgtFf84k8Ped4xU/6M1wXz8uHZnTftux8MA/N5ZylkMtt/ Q==; X-CSE-ConnectionGUID: S1fz/bSBRI2adrqHVkUj7Q== X-CSE-MsgGUID: leLM0waFTCOCva423EpxGw== X-IronPort-AV: E=McAfee;i="6800,10657,11632"; a="69511092" X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="69511092" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:33 -0800 X-CSE-ConnectionGUID: Zu5muyNKTGSqRaEuIQgNfQ== X-CSE-MsgGUID: Y6DQMQDqRxi798tr1/QYLw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,250,1758610800"; d="scan'208";a="225752886" Received: from mgerlach-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.165]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Dec 2025 12:54:32 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v15 29/32] x86/resctrl: Enable RDT_RESOURCE_PERF_PKG Date: Thu, 4 Dec 2025 12:53:59 -0800 Message-ID: <20251204205404.12763-30-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com> References: <20251204205404.12763-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Since telemetry events are enumerated on resctrl mount the RDT_RESOURCE_PER= F_PKG resource is not considered "monitoring capable" during early resctrl initia= lization. This means that the domain list for RDT_RESOURCE_PERF_PKG is not built when= the CPU hot plug notifiers are registered and run for the first time right after re= sctrl initialization. Mark the RDT_RESOURCE_PERF_PKG as "monitoring capable" upon successful tele= metry event enumeration to ensure future CPU hotplug events include this resource= and initialize its domain list for CPUs that are already online. Print to console log announcing the name of the telemetry feature detected. 
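The "enumerate only on the first mount" behaviour this relies on comes from the only_once guard already present in resctrl_arch_pre_mount(), shown as context in the diff below. A minimal userspace sketch of that run-exactly-once idiom, using C11 atomics in place of the kernel's atomic_try_cmpxchg() (function names and output strings here are illustrative only):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int only_once;

static void pre_mount(const char *who)
{
        int old = 0;

        /* Only the first caller sees the 0 -> 1 exchange succeed. */
        if (!atomic_compare_exchange_strong(&only_once, &old, 1)) {
                printf("%s: enumeration already done, skipping\n", who);
                return;
        }
        printf("%s: performing one-time telemetry enumeration\n", who);
}

int main(void)
{
        pre_mount("first mount");
        pre_mount("second mount");
        return 0;
}

Because only the first mount performs the enumeration, that single point is also where the RDT_RESOURCE_PERF_PKG domain list must be populated for CPUs that are already online.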
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/core.c      | 16 +++++++++++++++-
 arch/x86/kernel/cpu/resctrl/intel_aet.c |  6 ++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index bca65851d592..829633bc54e5 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -766,14 +766,28 @@ static int resctrl_arch_offline_cpu(unsigned int cpu)
 
 void resctrl_arch_pre_mount(void)
 {
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_resctrl;
 	static atomic_t only_once = ATOMIC_INIT(0);
-	int old = 0;
+	int cpu, old = 0;
 
 	if (!atomic_try_cmpxchg(&only_once, &old, 1))
 		return;
 
 	if (!intel_aet_get_events())
 		return;
+
+	/*
+	 * Late discovery of telemetry events means the domains for the
+	 * resource were not built. Do that now.
+	 */
+	cpus_read_lock();
+	mutex_lock(&domain_list_lock);
+	r->mon_capable = true;
+	rdt_mon_capable = true;
+	for_each_online_cpu(cpu)
+		domain_add_cpu_mon(cpu, r);
+	mutex_unlock(&domain_list_lock);
+	cpus_read_unlock();
 }
 
 enum {
diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/resctrl/intel_aet.c
index 38fcddc72ed8..2e68c6baf9b2 100644
--- a/arch/x86/kernel/cpu/resctrl/intel_aet.c
+++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c
@@ -273,6 +273,12 @@ static bool enable_events(struct event_group *e, struct pmt_feature_group *p)
 	else
 		r->mon.num_rmid = e->num_rmid;
 
+	if (skipped_events)
+		pr_info("%s %s:0x%x monitoring detected (skipped %d events)\n", r->name,
+			e->pfname, e->guid, skipped_events);
+	else
+		pr_info("%s %s:0x%x monitoring detected\n", r->name, e->pfname, e->guid);
+
 	return true;
 }
 
--
2.51.1

From nobody Fri Dec 19 18:54:00 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15 30/32] fs/resctrl: Provide interface to create architecture specific debugfs area
Date: Thu, 4 Dec 2025 12:54:00 -0800
Message-ID: <20251204205404.12763-31-tony.luck@intel.com>
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>

All files below /sys/fs/resctrl are considered user ABI. This leaves no
place for architectures to provide additional interfaces.

Add resctrl_debugfs_mon_info_arch_mkdir(), which creates a directory in the
debugfs file system for a monitoring resource. Naming follows the layout of
the main resctrl hierarchy:

	/sys/kernel/debug/resctrl/info/{resource}_MON/{arch}

The {arch} last-level directory name matches the output of the user-level
"uname -m" command.

Architecture code may use this directory for debug information, or for minor
tuning of features. It must not be used for basic feature enabling as debugfs
may not be configured/mounted on production systems.

Suggested-by: Reinette Chatre
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 include/linux/resctrl.h | 10 ++++++++++
 fs/resctrl/rdtgroup.c   | 29 +++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 8623e450619a..b862a9dd785b 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -702,6 +702,16 @@ bool resctrl_arch_get_io_alloc_enabled(struct rdt_resource *r);
 extern unsigned int resctrl_rmid_realloc_threshold;
 extern unsigned int resctrl_rmid_realloc_limit;
 
+/**
+ * resctrl_debugfs_mon_info_arch_mkdir() - Create a debugfs info directory.
+ *					    Removed by resctrl_exit().
+ * @r: Resource (must be mon_capable).
+ *
+ * Return: NULL if resource is not monitoring capable,
+ *	   dentry pointer on success, or ERR_PTR(-ERROR) on failure.
+ */
+struct dentry *resctrl_debugfs_mon_info_arch_mkdir(struct rdt_resource *r);
+
 int resctrl_init(void);
 void resctrl_exit(void);
 
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index d49ffc56ea61..d13291d71adf 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -75,6 +76,8 @@ static void rdtgroup_destroy_root(void);
 
 struct dentry *debugfs_resctrl;
 
+static struct dentry *debugfs_resctrl_info;
+
 /*
  * Memory bandwidth monitoring event to use for the default CTRL_MON group
  * and each new CTRL_MON group created by the user. Only relevant when
@@ -4599,6 +4602,31 @@ int resctrl_init(void)
 	return ret;
 }
 
+/*
+ * Create /sys/kernel/debug/resctrl/info/{r->name}_MON/{arch} directory
+ * by request for architecture to use for debugging or minor tuning.
+ * Basic functionality of features must not be controlled by files
+ * added to this directory as debugfs may not be configured/mounted
+ * on production systems.
+ */
+struct dentry *resctrl_debugfs_mon_info_arch_mkdir(struct rdt_resource *r)
+{
+	struct dentry *moninfodir;
+	char name[32];
+
+	if (!r->mon_capable)
+		return NULL;
+
+	if (!debugfs_resctrl_info)
+		debugfs_resctrl_info = debugfs_create_dir("info", debugfs_resctrl);
+
+	sprintf(name, "%s_MON", r->name);
+
+	moninfodir = debugfs_create_dir(name, debugfs_resctrl_info);
+
+	return debugfs_create_dir(utsname()->machine, moninfodir);
+}
+
 static bool resctrl_online_domains_exist(void)
 {
 	struct rdt_resource *r;
@@ -4650,6 +4678,7 @@ void resctrl_exit(void)
 
 	debugfs_remove_recursive(debugfs_resctrl);
 	debugfs_resctrl = NULL;
+	debugfs_resctrl_info = NULL;
 	unregister_filesystem(&rdt_fs_type);
 
 	/*
--
2.51.1

From nobody Fri Dec 19 18:54:00 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15 31/32] x86/resctrl: Add debugfs files to show telemetry aggregator status
Date: Thu, 4 Dec 2025 12:54:01 -0800
Message-ID: <20251204205404.12763-32-tony.luck@intel.com>
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>

Each telemetry aggregator provides three status registers at the top end of
MMIO space after all the per-RMID per-event counters:

data_loss_count:
	This counts the number of times that this aggregator failed to
	accumulate a counter value supplied by a CPU core.

data_loss_timestamp:
	This is a "timestamp" from a free running 25MHz uncore timer
	indicating when the most recent data loss occurred.

last_update_timestamp:
	Another 25MHz timestamp indicating when the most recent counter
	update was successfully applied.

Create files in /sys/kernel/debug/resctrl/info/PERF_PKG_MON/x86_64/ to
display the value of each of these status registers for each aggregator in
each enabled event group.

The prefix for each file name describes the type of aggregator, the guid,
which package it is located on, and an opaque instance number to provide a
unique file name when there are multiple aggregators on a package. The
suffix is one of the three strings listed above. An example name is:

	energy_0x26696143_pkg0_agg2_data_loss_count

These files are removed along with all other debugfs entries by the call to
debugfs_remove_recursive() in resctrl_exit().
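As a usage sketch only (the guid is the example from the commit message; the
package and aggregator numbers, and any values read back, are hypothetical),
these status files might be inspected like this once debugfs is mounted:

	cd /sys/kernel/debug/resctrl/info/PERF_PKG_MON/x86_64

	# Each aggregator instance contributes three read-only status files.
	ls energy_0x26696143_pkg0_agg2_*
	# energy_0x26696143_pkg0_agg2_data_loss_count
	# energy_0x26696143_pkg0_agg2_data_loss_timestamp
	# energy_0x26696143_pkg0_agg2_last_update_timestamp

	# A rising loss count plus the two 25MHz timestamps show whether counter
	# updates are being dropped and when the last successful update occurred.
	cat energy_0x26696143_pkg0_agg2_data_loss_count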
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/internal.h  |  2 +
 arch/x86/kernel/cpu/resctrl/core.c      |  2 +
 arch/x86/kernel/cpu/resctrl/intel_aet.c | 60 +++++++++++++++++++++++++
 3 files changed, 64 insertions(+)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index df09091f7c6c..e538174fe193 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -236,6 +236,7 @@ void __exit intel_aet_exit(void);
 int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val);
 void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r,
 				struct list_head *add_pos);
+void intel_aet_add_debugfs(void);
 bool intel_aet_option(bool force_off, char *tok);
 #else
 static inline bool intel_aet_get_events(void) { return false; }
@@ -247,6 +248,7 @@ static inline int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64
 
 static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r,
 					      struct list_head *add_pos) { }
+static inline void intel_aet_add_debugfs(void) { }
 static inline bool intel_aet_option(bool force_off, char *tok) { return false; }
 #endif
 
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 829633bc54e5..62e96aad060d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -788,6 +788,8 @@ void resctrl_arch_pre_mount(void)
 		domain_add_cpu_mon(cpu, r);
 	mutex_unlock(&domain_list_lock);
 	cpus_read_unlock();
+
+	intel_aet_add_debugfs();
 }
 
 enum {
diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/resctrl/intel_aet.c
index 2e68c6baf9b2..693820b9c155 100644
--- a/arch/x86/kernel/cpu/resctrl/intel_aet.c
+++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c
@@ -15,8 +15,11 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -29,6 +32,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -226,6 +230,46 @@ static bool all_regions_have_sufficient_rmid(struct event_group *e, struct pmt_f
 	return ret;
 }
 
+static int status_read(void *priv, u64 *val)
+{
+	void __iomem *info = (void __iomem *)priv;
+
+	*val = readq(info);
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(status_fops, status_read, NULL, "%llu\n");
+
+static void make_status_files(struct dentry *dir, struct event_group *e, u8 pkg,
+			      int instance, void *info_end)
+{
+	char name[80];
+
+	sprintf(name, "%s_0x%x_pkg%u_agg%d_data_loss_count", e->pfname, e->guid, pkg, instance);
+	debugfs_create_file(name, 0400, dir, info_end - 24, &status_fops);
+
+	sprintf(name, "%s_0x%x_pkg%u_agg%d_data_loss_timestamp", e->pfname, e->guid, pkg, instance);
+	debugfs_create_file(name, 0400, dir, info_end - 16, &status_fops);
+
+	sprintf(name, "%s_0x%x_pkg%u_agg%d_last_update_timestamp", e->pfname, e->guid, pkg, instance);
+	debugfs_create_file(name, 0400, dir, info_end - 8, &status_fops);
+}
+
+static void create_debug_event_status_files(struct dentry *dir, struct event_group *e)
+{
+	struct pmt_feature_group *p = e->pfg;
+	void *info_end;
+
+	for (int i = 0; i < p->count; i++) {
+		if (!p->regions[i].addr)
+			continue;
+		info_end = (void __force *)p->regions[i].addr + e->mmio_size;
+		make_status_files(dir, e, p->regions[i].plat_info.package_id,
+				  i, info_end);
+	}
+}
+
 static bool enable_events(struct event_group *e, struct pmt_feature_group *p)
 {
 	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_resctrl;
@@ -410,3 +454,19 @@ void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r,
 		kfree(d);
 	}
 }
+
+void intel_aet_add_debugfs(void)
+{
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_resctrl;
+	struct event_group **peg;
+	struct dentry *infodir;
+
+	infodir = resctrl_debugfs_mon_info_arch_mkdir(r);
+
+	if (IS_ERR_OR_NULL(infodir))
+		return;
+
+	for_each_event_group(peg)
+		if ((*peg)->pfg)
+			create_debug_event_status_files(infodir, *peg);
+}
--
2.51.1

From nobody Fri Dec 19 18:54:00 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v15 32/32] x86,fs/resctrl: Update documentation for telemetry events
Date: Thu, 4 Dec 2025 12:54:02 -0800
Message-ID: <20251204205404.12763-33-tony.luck@intel.com>
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>

Update the resctrl filesystem documentation with details about the resctrl
files that support telemetry events.

Signed-off-by: Tony Luck
---
 Documentation/filesystems/resctrl.rst | 102 +++++++++++++++++++++++---
 1 file changed, 90 insertions(+), 12 deletions(-)

diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
index 8c8ce678148a..5418ca72bed3 100644
--- a/Documentation/filesystems/resctrl.rst
+++ b/Documentation/filesystems/resctrl.rst
@@ -252,13 +252,12 @@ with respect to allocation:
 	bandwidth percentages are directly applied to the threads
 	running on the core
 
-If RDT monitoring is available there will be an "L3_MON" directory
+If L3 monitoring is available there will be an "L3_MON" directory
 with the following files:
 
 "num_rmids":
-	The number of RMIDs available. This is the
-	upper bound for how many "CTRL_MON" + "MON"
-	groups can be created.
+	The number of RMIDs supported by hardware for
+	L3 monitoring events.
 
 "mon_features":
 	Lists the monitoring events if
@@ -484,6 +483,25 @@ with the following files:
 	bytes) at which a previously used LLC_occupancy
 	counter can be considered for re-use.
 
+If telemetry monitoring is available there will be a "PERF_PKG_MON" directory
+with the following files:
+
+"num_rmids":
+	The number of RMIDs for telemetry monitoring events. By default,
+	resctrl will not enable telemetry events of a particular type
+	("perf" or "energy") if the number of RMIDs that can be tracked
+	concurrently for that type is lower than the total number of
+	RMIDs supported by that type. The user can force-enable each
+	type (or individual guids within a type) of telemetry events
+	with the "rdt=" boot command line option, but this may reduce
+	the number of monitoring groups that can be created.
+
+"mon_features":
+	Lists the telemetry monitoring events that are enabled on this system.
+
+The upper bound for how many "CTRL_MON" + "MON" groups can be created
+is the smaller of the L3_MON and PERF_PKG_MON "num_rmids" values.
+
 Finally, in the top level of the "info" directory there is a file
 named "last_cmd_status". This is reset with every "command" issued
 via the file system (making new directories or writing to any of the
@@ -589,15 +607,40 @@ When control is enabled all CTRL_MON groups will also contain:
 When monitoring is enabled all MON groups will also contain:
 
 "mon_data":
-	This contains a set of files organized by L3 domain and by
-	RDT event. E.g. on a system with two L3 domains there will
-	be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
-	directories have one file per event (e.g. "llc_occupancy",
-	"mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
-	files provide a read out of the current value of the event for
-	all tasks in the group. In CTRL_MON groups these files provide
-	the sum for all tasks in the CTRL_MON group and all tasks in
+	This contains directories for each monitor domain.
+
+	If L3 monitoring is enabled, there will be a "mon_L3_XX" directory for
+	each instance of an L3 cache. Each directory contains files for the enabled
+	L3 events (e.g. "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes").
+
+	If telemetry monitoring is enabled, there will be a "mon_PERF_PKG_YY"
+	directory for each physical processor package. Each directory contains
+	files for the enabled telemetry events (e.g. "core_energy", "activity",
+	"uops_retired", etc.)
+
+	The info/`*`/mon_features files provide the full list of enabled
+	event/file names.
+
+	"core_energy" reports a floating point number for the energy (in Joules)
+	consumed by cores (registers, arithmetic units, TLB and L1/L2 caches)
+	during execution of instructions summed across all logical CPUs on a
+	package for the current monitoring group.
+
+	"activity" also reports a floating point value (in Farads). This provides
+	an estimate of work done independent of the frequency that the CPUs used
+	for execution.
+
+	Note that "core_energy" and "activity" only measure energy/activity in the
+	"core" of the CPU (arithmetic units, TLB, L1 and L2 caches, etc.). They
+	do not include L3 cache, memory, I/O devices etc.
+
+	All other events report decimal integer values.
+
+	In a MON group these files provide a read out of the current value of
+	the event for all tasks in the group. In CTRL_MON groups these files
+	provide the sum for all tasks in the CTRL_MON group and all tasks in
 	MON groups. Please see example section for more details on usage.
+
 	On systems with Sub-NUMA Cluster (SNC) enabled there are extra
 	directories for each node (located within the "mon_L3_XX" directory
 	for the L3 cache they occupy). These are named "mon_sub_L3_YY"
@@ -1590,6 +1633,41 @@ Example with C::
 	resctrl_release_lock(fd);
 }
 
+Debugfs
+=======
+In addition to the use of debugfs for tracing of pseudo-locking performance,
+architecture code may create debugfs directories associated with monitoring
+features for a specific resource.
+
+The full pathname for these is in the form:
+
+	/sys/kernel/debug/resctrl/info/{resource_name}_MON/{arch}/
+
+The presence, names, and format of these files may vary between architectures
+even if the same resource is present.
+
+PERF_PKG_MON/x86_64
+-------------------
+Three status files are present per telemetry aggregator instance.
+The prefix of each file name describes the type ("energy" or "perf"), the
+guid, which processor package it belongs to, and the instance number of the
+aggregator. For example: "energy_0x26696143_pkg1_agg2".
+
+The suffix describes which data is reported in the file and is one of:
+
+data_loss_count:
+	This counts the number of times that this aggregator
+	failed to accumulate a counter value supplied by a CPU.
+
+data_loss_timestamp:
+	This is a "timestamp" from a free running 25MHz uncore
+	timer indicating when the most recent data loss occurred.
+
+last_update_timestamp:
+	Another 25MHz timestamp indicating when the
+	most recent counter update was successfully applied.
+
+
 Examples for RDT Monitoring along with allocation usage
 ========================================================
 Reading monitored data
--
2.51.1
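To round out the documentation patch above, a brief usage sketch (the event
file names are those listed in the new "mon_data" text; the package number
and any values read back are hypothetical):

	# Read per-package telemetry events for a monitoring group.
	cd /sys/fs/resctrl/mon_data
	cat mon_PERF_PKG_00/core_energy     # floating point Joules
	cat mon_PERF_PKG_00/activity        # floating point Farads
	cat mon_PERF_PKG_00/uops_retired    # decimal integer count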