From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 01/32] x86,fs/resctrl: Improve domain type checking
Date: Wed, 10 Dec 2025 15:13:40 -0800
Message-ID: <20251210231413.59102-2-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Every resctrl resource has a list of domain structures. struct rdt_ctrl_domain
and struct rdt_mon_domain both begin with struct rdt_domain_hdr with
rdt_domain_hdr::type used in validity checks before accessing the domain of a
particular type.

Add the resource id to struct rdt_domain_hdr in preparation for a new
monitoring domain structure that will be associated with a new monitoring
resource. Improve existing domain validity checks with a new helper
domain_header_is_valid() that checks both domain type and resource id.
domain_header_is_valid() should be used before every call to container_of()
that accesses a domain structure.

Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 include/linux/resctrl.h            |  9 +++++++++
 arch/x86/kernel/cpu/resctrl/core.c | 10 ++++++----
 fs/resctrl/ctrlmondata.c           |  2 +-
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 54701668b3df..e7c218f8d4f7 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -131,15 +131,24 @@ enum resctrl_domain_type {
  * @list:		all instances of this resource
  * @id:			unique id for this instance
  * @type:		type of this instance
+ * @rid:		resource id for this instance
  * @cpu_mask:		which CPUs share this resource
  */
 struct rdt_domain_hdr {
 	struct list_head		list;
 	int				id;
 	enum resctrl_domain_type	type;
+	enum resctrl_res_level		rid;
 	struct cpumask			cpu_mask;
 };
 
+static inline bool domain_header_is_valid(struct rdt_domain_hdr *hdr,
+					   enum resctrl_domain_type type,
+					   enum resctrl_res_level rid)
+{
+	return !WARN_ON_ONCE(hdr->type != type || hdr->rid != rid);
+}
+
 /**
  * struct rdt_ctrl_domain - group of CPUs sharing a resctrl control resource
  * @hdr:		common header for different domain types
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 3792ab4819dc..0b8b7b8697a7 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -464,7 +464,7 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_resource *r)
 
 	hdr = resctrl_find_domain(&r->ctrl_domains, id, &add_pos);
 	if (hdr) {
-		if (WARN_ON_ONCE(hdr->type != RESCTRL_CTRL_DOMAIN))
+		if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid))
 			return;
 		d = container_of(hdr, struct rdt_ctrl_domain, hdr);
 
@@ -481,6 +481,7 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_resource *r)
 	d = &hw_dom->d_resctrl;
 	d->hdr.id = id;
 	d->hdr.type = RESCTRL_CTRL_DOMAIN;
+	d->hdr.rid = r->rid;
 	cpumask_set_cpu(cpu, &d->hdr.cpu_mask);
 
 	rdt_domain_reconfigure_cdp(r);
@@ -520,7 +521,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
 
 	hdr = resctrl_find_domain(&r->mon_domains, id, &add_pos);
 	if (hdr) {
-		if (WARN_ON_ONCE(hdr->type != RESCTRL_MON_DOMAIN))
+		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid))
 			return;
 		d = container_of(hdr, struct rdt_mon_domain, hdr);
 
@@ -538,6 +539,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
 	d = &hw_dom->d_resctrl;
 	d->hdr.id = id;
 	d->hdr.type = RESCTRL_MON_DOMAIN;
+	d->hdr.rid = r->rid;
 	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
 	if (!ci) {
 		pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name);
@@ -598,7 +600,7 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_resource *r)
 		return;
 	}
 
-	if (WARN_ON_ONCE(hdr->type != RESCTRL_CTRL_DOMAIN))
+	if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid))
 		return;
 
 	d = container_of(hdr, struct rdt_ctrl_domain, hdr);
@@ -644,7 +646,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
 		return;
 	}
 
-	if (WARN_ON_ONCE(hdr->type != RESCTRL_MON_DOMAIN))
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid))
 		return;
 
 	d = container_of(hdr, struct rdt_mon_domain, hdr);
diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
index b2d178d3556e..905c310de573 100644
--- a/fs/resctrl/ctrlmondata.c
+++ b/fs/resctrl/ctrlmondata.c
@@ -653,7 +653,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
 	 * the resource to find the domain with "domid".
 	 */
 	hdr = resctrl_find_domain(&r->mon_domains, domid, NULL);
-	if (!hdr || WARN_ON_ONCE(hdr->type != RESCTRL_MON_DOMAIN)) {
+	if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) {
 		ret = -ENOENT;
 		goto out;
 	}
-- 
2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 02/32] x86/resctrl: Move L3 initialization into new helper function
Date: Wed, 10 Dec 2025 15:13:41 -0800
Message-ID: <20251210231413.59102-3-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Carve out the resource monitoring domain init code into a separate helper
in order to be able to initialize new types of monitoring domains besides
the usual L3 ones.

Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/core.c | 64 ++++++++++++++++--------------
 1 file changed, 34 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 0b8b7b8697a7..2a568b316711 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -501,37 +501,13 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_resource *r)
 	}
 }
 
-static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
+static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct list_head *add_pos)
 {
-	int id = get_domain_id_from_scope(cpu, r->mon_scope);
-	struct list_head *add_pos = NULL;
 	struct rdt_hw_mon_domain *hw_dom;
-	struct rdt_domain_hdr *hdr;
 	struct rdt_mon_domain *d;
 	struct cacheinfo *ci;
 	int err;
 
-	lockdep_assert_held(&domain_list_lock);
-
-	if (id < 0) {
-		pr_warn_once("Can't find monitor domain id for CPU:%d scope:%d for resource %s\n",
-			     cpu, r->mon_scope, r->name);
-		return;
-	}
-
-	hdr = resctrl_find_domain(&r->mon_domains, id, &add_pos);
-	if (hdr) {
-		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid))
-			return;
-		d = container_of(hdr, struct rdt_mon_domain, hdr);
-
-		cpumask_set_cpu(cpu, &d->hdr.cpu_mask);
-		/* Update the mbm_assign_mode state for the CPU if supported */
-		if (r->mon.mbm_cntr_assignable)
-			resctrl_arch_mbm_cntr_assign_set_one(r);
-		return;
-	}
-
 	hw_dom = kzalloc_node(sizeof(*hw_dom), GFP_KERNEL, cpu_to_node(cpu));
 	if (!hw_dom)
 		return;
@@ -539,7 +515,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
 	d = &hw_dom->d_resctrl;
 	d->hdr.id = id;
 	d->hdr.type = RESCTRL_MON_DOMAIN;
-	d->hdr.rid = r->rid;
+	d->hdr.rid = RDT_RESOURCE_L3;
 	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
 	if (!ci) {
 		pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name);
@@ -549,10 +525,6 @@ static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
 	d->ci_id = ci->id;
 	cpumask_set_cpu(cpu, &d->hdr.cpu_mask);
 
-	/* Update the mbm_assign_mode state for the CPU if supported */
-	if (r->mon.mbm_cntr_assignable)
-		resctrl_arch_mbm_cntr_assign_set_one(r);
-
 	arch_mon_domain_online(r, d);
 
 	if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) {
@@ -570,6 +542,38 @@ static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
 	}
 }
 
+static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
+{
+	int id = get_domain_id_from_scope(cpu, r->mon_scope);
+	struct list_head *add_pos = NULL;
+	struct rdt_domain_hdr *hdr;
+
+	lockdep_assert_held(&domain_list_lock);
+
+	if (id < 0) {
+		pr_warn_once("Can't find monitor domain id for CPU:%d scope:%d for resource %s\n",
+			     cpu, r->mon_scope, r->name);
+		return;
+	}
+
+	hdr = resctrl_find_domain(&r->mon_domains, id, &add_pos);
+	if (hdr)
+		cpumask_set_cpu(cpu, &hdr->cpu_mask);
+
+	switch (r->rid) {
+	case RDT_RESOURCE_L3:
+		/* Update the mbm_assign_mode state for the CPU if supported */
+		if (r->mon.mbm_cntr_assignable)
+			resctrl_arch_mbm_cntr_assign_set_one(r);
+		if (!hdr)
+			l3_mon_domain_setup(cpu, id, r, add_pos);
+		break;
+	default:
+		pr_warn_once("Unknown resource rid=%d\n", r->rid);
+		break;
+	}
+}
+
 static void domain_add_cpu(int cpu, struct rdt_resource *r)
 {
 	if (r->alloc_capable)
-- 
2.51.1
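As a quick illustration of the shape domain_add_cpu_mon() takes after this
refactor, here is a standalone sketch (user-space C, simplified stand-in types,
not code from this series): the generic part only adds the CPU to an existing
domain, and a per-resource switch decides how a missing domain gets created.

#include <stdbool.h>
#include <stdio.h>

enum res_level { RES_L3, RES_OTHER };

struct resource { enum res_level rid; const char *name; };

/* Stand-in for resctrl_find_domain(): pretend no domain exists yet. */
static bool find_domain(struct resource *r, int id)
{
	(void)r;
	(void)id;
	return false;
}

static void l3_mon_domain_setup(int cpu, int id, struct resource *r)
{
	printf("allocate and init L3 monitor domain %d for CPU %d (%s)\n",
	       id, cpu, r->name);
}

static void domain_add_cpu_mon(int cpu, int id, struct resource *r)
{
	bool found = find_domain(r, id);

	/* Generic part: an existing domain only needs the CPU added. */
	if (found)
		printf("CPU %d added to existing domain %d\n", cpu, id);

	/* Resource specific part: only L3 domains are known at this point. */
	switch (r->rid) {
	case RES_L3:
		if (!found)
			l3_mon_domain_setup(cpu, id, r);
		break;
	default:
		fprintf(stderr, "Unknown resource rid=%d\n", r->rid);
		break;
	}
}

int main(void)
{
	struct resource l3 = { RES_L3, "L3" };

	domain_add_cpu_mon(0, 0, &l3);
	return 0;
}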
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 03/32] x86/resctrl: Refactor domain_remove_cpu_mon() ready for new domain types
Date: Wed, 10 Dec 2025 15:13:42 -0800
Message-ID: <20251210231413.59102-4-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

New telemetry events will be associated with a new package scoped resource
with a new domain structure. Refactor domain_remove_cpu_mon() so all the L3
domain processing is separated from the general domain action of clearing
the CPU bit in the mask.

Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/core.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 2a568b316711..49b133e847d4 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -631,9 +631,7 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_resource *r)
 static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
 {
 	int id = get_domain_id_from_scope(cpu, r->mon_scope);
-	struct rdt_hw_mon_domain *hw_dom;
 	struct rdt_domain_hdr *hdr;
-	struct rdt_mon_domain *d;
 
 	lockdep_assert_held(&domain_list_lock);
 
@@ -650,20 +648,29 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
 		return;
 	}
 
-	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid))
+	cpumask_clear_cpu(cpu, &hdr->cpu_mask);
+	if (!cpumask_empty(&hdr->cpu_mask))
 		return;
 
-	d = container_of(hdr, struct rdt_mon_domain, hdr);
-	hw_dom = resctrl_to_arch_mon_dom(d);
+	switch (r->rid) {
+	case RDT_RESOURCE_L3: {
+		struct rdt_hw_mon_domain *hw_dom;
+		struct rdt_mon_domain *d;
 
-	cpumask_clear_cpu(cpu, &d->hdr.cpu_mask);
-	if (cpumask_empty(&d->hdr.cpu_mask)) {
+		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+			return;
+
+		d = container_of(hdr, struct rdt_mon_domain, hdr);
+		hw_dom = resctrl_to_arch_mon_dom(d);
 		resctrl_offline_mon_domain(r, d);
-		list_del_rcu(&d->hdr.list);
+		list_del_rcu(&hdr->list);
 		synchronize_rcu();
 		mon_domain_free(hw_dom);
-
-		return;
+		break;
+	}
+	default:
+		pr_warn_once("Unknown resource rid=%d\n", r->rid);
+		break;
 	}
 }
 
-- 
2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 04/32] x86/resctrl: Clean up domain_remove_cpu_ctrl()
Date: Wed, 10 Dec 2025 15:13:43 -0800
Message-ID: <20251210231413.59102-5-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

For symmetry with domain_remove_cpu_mon() refactor domain_remove_cpu_ctrl()
to take an early return when removing a CPU does not empty the domain.

Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/core.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 49b133e847d4..64ed81cbf8bf 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -604,28 +604,27 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_resource *r)
 		return;
 	}
 
+	cpumask_clear_cpu(cpu, &hdr->cpu_mask);
+	if (!cpumask_empty(&hdr->cpu_mask))
+		return;
+
 	if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid))
 		return;
 
 	d = container_of(hdr, struct rdt_ctrl_domain, hdr);
 	hw_dom = resctrl_to_arch_ctrl_dom(d);
 
-	cpumask_clear_cpu(cpu, &d->hdr.cpu_mask);
-	if (cpumask_empty(&d->hdr.cpu_mask)) {
-		resctrl_offline_ctrl_domain(r, d);
-		list_del_rcu(&d->hdr.list);
-		synchronize_rcu();
-
-		/*
-		 * rdt_ctrl_domain "d" is going to be freed below, so clear
-		 * its pointer from pseudo_lock_region struct.
-		 */
-		if (d->plr)
-			d->plr->d = NULL;
-		ctrl_domain_free(hw_dom);
+	resctrl_offline_ctrl_domain(r, d);
+	list_del_rcu(&hdr->list);
+	synchronize_rcu();
 
-		return;
-	}
+	/*
+	 * rdt_ctrl_domain "d" is going to be freed below, so clear
+	 * its pointer from pseudo_lock_region struct.
+	 */
+	if (d->plr)
+		d->plr->d = NULL;
+	ctrl_domain_free(hw_dom);
 }
 
 static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
-- 
2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 05/32] x86,fs/resctrl: Refactor domain create/remove using struct rdt_domain_hdr
Date: Wed, 10 Dec 2025 15:13:44 -0800
Message-ID: <20251210231413.59102-6-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Up until now, all monitoring events were associated with the L3 resource and
it made sense to use the L3 specific "struct rdt_mon_domain *" argument to
functions operating on domains.

Telemetry events will be tied to a new resource with its instances represented
by a new domain structure that, just like struct rdt_mon_domain, starts with
the generic struct rdt_domain_hdr.

Prepare to support domains belonging to different resources by changing the
calling convention of functions operating on domains. Pass the generic header
and use that to find the domain specific structure where needed.

Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 include/linux/resctrl.h            |  4 +-
 fs/resctrl/internal.h              |  2 +-
 arch/x86/kernel/cpu/resctrl/core.c |  4 +-
 fs/resctrl/ctrlmondata.c           | 14 ++++--
 fs/resctrl/rdtgroup.c              | 69 +++++++++++++++++++++---------
 5 files changed, 63 insertions(+), 30 deletions(-)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index e7c218f8d4f7..5db37c7e89c5 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -507,9 +507,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
 u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
 			    u32 closid, enum resctrl_conf_type type);
 int resctrl_online_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain *d);
-int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d);
+int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr);
 void resctrl_offline_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain *d);
-void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d);
+void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr);
 void resctrl_online_cpu(unsigned int cpu);
 void resctrl_offline_cpu(unsigned int cpu);
 
diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
index bff4a54ae333..5e52269b391e 100644
--- a/fs/resctrl/internal.h
+++ b/fs/resctrl/internal.h
@@ -362,7 +362,7 @@ void mon_event_count(void *info);
 int rdtgroup_mondata_show(struct seq_file *m, void *arg);
 
 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
-		    struct rdt_mon_domain *d, struct rdtgroup *rdtgrp,
+		    struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp,
 		    cpumask_t *cpumask, int evtid, int first);
 
 int resctrl_mon_resource_init(void);
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 64ed81cbf8bf..1fab4c67d273 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -534,7 +534,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct
 
 	list_add_tail_rcu(&d->hdr.list, add_pos);
 
-	err = resctrl_online_mon_domain(r, d);
+	err = resctrl_online_mon_domain(r, &d->hdr);
 	if (err) {
 		list_del_rcu(&d->hdr.list);
 		synchronize_rcu();
@@ -661,7 +661,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
 
 		d = container_of(hdr, struct rdt_mon_domain, hdr);
 		hw_dom = resctrl_to_arch_mon_dom(d);
-		resctrl_offline_mon_domain(r, d);
+		resctrl_offline_mon_domain(r, hdr);
 		list_del_rcu(&hdr->list);
 		synchronize_rcu();
 		mon_domain_free(hw_dom);
diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
index 905c310de573..3154cdc98a31 100644
--- a/fs/resctrl/ctrlmondata.c
+++ b/fs/resctrl/ctrlmondata.c
@@ -551,14 +551,21 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_head *h, int id,
 }
 
 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
-		    struct rdt_mon_domain *d, struct rdtgroup *rdtgrp,
+		    struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp,
 		    cpumask_t *cpumask, int evtid, int first)
 {
+	struct rdt_mon_domain *d = NULL;
 	int cpu;
 
 	/* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
 	lockdep_assert_cpus_held();
 
+	if (hdr) {
+		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+			return;
+		d = container_of(hdr, struct rdt_mon_domain, hdr);
+	}
+
 	/*
 	 * Setup the parameters to pass to mon_event_count() to read the data.
 	 */
@@ -653,12 +660,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
 	 * the resource to find the domain with "domid".
 	 */
 	hdr = resctrl_find_domain(&r->mon_domains, domid, NULL);
-	if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) {
+	if (!hdr) {
 		ret = -ENOENT;
 		goto out;
 	}
-	d = container_of(hdr, struct rdt_mon_domain, hdr);
-	mon_event_read(&rr, r, d, rdtgrp, &d->hdr.cpu_mask, evtid, false);
+	mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false);
 	}
 
 checkresult:
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index 8e39dfda56bc..89ffe54fb0fc 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -3229,17 +3229,22 @@ static void mon_rmdir_one_subdir(struct kernfs_node *pkn, char *name, char *subn
  * when last domain being summed is removed.
  */
 static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
-					   struct rdt_mon_domain *d)
+					   struct rdt_domain_hdr *hdr)
 {
 	struct rdtgroup *prgrp, *crgrp;
+	struct rdt_mon_domain *d;
 	char subname[32];
 	bool snc_mode;
 	char name[32];
 
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		return;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
 	snc_mode = r->mon_scope == RESCTRL_L3_NODE;
-	sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id);
+	sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : hdr->id);
 	if (snc_mode)
-		sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id);
+		sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id);
 
 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
 		mon_rmdir_one_subdir(prgrp->mon.mon_data_kn, name, subname);
@@ -3249,15 +3254,20 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
 	}
 }
 
-static int mon_add_all_files(struct kernfs_node *kn, struct rdt_mon_domain *d,
+static int mon_add_all_files(struct kernfs_node *kn, struct rdt_domain_hdr *hdr,
 			     struct rdt_resource *r, struct rdtgroup *prgrp,
 			     bool do_sum)
 {
 	struct rmid_read rr = {0};
+	struct rdt_mon_domain *d;
 	struct mon_data *priv;
 	struct mon_evt *mevt;
 	int ret, domid;
 
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		return -EINVAL;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
 	for_each_mon_event(mevt) {
 		if (mevt->rid != r->rid || !mevt->enabled)
 			continue;
@@ -3271,23 +3281,28 @@ static int mon_add_all_files(struct kernfs_node *kn, struct rdt_mon_domain *d,
 			return ret;
 
 		if (!do_sum && resctrl_is_mbm_event(mevt->evtid))
-			mon_event_read(&rr, r, d, prgrp, &d->hdr.cpu_mask, mevt->evtid, true);
+			mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true);
 	}
 
 	return 0;
 }
 
 static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
-				struct rdt_mon_domain *d,
+				struct rdt_domain_hdr *hdr,
 				struct rdt_resource *r, struct rdtgroup *prgrp)
 {
 	struct kernfs_node *kn, *ckn;
+	struct rdt_mon_domain *d;
 	char name[32];
 	bool snc_mode;
 	int ret = 0;
 
 	lockdep_assert_held(&rdtgroup_mutex);
 
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		return -EINVAL;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
 	snc_mode = r->mon_scope == RESCTRL_L3_NODE;
 	sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id);
 	kn = kernfs_find_and_get(parent_kn, name);
@@ -3305,13 +3320,13 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
 		ret = rdtgroup_kn_set_ugid(kn);
 		if (ret)
 			goto out_destroy;
-		ret = mon_add_all_files(kn, d, r, prgrp, snc_mode);
+		ret = mon_add_all_files(kn, hdr, r, prgrp, snc_mode);
 		if (ret)
 			goto out_destroy;
 	}
 
 	if (snc_mode) {
-		sprintf(name, "mon_sub_%s_%02d", r->name, d->hdr.id);
+		sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id);
 		ckn = kernfs_create_dir(kn, name, parent_kn->mode, prgrp);
 		if (IS_ERR(ckn)) {
 			ret = -EINVAL;
@@ -3322,7 +3337,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
 		if (ret)
 			goto out_destroy;
 
-		ret = mon_add_all_files(ckn, d, r, prgrp, false);
+		ret = mon_add_all_files(ckn, hdr, r, prgrp, false);
 		if (ret)
 			goto out_destroy;
 	}
@@ -3340,7 +3355,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
  * and "monitor" groups with given domain id.
 */
 static void mkdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
-					   struct rdt_mon_domain *d)
+					   struct rdt_domain_hdr *hdr)
 {
 	struct kernfs_node *parent_kn;
 	struct rdtgroup *prgrp, *crgrp;
@@ -3348,12 +3363,12 @@ static void mkdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
 
 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
 		parent_kn = prgrp->mon.mon_data_kn;
-		mkdir_mondata_subdir(parent_kn, d, r, prgrp);
+		mkdir_mondata_subdir(parent_kn, hdr, r, prgrp);
 
 		head = &prgrp->mon.crdtgrp_list;
 		list_for_each_entry(crgrp, head, mon.crdtgrp_list) {
 			parent_kn = crgrp->mon.mon_data_kn;
-			mkdir_mondata_subdir(parent_kn, d, r, crgrp);
+			mkdir_mondata_subdir(parent_kn, hdr, r, crgrp);
 		}
 	}
 }
@@ -3362,14 +3377,14 @@ static int mkdir_mondata_subdir_alldom(struct kernfs_node *parent_kn,
 				       struct rdt_resource *r,
 				       struct rdtgroup *prgrp)
 {
-	struct rdt_mon_domain *dom;
+	struct rdt_domain_hdr *hdr;
 	int ret;
 
 	/* Walking r->domains, ensure it can't race with cpuhp */
 	lockdep_assert_cpus_held();
 
-	list_for_each_entry(dom, &r->mon_domains, hdr.list) {
-		ret = mkdir_mondata_subdir(parent_kn, dom, r, prgrp);
+	list_for_each_entry(hdr, &r->mon_domains, list) {
+		ret = mkdir_mondata_subdir(parent_kn, hdr, r, prgrp);
 		if (ret)
 			return ret;
 	}
@@ -4253,16 +4268,23 @@ void resctrl_offline_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain
 	mutex_unlock(&rdtgroup_mutex);
 }
 
-void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
+void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr)
 {
+	struct rdt_mon_domain *d;
+
 	mutex_lock(&rdtgroup_mutex);
 
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		goto out_unlock;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
+
 	/*
 	 * If resctrl is mounted, remove all the
 	 * per domain monitor data directories.
 	 */
 	if (resctrl_mounted && resctrl_arch_mon_capable())
-		rmdir_mondata_subdir_allrdtgrp(r, d);
+		rmdir_mondata_subdir_allrdtgrp(r, hdr);
 
 	if (resctrl_is_mbm_enabled())
 		cancel_delayed_work(&d->mbm_over);
@@ -4280,7 +4302,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d
 	}
 
 	domain_destroy_mon_state(d);
-
+out_unlock:
 	mutex_unlock(&rdtgroup_mutex);
 }
 
@@ -4353,12 +4375,17 @@ int resctrl_online_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain *d
 	return err;
 }
 
-int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
+int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr)
 {
-	int err;
+	struct rdt_mon_domain *d;
+	int err = -EINVAL;
 
 	mutex_lock(&rdtgroup_mutex);
 
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		goto out_unlock;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
 	err = domain_setup_mon_state(r, d);
 	if (err)
 		goto out_unlock;
@@ -4379,7 +4406,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
 	 * If resctrl is mounted, add per domain monitor data directories.
 	 */
 	if (resctrl_mounted && resctrl_arch_mon_capable())
-		mkdir_mondata_subdir_allrdtgrp(r, d);
+		mkdir_mondata_subdir_allrdtgrp(r, hdr);
 
 out_unlock:
 	mutex_unlock(&rdtgroup_mutex);
-- 
2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 06/32] fs/resctrl: Split L3 dependent parts out of __mon_event_count()
Date: Wed, 10 Dec 2025 15:13:45 -0800
Message-ID: <20251210231413.59102-7-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Carve out the L3 resource specific event reading code into a separate
helper to support reading event data from a new monitoring resource.

Suggested-by: Reinette Chatre
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 fs/resctrl/monitor.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
index 572a9925bd6c..b5e0db38c8bf 100644
--- a/fs/resctrl/monitor.c
+++ b/fs/resctrl/monitor.c
@@ -413,7 +413,7 @@ static void mbm_cntr_free(struct rdt_mon_domain *d, int cntr_id)
 	memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg));
 }
 
-static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr)
+static int __l3_mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr)
 {
 	int cpu = smp_processor_id();
 	u32 closid = rdtgrp->closid;
@@ -494,6 +494,17 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr)
 	return ret;
 }
 
+static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr)
+{
+	switch (rr->r->rid) {
+	case RDT_RESOURCE_L3:
+		return __l3_mon_event_count(rdtgrp, rr);
+	default:
+		rr->err = -EINVAL;
+		return -EINVAL;
+	}
+}
+
 /*
  * mbm_bw_count() - Update bw count from values previously read by
  * __mon_event_count().
-- 
2.51.1
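The wrapper introduced here follows a common shape: a resource-independent
entry point that routes the request to a resource-specific reader and records
an error for resources it does not know about. A compact standalone sketch
(user-space C, illustrative names, not code from this series):

#include <errno.h>
#include <stdio.h>

enum res_level { RES_L3, RES_UNKNOWN };

struct read_req {
	enum res_level rid;		/* which resource owns the event */
	int err;			/* error reported back to the caller */
	unsigned long long val;
};

/* Stand-in for reading an L3 RMID counter. */
static int l3_event_count(struct read_req *rr)
{
	rr->val = 12345;
	return 0;
}

/* Resource-independent dispatcher, mirroring the new __mon_event_count(). */
static int event_count(struct read_req *rr)
{
	switch (rr->rid) {
	case RES_L3:
		return l3_event_count(rr);
	default:
		rr->err = -EINVAL;
		return -EINVAL;
	}
}

int main(void)
{
	struct read_req rr = { .rid = RES_L3 };

	if (!event_count(&rr))
		printf("val=%llu\n", rr.val);
	return 0;
}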
t=1765408478; x=1796944478; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=i3pJuukre5TrcwUrPSMBDI2HLkMgDtySD77ogVHDiRI=; b=HJyfbWRyjKgYPGsmATR8x0uBNXASL0UOhEOuARujJEkOLSGENxKot6Nc hWX/gpsgaBUQzq+1oFZzmSOEPVH7tNhoOKsap/Nvj8D66joteWwK9QEOi pxhA7dYNiSTF4pymK+xySBVH8dOyB64Z5naGv6PdbNbnwfmL79/t/qeYC w8gA2t01G8qoKVKxyFL9H7bMZmXvw10dc05DGWEMTsohmJ1g7w21HyBec 22y3fz3JekorWK+1Uq1W55lI6RJaFijvEXVJIRdBdHLBKD6nfJkAMso5e qqUbbSxtNcnvnSRVjyFQ11LHZcPk8tG4wdbFZNdus01/6kf0ULCtXfxVC Q==; X-CSE-ConnectionGUID: 6fFjrZzDSGOgOqw49LI9NA== X-CSE-MsgGUID: IbVe6pHJQ1mAyMroMwOg1w== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973529" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973529" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:28 -0800 X-CSE-ConnectionGUID: lbyneMkkQvKzPIzL+TPT8Q== X-CSE-MsgGUID: NFwP6i/YS/KqGiMnYJ6tKQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297052" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:28 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 07/32] x86,fs/resctrl: Use struct rdt_domain_hdr when reading counters Date: Wed, 10 Dec 2025 15:13:46 -0800 Message-ID: <20251210231413.59102-8-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert the whole call sequence from mon_event_read() to resctrl_arch_rmid_= read() to pass resource independent struct rdt_domain_hdr instead of an L3 specific d= omain structure to prepare for monitoring events in other resources. This additional layer of indirection obscures which aspects of event counti= ng depend on a valid domain. Event initialization, support for assignable counters, a= nd normal event counting implicitly depend on a valid domain while summing of domains= does not. Split summing domains from the core event counting handling to make their r= espective dependencies obvious. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 4 +- fs/resctrl/internal.h | 18 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 12 +++- fs/resctrl/ctrlmondata.c | 9 +-- fs/resctrl/monitor.c | 85 ++++++++++++++++++--------- 5 files changed, 78 insertions(+), 50 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 5db37c7e89c5..9b9877fb3238 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -517,7 +517,7 @@ void resctrl_offline_cpu(unsigned int cpu); * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rm= id * for this resource and domain. * @r: resource that the counter should be read from. - * @d: domain that the counter should be read from. + * @hdr: Header of domain that the counter should be read from. * @closid: closid that matches the rmid. 
Depending on the architecture, = the * counter may match traffic of both @closid and @rmid, or @rmid * only. @@ -538,7 +538,7 @@ void resctrl_offline_cpu(unsigned int cpu); * Return: * 0 on success, or -EIO, -EINVAL etc on error. */ -int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *arch_mon_ctx); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 5e52269b391e..9912b774a580 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -106,24 +106,26 @@ struct mon_data { * resource group then its event count is summed with the count from all * its child resource groups. * @r: Resource describing the properties of the event being read. - * @d: Domain that the counter should be read from. If NULL then sum all - * domains in @r sharing L3 @ci.id + * @hdr: Header of domain that the counter should be read from. If NULL = then + * sum all domains in @r sharing L3 @ci.id * @evtid: Which monitor event to read. * @first: Initialize MBM counter when true. - * @ci: Cacheinfo for L3. Only set when @d is NULL. Used when summing d= omains. + * @ci: Cacheinfo for L3. Only set when @hdr is NULL. Used when summing + * domains. * @is_mbm_cntr: true if "mbm_event" counter assignment mode is enabled an= d it * is an MBM event. * @err: Error encountered when reading counter. - * @val: Returned value of event counter. If @rgrp is a parent resource = group, - * @val includes the sum of event counts from its child resource groups. - * If @d is NULL, @val includes the sum of all domains in @r sharing @c= i.id, - * (summed across child resource groups if @rgrp is a parent resource g= roup). + * @val: Returned value of event counter. If @rgrp is a parent resource + * group, @val includes the sum of event counts from its child + * resource groups. If @hdr is NULL, @val includes the sum of all + * domains in @r sharing @ci.id, (summed across child resource groups + * if @rgrp is a parent resource group). * @arch_mon_ctx: Hardware monitor allocated for this read request (MPAM o= nly). 
*/ struct rmid_read { struct rdtgroup *rgrp; struct rdt_resource *r; - struct rdt_mon_domain *d; + struct rdt_domain_hdr *hdr; enum resctrl_event_id evtid; bool first; struct cacheinfo *ci; diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index dffcc8307500..3da970ea1903 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -238,19 +238,25 @@ static u64 get_corrected_val(struct rdt_resource *r, = struct rdt_mon_domain *d, return chunks * hw_res->mon_scale; } =20 -int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *ignored) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); - int cpu =3D cpumask_any(&d->hdr.cpu_mask); + struct rdt_hw_mon_domain *hw_dom; struct arch_mbm_state *am; + struct rdt_mon_domain *d; u64 msr_val; u32 prmid; + int cpu; int ret; =20 resctrl_arch_rmid_read_context_check(); + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; =20 + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + hw_dom =3D resctrl_to_arch_mon_dom(d); + cpu =3D cpumask_any(&hdr->cpu_mask); prmid =3D logical_rmid_to_physical_rmid(cpu, rmid); ret =3D __rmid_read_phys(prmid, eventid, &msr_val); =20 diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 3154cdc98a31..9242a2982e77 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -554,25 +554,18 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first) { - struct rdt_mon_domain *d =3D NULL; int cpu; =20 /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 - if (hdr) { - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return; - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - } - /* * Setup the parameters to pass to mon_event_count() to read the data. 
*/ rr->rgrp =3D rdtgrp; rr->evtid =3D evtid; rr->r =3D r; - rr->d =3D d; + rr->hdr =3D hdr; rr->first =3D first; if (resctrl_arch_mbm_cntr_assign_enabled(r) && resctrl_is_mbm_event(evtid)) { diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index b5e0db38c8bf..e1c12201388f 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -159,7 +159,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force= _free) break; =20 entry =3D __rmid_entry(idx); - if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid, + if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid, QOS_L3_OCCUP_EVENT_ID, &val, arch_mon_ctx)) { rmid_dirty =3D true; @@ -421,11 +421,16 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) struct rdt_mon_domain *d; int cntr_id =3D -ENOENT; struct mbm_state *m; - int err, ret; u64 tval =3D 0; =20 + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)= ) { + rr->err =3D -EIO; + return -EINVAL; + } + d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + if (rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, rr->d, rdtgrp, rr->evtid); + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); if (cntr_id < 0) { rr->err =3D -ENOENT; return -EINVAL; @@ -434,31 +439,50 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) =20 if (rr->first) { if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, rr->d, closid, rmid, cntr_id, rr->evtid); + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evtid); else - resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid); - m =3D get_mbm_state(rr->d, closid, rmid, rr->evtid); + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (m) memset(m, 0, sizeof(struct mbm_state)); return 0; } =20 - if (rr->d) { - /* Reading a single domain, must be on a CPU in that domain. */ - if (!cpumask_test_cpu(cpu, &rr->d->hdr.cpu_mask)) - return -EINVAL; - if (rr->is_mbm_cntr) - rr->err =3D resctrl_arch_cntr_read(rr->r, rr->d, closid, rmid, cntr_id, - rr->evtid, &tval); - else - rr->err =3D resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); - if (rr->err) - return rr->err; + /* Reading a single domain, must be on a CPU in that domain. */ + if (!cpumask_test_cpu(cpu, &d->hdr.cpu_mask)) + return -EINVAL; + if (rr->is_mbm_cntr) + rr->err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, + rr->evtid, &tval); + else + rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, + rr->evtid, &tval, rr->arch_mon_ctx); + if (rr->err) + return rr->err; =20 - rr->val +=3D tval; + rr->val +=3D tval; =20 - return 0; + return 0; +} + +static int __l3_mon_event_count_sum(struct rdtgroup *rdtgrp, struct rmid_r= ead *rr) +{ + int cpu =3D smp_processor_id(); + u32 closid =3D rdtgrp->closid; + u32 rmid =3D rdtgrp->mon.rmid; + struct rdt_mon_domain *d; + u64 tval =3D 0; + int err, ret; + + /* + * Summing across domains is only done for systems that implement + * Sub-NUMA Cluster. There is no overlap with systems that support + * assignable counters. + */ + if (rr->is_mbm_cntr) { + pr_warn_once("Summing domains using assignable counters is not supported= \n"); + rr->err =3D -EINVAL; + return -EINVAL; } =20 /* Summing domains that share a cache, must be on a CPU for that cache. 
*/ @@ -476,12 +500,8 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgr= p, struct rmid_read *rr) list_for_each_entry(d, &rr->r->mon_domains, hdr.list) { if (d->ci_id !=3D rr->ci->id) continue; - if (rr->is_mbm_cntr) - err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, - rr->evtid, &tval); - else - err =3D resctrl_arch_rmid_read(rr->r, d, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, + rr->evtid, &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -498,7 +518,10 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) { switch (rr->r->rid) { case RDT_RESOURCE_L3: - return __l3_mon_event_count(rdtgrp, rr); + if (rr->hdr) + return __l3_mon_event_count(rdtgrp, rr); + else + return __l3_mon_event_count_sum(rdtgrp, rr); default: rr->err =3D -EINVAL; return -EINVAL; @@ -522,9 +545,13 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, stru= ct rmid_read *rr) u64 cur_bw, bytes, cur_bytes; u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; + struct rdt_mon_domain *d; struct mbm_state *m; =20 - m =3D get_mbm_state(rr->d, closid, rmid, rr->evtid); + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (WARN_ON_ONCE(!m)) return; =20 @@ -697,7 +724,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_mon_domain * struct rmid_read rr =3D {0}; =20 rr.r =3D r; - rr.d =3D d; + rr.hdr =3D &d->hdr; rr.evtid =3D evtid; if (resctrl_arch_mbm_cntr_assign_enabled(r)) { rr.is_mbm_cntr =3D true; --=20 2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 08/32] x86,fs/resctrl: Rename struct rdt_mon_domain and rdt_hw_mon_domain
Date: Wed, 10 Dec 2025 15:13:47 -0800
Message-ID: <20251210231413.59102-9-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

The upcoming telemetry event monitoring is not tied to the L3 resource and will have a new domain structure. Rename the L3 resource specific domain data structures to include "l3_" in their names to avoid confusion between the different resource specific domain structures:

 rdt_mon_domain -> rdt_l3_mon_domain
 rdt_hw_mon_domain -> rdt_hw_l3_mon_domain

No functional change.
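As an illustrative aside (not part of this patch): the rename does not change how code reaches the L3-specific structure from the resource-generic domain header, only the type name. A minimal sketch of that pattern, using the domain_header_is_valid() and container_of() calls already visible in these diffs; the wrapper function name below is made up for the example:

	/* Hypothetical helper: resolve a generic domain header to its L3 monitor domain. */
	static struct rdt_l3_mon_domain *l3_mon_dom_from_hdr(struct rdt_domain_hdr *hdr)
	{
		/* Only headers that describe an L3 monitoring domain may be converted. */
		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
			return NULL;

		/* The header is embedded in the larger structure; step back to its container. */
		return container_of(hdr, struct rdt_l3_mon_domain, hdr);
	}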
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 22 ++++---- arch/x86/kernel/cpu/resctrl/internal.h | 16 +++--- fs/resctrl/internal.h | 8 +-- arch/x86/kernel/cpu/resctrl/core.c | 14 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 36 ++++++------- fs/resctrl/ctrlmondata.c | 2 +- fs/resctrl/monitor.c | 70 +++++++++++++------------- fs/resctrl/rdtgroup.c | 40 +++++++-------- 8 files changed, 104 insertions(+), 104 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 9b9877fb3238..79aaaabcdd3f 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -178,7 +178,7 @@ struct mbm_cntr_cfg { }; =20 /** - * struct rdt_mon_domain - group of CPUs sharing a resctrl monitor resource + * struct rdt_l3_mon_domain - group of CPUs sharing RDT_RESOURCE_L3 monito= ring * @hdr: common header for different domain types * @ci_id: cache info id for this domain * @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold @@ -192,7 +192,7 @@ struct mbm_cntr_cfg { * @cntr_cfg: array of assignable counters' configuration (indexed * by counter ID) */ -struct rdt_mon_domain { +struct rdt_l3_mon_domain { struct rdt_domain_hdr hdr; unsigned int ci_id; unsigned long *rmid_busy_llc; @@ -367,10 +367,10 @@ struct resctrl_cpu_defaults { }; =20 struct resctrl_mon_config_info { - struct rdt_resource *r; - struct rdt_mon_domain *d; - u32 evtid; - u32 mon_config; + struct rdt_resource *r; + struct rdt_l3_mon_domain *d; + u32 evtid; + u32 mon_config; }; =20 /** @@ -585,7 +585,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_= head *h, int id, * * This can be called from any CPU. */ -void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 closid, u32 rmid, enum resctrl_event_id eventid); =20 @@ -598,7 +598,7 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, st= ruct rdt_mon_domain *d, * * This can be called from any CPU. */ -void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_do= main *d); +void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon= _domain *d); =20 /** * resctrl_arch_reset_all_ctrls() - Reset the control for each CLOSID to i= ts @@ -624,7 +624,7 @@ void resctrl_arch_reset_all_ctrls(struct rdt_resource *= r); * * This can be called from any CPU. */ -void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domai= n *d, +void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_do= main *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign); =20 @@ -647,7 +647,7 @@ void resctrl_arch_config_cntr(struct rdt_resource *r, s= truct rdt_mon_domain *d, * Return: * 0 on success, or -EIO, -EINVAL etc on error. */ -int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_l3_mon_domai= n *d, u32 closid, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val); =20 @@ -662,7 +662,7 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, stru= ct rdt_mon_domain *d, * * This can be called from any CPU. 
*/ -void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 closid, u32 rmid, int cntr_id, enum resctrl_event_id eventid); =20 diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 4a916c84a322..d73c0adf1026 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -63,17 +63,17 @@ struct rdt_hw_ctrl_domain { }; =20 /** - * struct rdt_hw_mon_domain - Arch private attributes of a set of CPUs tha= t share - * a resource for a monitor function - * @d_resctrl: Properties exposed to the resctrl file system + * struct rdt_hw_l3_mon_domain - Arch private attributes of a set of CPUs = sharing + * RDT_RESOURCE_L3 monitoring + * @d_resctrl: Properties exposed to the resctrl file system * @arch_mbm_states: Per-event pointer to the MBM event's saved state. * An MBM event's state is an array of struct arch_mbm_state * indexed by RMID on x86. * * Members of this structure are accessed via helpers that provide abstrac= tion. */ -struct rdt_hw_mon_domain { - struct rdt_mon_domain d_resctrl; +struct rdt_hw_l3_mon_domain { + struct rdt_l3_mon_domain d_resctrl; struct arch_mbm_state *arch_mbm_states[QOS_NUM_L3_MBM_EVENTS]; }; =20 @@ -82,9 +82,9 @@ static inline struct rdt_hw_ctrl_domain *resctrl_to_arch_= ctrl_dom(struct rdt_ctr return container_of(r, struct rdt_hw_ctrl_domain, d_resctrl); } =20 -static inline struct rdt_hw_mon_domain *resctrl_to_arch_mon_dom(struct rdt= _mon_domain *r) +static inline struct rdt_hw_l3_mon_domain *resctrl_to_arch_mon_dom(struct = rdt_l3_mon_domain *r) { - return container_of(r, struct rdt_hw_mon_domain, d_resctrl); + return container_of(r, struct rdt_hw_l3_mon_domain, d_resctrl); } =20 /** @@ -140,7 +140,7 @@ static inline struct rdt_hw_resource *resctrl_to_arch_r= es(struct rdt_resource *r =20 extern struct rdt_hw_resource rdt_resources_all[]; =20 -void arch_mon_domain_online(struct rdt_resource *r, struct rdt_mon_domain = *d); +void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_doma= in *d); =20 /* CPUID.(EAX=3D10H, ECX=3DResID=3D1).EAX */ union cpuid_0x10_1_eax { diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 9912b774a580..af47b6ddef62 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -369,7 +369,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, =20 int resctrl_mon_resource_init(void); =20 -void mbm_setup_overflow_handler(struct rdt_mon_domain *dom, +void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, int exclude_cpu); =20 @@ -377,14 +377,14 @@ void mbm_handle_overflow(struct work_struct *work); =20 bool is_mba_sc(struct rdt_resource *r); =20 -void cqm_setup_limbo_handler(struct rdt_mon_domain *dom, unsigned long del= ay_ms, +void cqm_setup_limbo_handler(struct rdt_l3_mon_domain *dom, unsigned long = delay_ms, int exclude_cpu); =20 void cqm_handle_limbo(struct work_struct *work); =20 -bool has_busy_rmid(struct rdt_mon_domain *d); +bool has_busy_rmid(struct rdt_l3_mon_domain *d); =20 -void __check_limbo(struct rdt_mon_domain *d, bool force_free); +void __check_limbo(struct rdt_l3_mon_domain *d, bool force_free); =20 void resctrl_file_fflags_init(const char *config, unsigned long fflags); =20 diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 1fab4c67d273..cc1b846f9645 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ 
b/arch/x86/kernel/cpu/resctrl/core.c @@ -368,7 +368,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain = *hw_dom) kfree(hw_dom); } =20 -static void mon_domain_free(struct rdt_hw_mon_domain *hw_dom) +static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) { int idx; =20 @@ -405,7 +405,7 @@ static int domain_setup_ctrlval(struct rdt_resource *r,= struct rdt_ctrl_domain * * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays */ -static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_mon_domain *h= w_dom) +static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain= *hw_dom) { size_t tsize =3D sizeof(*hw_dom->arch_mbm_states[0]); enum resctrl_event_id eventid; @@ -503,8 +503,8 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) =20 static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, s= truct list_head *add_pos) { - struct rdt_hw_mon_domain *hw_dom; - struct rdt_mon_domain *d; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; struct cacheinfo *ci; int err; =20 @@ -653,13 +653,13 @@ static void domain_remove_cpu_mon(int cpu, struct rdt= _resource *r) =20 switch (r->rid) { case RDT_RESOURCE_L3: { - struct rdt_hw_mon_domain *hw_dom; - struct rdt_mon_domain *d; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); hw_dom =3D resctrl_to_arch_mon_dom(d); resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 3da970ea1903..04b8f1e1f314 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -109,7 +109,7 @@ static inline u64 get_corrected_mbm_count(u32 rmid, uns= igned long val) * * In RMID sharing mode there are fewer "logical RMID" values available * to accumulate data ("physical RMIDs" are divided evenly between SNC - * nodes that share an L3 cache). Linux creates an rdt_mon_domain for + * nodes that share an L3 cache). Linux creates an rdt_l3_mon_domain for * each SNC node. * * The value loaded into IA32_PQR_ASSOC is the "logical RMID". @@ -157,7 +157,7 @@ static int __rmid_read_phys(u32 prmid, enum resctrl_eve= nt_id eventid, u64 *val) return 0; } =20 -static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_mon_domain = *hw_dom, +static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_l3_mon_doma= in *hw_dom, u32 rmid, enum resctrl_event_id eventid) { @@ -171,11 +171,11 @@ static struct arch_mbm_state *get_arch_mbm_state(stru= ct rdt_hw_mon_domain *hw_do return state ? &state[rmid] : NULL; } =20 -void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 unused, u32 rmid, enum resctrl_event_id eventid) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); int cpu =3D cpumask_any(&d->hdr.cpu_mask); struct arch_mbm_state *am; u32 prmid; @@ -194,9 +194,9 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, st= ruct rdt_mon_domain *d, * Assumes that hardware counters are also reset and thus that there is * no need to record initial non-zero counts. 
*/ -void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_do= main *d) +void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon= _domain *d) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); enum resctrl_event_id eventid; int idx; =20 @@ -217,10 +217,10 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_m= sr, unsigned int width) return chunks >> shift; } =20 -static u64 get_corrected_val(struct rdt_resource *r, struct rdt_mon_domain= *d, +static u64 get_corrected_val(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 rmid, enum resctrl_event_id eventid, u64 msr_val) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); struct rdt_hw_resource *hw_res =3D resctrl_to_arch_res(r); struct arch_mbm_state *am; u64 chunks; @@ -242,9 +242,9 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *ignored) { - struct rdt_hw_mon_domain *hw_dom; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; struct arch_mbm_state *am; - struct rdt_mon_domain *d; u64 msr_val; u32 prmid; int cpu; @@ -254,7 +254,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); hw_dom =3D resctrl_to_arch_mon_dom(d); cpu =3D cpumask_any(&hdr->cpu_mask); prmid =3D logical_rmid_to_physical_rmid(cpu, rmid); @@ -308,11 +308,11 @@ static int __cntr_id_read(u32 cntr_id, u64 *val) return 0; } =20 -void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); struct arch_mbm_state *am; =20 am =3D get_arch_mbm_state(hw_dom, rmid, eventid); @@ -324,7 +324,7 @@ void resctrl_arch_reset_cntr(struct rdt_resource *r, st= ruct rdt_mon_domain *d, } } =20 -int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_l3_mon_domai= n *d, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val) { @@ -354,7 +354,7 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, stru= ct rdt_mon_domain *d, * must adjust RMID counter numbers based on SNC node. See * logical_rmid_to_physical_rmid() for code that does this. */ -void arch_mon_domain_online(struct rdt_resource *r, struct rdt_mon_domain = *d) +void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_doma= in *d) { if (snc_nodes_per_l3_cache > 1) msr_clear_bit(MSR_RMID_SNC_CONFIG, 0); @@ -516,7 +516,7 @@ static void resctrl_abmc_set_one_amd(void *arg) */ static void _resctrl_abmc_enable(struct rdt_resource *r, bool enable) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 lockdep_assert_cpus_held(); =20 @@ -555,11 +555,11 @@ static void resctrl_abmc_config_one_amd(void *info) /* * Send an IPI to the domain to assign the counter to RMID, event pair. 
*/ -void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domai= n *d, +void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_do= main *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); union l3_qos_abmc_cfg abmc_cfg =3D { 0 }; struct arch_mbm_state *am; =20 diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 9242a2982e77..a3c734fe656e 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -600,9 +600,9 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; enum resctrl_event_id evtid; + struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; - struct rdt_mon_domain *d; struct rdtgroup *rdtgrp; int domid, cpu, ret =3D 0; struct rdt_resource *r; diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index e1c12201388f..9edbe9805d33 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -130,7 +130,7 @@ static void limbo_release_entry(struct rmid_entry *entr= y) * decrement the count. If the busy count gets to zero on an RMID, we * free the RMID */ -void __check_limbo(struct rdt_mon_domain *d, bool force_free) +void __check_limbo(struct rdt_l3_mon_domain *d, bool force_free) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); @@ -188,7 +188,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force= _free) resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx); } =20 -bool has_busy_rmid(struct rdt_mon_domain *d) +bool has_busy_rmid(struct rdt_l3_mon_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); =20 @@ -289,7 +289,7 @@ int alloc_rmid(u32 closid) static void add_rmid_to_limbo(struct rmid_entry *entry) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u32 idx; =20 lockdep_assert_held(&rdtgroup_mutex); @@ -342,7 +342,7 @@ void free_rmid(u32 closid, u32 rmid) list_add_tail(&entry->list, &rmid_free_lru); } =20 -static struct mbm_state *get_mbm_state(struct rdt_mon_domain *d, u32 closi= d, +static struct mbm_state *get_mbm_state(struct rdt_l3_mon_domain *d, u32 cl= osid, u32 rmid, enum resctrl_event_id evtid) { u32 idx =3D resctrl_arch_rmid_idx_encode(closid, rmid); @@ -362,7 +362,7 @@ static struct mbm_state *get_mbm_state(struct rdt_mon_d= omain *d, u32 closid, * Return: * Valid counter ID on success, or -ENOENT on failure. */ -static int mbm_cntr_get(struct rdt_resource *r, struct rdt_mon_domain *d, +static int mbm_cntr_get(struct rdt_resource *r, struct rdt_l3_mon_domain *= d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { int cntr_id; @@ -389,7 +389,7 @@ static int mbm_cntr_get(struct rdt_resource *r, struct = rdt_mon_domain *d, * Return: * Valid counter ID on success, or -ENOSPC on failure. */ -static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_mon_domain *d, +static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_l3_mon_domain= *d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { int cntr_id; @@ -408,7 +408,7 @@ static int mbm_cntr_alloc(struct rdt_resource *r, struc= t rdt_mon_domain *d, /* * mbm_cntr_free() - Clear the counter ID configuration details in the dom= ain @d. 
*/ -static void mbm_cntr_free(struct rdt_mon_domain *d, int cntr_id) +static void mbm_cntr_free(struct rdt_l3_mon_domain *d, int cntr_id) { memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg)); } @@ -418,7 +418,7 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int cntr_id =3D -ENOENT; struct mbm_state *m; u64 tval =3D 0; @@ -427,7 +427,7 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) rr->err =3D -EIO; return -EINVAL; } - d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); =20 if (rr->is_mbm_cntr) { cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); @@ -470,7 +470,7 @@ static int __l3_mon_event_count_sum(struct rdtgroup *rd= tgrp, struct rmid_read *r int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u64 tval =3D 0; int err, ret; =20 @@ -545,12 +545,12 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, str= uct rmid_read *rr) u64 cur_bw, bytes, cur_bytes; u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct mbm_state *m; =20 if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; - d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (WARN_ON_ONCE(!m)) return; @@ -650,7 +650,7 @@ static struct rdt_ctrl_domain *get_ctrl_domain_from_cpu= (int cpu, * throttle MSRs already have low percentage values. To avoid * unnecessarily restricting such rdtgroups, we also increase the bandwidt= h. 
*/ -static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_mon_domain *do= m_mbm) +static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_l3_mon_domain = *dom_mbm) { u32 closid, rmid, cur_msr_val, new_msr_val; struct mbm_state *pmbm_data, *cmbm_data; @@ -718,7 +718,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct= rdt_mon_domain *dom_mbm) resctrl_arch_update_one(r_mba, dom_mba, closid, CDP_NONE, new_msr_val); } =20 -static void mbm_update_one_event(struct rdt_resource *r, struct rdt_mon_do= main *d, +static void mbm_update_one_event(struct rdt_resource *r, struct rdt_l3_mon= _domain *d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { struct rmid_read rr =3D {0}; @@ -750,7 +750,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_mon_domain * resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx); } =20 -static void mbm_update(struct rdt_resource *r, struct rdt_mon_domain *d, +static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d, struct rdtgroup *rdtgrp) { /* @@ -771,12 +771,12 @@ static void mbm_update(struct rdt_resource *r, struct= rdt_mon_domain *d, void cqm_handle_limbo(struct work_struct *work) { unsigned long delay =3D msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 cpus_read_lock(); mutex_lock(&rdtgroup_mutex); =20 - d =3D container_of(work, struct rdt_mon_domain, cqm_limbo.work); + d =3D container_of(work, struct rdt_l3_mon_domain, cqm_limbo.work); =20 __check_limbo(d, false); =20 @@ -799,7 +799,7 @@ void cqm_handle_limbo(struct work_struct *work) * @exclude_cpu: Which CPU the handler should not run on, * RESCTRL_PICK_ANY_CPU to pick any CPU. */ -void cqm_setup_limbo_handler(struct rdt_mon_domain *dom, unsigned long del= ay_ms, +void cqm_setup_limbo_handler(struct rdt_l3_mon_domain *dom, unsigned long = delay_ms, int exclude_cpu) { unsigned long delay =3D msecs_to_jiffies(delay_ms); @@ -816,7 +816,7 @@ void mbm_handle_overflow(struct work_struct *work) { unsigned long delay =3D msecs_to_jiffies(MBM_OVERFLOW_INTERVAL); struct rdtgroup *prgrp, *crgrp; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct list_head *head; struct rdt_resource *r; =20 @@ -831,7 +831,7 @@ void mbm_handle_overflow(struct work_struct *work) goto out_unlock; =20 r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - d =3D container_of(work, struct rdt_mon_domain, mbm_over.work); + d =3D container_of(work, struct rdt_l3_mon_domain, mbm_over.work); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mbm_update(r, d, prgrp); @@ -865,7 +865,7 @@ void mbm_handle_overflow(struct work_struct *work) * @exclude_cpu: Which CPU the handler should not run on, * RESCTRL_PICK_ANY_CPU to pick any CPU. */ -void mbm_setup_overflow_handler(struct rdt_mon_domain *dom, unsigned long = delay_ms, +void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned lo= ng delay_ms, int exclude_cpu) { unsigned long delay =3D msecs_to_jiffies(delay_ms); @@ -1120,7 +1120,7 @@ ssize_t resctrl_mbm_assign_on_mkdir_write(struct kern= fs_open_file *of, char *buf * mbm_cntr_free_all() - Clear all the counter ID configuration details in= the * domain @d. Called when mbm_assign_mode is changed. 
*/ -static void mbm_cntr_free_all(struct rdt_resource *r, struct rdt_mon_domai= n *d) +static void mbm_cntr_free_all(struct rdt_resource *r, struct rdt_l3_mon_do= main *d) { memset(d->cntr_cfg, 0, sizeof(*d->cntr_cfg) * r->mon.num_mbm_cntrs); } @@ -1129,7 +1129,7 @@ static void mbm_cntr_free_all(struct rdt_resource *r,= struct rdt_mon_domain *d) * resctrl_reset_rmid_all() - Reset all non-architecture states for all the * supported RMIDs. */ -static void resctrl_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_= domain *d) +static void resctrl_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_m= on_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); enum resctrl_event_id evt; @@ -1150,7 +1150,7 @@ static void resctrl_reset_rmid_all(struct rdt_resourc= e *r, struct rdt_mon_domain * Assign the counter if @assign is true else unassign the counter. Reset = the * associated non-architectural state. */ -static void rdtgroup_assign_cntr(struct rdt_resource *r, struct rdt_mon_do= main *d, +static void rdtgroup_assign_cntr(struct rdt_resource *r, struct rdt_l3_mon= _domain *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign) { @@ -1170,7 +1170,7 @@ static void rdtgroup_assign_cntr(struct rdt_resource = *r, struct rdt_mon_domain * * Return: * 0 on success, < 0 on failure. */ -static int rdtgroup_alloc_assign_cntr(struct rdt_resource *r, struct rdt_m= on_domain *d, +static int rdtgroup_alloc_assign_cntr(struct rdt_resource *r, struct rdt_l= 3_mon_domain *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int cntr_id; @@ -1205,7 +1205,7 @@ static int rdtgroup_alloc_assign_cntr(struct rdt_reso= urce *r, struct rdt_mon_dom * Return: * 0 on success, < 0 on failure. */ -static int rdtgroup_assign_cntr_event(struct rdt_mon_domain *d, struct rdt= group *rdtgrp, +static int rdtgroup_assign_cntr_event(struct rdt_l3_mon_domain *d, struct = rdtgroup *rdtgrp, struct mon_evt *mevt) { struct rdt_resource *r =3D resctrl_arch_get_resource(mevt->rid); @@ -1255,7 +1255,7 @@ void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp) * rdtgroup_free_unassign_cntr() - Unassign and reset the counter ID confi= guration * for the event pointed to by @mevt within the domain @d and resctrl grou= p @rdtgrp. */ -static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt= _mon_domain *d, +static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt= _l3_mon_domain *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int cntr_id; @@ -1276,7 +1276,7 @@ static void rdtgroup_free_unassign_cntr(struct rdt_re= source *r, struct rdt_mon_d * the event structure @mevt from the domain @d and the group @rdtgrp. Una= ssign * the counters from all the domains if @d is NULL else unassign from @d. 
*/ -static void rdtgroup_unassign_cntr_event(struct rdt_mon_domain *d, struct = rdtgroup *rdtgrp, +static void rdtgroup_unassign_cntr_event(struct rdt_l3_mon_domain *d, stru= ct rdtgroup *rdtgrp, struct mon_evt *mevt) { struct rdt_resource *r =3D resctrl_arch_get_resource(mevt->rid); @@ -1351,7 +1351,7 @@ static int resctrl_parse_mem_transactions(char *tok, = u32 *val) static void rdtgroup_update_cntr_event(struct rdt_resource *r, struct rdtg= roup *rdtgrp, enum resctrl_event_id evtid) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int cntr_id; =20 list_for_each_entry(d, &r->mon_domains, hdr.list) { @@ -1457,7 +1457,7 @@ ssize_t resctrl_mbm_assign_mode_write(struct kernfs_o= pen_file *of, char *buf, size_t nbytes, loff_t off) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int ret =3D 0; bool enable; =20 @@ -1530,7 +1530,7 @@ int resctrl_num_mbm_cntrs_show(struct kernfs_open_fil= e *of, struct seq_file *s, void *v) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; =20 cpus_read_lock(); @@ -1554,7 +1554,7 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_op= en_file *of, struct seq_file *s, void *v) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; u32 cntrs, i; int ret =3D 0; @@ -1595,7 +1595,7 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_op= en_file *of, int mbm_L3_assignments_show(struct kernfs_open_file *of, struct seq_file *= s, void *v) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct rdtgroup *rdtgrp; struct mon_evt *mevt; int ret =3D 0; @@ -1658,7 +1658,7 @@ static struct mon_evt *mbm_get_mon_event_by_name(stru= ct rdt_resource *r, char *n return NULL; } =20 -static int rdtgroup_modify_assign_state(char *assign, struct rdt_mon_domai= n *d, +static int rdtgroup_modify_assign_state(char *assign, struct rdt_l3_mon_do= main *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int ret =3D 0; @@ -1684,7 +1684,7 @@ static int rdtgroup_modify_assign_state(char *assign,= struct rdt_mon_domain *d, static int resctrl_parse_mbm_assignment(struct rdt_resource *r, struct rdt= group *rdtgrp, char *event, char *tok) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; unsigned long dom_id =3D 0; char *dom_str, *id_str; struct mon_evt *mevt; diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 89ffe54fb0fc..2ed435db1923 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -1640,7 +1640,7 @@ static void mondata_config_read(struct resctrl_mon_co= nfig_info *mon_info) static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32= evtid) { struct resctrl_mon_config_info mon_info; - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; =20 cpus_read_lock(); @@ -1688,7 +1688,7 @@ static int mbm_local_bytes_config_show(struct kernfs_= open_file *of, } =20 static void mbm_config_write_domain(struct rdt_resource *r, - struct rdt_mon_domain *d, u32 evtid, u32 val) + struct rdt_l3_mon_domain *d, u32 evtid, u32 val) { struct resctrl_mon_config_info mon_info =3D {0}; =20 @@ -1729,8 +1729,8 @@ static void mbm_config_write_domain(struct rdt_resour= ce *r, static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid) { char *dom_str =3D NULL, *id_str; + struct 
rdt_l3_mon_domain *d; unsigned long dom_id, val; - struct rdt_mon_domain *d; =20 /* Walking r->domains, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); @@ -2781,7 +2781,7 @@ static int rdt_get_tree(struct fs_context *fc) { struct rdt_fs_context *ctx =3D rdt_fc2context(fc); unsigned long flags =3D RFTYPE_CTRL_BASE; - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; struct rdt_resource *r; int ret; =20 @@ -3232,7 +3232,7 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt= _resource *r, struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; char subname[32]; bool snc_mode; char name[32]; @@ -3240,7 +3240,7 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt= _resource *r, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : hdr->id); if (snc_mode) @@ -3258,8 +3258,8 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, struct rdt_resource *r, struct rdtgroup *prgrp, bool do_sum) { + struct rdt_l3_mon_domain *d; struct rmid_read rr =3D {0}; - struct rdt_mon_domain *d; struct mon_data *priv; struct mon_evt *mevt; int ret, domid; @@ -3267,7 +3267,7 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); for_each_mon_event(mevt) { if (mevt->rid !=3D r->rid || !mevt->enabled) continue; @@ -3292,7 +3292,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, struct rdt_resource *r, struct rdtgroup *prgrp) { struct kernfs_node *kn, *ckn; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; char name[32]; bool snc_mode; int ret =3D 0; @@ -3302,7 +3302,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); kn =3D kernfs_find_and_get(parent_kn, name); @@ -4246,7 +4246,7 @@ static void rdtgroup_setup_default(void) mutex_unlock(&rdtgroup_mutex); } =20 -static void domain_destroy_mon_state(struct rdt_mon_domain *d) +static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d) { int idx; =20 @@ -4270,14 +4270,14 @@ void resctrl_offline_ctrl_domain(struct rdt_resourc= e *r, struct rdt_ctrl_domain =20 void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 mutex_lock(&rdtgroup_mutex); =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); =20 /* * If resctrl is mounted, remove all the @@ -4319,7 +4319,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_domain_hdr *h * * Returns 0 for success, or -ENOMEM. 
*/ -static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_mon_d= omain *d) +static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mo= n_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); size_t tsize =3D sizeof(*d->mbm_states[0]); @@ -4377,7 +4377,7 @@ int resctrl_online_ctrl_domain(struct rdt_resource *r= , struct rdt_ctrl_domain *d =20 int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int err =3D -EINVAL; =20 mutex_lock(&rdtgroup_mutex); @@ -4385,7 +4385,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); err =3D domain_setup_mon_state(r, d); if (err) goto out_unlock; @@ -4432,10 +4432,10 @@ static void clear_childcpus(struct rdtgroup *r, uns= igned int cpu) } } =20 -static struct rdt_mon_domain *get_mon_domain_from_cpu(int cpu, - struct rdt_resource *r) +static struct rdt_l3_mon_domain *get_mon_domain_from_cpu(int cpu, + struct rdt_resource *r) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 lockdep_assert_cpus_held(); =20 @@ -4451,7 +4451,7 @@ static struct rdt_mon_domain *get_mon_domain_from_cpu= (int cpu, void resctrl_offline_cpu(unsigned int cpu) { struct rdt_resource *l3 =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct rdtgroup *rdtgrp; =20 mutex_lock(&rdtgroup_mutex); --=20 2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 09/32] x86,fs/resctrl: Rename some L3 specific functions
Date: Wed, 10 Dec 2025 15:13:48 -0800
Message-ID: <20251210231413.59102-10-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

With the arrival of monitor events tied to new domains associated with a different resource it would be clearer if the L3 resource specific functions are more accurately named. Rename three groups of functions:

Functions that allocate/free architecture per-RMID MBM state information:
 arch_domain_mbm_alloc() -> l3_mon_domain_mbm_alloc()
 mon_domain_free() -> l3_mon_domain_free()

Functions that allocate/free filesystem per-RMID MBM state information:
 domain_setup_mon_state() -> domain_setup_l3_mon_state()
 domain_destroy_mon_state() -> domain_destroy_l3_mon_state()

Initialization/exit:
 rdt_get_mon_l3_config() -> rdt_get_l3_mon_config()
 resctrl_mon_resource_init() -> resctrl_l3_mon_resource_init()
 resctrl_mon_resource_exit() -> resctrl_l3_mon_resource_exit()

Ensure kernel-doc descriptions of these functions' return values are present and correctly formatted.
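For example (reproduced here from the core.c hunk below purely for readability), the kernel-doc for the renamed allocation helper now carries an explicit "Return:" section in the standard form:

	/**
	 * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM counters
	 * @num_rmid: The size of the MBM counter array
	 * @hw_dom: The domain that owns the allocated arrays
	 *
	 * Return: 0 for success, or -ENOMEM.
	 */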
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 2 +- fs/resctrl/internal.h | 6 +++--- arch/x86/kernel/cpu/resctrl/core.c | 20 +++++++++++--------- arch/x86/kernel/cpu/resctrl/monitor.c | 2 +- fs/resctrl/monitor.c | 8 ++++---- fs/resctrl/rdtgroup.c | 24 ++++++++++++------------ 6 files changed, 32 insertions(+), 30 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index d73c0adf1026..11d06995810e 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -213,7 +213,7 @@ union l3_qos_abmc_cfg { =20 void rdt_ctrl_update(void *arg); =20 -int rdt_get_mon_l3_config(struct rdt_resource *r); +int rdt_get_l3_mon_config(struct rdt_resource *r); =20 bool rdt_cpu_has(int flag); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index af47b6ddef62..9768341aa21c 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -357,7 +357,9 @@ int alloc_rmid(u32 closid); =20 void free_rmid(u32 closid, u32 rmid); =20 -void resctrl_mon_resource_exit(void); +int resctrl_l3_mon_resource_init(void); + +void resctrl_l3_mon_resource_exit(void); =20 void mon_event_count(void *info); =20 @@ -367,8 +369,6 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first); =20 -int resctrl_mon_resource_init(void); - void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, int exclude_cpu); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index cc1b846f9645..b3a2dc56155d 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -368,7 +368,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain = *hw_dom) kfree(hw_dom); } =20 -static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) +static void l3_mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) { int idx; =20 @@ -401,11 +401,13 @@ static int domain_setup_ctrlval(struct rdt_resource *= r, struct rdt_ctrl_domain * } =20 /** - * arch_domain_mbm_alloc() - Allocate arch private storage for the MBM cou= nters + * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM c= ounters * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays + * + * Return: 0 for success, or -ENOMEM. 
*/ -static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain= *hw_dom) +static int l3_mon_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_doma= in *hw_dom) { size_t tsize =3D sizeof(*hw_dom->arch_mbm_states[0]); enum resctrl_event_id eventid; @@ -519,7 +521,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); return; } d->ci_id =3D ci->id; @@ -527,8 +529,8 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct =20 arch_mon_domain_online(r, d); =20 - if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { - mon_domain_free(hw_dom); + if (l3_mon_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { + l3_mon_domain_free(hw_dom); return; } =20 @@ -538,7 +540,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct if (err) { list_del_rcu(&d->hdr.list); synchronize_rcu(); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); } } =20 @@ -664,7 +666,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); synchronize_rcu(); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); break; } default: @@ -917,7 +919,7 @@ static __init bool get_rdt_mon_resources(void) if (!ret) return false; =20 - return !rdt_get_mon_l3_config(r); + return !rdt_get_l3_mon_config(r); } =20 static __init void __check_quirks_intel(void) diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 04b8f1e1f314..20605212656c 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -424,7 +424,7 @@ static __init int snc_get_config(void) return ret; } =20 -int __init rdt_get_mon_l3_config(struct rdt_resource *r) +int __init rdt_get_l3_mon_config(struct rdt_resource *r) { unsigned int mbm_offset =3D boot_cpu_data.x86_cache_mbm_width_offset; struct rdt_hw_resource *hw_res =3D resctrl_to_arch_res(r); diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 9edbe9805d33..d5ae0ef4c947 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -1780,7 +1780,7 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_f= ile *of, char *buf, } =20 /** - * resctrl_mon_resource_init() - Initialise global monitoring structures. + * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * * Allocate and initialise global monitor resources that do not belong to a * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists. @@ -1789,9 +1789,9 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_f= ile *of, char *buf, * Resctrl's cpuhp callbacks may be called before this point to bring a do= main * online. * - * Returns 0 for success, or -ENOMEM. + * Return: 0 for success, or -ENOMEM. 
*/ -int resctrl_mon_resource_init(void) +int resctrl_l3_mon_resource_init(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); int ret; @@ -1841,7 +1841,7 @@ int resctrl_mon_resource_init(void) return 0; } =20 -void resctrl_mon_resource_exit(void) +void resctrl_l3_mon_resource_exit(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); =20 diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 2ed435db1923..b57e1e78bbc2 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4246,7 +4246,7 @@ static void rdtgroup_setup_default(void) mutex_unlock(&rdtgroup_mutex); } =20 -static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d) +static void domain_destroy_l3_mon_state(struct rdt_l3_mon_domain *d) { int idx; =20 @@ -4301,13 +4301,13 @@ void resctrl_offline_mon_domain(struct rdt_resource= *r, struct rdt_domain_hdr *h cancel_delayed_work(&d->cqm_limbo); } =20 - domain_destroy_mon_state(d); + domain_destroy_l3_mon_state(d); out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 /** - * domain_setup_mon_state() - Initialise domain monitoring structures. + * domain_setup_l3_mon_state() - Initialise domain monitoring structures. * @r: The resource for the newly online domain. * @d: The newly online domain. * @@ -4315,11 +4315,11 @@ void resctrl_offline_mon_domain(struct rdt_resource= *r, struct rdt_domain_hdr *h * Called when the first CPU of a domain comes online, regardless of wheth= er * the filesystem is mounted. * During boot this may be called before global allocations have been made= by - * resctrl_mon_resource_init(). + * resctrl_l3_mon_resource_init(). * - * Returns 0 for success, or -ENOMEM. + * Return: 0 for success, or -ENOMEM. */ -static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mo= n_domain *d) +static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3= _mon_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); size_t tsize =3D sizeof(*d->mbm_states[0]); @@ -4386,7 +4386,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr goto out_unlock; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - err =3D domain_setup_mon_state(r, d); + err =3D domain_setup_l3_mon_state(r, d); if (err) goto out_unlock; =20 @@ -4503,13 +4503,13 @@ int resctrl_init(void) =20 io_alloc_init(); =20 - ret =3D resctrl_mon_resource_init(); + ret =3D resctrl_l3_mon_resource_init(); if (ret) return ret; =20 ret =3D sysfs_create_mount_point(fs_kobj, "resctrl"); if (ret) { - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); return ret; } =20 @@ -4544,7 +4544,7 @@ int resctrl_init(void) =20 cleanup_mountpoint: sysfs_remove_mount_point(fs_kobj, "resctrl"); - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); =20 return ret; } @@ -4580,7 +4580,7 @@ static bool resctrl_online_domains_exist(void) * When called by the architecture code, all CPUs and resctrl domains must= be * offline. This ensures the limbo and overflow handlers are not scheduled= to * run, meaning the data structures they access can be freed by - * resctrl_mon_resource_exit(). + * resctrl_l3_mon_resource_exit(). * * After resctrl_exit() returns, the architecture code should return an * error from all resctrl_arch_ functions that can do this. @@ -4607,5 +4607,5 @@ void resctrl_exit(void) * it can be used to umount resctrl. 
*/ =20 - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); } --=20 2.51.1
From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 10/32] fs/resctrl: Make event details accessible to functions when reading events
Date: Wed, 10 Dec 2025 15:13:49 -0800
Message-ID:
<20251210231413.59102-11-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Reading monitoring event data from MMIO requires more context than the even= t id to be able to read the correct memory location. struct mon_evt is the appro= priate place for this event specific context. Prepare for addition of extra fields to struct mon_evt by changing the call= ing conventions to pass a pointer to the mon_evt structure instead of just the event id. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/internal.h | 10 +++++----- fs/resctrl/ctrlmondata.c | 18 +++++++++--------- fs/resctrl/monitor.c | 22 +++++++++++----------- fs/resctrl/rdtgroup.c | 6 +++--- 4 files changed, 28 insertions(+), 28 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 9768341aa21c..86cf38ab08a7 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -81,7 +81,7 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; * struct mon_data - Monitoring details for each event file. * @list: Member of the global @mon_data_kn_priv_list list. * @rid: Resource id associated with the event file. - * @evtid: Event id associated with the event file. + * @evt: Event structure associated with the event file. * @sum: Set when event must be summed across multiple * domains. * @domid: When @sum is zero this is the domain to which @@ -95,7 +95,7 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; struct mon_data { struct list_head list; enum resctrl_res_level rid; - enum resctrl_event_id evtid; + struct mon_evt *evt; int domid; bool sum; }; @@ -108,7 +108,7 @@ struct mon_data { * @r: Resource describing the properties of the event being read. * @hdr: Header of domain that the counter should be read from. If NULL = then * sum all domains in @r sharing L3 @ci.id - * @evtid: Which monitor event to read. + * @evt: Which monitor event to read. * @first: Initialize MBM counter when true. * @ci: Cacheinfo for L3. Only set when @hdr is NULL. Used when summing * domains. 
@@ -126,7 +126,7 @@ struct rmid_read { struct rdtgroup *rgrp; struct rdt_resource *r; struct rdt_domain_hdr *hdr; - enum resctrl_event_id evtid; + struct mon_evt *evt; bool first; struct cacheinfo *ci; bool is_mbm_cntr; @@ -367,7 +367,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg= ); =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, - cpumask_t *cpumask, int evtid, int first); + cpumask_t *cpumask, struct mon_evt *evt, int first); =20 void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index a3c734fe656e..7f9b2fed117a 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -552,7 +552,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_= head *h, int id, =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, - cpumask_t *cpumask, int evtid, int first) + cpumask_t *cpumask, struct mon_evt *evt, int first) { int cpu; =20 @@ -563,15 +563,15 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, * Setup the parameters to pass to mon_event_count() to read the data. */ rr->rgrp =3D rdtgrp; - rr->evtid =3D evtid; + rr->evt =3D evt; rr->r =3D r; rr->hdr =3D hdr; rr->first =3D first; if (resctrl_arch_mbm_cntr_assign_enabled(r) && - resctrl_is_mbm_event(evtid)) { + resctrl_is_mbm_event(evt->evtid)) { rr->is_mbm_cntr =3D true; } else { - rr->arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, evtid); + rr->arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, evt->evtid); if (IS_ERR(rr->arch_mon_ctx)) { rr->err =3D -EINVAL; return; @@ -592,14 +592,13 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, smp_call_on_cpu(cpu, smp_mon_event_count, rr, false); =20 if (rr->arch_mon_ctx) - resctrl_arch_mon_ctx_free(r, evtid, rr->arch_mon_ctx); + resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } =20 int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; - enum resctrl_event_id evtid; struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; @@ -607,6 +606,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) int domid, cpu, ret =3D 0; struct rdt_resource *r; struct cacheinfo *ci; + struct mon_evt *evt; struct mon_data *md; =20 rdtgrp =3D rdtgroup_kn_lock_live(of->kn); @@ -623,7 +623,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) =20 resid =3D md->rid; domid =3D md->domid; - evtid =3D md->evtid; + evt =3D md->evt; r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { @@ -641,7 +641,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) continue; rr.ci =3D ci; mon_event_read(&rr, r, NULL, rdtgrp, - &ci->shared_cpu_map, evtid, false); + &ci->shared_cpu_map, evt, false); goto checkresult; } } @@ -657,7 +657,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) ret =3D -ENOENT; goto out; } - mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false); + mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evt, false); } =20 checkresult: diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index d5ae0ef4c947..340b847ab397 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -430,7 +430,7 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); =20 if 
(rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evt->evtid); if (cntr_id < 0) { rr->err =3D -ENOENT; return -EINVAL; @@ -439,10 +439,10 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) =20 if (rr->first) { if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evtid); + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evt->evtid= ); else - resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid); - m =3D get_mbm_state(d, closid, rmid, rr->evtid); + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evt->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); if (m) memset(m, 0, sizeof(struct mbm_state)); return 0; @@ -453,10 +453,10 @@ static int __l3_mon_event_count(struct rdtgroup *rdtg= rp, struct rmid_read *rr) return -EINVAL; if (rr->is_mbm_cntr) rr->err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, - rr->evtid, &tval); + rr->evt->evtid, &tval); else rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; =20 @@ -501,7 +501,7 @@ static int __l3_mon_event_count_sum(struct rdtgroup *rd= tgrp, struct rmid_read *r if (d->ci_id !=3D rr->ci->id) continue; err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -551,7 +551,7 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, struc= t rmid_read *rr) if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); - m =3D get_mbm_state(d, closid, rmid, rr->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); if (WARN_ON_ONCE(!m)) return; =20 @@ -725,11 +725,11 @@ static void mbm_update_one_event(struct rdt_resource = *r, struct rdt_l3_mon_domai =20 rr.r =3D r; rr.hdr =3D &d->hdr; - rr.evtid =3D evtid; + rr.evt =3D &mon_event_all[evtid]; if (resctrl_arch_mbm_cntr_assign_enabled(r)) { rr.is_mbm_cntr =3D true; } else { - rr.arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid); + rr.arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(rr.r, evtid); if (IS_ERR(rr.arch_mon_ctx)) { pr_warn_ratelimited("Failed to allocate monitor context: %ld", PTR_ERR(rr.arch_mon_ctx)); @@ -747,7 +747,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_l3_mon_domai mbm_bw_count(rdtgrp, &rr); =20 if (rr.arch_mon_ctx) - resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx); + resctrl_arch_mon_ctx_free(rr.r, evtid, rr.arch_mon_ctx); } =20 static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d, diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index b57e1e78bbc2..771e40f02ba6 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3103,7 +3103,7 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, =20 list_for_each_entry(priv, &mon_data_kn_priv_list, list) { if (priv->rid =3D=3D rid && priv->domid =3D=3D domid && - priv->sum =3D=3D do_sum && priv->evtid =3D=3D mevt->evtid) + priv->sum =3D=3D do_sum && priv->evt =3D=3D mevt) return priv; } =20 @@ -3114,7 +3114,7 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, priv->rid =3D rid; priv->domid =3D domid; priv->sum =3D do_sum; - priv->evtid =3D mevt->evtid; + priv->evt =3D mevt; 
list_add_tail(&priv->list, &mon_data_kn_priv_list); =20 return priv; @@ -3281,7 +3281,7 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, return ret; =20 if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) - mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true); + mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt, true); } =20 return 0; --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A4893302CA7 for ; Wed, 10 Dec 2025 23:14:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408492; cv=none; b=iQDJeGI6n6XCWFoUVHnWh34o96xsDBDiB2TUx1GDfmiDnWH78m3JBQe7VZ3ubPNr40AIDX1Jj5iu/IOrDHO2FcoBwB9kAooicDk/L1EYRYpRCLXQpTHo70q5jMoFNjgr8BWUhTQ8+5wrA9Pl5w01tXughaw+oQoy+DlhRyjsBS0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408492; c=relaxed/simple; bh=yjq5f5v9qQ3ZJcN5e2FKGscdLRNYCiWqUBYdKv+PTGQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dykYAwzd+sY81qCMWbRXwIovIQUCgDA7u03AD/Wl+qRnsOMs9agYsDqqid4MEvEdHjwHOHuixe1gjWDgSWXm82f/pfB+/5X2SUTCDo3xxizn0NqEHlUqZ/4LMvVk2g62P+TQz8doaPcpk2joFwK8z0bxcLLiJUKmpKvNBFsCl2I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=mx2naAkF; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="mx2naAkF" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408488; x=1796944488; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=yjq5f5v9qQ3ZJcN5e2FKGscdLRNYCiWqUBYdKv+PTGQ=; b=mx2naAkFXi7Q3v9/uWj1w3KX+Jj16q+mnfhtVn6zsMckA24eL5Zyw+ib 9qzv13eGHTVOsaiZbEX+6sMrugBIyrKMH3987kNykbjJoUr41yvNS1/de qmH5NBYH1HukiLADdKVZYmDT7aXKH+xO0LeBrSdYGGutEV8nDPjI6KowG PWvzEzCxsGDhMewt7nfzfVNffVv2cPnxw90Hn+BwLjPKBGbD2a0XeiLcf ktz827gnq0jCfvdQqC1DN9B1oZkilKWkUMaU8S+FVWJlQtckAkvj+z9Bk 9wkdbKaACR31mAUPIn4tZosd5LFcOVo10ADPMdaJ1uGQfm+WSfhjpn9yu w==; X-CSE-ConnectionGUID: eWhNSUwGSRaeyE8pHi+pDA== X-CSE-MsgGUID: lDgnuMJaQJ2xR5Kl3/Bzzg== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973561" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973561" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:31 -0800 X-CSE-ConnectionGUID: +VBmE4GUSdKFoxDQQ64J7Q== X-CSE-MsgGUID: +gMPaL0cSr6AW0VjDRP3Dg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297068" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:30 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman 
, James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 11/32] x86,fs/resctrl: Handle events that can be read from any CPU Date: Wed, 10 Dec 2025 15:13:50 -0800 Message-ID: <20251210231413.59102-12-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that monitor events can only be read from a CPU in the cpumask_t set of each domain. This is true for x86 events accessed with an MSR interface, but may not be true for other access methods such as MMIO. Introduce and use flag mon_evt::any_cpu, settable by architecture, that indicates there are no restrictions on which CPU can read that event. This flag is not supported by the L3 event reading that requires to be run on a CPU that belongs to the L3 domain of the event being read. Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- fs/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 6 +++--- fs/resctrl/ctrlmondata.c | 6 ++++++ fs/resctrl/monitor.c | 4 +++- 5 files changed, 15 insertions(+), 5 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 79aaaabcdd3f..22c5d07fe9ff 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -412,7 +412,7 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid); +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 86cf38ab08a7..fb0b6e40d022 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -61,6 +61,7 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * READS_TO_REMOTE_MEM) being tracked by @evtid. * Only valid if @evtid is an MBM event. 
* @configurable: true if the event is configurable + * @any_cpu: true if the event can be read from any CPU * @enabled: true if the event is enabled */ struct mon_evt { @@ -69,6 +70,7 @@ struct mon_evt { char *name; u32 evt_cfg; bool configurable; + bool any_cpu; bool enabled; }; =20 diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index b3a2dc56155d..bd4a98106153 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -902,15 +902,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 7f9b2fed117a..2c69fcd70eeb 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -578,6 +578,11 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, } } =20 + if (evt->any_cpu) { + mon_event_count(rr); + goto out_ctx_free; + } + cpu =3D cpumask_any_housekeeping(cpumask, RESCTRL_PICK_ANY_CPU); =20 /* @@ -591,6 +596,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, else smp_call_on_cpu(cpu, smp_mon_event_count, rr, false); =20 +out_ctx_free: if (rr->arch_mon_ctx) resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 340b847ab397..8c76ac133bca 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -518,6 +518,7 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) { switch (rr->r->rid) { case RDT_RESOURCE_L3: + WARN_ON_ONCE(rr->evt->any_cpu); if (rr->hdr) return __l3_mon_event_count(rdtgrp, rr); else @@ -987,7 +988,7 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { }, }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= )) return; @@ -996,6 +997,7 @@ void resctrl_enable_mon_event(enum resctrl_event_id eve= ntid) return; } =20 + mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].enabled =3D true; } =20 --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A1414302752 for ; Wed, 10 Dec 2025 23:14:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408494; cv=none; b=QsEH35tVyqXLS6lHdiu4k/IhNxLuUUTbhpA/Gem7cu42PjByONbbpBUu31/7+u54dtLoQ+srjvVMmRjVzXEYvWVb52M90XtGZ4w1SDD/P1H0Q8LkCq3nleFyHtSkhh/kEMQBf31b7IjaLUwkvWYLi5XbfdKscIHgrFGDWiy8iOU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408494; c=relaxed/simple; bh=RG5IbjeIgi8phx4lID+dB0DRLoIge3eN62a4wy8w5Pk=; 
From: Tony Luck 
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman ,
 James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu 
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Tony Luck 
Subject: [PATCH v16 12/32] x86,fs/resctrl: Support binary fixed point event
 counters
Date: Wed, 10 Dec 2025 15:13:51 -0800
Message-ID: <20251210231413.59102-13-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

resctrl assumes that all monitor events can be displayed as unsigned
decimal integers.

Hardware architecture counters may provide some telemetry events with
greater precision where the event is not a simple count, but is a
measurement of some sort (e.g. Joules for energy consumed).

Add a new argument to resctrl_enable_mon_event() for architecture code
to inform the file system that the value for a counter is a fixed-point
value with a specific number of binary places.
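As a rough illustration (not part of the patch): with binary_bits = 2 a
raw counter value of 11 encodes 11 / 2^2 = 2.75, which the display code
added below rounds to one decimal place and prints as "2.8". A minimal
user-space sketch of the same conversion, assuming the
ceil(binary_bits * log10(2)) precision rule described below:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustration only: mirrors the fixed-point display scheme of this patch. */
static void show_fixed_point(uint64_t val, unsigned int binary_bits)
{
	int places = binary_bits ? (int)ceil(binary_bits * log10(2.0)) : 1;
	uint64_t frac = 0;

	if (binary_bits) {
		frac = val & ((1ULL << binary_bits) - 1);	/* fractional bits */
		frac *= (uint64_t)pow(10.0, places);		/* scale to decimal */
		frac += 1ULL << (binary_bits - 1);		/* round to nearest */
		frac >>= binary_bits;				/* back to an integer */
	}
	printf("%llu.%0*llu\n", (unsigned long long)(val >> binary_bits),
	       places, (unsigned long long)frac);
}

int main(void)
{
	show_fixed_point(11, 2);	/* 11/4 = 2.75 -> "2.8" */
	show_fixed_point(5, 0);		/* zero binary bits -> "5.0" */
	return 0;
}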
Only allow architecture to use floating point format on events that the file system has marked with mon_evt::is_floating_point which reflects the contract with user space on how the event values are displayed. Display fixed point values with values rounded to ceil(binary_bits * log10(= 2)) decimal places. Special case for zero binary bits to print "{value}.0". Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 3 +- fs/resctrl/internal.h | 8 ++++ arch/x86/kernel/cpu/resctrl/core.c | 6 +-- fs/resctrl/ctrlmondata.c | 74 ++++++++++++++++++++++++++++++ fs/resctrl/monitor.c | 10 +++- 5 files changed, 95 insertions(+), 6 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 22c5d07fe9ff..c43526cdf304 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -412,7 +412,8 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu); +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, + unsigned int binary_bits); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index fb0b6e40d022..14e5a9ed1fbd 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -62,6 +62,9 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * Only valid if @evtid is an MBM event. * @configurable: true if the event is configurable * @any_cpu: true if the event can be read from any CPU + * @is_floating_point: event values are displayed in floating point format + * @binary_bits: number of fixed-point binary bits from architecture, + * only valid if @is_floating_point is true * @enabled: true if the event is enabled */ struct mon_evt { @@ -71,6 +74,8 @@ struct mon_evt { u32 evt_cfg; bool configurable; bool any_cpu; + bool is_floating_point; + unsigned int binary_bits; bool enabled; }; =20 @@ -79,6 +84,9 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; #define for_each_mon_event(mevt) for (mevt =3D &mon_event_all[QOS_FIRST_EV= ENT]; \ mevt < &mon_event_all[QOS_NUM_EVENTS]; mevt++) =20 +/* Limit for mon_evt::binary_bits */ +#define MAX_BINARY_BITS 27 + /** * struct mon_data - Monitoring details for each event file. * @list: Member of the global @mon_data_kn_priv_list list. 
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index bd4a98106153..9222eee7ce07 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -902,15 +902,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 2c69fcd70eeb..f319fd1a6de3 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -17,6 +17,7 @@ =20 #include #include +#include #include #include #include @@ -601,6 +602,77 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } =20 +/* + * Decimal place precision to use for each number of fixed-point + * binary bits computed from ceil(binary_bits * log10(2)) except + * binary_bits =3D=3D 0 which will print "value.0" + */ +static const unsigned int decplaces[MAX_BINARY_BITS + 1] =3D { + [0] =3D 1, + [1] =3D 1, + [2] =3D 1, + [3] =3D 1, + [4] =3D 2, + [5] =3D 2, + [6] =3D 2, + [7] =3D 3, + [8] =3D 3, + [9] =3D 3, + [10] =3D 4, + [11] =3D 4, + [12] =3D 4, + [13] =3D 4, + [14] =3D 5, + [15] =3D 5, + [16] =3D 5, + [17] =3D 6, + [18] =3D 6, + [19] =3D 6, + [20] =3D 7, + [21] =3D 7, + [22] =3D 7, + [23] =3D 7, + [24] =3D 8, + [25] =3D 8, + [26] =3D 8, + [27] =3D 9 +}; + +static void print_event_value(struct seq_file *m, unsigned int binary_bits= , u64 val) +{ + unsigned long long frac =3D 0; + + if (binary_bits) { + /* Mask off the integer part of the fixed-point value. */ + frac =3D val & GENMASK_ULL(binary_bits - 1, 0); + + /* + * Multiply by 10^{desired decimal places}. The integer part of + * the fixed point value is now almost what is needed. + */ + frac *=3D int_pow(10ull, decplaces[binary_bits]); + + /* + * Round to nearest by adding a value that would be a "1" in the + * binary_bits + 1 place. Integer part of fixed point value is + * now the needed value. + */ + frac +=3D 1ull << (binary_bits - 1); + + /* + * Extract the integer part of the value. This is the decimal + * representation of the original fixed-point fractional value. + */ + frac >>=3D binary_bits; + } + + /* + * "frac" is now in the range [0 .. 10^decplaces). I.e. string + * representation will fit into chosen number of decimal places. 
+ */ + seq_printf(m, "%llu.%0*llu\n", val >> binary_bits, decplaces[binary_bits]= , frac); +} + int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; @@ -678,6 +750,8 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) seq_puts(m, "Unavailable\n"); else if (rr.err =3D=3D -ENOENT) seq_puts(m, "Unassigned\n"); + else if (evt->is_floating_point) + print_event_value(m, evt->binary_bits, rr.val); else seq_printf(m, "%llu\n", rr.val); =20 diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 8c76ac133bca..844cf6875f60 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -988,16 +988,22 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { }, }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) { - if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= )) + if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || + binary_bits > MAX_BINARY_BITS)) return; if (mon_event_all[eventid].enabled) { pr_warn("Duplicate enable for event %d\n", eventid); return; } + if (binary_bits && !mon_event_all[eventid].is_floating_point) { + pr_warn("Event %d may not be floating point\n", eventid); + return; + } =20 mon_event_all[eventid].any_cpu =3D any_cpu; + mon_event_all[eventid].binary_bits =3D binary_bits; mon_event_all[eventid].enabled =3D true; } =20 --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 85FBD2F5461 for ; Wed, 10 Dec 2025 23:14:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408494; cv=none; b=Rmgxq2koKAfMFkqYiZYzTtRG76UUfH5P8jiGOnKbJXIzcS6dGFqdUITc2V9SE9yL8hGUCVBd89NvLoIA1s9ltI7fuATY6hyU6IczjiZ9ISo3bgQtCVJIlbzzn4LX9q8mM6RcQdhqCqGBEkUUTWfj+blWo0pWJvRDvq2bDoPdMkY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408494; c=relaxed/simple; bh=vrL7JfY3s49csiMkieoQzxc27A+XOltjUvm7AN7IfJk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jTVDxuPypqvnxlkszDyVZYwiqfd4cqIFRrxipi2uFYxwFvXsFRWaTVopMkWUekceNLEJXv8jGmNHHj8n9VC4oQuPF0FUmuDj8zdH1M83aH9I9SoElO8aYfcLADKpjNcxQ5sIDSgZsWon+Pr6tu+xYvs38a0jIAGxLe31pPpLJpk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=TpCfniad; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="TpCfniad" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408490; x=1796944490; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vrL7JfY3s49csiMkieoQzxc27A+XOltjUvm7AN7IfJk=; b=TpCfniadw4cD9mKUcahJVY4uOVmeLkrJmKFGGCO8S/bX/AE8PpilJlLE 
OBdQrj6oPEn2RsaGlJaLFWriaGuWm8q9FMbTAoEBh3kt+GgjoUvene7kd E3MYgocaJoXwUXXXXNBmKH90PA78YfuCXp0uLfC+mhkCIeDPsfOHPn48c /G6sMHujM6/JvcTpd/rPtVio7kQGdQF0SNIomg0Ynz53xfWOPmwSa5OOQ IntpBEWo60w91EJklNYi1QctKyBw6ExKM3f+0w7g0WJHqxG9654yByHox Mu9hILZ60r9yplf6B6h6YnJPIsW9kpb6MfOFSvn6ohSWK/ZAPKrc4LiSu g==; X-CSE-ConnectionGUID: nsM7FR3ZS12WuOhPFxK1BQ== X-CSE-MsgGUID: sTYFSTbiSmavkXmUfTZ5BQ== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973577" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973577" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:32 -0800 X-CSE-ConnectionGUID: 1cbVZiJOSBqhEOPVUiHlUQ== X-CSE-MsgGUID: TFltrmcySruVY/IzgenFWw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297076" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:32 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 13/32] x86,fs/resctrl: Add an architectural hook called for each mount Date: Wed, 10 Dec 2025 15:13:52 -0800 Message-ID: <20251210231413.59102-14-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Enumeration of Intel telemetry events is an asynchronous process involving several mutually dependent drivers added as auxiliary devices during the device_initcall() phase of Linux boot. The process finishes after the probe functions of these drivers completes. But this happens after resctrl_arch_late_init() is executed. Tracing the enumeration process shows that it does complete a full seven seconds before the earliest possible mount of the resctrl file system (when included in /etc/fstab for automatic mount by systemd). Add a hook at the beginning of the mount code that will be used to check for telemetry events and initialize if any are found. Call the hook on every attempted mount. Expectations are that most actions (like enumeration) will only need to be performed on the first call. resctrl filesystem calls the hook with no locks held. Architecture code is responsible for any required locking. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 6 ++++++ arch/x86/kernel/cpu/resctrl/core.c | 9 +++++++++ fs/resctrl/rdtgroup.c | 2 ++ 3 files changed, 17 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index c43526cdf304..dc148b7feb71 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -514,6 +514,12 @@ void resctrl_offline_mon_domain(struct rdt_resource *r= , struct rdt_domain_hdr *h void resctrl_online_cpu(unsigned int cpu); void resctrl_offline_cpu(unsigned int cpu); =20 +/* + * Architecture hook called at beginning of each file system mount attempt. + * No locks are held. 
+ */ +void resctrl_arch_pre_mount(void); + /** * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rm= id * for this resource and domain. diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 9222eee7ce07..2dd48b59ba9b 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -726,6 +726,15 @@ static int resctrl_arch_offline_cpu(unsigned int cpu) return 0; } =20 +void resctrl_arch_pre_mount(void) +{ + static atomic_t only_once =3D ATOMIC_INIT(0); + int old =3D 0; + + if (!atomic_try_cmpxchg(&only_once, &old, 1)) + return; +} + enum { RDT_FLAG_CMT, RDT_FLAG_MBM_TOTAL, diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 771e40f02ba6..b20d104ea0c9 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2785,6 +2785,8 @@ static int rdt_get_tree(struct fs_context *fc) struct rdt_resource *r; int ret; =20 + resctrl_arch_pre_mount(); + cpus_read_lock(); mutex_lock(&rdtgroup_mutex); /* --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 30EE730504D for ; Wed, 10 Dec 2025 23:14:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408495; cv=none; b=Jr8OrBWkEOhr2oEfnqgmiv35Q5LrgZMbrpKMDfdhX+aex2gs5vpGKEuwqTS31vNSt8GlO6N4mRP4FeB9mTJcIV8RM+6FWDqB3Ko734/ymL/LpJ/MqUn0lc0wdasOTlstWkWLB6cE5CwS16fnGjboynMpVbN7lKLPI10Yz8bbZnA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408495; c=relaxed/simple; bh=Evr1NV6rXbpyXcI6IoQ/0FbzCNTJnINz0x6HvQO2+84=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PxyGwCoMUey8qLRRyp7wleb9euxZN5WW75UbF2ZV6TgoXCMWVjZkyL99FZSdvtaf3l80HN1H2eQQzFjk9RSYcRjx7DjPQtwWaK6PAyTc9GokX5XM2A5UtbHVXelQRBCeKfDayw9CIdPnyoo+neu3QlgFLQLptZq+Y+mS9jfuPiM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Da5UoTkR; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Da5UoTkR" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408491; x=1796944491; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Evr1NV6rXbpyXcI6IoQ/0FbzCNTJnINz0x6HvQO2+84=; b=Da5UoTkRYoQ6c2+GetmkdXO165Lhq6kMrps9sOnOMivxwob5i9gwBZ3Q fwSYKAn7xHRQqKZNvY0aaVoQSGCK+IPxsdsta77YnKghptaqXaJbiNWRu aQzjRqqJvLuSgsQ7+Imjy8oMjyxvPZIE/3q+q5V7QqWzymw0pmJ4Hdb/Q UK4CPVRKKgRlUGmg4v6Zm/j4Z0dNSgO1cIXZIVN+14hDfW5DVnzJQTanN zzOVZBx/rhIzWEXKVAhg8EUsHDyWlfyzDWbuz3jCj0b47XikK599DBntu tMoPPVj8lZPXQ4Tsh57Joc1JIqdJCZ8puOFtKXxFfiRu6glTIxtvFQavm g==; X-CSE-ConnectionGUID: Q3hfxt+iTZ2REc3aBSjl6g== X-CSE-MsgGUID: VptCQ4gNS1mO/64tUXEM0A== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973585" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973585" 
From: Tony Luck 
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman ,
 James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu 
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Tony Luck 
Subject: [PATCH v16 14/32] x86,fs/resctrl: Add and initialize a resource for
 package scope monitoring
Date: Wed, 10 Dec 2025 15:13:53 -0800
Message-ID: <20251210231413.59102-15-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Add a new PERF_PKG resource and introduce package-level scope for
monitoring telemetry events so that CPU hotplug notifiers can build
domains at package granularity.

Use the physical package ID available via topology_physical_package_id()
to identify the monitoring domains with package-level scope. This
enables user space to use:

  /sys/devices/system/cpu/cpuX/topology/physical_package_id

to identify the monitoring domain a CPU is associated with.
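A minimal user-space sketch (illustration only, not part of this patch)
of mapping a CPU to its package-scoped monitoring domain id via the
sysfs file named above:

#include <stdio.h>

/* Return the physical package id of @cpu, or -1 on error. */
static int cpu_package_id(int cpu)
{
	char path[128];
	int id = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &id) != 1)
		id = -1;
	fclose(f);
	return id;
}

int main(void)
{
	printf("CPU0 is in PERF_PKG monitoring domain %d\n", cpu_package_id(0));
	return 0;
}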
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 2 ++ fs/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 10 ++++++++++ fs/resctrl/rdtgroup.c | 2 ++ 4 files changed, 16 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index dc148b7feb71..2a3613f27274 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -53,6 +53,7 @@ enum resctrl_res_level { RDT_RESOURCE_L2, RDT_RESOURCE_MBA, RDT_RESOURCE_SMBA, + RDT_RESOURCE_PERF_PKG, =20 /* Must be the last */ RDT_NUM_RESOURCES, @@ -270,6 +271,7 @@ enum resctrl_scope { RESCTRL_L2_CACHE =3D 2, RESCTRL_L3_CACHE =3D 3, RESCTRL_L3_NODE, + RESCTRL_PACKAGE, }; =20 /** diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 14e5a9ed1fbd..0110d1175398 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -255,6 +255,8 @@ struct rdtgroup { =20 #define RFTYPE_ASSIGN_CONFIG BIT(11) =20 +#define RFTYPE_RES_PERF_PKG BIT(12) + #define RFTYPE_CTRL_INFO (RFTYPE_INFO | RFTYPE_CTRL) =20 #define RFTYPE_MON_INFO (RFTYPE_INFO | RFTYPE_MON) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 2dd48b59ba9b..986b1303efb9 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -100,6 +100,14 @@ struct rdt_hw_resource rdt_resources_all[RDT_NUM_RESOU= RCES] =3D { .schema_fmt =3D RESCTRL_SCHEMA_RANGE, }, }, + [RDT_RESOURCE_PERF_PKG] =3D + { + .r_resctrl =3D { + .name =3D "PERF_PKG", + .mon_scope =3D RESCTRL_PACKAGE, + .mon_domains =3D mon_domain_init(RDT_RESOURCE_PERF_PKG), + }, + }, }; =20 u32 resctrl_arch_system_num_rmid_idx(void) @@ -440,6 +448,8 @@ static int get_domain_id_from_scope(int cpu, enum resct= rl_scope scope) return get_cpu_cacheinfo_id(cpu, scope); case RESCTRL_L3_NODE: return cpu_to_node(cpu); + case RESCTRL_PACKAGE: + return topology_physical_package_id(cpu); default: break; } diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index b20d104ea0c9..4952ba6b8609 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2395,6 +2395,8 @@ static unsigned long fflags_from_resource(struct rdt_= resource *r) case RDT_RESOURCE_MBA: case RDT_RESOURCE_SMBA: return RFTYPE_RES_MB; + case RDT_RESOURCE_PERF_PKG: + return RFTYPE_RES_PERF_PKG; } =20 return WARN_ON_ONCE(1); --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9C733307486 for ; Wed, 10 Dec 2025 23:14:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408498; cv=none; b=CH+XPt9ErtNE2myl4kfm1AsE7rNTbchklYaJTKfMax6YHeLZ2bsgk/5AYKpLErhUbsBdf5Fq95lqPqxk0CpJXRK+DmzoASwZGLXrRe6EuoC9CEltRAh5N6OGySlI5xXmvSXLjOFjeprBvK4wiuHNW00CQAY1gTDN6LtjxtroMxA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408498; c=relaxed/simple; bh=9KtQhAEuNacTIhDtiaux0qfjZKcSMZhDOZ14EYOTjwQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=agNwpC4e4aqj/5efwY9MsHw27NyXWUIEDdhqyhVi0bvKbNwu56eu64xC7TQK/+TQiLqD+OylbPTrg1vcwuZztnACw6OPgi4vcldD83uIebv96n75qekxFuB8+VSkafaooQZ2ALrmQVA6yUokCXZEWy3wlpsYQraMkiEL5rl0j5U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass 
From: Tony Luck 
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman ,
 James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu 
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Tony Luck 
Subject: [PATCH v16 15/32] fs/resctrl: Emphasize that L3 monitoring resource
 is required for summing domains
Date: Wed, 10 Dec 2025 15:13:54 -0800
Message-ID: <20251210231413.59102-16-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

The feature to sum event data across multiple domains supports systems
with Sub-NUMA Cluster (SNC) mode enabled. The top-level monitoring files
in each "mon_L3_XX" directory provide the sum of data across all SNC
nodes sharing an L3 cache instance while the "mon_sub_L3_YY"
sub-directories provide the event data of the individual nodes.

SNC is only associated with the L3 resource and domains and as a result
the flow handling the sum of event data implicitly assumes it is working
with the L3 resource and domains.

Reading telemetry events does not require summing event data, so this
feature can remain dedicated to SNC and keep the implicit assumption of
working with the L3 resource and domains.

Add a WARN where the implicit assumption of working with the L3 resource
is made and add comments on how the structure controlling the event sum
feature is used.
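For illustration (not part of this patch), the layout described above
looks roughly like this on an SNC-2 system, with hypothetical domain ids:

  mon_data/mon_L3_00/llc_occupancy                 <- sum of both SNC nodes
  mon_data/mon_L3_00/mon_sub_L3_00/llc_occupancy   <- SNC node 0 only
  mon_data/mon_L3_00/mon_sub_L3_01/llc_occupancy   <- SNC node 1 only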
Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/internal.h | 4 ++-- fs/resctrl/ctrlmondata.c | 8 +++++++- fs/resctrl/rdtgroup.c | 3 ++- 3 files changed, 11 insertions(+), 4 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 0110d1175398..50d88e91e0da 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -92,8 +92,8 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; * @list: Member of the global @mon_data_kn_priv_list list. * @rid: Resource id associated with the event file. * @evt: Event structure associated with the event file. - * @sum: Set when event must be summed across multiple - * domains. + * @sum: Set for RDT_RESOURCE_L3 when event must be summed + * across multiple domains. * @domid: When @sum is zero this is the domain to which * the event file belongs. When @sum is one this * is the id of the L3 cache that all domains to be diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index f319fd1a6de3..cc4237c57cbe 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -677,7 +677,6 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; - struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; struct rdtgroup *rdtgrp; @@ -705,6 +704,13 @@ int rdtgroup_mondata_show(struct seq_file *m, void *ar= g) r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { + struct rdt_l3_mon_domain *d; + + if (WARN_ON_ONCE(resid !=3D RDT_RESOURCE_L3)) { + ret =3D -EINVAL; + goto out; + } + /* * This file requires summing across all domains that share * the L3 cache id that was provided in the "domid" field of the diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 4952ba6b8609..dce1f0a6d40b 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3095,7 +3095,8 @@ static void rmdir_all_sub(void) * @rid: The resource id for the event file being created. * @domid: The domain id for the event file being created. * @mevt: The type of event file being created. - * @do_sum: Whether SNC summing monitors are being created. + * @do_sum: Whether SNC summing monitors are being created. Only set + * when @rid =3D=3D RDT_RESOURCE_L3. 
*/ static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int do= mid, struct mon_evt *mevt, --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 72C6730214C for ; Wed, 10 Dec 2025 23:14:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408503; cv=none; b=AB0jP5rFjHDgCI5Uuc72lI8Mk/dJ6nE6dJ5+RAfbiHfQB/1TMu9knCFDQ+pJTJNxZiGaiWbY6Ow7zbjjREbHJny+AS+t+5XoqvUt9OZU0i4rDaUHwta08AaPUSahsc8bb90idJDY5z+RzZctey5ZwvK8NfpUau85POMc7uRO86U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408503; c=relaxed/simple; bh=VpFfv/CGm4lI2PiQoSWuxd4/cOGr0Yyi8yn5mpikwM0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=oyNtPLcvqSyWrJJym4kj/sMY4aS9xur4avS6iQvXO/wdpwvcRvllcb9CxqQjDvLigcdoMs0BvQgd/EOk9JW7wVeJZZBM8NQS+xirqhCZh0uePV3V82Lgfwnp8/xN4Ugnq9S/4QFTFjGTiR+ZucKOhXfroCD+KGKxiU/uzy/BPQk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=SWIkfk51; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="SWIkfk51" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408496; x=1796944496; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=VpFfv/CGm4lI2PiQoSWuxd4/cOGr0Yyi8yn5mpikwM0=; b=SWIkfk51XqUMWvxsCF/x3CDERGR9h1QJ4TANsYdKB+fYLO4r1E0wsUNg FbHZ/VD13cA+AC+faMHrZ1Igm7IWapkFrlKNOsXww5ZQG5Ic6Xym6gFJk 3fAAqYUmDX0GFsM7l1TKNJR1wxxEzq9rao999pYF6sphIYZ8EESLmNFqx hBgEfFWVHxJX9jBZPqj2tN++nLf+wMta/L7C3hXNUblacxeu7mH9x39dI NpdLmUn6pecs98TxmZZmQj0Orf+6Ttg2E/UTZCegqpH6CG0qwjcdDpg8R yY7tX58jyIQhYWaVyuxGbatK1TFiHEp1rvRmOiYfUyNjw4gefRoMQwE1v w==; X-CSE-ConnectionGUID: eCXicqYdSNqgst49lO7lEg== X-CSE-MsgGUID: 1DIPUhKpSXOEwAaLK0CP8g== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973603" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973603" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:35 -0800 X-CSE-ConnectionGUID: V817ec7/T6eNN9nj2mokQg== X-CSE-MsgGUID: oIYuADJBTN+31DO0ZxZuaQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297089" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:34 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 16/32] x86/resctrl: Discover hardware telemetry events Date: Wed, 10 Dec 2025 15:13:55 -0800 Message-ID: 
<20251210231413.59102-17-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Each CPU collects data for telemetry events that it sends to the nearest
telemetry event aggregator either when the value of
MSR_IA32_PQR_ASSOC.RMID changes, or when a two millisecond timer expires.

There is a feature type ("energy" or "perf"), guid, and MMIO region
associated with each aggregator. This combination links to an XML
description of the set of telemetry events tracked by the aggregator.
XML files are published by Intel in a GitHub repository [1].

The telemetry event aggregators maintain per-RMID per-event counts of the
total seen for all the CPUs. There may be multiple telemetry event
aggregators per package. There are separate sets of aggregators for each
feature type. Aggregators in a set may have different guids. All
aggregators with the same feature type and guid are symmetric, keeping
counts for the same set of events for the CPUs that provide data to them.

The XML file for each aggregator provides the following information:

0) Feature type of the events ("perf" or "energy")
1) Which telemetry events are tracked by the aggregator.
2) The order in which the event counters appear for each RMID.
3) The value type of each event counter (integer or fixed-point).
4) The number of RMIDs supported.
5) Which additional aggregator status registers are included.
6) The total size of the MMIO region for an aggregator.

Introduce struct event_group that condenses the relevant information from
an XML file. Hereafter an "event group" refers to a group of events of a
particular feature type (event_group::pfname set to "energy" or "perf")
with a particular guid.

Use event_group::pfname to determine the feature id needed to obtain the
aggregator details. It will later be used in console messages and with
the rdt= boot parameter.

The INTEL_PMT_TELEMETRY driver enumerates support for telemetry events.
This driver provides intel_pmt_get_regions_by_feature() to list all
available telemetry event aggregators of a given feature type. The list
includes the "guid", the base address in MMIO space for the region where
the event counters are exposed, and the package id where all the CPUs
that report to this aggregator are located.

Call INTEL_PMT_TELEMETRY's intel_pmt_get_regions_by_feature() for each
event group to obtain a private copy of that event group's aggregator
data. Duplicate the aggregator data between event groups that have the
same feature type but different guid. Further processing on this private
copy will be unique to the event group. Return the aggregator data to
INTEL_PMT_TELEMETRY at resctrl exit time.

resctrl will silently ignore unknown guid values.

Add a new Kconfig option CONFIG_X86_CPU_RESCTRL_INTEL_AET for the
Intel-specific parts of the telemetry code. This depends on the
INTEL_PMT_TELEMETRY and INTEL_TPMI drivers being built-in to the kernel
for enumeration of telemetry features.
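As a purely hypothetical illustration (this patch registers no groups;
later patches in the series add real entries), an event group would be
declared and listed roughly like this, after which intel_aet_get_events()
in the diff below would request its aggregator regions from
INTEL_PMT_TELEMETRY:

/* Hypothetical example entry - not added by this patch. */
static struct event_group example_energy_events = {
	.pfname	= "energy",
};

static struct event_group *known_event_groups[] = {
	&example_energy_events,
};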
Signed-off-by: Tony Luck Link: https://github.com/intel/Intel-PMT # [1] Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 8 ++ arch/x86/kernel/cpu/resctrl/core.c | 5 ++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 109 ++++++++++++++++++++++++ arch/x86/Kconfig | 13 +++ arch/x86/kernel/cpu/resctrl/Makefile | 1 + 5 files changed, 136 insertions(+) create mode 100644 arch/x86/kernel/cpu/resctrl/intel_aet.c diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 11d06995810e..f2e6e3577df0 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -222,4 +222,12 @@ void __init intel_rdt_mbm_apply_quirk(void); void rdt_domain_reconfigure_cdp(struct rdt_resource *r); void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_resource *r); =20 +#ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET +bool intel_aet_get_events(void); +void __exit intel_aet_exit(void); +#else +static inline bool intel_aet_get_events(void) { return false; } +static inline void __exit intel_aet_exit(void) { } +#endif + #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 986b1303efb9..88be77d5d20d 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -743,6 +743,9 @@ void resctrl_arch_pre_mount(void) =20 if (!atomic_try_cmpxchg(&only_once, &old, 1)) return; + + if (!intel_aet_get_events()) + return; } =20 enum { @@ -1104,6 +1107,8 @@ late_initcall(resctrl_arch_late_init); =20 static void __exit resctrl_arch_exit(void) { + intel_aet_exit(); + cpuhp_remove_state(rdt_online); =20 resctrl_exit(); diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c new file mode 100644 index 000000000000..6e0d063cfc80 --- /dev/null +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -0,0 +1,109 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Resource Director Technology(RDT) + * - Intel Application Energy Telemetry + * + * Copyright (C) 2025 Intel Corporation + * + * Author: + * Tony Luck + */ + +#define pr_fmt(fmt) "resctrl: " fmt + +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +/** + * struct event_group - Events with the same feature type ("energy" or "pe= rf") and guid. + * @pfname: PMT feature name ("energy" or "perf") of this event group. + * @pfg: Points to the aggregated telemetry space information + * returned by the intel_pmt_get_regions_by_feature() + * call to the INTEL_PMT_TELEMETRY driver that contains + * data for all telemetry regions of type @pfname. + * Valid if the system supports the event group, + * NULL otherwise. + */ +struct event_group { + /* Data fields for additional structures to manage this group. 
*/ + const char *pfname; + struct pmt_feature_group *pfg; +}; + +static struct event_group *known_event_groups[] =3D { +}; + +#define for_each_event_group(_peg) \ + for (_peg =3D known_event_groups; \ + _peg < &known_event_groups[ARRAY_SIZE(known_event_groups)]; \ + _peg++) + +/* Stub for now */ +static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +{ + return false; +} + +static enum pmt_feature_id lookup_pfid(const char *pfname) +{ + if (!strcmp(pfname, "energy")) + return FEATURE_PER_RMID_ENERGY_TELEM; + else if (!strcmp(pfname, "perf")) + return FEATURE_PER_RMID_PERF_TELEM; + + pr_warn("Unknown PMT feature name '%s'\n", pfname); + + return FEATURE_INVALID; +} + +/* + * Request a copy of struct pmt_feature_group for each event group. If the= re is + * one, the returned structure has an array of telemetry_region structures, + * each element of the array describes one telemetry aggregator. The + * telemetry aggregators may have different guids so obtain duplicate stru= ct + * pmt_feature_group for event groups with same feature type but different + * guid. Post-processing ensures an event group can only use the telemetry + * aggregators that match its guid. An event group keeps a pointer to its + * struct pmt_feature_group to indicate that its events are successfully + * enabled. + */ +bool intel_aet_get_events(void) +{ + struct pmt_feature_group *p; + enum pmt_feature_id pfid; + struct event_group **peg; + bool ret =3D false; + + for_each_event_group(peg) { + pfid =3D lookup_pfid((*peg)->pfname); + p =3D intel_pmt_get_regions_by_feature(pfid); + if (IS_ERR_OR_NULL(p)) + continue; + if (enable_events(*peg, p)) { + (*peg)->pfg =3D p; + ret =3D true; + } else { + intel_pmt_put_feature_group(p); + } + } + + return ret; +} + +void __exit intel_aet_exit(void) +{ + struct event_group **peg; + + for_each_event_group(peg) { + if ((*peg)->pfg) { + intel_pmt_put_feature_group((*peg)->pfg); + (*peg)->pfg =3D NULL; + } + } +} diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 34fb46d5341b..52dda19d584d 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -538,6 +538,19 @@ config X86_CPU_RESCTRL =20 Say N if unsure. =20 +config X86_CPU_RESCTRL_INTEL_AET + bool "Intel Application Energy Telemetry" + depends on X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEMETRY=3Dy &&= INTEL_TPMI=3Dy + help + Enable per-RMID telemetry events in resctrl. + + Intel feature that collects per-RMID execution data + about energy consumption, measure of frequency independent + activity and other performance metrics. Data is aggregated + per package. + + Say N if unsure. 
+ config X86_FRED bool "Flexible Return and Event Delivery" depends on X86_64 diff --git a/arch/x86/kernel/cpu/resctrl/Makefile b/arch/x86/kernel/cpu/res= ctrl/Makefile index d8a04b195da2..273ddfa30836 100644 --- a/arch/x86/kernel/cpu/resctrl/Makefile +++ b/arch/x86/kernel/cpu/resctrl/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_X86_CPU_RESCTRL) +=3D core.o rdtgroup.o monitor.o obj-$(CONFIG_X86_CPU_RESCTRL) +=3D ctrlmondata.o +obj-$(CONFIG_X86_CPU_RESCTRL_INTEL_AET) +=3D intel_aet.o obj-$(CONFIG_RESCTRL_FS_PSEUDO_LOCK) +=3D pseudo_lock.o =20 # To allow define_trace.h's recursive include: --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E87D92D3A7B for ; Wed, 10 Dec 2025 23:14:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408504; cv=none; b=LAxAk7p3EUhPGyf2X/ueatqJLaO8i+p1/eYq2ds4bR50/xa6fnOENaX+/0X2rmqgSqz+VwQgdDGkThqUtgIe26k72g5cZAR+o382JzTkdXTM7i8c0TpG/RhxM/meN+69INe7sYry11wgHLPySS4uLa43f+AEQYP0MG46Zkbf3wE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408504; c=relaxed/simple; bh=CNAvAMYtGhPq/P+ofsRqdqwfVJTklEU6nlpXHTOuTlU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=r8OlbYLxbOzI+8rvWYGdV8vmd8utXFjSvpcGKnn+7KWutKmpheHTmZipWX0aovLAjK0C/fAk5iGCYkKHc3nQpLVt3A28Xm30BpAUGON6zGQmNV9ieDUg4jGpGIdscodBtPPL0PiRbArLXfIDJmyHx50QMan+r/feoW67rVrxix0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=I0nyyMuV; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="I0nyyMuV" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408495; x=1796944495; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CNAvAMYtGhPq/P+ofsRqdqwfVJTklEU6nlpXHTOuTlU=; b=I0nyyMuVII5wFxGfoq/RHpVxqFDd3beyv747uAm7BqMNBI2KAHuI4c7B UlDb93e5P0RMBN2oKgEck47JvIzgadLLXBs2EeaD7CsxyTbHFdALkxP+D LM4Kt8pp6nIlULn0hWUy4YjA9rW7FQ6bqUF9WiyCFFu+Xv6yo9Z0P5w2e BANDPL5vdcXhMtIXn1WeJX1OcHJhXv5TYZphB4AU7k1HmzcktxsNI5Yg5 8B9on9QKveKH8UZ8nPI8dfBnGBpxJS+/nTCg+XvXEffdqnT9VxdEjKxRm eLTiGW+O2OWmy0svGxcNmEoQlOcvxgImRRvOGLFd3vlZi0NhnBopa3X8H w==; X-CSE-ConnectionGUID: h0nR0wdkR9uHD+WKDhUNqQ== X-CSE-MsgGUID: rv/+0POeRsiYDZRId1qphA== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973612" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973612" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:35 -0800 X-CSE-ConnectionGUID: mwBGRi5rSjuYJCKBhZmJZA== X-CSE-MsgGUID: p6d2j5t9QmyqNcSXXdE2tA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297095" Received: from daliomra-mobl3.amr.corp.intel.com (HELO 
agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:35 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 17/32] x86,fs/resctrl: Fill in details of events for guid 0x26696143 and 0x26557651 Date: Wed, 10 Dec 2025 15:13:56 -0800 Message-ID: <20251210231413.59102-18-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The telemetry event aggregators of the Intel Clearwater Forest CPU support two RMID-based feature types: "energy" with guid 0x26696143 [1], and "perf" with guid 0x26557651 [2]. The event counter offsets in an aggregator's MMIO space are arranged in groups for each RMID. E.g the "energy" counters for guid 0x26696143 are arranged like this: MMIO offset:0x0000 Counter for RMID 0 PMT_EVENT_ENERGY MMIO offset:0x0008 Counter for RMID 0 PMT_EVENT_ACTIVITY MMIO offset:0x0010 Counter for RMID 1 PMT_EVENT_ENERGY MMIO offset:0x0018 Counter for RMID 1 PMT_EVENT_ACTIVITY ... MMIO offset:0x23F0 Counter for RMID 575 PMT_EVENT_ENERGY MMIO offset:0x23F8 Counter for RMID 575 PMT_EVENT_ACTIVITY After all counters there are three status registers that provide indications of how many times an aggregator was unable to process event counts, the time stamp for the most recent loss of data, and the time stamp of the most rece= nt successful update. MMIO offset:0x2400 AGG_DATA_LOSS_COUNT MMIO offset:0x2408 AGG_DATA_LOSS_TIMESTAMP MMIO offset:0x2410 LAST_UPDATE_TIMESTAMP Define event_group structures for both of these aggregator types and define= the events tracked by the aggregators in the file system code. PMT_EVENT_ENERGY and PMT_EVENT_ACTIVITY are produced in fixed point format. File system code must output as floating point values. 
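For reference, a worked sketch of the counter layout and scaling described above, using only the numbers quoted from the XML files. The cwf_energy_*() names are hypothetical; the real index computation is added by a later patch in this series.

#include <linux/types.h>

#define CWF_ENERGY_EVENTS_PER_RMID	2	/* ENERGY, ACTIVITY */

/*
 * Byte offset of counter @evt_idx for @rmid in the guid 0x26696143 MMIO
 * region: counters are packed as two u64 values per RMID.
 * E.g. RMID 575, PMT_EVENT_ACTIVITY (index 1):
 *   (575 * 2 + 1) * 8 = 0x23F8, matching the table above.
 * The three status registers follow at 576 * 2 * 8 = 0x2400, for a total
 * MMIO size of (576 * 2 + 3) * 8 = 9240 bytes.
 */
static size_t cwf_energy_counter_offset(u32 rmid, u32 evt_idx)
{
	return (rmid * CWF_ENERGY_EVENTS_PER_RMID + evt_idx) * sizeof(u64);
}

Both counters carry 18 fixed-point fraction bits, so the file system displays raw / 2^18 (a raw reading of 0x40000 is shown as 1.0).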
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-ENER= GY/cwf_aggregator.xml # [1] Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-PERF= /cwf_aggregator.xml # [2] --- include/linux/resctrl_types.h | 11 +++++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 66 +++++++++++++++++++++++++ fs/resctrl/monitor.c | 35 +++++++------ 3 files changed, 97 insertions(+), 15 deletions(-) diff --git a/include/linux/resctrl_types.h b/include/linux/resctrl_types.h index acfe07860b34..a5f56faa18d2 100644 --- a/include/linux/resctrl_types.h +++ b/include/linux/resctrl_types.h @@ -50,6 +50,17 @@ enum resctrl_event_id { QOS_L3_MBM_TOTAL_EVENT_ID =3D 0x02, QOS_L3_MBM_LOCAL_EVENT_ID =3D 0x03, =20 + /* Intel Telemetry Events */ + PMT_EVENT_ENERGY, + PMT_EVENT_ACTIVITY, + PMT_EVENT_STALLS_LLC_HIT, + PMT_EVENT_C1_RES, + PMT_EVENT_UNHALTED_CORE_CYCLES, + PMT_EVENT_STALLS_LLC_MISS, + PMT_EVENT_AUTO_C6_RES, + PMT_EVENT_UNHALTED_REF_CYCLES, + PMT_EVENT_UOPS_RETIRED, + /* Must be the last */ QOS_NUM_EVENTS, }; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 6e0d063cfc80..c7d08eb26395 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -11,15 +11,33 @@ =20 #define pr_fmt(fmt) "resctrl: " fmt =20 +#include #include #include #include #include #include +#include #include +#include =20 #include "internal.h" =20 +/** + * struct pmt_event - Telemetry event. + * @id: Resctrl event id. + * @idx: Counter index within each per-RMID block of counters. + * @bin_bits: Zero for integer valued events, else number bits in fraction + * part of fixed-point. + */ +struct pmt_event { + enum resctrl_event_id id; + unsigned int idx; + unsigned int bin_bits; +}; + +#define EVT(_id, _idx, _bits) { .id =3D _id, .idx =3D _idx, .bin_bits =3D = _bits } + /** * struct event_group - Events with the same feature type ("energy" or "pe= rf") and guid. * @pfname: PMT feature name ("energy" or "perf") of this event group. @@ -29,14 +47,62 @@ * data for all telemetry regions of type @pfname. * Valid if the system supports the event group, * NULL otherwise. + * @guid: Unique number per XML description file. + * @mmio_size: Number of bytes of MMIO registers for this group. + * @num_events: Number of events in this group. + * @evts: Array of event descriptors. */ struct event_group { /* Data fields for additional structures to manage this group. */ const char *pfname; struct pmt_feature_group *pfg; + + /* Remaining fields initialized from XML file. 
*/ + u32 guid; + size_t mmio_size; + unsigned int num_events; + struct pmt_event evts[] __counted_by(num_events); +}; + +#define XML_MMIO_SIZE(num_rmids, num_events, num_extra_status) \ + (((num_rmids) * (num_events) + (num_extra_status)) * sizeof(u64)) + +/* + * Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-= ENERGY/cwf_aggregator.xml + */ +static struct event_group energy_0x26696143 =3D { + .pfname =3D "energy", + .guid =3D 0x26696143, + .mmio_size =3D XML_MMIO_SIZE(576, 2, 3), + .num_events =3D 2, + .evts =3D { + EVT(PMT_EVENT_ENERGY, 0, 18), + EVT(PMT_EVENT_ACTIVITY, 1, 18), + } +}; + +/* + * Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-= PERF/cwf_aggregator.xml + */ +static struct event_group perf_0x26557651 =3D { + .pfname =3D "perf", + .guid =3D 0x26557651, + .mmio_size =3D XML_MMIO_SIZE(576, 7, 3), + .num_events =3D 7, + .evts =3D { + EVT(PMT_EVENT_STALLS_LLC_HIT, 0, 0), + EVT(PMT_EVENT_C1_RES, 1, 0), + EVT(PMT_EVENT_UNHALTED_CORE_CYCLES, 2, 0), + EVT(PMT_EVENT_STALLS_LLC_MISS, 3, 0), + EVT(PMT_EVENT_AUTO_C6_RES, 4, 0), + EVT(PMT_EVENT_UNHALTED_REF_CYCLES, 5, 0), + EVT(PMT_EVENT_UOPS_RETIRED, 6, 0), + } }; =20 static struct event_group *known_event_groups[] =3D { + &energy_0x26696143, + &perf_0x26557651, }; =20 #define for_each_event_group(_peg) \ diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 844cf6875f60..9729acacdc19 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -965,27 +965,32 @@ static void dom_data_exit(struct rdt_resource *r) mutex_unlock(&rdtgroup_mutex); } =20 +#define MON_EVENT(_eventid, _name, _res, _fp) \ + [_eventid] =3D { \ + .name =3D _name, \ + .evtid =3D _eventid, \ + .rid =3D _res, \ + .is_floating_point =3D _fp, \ +} + /* * All available events. Architecture code marks the ones that * are supported by a system using resctrl_enable_mon_event() * to set .enabled. 
*/ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { - [QOS_L3_OCCUP_EVENT_ID] =3D { - .name =3D "llc_occupancy", - .evtid =3D QOS_L3_OCCUP_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, - [QOS_L3_MBM_TOTAL_EVENT_ID] =3D { - .name =3D "mbm_total_bytes", - .evtid =3D QOS_L3_MBM_TOTAL_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, - [QOS_L3_MBM_LOCAL_EVENT_ID] =3D { - .name =3D "mbm_local_bytes", - .evtid =3D QOS_L3_MBM_LOCAL_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, + MON_EVENT(QOS_L3_OCCUP_EVENT_ID, "llc_occupancy", RDT_RESOURCE_L3, false= ), + MON_EVENT(QOS_L3_MBM_TOTAL_EVENT_ID, "mbm_total_bytes", RDT_RESOURCE_L3,= false), + MON_EVENT(QOS_L3_MBM_LOCAL_EVENT_ID, "mbm_local_bytes", RDT_RESOURCE_L3,= false), + MON_EVENT(PMT_EVENT_ENERGY, "core_energy", RDT_RESOURCE_PERF_PKG, true= ), + MON_EVENT(PMT_EVENT_ACTIVITY, "activity", RDT_RESOURCE_PERF_PKG, true), + MON_EVENT(PMT_EVENT_STALLS_LLC_HIT, "stalls_llc_hit", RDT_RESOURCE_PERF_= PKG, false), + MON_EVENT(PMT_EVENT_C1_RES, "c1_res", RDT_RESOURCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_UNHALTED_CORE_CYCLES, "unhalted_core_cycles", RDT_RES= OURCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_STALLS_LLC_MISS, "stalls_llc_miss", RDT_RESOURCE_PER= F_PKG, false), + MON_EVENT(PMT_EVENT_AUTO_C6_RES, "c6_res", RDT_RESOURCE_PERF_PKG, false= ), + MON_EVENT(PMT_EVENT_UNHALTED_REF_CYCLES, "unhalted_ref_cycles", RDT_RESOU= RCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 702752E6CC2 for ; Wed, 10 Dec 2025 23:14:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408502; cv=none; b=GTuYQ1TwLSP+Xqa5zdVWQv6dFVIanPbon99JPtcxhXNHCR5Km5TiX+KaSCLF3uTEaakQOAE3Q2JXN/ZNhw0p6+cQrrz7S1Qjqheg0K7lOyuQANTKUI4CnDNxL/huKTMJEAPxl9gTwD8J+TbjPjzrzUeWvFbbYrnICk4vv1+6yCc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408502; c=relaxed/simple; bh=6p+WPNCiY1S0MCxQm+bzEJw7pRb1iEBKcc5fsj1+R24=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rp085xjyVGmFs7M5VusIYfA2IZBt5/P9hgmkwHBPHRlc/I0H8zu5q2LskGA4JpCWQ0Vmu/DM4AmqGYskVf2E2ZK9DOovsCbgww4Ag2JXaRQZj9j+oumaeRBQnLvxlXnYBqCHaHMzizSYVUdlknXdxMsFlgvz/6KWGrISFK+KNi8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=C6Ja32FG; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="C6Ja32FG" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408496; x=1796944496; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6p+WPNCiY1S0MCxQm+bzEJw7pRb1iEBKcc5fsj1+R24=; 
b=C6Ja32FGuagKur2+qW0uRjaK0Otg4VrMSvnKSfe+pQb3++eJ73ERFMRg 689dwDacM5SUSiroOeh+xNOQYDi30PFpQgS1+bhNlMWQvDKPvMgolmSrm Qq3qinFQ5NrERSBcaO3zKKaNEZvptLXfREIrRKmEsx5la0nQK/Ln4lcCt YLGFbeCcYv8ZLxIruWoAFkxerK/t9lpUSiyXWHfwzmMAwAjUYyaIYAE43 BDlT9Zc8kMuBHBf/eDIda5BSTjsx3IzW+sgAsPcjNo874D6xnyfq2xKHx RJUz5yN6abixy884C4xb297ZGAqIbH6hL/PfnqEroKVHzuzoapQH2tSfi Q==; X-CSE-ConnectionGUID: vbeAi6D4RZ2HjEQ7aU8xTQ== X-CSE-MsgGUID: WiZ8ZDYsT82Q2he6gOBSFA== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973620" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973620" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:36 -0800 X-CSE-ConnectionGUID: 1gnIFnwYT1Wq8KaGfjoiaw== X-CSE-MsgGUID: hlAoZPBnRnKe4DnwdUv9cQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297101" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:35 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 18/32] x86,fs/resctrl: Add architectural event pointer Date: Wed, 10 Dec 2025 15:13:57 -0800 Message-ID: <20251210231413.59102-19-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The resctrl file system layer passes the domain, RMID, and event id to the architecture to fetch an event counter. Fetching a telemetry event counter requires additional information that is private to the architecture, for example, the offset into MMIO space from where the counter should be read. Add mon_evt::arch_priv that architecture can use for any private data relat= ed to the event. resctrl filesystem initializes mon_evt::arch_priv when the architecture enables the event and passes it back to architecture when needing to fetch an event counter. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 7 +++++-- fs/resctrl/internal.h | 4 ++++ arch/x86/kernel/cpu/resctrl/core.c | 6 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 2 +- fs/resctrl/monitor.c | 14 ++++++++++---- 5 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 2a3613f27274..b30f99335bbe 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -415,7 +415,7 @@ u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, - unsigned int binary_bits); + unsigned int binary_bits, void *arch_priv); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 @@ -532,6 +532,9 @@ void resctrl_arch_pre_mount(void); * only. * @rmid: rmid of the counter to read. * @eventid: eventid to read, e.g. L3 occupancy. + * @arch_priv: Architecture private data for this event. 
+ * The @arch_priv provided by the architecture via + * resctrl_enable_mon_event(). * @val: result of the counter read in bytes. * @arch_mon_ctx: An architecture specific value from * resctrl_arch_mon_ctx_alloc(), for MPAM this identifies @@ -549,7 +552,7 @@ void resctrl_arch_pre_mount(void); */ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, enum resctrl_event_id eventid, - u64 *val, void *arch_mon_ctx); + void *arch_priv, u64 *val, void *arch_mon_ctx); =20 /** * resctrl_arch_rmid_read_context_check() - warn about invalid contexts diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 50d88e91e0da..399f625be67d 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -66,6 +66,9 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * @binary_bits: number of fixed-point binary bits from architecture, * only valid if @is_floating_point is true * @enabled: true if the event is enabled + * @arch_priv: Architecture private data for this event. + * The @arch_priv provided by the architecture via + * resctrl_enable_mon_event(). */ struct mon_evt { enum resctrl_event_id evtid; @@ -77,6 +80,7 @@ struct mon_evt { bool is_floating_point; unsigned int binary_bits; bool enabled; + void *arch_priv; }; =20 extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 88be77d5d20d..3c6946a5ff1b 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -924,15 +924,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 20605212656c..6929614ba6e6 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -240,7 +240,7 @@ static u64 get_corrected_val(struct rdt_resource *r, st= ruct rdt_l3_mon_domain *d =20 int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, - u64 *val, void *ignored) + void *arch_priv, u64 *val, void *ignored) { struct rdt_hw_l3_mon_domain *hw_dom; struct rdt_l3_mon_domain *d; diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 9729acacdc19..af43a33ce4cb 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -137,9 +137,11 @@ void __check_limbo(struct rdt_l3_mon_domain *d, bool f= orce_free) struct rmid_entry *entry; u32 idx, cur_idx =3D 1; void *arch_mon_ctx; + void *arch_priv; bool rmid_dirty; u64 val =3D 0; =20 + arch_priv =3D mon_event_all[QOS_L3_OCCUP_EVENT_ID].arch_priv; arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID); if (IS_ERR(arch_mon_ctx)) { pr_warn_ratelimited("Failed to allocate monitor context: %ld", @@ -160,7 +162,7 @@ void __check_limbo(struct rdt_l3_mon_domain *d, bool 
fo= rce_free) =20 entry =3D __rmid_entry(idx); if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid, - QOS_L3_OCCUP_EVENT_ID, &val, + QOS_L3_OCCUP_EVENT_ID, arch_priv, &val, arch_mon_ctx)) { rmid_dirty =3D true; } else { @@ -456,7 +458,8 @@ static int __l3_mon_event_count(struct rdtgroup *rdtgrp= , struct rmid_read *rr) rr->evt->evtid, &tval); else rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, - rr->evt->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; =20 @@ -501,7 +504,8 @@ static int __l3_mon_event_count_sum(struct rdtgroup *rd= tgrp, struct rmid_read *r if (d->ci_id !=3D rr->ci->id) continue; err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, - rr->evt->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -993,7 +997,8 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, + unsigned int binary_bits, void *arch_priv) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || binary_bits > MAX_BINARY_BITS)) @@ -1009,6 +1014,7 @@ void resctrl_enable_mon_event(enum resctrl_event_id e= ventid, bool any_cpu, unsig =20 mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].binary_bits =3D binary_bits; + mon_event_all[eventid].arch_priv =3D arch_priv; mon_event_all[eventid].enabled =3D true; } =20 --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E6E083093C7 for ; Wed, 10 Dec 2025 23:14:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408506; cv=none; b=AuUiCL/FlXRLq+Max3jipSTZ14WA4/V1LtVU2qH3lotu+/TP+WMhrJ7g05N3b56yxfoVLmdjUrTi9eaeMzjpfaoGrKtWpvZD7oIq+sbv0Qjn1XiuDYT7YQd9BEEGH9Ob/bbPWDk8OaMXDTUECo8xdsJeeuF/dvqMZWL8fimrYE4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408506; c=relaxed/simple; bh=J1FE4YF9acDbYyCHOfHQBiCqA41cHrThw2mcW1RE+c4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LR1DE07cdT9D1r8hqZWjdD+bPmnrviJAzX3vVR3dbCpIejwn3pxysZGd6GJbJrD/vD5leAmxHL63NrkILbg5w3z9xKdVRcws4tpRxW4Rpa7hLBOIUvVfBqnJGc9tHmyxcskuaoP8+ZS4WZlYgGM7Ritc3h+J2VyEAV4irPZjLQ8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Y/+0IE7y; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Y/+0IE7y" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408499; x=1796944499; 
h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=J1FE4YF9acDbYyCHOfHQBiCqA41cHrThw2mcW1RE+c4=; b=Y/+0IE7y40dGlNPQs9Z79qLlrOT2VK6wxNU9LpXmQkEvEkNbaiV4IXgy 4vm2EERYZg46IWt8u1M8hhYlyFGoaKnRUo9UIb2XYpWsoTJRL6h8bUi5L dlOASSCmxhL84d6mXwhIHZfugys+/uH3hUU04ORrXMt5QQteytc0Jnd8m RkaX11w/vkD6qlBcWrEsEsv0O9bCt+3kdKP+MVqTg1TXh81uyKmx+Hw1E ege3QgdB/0ZQ1q074f/ObnFGjXrbvwjBceeDxOYTlwKlrAIdwM+BdX3zd 372uutJ2ZaM+TM2Apv15x5gJ6Q9KQbHF+34j3qui1xyFRaUkuJUmR9Juw A==; X-CSE-ConnectionGUID: gIysb2uESTmuYJlP8cdzOg== X-CSE-MsgGUID: Thc11cgWQFWp8uTOx28NHg== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973628" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973628" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:37 -0800 X-CSE-ConnectionGUID: r0Vm+GMeSlSWastiHMUWog== X-CSE-MsgGUID: URJn3wpaSs6/VWZlxuSIhQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297104" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:36 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 19/32] x86/resctrl: Find and enable usable telemetry events Date: Wed, 10 Dec 2025 15:13:58 -0800 Message-ID: <20251210231413.59102-20-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Every event group has a private copy of the data of all telemetry event aggregators (aka "telemetry regions") tracking its feature type. Included may be regions that have the same feature type but tracking different guid from the event group's. Traverse the event group's telemetry region data and mark all regions that are not usable by the event group as unusable by clearing those regions' MMIO addresses. A region is considered unusable if: 1) guid does not match the guid of the event group. 2) Package ID is invalid. 3) The enumerated size of the MMIO region does not match the expected value from the XML description file. Hereafter any telemetry region with an MMIO address is considered valid for the event group it is associated with. Enable all the event group's events as long as there is at least one usable region from where data for its events can be read. Enabling of events can fail. Each event group is independent of other event groups. So even if no events can be enabled from one event group, keep running to enable other event groups. Note that it is architecturally possible that some telemetry events are only supported by a subset of the packages in the system. It is not expected that systems will ever do this. If they do the user will see event files in resctrl that always return "Unavailable". 
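Stated as code, the usability rules above amount to the following check. This is a sketch only: example_region_usable() is a hypothetical name, and the real test is skip_telem_region() in the hunk below.

/*
 * A region is usable by an event group only if all three conditions hold.
 * For guid 0x26696143 the expected MMIO size is
 * (576 RMIDs * 2 events + 3 status registers) * 8 bytes = 9240 bytes.
 */
static bool example_region_usable(const struct telemetry_region *tr,
				  const struct event_group *e)
{
	return tr->guid == e->guid &&
	       tr->plat_info.package_id < topology_max_packages() &&
	       tr->size == e->mmio_size;
}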
Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- arch/x86/kernel/cpu/resctrl/intel_aet.c | 67 ++++++++++++++++++++++++- fs/resctrl/monitor.c | 10 ++-- 3 files changed, 72 insertions(+), 7 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index b30f99335bbe..14126d228e61 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -414,7 +414,7 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, +bool resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, unsigned int binary_bits, void *arch_priv); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index c7d08eb26395..611c6b1fc08d 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -16,9 +16,11 @@ #include #include #include +#include #include #include #include +#include #include =20 #include "internal.h" @@ -110,12 +112,73 @@ static struct event_group *known_event_groups[] =3D { _peg < &known_event_groups[ARRAY_SIZE(known_event_groups)]; \ _peg++) =20 -/* Stub for now */ -static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +/* + * Clear the address field of regions that did not pass the checks in + * skip_telem_region() so they will not be used by intel_aet_read_event(). + * This is safe to do because intel_pmt_get_regions_by_feature() allocates + * a new pmt_feature_group structure to return to each caller and only mak= es + * use of the pmt_feature_group::kref field when intel_pmt_put_feature_gro= up() + * returns the structure. + */ +static void mark_telem_region_unusable(struct telemetry_region *tr) { + tr->addr =3D NULL; +} + +static bool skip_telem_region(struct telemetry_region *tr, struct event_gr= oup *e) +{ + if (tr->guid !=3D e->guid) + return true; + if (tr->plat_info.package_id >=3D topology_max_packages()) { + pr_warn("Bad package %u in guid 0x%x\n", tr->plat_info.package_id, + tr->guid); + return true; + } + if (tr->size !=3D e->mmio_size) { + pr_warn("MMIO space wrong size (%zu bytes) for guid 0x%x. 
Expected %zu b= ytes.\n", + tr->size, e->guid, e->mmio_size); + return true; + } + return false; } =20 +static bool group_has_usable_regions(struct event_group *e, struct pmt_fea= ture_group *p) +{ + bool usable_regions =3D false; + + for (int i =3D 0; i < p->count; i++) { + if (skip_telem_region(&p->regions[i], e)) { + mark_telem_region_unusable(&p->regions[i]); + continue; + } + usable_regions =3D true; + } + + return usable_regions; +} + +static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +{ + struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; + int skipped_events =3D 0; + + if (!group_has_usable_regions(e, p)) + return false; + + for (int j =3D 0; j < e->num_events; j++) { + if (!resctrl_enable_mon_event(e->evts[j].id, true, + e->evts[j].bin_bits, &e->evts[j])) + skipped_events++; + } + if (e->num_events =3D=3D skipped_events) { + pr_info("No events enabled in %s %s:0x%x\n", r->name, e->pfname, e->guid= ); + return false; + } + + return true; +} + static enum pmt_feature_id lookup_pfid(const char *pfname) { if (!strcmp(pfname, "energy")) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index af43a33ce4cb..9af08b673e39 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -997,25 +997,27 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, +bool resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, unsigned int binary_bits, void *arch_priv) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || binary_bits > MAX_BINARY_BITS)) - return; + return false; if (mon_event_all[eventid].enabled) { pr_warn("Duplicate enable for event %d\n", eventid); - return; + return false; } if (binary_bits && !mon_event_all[eventid].is_floating_point) { pr_warn("Event %d may not be floating point\n", eventid); - return; + return false; } =20 mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].binary_bits =3D binary_bits; mon_event_all[eventid].arch_priv =3D arch_priv; mon_event_all[eventid].enabled =3D true; + + return true; } =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid) --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 68A8C3019D8 for ; Wed, 10 Dec 2025 23:15:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408512; cv=none; b=OSjDPilCFPVPTDZCyiC6QZTQg9/7uPHYYvCpHpDyh5c63LivLPJQnMlaVCNgQCCxyV0ecJBF06U+0446i8svKso5iu9ULvxio+Hj8sDKSMncRxy1ACtc6vQroRXreHRXEfWS7lPEgUD07rLxDgbCdIwizLxMhjQjiDLez9AbviE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408512; c=relaxed/simple; bh=O6qWvBa+YL0hQ3KuiKirbTj4qE8Sdyb3EdGGzT6U7ss=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NC/IZf1HW/CFOFY0IPJmxArASkRz03F/jBL7ELRuGKDP4XvSZud62SBXCQGk3untyssZ0cHJ1aeLA0TlrHRnzF6hnImoVQb+J4F6Yyuv61IOAGRNzLRiowOm+l9esA9C3+0eiolddfaVgQD4eYi847jsd/ICfXRQ9+3XzrbxS+o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass 
smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=EgQ4S3kl; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="EgQ4S3kl" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408505; x=1796944505; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=O6qWvBa+YL0hQ3KuiKirbTj4qE8Sdyb3EdGGzT6U7ss=; b=EgQ4S3kl4Rf4RghNwes8IVCbJ955gXCDBeb1ZH6uvSobV4TM/UGF6qv1 s23+OYdHEURTHvxdif94VPXImUbIubnCUuAzKNjek7cMnb1fRXfPkobfZ 1xi7q93UxURlXjFQ/nWwe0Wg9A9ycUXOpScRdUvpsS/a7TPniMbZCJAAU 3w9k4PYyttl8va5lb//hVSzgvHLQElqhGLDEKniyiFuD1s+oZBb+ax4V/ X7Sg4UWgivmaie10Y47mYh3Zhr2vCU6Lpz8PJuGMmAsp+z2LffQZJZBY0 evC/f0KjBZqX0gr7sr88lx3QTTds+HBffTgYPsUjc10b9lJ8KUdXTe0Tk w==; X-CSE-ConnectionGUID: qkI8vclLQOmVYWZHSGVdGw== X-CSE-MsgGUID: 3fUsbfODS6uVgld2RPZGMw== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973636" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973636" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:37 -0800 X-CSE-ConnectionGUID: wgYJijVJTt+5BGNXZztISQ== X-CSE-MsgGUID: MAul+G2dTGGP7HGyWYq63Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297111" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:37 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 20/32] x86/resctrl: Read telemetry events Date: Wed, 10 Dec 2025 15:13:59 -0800 Message-ID: <20251210231413.59102-21-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce intel_aet_read_event() to read telemetry events for resource RDT_RESOURCE_PERF_PKG. There may be multiple aggregators tracking each pack= age, so scan all of them and add up all counters. Aggregators may return an inva= lid data indication if they have received no records for a given RMID. User will see "Unavailable" if none of the aggregators on a package provide valid cou= nts. Resctrl now uses readq() so depends on X86_64. Update Kconfig. 
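The summation rule above as a stand-alone sketch (example_sum_counter() and its arguments are hypothetical; the real code is intel_aet_read_event() in the hunk below):

/*
 * Add up one counter across all aggregators on a package. Bit 63 of each
 * 64-bit register is a "data valid" flag and bits 62:0 hold the count.
 * The read succeeds if at least one aggregator had valid data for the
 * RMID, otherwise the user sees "Unavailable".
 */
static int example_sum_counter(void __iomem *aggs[], int nr_aggs,
			       size_t byte_off, u64 *val)
{
	bool valid = false;
	u64 total = 0, v;

	for (int i = 0; i < nr_aggs; i++) {
		v = readq(aggs[i] + byte_off);
		if (!(v & BIT_ULL(63)))
			continue;	/* no data for this RMID here */
		total += v & GENMASK_ULL(62, 0);
		valid = true;
	}

	if (!valid)
		return -EINVAL;

	*val = total;
	return 0;
}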
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 5 +++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 51 +++++++++++++++++++++++++ arch/x86/kernel/cpu/resctrl/monitor.c | 4 ++ fs/resctrl/monitor.c | 14 +++++++ arch/x86/Kconfig | 2 +- 5 files changed, 75 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index f2e6e3577df0..10743f5d5fd4 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -225,9 +225,14 @@ void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_r= esource *r); #ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET bool intel_aet_get_events(void); void __exit intel_aet_exit(void); +int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } +static inline int intel_aet_read_event(int domid, u32 rmid, void *arch_pri= v, u64 *val) +{ + return -EINVAL; +} #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 611c6b1fc08d..38985613d4be 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -11,11 +11,15 @@ =20 #define pr_fmt(fmt) "resctrl: " fmt =20 +#include #include +#include #include +#include #include #include #include +#include #include #include #include @@ -236,3 +240,50 @@ void __exit intel_aet_exit(void) } } } + +#define DATA_VALID BIT_ULL(63) +#define DATA_BITS GENMASK_ULL(62, 0) + +/* + * Read counter for an event on a domain (summing all aggregators on the + * domain). If an aggregator hasn't received any data for a specific RMID, + * the MMIO read indicates that data is not valid. Return success if at + * least one aggregator has valid data. + */ +int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val) +{ + struct pmt_event *pevt =3D arch_priv; + struct event_group *e; + bool valid =3D false; + u64 total =3D 0; + u64 evtcount; + void *pevt0; + u32 idx; + + pevt0 =3D pevt - pevt->idx; + e =3D container_of(pevt0, struct event_group, evts); + idx =3D rmid * e->num_events; + idx +=3D pevt->idx; + + if (idx * sizeof(u64) + sizeof(u64) > e->mmio_size) { + pr_warn_once("MMIO index %u out of range\n", idx); + return -EIO; + } + + for (int i =3D 0; i < e->pfg->count; i++) { + if (!e->pfg->regions[i].addr) + continue; + if (e->pfg->regions[i].plat_info.package_id !=3D domid) + continue; + evtcount =3D readq(e->pfg->regions[i].addr + idx * sizeof(u64)); + if (!(evtcount & DATA_VALID)) + continue; + total +=3D evtcount & DATA_BITS; + valid =3D true; + } + + if (valid) + *val =3D total; + + return valid ? 
0 : -EINVAL; +} diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 6929614ba6e6..e6a154240b8d 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -251,6 +251,10 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, str= uct rdt_domain_hdr *hdr, int ret; =20 resctrl_arch_rmid_read_context_check(); + + if (r->rid =3D=3D RDT_RESOURCE_PERF_PKG) + return intel_aet_read_event(hdr->id, rmid, arch_priv, val); + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 9af08b673e39..8a4c2ae72740 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -527,6 +527,20 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) return __l3_mon_event_count(rdtgrp, rr); else return __l3_mon_event_count_sum(rdtgrp, rr); + case RDT_RESOURCE_PERF_PKG: { + u64 tval =3D 0; + + rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, rdtgrp->closid, + rdtgrp->mon.rmid, rr->evt->evtid, + rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); + if (rr->err) + return rr->err; + + rr->val +=3D tval; + + return 0; + } default: rr->err =3D -EINVAL; return -EINVAL; diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 52dda19d584d..4cf9de520baf 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -540,7 +540,7 @@ config X86_CPU_RESCTRL =20 config X86_CPU_RESCTRL_INTEL_AET bool "Intel Application Energy Telemetry" - depends on X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEMETRY=3Dy &&= INTEL_TPMI=3Dy + depends on X86_64 && X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEME= TRY=3Dy && INTEL_TPMI=3Dy help Enable per-RMID telemetry events in resctrl. =20 --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 68517301715 for ; Wed, 10 Dec 2025 23:15:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408509; cv=none; b=LkhoJ7m7sZJ8hFOtusLm3g2bMszkqTh4HYcPZfRSHqymq4WGEVg/l2yhFPXFl0WyVi8bGD8uHtIwFZyBFINvCeez2ibueA/3A3ZtsGehW27ZvFggEhjqcqcfaYc7aCrVBtBhxEIV1ozELTRcpibZwxW/L45JLmP9Jy/mRrVvzro= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408509; c=relaxed/simple; bh=n+gg9cG6VstFpoCB6IkctOIGC7MzoledoWWzPFL/JkU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Ka79qx44ichXtJEbobsfPUJYyySlp1gDaCJrth/izCp1XvQDxQ2ykj7TK5kO9hpo2EIhdNcbAFxGNQH2nlBpNQc8c+8yNYFCu6Emze5vvPZTr83qNkLZSTZsKNWK11f5aZFRWNlZXsPQ8vj86dmGUT/LMH5NOUV48U0QYID8dwI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=N67badLM; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="N67badLM" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; 
t=1765408505; x=1796944505; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=n+gg9cG6VstFpoCB6IkctOIGC7MzoledoWWzPFL/JkU=; b=N67badLMiEtFNv1cTbByG8QMZeP4QkBSlmZ/YMIQV4guNGoRxMs3ybPV XH3XJgzhr/yTtUz5WA62kLejynHhY88/FGFnqNr/7FAXo/foazjaBB/MF k4ZlYKg4GB5M2oFtd/To4ps8v9RQ+m1UOX9y2sFJy8St3aLzACZgE1sEz Yc1lFZwsaSr2WWcGvh7LxGe6aHxCHzEmplBw6X9iARpU9zHo3pZC++A2M Gv06/uklqy0CgeyEI3uQMS4TVFaUHEuDS4ZdyY8V+YVcJxv43ZOD4OZaM DF27AHe1r63MeAQ1VLqWisTwaNyZ3pJ77HIou/xy3bIiIu+8oT2CiSL6t Q==; X-CSE-ConnectionGUID: RdSuyOwWTAmWbVsbifqr5A== X-CSE-MsgGUID: z04uRRdLQliuAjIJT2mFFA== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973645" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973645" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:38 -0800 X-CSE-ConnectionGUID: GC7kkrFQRSqWIxjrJfb97A== X-CSE-MsgGUID: jkm2sZt9StuL8GnRDciNCg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297116" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:38 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 21/32] fs/resctrl: Refactor mkdir_mondata_subdir() Date: Wed, 10 Dec 2025 15:14:00 -0800 Message-ID: <20251210231413.59102-22-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Population of a monitor group's mon_data directory is unreasonably complica= ted because of the support for Sub-NUMA Cluster (SNC) mode. Split out the SNC code into a helper function to make it easier to add supp= ort for a new telemetry resource. Move all the duplicated code to make and set owner of domain directories into the mon_add_all_files() helper and rename to _mkdir_mondata_subdir(). Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/rdtgroup.c | 108 +++++++++++++++++++++++------------------- 1 file changed, 58 insertions(+), 50 deletions(-) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index dce1f0a6d40b..c0db5d8999ee 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3259,57 +3259,65 @@ static void rmdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, } } =20 -static int mon_add_all_files(struct kernfs_node *kn, struct rdt_domain_hdr= *hdr, - struct rdt_resource *r, struct rdtgroup *prgrp, - bool do_sum) +/* + * Create a directory for a domain and populate it with monitor files. Cre= ate + * summing monitors when @hdr is NULL. No need to initialize summing monit= ors. 
+ */ +static struct kernfs_node *_mkdir_mondata_subdir(struct kernfs_node *paren= t_kn, char *name, + struct rdt_domain_hdr *hdr, + struct rdt_resource *r, + struct rdtgroup *prgrp, int domid) { - struct rdt_l3_mon_domain *d; struct rmid_read rr =3D {0}; + struct kernfs_node *kn; struct mon_data *priv; struct mon_evt *mevt; - int ret, domid; + int ret; =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return -EINVAL; + kn =3D kernfs_create_dir(parent_kn, name, parent_kn->mode, prgrp); + if (IS_ERR(kn)) + return kn; + + ret =3D rdtgroup_kn_set_ugid(kn); + if (ret) + goto out_destroy; =20 - d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); for_each_mon_event(mevt) { if (mevt->rid !=3D r->rid || !mevt->enabled) continue; - domid =3D do_sum ? d->ci_id : d->hdr.id; - priv =3D mon_get_kn_priv(r->rid, domid, mevt, do_sum); - if (WARN_ON_ONCE(!priv)) - return -EINVAL; + priv =3D mon_get_kn_priv(r->rid, domid, mevt, !hdr); + if (WARN_ON_ONCE(!priv)) { + ret =3D -EINVAL; + goto out_destroy; + } =20 ret =3D mon_addfile(kn, mevt->name, priv); if (ret) - return ret; + goto out_destroy; =20 - if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) + if (hdr && resctrl_is_mbm_event(mevt->evtid)) mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt, true); } =20 - return 0; + return kn; +out_destroy: + kernfs_remove(kn); + return ERR_PTR(ret); } =20 -static int mkdir_mondata_subdir(struct kernfs_node *parent_kn, - struct rdt_domain_hdr *hdr, - struct rdt_resource *r, struct rdtgroup *prgrp) +static int mkdir_mondata_subdir_snc(struct kernfs_node *parent_kn, + struct rdt_domain_hdr *hdr, + struct rdt_resource *r, struct rdtgroup *prgrp) { - struct kernfs_node *kn, *ckn; + struct kernfs_node *ckn, *kn; struct rdt_l3_mon_domain *d; char name[32]; - bool snc_mode; - int ret =3D 0; - - lockdep_assert_held(&rdtgroup_mutex); =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? 
d->ci_id : d->hdr.id); + sprintf(name, "mon_%s_%02d", r->name, d->ci_id); kn =3D kernfs_find_and_get(parent_kn, name); if (kn) { /* @@ -3318,41 +3326,41 @@ static int mkdir_mondata_subdir(struct kernfs_node = *parent_kn, */ kernfs_put(kn); } else { - kn =3D kernfs_create_dir(parent_kn, name, parent_kn->mode, prgrp); + kn =3D _mkdir_mondata_subdir(parent_kn, name, NULL, r, prgrp, d->ci_id); if (IS_ERR(kn)) return PTR_ERR(kn); + } =20 - ret =3D rdtgroup_kn_set_ugid(kn); - if (ret) - goto out_destroy; - ret =3D mon_add_all_files(kn, hdr, r, prgrp, snc_mode); - if (ret) - goto out_destroy; + sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id); + ckn =3D _mkdir_mondata_subdir(kn, name, hdr, r, prgrp, hdr->id); + if (IS_ERR(ckn)) { + kernfs_remove(kn); + return PTR_ERR(ckn); } =20 - if (snc_mode) { - sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id); - ckn =3D kernfs_create_dir(kn, name, parent_kn->mode, prgrp); - if (IS_ERR(ckn)) { - ret =3D -EINVAL; - goto out_destroy; - } + kernfs_activate(kn); + return 0; +} =20 - ret =3D rdtgroup_kn_set_ugid(ckn); - if (ret) - goto out_destroy; +static int mkdir_mondata_subdir(struct kernfs_node *parent_kn, + struct rdt_domain_hdr *hdr, + struct rdt_resource *r, struct rdtgroup *prgrp) +{ + struct kernfs_node *kn; + char name[32]; =20 - ret =3D mon_add_all_files(ckn, hdr, r, prgrp, false); - if (ret) - goto out_destroy; - } + lockdep_assert_held(&rdtgroup_mutex); + + if (r->rid =3D=3D RDT_RESOURCE_L3 && r->mon_scope =3D=3D RESCTRL_L3_NODE) + return mkdir_mondata_subdir_snc(parent_kn, hdr, r, prgrp); + + sprintf(name, "mon_%s_%02d", r->name, hdr->id); + kn =3D _mkdir_mondata_subdir(parent_kn, name, hdr, r, prgrp, hdr->id); + if (IS_ERR(kn)) + return PTR_ERR(kn); =20 kernfs_activate(kn); return 0; - -out_destroy: - kernfs_remove(kn); - return ret; } =20 /* --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 49B28302145 for ; Wed, 10 Dec 2025 23:15:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408516; cv=none; b=M1Bhn/w0aM8t16BfpdScgLH20kgn53Ig3IaahvjhtYgjowPWfJnr3lOpv5Cs5aMwEa4IKGl2v1mLGG6ntX/5mknnbhu7I2c/o3oHWWwgTLFCzs6fD63rkcF4KvojdeAt820nN0JqvlAh/u/WQSyVI2rITdymApPqmKDOH5j716Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408516; c=relaxed/simple; bh=miRJzo1AWfZUKy+ImvgvbTXaJITzgINN7ynFnTS9zo4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=iqBu7FR9Y4c1ZBK16okFjdPzFt+8rXYnfBAUlqrL7OAybJLY722oIUvvlD1Elu3Uq+QfYQoXUT2iDN8P2T7qtAkB5a9Ix+vbqdJr5fgL9CmKl83j1dldMnz2G5xsZdMJ3G9G6k3lkc+JLxGSliNm7Ra+YZqFWcegr0pl++D2z4U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=gvZh2os0; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="gvZh2os0" DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408506; x=1796944506; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=miRJzo1AWfZUKy+ImvgvbTXaJITzgINN7ynFnTS9zo4=; b=gvZh2os0sdHT6EOqtqDgYoQLvLORlV3tMcDtmDMrpCE/dTySwCh2GaYd cD9FZpJA7UuzdCcHeNyEu0/52g/n+sE2+wFfnSnHmyNCUMcL8vJUxV4rI U/VmQVj6YwnpXmbwXl74by/kdY6UjAz7zxmYmB5GXg1E5ozNpG9v5x53q ZR0nzsgPwaVp+QsC/KrSROyNaz9Rt17MM6V+/GJ+TTnFr81a0k5w+1bD2 y8woFKk8GIQoEHPx9rkeu9odyVkrq1s9882GCT4G+vlCsexktmcOnoBbA llWSQJQxMmg21UESgZZlrHsrSCDFM0PlZpaOWKG7/G1FHvambnnVi2VVZ Q==; X-CSE-ConnectionGUID: YRYYmngPRQ6b0MXz57Qa/Q== X-CSE-MsgGUID: oAxndy0mTW6eQnMglibsDg== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973653" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973653" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:39 -0800 X-CSE-ConnectionGUID: bqfp0dwNRXqLec4BwDe+sg== X-CSE-MsgGUID: Dj9MeepqSieNtEO4a1aDJA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297121" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:38 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 22/32] fs/resctrl: Refactor rmdir_mondata_subdir_allrdtgrp() Date: Wed, 10 Dec 2025 15:14:01 -0800 Message-ID: <20251210231413.59102-23-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Clearing a monitor group's mon_data directory is complicated because of the support for Sub-NUMA Cluster (SNC) mode. Refactor the SNC case into a helper function to make it easier to add suppo= rt for a new telemetry resource. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/rdtgroup.c | 42 +++++++++++++++++++++++++++++++----------- 1 file changed, 31 insertions(+), 11 deletions(-) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index c0db5d8999ee..679247c43b1a 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3228,28 +3228,24 @@ static void mon_rmdir_one_subdir(struct kernfs_node= *pkn, char *name, char *subn } =20 /* - * Remove all subdirectories of mon_data of ctrl_mon groups - * and monitor groups for the given domain. - * Remove files and directories containing "sum" of domain data - * when last domain being summed is removed. + * Remove files and directories for one SNC node. If it is the last node + * sharing an L3 cache, then remove the upper level directory containing + * the "sum" files too. 
*/ -static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, - struct rdt_domain_hdr *hdr) +static void rmdir_mondata_subdir_allrdtgrp_snc(struct rdt_resource *r, + struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; struct rdt_l3_mon_domain *d; char subname[32]; - bool snc_mode; char name[32]; =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : hdr->id); - if (snc_mode) - sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id); + sprintf(name, "mon_%s_%02d", r->name, d->ci_id); + sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mon_rmdir_one_subdir(prgrp->mon.mon_data_kn, name, subname); @@ -3259,6 +3255,30 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rd= t_resource *r, } } =20 +/* + * Remove all subdirectories of mon_data of ctrl_mon groups + * and monitor groups for the given domain. + */ +static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, + struct rdt_domain_hdr *hdr) +{ + struct rdtgroup *prgrp, *crgrp; + char name[32]; + + if (r->rid =3D=3D RDT_RESOURCE_L3 && r->mon_scope =3D=3D RESCTRL_L3_NODE)= { + rmdir_mondata_subdir_allrdtgrp_snc(r, hdr); + return; + } + + sprintf(name, "mon_%s_%02d", r->name, hdr->id); + list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { + kernfs_remove_by_name(prgrp->mon.mon_data_kn, name); + + list_for_each_entry(crgrp, &prgrp->mon.crdtgrp_list, mon.crdtgrp_list) + kernfs_remove_by_name(crgrp->mon.mon_data_kn, name); + } +} + /* * Create a directory for a domain and populate it with monitor files. Cre= ate * summing monitors when @hdr is NULL. No need to initialize summing monit= ors. 
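[ Illustrative note, not part of the patch: with the two sprintf() formats
  above, a hypothetical SNC-2 system whose L3 cache has ci_id 0 and SNC
  nodes 0 and 1 would lay out each group's mon_data directory as

      mon_data/mon_L3_00                  (per L3 cache, named from d->ci_id, holds the "sum" files)
      mon_data/mon_L3_00/mon_sub_L3_00    (per SNC node, named from hdr->id)
      mon_data/mon_L3_00/mon_sub_L3_01

  rmdir_mondata_subdir_allrdtgrp_snc() removes one mon_sub_L3_XX directory
  at a time, and mon_rmdir_one_subdir() drops the enclosing mon_L3_00
  directory only when the last SNC node sharing that cache is removed. ]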
--
2.51.1

From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
    James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 23/32] x86,fs/resctrl: Handle domain creation/deletion for RDT_RESOURCE_PERF_PKG
Date: Wed, 10 Dec 2025 15:14:02 -0800
Message-ID: <20251210231413.59102-24-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To:
<20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The L3 resource has several requirements for domains. There are per-domain structures that hold the 64-bit values of counters, and elements to keep track of the overflow and limbo threads. None of these are needed for the PERF_PKG resource. The hardware counters are wide enough that they do not wrap around for decades. Define a new rdt_perf_pkg_mon_domain structure which just consists of the standard rdt_domain_hdr to keep track of domain id and CPU mask. Update resctrl_online_mon_domain() for RDT_RESOURCE_PERF_PKG. The only acti= on needed for this resource is to create and populate domain directories if a domain is added while resctrl is mounted. Similarly resctrl_offline_mon_domain() only needs to remove domain director= ies. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/internal.h | 13 +++++++++++ arch/x86/kernel/cpu/resctrl/core.c | 17 +++++++++++++++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 29 +++++++++++++++++++++++++ fs/resctrl/rdtgroup.c | 17 ++++++++++----- 4 files changed, 71 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 10743f5d5fd4..3b228b241fb2 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -87,6 +87,14 @@ static inline struct rdt_hw_l3_mon_domain *resctrl_to_ar= ch_mon_dom(struct rdt_l3 return container_of(r, struct rdt_hw_l3_mon_domain, d_resctrl); } =20 +/** + * struct rdt_perf_pkg_mon_domain - CPUs sharing an package scoped resctrl= monitor resource + * @hdr: common header for different domain types + */ +struct rdt_perf_pkg_mon_domain { + struct rdt_domain_hdr hdr; +}; + /** * struct msr_param - set a range of MSRs from a domain * @res: The resource to use @@ -226,6 +234,8 @@ void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_re= source *r); bool intel_aet_get_events(void); void __exit intel_aet_exit(void); int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val); +void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, + struct list_head *add_pos); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } @@ -233,6 +243,9 @@ static inline int intel_aet_read_event(int domid, u32 r= mid, void *arch_priv, u64 { return -EINVAL; } + +static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_= resource *r, + struct list_head *add_pos) { } #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 3c6946a5ff1b..283d653002a2 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -580,6 +580,10 @@ static void domain_add_cpu_mon(int cpu, struct rdt_res= ource *r) if (!hdr) l3_mon_domain_setup(cpu, id, r, add_pos); break; + case RDT_RESOURCE_PERF_PKG: + if (!hdr) + intel_aet_mon_domain_setup(cpu, id, r, add_pos); + break; default: pr_warn_once("Unknown resource rid=3D%d\n", r->rid); break; @@ -679,6 +683,19 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_= resource *r) l3_mon_domain_free(hw_dom); break; } + case RDT_RESOURCE_PERF_PKG: { + struct 
rdt_perf_pkg_mon_domain *pkgd; + + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_PERF_P= KG)) + return; + + pkgd =3D container_of(hdr, struct rdt_perf_pkg_mon_domain, hdr); + resctrl_offline_mon_domain(r, hdr); + list_del_rcu(&hdr->list); + synchronize_rcu(); + kfree(pkgd); + break; + } default: pr_warn_once("Unknown resource rid=3D%d\n", r->rid); break; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 38985613d4be..35761d6cd55b 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -14,15 +14,20 @@ #include #include #include +#include #include #include +#include #include #include #include #include #include +#include +#include #include #include +#include #include #include #include @@ -287,3 +292,27 @@ int intel_aet_read_event(int domid, u32 rmid, void *ar= ch_priv, u64 *val) =20 return valid ? 0 : -EINVAL; } + +void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, + struct list_head *add_pos) +{ + struct rdt_perf_pkg_mon_domain *d; + int err; + + d =3D kzalloc_node(sizeof(*d), GFP_KERNEL, cpu_to_node(cpu)); + if (!d) + return; + + d->hdr.id =3D id; + d->hdr.type =3D RESCTRL_MON_DOMAIN; + d->hdr.rid =3D RDT_RESOURCE_PERF_PKG; + cpumask_set_cpu(cpu, &d->hdr.cpu_mask); + list_add_tail_rcu(&d->hdr.list, add_pos); + + err =3D resctrl_online_mon_domain(r, &d->hdr); + if (err) { + list_del_rcu(&d->hdr.list); + synchronize_rcu(); + kfree(d); + } +} diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 679247c43b1a..ac3c6e44b7c5 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4307,11 +4307,6 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h =20 mutex_lock(&rdtgroup_mutex); =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - goto out_unlock; - - d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - /* * If resctrl is mounted, remove all the * per domain monitor data directories. @@ -4319,6 +4314,13 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h if (resctrl_mounted && resctrl_arch_mon_capable()) rmdir_mondata_subdir_allrdtgrp(r, hdr); =20 + if (r->rid !=3D RDT_RESOURCE_L3) + goto out_unlock; + + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + goto out_unlock; + + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); if (resctrl_is_mbm_enabled()) cancel_delayed_work(&d->mbm_over); if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID) && has_busy_rmid(= d)) { @@ -4415,6 +4417,9 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr =20 mutex_lock(&rdtgroup_mutex); =20 + if (r->rid !=3D RDT_RESOURCE_L3) + goto mkdir; + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 @@ -4432,6 +4437,8 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID)) INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo); =20 +mkdir: + err =3D 0; /* * If the filesystem is not mounted then only the default resource group * exists. 
Creation of its directories is deferred until mount time
--
2.51.1

From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
    James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 24/32] x86/resctrl: Add energy/perf choices to rdt boot option
Date: Wed, 10 Dec 2025 15:14:03 -0800
Message-ID: <20251210231413.59102-25-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Legacy resctrl features are enumerated by X86_FEATURE_* flags. These may be overridden by quirks to disable features in the case of errata. Users can use kernel command line options to either disable a feature, or to force enable a feature that was disabled by a quirk. A different approach is needed for hardware features that do not have an X86_FEATURE_* flag. Update parsing of the "rdt=3D" boot parameter to call the telemetry driver directly to handle new "perf" and "energy" options that controls activation of telemetry monitoring of the named type. By itself a "perf" or "energy" o= ption controls the forced enabling or disabling (with ! prefix) of all event grou= ps of the named type. A ":guid" suffix allows for fine grain control per event gr= oup. Signed-off-by: Tony Luck --- .../admin-guide/kernel-parameters.txt | 7 +++- arch/x86/kernel/cpu/resctrl/internal.h | 2 + arch/x86/kernel/cpu/resctrl/core.c | 2 + arch/x86/kernel/cpu/resctrl/intel_aet.c | 38 +++++++++++++++++++ 4 files changed, 48 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentatio= n/admin-guide/kernel-parameters.txt index 2b465eab41a1..cc9d2800abeb 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -6217,9 +6217,14 @@ rdt=3D [HW,X86,RDT] Turn on/off individual RDT features. List is: cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp, - mba, smba, bmec, abmc, sdciae. + mba, smba, bmec, abmc, sdciae, energy[:guid], + perf[:guid]. E.g. 
to turn on cmt and turn off mba use: rdt=3Dcmt,!mba + To turn off all energy telemetry monitoring and ensure that + perf telemetry monitoring associated with guid 0x12345 + is enabled use: + rdt=3D!energy,perf:0x12345 =20 reboot=3D [KNL] Format (x86 or x86_64): diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 3b228b241fb2..df09091f7c6c 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -236,6 +236,7 @@ void __exit intel_aet_exit(void); int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val); void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct list_head *add_pos); +bool intel_aet_option(bool force_off, char *tok); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } @@ -246,6 +247,7 @@ static inline int intel_aet_read_event(int domid, u32 r= mid, void *arch_priv, u64 =20 static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_= resource *r, struct list_head *add_pos) { } +static inline bool intel_aet_option(bool force_off, char *tok) { return fa= lse; } #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 283d653002a2..960974ffa866 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -820,6 +820,8 @@ static int __init set_rdt_options(char *str) force_off =3D *tok =3D=3D '!'; if (force_off) tok++; + if (intel_aet_option(force_off, tok)) + continue; for (o =3D rdt_options; o < &rdt_options[NUM_RDT_OPTIONS]; o++) { if (strcmp(tok, o->name) =3D=3D 0) { if (force_off) diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 35761d6cd55b..df91e1ea4a7b 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -52,12 +52,17 @@ struct pmt_event { /** * struct event_group - Events with the same feature type ("energy" or "pe= rf") and guid. * @pfname: PMT feature name ("energy" or "perf") of this event group. + * Used by boot rdt=3D option. * @pfg: Points to the aggregated telemetry space information * returned by the intel_pmt_get_regions_by_feature() * call to the INTEL_PMT_TELEMETRY driver that contains * data for all telemetry regions of type @pfname. * Valid if the system supports the event group, * NULL otherwise. + * @force_off: True when "rdt" command line or architecture code disables + * this event group. + * @force_on: True when "rdt" command line overrides disable of this + * event group. * @guid: Unique number per XML description file. * @mmio_size: Number of bytes of MMIO registers for this group. * @num_events: Number of events in this group. @@ -67,6 +72,7 @@ struct event_group { /* Data fields for additional structures to manage this group. */ const char *pfname; struct pmt_feature_group *pfg; + bool force_off, force_on; =20 /* Remaining fields initialized from XML file. 
*/ u32 guid; @@ -121,6 +127,35 @@ static struct event_group *known_event_groups[] =3D { _peg < &known_event_groups[ARRAY_SIZE(known_event_groups)]; \ _peg++) =20 +bool intel_aet_option(bool force_off, char *tok) +{ + struct event_group **peg; + bool ret =3D false; + u32 guid =3D 0; + char *name; + + if (!tok) + return false; + + name =3D strsep(&tok, ":"); + if (tok && kstrtou32(tok, 16, &guid)) + return false; + + for_each_event_group(peg) { + if (strcmp(name, (*peg)->pfname)) + continue; + if (guid && (*peg)->guid !=3D guid) + continue; + if (force_off) + (*peg)->force_off =3D true; + else + (*peg)->force_on =3D true; + ret =3D true; + } + + return ret; +} + /* * Clear the address field of regions that did not pass the checks in * skip_telem_region() so they will not be used by intel_aet_read_event(). @@ -172,6 +207,9 @@ static bool enable_events(struct event_group *e, struct= pmt_feature_group *p) struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; int skipped_events =3D 0; =20 + if (e->force_off) + return false; + if (!group_has_usable_regions(e, p)) return false; =20 --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5626B30DD39 for ; Wed, 10 Dec 2025 23:15:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408525; cv=none; b=l1VZoHsp7tBBP2mE7s7Xyt6jrv8P1zYPguPNxvR8SXK9WS9sx/Xfn3RAcbvuXvTcG2mJARA1Nih1F8G6QG7vOCM6G6OR2Gs/s8QlVvw7MPbmNsf50NIoYwTt81lnRWfLg0SrUmb3vHJh2UWHver7HF0whi3N95MdrAXrMkqUTtA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408525; c=relaxed/simple; bh=jjJ9ZaQxJEbd5BORfDSxYTGzjlSKcmV3qib0G1ee5ac=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=sXm1kwLT7A78C/F3ZgaM5tdKNtlh2KW0Y5MTtF/M0m+LjcZz9rX0QyMhxbs5qMghJ0t7pcz1/cCpWCknNquRTWq5YVX+Yn+ycXV0wDxX1uygEWfsoSk0DGAH65F5KMob+5kd+Xxbsy/7X3DOO+T6zI/lhnPcUuaBu8zzvrAqiF8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=e9bTTAdY; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="e9bTTAdY" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408515; x=1796944515; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=jjJ9ZaQxJEbd5BORfDSxYTGzjlSKcmV3qib0G1ee5ac=; b=e9bTTAdYEw4/oMYZN0Gl6LAx7/WKqve7GQqfpSpZfVSvLM8cqdypa5Lb 5U3SQI0Pe+jObMnp7QilOsntnsr0Jz6cK15NM9q1VnVh7yRcLg+sHbTF+ h1l/NFRoZoY1xi+Q2UqXJHqnopTV0eKO26LdTO49oibEZUX64tzYFo1fO 2/Jzgntoh0fr5TNpjsGaFjqfyNOWDILm1eCw6R95zXcK3e+mzF4VaL0rg IXD9C8sTvKbIPSL+6oQCAhMb3oUGWHXFkUfpU5RRxjfmzVyIUSAOJA8sO cZtzqeSVqo7wKdDWbuwa1cVMzry13TPLmDmFVudOjng7gAXo6ZLiTfULl w==; X-CSE-ConnectionGUID: lxn3wFILSm2jsozCRVCAxw== X-CSE-MsgGUID: VSjS3KbPTvqNLc9c0zlz4Q== X-IronPort-AV: 
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
    James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 25/32] x86/resctrl: Handle number of RMIDs supported by RDT_RESOURCE_PERF_PKG
Date: Wed, 10 Dec 2025 15:14:04 -0800
Message-ID: <20251210231413.59102-26-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

There are now three meanings for "number of RMIDs":

1) The number for legacy features enumerated by CPUID leaf 0xF. This is the
   maximum number of distinct values that can be loaded into MSR_IA32_PQR_ASSOC.
   Note that systems with Sub-NUMA Cluster mode enabled will force scaling
   down the CPUID enumerated value by the number of SNC nodes per L3-cache.

2) The number of registers in MMIO space for each event. This is enumerated
   in the XML files and is the value initialized into event_group::num_rmid.

3) The number of "hardware counters" (this isn't a strictly accurate
   description of how things work, but serves as a useful analogy that does
   describe the limitations) feeding to those MMIO registers. This is
   enumerated in telemetry_region::num_rmids returned by
   intel_pmt_get_regions_by_feature().

Event groups with insufficient "hardware counters" to track all RMIDs are
difficult for users to use, since the system may reassign "hardware counters"
at any time. This means that users cannot reliably collect two consecutive
event counts to compute the rate at which events are occurring. Disable such
event groups by default. The user may override this with a command line
"rdt=" option. In this case limit an under-resourced event group's number of
possible monitor resource groups to the lowest number of "hardware counters".

Scan all enabled event groups and assign the RDT_RESOURCE_PERF_PKG resource
"num_rmid" value to the smallest of these values as this value will be used
later to compare against the number of RMIDs supported by other resources to
determine how many monitoring resource groups are supported.

N.B. Change type of resctrl_mon::num_rmid to u32 to match its usage and the
type of event_group::num_rmid so that min(r->num_rmid, e->num_rmid) won't
complain about mixing signed and unsigned types.
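[ Worked example, all numbers hypothetical and for illustration only:
  suppose the XML description for the "perf" group (guid 0x26557651) gives
  event_group::num_rmid = 576, but intel_pmt_get_regions_by_feature()
  reports telemetry_region::num_rmids = 448 for one package. The group is
  then marked force_off and left disabled by default. If the user boots
  with "rdt=perf:0x26557651" to force it on, enable_events() clamps
  e->num_rmid to min(576, 448) = 448, and RDT_RESOURCE_PERF_PKG's
  resctrl_mon::num_rmid becomes the smallest such value across all enabled
  event groups. ]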
Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- arch/x86/kernel/cpu/resctrl/intel_aet.c | 54 ++++++++++++++++++++++++- fs/resctrl/rdtgroup.c | 2 +- 3 files changed, 55 insertions(+), 3 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 14126d228e61..8623e450619a 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -295,7 +295,7 @@ enum resctrl_schema_fmt { * events of monitor groups created via mkdir. */ struct resctrl_mon { - int num_rmid; + u32 num_rmid; unsigned int mbm_cfg_mask; int num_mbm_cntrs; bool mbm_cntr_assignable; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index df91e1ea4a7b..34f882df1243 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -60,10 +61,14 @@ struct pmt_event { * Valid if the system supports the event group, * NULL otherwise. * @force_off: True when "rdt" command line or architecture code disables - * this event group. + * this event group due to insufficient RMIDs. * @force_on: True when "rdt" command line overrides disable of this * event group. * @guid: Unique number per XML description file. + * @num_rmid: Number of RMIDs supported by this group. May be + * adjusted downwards if enumeration from + * intel_pmt_get_regions_by_feature() indicates fewer + * RMIDs can be tracked simultaneously. * @mmio_size: Number of bytes of MMIO registers for this group. * @num_events: Number of events in this group. * @evts: Array of event descriptors. @@ -76,6 +81,7 @@ struct event_group { =20 /* Remaining fields initialized from XML file. */ u32 guid; + u32 num_rmid; size_t mmio_size; unsigned int num_events; struct pmt_event evts[] __counted_by(num_events); @@ -90,6 +96,7 @@ struct event_group { static struct event_group energy_0x26696143 =3D { .pfname =3D "energy", .guid =3D 0x26696143, + .num_rmid =3D 576, .mmio_size =3D XML_MMIO_SIZE(576, 2, 3), .num_events =3D 2, .evts =3D { @@ -104,6 +111,7 @@ static struct event_group energy_0x26696143 =3D { static struct event_group perf_0x26557651 =3D { .pfname =3D "perf", .guid =3D 0x26557651, + .num_rmid =3D 576, .mmio_size =3D XML_MMIO_SIZE(576, 7, 3), .num_events =3D 7, .evts =3D { @@ -202,6 +210,24 @@ static bool group_has_usable_regions(struct event_grou= p *e, struct pmt_feature_g return usable_regions; } =20 +static bool all_regions_have_sufficient_rmid(struct event_group *e, struct= pmt_feature_group *p) +{ + struct telemetry_region *tr; + bool ret =3D true; + + for (int i =3D 0; i < p->count; i++) { + if (!p->regions[i].addr) + continue; + tr =3D &p->regions[i]; + if (tr->num_rmids < e->num_rmid) { + e->force_off =3D true; + return false; + } + } + + return ret; +} + static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) { struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; @@ -213,6 +239,27 @@ static bool enable_events(struct event_group *e, struc= t pmt_feature_group *p) if (!group_has_usable_regions(e, p)) return false; =20 + /* + * Only enable event group with insufficient RMIDs if the user requested + * it from the kernel command line. 
+ */ + if (!all_regions_have_sufficient_rmid(e, p) && !e->force_on) { + pr_info("%s %s:0x%x monitoring not enabled due to insufficient RMIDs\n", + r->name, e->pfname, e->guid); + return false; + } + + for (int i =3D 0; i < p->count; i++) { + if (!p->regions[i].addr) + continue; + /* + * e->num_rmid only adjusted lower if user (via rdt=3D kernel + * parameter) forces an event group with insufficient RMID + * to be enabled. + */ + e->num_rmid =3D min(e->num_rmid, p->regions[i].num_rmids); + } + for (int j =3D 0; j < e->num_events; j++) { if (!resctrl_enable_mon_event(e->evts[j].id, true, e->evts[j].bin_bits, &e->evts[j])) @@ -223,6 +270,11 @@ static bool enable_events(struct event_group *e, struc= t pmt_feature_group *p) return false; } =20 + if (r->mon.num_rmid) + r->mon.num_rmid =3D min(r->mon.num_rmid, e->num_rmid); + else + r->mon.num_rmid =3D e->num_rmid; + return true; } =20 diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index ac3c6e44b7c5..60ce2390723e 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -1157,7 +1157,7 @@ static int rdt_num_rmids_show(struct kernfs_open_file= *of, { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); =20 - seq_printf(seq, "%d\n", r->mon.num_rmid); + seq_printf(seq, "%u\n", r->mon.num_rmid); =20 return 0; } --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 50A0D3064B7 for ; Wed, 10 Dec 2025 23:15:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408529; cv=none; b=h7h7bv/5L10+1ackucjWtAH1nC2VBMistHGblJT1XhnMZSCbnugcqNGPozVhZvuvfBYwmZF4pHWGppiC+Zw30qooA/bdKL86dfSnpaoD157tVgx5gl1U2cN/qNSzaC3rXm9rXwm34dC22OSqx346H7CudfKdRw+0dnq447L3pM0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408529; c=relaxed/simple; bh=CGFEj4tvfooKXhbfegsNmka6gSziOv51Ou9GZK5vcwI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=FG6DKl0q+qzo9je7g6+Ixxv25u+ttSeMXPYG7vkEseg1eNkzQhs7vPUyToJhXQpOAX2OlZf0NgIbZ98qU8kJ2dmamITUggD8pmrLq7mpmtvIRjJ98S6sB115otmsiZz2eVF0tw41Sw2/NRYC1PS3+VGQjvgNSQkloZ9Q9O7aY6k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ZdDq8XT9; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ZdDq8XT9" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408518; x=1796944518; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CGFEj4tvfooKXhbfegsNmka6gSziOv51Ou9GZK5vcwI=; b=ZdDq8XT9TZUyDOCYuZX9R6kX5rSeue8lELAMmkVijNT8pZKw455QaVX4 qpMwmbjyGr+pq/tHIIKefkAYVNAw3aIc7SlkZ7qBndCT4gFsXiKuSNTrU feAx3/j5C5sSCBfr6gZuqZ/aGXJJe6554WM55bUDtRjmd1WZ54H7AtuRU foJG09r3+tv6ki2eVGyhWrPgOfFsEi69wGzRfZS8Km7JPMnf6Caic8FCw P9jtvBWanOsYyAgN8d+++qdYidzXogp6X7zt11nMi/+GjLvrGe8LMaAx+ 
FOqKLd9RUapT0AeVQQB0gxxcYUCbigkP1wG7hYczD2EXUQRSLS5SvhGKd g==; X-CSE-ConnectionGUID: jtZ3HN9PSoC5gz7pzJ56oQ== X-CSE-MsgGUID: 3CjPN8rlQ1OYMcWUDj6EfA== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973689" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973689" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:42 -0800 X-CSE-ConnectionGUID: lq6VEEjjTrqjOudrPjJN4g== X-CSE-MsgGUID: OHKHhwzjT7euxFd0b1xWGQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="227297138" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:41 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 26/32] fs/resctrl: Move allocation/free of closid_num_dirty_rmid[] Date: Wed, 10 Dec 2025 15:14:05 -0800 Message-ID: <20251210231413.59102-27-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" closid_num_dirty_rmid[] and rmid_ptrs[] are allocated together during resct= rl initialization and freed together during resctrl exit. Telemetry events are enumerated on resctrl mount so only at resctrl mount will the number of RMID supported by all monitoring resources and needed as size for rmid_ptrs[] be known. Separate closid_num_dirty_rmid[] and rmid_ptrs[] allocation and free in preparation for rmid_ptrs[] to be allocated on resctrl mount. Keep the rdtgroup_mutex protection around the allocation and free of closid_num_dirty_rmid[] as ARM needs this to guarantee memory ordering. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/monitor.c | 79 ++++++++++++++++++++++++++++---------------- 1 file changed, 51 insertions(+), 28 deletions(-) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 8a4c2ae72740..bbf8c9037887 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -906,36 +906,14 @@ void mbm_setup_overflow_handler(struct rdt_l3_mon_dom= ain *dom, unsigned long del static int dom_data_init(struct rdt_resource *r) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); - u32 num_closid =3D resctrl_arch_get_num_closid(r); struct rmid_entry *entry =3D NULL; int err =3D 0, i; u32 idx; =20 mutex_lock(&rdtgroup_mutex); - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - u32 *tmp; - - /* - * If the architecture hasn't provided a sanitised value here, - * this may result in larger arrays than necessary. Resctrl will - * use a smaller system wide value based on the resources in - * use. 
- */ - tmp =3D kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL); - if (!tmp) { - err =3D -ENOMEM; - goto out_unlock; - } - - closid_num_dirty_rmid =3D tmp; - } =20 rmid_ptrs =3D kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL); if (!rmid_ptrs) { - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - kfree(closid_num_dirty_rmid); - closid_num_dirty_rmid =3D NULL; - } err =3D -ENOMEM; goto out_unlock; } @@ -971,11 +949,6 @@ static void dom_data_exit(struct rdt_resource *r) if (!r->mon_capable) goto out_unlock; =20 - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - kfree(closid_num_dirty_rmid); - closid_num_dirty_rmid =3D NULL; - } - kfree(rmid_ptrs); rmid_ptrs =3D NULL; =20 @@ -1814,6 +1787,45 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_= file *of, char *buf, return ret ?: nbytes; } =20 +static int closid_num_dirty_rmid_alloc(struct rdt_resource *r) +{ + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { + u32 num_closid =3D resctrl_arch_get_num_closid(r); + u32 *tmp; + + /* For ARM memory ordering access to closid_num_dirty_rmid */ + mutex_lock(&rdtgroup_mutex); + + /* + * If the architecture hasn't provided a sanitised value here, + * this may result in larger arrays than necessary. Resctrl will + * use a smaller system wide value based on the resources in + * use. + */ + tmp =3D kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL); + if (!tmp) { + mutex_unlock(&rdtgroup_mutex); + return -ENOMEM; + } + + closid_num_dirty_rmid =3D tmp; + + mutex_unlock(&rdtgroup_mutex); + } + + return 0; +} + +static void closid_num_dirty_rmid_free(void) +{ + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { + mutex_lock(&rdtgroup_mutex); + kfree(closid_num_dirty_rmid); + closid_num_dirty_rmid =3D NULL; + mutex_unlock(&rdtgroup_mutex); + } +} + /** * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. 
* @@ -1834,10 +1846,16 @@ int resctrl_l3_mon_resource_init(void) if (!r->mon_capable) return 0; =20 - ret =3D dom_data_init(r); + ret =3D closid_num_dirty_rmid_alloc(r); if (ret) return ret; =20 + ret =3D dom_data_init(r); + if (ret) { + closid_num_dirty_rmid_free(); + return ret; + } + if (resctrl_arch_is_evt_configurable(QOS_L3_MBM_TOTAL_EVENT_ID)) { mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].configurable =3D true; resctrl_file_fflags_init("mbm_total_bytes_config", @@ -1880,5 +1898,10 @@ void resctrl_l3_mon_resource_exit(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); =20 + if (!r->mon_capable) + return; + + closid_num_dirty_rmid_free(); + dom_data_exit(r); } --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 238093009E5 for ; Wed, 10 Dec 2025 23:15:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408538; cv=none; b=a9uw5irhyUl32AQlCLDlReY3Bdi+3UZCWbKD1zDmFmpHGZh4CNINjQhXRAuunRELD+OEoDaHe4sHUfJ6J/5adByD7upq3z6ckBWVMw/sj4c+faslNhqjKkiaJ23IUt/Eq0FPYAYRm4XiZ+v7B3MZefW0GfvBH13nQggtvHG+lBg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408538; c=relaxed/simple; bh=6P6/qdwIinWh3GhPxgZSngc0CYjUPfiSTLftQro4OAU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=thabdIqNFrEobcFacR+K/DIRFEcHnXUrJGl+RqkXJqEjqQQXWGqfG+TZgw9KuyhDDLqw9EuydrQzcIieMukm8tCrQYDvMpmJ150pO0b42JOxt/HUbM6e3rXbDiFh34VLCXDK0gNp6r3MYus5qeeBr+wDXrqF0nnjfzuSQaSGu5Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Zc3KO4sA; arc=none smtp.client-ip=192.198.163.13 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Zc3KO4sA" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1765408522; x=1796944522; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6P6/qdwIinWh3GhPxgZSngc0CYjUPfiSTLftQro4OAU=; b=Zc3KO4sAeH0YsRpuCsiA7wLKxUQIGjKMdJmj/NFyEcv9iMWF3JwAm1bq KgCShDlGzq77zwlaxdpEWxvO/XGHN2P8k6Q6Te4+CpvHc1uJPcU0egGnl ayCKEyaMFlB0NGllLOCm7+xLeCjFYhOYVKEmBA1ob1CQo7ogtuSdT73Ke x2h5lhpGvrzwuS2U9yk8J86Dkoca3fhnJWOzCdd5RbbeLlTgHTXQpSayi AECv9kcsYqcBXfrNcLhePK3K9ptW3ThDMZHgSNToNHCYZFSDAdRgI7l7l dAgwH+Jvn+k3TwXaXkX0gTUG2Uz05bFTyy92iGYotfOt6I2MehekG8+xN Q==; X-CSE-ConnectionGUID: nLBd8GeqQryFYnoIyu/CBg== X-CSE-MsgGUID: yxBI1kqzR+mPUMdK1vODcA== X-IronPort-AV: E=McAfee;i="6800,10657,11638"; a="69973697" X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; d="scan'208";a="69973697" Received: from orviesa002.jf.intel.com ([10.64.159.142]) by fmvoesa107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:42 -0800 X-CSE-ConnectionGUID: h5tA16MaTOq2r3wMD/2Ifw== X-CSE-MsgGUID: EgkP/Gw2Rm2kCJPKzSOFLg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.20,265,1758610800"; 
d="scan'208";a="227297146" Received: from daliomra-mobl3.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.221.254]) by orviesa002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Dec 2025 15:14:42 -0800 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 27/32] x86,fs/resctrl: Compute number of RMIDs as minimum across resources Date: Wed, 10 Dec 2025 15:14:06 -0800 Message-ID: <20251210231413.59102-28-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that only the L3 resource supports monitor events, so it simply takes the rdt_resource::num_rmid from RDT_RESOURCE_L3 as the system's number of RMIDs. The addition of telemetry events in a different resource breaks that assumption. Compute the number of available RMIDs as the minimum value across all mon_capable resources (analogous to how the number of CLOSIDs is computed across alloc_capable resources). Note that mount time enumeration of the telemetry resource means that this number can be reduced. If this happens, then some memory will be wasted as the allocations for rdt_l3_mon_domain::mbm_states[] and rdt_l3_mon_domain::rmid_busy_llc created during resctrl initialization will be larger than needed. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/core.c | 15 +++++++++++++-- fs/resctrl/rdtgroup.c | 6 ++++++ 2 files changed, 19 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 960974ffa866..bca65851d592 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -110,12 +110,23 @@ struct rdt_hw_resource rdt_resources_all[RDT_NUM_RESO= URCES] =3D { }, }; =20 +/** + * resctrl_arch_system_num_rmid_idx - Compute number of supported RMIDs + * (minimum across all mon_capable resource) + * + * Return: Number of supported RMIDs at time of call. Note that mount time + * enumeration of resources may reduce the number. + */ u32 resctrl_arch_system_num_rmid_idx(void) { - struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl; + u32 num_rmids =3D U32_MAX; + struct rdt_resource *r; + + for_each_mon_capable_rdt_resource(r) + num_rmids =3D min(num_rmids, r->mon.num_rmid); =20 /* RMID are independent numbers for x86. num_rmid_idx =3D=3D num_rmid */ - return r->mon.num_rmid; + return num_rmids =3D=3D U32_MAX ? 0 : num_rmids; } =20 struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 60ce2390723e..e95d3d0dc515 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4352,6 +4352,12 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h * During boot this may be called before global allocations have been made= by * resctrl_l3_mon_resource_init(). * + * Called during CPU online that may run as soon as CPU online callbacks + * are set up during resctrl initialization. 
The number of supported RMIDs
+ * may be reduced if additional mon_capable resources are enumerated
+ * at mount time. This means the rdt_l3_mon_domain::mbm_states[] and
+ * rdt_l3_mon_domain::rmid_busy_llc allocations may be larger than needed.
+ *
+ * Return: 0 for success, or -ENOMEM.
  */
 static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3_mon_domain *d)
--
2.51.1

From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James
Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v16 28/32] fs/resctrl: Move RMID initialization to first mount Date: Wed, 10 Dec 2025 15:14:07 -0800 Message-ID: <20251210231413.59102-29-tony.luck@intel.com> X-Mailer: git-send-email 2.51.1 In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com> References: <20251210231413.59102-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" L3 monitor features are enumerated during resctrl initialization and rmid_ptrs[] that tracks all RMIDs and depends on the number of supported RMIDs is allocated during this time. Telemetry monitor features are enumerated during first resctrl mount and may support a different number of RMIDs compared to L3 monitor features. Delay allocation and initialization of rmid_ptrs[] until first mount. Since the number of RMIDs cannot change on later mounts, keep the same set = of rmid_ptrs[] until resctrl_exit(). This is required because the limbo handler keeps running after resctrl is unmounted and needs to access rmid_ptrs[] as it keeps tracking busy RMIDs after unmount. Rename routines to match what they now do: dom_data_init() -> setup_rmid_lru_list() dom_data_exit() -> free_rmid_lru_list() Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- fs/resctrl/internal.h | 4 ++++ fs/resctrl/monitor.c | 54 ++++++++++++++++++++----------------------- fs/resctrl/rdtgroup.c | 5 ++++ 3 files changed, 34 insertions(+), 29 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 399f625be67d..1a9b29119f88 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -369,6 +369,10 @@ int closids_supported(void); =20 void closid_free(int closid); =20 +int setup_rmid_lru_list(void); + +void free_rmid_lru_list(void); + int alloc_rmid(u32 closid); =20 void free_rmid(u32 closid, u32 rmid); diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index bbf8c9037887..0cd5476a483a 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -903,20 +903,29 @@ void mbm_setup_overflow_handler(struct rdt_l3_mon_dom= ain *dom, unsigned long del schedule_delayed_work_on(cpu, &dom->mbm_over, delay); } =20 -static int dom_data_init(struct rdt_resource *r) +int setup_rmid_lru_list(void) { - u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); struct rmid_entry *entry =3D NULL; - int err =3D 0, i; + u32 idx_limit; u32 idx; + int i; =20 - mutex_lock(&rdtgroup_mutex); + if (!resctrl_arch_mon_capable()) + return 0; =20 + /* + * Called on every mount, but the number of RMIDs cannot change + * after the first mount, so keep using the same set of rmid_ptrs[] + * until resctrl_exit(). Note that the limbo handler continues to + * access rmid_ptrs[] after resctrl is unmounted. + */ + if (rmid_ptrs) + return 0; + + idx_limit =3D resctrl_arch_system_num_rmid_idx(); rmid_ptrs =3D kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL); - if (!rmid_ptrs) { - err =3D -ENOMEM; - goto out_unlock; - } + if (!rmid_ptrs) + return -ENOMEM; =20 for (i =3D 0; i < idx_limit; i++) { entry =3D &rmid_ptrs[i]; @@ -929,30 +938,24 @@ static int dom_data_init(struct rdt_resource *r) /* * RESCTRL_RESERVED_CLOSID and RESCTRL_RESERVED_RMID are special and * are always allocated. 
These are used for the rdtgroup_default - * control group, which will be setup later in resctrl_init(). + * control group, which was setup earlier in rdtgroup_setup_default(). */ idx =3D resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID, RESCTRL_RESERVED_RMID); entry =3D __rmid_entry(idx); list_del(&entry->list); =20 -out_unlock: - mutex_unlock(&rdtgroup_mutex); - - return err; + return 0; } =20 -static void dom_data_exit(struct rdt_resource *r) +void free_rmid_lru_list(void) { - mutex_lock(&rdtgroup_mutex); - - if (!r->mon_capable) - goto out_unlock; + if (!resctrl_arch_mon_capable()) + return; =20 + mutex_lock(&rdtgroup_mutex); kfree(rmid_ptrs); rmid_ptrs =3D NULL; - -out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 @@ -1830,7 +1833,8 @@ static void closid_num_dirty_rmid_free(void) * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * * Allocate and initialise global monitor resources that do not belong to a - * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists. + * specific domain. i.e. the closid_num_dirty_rmid[] used to find the CLOS= ID + * with the cleanest set of RMIDs. * Called once during boot after the struct rdt_resource's have been confi= gured * but before the filesystem is mounted. * Resctrl's cpuhp callbacks may be called before this point to bring a do= main @@ -1850,12 +1854,6 @@ int resctrl_l3_mon_resource_init(void) if (ret) return ret; =20 - ret =3D dom_data_init(r); - if (ret) { - closid_num_dirty_rmid_free(); - return ret; - } - if (resctrl_arch_is_evt_configurable(QOS_L3_MBM_TOTAL_EVENT_ID)) { mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].configurable =3D true; resctrl_file_fflags_init("mbm_total_bytes_config", @@ -1902,6 +1900,4 @@ void resctrl_l3_mon_resource_exit(void) return; =20 closid_num_dirty_rmid_free(); - - dom_data_exit(r); } diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index e95d3d0dc515..d49ffc56ea61 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2799,6 +2799,10 @@ static int rdt_get_tree(struct fs_context *fc) goto out; } =20 + ret =3D setup_rmid_lru_list(); + if (ret) + goto out; + ret =3D rdtgroup_setup_root(ctx); if (ret) goto out; @@ -4654,4 +4658,5 @@ void resctrl_exit(void) */ =20 resctrl_l3_mon_resource_exit(); + free_rmid_lru_list(); } --=20 2.51.1 From nobody Sun Dec 14 22:14:55 2025 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.13]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7DA982EBB9E for ; Wed, 10 Dec 2025 23:15:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.13 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408538; cv=none; b=EwdjUSHOmnKCFuauII2dTeHwfPszQ3zpUBKnoeti+0k5sO3y6wsLT9wfXgXzyr6/7Dj6YsJ56x9QSpjsOOy8nzc4eHA0C9XA7/+qPvWBFNCdZJtHVPb1XFVmFdSre8uMVsogSw3Av3V0Ibxcq77XkXMcvwsJtw90kf9qnENJbRY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1765408538; c=relaxed/simple; bh=mdPSsXNAF+ChB2cevDfLFY2DwPCA14rHNG70ozRiqTE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=iCdE4AS6/IOcfUbIo/dwauZ9PDumZZ0nMCb6Lcu2x+M5T+OBuG67sPXspvs3SWTXC8vHL3r5zPNr+mcqHKGoTOEIPK2x4+/264TC+gjrdAmNZX097zgZDlbJ/M9lqO+8LITAFQfYamqFLzB/FlO0dw1JdPGmuXShFXuOQ//A9fo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; 
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 29/32] x86/resctrl: Enable RDT_RESOURCE_PERF_PKG
Date: Wed, 10 Dec 2025 15:14:08 -0800
Message-ID: <20251210231413.59102-30-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Since telemetry events are enumerated on resctrl mount, the
RDT_RESOURCE_PERF_PKG resource is not considered "monitoring capable"
during early resctrl initialization. This means that the domain list for
RDT_RESOURCE_PERF_PKG is not built when the CPU hotplug notifiers are
registered and run for the first time right after resctrl initialization.

Mark RDT_RESOURCE_PERF_PKG as "monitoring capable" upon successful
telemetry event enumeration to ensure future CPU hotplug events include
this resource, and initialize its domain list for CPUs that are already
online.

Print a console log message announcing the name of the detected telemetry
feature.
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/core.c      | 16 +++++++++++++++-
 arch/x86/kernel/cpu/resctrl/intel_aet.c |  6 ++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index bca65851d592..829633bc54e5 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -766,14 +766,28 @@ static int resctrl_arch_offline_cpu(unsigned int cpu)
 
 void resctrl_arch_pre_mount(void)
 {
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_resctrl;
 	static atomic_t only_once = ATOMIC_INIT(0);
-	int old = 0;
+	int cpu, old = 0;
 
 	if (!atomic_try_cmpxchg(&only_once, &old, 1))
 		return;
 
 	if (!intel_aet_get_events())
 		return;
+
+	/*
+	 * Late discovery of telemetry events means the domains for the
+	 * resource were not built. Do that now.
+	 */
+	cpus_read_lock();
+	mutex_lock(&domain_list_lock);
+	r->mon_capable = true;
+	rdt_mon_capable = true;
+	for_each_online_cpu(cpu)
+		domain_add_cpu_mon(cpu, r);
+	mutex_unlock(&domain_list_lock);
+	cpus_read_unlock();
 }
 
 enum {
diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/resctrl/intel_aet.c
index 34f882df1243..ea613e49d213 100644
--- a/arch/x86/kernel/cpu/resctrl/intel_aet.c
+++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c
@@ -275,6 +275,12 @@ static bool enable_events(struct event_group *e, struct pmt_feature_group *p)
 	else
 		r->mon.num_rmid = e->num_rmid;
 
+	if (skipped_events)
+		pr_info("%s %s:0x%x monitoring detected (skipped %d events)\n", r->name,
+			e->pfname, e->guid, skipped_events);
+	else
+		pr_info("%s %s:0x%x monitoring detected\n", r->name, e->pfname, e->guid);
+
 	return true;
 }
 
-- 
2.51.1

From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 30/32] fs/resctrl: Provide interface to create architecture specific debugfs area
Date: Wed, 10 Dec 2025 15:14:09 -0800
Message-ID: <20251210231413.59102-31-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

All files below /sys/fs/resctrl are considered user ABI. This leaves no
place for architectures to provide additional interfaces.

Add resctrl_debugfs_mon_info_arch_mkdir(), which creates a directory in
the debugfs file system for a monitoring resource. Naming follows the
layout of the main resctrl hierarchy:

	/sys/kernel/debug/resctrl/info/{resource}_MON/{arch}

The last-level {arch} directory name matches the output of the user-level
"uname -m" command.

Architecture code may use this directory for debug information, or for
minor tuning of features. It must not be used for basic feature enabling
as debugfs may not be configured/mounted on production systems.

Suggested-by: Reinette Chatre
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 include/linux/resctrl.h | 10 ++++++++++
 fs/resctrl/rdtgroup.c   | 29 +++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 8623e450619a..b862a9dd785b 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -702,6 +702,16 @@ bool resctrl_arch_get_io_alloc_enabled(struct rdt_resource *r);
 extern unsigned int resctrl_rmid_realloc_threshold;
 extern unsigned int resctrl_rmid_realloc_limit;
 
+/**
+ * resctrl_debugfs_mon_info_arch_mkdir() - Create a debugfs info directory.
+ * Removed by resctrl_exit().
+ * @r: Resource (must be mon_capable).
+ *
+ * Return: NULL if resource is not monitoring capable,
+ * dentry pointer on success, or ERR_PTR(-ERROR) on failure.
+ */
+struct dentry *resctrl_debugfs_mon_info_arch_mkdir(struct rdt_resource *r);
+
 int resctrl_init(void);
 void resctrl_exit(void);
 
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index d49ffc56ea61..d13291d71adf 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -75,6 +76,8 @@ static void rdtgroup_destroy_root(void);
 
 struct dentry *debugfs_resctrl;
 
+static struct dentry *debugfs_resctrl_info;
+
 /*
  * Memory bandwidth monitoring event to use for the default CTRL_MON group
  * and each new CTRL_MON group created by the user. Only relevant when
@@ -4599,6 +4602,31 @@ int resctrl_init(void)
 	return ret;
 }
 
+/*
+ * Create /sys/kernel/debug/resctrl/info/{r->name}_MON/{arch} directory
+ * by request for architecture to use for debugging or minor tuning.
+ * Basic functionality of features must not be controlled by files
+ * added to this directory as debugfs may not be configured/mounted
+ * on production systems.
+ */
+struct dentry *resctrl_debugfs_mon_info_arch_mkdir(struct rdt_resource *r)
+{
+	struct dentry *moninfodir;
+	char name[32];
+
+	if (!r->mon_capable)
+		return NULL;
+
+	if (!debugfs_resctrl_info)
+		debugfs_resctrl_info = debugfs_create_dir("info", debugfs_resctrl);
+
+	sprintf(name, "%s_MON", r->name);
+
+	moninfodir = debugfs_create_dir(name, debugfs_resctrl_info);
+
+	return debugfs_create_dir(utsname()->machine, moninfodir);
+}
+
 static bool resctrl_online_domains_exist(void)
 {
 	struct rdt_resource *r;
@@ -4650,6 +4678,7 @@ void resctrl_exit(void)
 
 	debugfs_remove_recursive(debugfs_resctrl);
 	debugfs_resctrl = NULL;
+	debugfs_resctrl_info = NULL;
 	unregister_filesystem(&rdt_fs_type);
 
 	/*
-- 
2.51.1

From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 31/32] x86/resctrl: Add debugfs files to show telemetry aggregator status
Date: Wed, 10 Dec 2025 15:14:10 -0800
Message-ID: <20251210231413.59102-32-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Each telemetry aggregator provides three status registers at the top end
of MMIO space after all the per-RMID per-event counters:

data_loss_count:
	This counts the number of times that this aggregator failed to
	accumulate a counter value supplied by a CPU core.

data_loss_timestamp:
	This is a "timestamp" from a free-running 25MHz uncore timer
	indicating when the most recent data loss occurred.

last_update_timestamp:
	Another 25MHz timestamp indicating when the most recent counter
	update was successfully applied.

Create files in /sys/kernel/debug/resctrl/info/PERF_PKG_MON/x86_64/ to
display the value of each of these status registers for each aggregator
in each enabled event group.

The prefix for each file name describes the type of aggregator, the guid,
which package it is located on, and an opaque instance number to provide
a unique file name when there are multiple aggregators on a package. The
suffix is one of the three strings listed above. An example name is:

	energy_0x26696143_pkg0_agg2_data_loss_count

These files are removed along with all other debugfs entries by the call
to debugfs_remove_recursive() in resctrl_exit().
Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/internal.h  |  2 +
 arch/x86/kernel/cpu/resctrl/core.c      |  2 +
 arch/x86/kernel/cpu/resctrl/intel_aet.c | 60 +++++++++++++++++++++++++
 3 files changed, 64 insertions(+)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index df09091f7c6c..e538174fe193 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -236,6 +236,7 @@ void __exit intel_aet_exit(void);
 int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64 *val);
 void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r,
 				struct list_head *add_pos);
+void intel_aet_add_debugfs(void);
 bool intel_aet_option(bool force_off, char *tok);
 #else
 static inline bool intel_aet_get_events(void) { return false; }
@@ -247,6 +248,7 @@ static inline int intel_aet_read_event(int domid, u32 rmid, void *arch_priv, u64
 
 static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r,
 					      struct list_head *add_pos) { }
+static inline void intel_aet_add_debugfs(void) { }
 static inline bool intel_aet_option(bool force_off, char *tok) { return false; }
 #endif
 
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 829633bc54e5..62e96aad060d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -788,6 +788,8 @@ void resctrl_arch_pre_mount(void)
 		domain_add_cpu_mon(cpu, r);
 	mutex_unlock(&domain_list_lock);
 	cpus_read_unlock();
+
+	intel_aet_add_debugfs();
 }
 
 enum {
diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/resctrl/intel_aet.c
index ea613e49d213..4071959b0d3e 100644
--- a/arch/x86/kernel/cpu/resctrl/intel_aet.c
+++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c
@@ -15,8 +15,11 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -29,6 +32,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -228,6 +232,46 @@ static bool all_regions_have_sufficient_rmid(struct event_group *e, struct pmt_f
 	return ret;
 }
 
+static int status_read(void *priv, u64 *val)
+{
+	void __iomem *info = (void __iomem *)priv;
+
+	*val = readq(info);
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(status_fops, status_read, NULL, "%llu\n");
+
+static void make_status_files(struct dentry *dir, struct event_group *e, u8 pkg,
+			      int instance, void *info_end)
+{
+	char name[80];
+
+	sprintf(name, "%s_0x%x_pkg%u_agg%d_data_loss_count", e->pfname, e->guid, pkg, instance);
+	debugfs_create_file(name, 0400, dir, info_end - 24, &status_fops);
+
+	sprintf(name, "%s_0x%x_pkg%u_agg%d_data_loss_timestamp", e->pfname, e->guid, pkg, instance);
+	debugfs_create_file(name, 0400, dir, info_end - 16, &status_fops);
+
+	sprintf(name, "%s_0x%x_pkg%u_agg%d_last_update_timestamp", e->pfname, e->guid, pkg, instance);
+	debugfs_create_file(name, 0400, dir, info_end - 8, &status_fops);
+}
+
+static void create_debug_event_status_files(struct dentry *dir, struct event_group *e)
+{
+	struct pmt_feature_group *p = e->pfg;
+	void *info_end;
+
+	for (int i = 0; i < p->count; i++) {
+		if (!p->regions[i].addr)
+			continue;
+		info_end = (void __force *)p->regions[i].addr + e->mmio_size;
+		make_status_files(dir, e, p->regions[i].plat_info.package_id,
+				  i, info_end);
+	}
+}
+
 static bool enable_events(struct event_group *e, struct pmt_feature_group *p)
 {
	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_resctrl;
@@ -412,3 +456,19 @@ void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r,
 		kfree(d);
 	}
 }
+
+void intel_aet_add_debugfs(void)
+{
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_resctrl;
+	struct event_group **peg;
+	struct dentry *infodir;
+
+	infodir = resctrl_debugfs_mon_info_arch_mkdir(r);
+
+	if (IS_ERR_OR_NULL(infodir))
+		return;
+
+	for_each_event_group(peg)
+		if ((*peg)->pfg)
+			create_debug_event_status_files(infodir, *peg);
+}
-- 
2.51.1

From nobody Sun Dec 14 22:14:55 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v16 32/32] x86,fs/resctrl: Update documentation for telemetry events
Date: Wed, 10 Dec 2025 15:14:11 -0800
Message-ID: <20251210231413.59102-33-tony.luck@intel.com>
In-Reply-To: <20251210231413.59102-1-tony.luck@intel.com>
References: <20251210231413.59102-1-tony.luck@intel.com>

Update the resctrl filesystem documentation with details about the
resctrl files that support telemetry events.

Signed-off-by: Tony Luck
---
 Documentation/filesystems/resctrl.rst | 101 +++++++++++++++++++++++---
 1 file changed, 89 insertions(+), 12 deletions(-)

diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
index 8c8ce678148a..513d04e0be29 100644
--- a/Documentation/filesystems/resctrl.rst
+++ b/Documentation/filesystems/resctrl.rst
@@ -252,13 +252,12 @@ with respect to allocation:
 	bandwidth percentages are directly applied to the threads
 	running on the core
 
-If RDT monitoring is available there will be an "L3_MON" directory
+If L3 monitoring is available there will be an "L3_MON" directory
 with the following files:
 
 "num_rmids":
-	The number of RMIDs available. This is the
-	upper bound for how many "CTRL_MON" + "MON"
-	groups can be created.
+	The number of RMIDs supported by hardware for
+	L3 monitoring events.
 
 "mon_features":
	Lists the monitoring events if
@@ -484,6 +483,24 @@ with the following files:
	bytes) at which a previously used LLC_occupancy
	counter can be considered for re-use.
 
+If telemetry monitoring is available there will be a "PERF_PKG_MON" directory
+with the following files:
+
+"num_rmids":
+	The number of RMIDs for telemetry monitoring events.
+
+	On Intel resctrl will not enable telemetry events if the number of
+	RMIDs that can be tracked concurrently is lower than the total number
+	of RMIDs supported. Telemetry events can be force-enabled with the
+	"rdt=" kernel parameter, but this may reduce the number of
+	monitoring groups that can be created.
+
+"mon_features":
+	Lists the telemetry monitoring events that are enabled on this system.
+
+The upper bound for how many "CTRL_MON" + "MON" groups can be created
+is the smaller of the L3_MON and PERF_PKG_MON "num_rmids" values.
+
 Finally, in the top level of the "info" directory there is a file
 named "last_cmd_status". This is reset with every "command" issued
 via the file system (making new directories or writing to any of the
@@ -589,15 +606,40 @@ When control is enabled all CTRL_MON groups will also contain:
 When monitoring is enabled all MON groups will also contain:
 
 "mon_data":
-	This contains a set of files organized by L3 domain and by
-	RDT event. E.g. on a system with two L3 domains there will
-	be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
-	directories have one file per event (e.g. "llc_occupancy",
-	"mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
-	files provide a read out of the current value of the event for
-	all tasks in the group. In CTRL_MON groups these files provide
-	the sum for all tasks in the CTRL_MON group and all tasks in
+	This contains directories for each monitor domain.
+
+	If L3 monitoring is enabled, there will be a "mon_L3_XX" directory for
+	each instance of an L3 cache. Each directory contains files for the enabled
+	L3 events (e.g. "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes").
+
+	If telemetry monitoring is enabled, there will be a "mon_PERF_PKG_YY"
+	directory for each physical processor package. Each directory contains
+	files for the enabled telemetry events (e.g. "core_energy", "activity",
+	"uops_retired", etc.).
+
+	The info/`*`/mon_features files provide the full list of enabled
+	event/file names.
+
+	"core_energy" reports a floating point number for the energy (in Joules)
+	consumed by cores (registers, arithmetic units, TLB and L1/L2 caches)
+	during execution of instructions summed across all logical CPUs on a
+	package for the current monitoring group.
+
+	"activity" also reports a floating point value (in Farads). This provides
+	an estimate of work done independent of the frequency that the CPUs used
+	for execution.
+
+	Note that "core_energy" and "activity" only measure energy/activity in the
+	"core" of the CPU (arithmetic units, TLB, L1 and L2 caches, etc.). They
+	do not include L3 cache, memory, I/O devices, etc.
+
+	All other events report decimal integer values.
+
+	In a MON group these files provide a read out of the current value of
+	the event for all tasks in the group. In CTRL_MON groups these files
+	provide the sum for all tasks in the CTRL_MON group and all tasks in
	MON groups. Please see example section for more details on usage.
+
 On systems with Sub-NUMA Cluster (SNC) enabled there are extra
 directories for each node (located within the "mon_L3_XX" directory
 for the L3 cache they occupy). These are named "mon_sub_L3_YY"
@@ -1590,6 +1632,41 @@ Example with C::
	resctrl_release_lock(fd);
 }
 
+Debugfs
+=======
+In addition to the use of debugfs for tracing of pseudo-locking performance,
+architecture code may create debugfs directories associated with monitoring
+features for a specific resource.
+
+The full pathname for these is in the form:
+
+	/sys/kernel/debug/resctrl/info/{resource_name}_MON/{arch}/
+
+The presence, names, and format of these files may vary between architectures
+even if the same resource is present.
+
+PERF_PKG_MON/x86_64
+-------------------
+Three status files are present for each telemetry aggregator instance.
+The prefix of each file name describes the type ("energy" or "perf"), the
+guid, which processor package it belongs to, and the instance number of the
+aggregator. For example: "energy_0x26696143_pkg1_agg2".
+
+The suffix describes which data is reported in the file and is one of:
+
+data_loss_count:
+	This counts the number of times that this aggregator
+	failed to accumulate a counter value supplied by a CPU.
+
+data_loss_timestamp:
+	This is a "timestamp" from a free-running 25MHz uncore
+	timer indicating when the most recent data loss occurred.
+
+last_update_timestamp:
+	Another 25MHz timestamp indicating when the
+	most recent counter update was successfully applied.
+
+
 Examples for RDT Monitoring along with allocation usage
 ========================================================
 Reading monitored data
-- 
2.51.1
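
The documentation patch above describes per-package telemetry event files under
mon_data. The following user-space program is a minimal sketch (not part of the
patch series) of reading one such file. It assumes resctrl is mounted at
/sys/fs/resctrl, that a mon_data/mon_PERF_PKG_00 domain directory exists, and
that the "core_energy" event is enabled; the event names actually available come
from info/PERF_PKG_MON/mon_features on the target system.

	/*
	 * Illustrative sketch only: read one telemetry event value for the
	 * default monitoring group. The path and the "core_energy" event name
	 * are assumptions; consult info/PERF_PKG_MON/mon_features for the
	 * events enabled on a given platform.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		const char *path =
			"/sys/fs/resctrl/mon_data/mon_PERF_PKG_00/core_energy";
		char buf[64];
		FILE *f = fopen(path, "r");

		if (!f) {
			perror(path);
			return EXIT_FAILURE;
		}
		if (fgets(buf, sizeof(buf), f)) {
			/* core_energy reports a floating point value in Joules */
			printf("core_energy: %s", buf);
		}
		fclose(f);
		return EXIT_SUCCESS;
	}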