From nobody Sat Sep 27 20:26:34 2025
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v11 01/31] x86,fs/resctrl: Improve domain type checking
Date: Thu, 25 Sep 2025 13:02:55 -0700
Message-ID: <20250925200328.64155-2-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com>
References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Each resctrl resource has a list of domain structures. These all begin with a common rdt_domain_hdr. Improve type checking of these headers by adding the resource id. Add domain_header_is_valid() before each call to container_of() to ensure the domain is the expected type. Signed-off-by: Tony Luck --- include/linux/resctrl.h | 9 +++++++++ arch/x86/kernel/cpu/resctrl/core.c | 10 ++++++---- fs/resctrl/ctrlmondata.c | 2 +- 3 files changed, 16 insertions(+), 5 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index a7d92718b653..dfc91c5e8483 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -131,15 +131,24 @@ enum resctrl_domain_type { * @list: all instances of this resource * @id: unique id for this instance * @type: type of this instance + * @rid: resource id for this instance * @cpu_mask: which CPUs share this resource */ struct rdt_domain_hdr { struct list_head list; int id; enum resctrl_domain_type type; + enum resctrl_res_level rid; struct cpumask cpu_mask; }; =20 +static inline bool domain_header_is_valid(struct rdt_domain_hdr *hdr, + enum resctrl_domain_type type, + enum resctrl_res_level rid) +{ + return !WARN_ON_ONCE(hdr->type !=3D type || hdr->rid !=3D rid); +} + /** * struct rdt_ctrl_domain - group of CPUs sharing a resctrl control resour= ce * @hdr: common header for different domain types diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 06ca5a30140c..8be2619db2e7 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -459,7 +459,7 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) =20 hdr =3D resctrl_find_domain(&r->ctrl_domains, id, &add_pos); if (hdr) { - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_CTRL_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid)) return; d =3D container_of(hdr, struct rdt_ctrl_domain, hdr); =20 @@ -476,6 +476,7 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) d =3D &hw_dom->d_resctrl; d->hdr.id =3D id; d->hdr.type =3D RESCTRL_CTRL_DOMAIN; + d->hdr.rid =3D r->rid; cpumask_set_cpu(cpu, &d->hdr.cpu_mask); =20 rdt_domain_reconfigure_cdp(r); @@ -515,7 +516,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_reso= urce *r) =20 hdr =3D resctrl_find_domain(&r->mon_domains, id, &add_pos); if (hdr) { - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_MON_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) return; d =3D container_of(hdr, struct rdt_mon_domain, hdr); =20 @@ -533,6 +534,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_reso= urce *r) d =3D &hw_dom->d_resctrl; d->hdr.id =3D id; d->hdr.type =3D RESCTRL_MON_DOMAIN; + d->hdr.rid =3D r->rid; ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); @@ -593,7 +595,7 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_= resource *r) return; } =20 - if (WARN_ON_ONCE(hdr->type !=3D RESCTRL_CTRL_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid)) return; =20 d =3D container_of(hdr, struct rdt_ctrl_domain, hdr); @@ -639,7 +641,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) return; } =20 - if 
(WARN_ON_ONCE(hdr->type !=3D RESCTRL_MON_DOMAIN)) + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) return; =20 d =3D container_of(hdr, struct rdt_mon_domain, hdr); diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 0d0ef54fc4de..f248eaf50d3c 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -649,7 +649,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) * the resource to find the domain with "domid". */ hdr =3D resctrl_find_domain(&r->mon_domains, domid, NULL); - if (!hdr || WARN_ON_ONCE(hdr->type !=3D RESCTRL_MON_DOMAIN)) { + if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) { ret =3D -ENOENT; goto out; } --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0A1B131B824 for ; Thu, 25 Sep 2025 20:04:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830647; cv=none; b=ombtDumuMlNq1aq9/UYaNegaEEvct5YCLedBtnZTVjjZX87ZBQbSJgj/TYzHthHCFlwJfdO4EwhPaLxoOIVTjnAk4zNM6ORmsfK4jdspe/sSNUfp3pmsB4A5YhxqBCYZrbRYA0Pz+xPsVGLS6BGlh+eFlDXbgzv8Vzfym6XJxsw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830647; c=relaxed/simple; bh=QpDDN10wiuqjDcD8OQKIjrS6jeO4TdOlFQbokXt5DZE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VTwIrLZgo5sc7XiXZVVwiBKdwERKKg5lIRUcfTKRwiTMOlEYC9ndXKcdcdJJzk6D7vhjy4brHo+wPmH18phLvxh4QUr1vT0ElWaNPDu83aDitj84yXBC5/FYZGvy3DDD+bxkhjkWqqRLDr9DAzdP1PXXvwkSML58KritFscjAx0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=e0NRjNfi; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="e0NRjNfi" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830646; x=1790366646; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=QpDDN10wiuqjDcD8OQKIjrS6jeO4TdOlFQbokXt5DZE=; b=e0NRjNfibSswMhUQXqLPaqAKxiWCwcIo/HRtpsRAUAJt65AaWLyM7UYc 6xMBQjwUXC4t7OIwy9UxULbpNXq2M9G8ejJeo9sVQhIQUeUQUq06WkeEk /smUY8U1zCDeMfC9ij9FwmVYy1e5UqODCRut38V6dPb/FV+1Fed0RlKAa S9mIS5CMfdnuGzLccgSPhXlopRXu+FFjH0uQgf0hfozUzunfWuJdjz3J1 3+Pv8T8zCzpozTivfbkOxK1ynCZJF/E34bFJ9S02PC/tMjjph7h5X6UOW 3yivIt1eeXc3acSGWn+S4Mre2+L8C8ZCC5AqvqBgbSZc70vX0FwckaMLL Q==; X-CSE-ConnectionGUID: eAw9GHzrTFm+U/zQn3PfGg== X-CSE-MsgGUID: ZRGoWstjRh+ljK/fkhjvtA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074144" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074144" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:05 -0700 X-CSE-ConnectionGUID: pJsaP+pZQdmBTMXc5SHqog== X-CSE-MsgGUID: 9Hd5LD+ZTHCbTILRVq0uiA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; 
d="scan'208";a="177003600" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:04 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 02/31] x86/resctrl: Move L3 initialization into new helper function Date: Thu, 25 Sep 2025 13:02:56 -0700 Message-ID: <20250925200328.64155-3-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Carve out the resource monitoring domain init code into a separate helper in order to be able to initialize new types of monitoring domains besides the usual L3 ones. Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/core.c | 64 ++++++++++++++++-------------- 1 file changed, 34 insertions(+), 30 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 8be2619db2e7..d422ae3b7ed6 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -496,37 +496,13 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_r= esource *r) } } =20 -static void domain_add_cpu_mon(int cpu, struct rdt_resource *r) +static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, s= truct list_head *add_pos) { - int id =3D get_domain_id_from_scope(cpu, r->mon_scope); - struct list_head *add_pos =3D NULL; struct rdt_hw_mon_domain *hw_dom; - struct rdt_domain_hdr *hdr; struct rdt_mon_domain *d; struct cacheinfo *ci; int err; =20 - lockdep_assert_held(&domain_list_lock); - - if (id < 0) { - pr_warn_once("Can't find monitor domain id for CPU:%d scope:%d for resou= rce %s\n", - cpu, r->mon_scope, r->name); - return; - } - - hdr =3D resctrl_find_domain(&r->mon_domains, id, &add_pos); - if (hdr) { - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) - return; - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - - cpumask_set_cpu(cpu, &d->hdr.cpu_mask); - /* Update the mbm_assign_mode state for the CPU if supported */ - if (r->mon.mbm_cntr_assignable) - resctrl_arch_mbm_cntr_assign_set_one(r); - return; - } - hw_dom =3D kzalloc_node(sizeof(*hw_dom), GFP_KERNEL, cpu_to_node(cpu)); if (!hw_dom) return; @@ -534,7 +510,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_reso= urce *r) d =3D &hw_dom->d_resctrl; d->hdr.id =3D id; d->hdr.type =3D RESCTRL_MON_DOMAIN; - d->hdr.rid =3D r->rid; + d->hdr.rid =3D RDT_RESOURCE_L3; ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); @@ -544,10 +520,6 @@ static void domain_add_cpu_mon(int cpu, struct rdt_res= ource *r) d->ci_id =3D ci->id; cpumask_set_cpu(cpu, &d->hdr.cpu_mask); =20 - /* Update the mbm_assign_mode state for the CPU if supported */ - if (r->mon.mbm_cntr_assignable) - resctrl_arch_mbm_cntr_assign_set_one(r); - arch_mon_domain_online(r, d); =20 if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { @@ -565,6 +537,38 @@ static void domain_add_cpu_mon(int cpu, struct rdt_res= ource *r) } 
} =20 +static void domain_add_cpu_mon(int cpu, struct rdt_resource *r) +{ + int id =3D get_domain_id_from_scope(cpu, r->mon_scope); + struct list_head *add_pos =3D NULL; + struct rdt_domain_hdr *hdr; + + lockdep_assert_held(&domain_list_lock); + + if (id < 0) { + pr_warn_once("Can't find monitor domain id for CPU:%d scope:%d for resou= rce %s\n", + cpu, r->mon_scope, r->name); + return; + } + + hdr =3D resctrl_find_domain(&r->mon_domains, id, &add_pos); + if (hdr) + cpumask_set_cpu(cpu, &hdr->cpu_mask); + + switch (r->rid) { + case RDT_RESOURCE_L3: + /* Update the mbm_assign_mode state for the CPU if supported */ + if (r->mon.mbm_cntr_assignable) + resctrl_arch_mbm_cntr_assign_set_one(r); + if (!hdr) + l3_mon_domain_setup(cpu, id, r, add_pos); + break; + default: + pr_warn_once("Unknown resource rid=3D%d\n", r->rid); + break; + } +} + static void domain_add_cpu(int cpu, struct rdt_resource *r) { if (r->alloc_capable) --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E0A4C31DD94 for ; Thu, 25 Sep 2025 20:04:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830648; cv=none; b=S+z54thm7HyuL0AOIpjXlXj+mRfrFYGpYLfUrpWLV3pEbrFs4hFVQ8WGX1j+K3DZ+gbg2o9dbPBcAQre0c2WkCdH37ijto3pydqtEeckNAtqLChO57t9qYvtjOPJrt5FF2KSen10P5cleG9a9oOMNMp+djVTL3jXF6ye/3rATjY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830648; c=relaxed/simple; bh=vO9qC1IKNnoJg1FEY+mpdsCvg9yf+NzRFgSMvAtgDrQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dFbmuun/ZdZVDAxMdzkz9fZbx2xVBTBv8MpmcYiNyE607IO0mcVtfLfH73x7gWT2yRbkG8shZJN9yPutbQw5J91clcg2l2e4pqOZy3cXThA5PrZQhUeJJbMAMV/GqPrzQpvmjDwavE0O9KjHafLEq8+vZVwjqRFdVkmE3fk6Pgk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=TpvFK/dx; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="TpvFK/dx" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830647; x=1790366647; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=vO9qC1IKNnoJg1FEY+mpdsCvg9yf+NzRFgSMvAtgDrQ=; b=TpvFK/dxXLHZa2MnHd+OEp3Q+cupDrNle2umwPzmrstwOCYjOF7Yy9wu MZ67jJFxfzv27thxJwmjBbMKbgq9OkOqS1b1jp18e/0wy0HxXXo5V6h0H TIIW1qbgyUQ1lE/G6Ah+yetNm+1MXRHBHhHTYDR7Zjuu+FPmyAjEKKopi 0mkG+QUU0V1YZZIVw81ri/ZcZM6cJmVJ2g8fpZu/KGIFHw9Ww0d1D4fWy MJk/syRY5rt1fsJQpmAX2uVSn7RnnihFzNzJ5E7EhUJr9mLk1G0q8qBbR ZEsbCKlx1tsiig/ZDowD/jHN+UxMDmH7Fhj/N5J62N/NnUJrY0ITAmJEH w==; X-CSE-ConnectionGUID: 1eu5srTrQfuO39wtIrGffw== X-CSE-MsgGUID: 1Ry08yMSStmRH0YOHDdx9g== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074152" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074152" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com 
with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:05 -0700 X-CSE-ConnectionGUID: 3PCIaxO9QGGCUwWkxoP4gg== X-CSE-MsgGUID: wYgKQ2NYSz6NZI0hO6HBUg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003603" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:05 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 03/31] x86,fs/resctrl: Refactor domain_remove_cpu_mon() ready for new domain types Date: Thu, 25 Sep 2025 13:02:57 -0700 Message-ID: <20250925200328.64155-4-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" New telemetry events will be associated with a new package scoped resource with new domain structures. Refactor domain_remove_cpu_mon() so all the L3 processing is separate from general actions of clearing the CPU bit in the mask. Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/core.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index d422ae3b7ed6..b471918bced6 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -645,20 +645,25 @@ static void domain_remove_cpu_mon(int cpu, struct rdt= _resource *r) return; } =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid)) + cpumask_clear_cpu(cpu, &hdr->cpu_mask); + if (!cpumask_empty(&hdr->cpu_mask)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - hw_dom =3D resctrl_to_arch_mon_dom(d); + switch (r->rid) { + case RDT_RESOURCE_L3: + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; =20 - cpumask_clear_cpu(cpu, &d->hdr.cpu_mask); - if (cpumask_empty(&d->hdr.cpu_mask)) { + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + hw_dom =3D resctrl_to_arch_mon_dom(d); resctrl_offline_mon_domain(r, d); - list_del_rcu(&d->hdr.list); + list_del_rcu(&hdr->list); synchronize_rcu(); mon_domain_free(hw_dom); - - return; + break; + default: + pr_warn_once("Unknown resource rid=3D%d\n", r->rid); + break; } } =20 --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2170B31E0E4 for ; Thu, 25 Sep 2025 20:04:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830648; cv=none; b=cpbTSCeg6RUbmwri++8j0RVMCzQV1qD3LA17f9NNsJC+VIZiy51jrhpTrydyEcqboqA/78aPaZ7eRtSjVJ6QUeU8pTFbVAero/BInfojsL3ycOXy/LUtqYx7bKYozX9SpzUxRcA9egHEwM5UkHMV5Pd3oQoZe5A7Css5wNnJzeo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830648; c=relaxed/simple; 
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v11 04/31] x86/resctrl: Clean up domain_remove_cpu_ctrl()
Date: Thu, 25 Sep 2025 13:02:58 -0700
Message-ID: <20250925200328.64155-5-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com>
References: <20250925200328.64155-1-tony.luck@intel.com>

For symmetry with domain_remove_cpu_mon(), refactor domain_remove_cpu_ctrl() to return early when removing a CPU does not empty the domain.
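In outline, the reworked function clears the CPU from the domain mask first and returns early while other CPUs remain, so the teardown code is no longer nested inside a conditional. A condensed sketch of the flow in the diff below (not the complete function):

	cpumask_clear_cpu(cpu, &hdr->cpu_mask);
	if (!cpumask_empty(&hdr->cpu_mask))
		return;		/* domain still has CPUs; nothing to tear down */

	if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid))
		return;

	d = container_of(hdr, struct rdt_ctrl_domain, hdr);
	/* offline the domain, drop it from the list, then free it */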
Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- arch/x86/kernel/cpu/resctrl/core.c | 29 ++++++++++++++--------------- 1 file changed, 14 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index b471918bced6..28c8e28bb1dd 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -599,28 +599,27 @@ static void domain_remove_cpu_ctrl(int cpu, struct rd= t_resource *r) return; } =20 + cpumask_clear_cpu(cpu, &hdr->cpu_mask); + if (!cpumask_empty(&hdr->cpu_mask)) + return; + if (!domain_header_is_valid(hdr, RESCTRL_CTRL_DOMAIN, r->rid)) return; =20 d =3D container_of(hdr, struct rdt_ctrl_domain, hdr); hw_dom =3D resctrl_to_arch_ctrl_dom(d); =20 - cpumask_clear_cpu(cpu, &d->hdr.cpu_mask); - if (cpumask_empty(&d->hdr.cpu_mask)) { - resctrl_offline_ctrl_domain(r, d); - list_del_rcu(&d->hdr.list); - synchronize_rcu(); - - /* - * rdt_ctrl_domain "d" is going to be freed below, so clear - * its pointer from pseudo_lock_region struct. - */ - if (d->plr) - d->plr->d =3D NULL; - ctrl_domain_free(hw_dom); + resctrl_offline_ctrl_domain(r, d); + list_del_rcu(&hdr->list); + synchronize_rcu(); =20 - return; - } + /* + * rdt_ctrl_domain "d" is going to be freed below, so clear + * its pointer from pseudo_lock_region struct. + */ + if (d->plr) + d->plr->d =3D NULL; + ctrl_domain_free(hw_dom); } =20 static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r) --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D59D531E894 for ; Thu, 25 Sep 2025 20:04:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830649; cv=none; b=WpQXertvNYyy1qrd4uahm2fFsYASbsQxFfYD58PMCn4+SxDL+dlYOUEjmizYpIcAEsFI9DdyBmTOPtd2zTEV72K/TGyaYqL0909735j56fLL5l9d7JOZI06ovbrjOw1mM5DYBAYfp6/4mD52Q+WbYu4one4+O3gh6zUXuHRIBaw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830649; c=relaxed/simple; bh=IjzQ6pFyAEvXMme6bmcXQStx7GqYf4uw5lkgRlZejjI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LbuCg6E3UZU0zrC7qjawl1u//kEoESB7ZyOTtA8HW+GrxClBXzwT6V24kecGXit9DOxSyQiK1/g0St3bjK7zDFN2GAEed8UP5dgjkYP6Nh9xgcSciA/LZ4/W12fLHPl/7SO4hfLUnZMOpQ+YqV8DGbptcQYtJfOdnzkEmM8AvZ4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=FQE71ykK; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="FQE71ykK" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830648; x=1790366648; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=IjzQ6pFyAEvXMme6bmcXQStx7GqYf4uw5lkgRlZejjI=; b=FQE71ykKPYT+KKIkTmgisVsPideQHku0PfO/yM1DtK7dS1T1IyQZa7B2 oJdNpqoOqX04rqa/v3GL3tIVIBOIbIpdaBR9znJOSkTU/v73CE/8TX6VJ 
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v11 05/31] x86,fs/resctrl: Refactor domain create/remove using struct rdt_domain_hdr
Date: Thu, 25 Sep 2025 13:02:59 -0700
Message-ID: <20250925200328.64155-6-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com>
References: <20250925200328.64155-1-tony.luck@intel.com>

Up until now, all monitoring events were associated with the L3 resource, so it made sense to pass L3-specific "struct rdt_mon_domain *" arguments to the functions manipulating domains. To simplify enumeration of domains for events in other resources, change the calling convention to pass the generic struct rdt_domain_hdr and use it to find the domain-specific structure where needed.
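With this change, code that needs the L3-specific structure first validates the generic header and then converts it with container_of(). A minimal sketch of the pattern used throughout the hunks below (error handling abbreviated):

	struct rdt_mon_domain *d;

	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
		return -EINVAL;

	d = container_of(hdr, struct rdt_mon_domain, hdr);
	/* ... use the L3-specific fields of *d ... */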
Signed-off-by: Tony Luck --- include/linux/resctrl.h | 4 +- fs/resctrl/internal.h | 2 +- arch/x86/kernel/cpu/resctrl/core.c | 4 +- fs/resctrl/ctrlmondata.c | 15 ++++--- fs/resctrl/rdtgroup.c | 65 ++++++++++++++++++++---------- 5 files changed, 58 insertions(+), 32 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index dfc91c5e8483..0b55809af5d7 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -504,9 +504,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, str= uct rdt_ctrl_domain *d, u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain= *d, u32 closid, enum resctrl_conf_type type); int resctrl_online_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_dom= ain *d); -int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domai= n *d); +int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr); void resctrl_offline_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_d= omain *d); -void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_dom= ain *d); +void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr); void resctrl_online_cpu(unsigned int cpu); void resctrl_offline_cpu(unsigned int cpu); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index cf1fd82dc5a9..22fdb3a9b6f4 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -362,7 +362,7 @@ void mon_event_count(void *info); int rdtgroup_mondata_show(struct seq_file *m, void *arg); =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, - struct rdt_mon_domain *d, struct rdtgroup *rdtgrp, + struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first); =20 int resctrl_mon_resource_init(void); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 28c8e28bb1dd..2d93387b9251 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -529,7 +529,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct =20 list_add_tail_rcu(&d->hdr.list, add_pos); =20 - err =3D resctrl_online_mon_domain(r, d); + err =3D resctrl_online_mon_domain(r, &d->hdr); if (err) { list_del_rcu(&d->hdr.list); synchronize_rcu(); @@ -655,7 +655,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) =20 d =3D container_of(hdr, struct rdt_mon_domain, hdr); hw_dom =3D resctrl_to_arch_mon_dom(d); - resctrl_offline_mon_domain(r, d); + resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); synchronize_rcu(); mon_domain_free(hw_dom); diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index f248eaf50d3c..3ceef35208be 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -547,11 +547,16 @@ struct rdt_domain_hdr *resctrl_find_domain(struct lis= t_head *h, int id, } =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, - struct rdt_mon_domain *d, struct rdtgroup *rdtgrp, + struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first) { + struct rdt_mon_domain *d; int cpu; =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 @@ -598,7 +603,6 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) enum resctrl_event_id evtid; struct rdt_domain_hdr *hdr; struct 
rmid_read rr =3D {0}; - struct rdt_mon_domain *d; struct rdtgroup *rdtgrp; int domid, cpu, ret =3D 0; struct rdt_resource *r; @@ -623,6 +627,8 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { + struct rdt_mon_domain *d; + /* * This file requires summing across all domains that share * the L3 cache id that was provided in the "domid" field of the @@ -649,12 +655,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *a= rg) * the resource to find the domain with "domid". */ hdr =3D resctrl_find_domain(&r->mon_domains, domid, NULL); - if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) { + if (!hdr) { ret =3D -ENOENT; goto out; } - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - mon_event_read(&rr, r, d, rdtgrp, &d->hdr.cpu_mask, evtid, false); + mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false); } =20 checkresult: diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 0320360cd7a6..e3b83e48f2d9 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3164,13 +3164,18 @@ static void mon_rmdir_one_subdir(struct kernfs_node= *pkn, char *name, char *subn * when last domain being summed is removed. */ static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, - struct rdt_mon_domain *d) + struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; + struct rdt_mon_domain *d; char subname[32]; bool snc_mode; char name[32]; =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); if (snc_mode) @@ -3184,19 +3189,18 @@ static void rmdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, } } =20 -static int mon_add_all_files(struct kernfs_node *kn, struct rdt_mon_domain= *d, +static int mon_add_all_files(struct kernfs_node *kn, struct rdt_domain_hdr= *hdr, struct rdt_resource *r, struct rdtgroup *prgrp, - bool do_sum) + int domid, bool do_sum) { struct rmid_read rr =3D {0}; struct mon_data *priv; struct mon_evt *mevt; - int ret, domid; + int ret; =20 for_each_mon_event(mevt) { if (mevt->rid !=3D r->rid || !mevt->enabled) continue; - domid =3D do_sum ? d->ci_id : d->hdr.id; priv =3D mon_get_kn_priv(r->rid, domid, mevt, do_sum); if (WARN_ON_ONCE(!priv)) return -EINVAL; @@ -3206,23 +3210,28 @@ static int mon_add_all_files(struct kernfs_node *kn= , struct rdt_mon_domain *d, return ret; =20 if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) - mon_event_read(&rr, r, d, prgrp, &d->hdr.cpu_mask, mevt->evtid, true); + mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true); } =20 return 0; } =20 static int mkdir_mondata_subdir(struct kernfs_node *parent_kn, - struct rdt_mon_domain *d, + struct rdt_domain_hdr *hdr, struct rdt_resource *r, struct rdtgroup *prgrp) { struct kernfs_node *kn, *ckn; + struct rdt_mon_domain *d; char name[32]; bool snc_mode; int ret =3D 0; =20 lockdep_assert_held(&rdtgroup_mutex); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? 
d->ci_id : d->hdr.id); kn =3D kernfs_find_and_get(parent_kn, name); @@ -3240,13 +3249,13 @@ static int mkdir_mondata_subdir(struct kernfs_node = *parent_kn, ret =3D rdtgroup_kn_set_ugid(kn); if (ret) goto out_destroy; - ret =3D mon_add_all_files(kn, d, r, prgrp, snc_mode); + ret =3D mon_add_all_files(kn, hdr, r, prgrp, hdr->id, snc_mode); if (ret) goto out_destroy; } =20 if (snc_mode) { - sprintf(name, "mon_sub_%s_%02d", r->name, d->hdr.id); + sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id); ckn =3D kernfs_create_dir(kn, name, parent_kn->mode, prgrp); if (IS_ERR(ckn)) { ret =3D -EINVAL; @@ -3257,7 +3266,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, if (ret) goto out_destroy; =20 - ret =3D mon_add_all_files(ckn, d, r, prgrp, false); + ret =3D mon_add_all_files(ckn, hdr, r, prgrp, hdr->id, false); if (ret) goto out_destroy; } @@ -3275,7 +3284,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, * and "monitor" groups with given domain id. */ static void mkdir_mondata_subdir_allrdtgrp(struct rdt_resource *r, - struct rdt_mon_domain *d) + struct rdt_domain_hdr *hdr) { struct kernfs_node *parent_kn; struct rdtgroup *prgrp, *crgrp; @@ -3283,12 +3292,12 @@ static void mkdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { parent_kn =3D prgrp->mon.mon_data_kn; - mkdir_mondata_subdir(parent_kn, d, r, prgrp); + mkdir_mondata_subdir(parent_kn, hdr, r, prgrp); =20 head =3D &prgrp->mon.crdtgrp_list; list_for_each_entry(crgrp, head, mon.crdtgrp_list) { parent_kn =3D crgrp->mon.mon_data_kn; - mkdir_mondata_subdir(parent_kn, d, r, crgrp); + mkdir_mondata_subdir(parent_kn, hdr, r, crgrp); } } } @@ -3297,14 +3306,14 @@ static int mkdir_mondata_subdir_alldom(struct kernf= s_node *parent_kn, struct rdt_resource *r, struct rdtgroup *prgrp) { - struct rdt_mon_domain *dom; + struct rdt_domain_hdr *hdr; int ret; =20 /* Walking r->domains, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 - list_for_each_entry(dom, &r->mon_domains, hdr.list) { - ret =3D mkdir_mondata_subdir(parent_kn, dom, r, prgrp); + list_for_each_entry(hdr, &r->mon_domains, list) { + ret =3D mkdir_mondata_subdir(parent_kn, hdr, r, prgrp); if (ret) return ret; } @@ -4187,8 +4196,10 @@ void resctrl_offline_ctrl_domain(struct rdt_resource= *r, struct rdt_ctrl_domain mutex_unlock(&rdtgroup_mutex); } =20 -void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_dom= ain *d) +void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr) { + struct rdt_mon_domain *d; + mutex_lock(&rdtgroup_mutex); =20 /* @@ -4196,8 +4207,12 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_mon_domain *d * per domain monitor data directories. 
*/ if (resctrl_mounted && resctrl_arch_mon_capable()) - rmdir_mondata_subdir_allrdtgrp(r, d); + rmdir_mondata_subdir_allrdtgrp(r, hdr); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + goto out_unlock; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); if (resctrl_is_mbm_enabled()) cancel_delayed_work(&d->mbm_over); if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID) && has_busy_rmid(= d)) { @@ -4214,7 +4229,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_mon_domain *d } =20 domain_destroy_mon_state(d); - +out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 @@ -4287,12 +4302,17 @@ int resctrl_online_ctrl_domain(struct rdt_resource = *r, struct rdt_ctrl_domain *d return err; } =20 -int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domai= n *d) +int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr) { - int err; + struct rdt_mon_domain *d; + int err =3D -EINVAL; =20 mutex_lock(&rdtgroup_mutex); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + goto out_unlock; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); err =3D domain_setup_mon_state(r, d); if (err) goto out_unlock; @@ -4306,6 +4326,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_mon_domain *d) if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID)) INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo); =20 + err =3D 0; /* * If the filesystem is not mounted then only the default resource group * exists. Creation of its directories is deferred until mount time @@ -4313,7 +4334,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_mon_domain *d) * If resctrl is mounted, add per domain monitor data directories. */ if (resctrl_mounted && resctrl_arch_mon_capable()) - mkdir_mondata_subdir_allrdtgrp(r, d); + mkdir_mondata_subdir_allrdtgrp(r, hdr); =20 out_unlock: mutex_unlock(&rdtgroup_mutex); --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8064431FEC8 for ; Thu, 25 Sep 2025 20:04:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830650; cv=none; b=Vti18oAmDrwShVjZxpQ2tOpTfO25Zcod/tMT9jPU1Bu3tXyrfcuBTr/TC1hP2xGrqtPVLbVcYfd5FeRg6Cf0UD1nvis59IsxYiYzyW34wASwEXb0In6295gu8j+TMKgvCKgKYhHS0qmkDiADFBIDzYeLfDvskCBtSQi6wQvsI9Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830650; c=relaxed/simple; bh=Bbr49C589CtICSxXRwinuWAvo1jHIw7lqV+CUlNYv2U=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=WUhMjRY2S0OyxBtVfLPnLw84FPIdgx86jNjmYY7/PsQsOIArxz3Le5O5ddrbSNl/+lGyITQh+/7MLzse2qR6I3BFvfd3yKscTNTUpiLHZq3BoJxPy8AO/tyKwlPLtpMSjr21h6lUbQr2xVjpHTSVbjG3hfMyuI3/0xwKjm6juQw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ieWIgNsk; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ieWIgNsk" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830649; x=1790366649; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Bbr49C589CtICSxXRwinuWAvo1jHIw7lqV+CUlNYv2U=; b=ieWIgNskoEk+E91mkaV41dXSHOJt0OoqhUt68UyhJaMNI+7unL7d8Tks TXJviuvrlA9uXxmGqjh7XKeMIhFIjDbrqF33TwUYUgvlEqaB8/9QJQuIh zg4+w9k/jrleFvGzTWyqqQj/Pk22PzBhpozTUmomEOUyi5iB7N6Wn+/7e JqzidNkmk5afDi3WmDJtuuEF5zuiOKnRsS9sOI64fC+6euflOzGZ1GY2T If+zRz/mXu23jD7sQ5ShomaVG6yG1B0XvLh62jtbAKZlPe0xqOR+7IAHg 7UUIrBg/T1nvT4rAEEsTEq467WZIgY3xsAyfON74i5vU4DYlS7vjNJAWm Q==; X-CSE-ConnectionGUID: n26o99WLQVOXrR5dE/P6sw== X-CSE-MsgGUID: PQwVOy6cQ5Cu7qsu55T04g== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074177" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074177" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:06 -0700 X-CSE-ConnectionGUID: ZCFfF8XSRK2YKPXr9RanJQ== X-CSE-MsgGUID: SPkuPOS4QYqfWUCiR5llQw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003615" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:06 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 06/31] x86,fs/resctrl: Use struct rdt_domain_hdr when reading counters Date: Thu, 25 Sep 2025 13:03:00 -0700 Message-ID: <20250925200328.64155-7-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use a generic struct rdt_domain_hdr representing a generic domain header in struct rmid_read in order to support other telemetry events' domains besides an L3 one. Adjust the code interacting with it to the new struct layout. Signed-off-by: Tony Luck --- include/linux/resctrl.h | 8 +++---- fs/resctrl/internal.h | 18 +++++++------- arch/x86/kernel/cpu/resctrl/monitor.c | 17 +++++++++++--- fs/resctrl/ctrlmondata.c | 7 +----- fs/resctrl/monitor.c | 34 +++++++++++++++++---------- 5 files changed, 50 insertions(+), 34 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 0b55809af5d7..0fef3045cac3 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -514,7 +514,7 @@ void resctrl_offline_cpu(unsigned int cpu); * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rm= id * for this resource and domain. * @r: resource that the counter should be read from. - * @d: domain that the counter should be read from. + * @hdr: Header of domain that the counter should be read from. * @closid: closid that matches the rmid. Depending on the architecture, = the * counter may match traffic of both @closid and @rmid, or @rmid * only. 
@@ -535,7 +535,7 @@ void resctrl_offline_cpu(unsigned int cpu); * Return: * 0 on success, or -EIO, -EINVAL etc on error. */ -int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *arch_mon_ctx); =20 @@ -630,7 +630,7 @@ void resctrl_arch_config_cntr(struct rdt_resource *r, s= truct rdt_mon_domain *d, * assigned to the RMID, event pair for this resource * and domain. * @r: Resource that the counter should be read from. - * @d: Domain that the counter should be read from. + * @hdr: Header of domain that the counter should be read from. * @closid: CLOSID that matches the RMID. * @rmid: The RMID to which @cntr_id is assigned. * @cntr_id: The counter to read. @@ -644,7 +644,7 @@ void resctrl_arch_config_cntr(struct rdt_resource *r, s= truct rdt_mon_domain *d, * Return: * 0 on success, or -EIO, -EINVAL etc on error. */ -int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 22fdb3a9b6f4..698ed84fd073 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -106,24 +106,26 @@ struct mon_data { * resource group then its event count is summed with the count from all * its child resource groups. * @r: Resource describing the properties of the event being read. - * @d: Domain that the counter should be read from. If NULL then sum all - * domains in @r sharing L3 @ci.id + * @hdr: Header of domain that the counter should be read from. If NULL = then + * sum all domains in @r sharing L3 @ci.id * @evtid: Which monitor event to read. * @first: Initialize MBM counter when true. - * @ci: Cacheinfo for L3. Only set when @d is NULL. Used when summing d= omains. + * @ci: Cacheinfo for L3. Only set when @hdr is NULL. Used when summing + * domains. * @is_mbm_cntr: true if "mbm_event" counter assignment mode is enabled an= d it * is an MBM event. * @err: Error encountered when reading counter. - * @val: Returned value of event counter. If @rgrp is a parent resource = group, - * @val includes the sum of event counts from its child resource groups. - * If @d is NULL, @val includes the sum of all domains in @r sharing @c= i.id, - * (summed across child resource groups if @rgrp is a parent resource g= roup). + * @val: Returned value of event counter. If @rgrp is a parent resource + * group, @val includes the sum of event counts from its child + * resource groups. If @hdr is NULL, @val includes the sum of all + * domains in @r sharing @ci.id, (summed across child resource groups + * if @rgrp is a parent resource group). * @arch_mon_ctx: Hardware monitor allocated for this read request (MPAM o= nly). 
*/ struct rmid_read { struct rdtgroup *rgrp; struct rdt_resource *r; - struct rdt_mon_domain *d; + struct rdt_domain_hdr *hdr; enum resctrl_event_id evtid; bool first; struct cacheinfo *ci; diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index c8945610d455..cee1cd7fbdce 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -238,17 +238,23 @@ static u64 get_corrected_val(struct rdt_resource *r, = struct rdt_mon_domain *d, return chunks * hw_res->mon_scale; } =20 -int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *ignored) { - int cpu =3D cpumask_any(&d->hdr.cpu_mask); + struct rdt_mon_domain *d; u64 msr_val; u32 prmid; + int cpu; int ret; =20 resctrl_arch_rmid_read_context_check(); =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); + cpu =3D cpumask_any(&hdr->cpu_mask); prmid =3D logical_rmid_to_physical_rmid(cpu, rmid); ret =3D __rmid_read_phys(prmid, eventid, &msr_val); if (ret) @@ -312,13 +318,18 @@ void resctrl_arch_reset_cntr(struct rdt_resource *r, = struct rdt_mon_domain *d, } } =20 -int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *= d, +int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val) { + struct rdt_mon_domain *d; u64 msr_val; int ret; =20 + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + + d =3D container_of(hdr, struct rdt_mon_domain, hdr); ret =3D __cntr_id_read(cntr_id, &msr_val); if (ret) return ret; diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 3ceef35208be..7b9fc5d3bdc8 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -550,13 +550,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first) { - struct rdt_mon_domain *d; int cpu; =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return; - d =3D container_of(hdr, struct rdt_mon_domain, hdr); - /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); =20 @@ -566,7 +561,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, rr->rgrp =3D rdtgrp; rr->evtid =3D evtid; rr->r =3D r; - rr->d =3D d; + rr->hdr =3D hdr; rr->first =3D first; if (resctrl_arch_mbm_cntr_assign_enabled(r) && resctrl_is_mbm_event(evtid)) { diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 4076336fbba6..32116361a5f6 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -159,7 +159,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force= _free) break; =20 entry =3D __rmid_entry(idx); - if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid, + if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid, QOS_L3_OCCUP_EVENT_ID, &val, arch_mon_ctx)) { rmid_dirty =3D true; @@ -424,8 +424,12 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) int err, ret; u64 tval =3D 0; =20 + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); 
+ if (rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, rr->d, rdtgrp, rr->evtid); + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); if (cntr_id < 0) { rr->err =3D -ENOENT; return -EINVAL; @@ -434,24 +438,24 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) =20 if (rr->first) { if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, rr->d, closid, rmid, cntr_id, rr->evtid); + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evtid); else - resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid); - m =3D get_mbm_state(rr->d, closid, rmid, rr->evtid); + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (m) memset(m, 0, sizeof(struct mbm_state)); return 0; } =20 - if (rr->d) { + if (rr->hdr) { /* Reading a single domain, must be on a CPU in that domain. */ - if (!cpumask_test_cpu(cpu, &rr->d->hdr.cpu_mask)) + if (!cpumask_test_cpu(cpu, &rr->hdr->cpu_mask)) return -EINVAL; if (rr->is_mbm_cntr) - rr->err =3D resctrl_arch_cntr_read(rr->r, rr->d, closid, rmid, cntr_id, + rr->err =3D resctrl_arch_cntr_read(rr->r, rr->hdr, closid, rmid, cntr_i= d, rr->evtid, &tval); else - rr->err =3D resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, + rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, rr->evtid, &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; @@ -477,10 +481,10 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) if (d->ci_id !=3D rr->ci->id) continue; if (rr->is_mbm_cntr) - err =3D resctrl_arch_cntr_read(rr->r, d, closid, rmid, cntr_id, + err =3D resctrl_arch_cntr_read(rr->r, &d->hdr, closid, rmid, cntr_id, rr->evtid, &tval); else - err =3D resctrl_arch_rmid_read(rr->r, d, closid, rmid, + err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, rr->evtid, &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; @@ -511,9 +515,13 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, stru= ct rmid_read *rr) u64 cur_bw, bytes, cur_bytes; u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; + struct rdt_mon_domain *d; struct mbm_state *m; =20 - m =3D get_mbm_state(rr->d, closid, rmid, rr->evtid); + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (WARN_ON_ONCE(!m)) return; =20 @@ -686,7 +694,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_mon_domain * struct rmid_read rr =3D {0}; =20 rr.r =3D r; - rr.d =3D d; + rr.hdr =3D &d->hdr; rr.evtid =3D evtid; if (resctrl_arch_mbm_cntr_assign_enabled(r)) { rr.is_mbm_cntr =3D true; --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C7B5F31FED4 for ; Thu, 25 Sep 2025 20:04:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830651; cv=none; b=sCrT1GeMJMwHsGCb9kVReNIbBggaCOFUDjMTsoCrAkddwq0Wvu+4dXWp18nwiMrXrUWwRuxG+Jf2R7H6OxtxCzpG3AeXA6Zn/gQmg5H3AnPqcQumWM9ioHxN3NphXoVmvsd5wJwCfeEIhmHJ52Wz/+SXilAdONUBwOwYR3BIgwo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830651; c=relaxed/simple; 
bh=FPObs4p1qcJAZc1HDW4iYA7z4x0WaYocKRPFRWQhNXU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ezxSmu3F5HzbMAGSEPDAolBOaML4Dj2/WtmrrR1KYRamlIJT5dx22I3XoLf6wUC9My3Dn7zWzdym+RApCtUq85sN1GvkKy1n8Fye9/kMM87y/WhoGzGadCZD8K44WTEwuiaBcyby3lrtd/uEo4YA5T0AvpE9FEpXYcuS4vYB1Ok=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=nNPnC/hX; arc=none smtp.client-ip=198.175.65.21
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="nNPnC/hX"
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830649; x=1790366649; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=FPObs4p1qcJAZc1HDW4iYA7z4x0WaYocKRPFRWQhNXU=; b=nNPnC/hXx0veFz7l1x6bVZ5ANPNXkpVq8CpWH5lfJlHC4sPZGaO40T+n 8otZr3r5oMoCPkdFQn5JgLyEI757V4L16+ibM0t3Ks0nnYcjTAlYQjFcA VstJySs0LJX+tL566weMCKOB6Ev4ETFWnSRvLcfwXhBmZdYmwVvtU/j6R P+24bUCSw1zyIIK+p0a2VMG4ftd4xdNxp4DlX7tPYYE5mNEA0PHFp5kPM +x6MxEMrbihmd1jVnoNmLF52C8utMJivuACBlD34WCyF9T99BJ5lGt3o5 72qYVhADXcv0RPpKB69mfnfWneqOlx5JQ18mMy0Hb3vN/IlKTtdePIm8y w==;
X-CSE-ConnectionGUID: lKCO53prTY6na1BL68RcrA==
X-CSE-MsgGUID: qeMJMJ1aS8KaXSmSo59HOg==
X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074185"
X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074185"
Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:07 -0700
X-CSE-ConnectionGUID: Pse5mOMFS4KbZTypY6jkvA==
X-CSE-MsgGUID: 0P9DsdCmTbWVajnIesL8TQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003618"
Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:06 -0700
From: Tony Luck
To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v11 07/31] x86,fs/resctrl: Rename struct rdt_mon_domain and rdt_hw_mon_domain
Date: Thu, 25 Sep 2025 13:03:01 -0700
Message-ID: <20250925200328.64155-8-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com>
References: <20250925200328.64155-1-tony.luck@intel.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

The upcoming telemetry event monitoring is not tied to the L3 resource and will use new domain structures. Rename the L3 resource specific domain data structures to include "l3_" in their names to avoid confusion between the different resource specific domain structures:

rdt_mon_domain -> rdt_l3_mon_domain
rdt_hw_mon_domain -> rdt_hw_l3_mon_domain

No functional change.
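For illustration only (editorial sketch, not part of this patch): after the rename, code that is handed a generic struct rdt_domain_hdr resolves it to the L3-specific monitor domain using the validity check added earlier in this series followed by container_of(). The helper name below is hypothetical:

/* Editorial sketch, not part of the patch. Helper name is hypothetical. */
static struct rdt_l3_mon_domain *l3_mon_dom_from_hdr(struct rdt_domain_hdr *hdr)
{
	/* Only accept headers that describe an L3 monitor domain. */
	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
		return NULL;

	return container_of(hdr, struct rdt_l3_mon_domain, hdr);
}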
Signed-off-by: Tony Luck --- include/linux/resctrl.h | 20 ++++---- arch/x86/kernel/cpu/resctrl/internal.h | 16 +++--- fs/resctrl/internal.h | 8 +-- arch/x86/kernel/cpu/resctrl/core.c | 14 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 36 +++++++------- fs/resctrl/ctrlmondata.c | 2 +- fs/resctrl/monitor.c | 68 +++++++++++++------------- fs/resctrl/rdtgroup.c | 36 +++++++------- 8 files changed, 100 insertions(+), 100 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 0fef3045cac3..66569662efee 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -178,7 +178,7 @@ struct mbm_cntr_cfg { }; =20 /** - * struct rdt_mon_domain - group of CPUs sharing a resctrl monitor resource + * struct rdt_l3_mon_domain - group of CPUs sharing a resctrl monitor reso= urce * @hdr: common header for different domain types * @ci_id: cache info id for this domain * @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold @@ -192,7 +192,7 @@ struct mbm_cntr_cfg { * @cntr_cfg: array of assignable counters' configuration (indexed * by counter ID) */ -struct rdt_mon_domain { +struct rdt_l3_mon_domain { struct rdt_domain_hdr hdr; unsigned int ci_id; unsigned long *rmid_busy_llc; @@ -364,10 +364,10 @@ struct resctrl_cpu_defaults { }; =20 struct resctrl_mon_config_info { - struct rdt_resource *r; - struct rdt_mon_domain *d; - u32 evtid; - u32 mon_config; + struct rdt_resource *r; + struct rdt_l3_mon_domain *d; + u32 evtid; + u32 mon_config; }; =20 /** @@ -582,7 +582,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_= head *h, int id, * * This can be called from any CPU. */ -void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 closid, u32 rmid, enum resctrl_event_id eventid); =20 @@ -595,7 +595,7 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, st= ruct rdt_mon_domain *d, * * This can be called from any CPU. */ -void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_do= main *d); +void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon= _domain *d); =20 /** * resctrl_arch_reset_all_ctrls() - Reset the control for each CLOSID to i= ts @@ -621,7 +621,7 @@ void resctrl_arch_reset_all_ctrls(struct rdt_resource *= r); * * This can be called from any CPU. */ -void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domai= n *d, +void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_do= main *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign); =20 @@ -659,7 +659,7 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, * * This can be called from any CPU. 
*/ -void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 closid, u32 rmid, int cntr_id, enum resctrl_event_id eventid); =20 diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 9f4c2f0aaf5c..6eca3d522fcc 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -60,17 +60,17 @@ struct rdt_hw_ctrl_domain { }; =20 /** - * struct rdt_hw_mon_domain - Arch private attributes of a set of CPUs tha= t share - * a resource for a monitor function - * @d_resctrl: Properties exposed to the resctrl file system + * struct rdt_hw_l3_mon_domain - Arch private attributes of a set of CPUs = that share + * a resource for a monitor function + * @d_resctrl: Properties exposed to the resctrl file system * @arch_mbm_states: Per-event pointer to the MBM event's saved state. * An MBM event's state is an array of struct arch_mbm_state * indexed by RMID on x86. * * Members of this structure are accessed via helpers that provide abstrac= tion. */ -struct rdt_hw_mon_domain { - struct rdt_mon_domain d_resctrl; +struct rdt_hw_l3_mon_domain { + struct rdt_l3_mon_domain d_resctrl; struct arch_mbm_state *arch_mbm_states[QOS_NUM_L3_MBM_EVENTS]; }; =20 @@ -79,9 +79,9 @@ static inline struct rdt_hw_ctrl_domain *resctrl_to_arch_= ctrl_dom(struct rdt_ctr return container_of(r, struct rdt_hw_ctrl_domain, d_resctrl); } =20 -static inline struct rdt_hw_mon_domain *resctrl_to_arch_mon_dom(struct rdt= _mon_domain *r) +static inline struct rdt_hw_l3_mon_domain *resctrl_to_arch_mon_dom(struct = rdt_l3_mon_domain *r) { - return container_of(r, struct rdt_hw_mon_domain, d_resctrl); + return container_of(r, struct rdt_hw_l3_mon_domain, d_resctrl); } =20 /** @@ -135,7 +135,7 @@ static inline struct rdt_hw_resource *resctrl_to_arch_r= es(struct rdt_resource *r =20 extern struct rdt_hw_resource rdt_resources_all[]; =20 -void arch_mon_domain_online(struct rdt_resource *r, struct rdt_mon_domain = *d); +void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_doma= in *d); =20 /* CPUID.(EAX=3D10H, ECX=3DResID=3D1).EAX */ union cpuid_0x10_1_eax { diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 698ed84fd073..d9e291d94926 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -369,7 +369,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, =20 int resctrl_mon_resource_init(void); =20 -void mbm_setup_overflow_handler(struct rdt_mon_domain *dom, +void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, int exclude_cpu); =20 @@ -377,14 +377,14 @@ void mbm_handle_overflow(struct work_struct *work); =20 bool is_mba_sc(struct rdt_resource *r); =20 -void cqm_setup_limbo_handler(struct rdt_mon_domain *dom, unsigned long del= ay_ms, +void cqm_setup_limbo_handler(struct rdt_l3_mon_domain *dom, unsigned long = delay_ms, int exclude_cpu); =20 void cqm_handle_limbo(struct work_struct *work); =20 -bool has_busy_rmid(struct rdt_mon_domain *d); +bool has_busy_rmid(struct rdt_l3_mon_domain *d); =20 -void __check_limbo(struct rdt_mon_domain *d, bool force_free); +void __check_limbo(struct rdt_l3_mon_domain *d, bool force_free); =20 void resctrl_file_fflags_init(const char *config, unsigned long fflags); =20 diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 2d93387b9251..42f4f702eeec 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c 
+++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -363,7 +363,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain = *hw_dom) kfree(hw_dom); } =20 -static void mon_domain_free(struct rdt_hw_mon_domain *hw_dom) +static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) { int idx; =20 @@ -400,7 +400,7 @@ static int domain_setup_ctrlval(struct rdt_resource *r,= struct rdt_ctrl_domain * * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays */ -static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_mon_domain *h= w_dom) +static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain= *hw_dom) { size_t tsize =3D sizeof(*hw_dom->arch_mbm_states[0]); enum resctrl_event_id eventid; @@ -498,8 +498,8 @@ static void domain_add_cpu_ctrl(int cpu, struct rdt_res= ource *r) =20 static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, s= truct list_head *add_pos) { - struct rdt_hw_mon_domain *hw_dom; - struct rdt_mon_domain *d; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; struct cacheinfo *ci; int err; =20 @@ -625,9 +625,9 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_= resource *r) static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r) { int id =3D get_domain_id_from_scope(cpu, r->mon_scope); - struct rdt_hw_mon_domain *hw_dom; + struct rdt_hw_l3_mon_domain *hw_dom; + struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; - struct rdt_mon_domain *d; =20 lockdep_assert_held(&domain_list_lock); =20 @@ -653,7 +653,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); hw_dom =3D resctrl_to_arch_mon_dom(d); resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index cee1cd7fbdce..b448e6816fe7 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -109,7 +109,7 @@ static inline u64 get_corrected_mbm_count(u32 rmid, uns= igned long val) * * In RMID sharing mode there are fewer "logical RMID" values available * to accumulate data ("physical RMIDs" are divided evenly between SNC - * nodes that share an L3 cache). Linux creates an rdt_mon_domain for + * nodes that share an L3 cache). Linux creates an rdt_l3_mon_domain for * each SNC node. * * The value loaded into IA32_PQR_ASSOC is the "logical RMID". @@ -157,7 +157,7 @@ static int __rmid_read_phys(u32 prmid, enum resctrl_eve= nt_id eventid, u64 *val) return 0; } =20 -static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_mon_domain = *hw_dom, +static struct arch_mbm_state *get_arch_mbm_state(struct rdt_hw_l3_mon_doma= in *hw_dom, u32 rmid, enum resctrl_event_id eventid) { @@ -171,11 +171,11 @@ static struct arch_mbm_state *get_arch_mbm_state(stru= ct rdt_hw_mon_domain *hw_do return state ? 
&state[rmid] : NULL; } =20 -void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 unused, u32 rmid, enum resctrl_event_id eventid) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); int cpu =3D cpumask_any(&d->hdr.cpu_mask); struct arch_mbm_state *am; u32 prmid; @@ -194,9 +194,9 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, st= ruct rdt_mon_domain *d, * Assumes that hardware counters are also reset and thus that there is * no need to record initial non-zero counts. */ -void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_do= main *d) +void resctrl_arch_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_mon= _domain *d) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); enum resctrl_event_id eventid; int idx; =20 @@ -217,10 +217,10 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_m= sr, unsigned int width) return chunks >> shift; } =20 -static u64 get_corrected_val(struct rdt_resource *r, struct rdt_mon_domain= *d, +static u64 get_corrected_val(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 rmid, enum resctrl_event_id eventid, u64 msr_val) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); struct rdt_hw_resource *hw_res =3D resctrl_to_arch_res(r); struct arch_mbm_state *am; u64 chunks; @@ -242,7 +242,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, u64 *val, void *ignored) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u64 msr_val; u32 prmid; int cpu; @@ -253,7 +253,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); cpu =3D cpumask_any(&hdr->cpu_mask); prmid =3D logical_rmid_to_physical_rmid(cpu, rmid); ret =3D __rmid_read_phys(prmid, eventid, &msr_val); @@ -302,11 +302,11 @@ static int __cntr_id_read(u32 cntr_id, u64 *val) return 0; } =20 -void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_mon_domain= *d, +void resctrl_arch_reset_cntr(struct rdt_resource *r, struct rdt_l3_mon_dom= ain *d, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); struct arch_mbm_state *am; =20 am =3D get_arch_mbm_state(hw_dom, rmid, eventid); @@ -322,14 +322,14 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, st= ruct rdt_domain_hdr *hdr, u32 unused, u32 rmid, int cntr_id, enum resctrl_event_id eventid, u64 *val) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u64 msr_val; int ret; =20 if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); ret =3D __cntr_id_read(cntr_id, &msr_val); if (ret) return ret; @@ -353,7 +353,7 @@ int resctrl_arch_cntr_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, * must adjust RMID counter 
numbers based on SNC node. See * logical_rmid_to_physical_rmid() for code that does this. */ -void arch_mon_domain_online(struct rdt_resource *r, struct rdt_mon_domain = *d) +void arch_mon_domain_online(struct rdt_resource *r, struct rdt_l3_mon_doma= in *d) { if (snc_nodes_per_l3_cache > 1) msr_clear_bit(MSR_RMID_SNC_CONFIG, 0); @@ -505,7 +505,7 @@ static void resctrl_abmc_set_one_amd(void *arg) */ static void _resctrl_abmc_enable(struct rdt_resource *r, bool enable) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 lockdep_assert_cpus_held(); =20 @@ -544,11 +544,11 @@ static void resctrl_abmc_config_one_amd(void *info) /* * Send an IPI to the domain to assign the counter to RMID, event pair. */ -void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_mon_domai= n *d, +void resctrl_arch_config_cntr(struct rdt_resource *r, struct rdt_l3_mon_do= main *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign) { - struct rdt_hw_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); + struct rdt_hw_l3_mon_domain *hw_dom =3D resctrl_to_arch_mon_dom(d); union l3_qos_abmc_cfg abmc_cfg =3D { 0 }; struct arch_mbm_state *am; =20 diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 7b9fc5d3bdc8..c95f8eb8e731 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -622,7 +622,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 /* * This file requires summing across all domains that share diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 32116361a5f6..88b990e939ea 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -130,7 +130,7 @@ static void limbo_release_entry(struct rmid_entry *entr= y) * decrement the count. If the busy count gets to zero on an RMID, we * free the RMID */ -void __check_limbo(struct rdt_mon_domain *d, bool force_free) +void __check_limbo(struct rdt_l3_mon_domain *d, bool force_free) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); @@ -188,7 +188,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force= _free) resctrl_arch_mon_ctx_free(r, QOS_L3_OCCUP_EVENT_ID, arch_mon_ctx); } =20 -bool has_busy_rmid(struct rdt_mon_domain *d) +bool has_busy_rmid(struct rdt_l3_mon_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); =20 @@ -289,7 +289,7 @@ int alloc_rmid(u32 closid) static void add_rmid_to_limbo(struct rmid_entry *entry) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; u32 idx; =20 lockdep_assert_held(&rdtgroup_mutex); @@ -342,7 +342,7 @@ void free_rmid(u32 closid, u32 rmid) list_add_tail(&entry->list, &rmid_free_lru); } =20 -static struct mbm_state *get_mbm_state(struct rdt_mon_domain *d, u32 closi= d, +static struct mbm_state *get_mbm_state(struct rdt_l3_mon_domain *d, u32 cl= osid, u32 rmid, enum resctrl_event_id evtid) { u32 idx =3D resctrl_arch_rmid_idx_encode(closid, rmid); @@ -362,7 +362,7 @@ static struct mbm_state *get_mbm_state(struct rdt_mon_d= omain *d, u32 closid, * Return: * Valid counter ID on success, or -ENOENT on failure. 
*/ -static int mbm_cntr_get(struct rdt_resource *r, struct rdt_mon_domain *d, +static int mbm_cntr_get(struct rdt_resource *r, struct rdt_l3_mon_domain *= d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { int cntr_id; @@ -389,7 +389,7 @@ static int mbm_cntr_get(struct rdt_resource *r, struct = rdt_mon_domain *d, * Return: * Valid counter ID on success, or -ENOSPC on failure. */ -static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_mon_domain *d, +static int mbm_cntr_alloc(struct rdt_resource *r, struct rdt_l3_mon_domain= *d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { int cntr_id; @@ -408,7 +408,7 @@ static int mbm_cntr_alloc(struct rdt_resource *r, struc= t rdt_mon_domain *d, /* * mbm_cntr_free() - Clear the counter ID configuration details in the dom= ain @d. */ -static void mbm_cntr_free(struct rdt_mon_domain *d, int cntr_id) +static void mbm_cntr_free(struct rdt_l3_mon_domain *d, int cntr_id) { memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg)); } @@ -418,7 +418,7 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int cntr_id =3D -ENOENT; struct mbm_state *m; int err, ret; @@ -426,7 +426,7 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) =20 if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; - d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); =20 if (rr->is_mbm_cntr) { cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); @@ -515,12 +515,12 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, str= uct rmid_read *rr) u64 cur_bw, bytes, cur_bytes; u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct mbm_state *m; =20 if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; - d =3D container_of(rr->hdr, struct rdt_mon_domain, hdr); + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); m =3D get_mbm_state(d, closid, rmid, rr->evtid); if (WARN_ON_ONCE(!m)) return; @@ -620,7 +620,7 @@ static struct rdt_ctrl_domain *get_ctrl_domain_from_cpu= (int cpu, * throttle MSRs already have low percentage values. To avoid * unnecessarily restricting such rdtgroups, we also increase the bandwidt= h. 
*/ -static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_mon_domain *do= m_mbm) +static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_l3_mon_domain = *dom_mbm) { u32 closid, rmid, cur_msr_val, new_msr_val; struct mbm_state *pmbm_data, *cmbm_data; @@ -688,7 +688,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct= rdt_mon_domain *dom_mbm) resctrl_arch_update_one(r_mba, dom_mba, closid, CDP_NONE, new_msr_val); } =20 -static void mbm_update_one_event(struct rdt_resource *r, struct rdt_mon_do= main *d, +static void mbm_update_one_event(struct rdt_resource *r, struct rdt_l3_mon= _domain *d, struct rdtgroup *rdtgrp, enum resctrl_event_id evtid) { struct rmid_read rr =3D {0}; @@ -720,7 +720,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_mon_domain * resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx); } =20 -static void mbm_update(struct rdt_resource *r, struct rdt_mon_domain *d, +static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d, struct rdtgroup *rdtgrp) { /* @@ -741,12 +741,12 @@ static void mbm_update(struct rdt_resource *r, struct= rdt_mon_domain *d, void cqm_handle_limbo(struct work_struct *work) { unsigned long delay =3D msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 cpus_read_lock(); mutex_lock(&rdtgroup_mutex); =20 - d =3D container_of(work, struct rdt_mon_domain, cqm_limbo.work); + d =3D container_of(work, struct rdt_l3_mon_domain, cqm_limbo.work); =20 __check_limbo(d, false); =20 @@ -769,7 +769,7 @@ void cqm_handle_limbo(struct work_struct *work) * @exclude_cpu: Which CPU the handler should not run on, * RESCTRL_PICK_ANY_CPU to pick any CPU. */ -void cqm_setup_limbo_handler(struct rdt_mon_domain *dom, unsigned long del= ay_ms, +void cqm_setup_limbo_handler(struct rdt_l3_mon_domain *dom, unsigned long = delay_ms, int exclude_cpu) { unsigned long delay =3D msecs_to_jiffies(delay_ms); @@ -786,7 +786,7 @@ void mbm_handle_overflow(struct work_struct *work) { unsigned long delay =3D msecs_to_jiffies(MBM_OVERFLOW_INTERVAL); struct rdtgroup *prgrp, *crgrp; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct list_head *head; struct rdt_resource *r; =20 @@ -801,7 +801,7 @@ void mbm_handle_overflow(struct work_struct *work) goto out_unlock; =20 r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - d =3D container_of(work, struct rdt_mon_domain, mbm_over.work); + d =3D container_of(work, struct rdt_l3_mon_domain, mbm_over.work); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mbm_update(r, d, prgrp); @@ -835,7 +835,7 @@ void mbm_handle_overflow(struct work_struct *work) * @exclude_cpu: Which CPU the handler should not run on, * RESCTRL_PICK_ANY_CPU to pick any CPU. */ -void mbm_setup_overflow_handler(struct rdt_mon_domain *dom, unsigned long = delay_ms, +void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned lo= ng delay_ms, int exclude_cpu) { unsigned long delay =3D msecs_to_jiffies(delay_ms); @@ -1090,7 +1090,7 @@ ssize_t resctrl_mbm_assign_on_mkdir_write(struct kern= fs_open_file *of, char *buf * mbm_cntr_free_all() - Clear all the counter ID configuration details in= the * domain @d. Called when mbm_assign_mode is changed. 
*/ -static void mbm_cntr_free_all(struct rdt_resource *r, struct rdt_mon_domai= n *d) +static void mbm_cntr_free_all(struct rdt_resource *r, struct rdt_l3_mon_do= main *d) { memset(d->cntr_cfg, 0, sizeof(*d->cntr_cfg) * r->mon.num_mbm_cntrs); } @@ -1099,7 +1099,7 @@ static void mbm_cntr_free_all(struct rdt_resource *r,= struct rdt_mon_domain *d) * resctrl_reset_rmid_all() - Reset all non-architecture states for all the * supported RMIDs. */ -static void resctrl_reset_rmid_all(struct rdt_resource *r, struct rdt_mon_= domain *d) +static void resctrl_reset_rmid_all(struct rdt_resource *r, struct rdt_l3_m= on_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); enum resctrl_event_id evt; @@ -1120,7 +1120,7 @@ static void resctrl_reset_rmid_all(struct rdt_resourc= e *r, struct rdt_mon_domain * Assign the counter if @assign is true else unassign the counter. Reset = the * associated non-architectural state. */ -static void rdtgroup_assign_cntr(struct rdt_resource *r, struct rdt_mon_do= main *d, +static void rdtgroup_assign_cntr(struct rdt_resource *r, struct rdt_l3_mon= _domain *d, enum resctrl_event_id evtid, u32 rmid, u32 closid, u32 cntr_id, bool assign) { @@ -1140,7 +1140,7 @@ static void rdtgroup_assign_cntr(struct rdt_resource = *r, struct rdt_mon_domain * * Return: * 0 on success, < 0 on failure. */ -static int rdtgroup_alloc_assign_cntr(struct rdt_resource *r, struct rdt_m= on_domain *d, +static int rdtgroup_alloc_assign_cntr(struct rdt_resource *r, struct rdt_l= 3_mon_domain *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int cntr_id; @@ -1175,7 +1175,7 @@ static int rdtgroup_alloc_assign_cntr(struct rdt_reso= urce *r, struct rdt_mon_dom * Return: * 0 on success, < 0 on failure. */ -static int rdtgroup_assign_cntr_event(struct rdt_mon_domain *d, struct rdt= group *rdtgrp, +static int rdtgroup_assign_cntr_event(struct rdt_l3_mon_domain *d, struct = rdtgroup *rdtgrp, struct mon_evt *mevt) { struct rdt_resource *r =3D resctrl_arch_get_resource(mevt->rid); @@ -1225,7 +1225,7 @@ void rdtgroup_assign_cntrs(struct rdtgroup *rdtgrp) * rdtgroup_free_unassign_cntr() - Unassign and reset the counter ID confi= guration * for the event pointed to by @mevt within the domain @d and resctrl grou= p @rdtgrp. */ -static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt= _mon_domain *d, +static void rdtgroup_free_unassign_cntr(struct rdt_resource *r, struct rdt= _l3_mon_domain *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int cntr_id; @@ -1246,7 +1246,7 @@ static void rdtgroup_free_unassign_cntr(struct rdt_re= source *r, struct rdt_mon_d * the event structure @mevt from the domain @d and the group @rdtgrp. Una= ssign * the counters from all the domains if @d is NULL else unassign from @d. 
*/ -static void rdtgroup_unassign_cntr_event(struct rdt_mon_domain *d, struct = rdtgroup *rdtgrp, +static void rdtgroup_unassign_cntr_event(struct rdt_l3_mon_domain *d, stru= ct rdtgroup *rdtgrp, struct mon_evt *mevt) { struct rdt_resource *r =3D resctrl_arch_get_resource(mevt->rid); @@ -1321,7 +1321,7 @@ static int resctrl_parse_mem_transactions(char *tok, = u32 *val) static void rdtgroup_update_cntr_event(struct rdt_resource *r, struct rdtg= roup *rdtgrp, enum resctrl_event_id evtid) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int cntr_id; =20 list_for_each_entry(d, &r->mon_domains, hdr.list) { @@ -1427,7 +1427,7 @@ ssize_t resctrl_mbm_assign_mode_write(struct kernfs_o= pen_file *of, char *buf, size_t nbytes, loff_t off) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int ret =3D 0; bool enable; =20 @@ -1500,7 +1500,7 @@ int resctrl_num_mbm_cntrs_show(struct kernfs_open_fil= e *of, struct seq_file *s, void *v) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; =20 cpus_read_lock(); @@ -1524,7 +1524,7 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_op= en_file *of, struct seq_file *s, void *v) { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; u32 cntrs, i; int ret =3D 0; @@ -1565,7 +1565,7 @@ int resctrl_available_mbm_cntrs_show(struct kernfs_op= en_file *of, int mbm_L3_assignments_show(struct kernfs_open_file *of, struct seq_file *= s, void *v) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct rdtgroup *rdtgrp; struct mon_evt *mevt; int ret =3D 0; @@ -1628,7 +1628,7 @@ static struct mon_evt *mbm_get_mon_event_by_name(stru= ct rdt_resource *r, char *n return NULL; } =20 -static int rdtgroup_modify_assign_state(char *assign, struct rdt_mon_domai= n *d, +static int rdtgroup_modify_assign_state(char *assign, struct rdt_l3_mon_do= main *d, struct rdtgroup *rdtgrp, struct mon_evt *mevt) { int ret =3D 0; @@ -1654,7 +1654,7 @@ static int rdtgroup_modify_assign_state(char *assign,= struct rdt_mon_domain *d, static int resctrl_parse_mbm_assignment(struct rdt_resource *r, struct rdt= group *rdtgrp, char *event, char *tok) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; unsigned long dom_id =3D 0; char *dom_str, *id_str; struct mon_evt *mevt; diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index e3b83e48f2d9..1b4f4bd63143 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -1618,7 +1618,7 @@ static void mondata_config_read(struct resctrl_mon_co= nfig_info *mon_info) static int mbm_config_show(struct seq_file *s, struct rdt_resource *r, u32= evtid) { struct resctrl_mon_config_info mon_info; - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; bool sep =3D false; =20 cpus_read_lock(); @@ -1666,7 +1666,7 @@ static int mbm_local_bytes_config_show(struct kernfs_= open_file *of, } =20 static void mbm_config_write_domain(struct rdt_resource *r, - struct rdt_mon_domain *d, u32 evtid, u32 val) + struct rdt_l3_mon_domain *d, u32 evtid, u32 val) { struct resctrl_mon_config_info mon_info =3D {0}; =20 @@ -1707,8 +1707,8 @@ static void mbm_config_write_domain(struct rdt_resour= ce *r, static int mon_config_write(struct rdt_resource *r, char *tok, u32 evtid) { char *dom_str =3D NULL, *id_str; + struct 
rdt_l3_mon_domain *d; unsigned long dom_id, val; - struct rdt_mon_domain *d; =20 /* Walking r->domains, ensure it can't race with cpuhp */ lockdep_assert_cpus_held(); @@ -2716,7 +2716,7 @@ static int rdt_get_tree(struct fs_context *fc) { struct rdt_fs_context *ctx =3D rdt_fc2context(fc); unsigned long flags =3D RFTYPE_CTRL_BASE; - struct rdt_mon_domain *dom; + struct rdt_l3_mon_domain *dom; struct rdt_resource *r; int ret; =20 @@ -3167,7 +3167,7 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt= _resource *r, struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; char subname[32]; bool snc_mode; char name[32]; @@ -3175,7 +3175,7 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt= _resource *r, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); if (snc_mode) @@ -3221,7 +3221,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, struct rdt_resource *r, struct rdtgroup *prgrp) { struct kernfs_node *kn, *ckn; - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; char name[32]; bool snc_mode; int ret =3D 0; @@ -3231,7 +3231,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); kn =3D kernfs_find_and_get(parent_kn, name); @@ -4174,7 +4174,7 @@ static void rdtgroup_setup_default(void) mutex_unlock(&rdtgroup_mutex); } =20 -static void domain_destroy_mon_state(struct rdt_mon_domain *d) +static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d) { int idx; =20 @@ -4198,7 +4198,7 @@ void resctrl_offline_ctrl_domain(struct rdt_resource = *r, struct rdt_ctrl_domain =20 void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_= hdr *hdr) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 mutex_lock(&rdtgroup_mutex); =20 @@ -4212,7 +4212,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_domain_hdr *h if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); if (resctrl_is_mbm_enabled()) cancel_delayed_work(&d->mbm_over); if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID) && has_busy_rmid(= d)) { @@ -4246,7 +4246,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_domain_hdr *h * * Returns 0 for success, or -ENOMEM. 
*/ -static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_mon_d= omain *d) +static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mo= n_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); size_t tsize =3D sizeof(*d->mbm_states[0]); @@ -4304,7 +4304,7 @@ int resctrl_online_ctrl_domain(struct rdt_resource *r= , struct rdt_ctrl_domain *d =20 int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hd= r *hdr) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; int err =3D -EINVAL; =20 mutex_lock(&rdtgroup_mutex); @@ -4312,7 +4312,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 - d =3D container_of(hdr, struct rdt_mon_domain, hdr); + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); err =3D domain_setup_mon_state(r, d); if (err) goto out_unlock; @@ -4360,10 +4360,10 @@ static void clear_childcpus(struct rdtgroup *r, uns= igned int cpu) } } =20 -static struct rdt_mon_domain *get_mon_domain_from_cpu(int cpu, - struct rdt_resource *r) +static struct rdt_l3_mon_domain *get_mon_domain_from_cpu(int cpu, + struct rdt_resource *r) { - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; =20 lockdep_assert_cpus_held(); =20 @@ -4379,7 +4379,7 @@ static struct rdt_mon_domain *get_mon_domain_from_cpu= (int cpu, void resctrl_offline_cpu(unsigned int cpu) { struct rdt_resource *l3 =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); - struct rdt_mon_domain *d; + struct rdt_l3_mon_domain *d; struct rdtgroup *rdtgrp; =20 mutex_lock(&rdtgroup_mutex); --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 09310320391 for ; Thu, 25 Sep 2025 20:04:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830652; cv=none; b=mL7+gMvAHOq6C+4OtyFqlPcCH/MqgRyV1HyX0VeCK1hwcb+x5wqVFfxAI4IsKHKicjfIpf0cHYUWu5MEc/3DgtvnGMDPPVycgXo49OpphfbGaMlCWpEfUidAT5EFzVpFJJPnebsTzHqP1f1UrNCxTCgrRICHsaIuLybH+tPG7yA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830652; c=relaxed/simple; bh=gyXvtfuM2DO2MmMztQrjC+UHtGEtL4e/uxyFFach+uA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=kH9vHO4/Vv7pggaMspSOBQfWGqAI/5cHHn3Wc2FyJR22FrnXGE2p12DBbMDBoE2XhxqjcaadKzAqKVYHL5yMBaCE/3lZ7KV0RoJV4FCQPUf5fFPjruj7oPNRuFeiGTogyi194GdyxQTwzOdrRGWGebGxpafUcJKbbKo19t4XJmA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=jq1auwiL; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="jq1auwiL" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830651; x=1790366651; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=gyXvtfuM2DO2MmMztQrjC+UHtGEtL4e/uxyFFach+uA=; b=jq1auwiLA0CCo8Fz5khYDMqE85IAOU591CEPseZcjlfa4qU6f7sjQSrv 1Gyb+783Dda6LkPPhyn0iJxfndFSOykLSDi1WI1P/KzedvewFaeV31+U0 SyFS4X9FluNPKa2qaSmxO4NWkh2X1TVqE+8cIXOZC0xoUQoEqz9FRm0dR V4abmGXHBKf9AUJQfXfbClhYCe+aBZzBSJeRXVM5qVoVJJ3iDC+oCSeot 4YwzswwzvMCxR/7Omn+0cooPJjTUmFuq805IyH+KM54/iKN7G5Isdf/fx OvFnCLuvlDdVA1BBGE2WdHts8crQ18B4RLnXyZ49dyilFqZ+3MEb/LUKQ Q==; X-CSE-ConnectionGUID: wc0g4A5qQimO6L5+zY3IwA== X-CSE-MsgGUID: n6zPPD5oThCKIV31mkAsCQ== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074193" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074193" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:07 -0700 X-CSE-ConnectionGUID: 86dO9KSRRKKiXXMHPcoM4w== X-CSE-MsgGUID: ZYBy7Py6SHCZjsE2fI71Yw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003623" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:07 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 08/31] x86,fs/resctrl: Rename some L3 specific functions Date: Thu, 25 Sep 2025 13:03:02 -0700 Message-ID: <20250925200328.64155-9-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" With the arrival of monitor events tied to new domains associated with a different resource it would be clearer if the L3 resource specific functions are more accurately named. 
Rename three groups of functions: Functions that allocate/free architecture per-RMID MBM state information: arch_domain_mbm_alloc() -> l3_mon_domain_mbm_alloc() mon_domain_free() -> l3_mon_domain_free() Functions that allocate/free filesystem per-RMID MBM state information: domain_setup_mon_state() -> domain_setup_l3_mon_state() domain_destroy_mon_state() -> domain_destroy_l3_mon_state() Initialization/exit: rdt_get_mon_l3_config() -> rdt_get_l3_mon_config() resctrl_mon_resource_init() -> resctrl_l3_mon_resource_init() resctrl_mon_resource_exit() -> resctrl_l3_mon_resource_exit() Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/internal.h | 2 +- fs/resctrl/internal.h | 6 +++--- arch/x86/kernel/cpu/resctrl/core.c | 18 +++++++++--------- arch/x86/kernel/cpu/resctrl/monitor.c | 2 +- fs/resctrl/monitor.c | 6 +++--- fs/resctrl/rdtgroup.c | 22 +++++++++++----------- 6 files changed, 28 insertions(+), 28 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 6eca3d522fcc..14fadcff0d2b 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -208,7 +208,7 @@ union l3_qos_abmc_cfg { =20 void rdt_ctrl_update(void *arg); =20 -int rdt_get_mon_l3_config(struct rdt_resource *r); +int rdt_get_l3_mon_config(struct rdt_resource *r); =20 bool rdt_cpu_has(int flag); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index d9e291d94926..88b4489b68e1 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -357,7 +357,9 @@ int alloc_rmid(u32 closid); =20 void free_rmid(u32 closid, u32 rmid); =20 -void resctrl_mon_resource_exit(void); +int resctrl_l3_mon_resource_init(void); + +void resctrl_l3_mon_resource_exit(void); =20 void mon_event_count(void *info); =20 @@ -367,8 +369,6 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, cpumask_t *cpumask, int evtid, int first); =20 -int resctrl_mon_resource_init(void); - void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, int exclude_cpu); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 42f4f702eeec..4762790c6e62 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -363,7 +363,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain = *hw_dom) kfree(hw_dom); } =20 -static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) +static void l3_mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom) { int idx; =20 @@ -396,11 +396,11 @@ static int domain_setup_ctrlval(struct rdt_resource *= r, struct rdt_ctrl_domain * } =20 /** - * arch_domain_mbm_alloc() - Allocate arch private storage for the MBM cou= nters + * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM c= ounters * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays */ -static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain= *hw_dom) +static int l3_mon_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_doma= in *hw_dom) { size_t tsize =3D sizeof(*hw_dom->arch_mbm_states[0]); enum resctrl_event_id eventid; @@ -514,7 +514,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct ci =3D get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); if (!ci) { pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->nam= e); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); return; 
} d->ci_id =3D ci->id; @@ -522,8 +522,8 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct =20 arch_mon_domain_online(r, d); =20 - if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { - mon_domain_free(hw_dom); + if (l3_mon_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) { + l3_mon_domain_free(hw_dom); return; } =20 @@ -533,7 +533,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct= rdt_resource *r, struct if (err) { list_del_rcu(&d->hdr.list); synchronize_rcu(); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); } } =20 @@ -658,7 +658,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_r= esource *r) resctrl_offline_mon_domain(r, hdr); list_del_rcu(&hdr->list); synchronize_rcu(); - mon_domain_free(hw_dom); + l3_mon_domain_free(hw_dom); break; default: pr_warn_once("Unknown resource rid=3D%d\n", r->rid); @@ -906,7 +906,7 @@ static __init bool get_rdt_mon_resources(void) if (!ret) return false; =20 - return !rdt_get_mon_l3_config(r); + return !rdt_get_l3_mon_config(r); } =20 static __init void __check_quirks_intel(void) diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index b448e6816fe7..ea81305fbc5d 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -422,7 +422,7 @@ static __init int snc_get_config(void) return ret; } =20 -int __init rdt_get_mon_l3_config(struct rdt_resource *r) +int __init rdt_get_l3_mon_config(struct rdt_resource *r) { unsigned int mbm_offset =3D boot_cpu_data.x86_cache_mbm_width_offset; struct rdt_hw_resource *hw_res =3D resctrl_to_arch_res(r); diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 88b990e939ea..54ae3494adfe 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -1750,7 +1750,7 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_f= ile *of, char *buf, } =20 /** - * resctrl_mon_resource_init() - Initialise global monitoring structures. + * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * * Allocate and initialise global monitor resources that do not belong to a * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists. @@ -1761,7 +1761,7 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_f= ile *of, char *buf, * * Returns 0 for success, or -ENOMEM. */ -int resctrl_mon_resource_init(void) +int resctrl_l3_mon_resource_init(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); int ret; @@ -1813,7 +1813,7 @@ int resctrl_mon_resource_init(void) return 0; } =20 -void resctrl_mon_resource_exit(void) +void resctrl_l3_mon_resource_exit(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); =20 diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 1b4f4bd63143..88b80944cf85 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4174,7 +4174,7 @@ static void rdtgroup_setup_default(void) mutex_unlock(&rdtgroup_mutex); } =20 -static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d) +static void domain_destroy_l3_mon_state(struct rdt_l3_mon_domain *d) { int idx; =20 @@ -4228,13 +4228,13 @@ void resctrl_offline_mon_domain(struct rdt_resource= *r, struct rdt_domain_hdr *h cancel_delayed_work(&d->cqm_limbo); } =20 - domain_destroy_mon_state(d); + domain_destroy_l3_mon_state(d); out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 /** - * domain_setup_mon_state() - Initialise domain monitoring structures. + * domain_setup_l3_mon_state() - Initialise domain monitoring structures. 
* @r: The resource for the newly online domain. * @d: The newly online domain. * @@ -4242,11 +4242,11 @@ void resctrl_offline_mon_domain(struct rdt_resource= *r, struct rdt_domain_hdr *h * Called when the first CPU of a domain comes online, regardless of wheth= er * the filesystem is mounted. * During boot this may be called before global allocations have been made= by - * resctrl_mon_resource_init(). + * resctrl_l3_mon_resource_init(). * * Returns 0 for success, or -ENOMEM. */ -static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mo= n_domain *d) +static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3= _mon_domain *d) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); size_t tsize =3D sizeof(*d->mbm_states[0]); @@ -4313,7 +4313,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr goto out_unlock; =20 d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - err =3D domain_setup_mon_state(r, d); + err =3D domain_setup_l3_mon_state(r, d); if (err) goto out_unlock; =20 @@ -4429,13 +4429,13 @@ int resctrl_init(void) =20 thread_throttle_mode_init(); =20 - ret =3D resctrl_mon_resource_init(); + ret =3D resctrl_l3_mon_resource_init(); if (ret) return ret; =20 ret =3D sysfs_create_mount_point(fs_kobj, "resctrl"); if (ret) { - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); return ret; } =20 @@ -4470,7 +4470,7 @@ int resctrl_init(void) =20 cleanup_mountpoint: sysfs_remove_mount_point(fs_kobj, "resctrl"); - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); =20 return ret; } @@ -4506,7 +4506,7 @@ static bool resctrl_online_domains_exist(void) * When called by the architecture code, all CPUs and resctrl domains must= be * offline. This ensures the limbo and overflow handlers are not scheduled= to * run, meaning the data structures they access can be freed by - * resctrl_mon_resource_exit(). + * resctrl_l3_mon_resource_exit(). * * After resctrl_exit() returns, the architecture code should return an * error from all resctrl_arch_ functions that can do this. @@ -4533,5 +4533,5 @@ void resctrl_exit(void) * it can be used to umount resctrl. 
*/ =20 - resctrl_mon_resource_exit(); + resctrl_l3_mon_resource_exit(); } --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BCF78320A31 for ; Thu, 25 Sep 2025 20:04:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830653; cv=none; b=ILf1zMHQ9unuzZFcTb3lUAKT2dN1ibj2c/WQV88qD7SisO3rvtgtcw2EkmwItNX3/iN8lybvylIARdKR5uOobZXjJOmmIoTSUdpbUqJSnG2Z1c02sRkMR6XG6l/r3iFaR45qun7GE3QZZ/5+SNtIempAvxnnOOPL48WmL3DmTcs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830653; c=relaxed/simple; bh=r1U+EYgfuhfN9D7nf5NPLO1L3eszz3nkYvZ6HirLH+A=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gcXUjemoY3FvbGrERNNfx1tsuHHTQzYZwfeSCw2McSEpy+E+IclB9SE428jVB5HtdBFTq04wOtuvOI95/XvtGkSdd+PffN3rckDIKYrL6EALC/buUnSTcSqLzXoESIKIm40dqCkNJkiq/ey1haVrxm1wCDtM8OHdwLNsRDJWZMY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=GMGjeks3; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="GMGjeks3" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830651; x=1790366651; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=r1U+EYgfuhfN9D7nf5NPLO1L3eszz3nkYvZ6HirLH+A=; b=GMGjeks3wWm/XHEsoQ5hUPWtiahRa5NMPfNq1PkV+aBnDISsKXydIVq9 P8O43mzFeZfiM8wMOavquiTQXF3T9rJTFg8Y0tU23v4BKTe0lSEjRNW++ N/umcbbuAqCLI5pN28zqm3L0bzfX/U6bTBcp4pLYM/BhfmFdtFaCWSbLy OE29U676h9mPYz2R/f2sF3/2RSEdQFJZkVjUnKmz0yvBNhoiG31msOibn LV2zMWt5+yoAdZIgjb/Zw3TuPE2B6sdXFb8kSodGHXX8hAjNLffwMf4pz OmT6/J9Kn+ZXV0U6Vi1gUZ+BWZxe8OFO7LGIqsq4Q6qtJ5o4pszzG309P Q==; X-CSE-ConnectionGUID: xs581tW5QjK6ExCp6Tg6Kg== X-CSE-MsgGUID: u1FiwwTcTIaSBP5ZWtCsCA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074202" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074202" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:08 -0700 X-CSE-ConnectionGUID: wXYOdRJBTbeA3Vvge2jgLw== X-CSE-MsgGUID: 9xbthVi5Trq7CC5i/YH1ng== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003627" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:07 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 09/31] fs/resctrl: Make event details accessible to functions when reading events Date: Thu, 25 Sep 2025 13:03:03 -0700 Message-ID: 
<20250925200328.64155-10-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Reading monitoring data from MMIO requires more context to be able to read the correct memory location. struct mon_evt is the appropriate place for this event specific context. Prepare for addition of extra fields to mon_evt by changing the calling conventions to pass a pointer to the mon_evt structure instead of just the event id. Signed-off-by: Tony Luck --- fs/resctrl/internal.h | 10 +++++----- fs/resctrl/ctrlmondata.c | 18 +++++++++--------- fs/resctrl/monitor.c | 24 ++++++++++++------------ fs/resctrl/rdtgroup.c | 6 +++--- 4 files changed, 29 insertions(+), 29 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 88b4489b68e1..12a2ab7e3c9b 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -81,7 +81,7 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; * struct mon_data - Monitoring details for each event file. * @list: Member of the global @mon_data_kn_priv_list list. * @rid: Resource id associated with the event file. - * @evtid: Event id associated with the event file. + * @evt: Event structure associated with the event file. * @sum: Set when event must be summed across multiple * domains. * @domid: When @sum is zero this is the domain to which @@ -95,7 +95,7 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; struct mon_data { struct list_head list; enum resctrl_res_level rid; - enum resctrl_event_id evtid; + struct mon_evt *evt; int domid; bool sum; }; @@ -108,7 +108,7 @@ struct mon_data { * @r: Resource describing the properties of the event being read. * @hdr: Header of domain that the counter should be read from. If NULL = then * sum all domains in @r sharing L3 @ci.id - * @evtid: Which monitor event to read. + * @evt: Which monitor event to read. * @first: Initialize MBM counter when true. * @ci: Cacheinfo for L3. Only set when @hdr is NULL. Used when summing * domains. @@ -126,7 +126,7 @@ struct rmid_read { struct rdtgroup *rgrp; struct rdt_resource *r; struct rdt_domain_hdr *hdr; - enum resctrl_event_id evtid; + struct mon_evt *evt; bool first; struct cacheinfo *ci; bool is_mbm_cntr; @@ -367,7 +367,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg= ); =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, - cpumask_t *cpumask, int evtid, int first); + cpumask_t *cpumask, struct mon_evt *evt, int first); =20 void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom, unsigned long delay_ms, diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index c95f8eb8e731..77602563cb1f 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -548,7 +548,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_= head *h, int id, =20 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r, struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp, - cpumask_t *cpumask, int evtid, int first) + cpumask_t *cpumask, struct mon_evt *evt, int first) { int cpu; =20 @@ -559,15 +559,15 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, * Setup the parameters to pass to mon_event_count() to read the data. 
*/ rr->rgrp =3D rdtgrp; - rr->evtid =3D evtid; + rr->evt =3D evt; rr->r =3D r; rr->hdr =3D hdr; rr->first =3D first; if (resctrl_arch_mbm_cntr_assign_enabled(r) && - resctrl_is_mbm_event(evtid)) { + resctrl_is_mbm_event(evt->evtid)) { rr->is_mbm_cntr =3D true; } else { - rr->arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, evtid); + rr->arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, evt->evtid); if (IS_ERR(rr->arch_mon_ctx)) { rr->err =3D -EINVAL; return; @@ -588,20 +588,20 @@ void mon_event_read(struct rmid_read *rr, struct rdt_= resource *r, smp_call_on_cpu(cpu, smp_mon_event_count, rr, false); =20 if (rr->arch_mon_ctx) - resctrl_arch_mon_ctx_free(r, evtid, rr->arch_mon_ctx); + resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } =20 int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; enum resctrl_res_level resid; - enum resctrl_event_id evtid; struct rdt_domain_hdr *hdr; struct rmid_read rr =3D {0}; struct rdtgroup *rdtgrp; int domid, cpu, ret =3D 0; struct rdt_resource *r; struct cacheinfo *ci; + struct mon_evt *evt; struct mon_data *md; =20 rdtgrp =3D rdtgroup_kn_lock_live(of->kn); @@ -618,7 +618,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) =20 resid =3D md->rid; domid =3D md->domid; - evtid =3D md->evtid; + evt =3D md->evt; r =3D resctrl_arch_get_resource(resid); =20 if (md->sum) { @@ -638,7 +638,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) continue; rr.ci =3D ci; mon_event_read(&rr, r, NULL, rdtgrp, - &ci->shared_cpu_map, evtid, false); + &ci->shared_cpu_map, evt, false); goto checkresult; } } @@ -654,7 +654,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) ret =3D -ENOENT; goto out; } - mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false); + mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evt, false); } =20 checkresult: diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 54ae3494adfe..ee08ffbacc2b 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -429,7 +429,7 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); =20 if (rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evtid); + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evt->evtid); if (cntr_id < 0) { rr->err =3D -ENOENT; return -EINVAL; @@ -438,10 +438,10 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) =20 if (rr->first) { if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evtid); + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evt->evtid= ); else - resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid); - m =3D get_mbm_state(d, closid, rmid, rr->evtid); + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evt->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); if (m) memset(m, 0, sizeof(struct mbm_state)); return 0; @@ -453,10 +453,10 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) return -EINVAL; if (rr->is_mbm_cntr) rr->err =3D resctrl_arch_cntr_read(rr->r, rr->hdr, closid, rmid, cntr_i= d, - rr->evtid, &tval); + rr->evt->evtid, &tval); else rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; =20 @@ -482,10 +482,10 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) continue; if (rr->is_mbm_cntr) 
err =3D resctrl_arch_cntr_read(rr->r, &d->hdr, closid, rmid, cntr_id, - rr->evtid, &tval); + rr->evt->evtid, &tval); else err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, - rr->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -521,7 +521,7 @@ static void mbm_bw_count(struct rdtgroup *rdtgrp, struc= t rmid_read *rr) if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return; d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); - m =3D get_mbm_state(d, closid, rmid, rr->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); if (WARN_ON_ONCE(!m)) return; =20 @@ -695,11 +695,11 @@ static void mbm_update_one_event(struct rdt_resource = *r, struct rdt_l3_mon_domai =20 rr.r =3D r; rr.hdr =3D &d->hdr; - rr.evtid =3D evtid; + rr.evt =3D &mon_event_all[evtid]; if (resctrl_arch_mbm_cntr_assign_enabled(r)) { rr.is_mbm_cntr =3D true; } else { - rr.arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid); + rr.arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(rr.r, evtid); if (IS_ERR(rr.arch_mon_ctx)) { pr_warn_ratelimited("Failed to allocate monitor context: %ld", PTR_ERR(rr.arch_mon_ctx)); @@ -717,7 +717,7 @@ static void mbm_update_one_event(struct rdt_resource *r= , struct rdt_l3_mon_domai mbm_bw_count(rdtgrp, &rr); =20 if (rr.arch_mon_ctx) - resctrl_arch_mon_ctx_free(rr.r, rr.evtid, rr.arch_mon_ctx); + resctrl_arch_mon_ctx_free(rr.r, evtid, rr.arch_mon_ctx); } =20 static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d, diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 88b80944cf85..dc289b03c3d1 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3038,7 +3038,7 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, =20 list_for_each_entry(priv, &mon_data_kn_priv_list, list) { if (priv->rid =3D=3D rid && priv->domid =3D=3D domid && - priv->sum =3D=3D do_sum && priv->evtid =3D=3D mevt->evtid) + priv->sum =3D=3D do_sum && priv->evt =3D=3D mevt) return priv; } =20 @@ -3049,7 +3049,7 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, priv->rid =3D rid; priv->domid =3D domid; priv->sum =3D do_sum; - priv->evtid =3D mevt->evtid; + priv->evt =3D mevt; list_add_tail(&priv->list, &mon_data_kn_priv_list); =20 return priv; @@ -3210,7 +3210,7 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, return ret; =20 if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) - mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true); + mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt, true); } =20 return 0; --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E8FE732126B for ; Thu, 25 Sep 2025 20:04:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830654; cv=none; b=TLXbWcSvCEMSDTKN7t4mpl3MKbiHWk8U0Rsh+Fkmk851FMmb52pdA6gTNlmqhvaRAqglw8zCIv7xkBZMDEGkIoQJ+F2WXis+VASOGoBT8BkJzL5WBabmlTBd2j0xhCY9V5veyUT4iKlwQAT3rJ8o3S/BMgmNpZA4m+LeZh6AOek= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830654; c=relaxed/simple; bh=twEWUzV9QVJG19A6MjLSnqqgXcexWoT8KTf5Py27uuU=; 
h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dn9ojNDV9lR9qvQbjOihvZKFr/GoD87StdJw95Bvx/SyueKin6U+8T/YIS4YMOtAJmuvQQukhbYzNz74ri8BUjh/D5YJrNkgkreGWVrVwZkukD/S5yPJ5MpIhAO7b3NcK+shAipPCVviflj/209wQ/kQw7Mp+3aGp1TLNXGmInw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=KHyKYARs; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KHyKYARs" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830652; x=1790366652; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=twEWUzV9QVJG19A6MjLSnqqgXcexWoT8KTf5Py27uuU=; b=KHyKYARsHCHT1jXLFJcIgOLfkauHoVt7XszhhF//u8IeKTgdZZD05ml0 /FT880hmS/o7KdEPqkPq3obBP2R2qcS48mqpBf3Q4kwrVv8aw/zSJWK2X PBzx1Z3NvIo1TF3v4n/pVffTsSu65wjtaWuR0QHSNkOcOzaavv8LYoQmu XsephTFWA/QB6vNh0y34svHAcqZCzcOyNzWmQciLReLFQdNRvg6qEvIzp a7WXpND3iXCpUi+vLXSpKpuH3sYltI6NuW7WazNmoT5ZgN2yAoyIZylqN czWo6eLAnU8Sf64rSp/z3VSlJOUBmxaga8w4L91zqdaTRoapmUo4VMX+q Q==; X-CSE-ConnectionGUID: dEV3vWYPR7Wl4snitHB2OQ== X-CSE-MsgGUID: JGzWAmbSRr6sXQHE1xQReg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074211" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074211" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:08 -0700 X-CSE-ConnectionGUID: rBgxTjO7QK6dr7S2F3BQLQ== X-CSE-MsgGUID: 9TT6mFhTTvyYphXDum+m5Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003630" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:07 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 10/31] x86,fs/resctrl: Handle events that can be read from any CPU Date: Thu, 25 Sep 2025 13:03:04 -0700 Message-ID: <20250925200328.64155-11-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that monitor events can only be read from a CPU in the cpumask_t set of each domain. This is true for x86 events accessed with an MSR interface, but may not be true for other access methods such as MMIO. Add a flag to struct mon_evt, settable by architecture code, to indicate there are no restrictions on which CPU can read that event. Bypass all the smp_call*() code for events that can be read on any CPU and call mon_event_count() directly from mon_event_read(). Simplify CPU checking in __mon_event_count() with a helper. 
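For reference, the resulting read path is roughly the sketch below
(simplified; the mon_event_read() and cpu_on_correct_domain() hunks in
the diff are authoritative):

	if (evt->any_cpu) {
		/* No CPU restriction: read in the current context. */
		mon_event_count(rr);
	} else {
		/* Pick a housekeeping CPU in the domain and read there. */
		cpu = cpumask_any_housekeeping(cpumask, RESCTRL_PICK_ANY_CPU);
		smp_call_on_cpu(cpu, smp_mon_event_count, rr, false);
	}

__mon_event_count() keeps the "must run on a CPU in the domain (or in
the shared cache's CPU mask)" check, now via the cpu_on_correct_domain()
helper, for events that are not marked any_cpu.
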
Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- fs/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 6 ++--- fs/resctrl/ctrlmondata.c | 6 +++++ fs/resctrl/monitor.c | 43 ++++++++++++++++++++++-------- 5 files changed, 44 insertions(+), 15 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 66569662efee..22edd8d131d8 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -409,7 +409,7 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid); +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 12a2ab7e3c9b..40b76eaa33d0 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -61,6 +61,7 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * READS_TO_REMOTE_MEM) being tracked by @evtid. * Only valid if @evtid is an MBM event. * @configurable: true if the event is configurable + * @any_cpu: true if the event can be read from any CPU * @enabled: true if the event is enabled */ struct mon_evt { @@ -69,6 +70,7 @@ struct mon_evt { char *name; u32 evt_cfg; bool configurable; + bool any_cpu; bool enabled; }; =20 diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 4762790c6e62..8db941fef7a0 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -889,15 +889,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index 77602563cb1f..fbf55e61445c 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -574,6 +574,11 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, } } =20 + if (evt->any_cpu) { + mon_event_count(rr); + goto out_ctx_free; + } + cpu =3D cpumask_any_housekeeping(cpumask, RESCTRL_PICK_ANY_CPU); =20 /* @@ -587,6 +592,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_re= source *r, else smp_call_on_cpu(cpu, smp_mon_event_count, rr, false); =20 +out_ctx_free: if (rr->arch_mon_ctx) resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index ee08ffbacc2b..6f8a9b5a2f6b 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -413,9 +413,33 @@ static void mbm_cntr_free(struct rdt_l3_mon_domain *d,= int cntr_id) memset(&d->cntr_cfg[cntr_id], 0, sizeof(*d->cntr_cfg)); } =20 +/* + * Called from preemptible context via a direct call of mon_event_count() = for + * events that can be read on any CPU. 
+ * Called from preemptible but non-migratable process context (mon_event_c= ount() + * via smp_call_on_cpu()) OR non-preemptible context (mon_event_count() via + * smp_call_function_any()) for events that need to be read on a specific = CPU. + */ +static bool cpu_on_correct_domain(struct rmid_read *rr) +{ + int cpu; + + /* Any CPU is OK for this event */ + if (rr->evt->any_cpu) + return true; + + cpu =3D smp_processor_id(); + + /* Single domain. Must be on a CPU in that domain. */ + if (rr->hdr) + return cpumask_test_cpu(cpu, &rr->hdr->cpu_mask); + + /* Summing domains that share a cache, must be on a CPU for that cache. */ + return cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map); +} + static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr) { - int cpu =3D smp_processor_id(); u32 closid =3D rdtgrp->closid; u32 rmid =3D rdtgrp->mon.rmid; struct rdt_l3_mon_domain *d; @@ -424,6 +448,9 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) int err, ret; u64 tval =3D 0; =20 + if (!cpu_on_correct_domain(rr)) + return -EINVAL; + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); @@ -448,9 +475,6 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) } =20 if (rr->hdr) { - /* Reading a single domain, must be on a CPU in that domain. */ - if (!cpumask_test_cpu(cpu, &rr->hdr->cpu_mask)) - return -EINVAL; if (rr->is_mbm_cntr) rr->err =3D resctrl_arch_cntr_read(rr->r, rr->hdr, closid, rmid, cntr_i= d, rr->evt->evtid, &tval); @@ -465,10 +489,6 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, = struct rmid_read *rr) return 0; } =20 - /* Summing domains that share a cache, must be on a CPU for that cache. */ - if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map)) - return -EINVAL; - /* * Legacy files must report the sum of an event across all * domains that share the same L3 cache instance. 
@@ -957,7 +977,7 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { }, }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= )) return; @@ -966,6 +986,7 @@ void resctrl_enable_mon_event(enum resctrl_event_id eve= ntid) return; } =20 + mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].enabled =3D true; } =20 @@ -1791,9 +1812,9 @@ int resctrl_l3_mon_resource_init(void) =20 if (r->mon.mbm_cntr_assignable) { if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID)) - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID)) - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg =3D r->mon.mbm_cfg_mask; mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg =3D r->mon.mbm_cfg_mask= & (READS_TO_LOCAL_MEM | --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 61A2731DD94 for ; Thu, 25 Sep 2025 20:04:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830654; cv=none; b=bOVI7qNMcYhWq/8x1T0dqVQSjf+XbofNwr+hpvTHqTqRW8p2CeFBukkdUBSytAinyGueBBDAIrsMLtxNNQyaRLGhhVUsQ59WVYwGDiq9yPye9FPL/LxN5+5A+UDxlB0cY7dyUQPK4YxGJL3Tpz/fgHtjSymibJwpJFYUY6sDhoA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830654; c=relaxed/simple; bh=2f6/Swl91POazZDU/TNZoGBxShRWy4x/LTMyWVuZPCg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=mwO7IsdvVwaQj2Ymbjz6xxKzWWEM7u9Duv8z41m1YMxofjNNobhIh2ByJ+1WYRcLIPUZvJg4ij1oV/iFZDg8ZNNmgta1EEa3TfgrrOJt3/0fipGkBUHa7swRVU2gtFXrCcpRdxPD5Wy6+m7FbZk1iSJqq9ZqlDSfro7usR/iSWs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ecHc3PRi; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ecHc3PRi" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830653; x=1790366653; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2f6/Swl91POazZDU/TNZoGBxShRWy4x/LTMyWVuZPCg=; b=ecHc3PRiHfkVzD3QJ6CW3eK/XWvf/MkZJXkS2BqVrhXpqlf8Fi0cNu1L iYU+buGkO77LAPtvaw+h8NR4s1aVUXRhlC52poS4NW5xqopp7wHZhb3K2 ftttKZ3mZ6rryu+0ikI0E7JMhe1oBqa508VWHboJFzD6QK7GoBEP+Guji A6HWwaj2MCWTKziorLJj3+WOkgNsm8BmtKUWcOjAk5ipYcXsdj2Zw+clj e2m6HsKBoIcJSXEEHh04vWnQacrtXjwXTf4n+wqixL9+8nn3MrrVw0afu VRVGlkNYx3LpDaTJrJ4X5OHODWMQxnFDgSckrb5OPNc73ILJ/gd5XRV1m A==; X-CSE-ConnectionGUID: sWcATv7ERzWooy/kpAxL3Q== X-CSE-MsgGUID: H/RA0fG/R4q6XSwvHlTl5w== X-IronPort-AV: 
E=McAfee;i="6800,10657,11531"; a="61074219" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074219" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:08 -0700 X-CSE-ConnectionGUID: iXyfSXsjS4WkKTGeyIkIew== X-CSE-MsgGUID: wjD8r8GARwSaVAIPVL/j3g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003634" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:08 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 11/31] x86,fs/resctrl: Support binary fixed point event counters Date: Thu, 25 Sep 2025 13:03:05 -0700 Message-ID: <20250925200328.64155-12-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that all monitor events can be displayed as unsigned decimal integers. Hardware architecture counters may provide some telemetry events with greater precision where the event is not a simple count, but is a measurement of some sort (e.g. Joules for energy consumed). Add a new argument to resctrl_enable_mon_event() for architecture code to inform the file system that the value for a counter is a fixed-point value with a specific number of binary places. Only allow architecture to use floating point format on events that the file system has marked with mon_evt::is_floating_point. Display fixed point values with values rounded to an appropriate number of decimal places for the precision of the number of binary places provided. Add one extra decimal place for every three additional binary places, except for low precision binary values where exact representation is possible: 1 binary place is 0.0 or 0.5 =3D> 1 decimal place 2 binary places is 0.0, 0.25, 0.5, 0.75 =3D> 2 decimal places 3 binary places is 0.0, 0.125, etc. 
=3D> 3 decimal places Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 3 +- fs/resctrl/internal.h | 8 +++ arch/x86/kernel/cpu/resctrl/core.c | 6 +-- fs/resctrl/ctrlmondata.c | 84 ++++++++++++++++++++++++++++++ fs/resctrl/monitor.c | 14 +++-- 5 files changed, 107 insertions(+), 8 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 22edd8d131d8..de66928e9430 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -409,7 +409,8 @@ u32 resctrl_arch_get_num_closid(struct rdt_resource *r); u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu); +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, + unsigned int binary_bits); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 40b76eaa33d0..f5189b6771a0 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -62,6 +62,9 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * Only valid if @evtid is an MBM event. * @configurable: true if the event is configurable * @any_cpu: true if the event can be read from any CPU + * @is_floating_point: event values are displayed in floating point format + * @binary_bits: number of fixed-point binary bits from architecture, + * only valid if @is_floating_point is true * @enabled: true if the event is enabled */ struct mon_evt { @@ -71,6 +74,8 @@ struct mon_evt { u32 evt_cfg; bool configurable; bool any_cpu; + bool is_floating_point; + unsigned int binary_bits; bool enabled; }; =20 @@ -79,6 +84,9 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; #define for_each_mon_event(mevt) for (mevt =3D &mon_event_all[QOS_FIRST_EV= ENT]; \ mevt < &mon_event_all[QOS_NUM_EVENTS]; mevt++) =20 +/* Limit for mon_evt::binary_bits */ +#define MAX_BINARY_BITS 27 + /** * struct mon_data - Monitoring details for each event file. * @list: Member of the global @mon_data_kn_priv_list list. diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 8db941fef7a0..ccba27df3ea6 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -889,15 +889,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index fbf55e61445c..ae43e09fa5e5 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -17,6 +17,7 @@ =20 #include #include +#include #include #include #include @@ -597,6 +598,87 @@ void mon_event_read(struct rmid_read *rr, struct rdt_r= esource *r, resctrl_arch_mon_ctx_free(r, evt->evtid, rr->arch_mon_ctx); } =20 +/* + * Decimal place precision to use for each number of fixed-point + * binary bits. 
+ */ +static unsigned int decplaces[MAX_BINARY_BITS + 1] =3D { + [1] =3D 1, + [2] =3D 2, + [3] =3D 3, + [4] =3D 3, + [5] =3D 3, + [6] =3D 3, + [7] =3D 3, + [8] =3D 3, + [9] =3D 3, + [10] =3D 4, + [11] =3D 4, + [12] =3D 4, + [13] =3D 5, + [14] =3D 5, + [15] =3D 5, + [16] =3D 6, + [17] =3D 6, + [18] =3D 6, + [19] =3D 7, + [20] =3D 7, + [21] =3D 7, + [22] =3D 8, + [23] =3D 8, + [24] =3D 8, + [25] =3D 9, + [26] =3D 9, + [27] =3D 9 +}; + +static void print_event_value(struct seq_file *m, unsigned int binary_bits= , u64 val) +{ + unsigned long long frac; + char buf[10]; + + if (!binary_bits) { + seq_printf(m, "%llu.0\n", val); + return; + } + + /* Mask off the integer part of the fixed-point value. */ + frac =3D val & GENMASK_ULL(binary_bits, 0); + + /* + * Multiply by 10^{desired decimal places}. The integer part of + * the fixed point value is now almost what is needed. + */ + frac *=3D int_pow(10ull, decplaces[binary_bits]); + + /* + * Round to nearest by adding a value that would be a "1" in the + * binary_bits + 1 place. Integer part of fixed point value is + * now the needed value. + */ + frac +=3D 1ull << (binary_bits - 1); + + /* + * Extract the integer part of the value. This is the decimal + * representation of the original fixed-point fractional value. + */ + frac >>=3D binary_bits; + + /* + * "frac" is now in the range [0 .. 10^decplaces). I.e. string + * representation will fit into chosen number of decimal places. + */ + snprintf(buf, sizeof(buf), "%0*llu", decplaces[binary_bits], frac); + + /* Trim trailing zeroes */ + for (int i =3D decplaces[binary_bits] - 1; i > 0; i--) { + if (buf[i] !=3D '0') + break; + buf[i] =3D '\0'; + } + seq_printf(m, "%llu.%s\n", val >> binary_bits, buf); +} + int rdtgroup_mondata_show(struct seq_file *m, void *arg) { struct kernfs_open_file *of =3D m->private; @@ -675,6 +757,8 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg) seq_puts(m, "Unavailable\n"); else if (rr.err =3D=3D -ENOENT) seq_puts(m, "Unassigned\n"); + else if (evt->is_floating_point) + print_event_value(m, evt->binary_bits, rr.val); else seq_printf(m, "%llu\n", rr.val); =20 diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 6f8a9b5a2f6b..e354f01df615 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -977,16 +977,22 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { }, }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) { - if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= )) + if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || + binary_bits > MAX_BINARY_BITS)) return; if (mon_event_all[eventid].enabled) { pr_warn("Duplicate enable for event %d\n", eventid); return; } + if (binary_bits && !mon_event_all[eventid].is_floating_point) { + pr_warn("Event %d may not be floating point\n", eventid); + return; + } =20 mon_event_all[eventid].any_cpu =3D any_cpu; + mon_event_all[eventid].binary_bits =3D binary_bits; mon_event_all[eventid].enabled =3D true; } =20 @@ -1812,9 +1818,9 @@ int resctrl_l3_mon_resource_init(void) =20 if (r->mon.mbm_cntr_assignable) { if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID)) - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID)) - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false); + 
resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg =3D r->mon.mbm_cfg_mask; mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg =3D r->mon.mbm_cfg_mask= & (READS_TO_LOCAL_MEM | --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 314AB321422 for ; Thu, 25 Sep 2025 20:04:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830655; cv=none; b=i0YnGVVoPRwR3Xtxn0EJy1Rw36sVYL5Z7iF0gmT2GRN1rBjGgfyV3nSEAppAiU0OBkcIQwsnLJRRDiMoFjgLSxuTlqZn/R7Zx4btRxFZ/9r6WoviYCd3JdkOAnFd9ETaxjns4dhoOVksZ76Kpniai/Xo4z1HsOSMB7GhVdbCwCk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830655; c=relaxed/simple; bh=AkDDdTR9vj1/vTSw7b2/MkEuHeAyPOmzsSwe02gqWMU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=r/Q8yV2KT9V7p3+sO6wWJzyQbWwxwi9JitKSc00XF+QjfIZhr6hfWI5jPnxMGS10vbizw3TTXePBfmqXRiis1ZzZU7/kqTAH8iWYRUIreejgXSrkrjSbkoqJ0x0/zSFvtNC89tDOv4TY341U1aTGkvRy7qn0VRn3rKljA5yuPq0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ViG7v1BQ; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ViG7v1BQ" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830654; x=1790366654; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AkDDdTR9vj1/vTSw7b2/MkEuHeAyPOmzsSwe02gqWMU=; b=ViG7v1BQgUXIAUOGefI+nEJdIKfXCQ2Uqp9/XS11TLHefCN2jzC2tU7Q /szvXZVsbVs+TKQ51FdzBT/a6GdSICM/bKTJ2tjmLefF7D1gD6VFvsYy0 k7S4qq+iko3kRHe4w3wyTPRa3Wcf1i8dkcorVnsQ9cPVuXZHMJXbpSH3a Ct5Oi/Rkl6z81b/QYgtZxXE7mgm5rNh/QtaQ0EUheCQRkLgP/ZL8tIJCg HkFSlNJttTGgY6d4zx8+CJn4Rd5/KaxdavTgGbuReB/4j5uDPYuxzu7He 5dP1jFGmpJWN7HHuiU9vRoo/8vdjQxqg189ojb3fMz4E33UMnBkVFhpys g==; X-CSE-ConnectionGUID: y1PzFIrRSdWi43r+lvt7Tg== X-CSE-MsgGUID: 2KA0B3RJRsiHIr24hGf6Rg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074229" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074229" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:09 -0700 X-CSE-ConnectionGUID: pO/ZfK+2S7GIisluk+GECQ== X-CSE-MsgGUID: PqIXRIbOTTu75LM++VP/PQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003639" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:08 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: 
[PATCH v11 12/31] x86,fs/resctrl: Add an architectural hook called for each mount Date: Thu, 25 Sep 2025 13:03:06 -0700 Message-ID: <20250925200328.64155-13-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Enumeration of Intel telemetry events is an asynchronous process involving several mutually dependent drivers added as auxiliary devices during the device_initcall() phase of Linux boot. The process finishes after the probe functions of these drivers completes. But this happens after resctrl_arch_late_init() is executed. Tracing the enumeration process shows that it does complete a full seven seconds before the earliest possible mount of the resctrl file system (when included in /etc/fstab for automatic mount by systemd). Add a hook at the beginning of the mount code that will be used to check for telemetry events and initialize if any are found. Call the hook on every attempted mount. Expectations are that most actions (like enumeration) will only need to be performed on the first call. resctrl filesystem calls the hook with no locks held. Architecture code is responsible for any required locking. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 6 ++++++ arch/x86/kernel/cpu/resctrl/core.c | 9 +++++++++ fs/resctrl/rdtgroup.c | 2 ++ 3 files changed, 17 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index de66928e9430..6350064ac8be 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -511,6 +511,12 @@ void resctrl_offline_mon_domain(struct rdt_resource *r= , struct rdt_domain_hdr *h void resctrl_online_cpu(unsigned int cpu); void resctrl_offline_cpu(unsigned int cpu); =20 +/* + * Architecture hook called at beginning of each file system mount attempt. + * No locks are held. + */ +void resctrl_arch_pre_mount(void); + /** * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rm= id * for this resource and domain. 
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index ccba27df3ea6..ee6d53aae455 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -717,6 +717,15 @@ static int resctrl_arch_offline_cpu(unsigned int cpu) return 0; } =20 +void resctrl_arch_pre_mount(void) +{ + static atomic_t only_once =3D ATOMIC_INIT(0); + int old =3D 0; + + if (!atomic_try_cmpxchg(&only_once, &old, 1)) + return; +} + enum { RDT_FLAG_CMT, RDT_FLAG_MBM_TOTAL, diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index dc289b03c3d1..72ae7224a2da 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2720,6 +2720,8 @@ static int rdt_get_tree(struct fs_context *fc) struct rdt_resource *r; int ret; =20 + resctrl_arch_pre_mount(); + cpus_read_lock(); mutex_lock(&rdtgroup_mutex); /* --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 32A4430CDBA for ; Thu, 25 Sep 2025 20:04:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830656; cv=none; b=i7bDnOQEe4DKuT6TASEYyM5x52onjOAfyqOS83CorhjlbPF+7SvVPMTRElz5TLvfGYFHJfJyou5ftOYOwRPOUTRinrML3efvNCa4+C+mk7lrH30512TXMDSBEx0KLk6+Q8grWEPx/K/8s39xgvJDx0Bfu7Ytfmsc6R0EJUHcPM8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830656; c=relaxed/simple; bh=YGbAnFmurLh7byiT75mi0ZvMHKcRUq7uV1rFIt85zc8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=qQxQ407wncxFjHW1fT43t2ftyYaKIABYYVTRCrIZlrJbBEDe3tMp8ZNrh15NRZSrUSyU+5uGtdxfkAd3iakczJAcd9V+a143jN3FtzAaPx+UVLVnPuGV235vP6y7pSxSjmBlO7laPUiqUhDehoP0cgu0FilgiYMkYF43QhLZxPU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=HgOaUzJU; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HgOaUzJU" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830655; x=1790366655; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YGbAnFmurLh7byiT75mi0ZvMHKcRUq7uV1rFIt85zc8=; b=HgOaUzJUGS28EZopGaMmXM2OmpBgv0xOxSie2JtQo3hxabO//k7mzxwB V0aijtxXrVholuEBYbf9VuKwIqO3GLBk5h5ItK2pRRGgOqBJXQjEQbh9h R48bPCwpEtp74fdPUBgtV2qszjemMjkJtZKsOMi1ImXMw3zTimhhwTQ6J 4F0eETpamNc96xYoe8fuO7LH3ekTgsuQdKeD8AbGXet3NskU9WUrUshiv gxNSX7BgRRDjh2cV3JRkNMeULTRk/0Qq0PU3n3EYdZToi/1Kn/igBCvUY 0qcpjNm+EQJ4RyOHswD6aoJ0luPJfkQbuEDGQEKW5pAXfePCz0Csy2Xfx g==; X-CSE-ConnectionGUID: j+c/25rWSwq6QnplOvhaAQ== X-CSE-MsgGUID: fO5CzA1JSiixO97HPv7/+Q== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074238" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074238" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:09 -0700 
X-CSE-ConnectionGUID: +1xI2iuMR5a9n1OyQykjPg== X-CSE-MsgGUID: IhBA5GBpQdGr4s6ph/EAaQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003642" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:09 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 13/31] x86,fs/resctrl: Add and initialize rdt_resource for package scope monitor Date: Thu, 25 Sep 2025 13:03:07 -0700 Message-ID: <20250925200328.64155-14-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a new PERF_PKG resource and introduce package level scope for monitoring telemetry events so that CPU hot plug notifiers can build domains at the package granularity. Use the physical package ID available via topology_physical_package_id() to identify the monitoring domains with package level scope. This enables user space to use: /sys/devices/system/cpu/cpuX/topology/physical_package_id to identify the monitoring domain a CPU is associated with. Signed-off-by: Tony Luck Reviewed-by: Reinette Chatre --- include/linux/resctrl.h | 2 ++ fs/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 10 ++++++++++ fs/resctrl/rdtgroup.c | 2 ++ 4 files changed, 16 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 6350064ac8be..ff67224b80c8 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -53,6 +53,7 @@ enum resctrl_res_level { RDT_RESOURCE_L2, RDT_RESOURCE_MBA, RDT_RESOURCE_SMBA, + RDT_RESOURCE_PERF_PKG, =20 /* Must be the last */ RDT_NUM_RESOURCES, @@ -267,6 +268,7 @@ enum resctrl_scope { RESCTRL_L2_CACHE =3D 2, RESCTRL_L3_CACHE =3D 3, RESCTRL_L3_NODE, + RESCTRL_PACKAGE, }; =20 /** diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index f5189b6771a0..96d97f4ff957 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -255,6 +255,8 @@ struct rdtgroup { =20 #define RFTYPE_ASSIGN_CONFIG BIT(11) =20 +#define RFTYPE_RES_PERF_PKG BIT(11) + #define RFTYPE_CTRL_INFO (RFTYPE_INFO | RFTYPE_CTRL) =20 #define RFTYPE_MON_INFO (RFTYPE_INFO | RFTYPE_MON) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index ee6d53aae455..64c6f507b7bc 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -100,6 +100,14 @@ struct rdt_hw_resource rdt_resources_all[RDT_NUM_RESOU= RCES] =3D { .schema_fmt =3D RESCTRL_SCHEMA_RANGE, }, }, + [RDT_RESOURCE_PERF_PKG] =3D + { + .r_resctrl =3D { + .name =3D "PERF_PKG", + .mon_scope =3D RESCTRL_PACKAGE, + .mon_domains =3D mon_domain_init(RDT_RESOURCE_PERF_PKG), + }, + }, }; =20 u32 resctrl_arch_system_num_rmid_idx(void) @@ -433,6 +441,8 @@ static int get_domain_id_from_scope(int cpu, enum resct= rl_scope scope) return get_cpu_cacheinfo_id(cpu, scope); case RESCTRL_L3_NODE: return cpu_to_node(cpu); + case RESCTRL_PACKAGE: + return topology_physical_package_id(cpu); default: break; } diff 
--git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 72ae7224a2da..6e8937f94e7a 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2330,6 +2330,8 @@ static unsigned long fflags_from_resource(struct rdt_= resource *r) case RDT_RESOURCE_MBA: case RDT_RESOURCE_SMBA: return RFTYPE_RES_MB; + case RDT_RESOURCE_PERF_PKG: + return RFTYPE_RES_PERF_PKG; } =20 return WARN_ON_ONCE(1); --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B4E6D3218DD for ; Thu, 25 Sep 2025 20:04:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830657; cv=none; b=KnrBOCN7/vuYIFBOVN/rveGUQW03cpi1s5x8GCcAshKrx3FToNgl/nrwgXw0KbJQITIxEaiegOfuBNhb9rrprHB4C/Kh0N0U+7WpajM/y53pXuWiNZA0DyoMPNQn9SfVHED8o4HyrUwCx/zWerwso32/YV65hxzfIFtyyE47PZo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830657; c=relaxed/simple; bh=qH/GDjzlCQXAzpzL1YuJ9XPTstvOMwofyuGrPN9kIf0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ZEWZwHFeI/CMVMxIQWSWNHhj4OhrPAhAxEhJXGl/HRJqJFuFON7rsoPn/3Sql5eeBP2v+w4JcryyA/7QGfsct/CSWIPhQQSprG3Oo9Vg8uF/ge6WoeWHDXVXnWpxaGcu4I/+xO/gi8OHabXEYk79bWGWMCKAXQfFdLgdK1Zm4oo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=jWgq8oTg; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="jWgq8oTg" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830655; x=1790366655; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qH/GDjzlCQXAzpzL1YuJ9XPTstvOMwofyuGrPN9kIf0=; b=jWgq8oTgiQ8DTrRBACRfTzSk3wd8uSYg9dCUBAi0MTBXmEJiOJ7e7OHD UtWik+jkx1uBGUGKuV0P4SCKeT/zU8z2mEWfj7WE90731igJB9o8GxTUV ZOvRztDwhPX7piccPp9pgI7/ZVPeA5AlAp7TrS6esXEsVbnkjV/BZN6Bh 6v8DgRJ2/DmP76FsmuYN+al8s26LvCk+roMBgmJz+gpXTpu3V+pU64Pm5 1on/yVpf6vj/6UQq+HzH8Hm4/LGVh/FAXZ4IKrNmVNbCGUiGsuYAQtgp3 m0KK5ad5yEES19U4Go8ggMuLbsfmEN2RIukzlYHSccDXIR498Q6avu6IX w==; X-CSE-ConnectionGUID: qEPGiT4fSgiLKCzEPDs2AA== X-CSE-MsgGUID: cowYo1fUR2m3vrHqbuPkeg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074247" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074247" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:09 -0700 X-CSE-ConnectionGUID: jVvXz877TDGPlM2UbkdY/g== X-CSE-MsgGUID: nMBK5xPeS1qggIKXVQ9H+Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003646" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:09 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , 
James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 14/31] x86/resctrl: Discover hardware telemetry events Date: Thu, 25 Sep 2025 13:03:08 -0700 Message-ID: <20250925200328.64155-15-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Each CPU collects data for telemetry events that it sends to the nearest telemetry event aggregator either when the value of IA32_PQR_ASSOC.RMID changes, or when a two millisecond timer expires. The telemetry event aggregators maintain per-RMID per-event counts of the total seen for all the CPUs. There may be more than one set of telemetry event aggregators per package. There are separate sets of aggregators for each type of event, but all aggregators for a given type are symmetric keeping counts for the same set of events for the CPUs that provide data to them. Each telemetry event aggregator is responsible for a specific group of events. E.g. on the Intel Clearwater Forest CPU there are two types of aggregators. One type tracks a pair of energy related events. The other type tracks a subset of "perf" type events. The event counts are made available to Linux in a region of MMIO space for each aggregator. All details about the layout of counters in each aggregator MMIO region are described in XML files published by Intel and made available in a GitHub repository [1]. The key to matching a specific telemetry aggregator to the XML file that describes the MMIO layout is a 32-bit value. The Linux telemetry subsystem refers to this as a "guid" while the XML files call it a "uniqueid". Each XML file provides the following information: 1) Which telemetry events are included in the group. 2) The order in which the event counters appear for each RMID. 3) The value type of each event counter (integer or fixed-point). 4) The number of RMIDs supported. 5) Which additional aggregator status registers are included. 6) The total size of the MMIO region for an aggregator. The INTEL_PMT_TELEMETRY driver enumerates support for telemetry events. This driver provides intel_pmt_get_regions_by_feature() to list all available telemetry event aggregators. The list includes the "guid", the base address in MMIO space for the region where the event counters are exposed, and the package id where the all the CPUs that report to this aggregator are located. Add a new Kconfig option CONFIG_X86_CPU_RESCTRL_INTEL_AET for the Intel specific parts of telemetry code. This depends on the INTEL_PMT_TELEMETRY and INTEL_TPMI drivers being built-in to the kernel for enumeration of telemetry features. Use INTEL_PMT_TELEMETRY's intel_pmt_get_regions_by_feature() with each per-RMID telemetry feature id to obtain a private copy of struct pmt_feature_group that contains all discovered/enumerated telemetry aggregator data for all event groups (known and unknown to resctrl) of that feature id. Further processing on this structure will enable all supported events in resctrl. Return the structure to INTEL_PMT_TELEMETRY at resctrl exit time. 
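In outline, the discovery added here reduces to the sketch below
(simplified; the struct pmt_feature_group / telemetry_region field names
are only indicative, the INTEL_PMT_TELEMETRY driver headers are
authoritative):

	p = intel_pmt_get_regions_by_feature(FEATURE_PER_RMID_ENERGY_TELEM);
	if (IS_ERR_OR_NULL(p))
		return false;		/* feature absent or not ready yet */

	for (i = 0; i < p->count; i++) {
		if (p->regions[i].guid != e->guid)
			continue;
		/*
		 * Known aggregator: a later patch records the MMIO base,
		 * size and package id from this entry.
		 */
	}

A matched pmt_feature_group is retained (no_free_ptr()) and handed back
to INTEL_PMT_TELEMETRY by intel_aet_exit() via
intel_pmt_put_feature_group().
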
Signed-off-by: Tony Luck Link: https://github.com/intel/Intel-PMT # [1] --- Note that checkpatch complains about this: DEFINE_FREE(intel_pmt_put_feature_group, struct pmt_feature_group *, if (!IS_ERR_OR_NULL(_T)) intel_pmt_put_feature_group(_T)) with: CHECK: Alignment should match open parenthesis But if the alignment is fixed, it then complains: WARNING: Statements should start on a tabstop --- arch/x86/kernel/cpu/resctrl/internal.h | 8 ++ arch/x86/kernel/cpu/resctrl/core.c | 5 + arch/x86/kernel/cpu/resctrl/intel_aet.c | 144 ++++++++++++++++++++++++ arch/x86/Kconfig | 13 +++ arch/x86/kernel/cpu/resctrl/Makefile | 1 + 5 files changed, 171 insertions(+) create mode 100644 arch/x86/kernel/cpu/resctrl/intel_aet.c diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 14fadcff0d2b..886261a82b81 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -217,4 +217,12 @@ void __init intel_rdt_mbm_apply_quirk(void); void rdt_domain_reconfigure_cdp(struct rdt_resource *r); void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_resource *r); =20 +#ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET +bool intel_aet_get_events(void); +void __exit intel_aet_exit(void); +#else +static inline bool intel_aet_get_events(void) { return false; } +static inline void __exit intel_aet_exit(void) { } +#endif + #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 64c6f507b7bc..9003a6344410 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -734,6 +734,9 @@ void resctrl_arch_pre_mount(void) =20 if (!atomic_try_cmpxchg(&only_once, &old, 1)) return; + + if (!intel_aet_get_events()) + return; } =20 enum { @@ -1091,6 +1094,8 @@ late_initcall(resctrl_arch_late_init); =20 static void __exit resctrl_arch_exit(void) { + intel_aet_exit(); + cpuhp_remove_state(rdt_online); =20 resctrl_exit(); diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c new file mode 100644 index 000000000000..966c840f0d6b --- /dev/null +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -0,0 +1,144 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Resource Director Technology(RDT) + * - Intel Application Energy Telemetry + * + * Copyright (C) 2025 Intel Corporation + * + * Author: + * Tony Luck + */ + +#define pr_fmt(fmt) "resctrl: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +/** + * struct event_group - All information about a group of telemetry events. + * @pfg: Points to the aggregated telemetry space information + * returned by the intel_pmt_get_regions_by_feature() + * call to the INTEL_PMT_TELEMETRY driver that contains + * data for all telemetry regions of a specific type. + * Valid if the system supports the event group. + * NULL otherwise. + * @guid: Unique number per XML description file. + */ +struct event_group { + /* Data fields for additional structures to manage this group. */ + struct pmt_feature_group *pfg; + + /* Remaining fields initialized from XML file. 
*/ + u32 guid; +}; + +/* + * Link: https://github.com/intel/Intel-PMT + * File: xml/CWF/OOBMSM/RMID-ENERGY/cwf_aggregator.xml + */ +static struct event_group energy_0x26696143 =3D { + .guid =3D 0x26696143, +}; + +/* + * Link: https://github.com/intel/Intel-PMT + * File: xml/CWF/OOBMSM/RMID-PERF/cwf_aggregator.xml + */ +static struct event_group perf_0x26557651 =3D { + .guid =3D 0x26557651, +}; + +static struct event_group *known_energy_event_groups[] =3D { + &energy_0x26696143, +}; + +static struct event_group *known_perf_event_groups[] =3D { + &perf_0x26557651, +}; + +#define for_each_enabled_event_group(_peg, _grp) \ + for (_peg =3D (_grp); _peg < &_grp[ARRAY_SIZE(_grp)]; _peg++) \ + if ((*_peg)->pfg) + +/* Stub for now */ +static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +{ + return false; +} + +DEFINE_FREE(intel_pmt_put_feature_group, struct pmt_feature_group *, + if (!IS_ERR_OR_NULL(_T)) + intel_pmt_put_feature_group(_T)) + +/* + * Make a request to the INTEL_PMT_TELEMETRY driver for a copy of the + * pmt_feature_group for a specific feature. If there is one, the returned + * structure has an array of telemetry_region structures. Each describes + * one telemetry aggregator. + * Try to use every telemetry aggregator with a known guid. + */ +static bool get_pmt_feature(enum pmt_feature_id feature, struct event_grou= p **evgs, + unsigned int num_evg) +{ + struct pmt_feature_group *p __free(intel_pmt_put_feature_group) =3D NULL; + struct event_group **peg; + bool ret; + + p =3D intel_pmt_get_regions_by_feature(feature); + + if (IS_ERR_OR_NULL(p)) + return false; + + for (peg =3D evgs; peg < &evgs[num_evg]; peg++) { + ret =3D enable_events(*peg, p); + if (ret) { + (*peg)->pfg =3D no_free_ptr(p); + return true; + } + } + + return false; +} + +/* + * Ask INTEL_PMT_TELEMETRY driver for all the RMID based telemetry groups + * that it supports. + */ +bool intel_aet_get_events(void) +{ + bool ret1, ret2; + + ret1 =3D get_pmt_feature(FEATURE_PER_RMID_ENERGY_TELEM, + known_energy_event_groups, + ARRAY_SIZE(known_energy_event_groups)); + ret2 =3D get_pmt_feature(FEATURE_PER_RMID_PERF_TELEM, + known_perf_event_groups, + ARRAY_SIZE(known_perf_event_groups)); + + return ret1 || ret2; +} + +void __exit intel_aet_exit(void) +{ + struct event_group **peg; + + for_each_enabled_event_group(peg, known_energy_event_groups) { + intel_pmt_put_feature_group((*peg)->pfg); + (*peg)->pfg =3D NULL; + } + for_each_enabled_event_group(peg, known_perf_event_groups) { + intel_pmt_put_feature_group((*peg)->pfg); + (*peg)->pfg =3D NULL; + } +} diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 52c8910ba2ef..ce9d086625c1 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -525,6 +525,19 @@ config X86_CPU_RESCTRL =20 Say N if unsure. =20 +config X86_CPU_RESCTRL_INTEL_AET + bool "Intel Application Energy Telemetry" + depends on X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEMETRY=3Dy &&= INTEL_TPMI=3Dy + help + Enable per-RMID telemetry events in resctrl. + + Intel feature that collects per-RMID execution data + about energy consumption, measure of frequency independent + activity and other performance metrics. Data is aggregated + per package. + + Say N if unsure. 
+ config X86_FRED bool "Flexible Return and Event Delivery" depends on X86_64 diff --git a/arch/x86/kernel/cpu/resctrl/Makefile b/arch/x86/kernel/cpu/res= ctrl/Makefile index d8a04b195da2..273ddfa30836 100644 --- a/arch/x86/kernel/cpu/resctrl/Makefile +++ b/arch/x86/kernel/cpu/resctrl/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_X86_CPU_RESCTRL) +=3D core.o rdtgroup.o monitor.o obj-$(CONFIG_X86_CPU_RESCTRL) +=3D ctrlmondata.o +obj-$(CONFIG_X86_CPU_RESCTRL_INTEL_AET) +=3D intel_aet.o obj-$(CONFIG_RESCTRL_FS_PSEUDO_LOCK) +=3D pseudo_lock.o =20 # To allow define_trace.h's recursive include: --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 46D1D321F4A for ; Thu, 25 Sep 2025 20:04:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830657; cv=none; b=F2De/WlVVUl9sW5LQOy4LfAUHHIxl3LEKja6eMAVxJUfpMwNVSNqXCPJkXnWQZxhmlyiNsVEOa6/k+8ak1ZDZVJvu/jpDcumbvn1BpJpiypywZ/wiiej3/2Cur0qi9nfOvciMnspdQcZ+h7aGN5QXs43+nsDfk4ANZLfOaK8BwE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830657; c=relaxed/simple; bh=fO4d2GWjJNklhnEan/lOkw7j2X8R9b5sXAboQ9tOI0A=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Di3pxstUhPgqcL5/L6aiCSYTP+u0PlFTbecD2uuwVaAjj2RWWjcvLCclL7yCFHfcs82N/Kd60kOyJcvqxBRMbhx7dg40oIGfDuwrluBhCx7foFRwqLPtwNLNWhoa9N7GpNwSwuDeFOMMfT7FGXCqTVJfpVtFmiFv6oYk6i4fy5E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=mwp9p1dt; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="mwp9p1dt" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830656; x=1790366656; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fO4d2GWjJNklhnEan/lOkw7j2X8R9b5sXAboQ9tOI0A=; b=mwp9p1dtBfE/wpACFxuIB5vF+AREv4xcqBv6Wq2cKOsTQ4Gd5bKjP4SH lwMu6h6Dhz/H5K2B9wQSqYoP6vDhE04/Dr8CI+FRjyHii6Q4g0FCxnJGe PTFh0BSTBfhfL68aE9NnDAyP5o8/W0B62Chg2SO9CJRT4DW3t/hLxtFb/ m7J2GrPc4T5aDi6xXUqHXdvbW3ZDDiA/SI1R3G5naCMPrlGnK5GLUqUvm m1GSPjWB9Pan1snZTV6+gSVBicqeL/3+YC9BVhKjAvD0P0Qs5PLgrDBwM EDSUdm/IYAwFwmAoQZ+s9VLrJSN7MwWNGRENAjOxtucHq7J2HmSOER0e5 A==; X-CSE-ConnectionGUID: b5XQGtjmTzSHqcuXWh+sVg== X-CSE-MsgGUID: /4pmhx3mRNOoGkuSxMvIsw== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074255" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074255" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:10 -0700 X-CSE-ConnectionGUID: dShZAYevRpS6yg316E0VWQ== X-CSE-MsgGUID: 8rDLNdkySkyVCnkEFsqIYA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003651" Received: from inaky-mobl1.amr.corp.intel.com (HELO 
agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:09 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 15/31] x86,fs/resctrl: Fill in details of events for guid 0x26696143 and 0x26557651 Date: Thu, 25 Sep 2025 13:03:09 -0700 Message-ID: <20250925200328.64155-16-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The Intel Clearwater Forest CPU supports two RMID-based PMT feature groups documented in the xml/CWF/OOBMSM/RMID-ENERGY/cwf_aggregator.xml and xml/CWF/OOBMSM/RMID-PERF/cwf_aggregator.xml files in the Intel PMT GIT repository [1]. The counter offsets in MMIO space are arranged in groups for each RMID. E.g the "energy" counters for guid 0x26696143 are arranged like this: MMIO offset:0x0000 Counter for RMID 0 PMT_EVENT_ENERGY MMIO offset:0x0008 Counter for RMID 0 PMT_EVENT_ACTIVITY MMIO offset:0x0010 Counter for RMID 1 PMT_EVENT_ENERGY MMIO offset:0x0018 Counter for RMID 1 PMT_EVENT_ACTIVITY ... MMIO offset:0x23F0 Counter for RMID 575 PMT_EVENT_ENERGY MMIO offset:0x23F8 Counter for RMID 575 PMT_EVENT_ACTIVITY After all counters there are three status registers that provide indications of how many times an aggregator was unable to process event counts, the time stamp for the most recent loss of data, and the time stamp of the most recent successful update. MMIO offset:0x2400 AGG_DATA_LOSS_COUNT MMIO offset:0x2408 AGG_DATA_LOSS_TIMESTAMP MMIO offset:0x2410 LAST_UPDATE_TIMESTAMP Define these events in the file system code and add the events to the event_group structures. PMT_EVENT_ENERGY and PMT_EVENT_ACTIVITY are produced in fixed point format. File system code must output as floating point values. 
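Given that layout, the MMIO offset of any counter follows directly
(a worked example, not code added by this patch):

	/* Counters are u64, grouped per RMID in event-index order. */
	offset = (rmid * num_events + event_idx) * sizeof(u64);

	/*
	 * e.g. guid 0x26696143, PMT_EVENT_ACTIVITY (index 1), RMID 575:
	 * (575 * 2 + 1) * 8 = 0x23F8, matching the table above.
	 */

The three status registers follow the counters at
num_rmids * num_events * 8, i.e. offset 0x2400 for this aggregator.
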
Signed-off-by: Tony Luck Link: https://github.com/intel/Intel-PMT # [1] --- include/linux/resctrl_types.h | 11 +++++++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 43 +++++++++++++++++++++++++ fs/resctrl/monitor.c | 35 +++++++++++--------- 3 files changed, 74 insertions(+), 15 deletions(-) diff --git a/include/linux/resctrl_types.h b/include/linux/resctrl_types.h index acfe07860b34..a5f56faa18d2 100644 --- a/include/linux/resctrl_types.h +++ b/include/linux/resctrl_types.h @@ -50,6 +50,17 @@ enum resctrl_event_id { QOS_L3_MBM_TOTAL_EVENT_ID =3D 0x02, QOS_L3_MBM_LOCAL_EVENT_ID =3D 0x03, =20 + /* Intel Telemetry Events */ + PMT_EVENT_ENERGY, + PMT_EVENT_ACTIVITY, + PMT_EVENT_STALLS_LLC_HIT, + PMT_EVENT_C1_RES, + PMT_EVENT_UNHALTED_CORE_CYCLES, + PMT_EVENT_STALLS_LLC_MISS, + PMT_EVENT_AUTO_C6_RES, + PMT_EVENT_UNHALTED_REF_CYCLES, + PMT_EVENT_UOPS_RETIRED, + /* Must be the last */ QOS_NUM_EVENTS, }; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 966c840f0d6b..f9b5f6cd08f8 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -13,6 +13,7 @@ =20 #include #include +#include #include #include #include @@ -20,11 +21,27 @@ #include #include #include +#include #include #include =20 #include "internal.h" =20 +/** + * struct pmt_event - Telemetry event. + * @id: Resctrl event id. + * @idx: Counter index within each per-RMID block of counters. + * @bin_bits: Zero for integer valued events, else number bits in fraction + * part of fixed-point. + */ +struct pmt_event { + enum resctrl_event_id id; + unsigned int idx; + unsigned int bin_bits; +}; + +#define EVT(_id, _idx, _bits) { .id =3D _id, .idx =3D _idx, .bin_bits =3D = _bits } + /** * struct event_group - All information about a group of telemetry events. * @pfg: Points to the aggregated telemetry space information @@ -34,6 +51,9 @@ * Valid if the system supports the event group. * NULL otherwise. * @guid: Unique number per XML description file. + * @mmio_size: Number of bytes of MMIO registers for this group. + * @num_events: Number of events in this group. + * @evts: Array of event descriptors. */ struct event_group { /* Data fields for additional structures to manage this group. */ @@ -41,14 +61,26 @@ struct event_group { =20 /* Remaining fields initialized from XML file. 
*/ u32 guid; + size_t mmio_size; + unsigned int num_events; + struct pmt_event evts[] __counted_by(num_events); }; =20 +#define XML_MMIO_SIZE(num_rmids, num_events, num_extra_status) \ + (((num_rmids) * (num_events) + (num_extra_status)) * sizeof(u64)) + /* * Link: https://github.com/intel/Intel-PMT * File: xml/CWF/OOBMSM/RMID-ENERGY/cwf_aggregator.xml */ static struct event_group energy_0x26696143 =3D { .guid =3D 0x26696143, + .mmio_size =3D XML_MMIO_SIZE(576, 2, 3), + .num_events =3D 2, + .evts =3D { + EVT(PMT_EVENT_ENERGY, 0, 18), + EVT(PMT_EVENT_ACTIVITY, 1, 18), + } }; =20 /* @@ -57,6 +89,17 @@ static struct event_group energy_0x26696143 =3D { */ static struct event_group perf_0x26557651 =3D { .guid =3D 0x26557651, + .mmio_size =3D XML_MMIO_SIZE(576, 7, 3), + .num_events =3D 7, + .evts =3D { + EVT(PMT_EVENT_STALLS_LLC_HIT, 0, 0), + EVT(PMT_EVENT_C1_RES, 1, 0), + EVT(PMT_EVENT_UNHALTED_CORE_CYCLES, 2, 0), + EVT(PMT_EVENT_STALLS_LLC_MISS, 3, 0), + EVT(PMT_EVENT_AUTO_C6_RES, 4, 0), + EVT(PMT_EVENT_UNHALTED_REF_CYCLES, 5, 0), + EVT(PMT_EVENT_UOPS_RETIRED, 6, 0), + } }; =20 static struct event_group *known_energy_event_groups[] =3D { diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index e354f01df615..d44b764853bf 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -954,27 +954,32 @@ static void dom_data_exit(struct rdt_resource *r) mutex_unlock(&rdtgroup_mutex); } =20 +#define MON_EVENT(_eventid, _name, _res, _fp) \ + [_eventid] =3D { \ + .name =3D _name, \ + .evtid =3D _eventid, \ + .rid =3D _res, \ + .is_floating_point =3D _fp, \ +} + /* * All available events. Architecture code marks the ones that * are supported by a system using resctrl_enable_mon_event() * to set .enabled. */ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { - [QOS_L3_OCCUP_EVENT_ID] =3D { - .name =3D "llc_occupancy", - .evtid =3D QOS_L3_OCCUP_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, - [QOS_L3_MBM_TOTAL_EVENT_ID] =3D { - .name =3D "mbm_total_bytes", - .evtid =3D QOS_L3_MBM_TOTAL_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, - [QOS_L3_MBM_LOCAL_EVENT_ID] =3D { - .name =3D "mbm_local_bytes", - .evtid =3D QOS_L3_MBM_LOCAL_EVENT_ID, - .rid =3D RDT_RESOURCE_L3, - }, + MON_EVENT(QOS_L3_OCCUP_EVENT_ID, "llc_occupancy", RDT_RESOURCE_L3, false= ), + MON_EVENT(QOS_L3_MBM_TOTAL_EVENT_ID, "mbm_total_bytes", RDT_RESOURCE_L3,= false), + MON_EVENT(QOS_L3_MBM_LOCAL_EVENT_ID, "mbm_local_bytes", RDT_RESOURCE_L3,= false), + MON_EVENT(PMT_EVENT_ENERGY, "core_energy", RDT_RESOURCE_PERF_PKG, true= ), + MON_EVENT(PMT_EVENT_ACTIVITY, "activity", RDT_RESOURCE_PERF_PKG, true), + MON_EVENT(PMT_EVENT_STALLS_LLC_HIT, "stalls_llc_hit", RDT_RESOURCE_PERF_= PKG, false), + MON_EVENT(PMT_EVENT_C1_RES, "c1_res", RDT_RESOURCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_UNHALTED_CORE_CYCLES, "unhalted_core_cycles", RDT_RES= OURCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_STALLS_LLC_MISS, "stalls_llc_miss", RDT_RESOURCE_PER= F_PKG, false), + MON_EVENT(PMT_EVENT_AUTO_C6_RES, "c6_res", RDT_RESOURCE_PERF_PKG, false= ), + MON_EVENT(PMT_EVENT_UNHALTED_REF_CYCLES, "unhalted_ref_cycles", RDT_RESOU= RCE_PERF_PKG, false), + MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int binary_bits) --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by 
smtp.subspace.kernel.org (Postfix) with ESMTPS id 4898632274A for ; Thu, 25 Sep 2025 20:04:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830658; cv=none; b=Ao9K8HvWAu5nuFYIH+9uenaq22OuWdK3KJTaK9VGp1MZ57rWc5sfh+8CEuXDld8jSmGKnqXGy8alC89cQYooJ7rrl0jReXYePR6Eze/+eoUWxobPhXdmQf2ccAfwHitJfWD4MFW42X73zlTUCxkmxjcizhAJwX3iRWjnEQinIP0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830658; c=relaxed/simple; bh=U0Yl/G1Vvk4oPFYU0OAdg2xdx8nCzbM1e1nhJOiSJ4A=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=AvJ2YxjfnK2y9Cw9wfK5ZHFItfl86sdv5c7u6yFa8yf6+yBhYc+9qbWaFbhSUfffJJurx9F3M+lFZDsMeK9reYfYT8isM1lbHr9seMRWbGau89VG35PonnZLxXBKKk4uIM0lDKVfCUD2rbb6/lf6b+N+Mlq/xllu3HXFR3PxQOQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=O/plT/4w; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="O/plT/4w" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830657; x=1790366657; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=U0Yl/G1Vvk4oPFYU0OAdg2xdx8nCzbM1e1nhJOiSJ4A=; b=O/plT/4w97EskvNwLYKqQohMVul96Zs7sujDu4sVjzirZqmRkysn+xhu sLkrq2k+ezqa63gpvEbTN2IJYzNMevIh0gYdpbOA9RFtJQ127CX0uYr6J np41cQDMZrsQLxrWzGFMHzRUEWLKJsOgRtdJpAz9l7Yu2C/t8mQvN2rfu IuVKfIF7ZLTsycFEA5IYFBWLXXMThpNTCDERm0ddiKWjn1x8xUmjBmCkU eLEg65Cs8/5mBiasGyd11xBP32YPf4j83zP3azoNpEvzndH2XIwEROURH 7M/tGf1Sm+U+23kkfbg5dQHN4G6HY+P6mMNUfVXnFrNRBZ+yyuBKCKFiN w==; X-CSE-ConnectionGUID: vX8rALkCQM2ck2K9QrG//w== X-CSE-MsgGUID: bw6QUwhRT/6jhwl94/2G5Q== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074264" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074264" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:10 -0700 X-CSE-ConnectionGUID: 1uDrmDccQlCv07uCEIFg2w== X-CSE-MsgGUID: pziVcnTFTNSdkReJXPqYhw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003655" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:10 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 16/31] x86,fs/resctrl: Add architectural event pointer Date: Thu, 25 Sep 2025 13:03:10 -0700 Message-ID: <20250925200328.64155-17-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: 
quoted-printable Content-Type: text/plain; charset="utf-8" The resctrl file system layer passes the domain, RMID, and event id to resctrl_arch_rmid_read() to fetch an event counter. Fetching a telemetry event counter requires additional information that is private to the architecture, for example, the offset into MMIO space from where counter should be read. Add mon_evt::arch_priv void pointer. Architecture code can initialize this when marking each event enabled. File system code passes this pointer to resctrl_arch_rmid_read(). Suggested-by: Reinette Chatre Signed-off-by: Tony Luck --- include/linux/resctrl.h | 7 +++++-- fs/resctrl/internal.h | 4 ++++ arch/x86/kernel/cpu/resctrl/core.c | 6 +++--- arch/x86/kernel/cpu/resctrl/monitor.c | 2 +- fs/resctrl/monitor.c | 18 ++++++++++++------ 5 files changed, 25 insertions(+), 12 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index ff67224b80c8..111c8f1dc77e 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -412,7 +412,7 @@ u32 resctrl_arch_system_num_rmid_idx(void); int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); =20 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, - unsigned int binary_bits); + unsigned int binary_bits, void *arch_priv); =20 bool resctrl_is_mon_event_enabled(enum resctrl_event_id eventid); =20 @@ -529,6 +529,9 @@ void resctrl_arch_pre_mount(void); * only. * @rmid: rmid of the counter to read. * @eventid: eventid to read, e.g. L3 occupancy. + * @arch_priv: Architecture private data for this event. + * The @arch_priv provided by the architecture via + * resctrl_enable_mon_event(). * @val: result of the counter read in bytes. * @arch_mon_ctx: An architecture specific value from * resctrl_arch_mon_ctx_alloc(), for MPAM this identifies @@ -546,7 +549,7 @@ void resctrl_arch_pre_mount(void); */ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 closid, u32 rmid, enum resctrl_event_id eventid, - u64 *val, void *arch_mon_ctx); + void *arch_priv, u64 *val, void *arch_mon_ctx); =20 /** * resctrl_arch_rmid_read_context_check() - warn about invalid contexts diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 96d97f4ff957..aee6c4684f81 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -66,6 +66,9 @@ static inline struct rdt_fs_context *rdt_fc2context(struc= t fs_context *fc) * @binary_bits: number of fixed-point binary bits from architecture, * only valid if @is_floating_point is true * @enabled: true if the event is enabled + * @arch_priv: Architecture private data for this event. + * The @arch_priv provided by the architecture via + * resctrl_enable_mon_event(). 
*/ struct mon_evt { enum resctrl_event_id evtid; @@ -77,6 +80,7 @@ struct mon_evt { bool is_floating_point; unsigned int binary_bits; bool enabled; + void *arch_priv; }; =20 extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 9003a6344410..588de539a739 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -911,15 +911,15 @@ static __init bool get_rdt_mon_resources(void) bool ret =3D false; =20 if (rdt_cpu_has(X86_FEATURE_CQM_OCCUP_LLC)) { - resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_OCCUP_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) { - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0, NULL); ret =3D true; } if (rdt_cpu_has(X86_FEATURE_ABMC)) diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index ea81305fbc5d..175488185b06 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -240,7 +240,7 @@ static u64 get_corrected_val(struct rdt_resource *r, st= ruct rdt_l3_mon_domain *d =20 int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *= hdr, u32 unused, u32 rmid, enum resctrl_event_id eventid, - u64 *val, void *ignored) + void *arch_priv, u64 *val, void *ignored) { struct rdt_l3_mon_domain *d; u64 msr_val; diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index d44b764853bf..1eb054749d20 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -137,9 +137,11 @@ void __check_limbo(struct rdt_l3_mon_domain *d, bool f= orce_free) struct rmid_entry *entry; u32 idx, cur_idx =3D 1; void *arch_mon_ctx; + void *arch_priv; bool rmid_dirty; u64 val =3D 0; =20 + arch_priv =3D mon_event_all[QOS_L3_OCCUP_EVENT_ID].arch_priv; arch_mon_ctx =3D resctrl_arch_mon_ctx_alloc(r, QOS_L3_OCCUP_EVENT_ID); if (IS_ERR(arch_mon_ctx)) { pr_warn_ratelimited("Failed to allocate monitor context: %ld", @@ -160,7 +162,7 @@ void __check_limbo(struct rdt_l3_mon_domain *d, bool fo= rce_free) =20 entry =3D __rmid_entry(idx); if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid, - QOS_L3_OCCUP_EVENT_ID, &val, + QOS_L3_OCCUP_EVENT_ID, arch_priv, &val, arch_mon_ctx)) { rmid_dirty =3D true; } else { @@ -480,7 +482,8 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) rr->evt->evtid, &tval); else rr->err =3D resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid, - rr->evt->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); if (rr->err) return rr->err; =20 @@ -505,7 +508,8 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, s= truct rmid_read *rr) rr->evt->evtid, &tval); else err =3D resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid, - rr->evt->evtid, &tval, rr->arch_mon_ctx); + rr->evt->evtid, rr->evt->arch_priv, + &tval, rr->arch_mon_ctx); if (!err) { rr->val +=3D tval; ret =3D 0; @@ -982,7 +986,8 @@ struct mon_evt mon_event_all[QOS_NUM_EVENTS] =3D { MON_EVENT(PMT_EVENT_UOPS_RETIRED, "uops_retired", RDT_RESOURCE_PERF_PKG= , false), }; =20 -void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu,= unsigned int 
binary_bits) +void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, + unsigned int binary_bits, void *arch_priv) { if (WARN_ON_ONCE(eventid < QOS_FIRST_EVENT || eventid >=3D QOS_NUM_EVENTS= || binary_bits > MAX_BINARY_BITS)) @@ -998,6 +1003,7 @@ void resctrl_enable_mon_event(enum resctrl_event_id ev= entid, bool any_cpu, unsig =20 mon_event_all[eventid].any_cpu =3D any_cpu; mon_event_all[eventid].binary_bits =3D binary_bits; + mon_event_all[eventid].arch_priv =3D arch_priv; mon_event_all[eventid].enabled =3D true; } =20 @@ -1823,9 +1829,9 @@ int resctrl_l3_mon_resource_init(void) =20 if (r->mon.mbm_cntr_assignable) { if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID)) - resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID, false, 0, NULL); if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID)) - resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0); + resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID, false, 0, NULL); mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg =3D r->mon.mbm_cfg_mask; mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg =3D r->mon.mbm_cfg_mask= & (READS_TO_LOCAL_MEM | --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4A80A322A19 for ; Thu, 25 Sep 2025 20:04:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830659; cv=none; b=qP/kQXhTNwQ9inLi2/xuOt378SAXy5V8Y4P9YSIBugxqB9jNBOxgq634MhdLbkBwiT4IrQgsa8xfg7vpU57SJ/HHaEP9MLb9aqByKQyTNF3ozIpPFOu5IcNAC8uNdR1OOPwXZNx54g4V9TBluadSao5RFfgvFuq/tGvNyZzRiMw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830659; c=relaxed/simple; bh=t9iHAyxu8Fi6yLKyyET7uVNtK2AboFitByCgxUevL7Y=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cN1cl5rNB99SfMRSZci/0/CuDtGKR3zQ6mqfFT+zWHnxU/6cE6QffWdUB5IubW7XSNuppKTmEai8xASPiBW+5SuSnB4ZdSYv/4ygMnGFv3qcqRJaMuJxFiAWZrw8ijTI4maOhV0Meny/gJ+HJDuuGsY7E4GOP5f7V1LYY0Bw7yA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=h6ghTBIH; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="h6ghTBIH" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830658; x=1790366658; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=t9iHAyxu8Fi6yLKyyET7uVNtK2AboFitByCgxUevL7Y=; b=h6ghTBIHTy5GTtDd91dMTXW5XjRzL31EU70UNbHlg0kdRC7zr/NiYB9r d5d6SJ45enWczD3b7Qvy5gsBMWv+gdGSjI/dprFs1brx9v+n9/4ZZdHUW +xBFgelH8Auf4y/cLkDrEBu11i1v+Je6vYhG7/GSOXiDy3b4fGEcr4Ij4 cfpIrljYQ5GVTjFEcI9DjX8WNkEM0v4NEPFTDPQXJPc3vFyevIyUttU93 vBfLvsdam+XP9lfs+3ZDWT9Wd6X513aTGoBCqu+/Ry63dXBUkiX9BIS/O ltcZVT778SFTUgdYw+Avev+xeSSHOHN3aZYGj4J5rPsBmpEkW7t4CqE+Q w==; X-CSE-ConnectionGUID: 
yW7E1Ky/Tf6/REklzs2qeg== X-CSE-MsgGUID: aTmgUrYbRxaGO/FL1AMgUA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074275" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074275" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:11 -0700 X-CSE-ConnectionGUID: g/8mBfe3TYWvs+8nBaLGtQ== X-CSE-MsgGUID: gH5mJoO8SDeUrWWSPElkKw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003659" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:10 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 17/31] x86/resctrl: Find and enable usable telemetry events Date: Thu, 25 Sep 2025 13:03:11 -0700 Message-ID: <20250925200328.64155-18-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The INTEL_PMT_TELEMETRY driver provides telemetry region structures of the types requested by resctrl. Scan these structures to discover which pass sanity checks to derive a list of valid regions: 1) They have guid known to resctrl. 2) They have a valid package ID. 3) The enumerated size of the MMIO region matches the expected value from the XML description file. 4) At least one region passes the above checks. For each valid region enable all the events in the associated event_group::evts[]. Pass a pointer to the pmt_event structure of the event within the struct event_group that resctrl stores in mon_evt::arch_priv. resctrl passes this pointer back when asking to read the event data which enables the data to be found in MMIO. Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/intel_aet.c | 38 +++++++++++++++++++++++-- 1 file changed, 36 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index f9b5f6cd08f8..98ba9ba05ee5 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -20,9 +20,11 @@ #include #include #include +#include #include #include #include +#include #include =20 #include "internal.h" @@ -114,12 +116,44 @@ static struct event_group *known_perf_event_groups[] = =3D { for (_peg =3D (_grp); _peg < &_grp[ARRAY_SIZE(_grp)]; _peg++) \ if ((*_peg)->pfg) =20 -/* Stub for now */ -static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +static bool skip_telem_region(struct telemetry_region *tr, struct event_gr= oup *e) { + if (tr->guid !=3D e->guid) + return true; + if (tr->plat_info.package_id >=3D topology_max_packages()) { + pr_warn("Bad package %u in guid 0x%x\n", tr->plat_info.package_id, + tr->guid); + return true; + } + if (tr->size !=3D e->mmio_size) { + pr_warn("MMIO space wrong size (%zu bytes) for guid 0x%x. 
Expected %zu b= ytes.\n", + tr->size, e->guid, e->mmio_size); + return true; + } + return false; } =20 +static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) +{ + bool usable_events =3D false; + + for (int i =3D 0; i < p->count; i++) { + if (skip_telem_region(&p->regions[i], e)) + continue; + usable_events =3D true; + } + + if (!usable_events) + return false; + + for (int j =3D 0; j < e->num_events; j++) + resctrl_enable_mon_event(e->evts[j].id, true, + e->evts[j].bin_bits, &e->evts[j]); + + return true; +} + DEFINE_FREE(intel_pmt_put_feature_group, struct pmt_feature_group *, if (!IS_ERR_OR_NULL(_T)) intel_pmt_put_feature_group(_T)) --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B977131FEC8 for ; Thu, 25 Sep 2025 20:04:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830660; cv=none; b=nhOzu29ISj/yA64076/I+qp2l+ioHSHH99GUaGKLNh583BCK8GUBBKdQhUE0lCWbSKlAp4SQI/+fdD0ILscti0hwL1sGclfTJ7e3ipkRoy1d5uoe71PqKNKL04u4MWujP+gA2YwfFuDWvK+e31OXRGcBsGBAHVgjeekEFoozabM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830660; c=relaxed/simple; bh=/RsJXpchbZOQWLP1rBycPqcyp3pfyqvgNkBpbmu913M=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NHXiexHGfpAZx3i8Xlo7HwAWZ+XXngRZIHaWsdTs0pONdIp2Ks+yCcJx48s54xntnK+vySOWGr+E4MPSIWY3WRie8blD6M+Xqp3iho9ziEsWOEoOopLRGwdDcUKr8XwXZU5wJ4Zc4cetD5LoF6iK0OiEAJmF4v3bf6f3qs+zm+w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=YrnbJAAk; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="YrnbJAAk" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830658; x=1790366658; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/RsJXpchbZOQWLP1rBycPqcyp3pfyqvgNkBpbmu913M=; b=YrnbJAAkLb7kJ9VF7iB++5nzn0px00kh0dDwl0OpTkjzv/x3IzzPjfVh uyL/pATQiHmau8dd9UyoNsC4jaaS63NGtuvv+q4Ub305ZK5Wa+PkU7gC3 zU8KS6WV7EOo0tzLWcP3uy/xzlunLyfXpeQULa9rTSd14YNVeYgvN+Vht pfFoAhJxQV94MM6W3e3ooOOPEJ3C5cAAqolUtmVuVUZ7Ahj64yMtvHdpe 3P5ygX/7+HhhdcGxZlciURrHcztDYaXiZ8mPwWhoyuc9SJX0nK8CLkz5h 3sFkqyNAsrgUsl3LKBP0vFAC2xUuOK2xkZcbf9H6pjRLviorHvpWioWnj g==; X-CSE-ConnectionGUID: AmFR95dlSvuiAeNfGo0gsw== X-CSE-MsgGUID: TuPvIo+wTfe7o70QQIDPEQ== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074285" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074285" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:11 -0700 X-CSE-ConnectionGUID: 1zdtfjHiR3K3ZB9T8hxgQg== X-CSE-MsgGUID: MU6+2s2KTY21b+dV9a7frQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003662" Received: from 
inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:11 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 18/31] fs/resctrl: Refactor L3 specific parts of __mon_event_count() Date: Thu, 25 Sep 2025 13:03:12 -0700 Message-ID: <20250925200328.64155-19-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The "MBM counter assignment" and "reset counter on first read" features are only applicable to the RDT_RESOURCE_L3 resource. Add a check for the RDT_RESOURCE_L3 resource. Signed-off-by: Tony Luck --- fs/resctrl/monitor.c | 38 ++++++++++++++++++++------------------ 1 file changed, 20 insertions(+), 18 deletions(-) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 1eb054749d20..d484983c0f02 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -453,27 +453,29 @@ static int __mon_event_count(struct rdtgroup *rdtgrp,= struct rmid_read *rr) if (!cpu_on_correct_domain(rr)) return -EINVAL; =20 - if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return -EINVAL; - d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); - - if (rr->is_mbm_cntr) { - cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evt->evtid); - if (cntr_id < 0) { - rr->err =3D -ENOENT; + if (rr->r->rid =3D=3D RDT_RESOURCE_L3) { + if (!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3= )) return -EINVAL; + d =3D container_of(rr->hdr, struct rdt_l3_mon_domain, hdr); + + if (rr->is_mbm_cntr) { + cntr_id =3D mbm_cntr_get(rr->r, d, rdtgrp, rr->evt->evtid); + if (cntr_id < 0) { + rr->err =3D -ENOENT; + return -EINVAL; + } } - } =20 - if (rr->first) { - if (rr->is_mbm_cntr) - resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evt->evtid= ); - else - resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evt->evtid); - m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); - if (m) - memset(m, 0, sizeof(struct mbm_state)); - return 0; + if (rr->first) { + if (rr->is_mbm_cntr) + resctrl_arch_reset_cntr(rr->r, d, closid, rmid, cntr_id, rr->evt->evti= d); + else + resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evt->evtid); + m =3D get_mbm_state(d, closid, rmid, rr->evt->evtid); + if (m) + memset(m, 0, sizeof(struct mbm_state)); + return 0; + } } =20 if (rr->hdr) { --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AAFDC322C94 for ; Thu, 25 Sep 2025 20:04:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830661; cv=none; 
b=ey2G6FgxEAdlSfi04je1ZpJh7tntA7zI8keneZPt32JlDeGi7Zb8n6FItAZn1gLk9oxVRuo9i6JSrvw4aGlQo2m6Jm+ZK6yW9E4jPna5yC4Sq0J3cRF49RcxhD3ytmrDqRVfTQCATkQh9tcJYYYgG9WIvZtM2IN6Nj7SN2PS2E4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830661; c=relaxed/simple; bh=+RXa62AV+Tx5eKtJw1BZAV0tBy1MlMpJrPYaE552JhY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=guWc4ZkrrU4MfJPjq1up1wHaMB7ybDwTEk4/gyHQabUsIAKvRXaGV9l03NrBauBQAeWitCC98wi1nN11tDoSSDLmAxGYAhO/lDhqgV/wLfuXJ21wJiay2czx9ImxmhIUJwIMJ3pu7iWbhxsKhDTduiZF8KZ6kO/qEGQnOP2E0i0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=NOxwWeYr; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NOxwWeYr" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830659; x=1790366659; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+RXa62AV+Tx5eKtJw1BZAV0tBy1MlMpJrPYaE552JhY=; b=NOxwWeYrK8fDpfs/pfFrxIBXV3pJ41fjDo+uqpgo/P20W4LNj8csKWxS a+WOPV4O7W8WxncI5vDNU7LoDklWtZJtWSvXwcJ3mxxKkrTyZF1aPm1kw vTF/ah3PoYfQhVk3c4cpS/OWLs/N9AE0NFIh45SA6GlYI8P1udWfv9OOB oqvjYVdmGqZ8G2hxIcUmONW3mVGoVaRTZApHxvWcfLMRQwr77s8U+RhhU WPkr82X6u5x55s4bXqmJwxROW3KtMrIVhkbn0Y0FX27woyv0+7zgU6X+S BmqLFfjOQZtsMjMM+IR+q1nhHaoeM7v9Hkk5o73cqf2NdP/QpVs/2xYDe g==; X-CSE-ConnectionGUID: I153ZQeKQcmyvhjOELMevA== X-CSE-MsgGUID: reSHn6+ETWSzcf5nU+AttA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074293" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074293" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:12 -0700 X-CSE-ConnectionGUID: WRmizlY8SA+lxKWu/zIIiQ== X-CSE-MsgGUID: +aMFRMoPQ72r6uHSUNLy8Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003666" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:11 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 19/31] x86/resctrl: Read telemetry events Date: Thu, 25 Sep 2025 13:03:13 -0700 Message-ID: <20250925200328.64155-20-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Telemetry events are enabled during the first mount of the resctrl file system. Mark telemetry regions that did not pass the sanity checks by clearing their MMIO address fields so that they will not be used when reading events. 
Introduce intel_aet_read_event() to read telemetry events for resource RDT_RESOURCE_PERF_PKG. There may be multiple aggregators tracking each package, so scan all of them and add up all counters. Aggregators may return an invalid data indication if they have received no records for a given RMID. Return success to the user if one or more aggregators provide valid data. Resctrl now uses readq() so depends on X86_64. Update Kconfig. Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/internal.h | 7 +++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 65 ++++++++++++++++++++++++- arch/x86/kernel/cpu/resctrl/monitor.c | 3 ++ arch/x86/Kconfig | 2 +- 4 files changed, 75 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 886261a82b81..97616c81682b 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -220,9 +220,16 @@ void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_r= esource *r); #ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET bool intel_aet_get_events(void); void __exit intel_aet_exit(void); +int intel_aet_read_event(int domid, u32 rmid, enum resctrl_event_id evtid, + void *arch_priv, u64 *val); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } +static inline int intel_aet_read_event(int domid, u32 rmid, enum resctrl_e= vent_id evtid, + void *arch_priv, u64 *val) +{ + return -EINVAL; +} #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 98ba9ba05ee5..d53211ac6204 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -12,13 +12,17 @@ #define pr_fmt(fmt) "resctrl: " fmt =20 #include +#include #include #include +#include #include #include +#include #include #include #include +#include #include #include #include @@ -134,13 +138,28 @@ static bool skip_telem_region(struct telemetry_region= *tr, struct event_group *e return false; } =20 +/* + * Clear the address field of regions that did not pass the checks in + * skip_telem_region() so they will not be used by intel_aet_read_event(). + * This is safe to do because intel_pmt_get_regions_by_feature() allocates + * a new pmt_feature_group structure to return to each caller and only mak= es + * use of the pmt_feature_group::kref field when intel_pmt_put_feature_gro= up() + * returns the structure. + */ +static void mark_telem_region_unusable(struct telemetry_region *tr) +{ + tr->addr =3D NULL; +} + static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) { bool usable_events =3D false; =20 for (int i =3D 0; i < p->count; i++) { - if (skip_telem_region(&p->regions[i], e)) + if (skip_telem_region(&p->regions[i], e)) { + mark_telem_region_unusable(&p->regions[i]); continue; + } usable_events =3D true; } =20 @@ -219,3 +238,47 @@ void __exit intel_aet_exit(void) (*peg)->pfg =3D NULL; } } + +#define DATA_VALID BIT_ULL(63) +#define DATA_BITS GENMASK_ULL(62, 0) + +/* + * Read counter for an event on a domain (summing all aggregators + * on the domain). If an aggregator hasn't received any data for a + * specific RMID, the MMIO read indicates that data is not valid. + * Return success if at least one aggregator has valid data. 
+ */ +int intel_aet_read_event(int domid, u32 rmid, enum resctrl_event_id eventi= d, + void *arch_priv, u64 *val) +{ + struct pmt_event *pevt =3D arch_priv; + struct event_group *e; + bool valid =3D false; + u64 evtcount; + void *pevt0; + u32 idx; + + pevt0 =3D pevt - pevt->idx; + e =3D container_of(pevt0, struct event_group, evts); + idx =3D rmid * e->num_events; + idx +=3D pevt->idx; + + if (idx * sizeof(u64) + sizeof(u64) > e->mmio_size) { + pr_warn_once("MMIO index %u out of range\n", idx); + return -EIO; + } + + for (int i =3D 0; i < e->pfg->count; i++) { + if (!e->pfg->regions[i].addr) + continue; + if (e->pfg->regions[i].plat_info.package_id !=3D domid) + continue; + evtcount =3D readq(e->pfg->regions[i].addr + idx * sizeof(u64)); + if (!(evtcount & DATA_VALID)) + continue; + *val +=3D evtcount & DATA_BITS; + valid =3D true; + } + + return valid ? 0 : -EINVAL; +} diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/re= sctrl/monitor.c index 175488185b06..7d14ae6a9737 100644 --- a/arch/x86/kernel/cpu/resctrl/monitor.c +++ b/arch/x86/kernel/cpu/resctrl/monitor.c @@ -250,6 +250,9 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, stru= ct rdt_domain_hdr *hdr, =20 resctrl_arch_rmid_read_context_check(); =20 + if (r->rid =3D=3D RDT_RESOURCE_PERF_PKG) + return intel_aet_read_event(hdr->id, rmid, eventid, arch_priv, val); + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) return -EINVAL; =20 diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index ce9d086625c1..6e0ec28ee904 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -527,7 +527,7 @@ config X86_CPU_RESCTRL =20 config X86_CPU_RESCTRL_INTEL_AET bool "Intel Application Energy Telemetry" - depends on X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEMETRY=3Dy &&= INTEL_TPMI=3Dy + depends on X86_64 && X86_CPU_RESCTRL && CPU_SUP_INTEL && INTEL_PMT_TELEME= TRY=3Dy && INTEL_TPMI=3Dy help Enable per-RMID telemetry events in resctrl. 
=20 --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 958573233EF for ; Thu, 25 Sep 2025 20:04:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830661; cv=none; b=Zxajc/ojNlwY7ZZvJMFGuppreqS79RF8ni1CO/t2tnIHBH/V43dYhuioxRbHPubLZGl4xUdVEZrnRb2zgj5ddA3oqKwBzlV7hV8dgFEUp8w4rvuydb0GnvapFsc/51nlmIUW2DZyT44AOXZLYOS/1nfnkthTxNK5cg67JzCj3lE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830661; c=relaxed/simple; bh=xo1ybD9cfc9M4Kz+Mli/XGLt6Z6TLya4Q1mtqhvg5Ss=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=FK3weWCA1066f3M0aQEzKX7L2Pr/beFZCi0aU4Icdk46RVlgjKQ9Jr9hEUKH65YZLYNpdsjqbrP/seea8kAhzqfylm7Ml+af3ixABTIH2mdqzO5Sz27saEBKlrs4z06SNTLAAhpqD/vTm8fqmkMt8V6OAaZgjDT7pdTZhXWTfEw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=bwH2h31r; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="bwH2h31r" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830660; x=1790366660; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xo1ybD9cfc9M4Kz+Mli/XGLt6Z6TLya4Q1mtqhvg5Ss=; b=bwH2h31rk0+eHQu5dCfNBgdv2I/tj2Xt7v2xlG26mU+PVzX8ADRYt+GI sGfL68J9CMhVPGxpk7cmgrb/2stV6IaT6igFmmGkbsisI+N+dsxM8fOCe fjVKd5Dhw8Jka2mVcSPOkTYyHN+FTb1G3AxDXujpQyjmbe9aiYMfWqWuX ieyBLf3XOi5FyNFf7fuPrvl5menGrh9oPv64naedA/ghH82q/JQAAjaRm NbaoWp/1SXGYSO+IlmB8Igb7dvI8utflT1p3OjJgD6ez2nVntgSWGfubo mg+MG6F9xZB7tva2RLx3MY/nosTWyuvhh6NphtmTBsfWWQRChf+gD2RpS w==; X-CSE-ConnectionGUID: nr5pJCb3TeC6zIoMCbOHiw== X-CSE-MsgGUID: j+7UdpMVRdqoi71D1ACCxg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074303" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074303" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:12 -0700 X-CSE-ConnectionGUID: 0emR60+lTDyhXRLzgb6wXw== X-CSE-MsgGUID: kgh+EWWrStOZFYXYw4EALQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003671" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:12 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 20/31] fs/resctrl: Refactor Sub-NUMA Cluster (SNC) in mkdir/rmdir code flow Date: Thu, 25 Sep 2025 13:03:14 -0700 Message-ID: <20250925200328.64155-21-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: 
<20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" SNC is only present in the RDT_RESOURCE_L3 domain. Refactor code that makes and removes directories under "mon_data" to special case the L3 resource. Signed-off-by: Tony Luck --- fs/resctrl/rdtgroup.c | 50 +++++++++++++++++++++++++++---------------- 1 file changed, 32 insertions(+), 18 deletions(-) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 6e8937f94e7a..cab5cb9e6c93 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3155,6 +3155,7 @@ static void mon_rmdir_one_subdir(struct kernfs_node *= pkn, char *name, char *subn return; kernfs_put(kn); =20 + /* Subdirectories are only present on SNC enabled systems */ if (kn->dir.subdirs <=3D 1) kernfs_remove(kn); else @@ -3171,19 +3172,24 @@ static void rmdir_mondata_subdir_allrdtgrp(struct r= dt_resource *r, struct rdt_domain_hdr *hdr) { struct rdtgroup *prgrp, *crgrp; - struct rdt_l3_mon_domain *d; + int domid =3D hdr->id; char subname[32]; - bool snc_mode; char name[32]; =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return; + if (r->rid =3D=3D RDT_RESOURCE_L3) { + struct rdt_l3_mon_domain *d; =20 - d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); - if (snc_mode) - sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id); + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return; + + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); + /* SNC mode? */ + if (r->mon_scope =3D=3D RESCTRL_L3_NODE) { + domid =3D d->ci_id; + sprintf(subname, "mon_sub_%s_%02d", r->name, hdr->id); + } + } + sprintf(name, "mon_%s_%02d", r->name, domid); =20 list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) { mon_rmdir_one_subdir(prgrp->mon.mon_data_kn, name, subname); @@ -3213,7 +3219,7 @@ static int mon_add_all_files(struct kernfs_node *kn, = struct rdt_domain_hdr *hdr, if (ret) return ret; =20 - if (!do_sum && resctrl_is_mbm_event(mevt->evtid)) + if (r->rid =3D=3D RDT_RESOURCE_L3 && !do_sum && resctrl_is_mbm_event(mev= t->evtid)) mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt, true); } =20 @@ -3225,19 +3231,27 @@ static int mkdir_mondata_subdir(struct kernfs_node = *parent_kn, struct rdt_resource *r, struct rdtgroup *prgrp) { struct kernfs_node *kn, *ckn; - struct rdt_l3_mon_domain *d; + bool snc_mode =3D false; + int domid =3D hdr->id; char name[32]; - bool snc_mode; int ret =3D 0; =20 lockdep_assert_held(&rdtgroup_mutex); =20 - if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) - return -EINVAL; + if (r->rid =3D=3D RDT_RESOURCE_L3) { + snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; + if (snc_mode) { + struct rdt_l3_mon_domain *d; + + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) + return -EINVAL; + + d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); + domid =3D d->ci_id; + } + } + sprintf(name, "mon_%s_%02d", r->name, domid); =20 - d =3D container_of(hdr, struct rdt_l3_mon_domain, hdr); - snc_mode =3D r->mon_scope =3D=3D RESCTRL_L3_NODE; - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? 
d->ci_id : d->hdr.id); kn =3D kernfs_find_and_get(parent_kn, name); if (kn) { /* @@ -3253,7 +3267,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *p= arent_kn, ret =3D rdtgroup_kn_set_ugid(kn); if (ret) goto out_destroy; - ret =3D mon_add_all_files(kn, hdr, r, prgrp, hdr->id, snc_mode); + ret =3D mon_add_all_files(kn, hdr, r, prgrp, domid, snc_mode); if (ret) goto out_destroy; } --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5047931E8A4 for ; Thu, 25 Sep 2025 20:04:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830663; cv=none; b=qTS705B+/ptxMxDeBv34EUiRLUmM8IxMtCzdrraD99uvw3X/vdaGL8yNMIaY0lgPuo0jyTSIuWYzXIEsu94LaOTS4+dhae8KYi2nV61BfP9NISP+GajGvkDHw3i+ixqjn2OQQIvaT/kymsLo/uRCydCXKAeC2S5Cu5gylK8de4g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830663; c=relaxed/simple; bh=NYZq+e/7j/QeJaD8gSC2nHl3bHirqNZYGb8Xy8peETc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=I+RMD4DWiZ+X3Mz1H7jxozfDnKLRnH39kBAeKvupicasxi9jyupC9bQUPDyMp4i0dLSesm5rn34pUmttOiBXJen4aEwGZmj8lOiaJ2uiOx65cvbsmPS2KjBDfptfg3IInUSksC6g718md00UEDFsG/Nz2lNQ8jN2mq4XZ9s+Hm0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=JKsgED/Y; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="JKsgED/Y" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830661; x=1790366661; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=NYZq+e/7j/QeJaD8gSC2nHl3bHirqNZYGb8Xy8peETc=; b=JKsgED/Y358g7yzT6w1jqAQZnWTVWLMNudYh8tQVGQqoEg6jHzbZsKxZ E10VcYjaxl9b0JVZ1yHfjvy8kkJAZqTh0RgCnympvK9HhYETsbsSyYcLn iMhsAnzV2cf5ZMFTicOjGTivc9Mi1zGao6IKP4I+WN+tYSKOi9xRtQyyT xqO74lWHs/Bg21CXVsOA/pKMMv/Qj4J/geJQt27ISeOWJKRh5Uw7SXOGK nBuh/EnwhfEIpdHrmCrOeIfsUJgFcy17y6zAwiuZ+TW+9kMDNTzTVZUCH DJkWOBs0dOMRjjro+DzOFSZDgjDkcSKUs5L8WCwWuYAqFvhtXWukrGKh/ A==; X-CSE-ConnectionGUID: AYtUlQMtRZuRLs3gE+6sWg== X-CSE-MsgGUID: skOryL7LRXeiauROsL0ZxA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074314" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074314" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:12 -0700 X-CSE-ConnectionGUID: BooRzUWiRWO9uY9wfgrW8w== X-CSE-MsgGUID: UwvtG2ajSxuRbBNBL7v9/w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003674" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:12 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James 
Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 21/31] x86/resctrl: Handle domain creation/deletion for RDT_RESOURCE_PERF_PKG Date: Thu, 25 Sep 2025 13:03:15 -0700 Message-ID: <20250925200328.64155-22-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The L3 resource has several requirements for domains. There are per-domain structures that hold the 64-bit values of counters, and elements to keep track of the overflow and limbo threads. None of these are needed for the PERF_PKG resource. The hardware counters are wide enough that they do not wrap around for decades. Define a new rdt_perf_pkg_mon_domain structure which just consists of the standard rdt_domain_hdr to keep track of domain id and CPU mask. Support the PERF_PKG resource in the CPU online/offline handlers. Add WARN checks to code that sums domains for Sub-NUMA cluster to confirm the resource ID is RDT_RESOURCE_L3. Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/internal.h | 13 +++++++++++ arch/x86/kernel/cpu/resctrl/core.c | 15 +++++++++++++ arch/x86/kernel/cpu/resctrl/intel_aet.c | 29 +++++++++++++++++++++++++ fs/resctrl/ctrlmondata.c | 5 +++++ fs/resctrl/rdtgroup.c | 10 +++++++++ 5 files changed, 72 insertions(+) diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index 97616c81682b..b920f54f8736 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -84,6 +84,14 @@ static inline struct rdt_hw_l3_mon_domain *resctrl_to_ar= ch_mon_dom(struct rdt_l3 return container_of(r, struct rdt_hw_l3_mon_domain, d_resctrl); } =20 +/** + * struct rdt_perf_pkg_mon_domain - CPUs sharing an package scoped resctrl= monitor resource + * @hdr: common header for different domain types + */ +struct rdt_perf_pkg_mon_domain { + struct rdt_domain_hdr hdr; +}; + /** * struct msr_param - set a range of MSRs from a domain * @res: The resource to use @@ -222,6 +230,8 @@ bool intel_aet_get_events(void); void __exit intel_aet_exit(void); int intel_aet_read_event(int domid, u32 rmid, enum resctrl_event_id evtid, void *arch_priv, u64 *val); +void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, + struct list_head *add_pos); #else static inline bool intel_aet_get_events(void) { return false; } static inline void __exit intel_aet_exit(void) { } @@ -230,6 +240,9 @@ static inline int intel_aet_read_event(int domid, u32 r= mid, enum resctrl_event_i { return -EINVAL; } + +static inline void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_= resource *r, + struct list_head *add_pos) { } #endif =20 #endif /* _ASM_X86_RESCTRL_INTERNAL_H */ diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 588de539a739..5dff83e763a5 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -573,6 +573,10 @@ static void domain_add_cpu_mon(int cpu, struct rdt_res= ource *r) if (!hdr) l3_mon_domain_setup(cpu, id, r, add_pos); break; + case RDT_RESOURCE_PERF_PKG: + if (!hdr) + intel_aet_mon_domain_setup(cpu, id, r, add_pos); + break; default: 
pr_warn_once("Unknown resource rid=3D%d\n", r->rid); break; @@ -635,6 +639,7 @@ static void domain_remove_cpu_ctrl(int cpu, struct rdt_= resource *r) static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r) { int id =3D get_domain_id_from_scope(cpu, r->mon_scope); + struct rdt_perf_pkg_mon_domain *pkgd; struct rdt_hw_l3_mon_domain *hw_dom; struct rdt_l3_mon_domain *d; struct rdt_domain_hdr *hdr; @@ -670,6 +675,16 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_= resource *r) synchronize_rcu(); l3_mon_domain_free(hw_dom); break; + case RDT_RESOURCE_PERF_PKG: + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_PERF_P= KG)) + return; + + pkgd =3D container_of(hdr, struct rdt_perf_pkg_mon_domain, hdr); + resctrl_offline_mon_domain(r, hdr); + list_del_rcu(&hdr->list); + synchronize_rcu(); + kfree(pkgd); + break; default: pr_warn_once("Unknown resource rid=3D%d\n", r->rid); break; diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index d53211ac6204..dc0d16af66be 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -17,16 +17,21 @@ #include #include #include +#include #include #include +#include #include #include #include #include #include #include +#include +#include #include #include +#include #include #include #include @@ -282,3 +287,27 @@ int intel_aet_read_event(int domid, u32 rmid, enum res= ctrl_event_id eventid, =20 return valid ? 0 : -EINVAL; } + +void intel_aet_mon_domain_setup(int cpu, int id, struct rdt_resource *r, + struct list_head *add_pos) +{ + struct rdt_perf_pkg_mon_domain *d; + int err; + + d =3D kzalloc_node(sizeof(*d), GFP_KERNEL, cpu_to_node(cpu)); + if (!d) + return; + + d->hdr.id =3D id; + d->hdr.type =3D RESCTRL_MON_DOMAIN; + d->hdr.rid =3D r->rid; + cpumask_set_cpu(cpu, &d->hdr.cpu_mask); + list_add_tail_rcu(&d->hdr.list, add_pos); + + err =3D resctrl_online_mon_domain(r, &d->hdr); + if (err) { + list_del_rcu(&d->hdr.list); + synchronize_rcu(); + kfree(d); + } +} diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c index ae43e09fa5e5..f7fbfc4d258d 100644 --- a/fs/resctrl/ctrlmondata.c +++ b/fs/resctrl/ctrlmondata.c @@ -712,6 +712,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *ar= g) if (md->sum) { struct rdt_l3_mon_domain *d; =20 + if (WARN_ON_ONCE(resid !=3D RDT_RESOURCE_L3)) { + ret =3D -EINVAL; + goto out; + } + /* * This file requires summing across all domains that share * the L3 cache id that was provided in the "domid" field of the diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index cab5cb9e6c93..fa6dfebea6b2 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3040,6 +3040,9 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_= res_level rid, int domid, =20 lockdep_assert_held(&rdtgroup_mutex); =20 + if (WARN_ON_ONCE(do_sum && rid !=3D RDT_RESOURCE_L3)) + return NULL; + list_for_each_entry(priv, &mon_data_kn_priv_list, list) { if (priv->rid =3D=3D rid && priv->domid =3D=3D domid && priv->sum =3D=3D do_sum && priv->evt =3D=3D mevt) @@ -4227,6 +4230,9 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_domain_hdr *h if (resctrl_mounted && resctrl_arch_mon_capable()) rmdir_mondata_subdir_allrdtgrp(r, hdr); =20 + if (r->rid !=3D RDT_RESOURCE_L3) + goto out_unlock; + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 @@ -4327,6 +4333,9 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr =20 
mutex_lock(&rdtgroup_mutex); =20 + if (r->rid !=3D RDT_RESOURCE_L3) + goto mkdir; + if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) goto out_unlock; =20 @@ -4344,6 +4353,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r,= struct rdt_domain_hdr *hdr if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID)) INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo); =20 +mkdir: err =3D 0; /* * If the filesystem is not mounted then only the default resource group --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23CCA323F59 for ; Thu, 25 Sep 2025 20:04:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830663; cv=none; b=eXwaj6jtK7qo0w4iEG/dhCLhcT6lImlwbRBlUmRRybaoGXM0DFbKwLmX+bSw9bBuB2qlSrR8quya1bmiySMqtjnROhdS12c1UpUAWJ5EG3JUr7s10O1qVD8Wl9vAd12Ta7J5mKkfnefniU3KuRFKmaqHoMkN/u/ZWYU54n59MXo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830663; c=relaxed/simple; bh=8vlCScUjoTKB0J46AqpBHyDZVyE3uZHSlb4z+5DJdSE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=OapZ5k7YRCP4g23aScGc2xkilwajMPdo6X2tU/BQOJgee0i7s+M/yKSPB5hwbbv9gddb6MGe3Sf5jFZbw6xVXODHbO8InLdyW7yIJAJT4iuCMaMcx9fX3jFJIrqhxABCJv41Qw9btzdRMIWyl26GpNmDIyaglnF4B3EhjP/VYt0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=VfxUxEf9; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="VfxUxEf9" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830662; x=1790366662; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8vlCScUjoTKB0J46AqpBHyDZVyE3uZHSlb4z+5DJdSE=; b=VfxUxEf92e6yCqgHEOCbJgxilS2Dbbt4R4ZmDv8Q5RkgPzUe+KCo1zuv Oy8+PJKfiGjjWuEfb/EoHlrdpE65Rbu42rDmZdzh7ODCrJVc71YNO+Yju vl8qdH/QQopSElkgO/BLVcJMjrYhljOc4LxlD5HQ1Z1aZD7rfwIOZ5fru /Tr+KUCbMG/14YL6J8Ti15CyxW+kuNZeeZCEYy00rr5fEzsMAmgGcAkg2 pCWcXK5qZld0YkGupcZtdYZLi5oIOEbNR3bv2VSZF864338mZkrQKnO4X jlD2koVGqiP8dCNfIWl0QI/ZYDikr5qly7cCHBP6Nj5N1SSt5cihIDRD3 w==; X-CSE-ConnectionGUID: onQR3HukTGK7wD11uzFgAQ== X-CSE-MsgGUID: HCqt+fvgR9qONjoOhLUUcg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074323" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074323" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:13 -0700 X-CSE-ConnectionGUID: 61fSyg6KTQ20rLibVYrkZw== X-CSE-MsgGUID: fbzSZmoxSw+Bkzsq+TWNJw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003678" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:12 -0700 
From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 22/31] x86/resctrl: Add energy/perf choices to rdt boot option Date: Thu, 25 Sep 2025 13:03:16 -0700 Message-ID: <20250925200328.64155-23-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Legacy resctrl features are enumerated by X86_FEATURE_* flags. These may be overridden by quirks to disable features in the case of errata. Users can use kernel command line options to either disable a feature, or to force enable a feature that was disabled by a quirk. Provide similar functionality for hardware features that do not have an X86_FEATURE_* flag. Unlike other features that are tied to X86_FEATURE_* flags, these must be queried by name. Add rdt_is_feature_enabled() to check whether quirks or kernel command line have disabled a feature. Users may force a feature to be disabled. E.g. "rdt=3D!perf" will ensure that none of the perf telemetry events are enabled. Resctrl architecture code may disable a feature that does not provide full functionality. Users may override that decision. E.g. "rdt=3Denergy" will enable any available energy telemetry events even if they do not provide full functionality. Signed-off-by: Tony Luck --- .../admin-guide/kernel-parameters.txt | 2 +- arch/x86/kernel/cpu/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 29 +++++++++++++++++++ 3 files changed, 32 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentatio= n/admin-guide/kernel-parameters.txt index 889e68e83682..74bc150b53f7 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -6155,7 +6155,7 @@ rdt=3D [HW,X86,RDT] Turn on/off individual RDT features. List is: cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp, - mba, smba, bmec, abmc. + mba, smba, bmec, abmc, energy, perf. E.g. 
to turn on cmt and turn off mba use: rdt=3Dcmt,!mba =20 diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index b920f54f8736..e3710b9f993e 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -225,6 +225,8 @@ void __init intel_rdt_mbm_apply_quirk(void); void rdt_domain_reconfigure_cdp(struct rdt_resource *r); void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_resource *r); =20 +bool rdt_is_feature_enabled(char *name); + #ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET bool intel_aet_get_events(void); void __exit intel_aet_exit(void); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 5dff83e763a5..f749a871e8c5 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -766,6 +766,8 @@ enum { RDT_FLAG_SMBA, RDT_FLAG_BMEC, RDT_FLAG_ABMC, + RDT_FLAG_ENERGY, + RDT_FLAG_PERF, }; =20 #define RDT_OPT(idx, n, f) \ @@ -792,6 +794,8 @@ static struct rdt_options rdt_options[] __ro_after_ini= t =3D { RDT_OPT(RDT_FLAG_SMBA, "smba", X86_FEATURE_SMBA), RDT_OPT(RDT_FLAG_BMEC, "bmec", X86_FEATURE_BMEC), RDT_OPT(RDT_FLAG_ABMC, "abmc", X86_FEATURE_ABMC), + RDT_OPT(RDT_FLAG_ENERGY, "energy", 0), + RDT_OPT(RDT_FLAG_PERF, "perf", 0), }; #define NUM_RDT_OPTIONS ARRAY_SIZE(rdt_options) =20 @@ -841,6 +845,31 @@ bool rdt_cpu_has(int flag) return ret; } =20 +/* + * Hardware features that do not have X86_FEATURE_* bits. There is no + * "hardware does not support this at all" case. Assume that the caller + * has already determined that hardware support is present and just needs + * to check if the feature has been disabled by a quirk that has not been + * overridden by a command line option. + */ +bool rdt_is_feature_enabled(char *name) +{ + struct rdt_options *o; + bool ret =3D true; + + for (o =3D rdt_options; o < &rdt_options[NUM_RDT_OPTIONS]; o++) { + if (!strcmp(name, o->name)) { + if (o->force_off) + ret =3D false; + if (o->force_on) + ret =3D true; + break; + } + } + + return ret; +} + bool resctrl_arch_is_evt_configurable(enum resctrl_event_id evt) { if (!rdt_cpu_has(X86_FEATURE_BMEC)) --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C9F4C323F74 for ; Thu, 25 Sep 2025 20:04:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830664; cv=none; b=l6Z7X/PlPiVGtYE3PKffKLlw2z3Vt5x8UxM/Wcu+MU3od0q94OFmmiKrzU4SH1ZuZXgXivp/OFuxPr6x0kkAHKayvgwnKofYeLnH/w4qE3d4PxbcOkdEUlltyjNazaxV0PsbEkgUNf2ocmouyp7p1cs8YuWtR7TipxRNd+BGOQ4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830664; c=relaxed/simple; bh=bOdL0fgm+j+SQ0f4EU5JMSLqDoHWZE0HXg4XJy2cqkc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gAHVJHcsItINrp90bvvbe06M/qpGw5dG8bqh93yKLGYOnto2Onyc8Qzpgr9SPzme/mSCjQJse1rAAtEPG5fUcdg6aw5TWnwokHMnHe3xWS61iHdoOfRPYufVF7NRQ17IFAHon9mlkHAOvaJunzMZo/r1sMxq/S43Cm/uuehQsrM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ja1+tZbR; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: 
smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ja1+tZbR" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830662; x=1790366662; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bOdL0fgm+j+SQ0f4EU5JMSLqDoHWZE0HXg4XJy2cqkc=; b=ja1+tZbRXz2R0ZCJ7BddVHdFEz77mOavQbpX07W6G8HIYS9eA3FjiDlS Vw/ypEqLCS74FHPNCL+gCZu3z5hNOL/dImOT7sdVIV0L3rn9T9YORYmoj 4+WNiAn8s5Z3wshuO8A4cJ3xx6LrqVLkAdW0OOF/YvvWXLshJokrsjKem Nqgdzv9TnmMlIT+HShwHZ+z8bA0gGYs15saf81HbSuWyqok19uxHtRP1y WI9OfyvO0ymJm+Yncvqbofiqe75n0BtYaE5iA28eaTc9iPCOA2U2oS7Ne rJR5hFUixHude3je1xMpZNPUbpfZVSSP2WY2ltPU1S3xxd+mh7Iw1TgOW A==; X-CSE-ConnectionGUID: WXRIq8qaSriMqFXu0qcqaQ== X-CSE-MsgGUID: BSxZpsQ2SdO4ULEN8+XMJw== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074331" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074331" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:13 -0700 X-CSE-ConnectionGUID: n/fwR9GQTw+a8TJ8rf8Mpw== X-CSE-MsgGUID: b7EWsMhpQUCQ9XO+I+U2ZA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003681" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:13 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 23/31] x86/resctrl: Handle number of RMIDs supported by telemetry resources Date: Thu, 25 Sep 2025 13:03:17 -0700 Message-ID: <20250925200328.64155-24-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" There are now three meanings for "number of RMIDs": 1) The number for legacy features enumerated by CPUID leaf 0xF. This is the maximum number of distinct values that can be loaded into the IA32_PQR_ASSOC MSR. Note that systems with Sub-NUMA Cluster mode enabled will force scaling down the CPUID enumerated value by the number of SNC nodes per L3-cache. 2) The number of registers in MMIO space for each event. This is enumerated in the XML files and is the value initialized into event_group::num_rmids. 3) The number of "hardware counters" (this isn't a strictly accurate description of how things work, but serves as a useful analogy that does describe the limitations) feeding to those MMIO registers. This is enumerated in telemetry_region::num_rmids returned from the call to intel_pmt_get_regions_by_feature() Event groups with insufficient "hardware counters" to track all RMIDs are difficult for users to use, since the system may reassign "hardware counters" at any time. 
This means that users cannot reliably collect two consecutive event counts to compute the rate at which events are occurring. Introduce rdt_set_feature_disabled() to mark any under-resourced event groups (those with telemetry_region::num_rmids < event_group::num_rmids) as unusable. Note that the rdt_options[] structure must now be writable at run-time. The request to disable will be overridden if the user explicitly requests to enable using the "rdt=3D" Linux boot argument. This will result in the available number of monitoring resource groups being limited by the under-resourced event groups. Scan all enabled event groups and assign the RDT_RESOURCE_PERF_PKG resource "num_rmids" value to the smallest of these values as this value will be used later to compare against the number of RMIDs supported by other resources to determine how many monitoring resource groups are supported. N.B. Change type of rdt_resource::num_rmid to u32 to match type of event_group::num_rmids so that min(r->num_rmid, e->num_rmids) won't complain about mixing signed and unsigned types. Print r->num_rmid as unsigned value in rdt_num_rmids_show(). Signed-off-by: Tony Luck --- include/linux/resctrl.h | 2 +- arch/x86/kernel/cpu/resctrl/internal.h | 2 ++ arch/x86/kernel/cpu/resctrl/core.c | 18 +++++++++- arch/x86/kernel/cpu/resctrl/intel_aet.c | 48 +++++++++++++++++++++++++ fs/resctrl/rdtgroup.c | 2 +- 5 files changed, 69 insertions(+), 3 deletions(-) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index 111c8f1dc77e..c7b5e56d25bb 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -292,7 +292,7 @@ enum resctrl_schema_fmt { * events of monitor groups created via mkdir. */ struct resctrl_mon { - int num_rmid; + u32 num_rmid; unsigned int mbm_cfg_mask; int num_mbm_cntrs; bool mbm_cntr_assignable; diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/r= esctrl/internal.h index e3710b9f993e..cea76f88422c 100644 --- a/arch/x86/kernel/cpu/resctrl/internal.h +++ b/arch/x86/kernel/cpu/resctrl/internal.h @@ -227,6 +227,8 @@ void resctrl_arch_mbm_cntr_assign_set_one(struct rdt_re= source *r); =20 bool rdt_is_feature_enabled(char *name); =20 +void rdt_set_feature_disabled(char *name); + #ifdef CONFIG_X86_CPU_RESCTRL_INTEL_AET bool intel_aet_get_events(void); void __exit intel_aet_exit(void); diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index f749a871e8c5..5b7f9a44d562 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -782,7 +782,7 @@ struct rdt_options { bool force_off, force_on; }; =20 -static struct rdt_options rdt_options[] __ro_after_init =3D { +static struct rdt_options rdt_options[] =3D { RDT_OPT(RDT_FLAG_CMT, "cmt", X86_FEATURE_CQM_OCCUP_LLC), RDT_OPT(RDT_FLAG_MBM_TOTAL, "mbmtotal", X86_FEATURE_CQM_MBM_TOTAL), RDT_OPT(RDT_FLAG_MBM_LOCAL, "mbmlocal", X86_FEATURE_CQM_MBM_LOCAL), @@ -845,6 +845,22 @@ bool rdt_cpu_has(int flag) return ret; } =20 +/* + * Can be called during feature enumeration if sanity check of + * a feature's parameters indicates problems with the feature. + */ +void rdt_set_feature_disabled(char *name) +{ + struct rdt_options *o; + + for (o =3D rdt_options; o < &rdt_options[NUM_RDT_OPTIONS]; o++) { + if (!strcmp(name, o->name)) { + o->force_off =3D true; + return; + } + } +} + /* * Hardware features that do not have X86_FEATURE_* bits. There is no * "hardware does not support this at all" case. 
Assume that the caller diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index dc0d16af66be..039e63d8c2e7 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -55,6 +56,7 @@ struct pmt_event { =20 /** * struct event_group - All information about a group of telemetry events. + * @name: Name for this group (used by boot rdt=3D option) * @pfg: Points to the aggregated telemetry space information * returned by the intel_pmt_get_regions_by_feature() * call to the INTEL_PMT_TELEMETRY driver that contains @@ -62,16 +64,22 @@ struct pmt_event { * Valid if the system supports the event group. * NULL otherwise. * @guid: Unique number per XML description file. + * @num_rmids: Number of RMIDs supported by this group. May be + * adjusted downwards if enumeration from + * intel_pmt_get_regions_by_feature() indicates fewer + * RMIDs can be tracked simultaneously. * @mmio_size: Number of bytes of MMIO registers for this group. * @num_events: Number of events in this group. * @evts: Array of event descriptors. */ struct event_group { /* Data fields for additional structures to manage this group. */ + char *name; struct pmt_feature_group *pfg; =20 /* Remaining fields initialized from XML file. */ u32 guid; + u32 num_rmids; size_t mmio_size; unsigned int num_events; struct pmt_event evts[] __counted_by(num_events); @@ -85,7 +93,9 @@ struct event_group { * File: xml/CWF/OOBMSM/RMID-ENERGY/cwf_aggregator.xml */ static struct event_group energy_0x26696143 =3D { + .name =3D "energy", .guid =3D 0x26696143, + .num_rmids =3D 576, .mmio_size =3D XML_MMIO_SIZE(576, 2, 3), .num_events =3D 2, .evts =3D { @@ -99,7 +109,9 @@ static struct event_group energy_0x26696143 =3D { * File: xml/CWF/OOBMSM/RMID-PERF/cwf_aggregator.xml */ static struct event_group perf_0x26557651 =3D { + .name =3D "perf", .guid =3D 0x26557651, + .num_rmids =3D 576, .mmio_size =3D XML_MMIO_SIZE(576, 7, 3), .num_events =3D 7, .evts =3D { @@ -156,21 +168,57 @@ static void mark_telem_region_unusable(struct telemet= ry_region *tr) tr->addr =3D NULL; } =20 +static bool all_regions_have_sufficient_rmid(struct event_group *e, struct= pmt_feature_group *p) +{ + struct telemetry_region *tr; + + for (int i =3D 0; i < p->count; i++) { + tr =3D &p->regions[i]; + if (skip_telem_region(tr, e)) + continue; + + if (tr->num_rmids < e->num_rmids) + return false; + } + + return true; +} + static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) { + struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; bool usable_events =3D false; =20 + /* Disable feature if insufficient RMIDs */ + if (!all_regions_have_sufficient_rmid(e, p)) + rdt_set_feature_disabled(e->name); + + /* User can override above disable from kernel command line */ + if (!rdt_is_feature_enabled(e->name)) + return false; + for (int i =3D 0; i < p->count; i++) { if (skip_telem_region(&p->regions[i], e)) { mark_telem_region_unusable(&p->regions[i]); continue; } + + /* + * e->num_rmids only adjusted lower if user forces an unusable + * region to be usable + */ + e->num_rmids =3D min(e->num_rmids, p->regions[i].num_rmids); usable_events =3D true; } =20 if (!usable_events) return false; =20 + if (r->mon.num_rmid) + r->mon.num_rmid =3D min(r->mon.num_rmid, e->num_rmids); + else + r->mon.num_rmid =3D e->num_rmids; + for (int j =3D 0; j < e->num_events; j++) 
resctrl_enable_mon_event(e->evts[j].id, true, e->evts[j].bin_bits, &e->evts[j]); diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index fa6dfebea6b2..19efb345c4a6 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -1135,7 +1135,7 @@ static int rdt_num_rmids_show(struct kernfs_open_file= *of, { struct rdt_resource *r =3D rdt_kn_parent_priv(of->kn); =20 - seq_printf(seq, "%d\n", r->mon.num_rmid); + seq_printf(seq, "%u\n", r->mon.num_rmid); =20 return 0; } --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 689C1326D46 for ; Thu, 25 Sep 2025 20:04:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830665; cv=none; b=YLnNmliqCX6pWiDHV//Z0GFFE5AhZKnYbyWNWbay3Ya+yFmR6Pabe34E3WIJwx2TSl51vNgcDekMSRhPmBg7k1GLwt74MVAQ9vQOHfhRcE/JbSRscDCiqFgWsCnpBA5/diASJo31Oif5WXmudXcn7yJWCZXaP8eB68FaBGF19Oc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830665; c=relaxed/simple; bh=WZ1VQPKETnAUaaJQPINDo0+bF5Nl1BBFH5euWNjpisQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=mJPaWzumTGVqQ3UFmvOBk3gdHAYnl/MHZYbunf5fpzXYbBl3YNOhtKb7Nlui9H2DN6vWvGEnOtVfdYcoBUBoCdyxNZ887L6iGLfFfYvoQraezk0eNT8KeB65hbIH0yFl6CY2BZkhDp13LgPLGJPed3GDKySCmw1I0bc2woKCq84= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=TRVA0hjc; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="TRVA0hjc" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830664; x=1790366664; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WZ1VQPKETnAUaaJQPINDo0+bF5Nl1BBFH5euWNjpisQ=; b=TRVA0hjcQhwB+x68vs+xytLUleJpP7mm6mbFnPtMS+X69SaELiJjyJET 4zJ3JXauZYE6PYdtp117in63xKtRr/q7GieMYpEymQlOc9WKVVqRdOGHZ uMF/KJm+hI/YlL447nJUdGs6UIpfrY1+ezJdPD67YgevORLrk1QqilFhg CQQsh8jvXCmyNMT6SlCcgsgAg3++YqScSWVXpqHOXdQ8vLXFgFjXRoIAF h0hKMkswK22SrtRaxocC3+owkE3QECCJPj9j5R9l6UghGTqMgq1laz7Uq sNtqvwhcZu5YGGAc3EBMQZ3VN5MbH1FnwFQN1d4rOxmRlGdmsGV9yqzWA Q==; X-CSE-ConnectionGUID: 9atJDMepTe+newUz1TYsbw== X-CSE-MsgGUID: 5gPWUEskQDySN6mOZA/mJg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074343" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074343" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:14 -0700 X-CSE-ConnectionGUID: AqAY7lp1QrOAxM5yGOq8LQ== X-CSE-MsgGUID: Oh+EW0JOSMOlXJXhJF0KXg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003687" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:13 -0700 From: 
Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 24/31] fs/resctrl: Move allocation/free of closid_num_dirty_rmid[] Date: Thu, 25 Sep 2025 13:03:18 -0700 Message-ID: <20250925200328.64155-25-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" closid_num_dirty_rmid[] is allocated in dom_data_init() during resctrl initialization and freed by dom_data_exit() during resctrl exit giving it the same life cycle as rmid_ptrs[]. Move closid_num_dirty_rmid[] allocaction/free out to resctrl_l3_mon_resource_init() and resctrl_l3_mon_resource_exit() in preparation for rmid_ptrs[] to be allocated on resctrl mount in support of the new telemetry events. Keep the rdtgroup_mutex protection around the allocation/free of closid_num_dirty_rmid[] as ARM needs this to guarantee memory ordering. Signed-off-by: Tony Luck --- fs/resctrl/monitor.c | 77 ++++++++++++++++++++++++++++---------------- 1 file changed, 49 insertions(+), 28 deletions(-) diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index d484983c0f02..5960a0afd0ca 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -883,36 +883,14 @@ void mbm_setup_overflow_handler(struct rdt_l3_mon_dom= ain *dom, unsigned long del static int dom_data_init(struct rdt_resource *r) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); - u32 num_closid =3D resctrl_arch_get_num_closid(r); struct rmid_entry *entry =3D NULL; int err =3D 0, i; u32 idx; =20 mutex_lock(&rdtgroup_mutex); - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - u32 *tmp; - - /* - * If the architecture hasn't provided a sanitised value here, - * this may result in larger arrays than necessary. Resctrl will - * use a smaller system wide value based on the resources in - * use. - */ - tmp =3D kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL); - if (!tmp) { - err =3D -ENOMEM; - goto out_unlock; - } - - closid_num_dirty_rmid =3D tmp; - } =20 rmid_ptrs =3D kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL); if (!rmid_ptrs) { - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - kfree(closid_num_dirty_rmid); - closid_num_dirty_rmid =3D NULL; - } err =3D -ENOMEM; goto out_unlock; } @@ -948,11 +926,6 @@ static void dom_data_exit(struct rdt_resource *r) if (!r->mon_capable) goto out_unlock; =20 - if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { - kfree(closid_num_dirty_rmid); - closid_num_dirty_rmid =3D NULL; - } - kfree(rmid_ptrs); rmid_ptrs =3D NULL; =20 @@ -1789,6 +1762,43 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_= file *of, char *buf, return ret ?: nbytes; } =20 +static int closid_num_dirty_rmid_alloc(struct rdt_resource *r) +{ + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { + u32 num_closid =3D resctrl_arch_get_num_closid(r); + u32 *tmp; + + /* For ARM memory ordering access to closid_num_dirty_rmid */ + mutex_lock(&rdtgroup_mutex); + + /* + * If the architecture hasn't provided a sanitised value here, + * this may result in larger arrays than necessary. 
Resctrl will + * use a smaller system wide value based on the resources in + * use. + */ + tmp =3D kcalloc(num_closid, sizeof(*tmp), GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + closid_num_dirty_rmid =3D tmp; + + mutex_unlock(&rdtgroup_mutex); + } + + return 0; +} + +static void closid_num_dirty_rmid_free(void) +{ + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) { + mutex_lock(&rdtgroup_mutex); + kfree(closid_num_dirty_rmid); + closid_num_dirty_rmid =3D NULL; + mutex_unlock(&rdtgroup_mutex); + } +} + /** * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * @@ -1809,10 +1819,16 @@ int resctrl_l3_mon_resource_init(void) if (!r->mon_capable) return 0; =20 - ret =3D dom_data_init(r); + ret =3D closid_num_dirty_rmid_alloc(r); if (ret) return ret; =20 + ret =3D dom_data_init(r); + if (ret) { + closid_num_dirty_rmid_free(); + return ret; + } + if (resctrl_arch_is_evt_configurable(QOS_L3_MBM_TOTAL_EVENT_ID)) { mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].configurable =3D true; resctrl_file_fflags_init("mbm_total_bytes_config", @@ -1857,5 +1873,10 @@ void resctrl_l3_mon_resource_exit(void) { struct rdt_resource *r =3D resctrl_arch_get_resource(RDT_RESOURCE_L3); =20 + if (!r->mon_capable) + return; + + closid_num_dirty_rmid_free(); + dom_data_exit(r); } --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B94BD31FEC3 for ; Thu, 25 Sep 2025 20:04:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830665; cv=none; b=Co3Lo0HicUvHXxfpEOhLKIPB0gVtt0UFmH2rBVHxV7pGknJfgZ1Q6o1JJMYpXF/gveJjmgtwL8DZ8d5v/WM1u3ePT7tsRbzNRITD+3o1uhHaycC6B5I/KD5atwlVtzQKENYo0hNGhXKbWbqBAPWzuahNvHL6BfRCHncDa75EHf4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830665; c=relaxed/simple; bh=A/s/Fc4eulOfSdUOZRNHjPNqYxrapq1zmZT32Ja/r34=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VKXKrS1cKIuS/ehkLymZM2uZzJc18L+OPyDCYNx7W9ZEeG8yGK/ZOE7b4BZ9+a6Z+UZ7wo1LWlDCSd8Mv1FjvProHObtfVJeZ4k2KiSLuqHkACQNsgLn1wJI5tyXwkdNCRhfmJis8cc/zG3P9Hl0vo5Ojr/RI6opP9WBy1lkr0k= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=N9u8bH7Y; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="N9u8bH7Y" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830664; x=1790366664; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=A/s/Fc4eulOfSdUOZRNHjPNqYxrapq1zmZT32Ja/r34=; b=N9u8bH7YVpV9LlKmzCS2P/DKKN/LTk+ELWKKNcxkeJ15ho6cRhFU0Dd3 jr0LLRdAEdlbjB7YZt9LrmqcZW5ud70zGB3HPCDCUz8Cvc/bFs0AyJ9iL dTQQyBgIP9s3jzui1WLRJFwhYBz/MoA8J1oz8z+DA3IIz1jMgnjSxidLR G60H14f/KnfHPFWAiKmtU45Jrz1B5HfKegepw3ImMheEVXtTvwdFRkAPk EazA64B6wGlAG/SHQzYmqMkibOZAbHcMKxGMw1IWT4O+dJ/g1KOITVxr1 
yf6drLH3CxqT/3u7u8A0ON0ocJb8p08jwuS/p32xhGvjrRfVqTpm4XDQR g==; X-CSE-ConnectionGUID: MTq02lZxRQG1BjLxBBbVLQ== X-CSE-MsgGUID: SPHOwQS6QO+he/w5xVITBA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074353" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074353" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:14 -0700 X-CSE-ConnectionGUID: hSsBN8VEQVK4z0Lm0vX3Ew== X-CSE-MsgGUID: eV6PWUQ6QySosY+aCkOqVw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003691" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:14 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 25/31] fs,x86/resctrl: Compute number of RMIDs as minimum across resources Date: Thu, 25 Sep 2025 13:03:19 -0700 Message-ID: <20250925200328.64155-26-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl assumes that only the L3 resource supports monitor events, so it simply takes the rdt_resource::num_rmid from RDT_RESOURCE_L3 as the system's number of RMIDs. The addition of telemetry events in a different resource breaks that assumption. Compute the number of available RMIDs as the minimum value across all mon_capable resources (analogous to how the number of CLOSIDs is computed across alloc_capable resources). Note that mount time enumeration of the telemetry resource means that this number can be reduced. If this happens, then some memory will be wasted as the allocations for rdt_l3_mon_domain::mbm_states[] and rdt_l3_mon_domain::rmid_busy_llc created during resctrl initialization will be larger than needed. Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/core.c | 15 +++++++++++++-- fs/resctrl/rdtgroup.c | 6 ++++++ 2 files changed, 19 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 5b7f9a44d562..1d43087c5975 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -110,12 +110,23 @@ struct rdt_hw_resource rdt_resources_all[RDT_NUM_RESO= URCES] =3D { }, }; =20 +/** + * resctrl_arch_system_num_rmid_idx - Compute number of supported RMIDs + * (minimum across all mon_capable resource) + * + * Return: Number of supported RMIDs at time of call. Note that mount time + * enumeration of resources may reduce the number. + */ u32 resctrl_arch_system_num_rmid_idx(void) { - struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl; + u32 num_rmids =3D U32_MAX; + struct rdt_resource *r; + + for_each_mon_capable_rdt_resource(r) + num_rmids =3D min(num_rmids, r->mon.num_rmid); =20 /* RMID are independent numbers for x86. num_rmid_idx =3D=3D num_rmid */ - return r->mon.num_rmid; + return num_rmids =3D=3D U32_MAX ? 
0 : num_rmids; } =20 struct rdt_resource *resctrl_arch_get_resource(enum resctrl_res_level l) diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 19efb345c4a6..5e3ee4b8f70b 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -4268,6 +4268,12 @@ void resctrl_offline_mon_domain(struct rdt_resource = *r, struct rdt_domain_hdr *h * During boot this may be called before global allocations have been made= by * resctrl_l3_mon_resource_init(). * + * Called during CPU online that may run as soon as CPU online callbacks + * are set up during resctrl initialization. The number of supported RMIDs + * may be reduced if additional mon_capable resources are enumerated + * at mount time. This means the rdt_l3_mon_domain::mbm_states[] and + * rdt_l3_mon_domain::rmid_busy_llc allocations may be larger than needed. + * * Returns 0 for success, or -ENOMEM. */ static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3= _mon_domain *d) --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7707B326D79 for ; Thu, 25 Sep 2025 20:04:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830666; cv=none; b=olsMTgSe00ZGDC3onNwJeO7/h2Y8+q7qMIJ6mrKFC4chY7vIB0fG2yCQLyhEgkg6bMuQZsS8HrhJz2pvseveE55/EK7VhoDfI9kT3lu10Apjy7TP3CprbqbIpY5M3LGVSDXveFY2iOn5GbOTUoyXRZngCjI4oiIsfZWo0liR9k8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830666; c=relaxed/simple; bh=1MMY4PAZRGnOiPMB9iT/8hUSyj60OcM6gcQ/DakGfmk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=mzBv8A5NpSjsptpsIl/ZwDHmvWp5DPn9N/p1wuuBOqpEDb4a/V5X/JuIVWo1Xf/+EvNgblkEVa8G5hMp0nBMZeArA2UiK59F8b5uReEg11IPHCoQyAUCfEvhHT7CR7nzcVoEVUHEvV88cAtgfrDBJdzTooswLy4d2RuekmquuRA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=jOHZJdh+; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="jOHZJdh+" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830665; x=1790366665; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=1MMY4PAZRGnOiPMB9iT/8hUSyj60OcM6gcQ/DakGfmk=; b=jOHZJdh+osgj5GaLpCR+CuR8b4QKAtes6oe8/CeysYf5A5QfUqkJVnxU htj1+Cp53OoQBJaEke/1yL92hBLGcot9IlBz0OVXT5IH3dYRJVF78uLVX hgFd1UQCR7EJJZ69HqkJCFkualX/xfh6UwKvqPW+1TzMAX5dMb38pqSgA 7Tvi3Lk5VUd8SWu5GgAJ8VxvG682AP16RZabFUNaEZRICFGHFdMPC7k3V 8e/4z/NoTRuqekgJrrsWUX4kuV7qsrBTizhsasq4hBieyQA5m54Km1zhI rWL8UvDR6szF5SVAxBP5LGNsAbXMfzbOHa7FNzAPxdUt1AklBC3w/jb6c w==; X-CSE-ConnectionGUID: 0OIWgI8oQoKMA+YUJf4uGw== X-CSE-MsgGUID: THyWRXlQTCW4I3LC7+6kLg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074362" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074362" Received: from orviesa009.jf.intel.com 
([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:14 -0700 X-CSE-ConnectionGUID: YTcZVJ6DQK6PeKTPIWBDKQ== X-CSE-MsgGUID: AeA3gc9XTAuyWwPHbEOefg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003694" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:14 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 26/31] fs/resctrl: Move RMID initialization to first mount Date: Thu, 25 Sep 2025 13:03:20 -0700 Message-ID: <20250925200328.64155-27-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" L3 monitor features are enumerated during resctrl initialization and rmid_ptrs[] that tracks all RMIDs and depends on the number of supported RMIDs is allocated during this time. Telemetry monitor features are enumerated during first resctrl mount and may support a different number of RMIDs compared to L3 monitor features. Delay allocation and initialization of rmid_ptrs[] until first mount. Since the number of RMIDs cannot change on later mounts, keep the same set of rmid_ptrs[] until resctrl_exit(). This is required because the limbo handler keeps running after resctrl is unmounted and may likely need to access rmid_ptrs[] as it keeps tracking busy RMIDs after unmount. Rename routines to match what they now do: dom_data_init() -> setup_rmid_lru_list() dom_data_exit() -> free_rmid_lru_list() Signed-off-by: Tony Luck --- fs/resctrl/internal.h | 4 ++++ fs/resctrl/monitor.c | 50 +++++++++++++++++++------------------------ fs/resctrl/rdtgroup.c | 5 +++++ 3 files changed, 31 insertions(+), 28 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index aee6c4684f81..223a6cc6a64a 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -369,6 +369,10 @@ int closids_supported(void); =20 void closid_free(int closid); =20 +int setup_rmid_lru_list(void); + +void free_rmid_lru_list(void); + int alloc_rmid(u32 closid); =20 void free_rmid(u32 closid, u32 rmid); diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index 5960a0afd0ca..c0e1b672afce 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -880,20 +880,27 @@ void mbm_setup_overflow_handler(struct rdt_l3_mon_dom= ain *dom, unsigned long del schedule_delayed_work_on(cpu, &dom->mbm_over, delay); } =20 -static int dom_data_init(struct rdt_resource *r) +int setup_rmid_lru_list(void) { u32 idx_limit =3D resctrl_arch_system_num_rmid_idx(); struct rmid_entry *entry =3D NULL; - int err =3D 0, i; u32 idx; + int i; =20 - mutex_lock(&rdtgroup_mutex); + if (!resctrl_arch_mon_capable()) + return 0; + + /* + * Called on every mount, but the number of RMIDs cannot change + * after the first mount, so keep using the same set of rmid_ptrs[] + * until resctrl_exit(). 
+ */ + if (rmid_ptrs) + return 0; =20 rmid_ptrs =3D kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL); - if (!rmid_ptrs) { - err =3D -ENOMEM; - goto out_unlock; - } + if (!rmid_ptrs) + return -ENOMEM; =20 for (i =3D 0; i < idx_limit; i++) { entry =3D &rmid_ptrs[i]; @@ -906,30 +913,24 @@ static int dom_data_init(struct rdt_resource *r) /* * RESCTRL_RESERVED_CLOSID and RESCTRL_RESERVED_RMID are special and * are always allocated. These are used for the rdtgroup_default - * control group, which will be setup later in resctrl_init(). + * control group, which was setup earlier in rdtgroup_setup_default(). */ idx =3D resctrl_arch_rmid_idx_encode(RESCTRL_RESERVED_CLOSID, RESCTRL_RESERVED_RMID); entry =3D __rmid_entry(idx); list_del(&entry->list); =20 -out_unlock: - mutex_unlock(&rdtgroup_mutex); - - return err; + return 0; } =20 -static void dom_data_exit(struct rdt_resource *r) +void free_rmid_lru_list(void) { - mutex_lock(&rdtgroup_mutex); - - if (!r->mon_capable) - goto out_unlock; + if (!resctrl_arch_mon_capable()) + return; =20 + mutex_lock(&rdtgroup_mutex); kfree(rmid_ptrs); rmid_ptrs =3D NULL; - -out_unlock: mutex_unlock(&rdtgroup_mutex); } =20 @@ -1803,7 +1804,8 @@ static void closid_num_dirty_rmid_free(void) * resctrl_l3_mon_resource_init() - Initialise global monitoring structure= s. * * Allocate and initialise global monitor resources that do not belong to a - * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists. + * specific domain. i.e. the closid_num_dirty_rmid[] used to find the CLOS= ID + * with the cleanest set of RMIDs. * Called once during boot after the struct rdt_resource's have been confi= gured * but before the filesystem is mounted. * Resctrl's cpuhp callbacks may be called before this point to bring a do= main @@ -1823,12 +1825,6 @@ int resctrl_l3_mon_resource_init(void) if (ret) return ret; =20 - ret =3D dom_data_init(r); - if (ret) { - closid_num_dirty_rmid_free(); - return ret; - } - if (resctrl_arch_is_evt_configurable(QOS_L3_MBM_TOTAL_EVENT_ID)) { mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].configurable =3D true; resctrl_file_fflags_init("mbm_total_bytes_config", @@ -1877,6 +1873,4 @@ void resctrl_l3_mon_resource_exit(void) return; =20 closid_num_dirty_rmid_free(); - - dom_data_exit(r); } diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 5e3ee4b8f70b..f82bdb8f6f1d 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -2734,6 +2734,10 @@ static int rdt_get_tree(struct fs_context *fc) goto out; } =20 + ret =3D setup_rmid_lru_list(); + if (ret) + goto out; + ret =3D rdtgroup_setup_root(ctx); if (ret) goto out; @@ -4568,4 +4572,5 @@ void resctrl_exit(void) */ =20 resctrl_l3_mon_resource_exit(); + free_rmid_lru_list(); } --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ABD50324B38 for ; Thu, 25 Sep 2025 20:04:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830667; cv=none; b=dTnduHooEg49WzSGljIH7EGg5BNYCMpx3GEqkyT/MUr9t0yFIfBFDlFjAY4cXIlf7hw5EOr6fIFsK2+O4g+eFdDyFafXDXgVyfR8VjmPM6y4Wl0MzO4FIltMHD2XVlZGssWK7F+77IPTZD9wioq79m7UUAkj3XQnOFLf258VhRE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830667; c=relaxed/simple; 
bh=F/bYMDpwsvBJEBQwRTIY2pjvEIXc8qpGNlaU4LojosM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=HAiUUfY1oCVmUWN9zamJd8uiWC+g9h9gsrcwioLmsiCfSVPIpLyQsaaLHfnEu6dWLYiA6HJvWUTD3w0GLQzmzNu98DbAGbxKadevaxbFZjrLJeC83q8IePrWVnZn6JFuOZyWOtlY4ZrEKRcd8WMPvROndyoGjp98O7nWZcuM++g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=LR6ZWj2x; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="LR6ZWj2x" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830666; x=1790366666; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=F/bYMDpwsvBJEBQwRTIY2pjvEIXc8qpGNlaU4LojosM=; b=LR6ZWj2x3IQXxieIyjGyv9uaen/0qgk95KyfXD04rwQqr4xzD5S/8com +BhTsma/viIp7+QwEb2mif3nYt8vEYrx+3/iwPQmdO5cF5WYcNPc4ROte RCCUSV83SW4AfRZ83gABNGdDpr7zK4XkrG51Q//NbKnlEVxXRv3/Q1GKd jsKrRheRIASmNNomva8q8502yxRqDwnvDu4ptpDH6+qBdQ6979fKIzLoR aN0dXPYpSoLhTcrA+eFjXeSgf30K3UVS4tWMiyOF7myubUz5o2KAWrkX4 cwYhsGyEhC7XYvTqlQ/xekC+5O2UAYCyISS6VRJSCHQk5xLAb5zoCtH2s w==; X-CSE-ConnectionGUID: dlj1bZZ3QDqkHlK53D4Pqg== X-CSE-MsgGUID: R3l9Jg7bScGl/ejdcqM2QA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074371" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074371" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:15 -0700 X-CSE-ConnectionGUID: CtTXJXAWQpyeXnkbOLvT5Q== X-CSE-MsgGUID: HziKMY2BQrSNaNMrKi6+qw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003699" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:14 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 27/31] x86/resctrl: Enable RDT_RESOURCE_PERF_PKG Date: Thu, 25 Sep 2025 13:03:21 -0700 Message-ID: <20250925200328.64155-28-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Mark the RDT_RESOURCE_PERF_PKG resource as mon_capable and set the global rdt_mon_capable flag. Call domain_add_cpu_mon() for each online CPU to allocate all domains for the RDT_RESOURCE_PERF_PKG since they were not created during resctrl initialization because of the enumeration delay until first mount. 
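For illustration, the late domain construction described above has roughly the following shape (a sketch only: build_perf_pkg_domains() is a hypothetical name, the patch open-codes this loop inside resctrl_arch_pre_mount() in the hunk below):

/*
 * Illustrative sketch, not part of this patch: construct the per-package
 * monitor domains for RDT_RESOURCE_PERF_PKG once the telemetry events
 * have been enumerated at first mount.
 */
static void build_perf_pkg_domains(struct rdt_resource *r)
{
	int cpu;

	cpus_read_lock();			/* hold off CPU hotplug */
	mutex_lock(&domain_list_lock);		/* protects the resource's domain list */
	for_each_online_cpu(cpu)
		domain_add_cpu_mon(cpu, r);	/* creates the package domain for the first CPU, adds later CPUs to it */
	mutex_unlock(&domain_list_lock);
	cpus_read_unlock();
}

The atomic_try_cmpxchg() guard in resctrl_arch_pre_mount() ensures that this enumeration and domain construction run only on the first mount.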
Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/core.c | 17 ++++++++++++++++- arch/x86/kernel/cpu/resctrl/intel_aet.c | 5 +++++ 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 1d43087c5975..48ed6242d136 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -755,14 +755,29 @@ static int resctrl_arch_offline_cpu(unsigned int cpu) =20 void resctrl_arch_pre_mount(void) { + struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; static atomic_t only_once =3D ATOMIC_INIT(0); - int old =3D 0; + int cpu, old =3D 0; =20 if (!atomic_try_cmpxchg(&only_once, &old, 1)) return; =20 if (!intel_aet_get_events()) return; + + if (!r->mon_capable) + return; + + /* + * Late discovery of telemetry events means the domains for the + * resource were not built. Do that now. + */ + cpus_read_lock(); + mutex_lock(&domain_list_lock); + for_each_online_cpu(cpu) + domain_add_cpu_mon(cpu, r); + mutex_unlock(&domain_list_lock); + cpus_read_unlock(); } =20 enum { diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index 039e63d8c2e7..f6afe862b9de 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -214,6 +214,9 @@ static bool enable_events(struct event_group *e, struct= pmt_feature_group *p) if (!usable_events) return false; =20 + r->mon_capable =3D true; + rdt_mon_capable =3D true; + if (r->mon.num_rmid) r->mon.num_rmid =3D min(r->mon.num_rmid, e->num_rmids); else @@ -223,6 +226,8 @@ static bool enable_events(struct event_group *e, struct= pmt_feature_group *p) resctrl_enable_mon_event(e->evts[j].id, true, e->evts[j].bin_bits, &e->evts[j]); =20 + pr_info("%s %s monitoring detected\n", r->name, e->name); + return true; } =20 --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DB5FC327A12 for ; Thu, 25 Sep 2025 20:04:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830668; cv=none; b=XT5odY7L6zSJoCeJ8w0T2hx2l6HREMnl4E+/e88U9UvNOfQifOnioN8ZrG5la1hJe8WoW/6iOF92oS4uEcEvoP7NgQLKiiFRY8lGKT4vEhM1BV2CwqDMzEAkFTFxuhFGM3/3SfcZIhiXs/VZ5bLF3VqNnQFChEg0rtQSYrv+Rv0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830668; c=relaxed/simple; bh=JdXqo8CyC6iZZ1CPme9AeWkUa3NJj4dp5td2iQYLYZk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=VZz2MQcRwYSAJ4yCYgUgGLeFUBc383+4Ss1rL4NTi/gPF5fvR+FDY7FWcZ/AB6OTf2kBZ56fVQge94hsgaq13YAWhbq/NPq1MPAEYCCfckCgnXhCiHy4iDkCsDOJwK+DOQIqddoVE+glJGsK6xf9m7d1d8ib4LGKLM7VfnRI8xo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Mb/+a+Fa; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com 
header.b="Mb/+a+Fa" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830666; x=1790366666; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JdXqo8CyC6iZZ1CPme9AeWkUa3NJj4dp5td2iQYLYZk=; b=Mb/+a+FaxpL03KTbH2y4eiOKfBLfANRiXFbCVWIFVfbPYD+GsolTkmim 1FxsRNmX+q4wm8kqFX/QLLEyzLaxeVO4yqUPuNP3j2irUFxr4utrzpUNe 1D/l4GozAkwdvqrWExOs3hA/NJzn9fucqFLQog45Ft3eGl/HurTuNlKyf LDYffDj2ZdHp+kRjm4Y21kyuUBJh7SHaIBaD5JzsF9/GJAx6Y3IlZ+wRw 6dMesII2SMb88M0Sp9EYnYRXktfRSkNVFi6BwehGKrRgznD/6x3fg2Blh DugXAZuZ6kkcwc3WUobI8l8YFXYyjuIIB8iiZMh1JxhQy7VHpsxUMjqPi A==; X-CSE-ConnectionGUID: l238+69ITVqL4ua3oEdBdQ== X-CSE-MsgGUID: ZONzSbi8QNKSq9/F+o5taw== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074380" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074380" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:15 -0700 X-CSE-ConnectionGUID: MX1cIAToSWyzBq95uiBNWA== X-CSE-MsgGUID: lnT2P06XQy6H8eHs9XTzhQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003702" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:15 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 28/31] fs/resctrl: Provide interface to create architecture specific debugfs area Date: Thu, 25 Sep 2025 13:03:22 -0700 Message-ID: <20250925200328.64155-29-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" All files below /sys/fs/resctrl are considered user ABI. This leaves no place for architectures to provide additional interfaces. Add resctrl_debugfs_mon_info_arch_mkdir() which creates a directory in the debugfs file system for a monitoring resource. Naming follows the layout of the main resctrl hierarchy: /sys/kernel/debug/resctrl/info/{resource}_MON/{arch} The {arch} last level directory name matches the output of the user level "uname -m" command. Architecture code may use this directory for debug information, or for minor tuning of features. It must not be used for basic feature enabling as debugfs may not be configured/mounted on production systems. Suggested-by: Reinette Chatre Signed-off-by: Tony Luck --- include/linux/resctrl.h | 10 ++++++++++ fs/resctrl/rdtgroup.c | 29 +++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+) diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h index c7b5e56d25bb..d4be0f54c7e8 100644 --- a/include/linux/resctrl.h +++ b/include/linux/resctrl.h @@ -678,6 +678,16 @@ void resctrl_arch_reset_cntr(struct rdt_resource *r, s= truct rdt_l3_mon_domain *d extern unsigned int resctrl_rmid_realloc_threshold; extern unsigned int resctrl_rmid_realloc_limit; =20 +/** + * resctrl_debugfs_mon_info_arch_mkdir() - Create a debugfs info directory. + * Removed by resctrl_exit(). 
+ * @r: Resource (must be mon_capable). + * + * Return: NULL if resource is not monitoring capable, + * dentry pointer on success, or ERR_PTR(-ERROR) on failure. + */ +struct dentry *resctrl_debugfs_mon_info_arch_mkdir(struct rdt_resource *r); + int resctrl_init(void); void resctrl_exit(void); =20 diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index f82bdb8f6f1d..16b088c5f2be 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -24,6 +24,7 @@ #include #include #include +#include =20 #include =20 @@ -75,6 +76,8 @@ static void rdtgroup_destroy_root(void); =20 struct dentry *debugfs_resctrl; =20 +static struct dentry *debugfs_resctrl_info; + /* * Memory bandwidth monitoring event to use for the default CTRL_MON group * and each new CTRL_MON group created by the user. Only relevant when @@ -4513,6 +4516,31 @@ int resctrl_init(void) return ret; } =20 +/* + * Create /sys/kernel/debug/resctrl/info/{r->name}_MON/{arch} directory + * by request for architecture to use for debugging or minor tuning. + * Basic functionality of features must not be controlled by files + * added to this directory as debugfs may not be configured/mounted + * on production systems. + */ +struct dentry *resctrl_debugfs_mon_info_arch_mkdir(struct rdt_resource *r) +{ + struct dentry *moninfodir; + char name[32]; + + if (!r->mon_capable) + return NULL; + + if (!debugfs_resctrl_info) + debugfs_resctrl_info =3D debugfs_create_dir("info", debugfs_resctrl); + + sprintf(name, "%s_MON", r->name); + + moninfodir =3D debugfs_create_dir(name, debugfs_resctrl_info); + + return debugfs_create_dir(utsname()->machine, moninfodir); +} + static bool resctrl_online_domains_exist(void) { struct rdt_resource *r; @@ -4564,6 +4592,7 @@ void resctrl_exit(void) =20 debugfs_remove_recursive(debugfs_resctrl); debugfs_resctrl =3D NULL; + debugfs_resctrl_info =3D NULL; unregister_filesystem(&rdt_fs_type); =20 /* --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D10C63277B1 for ; Thu, 25 Sep 2025 20:04:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830669; cv=none; b=lge/c9MUfjwmJfW7QGFNWhVhpzpTJyUVTryVptW0FzUF07DtiXyPQMaj1eBQd7JsDgk4L59zca18M9KjdRnJdzKVGJOo3BjNYbf48nKfdA3n7Elntx/iAs1XwTMhTA24BfpVxsf7izO2fuELvHKifW2Fai6dPm3OeNik49Na8Wc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830669; c=relaxed/simple; bh=KUB65dBQVze38qJ3dYR+jdqsRXWXGNnAMDUO6kvsa2k=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gy7Oc6FZ+2YYj/XtETU7ZArdNlged6FPGwOvLU4EdHo3ggkDzS0Ly+vyr2SjsPyYEOL3rMiWC6OqMCc7R7wdWGbHgpXw/xa3S0DHTzOWzrxnzpJYqXIl1EZSj8FYMFXl/8VsIBGUPpkgejj2ERM/53tYKLkt8s9SJ+DGxYok0ik= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=hQF01sFR; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com 
header.i=@intel.com header.b="hQF01sFR" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830667; x=1790366667; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KUB65dBQVze38qJ3dYR+jdqsRXWXGNnAMDUO6kvsa2k=; b=hQF01sFRbM5qp1O+vrvsygP9iEczWhmXUJy4J17fBYZzXF3O+JlXNXUt AIhSUti94ORnAWtmGXZrnohHy3QbZaweaNrnd2EdYX/LO6/YI7oKZAb9g 9fBfWH4ZIM8orTwsxMb9hv0gAnR11BXVvJ5tdVMMOYINkVgbA24HG3P+5 mu3v8sZ0eaixzC0VER2ak19XmNPQ2Wl+cOUuCSxE7+UAYuG1RuEv5yCc3 PYUls3FVK27TZn13AfCoYLiajoYoToBEb8U4eKdyXdpG8vUle0LzQVGdu 9eXyPMDlXckrSuR7meC74F+2vgc41qX5sWbPWtKbGz8bkrUrt0ViNbDKV Q==; X-CSE-ConnectionGUID: G1DErsd4QHakhZE/twePpw== X-CSE-MsgGUID: kEZNvmlqRNmUhsB2J/3Wlw== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074390" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074390" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:16 -0700 X-CSE-ConnectionGUID: UWWSSLsCSNmGNZ3GpdMlNw== X-CSE-MsgGUID: +riKI2dDQPKG5D81Yk9/MQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003705" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:15 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 29/31] x86/resctrl: Add debugfs files to show telemetry aggregator status Date: Thu, 25 Sep 2025 13:03:23 -0700 Message-ID: <20250925200328.64155-30-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Each telemetry aggregator provides three status registers at the top end of MMIO space after all the per-RMID per-event counters: data_loss_count: This counts the number of times that this aggregator failed to accumulate a counter value supplied by a CPU core. data_loss_timestamp: This is a "timestamp" from a free running 25MHz uncore timer indicating when the most recent data loss occurred. last_update_timestamp: Another 25MHz timestamp indicating when the most recent counter update was successfully applied. Create files in /sys/kernel/debug/resctrl/info/PERF_PKG_MON/x86_64/ to display the value of each of these status registers for each aggregator in each enabled event group. The prefix for each file name describes the type of aggregator, which package it is located on, and an opaque instance number to provide a unique file name when there are multiple aggregators on a package. The suffix is one of the three strings listed above. An example name is: energy_pkg0_agg2_data_loss_count These files are removed along with all other debugfs entries by the call to debugfs_remove_recursive() in resctrl_exit(). 
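For illustration (none of this code is part of the patch): the three status registers are 64-bit values packed into the last 24 bytes of each aggregator's MMIO region, which is why the hunk below computes their addresses as info_end - 24, info_end - 16 and info_end - 8. A hypothetical helper showing the same layout:

/*
 * Illustrative sketch only: the three 8-byte status registers occupy the
 * last 24 bytes of an aggregator's MMIO region, in the order
 * data_loss_count, data_loss_timestamp, last_update_timestamp.
 */
enum agg_status_reg {
	AGG_DATA_LOSS_COUNT,		/* info_end - 24 */
	AGG_DATA_LOSS_TIMESTAMP,	/* info_end - 16 */
	AGG_LAST_UPDATE_TIMESTAMP,	/* info_end - 8 */
};

static u64 read_agg_status(void __iomem *info_end, enum agg_status_reg reg)
{
	return readq(info_end - 24 + reg * 8);
}

From user space, reading one of the debugfs files, e.g.
"# cat /sys/kernel/debug/resctrl/info/PERF_PKG_MON/x86_64/energy_pkg0_agg2_data_loss_count",
prints the current register value in decimal (the files use a simple "%llu\n" attribute).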
Signed-off-by: Tony Luck --- arch/x86/kernel/cpu/resctrl/intel_aet.c | 51 +++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/= resctrl/intel_aet.c index f6afe862b9de..f84935c57b67 100644 --- a/arch/x86/kernel/cpu/resctrl/intel_aet.c +++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c @@ -18,8 +18,11 @@ #include #include #include +#include +#include #include #include +#include #include #include #include @@ -33,6 +36,7 @@ #include #include #include +#include #include #include #include @@ -184,9 +188,50 @@ static bool all_regions_have_sufficient_rmid(struct ev= ent_group *e, struct pmt_f return true; } =20 +static int status_read(void *priv, u64 *val) +{ + void __iomem *info =3D (void __iomem *)priv; + + *val =3D readq(info); + + return 0; +} + +DEFINE_SIMPLE_ATTRIBUTE(status_fops, status_read, NULL, "%llu\n"); + +static void make_status_files(struct dentry *dir, struct event_group *e, i= nt pkg, + int instance, void *info_end) +{ + char name[64]; + + sprintf(name, "%s_pkg%d_agg%d_data_loss_count", e->name, pkg, instance); + debugfs_create_file(name, 0400, dir, info_end - 24, &status_fops); + + sprintf(name, "%s_pkg%d_agg%d_data_loss_timestamp", e->name, pkg, instanc= e); + debugfs_create_file(name, 0400, dir, info_end - 16, &status_fops); + + sprintf(name, "%s_pkg%d_agg%d_last_update_timestamp", e->name, pkg, insta= nce); + debugfs_create_file(name, 0400, dir, info_end - 8, &status_fops); +} + +static void create_debug_event_status_files(struct dentry *dir, struct eve= nt_group *e, + struct pmt_feature_group *p) +{ + void *info_end; + + for (int i =3D 0; i < p->count; i++) { + if (!p->regions[i].addr) + continue; + info_end =3D (void __force *)p->regions[i].addr + e->mmio_size; + make_status_files(dir, e, p->regions[i].plat_info.package_id, + i, info_end); + } +} + static bool enable_events(struct event_group *e, struct pmt_feature_group = *p) { struct rdt_resource *r =3D &rdt_resources_all[RDT_RESOURCE_PERF_PKG].r_re= sctrl; + static struct dentry *infodir; bool usable_events =3D false; =20 /* Disable feature if insufficient RMIDs */ @@ -226,6 +271,12 @@ static bool enable_events(struct event_group *e, struc= t pmt_feature_group *p) resctrl_enable_mon_event(e->evts[j].id, true, e->evts[j].bin_bits, &e->evts[j]); =20 + if (!infodir) + infodir =3D resctrl_debugfs_mon_info_arch_mkdir(r); + + if (!IS_ERR_OR_NULL(infodir)) + create_debug_event_status_files(infodir, e, p); + pr_info("%s %s monitoring detected\n", r->name, e->name); =20 return true; --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A71EE32858A for ; Thu, 25 Sep 2025 20:04:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830669; cv=none; b=tyQpj9ZR7qpx6GLpIy7cH6Ko9kOmB7yNYvaUNn2B0KFFyXr78B+nKZVZDY1nhJJxTU7Tle0OkWs506yU03MKTPi3Z5NIojhUoqBemLb55taw6I8H08qvqOkMSVzgafmAHHm+a0pMeI4ZJRuLwzGnkNWzM1LvbVzpxYPzaVCyADY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830669; c=relaxed/simple; bh=GTDzyftXvBl8MpKeSXIIhicwNVfPwqh8GuSLto2mY4k=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=gXUU60Nr7WJx7KVBSu2a+ZWVnTViC63UEoJgdES3wgBk46l7Rw+wPfKU4uccJBlrA9ISbU/60A70Hv+hEXWYelaHuiVbTFeo/NqwcQ4pNTg9Rm9ssYjlXKS72QS6bIk6QhXLjYNb5ZpkvFbxssE1eRKrukQ2O16w5ns/gZXvFBQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=UB00AgtV; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="UB00AgtV" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830668; x=1790366668; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=GTDzyftXvBl8MpKeSXIIhicwNVfPwqh8GuSLto2mY4k=; b=UB00AgtVGG1Ij5DsrUX9gENCGBXFVlhJpXGfSH2EdTj79KueagvAxkFl WD2ZZ3YPN8aHKVEhR0k+h1eNoVhsInM7BnKaA2NHRj4CzQHGTyLnWDY5Q 2M3Gp4EKK8lMScNlAnQrDofcVmZE/Os7RjjVQxNdsGij5X7ebuGOe9JYO uqKGYvn8Z9Sp6qqKTL4QeNrszkYolqgdriExrtwOOiqC9A/QEQVmc996t YvmDSabYZStDuhZ//CsxsNvIYBJLnObtdsAga0sGxq1sRv+yQiM3lM1CF 0ziPgTxjZJlu4JKpPUI0kSGISe774mgkUop/MmzN1IAZ8hbf2MSDgZs0n Q==; X-CSE-ConnectionGUID: Nq9otCerSh6JRtc1ExbGsA== X-CSE-MsgGUID: jBPjfwqQQ5iiAcjl9NGYgA== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074403" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074403" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:16 -0700 X-CSE-ConnectionGUID: 131bugDMR2O3RoYVfQTXDA== X-CSE-MsgGUID: vM1dP+P6RW2K3Eoa/a++BA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003709" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:16 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck Subject: [PATCH v11 30/31] x86,fs/resctrl: Update Documentation for package events Date: Thu, 25 Sep 2025 13:03:24 -0700 Message-ID: <20250925200328.64155-31-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Update resctrl filesystem documentation with the details about the resctrl files that support telemetry events. 
Signed-off-by: Tony Luck --- Documentation/filesystems/resctrl.rst | 100 ++++++++++++++++++++++---- 1 file changed, 87 insertions(+), 13 deletions(-) diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesyst= ems/resctrl.rst index 006d23af66e1..cb6da9614f58 100644 --- a/Documentation/filesystems/resctrl.rst +++ b/Documentation/filesystems/resctrl.rst @@ -168,13 +168,12 @@ with respect to allocation: bandwidth percentages are directly applied to the threads running on the core =20 -If RDT monitoring is available there will be an "L3_MON" directory +If L3 monitoring is available there will be an "L3_MON" directory with the following files: =20 "num_rmids": - The number of RMIDs available. This is the - upper bound for how many "CTRL_MON" + "MON" - groups can be created. + The number of RMIDs supported by hardware for + L3 monitoring events. =20 "mon_features": Lists the monitoring events if @@ -400,6 +399,19 @@ with the following files: bytes) at which a previously used LLC_occupancy counter can be considered for re-use. =20 +If telemetry monitoring is available there will be a "PERF_PKG_MON" direc= tory +with the following files: + +"num_rmids": + The number of RMIDs supported by hardware for + telemetry monitoring events. + +"mon_features": + Lists the telemetry monitoring events that are enabled on this system. + +The upper bound for how many "CTRL_MON" + "MON" groups can be created +is the smaller of the L3_MON and PERF_PKG_MON "num_rmids" values. + Finally, in the top level of the "info" directory there is a file named "last_cmd_status". This is reset with every "command" issued via the file system (making new directories or writing to any of the @@ -505,15 +517,40 @@ When control is enabled all CTRL_MON groups will also= contain: When monitoring is enabled all MON groups will also contain: =20 "mon_data": - This contains a set of files organized by L3 domain and by - RDT event. E.g. on a system with two L3 domains there will - be subdirectories "mon_L3_00" and "mon_L3_01". Each of these - directories have one file per event (e.g. "llc_occupancy", - "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these - files provide a read out of the current value of the event for - all tasks in the group. In CTRL_MON groups these files provide - the sum for all tasks in the CTRL_MON group and all tasks in - MON groups. Please see example section for more details on usage. + This contains directories for each monitor domain. One set for + each instance of an L3 cache, another set for each processor + package. The L3 cache directories are named "mon_L3_00", + "mon_L3_01" etc. The package directories are named "mon_PERF_PKG_00", + "mon_PERF_PKG_01" etc. + + Within each directory there is one file per event. For + example the L3 directories may contain "llc_occupancy", "mbm_total_bytes", + and "mbm_local_bytes". The PERF_PKG directories may contain "core_energy", + "activity", etc. The info/`*`/mon_features files provide the full + list of event/file names. + + "core_energy" reports a floating point number for the energy (in Joules) + consumed by cores (registers, arithmetic units, TLB and L1/L2 caches) + during execution of instructions summed across all logical CPUs on a + package for the current RMID. + + "activity" also reports a floating point value (in Farads). + This provides an estimate of work done independent of the + frequency at which the CPUs ran during execution.
+ + Note that these two counters only measure energy/activity + in the "core" of the CPU (arithmetic units, TLB, L1 and L2 + caches, etc.). They do not include L3 cache, memory, I/O + devices etc. + + All other events report decimal integer values. + + In a MON group these files provide a readout of the current + value of the event for all tasks in the group. In CTRL_MON groups + these files provide the sum for all tasks in the CTRL_MON group + and all tasks in MON groups. Please see example section for more + details on usage. + On systems with Sub-NUMA Cluster (SNC) enabled there are extra directories for each node (located within the "mon_L3_XX" directory for the L3 cache they occupy). These are named "mon_sub_L3_YY" @@ -1506,6 +1543,43 @@ Example with C:: resctrl_release_lock(fd); } =20 +Debugfs +=3D=3D=3D=3D=3D=3D=3D +In addition to the use of debugfs for tracing of pseudo-locking +performance, architecture code may create debugfs directories +associated with monitoring features for a specific resource. + +The full pathname for these is in the form: + + /sys/kernel/debug/resctrl/info/{resource_name}_MON/{arch}/ + +The presence, names, and format of these files may vary +between architectures even if the same resource is present. + +PERF_PKG_MON/x86_64 +------------------- +Three status files are present per telemetry aggregator +instance. The prefix of +each file name describes the type ("energy" or "perf"), which +processor package it belongs to, and the instance number of +the aggregator. For example: "energy_pkg1_agg2". + +The suffix describes which data is reported in the file and +is one of: + +data_loss_count: + This counts the number of times that this aggregator + failed to accumulate a counter value supplied by a CPU. + +data_loss_timestamp: + This is a "timestamp" from a free-running 25MHz uncore + timer indicating when the most recent data loss occurred. + +last_update_timestamp: + Another 25MHz timestamp indicating when the + most recent counter update was successfully applied.
+ + Examples for RDT Monitoring along with allocation usage =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D Reading monitored data --=20 2.51.0 From nobody Sat Sep 27 20:26:34 2025 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2C8A332897A for ; Thu, 25 Sep 2025 20:04:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830670; cv=none; b=nKh167G5eBQfv9nXOQHCR3J2/4JksMyuZzLq/B/bc4zv5jkC9YinU3iJ5rluT/d7b9JOA8ymJrhBjFH+nKhbSLztoFPUA62bAELbFuvfkRlrsUOpp7SPlqbb/x1ZKmSijST1DTinUNOMQ0cEAKJUV94/OkdA34ppqP5Wf1gJ7Jk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1758830670; c=relaxed/simple; bh=KKZ76b6f1fVw3YfuVP6not30dWXnaMRHgFBHlr5XoWI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=XT4WQTcGTKcfZoEmyhs0l+8WEwYmVQ02b87Q1Tm5E5cvFWeaFhoaa3/oY36I7yAnhPmphJj9wFzYdbGjP97vU11lQDAjagJ5+Gjliv703n3iUcybLo5zdNXSrfWkQlFmuyF13FEVceql3JfHY7D9/ZR4pJAnlQv8tOjm+Uf60n8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ZCOasC2a; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ZCOasC2a" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1758830669; x=1790366669; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KKZ76b6f1fVw3YfuVP6not30dWXnaMRHgFBHlr5XoWI=; b=ZCOasC2a8x53YLPd1wYO87N/DsPsdIJ898xc7Pg09cffcwxClemMSIIt moWuwFEYTlGKl2WaZHyR04Vi1dNWQFc9BPoT/n75jJXMfoaNCeCFwqFw9 CrHwOUgJC+e5QFcZB5+zVI8dVrlpuIaZpHWaS9Vvs9hTCd1Tu8vvG58P3 EIjO2h/VnNsjEZ1W707Y0rzy3dD3bej6/vOczPWx5LTh1vK76xQMKM15I w/YA8j8OyDPoJo2LMUhvyIrJD0uQLj8gtJpRpT8J/ilaDrNgPUjKSLYEg J3jaMgTlplyAKYHaUSnQjKBYHyRId2YXAAZt1OEbL1gLPk6MmLQretnpv Q==; X-CSE-ConnectionGUID: GAKD21lvSEeehCrPCczG5g== X-CSE-MsgGUID: Oniw8+rnTp+p6kw/JZcahg== X-IronPort-AV: E=McAfee;i="6800,10657,11531"; a="61074412" X-IronPort-AV: E=Sophos;i="6.17,312,1747724400"; d="scan'208";a="61074412" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:16 -0700 X-CSE-ConnectionGUID: K0RKm+OUSNqvhPHncTdHuA== X-CSE-MsgGUID: 02TGTeRXQN2qyC4Yyn4/+g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.18,293,1751266800"; d="scan'208";a="177003714" Received: from inaky-mobl1.amr.corp.intel.com (HELO agluck-desk3.intel.com) ([10.124.220.206]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Sep 2025 13:04:16 -0700 From: Tony Luck To: Fenghua Yu , Reinette Chatre , Maciej Wieczor-Retman , Peter Newman , James Morse , Babu Moger , Drew Fustini , Dave Martin , Chen Yu Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony 
Luck Subject: [PATCH v11 31/31] fs/resctrl: Some kerneldoc updates Date: Thu, 25 Sep 2025 13:03:25 -0700 Message-ID: <20250925200328.64155-32-tony.luck@intel.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20250925200328.64155-1-tony.luck@intel.com> References: <20250925200328.64155-1-tony.luck@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" resctrl event monitoring on Sub-NUMA Cluster (SNC) systems sums the counts for events across all nodes sharing an L3 cache. Update the kerneldoc for rmid_read::sum and the do_sum argument to mon_get_kn_priv() to say these are only used on the RDT_RESOURCE_L3 resource. Add Return: value description for l3_mon_domain_mbm_alloc(), resctrl_l3_mon_resource_init(), and domain_setup_l3_mon_state() Signed-off-by: Tony Luck --- fs/resctrl/internal.h | 4 ++-- arch/x86/kernel/cpu/resctrl/core.c | 2 ++ fs/resctrl/monitor.c | 2 +- fs/resctrl/rdtgroup.c | 5 +++-- 4 files changed, 8 insertions(+), 5 deletions(-) diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h index 223a6cc6a64a..0dd89d3fa31a 100644 --- a/fs/resctrl/internal.h +++ b/fs/resctrl/internal.h @@ -96,8 +96,8 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS]; * @list: Member of the global @mon_data_kn_priv_list list. * @rid: Resource id associated with the event file. * @evt: Event structure associated with the event file. - * @sum: Set when event must be summed across multiple - * domains. + * @sum: Set for RDT_RESOURCE_L3 when event must be summed + * across multiple domains. * @domid: When @sum is zero this is the domain to which * the event file belongs. When @sum is one this * is the id of the L3 cache that all domains to be diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resct= rl/core.c index 48ed6242d136..78c176e15b93 100644 --- a/arch/x86/kernel/cpu/resctrl/core.c +++ b/arch/x86/kernel/cpu/resctrl/core.c @@ -418,6 +418,8 @@ static int domain_setup_ctrlval(struct rdt_resource *r,= struct rdt_ctrl_domain * * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM c= ounters * @num_rmid: The size of the MBM counter array * @hw_dom: The domain that owns the allocated arrays + * + * Return: %0 for success; Error code otherwise. */ static int l3_mon_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_doma= in *hw_dom) { diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c index c0e1b672afce..4cc310b9e78e 100644 --- a/fs/resctrl/monitor.c +++ b/fs/resctrl/monitor.c @@ -1811,7 +1811,7 @@ static void closid_num_dirty_rmid_free(void) * Resctrl's cpuhp callbacks may be called before this point to bring a do= main * online. * - * Returns 0 for success, or -ENOMEM. + * Return: %0 for success; Error code otherwise. */ int resctrl_l3_mon_resource_init(void) { diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c index 16b088c5f2be..04765dad3d31 100644 --- a/fs/resctrl/rdtgroup.c +++ b/fs/resctrl/rdtgroup.c @@ -3037,7 +3037,8 @@ static void rmdir_all_sub(void) * @rid: The resource id for the event file being created. * @domid: The domain id for the event file being created. * @mevt: The type of event file being created. - * @do_sum: Whether SNC summing monitors are being created. + * @do_sum: Whether SNC summing monitors are being created. Only set + * when @rid =3D=3D RDT_RESOURCE_L3. 
*/ static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int do= mid, struct mon_evt *mevt, @@ -4281,7 +4282,7 @@ void resctrl_offline_mon_domain(struct rdt_resource *= r, struct rdt_domain_hdr *h * at mount time. This means the rdt_l3_mon_domain::mbm_states[] and * rdt_l3_mon_domain::rmid_busy_llc allocations may be larger than needed. * - * Returns 0 for success, or -ENOMEM. + * Return: %0 for success; Error code otherwise. */ static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3= _mon_domain *d) { --=20 2.51.0