From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
	James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
	Tony Luck
Subject: [PATCH v15 17/32] x86,fs/resctrl: Fill in details of events for guid 0x26696143 and 0x26557651
Date: Thu, 4 Dec 2025 12:53:47 -0800
Message-ID: <20251204205404.12763-18-tony.luck@intel.com>
In-Reply-To: <20251204205404.12763-1-tony.luck@intel.com>
References: <20251204205404.12763-1-tony.luck@intel.com>

The telemetry event aggregators of the Intel Clearwater Forest CPU
support two RMID-based feature types: "energy" with guid 0x26696143 [1],
and "perf" with guid 0x26557651 [2].

The event counter offsets in an aggregator's MMIO space are arranged in
groups for each RMID. E.g. the "energy" counters for guid 0x26696143 are
arranged like this:

MMIO offset:0x0000	Counter for RMID 0 PMT_EVENT_ENERGY
MMIO offset:0x0008	Counter for RMID 0 PMT_EVENT_ACTIVITY
MMIO offset:0x0010	Counter for RMID 1 PMT_EVENT_ENERGY
MMIO offset:0x0018	Counter for RMID 1 PMT_EVENT_ACTIVITY
...
MMIO offset:0x23F0	Counter for RMID 575 PMT_EVENT_ENERGY
MMIO offset:0x23F8	Counter for RMID 575 PMT_EVENT_ACTIVITY

After all the counters there are three status registers: a count of how
many times the aggregator was unable to process event counts, the time
stamp of the most recent loss of data, and the time stamp of the most
recent successful update:

MMIO offset:0x2400	AGG_DATA_LOSS_COUNT
MMIO offset:0x2408	AGG_DATA_LOSS_TIMESTAMP
MMIO offset:0x2410	LAST_UPDATE_TIMESTAMP

Define event_group structures for both of these aggregator types and
define the events tracked by the aggregators in the file system code.

PMT_EVENT_ENERGY and PMT_EVENT_ACTIVITY are produced in fixed point
format. The file system code must convert them to floating point values
for output.
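
For illustration only (this sketch is not part of the patch, and the
helper name is hypothetical), the byte offset of the counter for a given
RMID and per-RMID event index in the layout above works out like this:

	#include <stdint.h>

	/* The "energy" group (guid 0x26696143) has two counters per RMID. */
	#define NUM_EVENTS_PER_RMID	2

	/* Byte offset of one event counter within the aggregator MMIO space. */
	static uint64_t rmid_counter_offset(unsigned int rmid, unsigned int evt_idx)
	{
		return (rmid * NUM_EVENTS_PER_RMID + evt_idx) * sizeof(uint64_t);
	}

E.g. rmid_counter_offset(575, 1) is 0x23F8, and the status registers
start immediately after the last counter, at 576 * 2 * 8 = 0x2400.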
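
Likewise, a minimal sketch of the fixed-point conversion (again not part
of the patch; the helper name is hypothetical, and the 18 fractional
bits correspond to the bin_bits values given for the "energy" events
below):

	#include <stdint.h>

	/* Convert a raw fixed-point counter value to a floating point reading. */
	static double pmt_fixed_to_double(uint64_t raw, unsigned int bin_bits)
	{
		/* bin_bits == 0 would mean the event is a plain integer count. */
		return (double)raw / (double)(1ULL << bin_bits);
	}

E.g. a raw value of 0x40000 with bin_bits = 18 reads as 1.0.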

Signed-off-by: Tony Luck
Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-ENERGY/cwf_aggregator.xml # [1]
Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-PERF/cwf_aggregator.xml # [2]
Reviewed-by: Reinette Chatre
---
 include/linux/resctrl_types.h           | 11 +++++
 arch/x86/kernel/cpu/resctrl/intel_aet.c | 66 +++++++++++++++++++++++++
 fs/resctrl/monitor.c                    | 35 +++++++------
 3 files changed, 97 insertions(+), 15 deletions(-)

diff --git a/include/linux/resctrl_types.h b/include/linux/resctrl_types.h
index acfe07860b34..a5f56faa18d2 100644
--- a/include/linux/resctrl_types.h
+++ b/include/linux/resctrl_types.h
@@ -50,6 +50,17 @@ enum resctrl_event_id {
 	QOS_L3_MBM_TOTAL_EVENT_ID	= 0x02,
 	QOS_L3_MBM_LOCAL_EVENT_ID	= 0x03,
 
+	/* Intel Telemetry Events */
+	PMT_EVENT_ENERGY,
+	PMT_EVENT_ACTIVITY,
+	PMT_EVENT_STALLS_LLC_HIT,
+	PMT_EVENT_C1_RES,
+	PMT_EVENT_UNHALTED_CORE_CYCLES,
+	PMT_EVENT_STALLS_LLC_MISS,
+	PMT_EVENT_AUTO_C6_RES,
+	PMT_EVENT_UNHALTED_REF_CYCLES,
+	PMT_EVENT_UOPS_RETIRED,
+
 	/* Must be the last */
 	QOS_NUM_EVENTS,
 };
diff --git a/arch/x86/kernel/cpu/resctrl/intel_aet.c b/arch/x86/kernel/cpu/resctrl/intel_aet.c
index 3cb79e30d284..33b7bb180582 100644
--- a/arch/x86/kernel/cpu/resctrl/intel_aet.c
+++ b/arch/x86/kernel/cpu/resctrl/intel_aet.c
@@ -11,15 +11,33 @@
 
 #define pr_fmt(fmt) "resctrl: " fmt
 
+#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
+#include
 
 #include "internal.h"
 
+/**
+ * struct pmt_event - Telemetry event.
+ * @id:		Resctrl event id.
+ * @idx:	Counter index within each per-RMID block of counters.
+ * @bin_bits:	Zero for integer valued events, else number bits in fraction
+ *		part of fixed-point.
+ */
+struct pmt_event {
+	enum resctrl_event_id	id;
+	unsigned int		idx;
+	unsigned int		bin_bits;
+};
+
+#define EVT(_id, _idx, _bits) { .id = _id, .idx = _idx, .bin_bits = _bits }
+
 /**
  * struct event_group - Events with the same feature type ("energy" or "perf") and guid.
  * @pfname:	PMT feature name (energy or perf) of this event group
@@ -30,14 +48,62 @@
  *		data for all telemetry regions of type @pfname.
  *		Valid if the system supports the event group,
  *		NULL otherwise.
+ * @guid:	Unique number per XML description file.
+ * @mmio_size:	Number of bytes of MMIO registers for this group.
+ * @num_events:	Number of events in this group.
+ * @evts:	Array of event descriptors.
  */
 struct event_group {
 	/* Data fields for additional structures to manage this group. */
 	const char			*pfname;
 	struct pmt_feature_group	*pfg;
+
+	/* Remaining fields initialized from XML file. */
+	u32				guid;
+	size_t				mmio_size;
+	unsigned int			num_events;
+	struct pmt_event		evts[] __counted_by(num_events);
+};
+
+#define XML_MMIO_SIZE(num_rmids, num_events, num_extra_status)	\
+	(((num_rmids) * (num_events) + (num_extra_status)) * sizeof(u64))
+
+/*
+ * Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-ENERGY/cwf_aggregator.xml
+ */
+static struct event_group energy_0x26696143 = {
+	.pfname		= "energy",
+	.guid		= 0x26696143,
+	.mmio_size	= XML_MMIO_SIZE(576, 2, 3),
+	.num_events	= 2,
+	.evts		= {
+		EVT(PMT_EVENT_ENERGY, 0, 18),
+		EVT(PMT_EVENT_ACTIVITY, 1, 18),
+	}
+};
+
+/*
+ * Link: https://github.com/intel/Intel-PMT/blob/main/xml/CWF/OOBMSM/RMID-PERF/cwf_aggregator.xml
+ */
+static struct event_group perf_0x26557651 = {
+	.pfname		= "perf",
+	.guid		= 0x26557651,
+	.mmio_size	= XML_MMIO_SIZE(576, 7, 3),
+	.num_events	= 7,
+	.evts		= {
+		EVT(PMT_EVENT_STALLS_LLC_HIT, 0, 0),
+		EVT(PMT_EVENT_C1_RES, 1, 0),
+		EVT(PMT_EVENT_UNHALTED_CORE_CYCLES, 2, 0),
+		EVT(PMT_EVENT_STALLS_LLC_MISS, 3, 0),
+		EVT(PMT_EVENT_AUTO_C6_RES, 4, 0),
+		EVT(PMT_EVENT_UNHALTED_REF_CYCLES, 5, 0),
+		EVT(PMT_EVENT_UOPS_RETIRED, 6, 0),
+	}
 };
 
 static struct event_group *known_event_groups[] = {
+	&energy_0x26696143,
+	&perf_0x26557651,
 };
 
 #define for_each_event_group(_peg)	\
diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
index 59736ab08213..acf2437c5b34 100644
--- a/fs/resctrl/monitor.c
+++ b/fs/resctrl/monitor.c
@@ -966,27 +966,32 @@ static void dom_data_exit(struct rdt_resource *r)
 	mutex_unlock(&rdtgroup_mutex);
 }
 
+#define MON_EVENT(_eventid, _name, _res, _fp)	\
+	[_eventid] = {				\
+		.name = _name,			\
+		.evtid = _eventid,		\
+		.rid = _res,			\
+		.is_floating_point = _fp,	\
+}
+
 /*
  * All available events. Architecture code marks the ones that
  * are supported by a system using resctrl_enable_mon_event()
  * to set .enabled.
 */
 struct mon_evt mon_event_all[QOS_NUM_EVENTS] = {
-	[QOS_L3_OCCUP_EVENT_ID] = {
-		.name		= "llc_occupancy",
-		.evtid		= QOS_L3_OCCUP_EVENT_ID,
-		.rid		= RDT_RESOURCE_L3,
-	},
-	[QOS_L3_MBM_TOTAL_EVENT_ID] = {
-		.name		= "mbm_total_bytes",
-		.evtid		= QOS_L3_MBM_TOTAL_EVENT_ID,
-		.rid		= RDT_RESOURCE_L3,
-	},
-	[QOS_L3_MBM_LOCAL_EVENT_ID] = {
-		.name		= "mbm_local_bytes",
-		.evtid		= QOS_L3_MBM_LOCAL_EVENT_ID,
-		.rid		= RDT_RESOURCE_L3,
-	},
+	MON_EVENT(QOS_L3_OCCUP_EVENT_ID,	"llc_occupancy",	RDT_RESOURCE_L3,	false),
+	MON_EVENT(QOS_L3_MBM_TOTAL_EVENT_ID,	"mbm_total_bytes",	RDT_RESOURCE_L3,	false),
+	MON_EVENT(QOS_L3_MBM_LOCAL_EVENT_ID,	"mbm_local_bytes",	RDT_RESOURCE_L3,	false),
+	MON_EVENT(PMT_EVENT_ENERGY,		"core_energy",		RDT_RESOURCE_PERF_PKG,	true),
+	MON_EVENT(PMT_EVENT_ACTIVITY,		"activity",		RDT_RESOURCE_PERF_PKG,	true),
+	MON_EVENT(PMT_EVENT_STALLS_LLC_HIT,	"stalls_llc_hit",	RDT_RESOURCE_PERF_PKG,	false),
+	MON_EVENT(PMT_EVENT_C1_RES,		"c1_res",		RDT_RESOURCE_PERF_PKG,	false),
+	MON_EVENT(PMT_EVENT_UNHALTED_CORE_CYCLES, "unhalted_core_cycles", RDT_RESOURCE_PERF_PKG, false),
+	MON_EVENT(PMT_EVENT_STALLS_LLC_MISS,	"stalls_llc_miss",	RDT_RESOURCE_PERF_PKG,	false),
+	MON_EVENT(PMT_EVENT_AUTO_C6_RES,	"c6_res",		RDT_RESOURCE_PERF_PKG,	false),
+	MON_EVENT(PMT_EVENT_UNHALTED_REF_CYCLES, "unhalted_ref_cycles", RDT_RESOURCE_PERF_PKG, false),
+	MON_EVENT(PMT_EVENT_UOPS_RETIRED,	"uops_retired",		RDT_RESOURCE_PERF_PKG,	false),
 };
 
 void resctrl_enable_mon_event(enum resctrl_event_id eventid, bool any_cpu, unsigned int binary_bits)
-- 
2.51.1