From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Peter Newman, Jonathan Corbet, Shuah Khan, x86@kernel.org
Cc: Shaopeng Tan, James Morse, Jamie Iles, Babu Moger, Randy Dunlap, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 2/8] x86/resctrl: Remove hard code of RDT_RESOURCE_L3 in monitor.c
Date: Thu, 13 Jul 2023 09:32:01 -0700
Message-Id: <20230713163207.219710-3-tony.luck@intel.com>
In-Reply-To: <20230713163207.219710-1-tony.luck@intel.com>
References: <20230713163207.219710-1-tony.luck@intel.com>
X-Mailer: git-send-email 2.40.1
List-ID: linux-kernel@vger.kernel.org

Monitoring may be done at L3 cache granularity (legacy) or at the node
level (on systems with Sub-NUMA Cluster enabled). Save the struct
rdt_resource pointer that was used to initialize the monitoring code
and use that saved value instead of the hard-coded RDT_RESOURCE_L3.

No functional change.
Signed-off-by: Tony Luck
Reviewed-by: Peter Newman
---
 arch/x86/kernel/cpu/resctrl/monitor.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index ded1fc7cb7cb..9be6ffdd01ae 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -30,6 +30,8 @@ struct rmid_entry {
 	struct list_head list;
 };
 
+static struct rdt_resource *mon_resource;
+
 /**
  * @rmid_free_lru	A least recently used list of free RMIDs
  *	These RMIDs are guaranteed to have an occupancy less than the
@@ -268,7 +270,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
  */
 void __check_limbo(struct rdt_domain *d, bool force_free)
 {
-	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	struct rdt_resource *r = mon_resource;
 	struct rmid_entry *entry;
 	u32 crmid = 1, nrmid;
 	bool rmid_dirty;
@@ -333,7 +335,7 @@ int alloc_rmid(void)
 
 static void add_rmid_to_limbo(struct rmid_entry *entry)
 {
-	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	struct rdt_resource *r = mon_resource;
 	struct rdt_domain *d;
 	int cpu, err;
 	u64 val = 0;
@@ -645,7 +647,7 @@ void cqm_handle_limbo(struct work_struct *work)
 
 	mutex_lock(&rdtgroup_mutex);
 
-	r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	r = mon_resource;
 	d = container_of(work, struct rdt_domain, cqm_limbo.work);
 
 	__check_limbo(d, false);
@@ -681,7 +683,7 @@ void mbm_handle_overflow(struct work_struct *work)
 	if (!static_branch_likely(&rdt_mon_enable_key))
 		goto out_unlock;
 
-	r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	r = mon_resource;
 	d = container_of(work, struct rdt_domain, mbm_over.work);
 
 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
@@ -759,9 +761,9 @@ static struct mon_evt mbm_local_event = {
 /*
  * Initialize the event list for the resource.
  *
- * Note that MBM events are also part of RDT_RESOURCE_L3 resource
- * because as per the SDM the total and local memory bandwidth
- * are enumerated as part of L3 monitoring.
+ * Monitor events can either be part of RDT_RESOURCE_L3 resource,
+ * or they may be per NUMA node on systems with sub-NUMA cluster
+ * enabled and are then in the RDT_RESOURCE_NODE resource.
  */
 static void l3_mon_evt_init(struct rdt_resource *r)
 {
@@ -773,6 +775,8 @@ static void l3_mon_evt_init(struct rdt_resource *r)
 		list_add_tail(&mbm_total_event.list, &r->evt_list);
 	if (is_mbm_local_enabled())
 		list_add_tail(&mbm_local_event.list, &r->evt_list);
+
+	mon_resource = r;
 }
 
 int __init rdt_get_mon_l3_config(struct rdt_resource *r)
-- 
2.40.1