From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
	James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
	Tony Luck
Subject: [PATCH v14 09/32] x86,fs/resctrl: Rename some L3 specific functions
Date: Mon, 24 Nov 2025 10:53:46 -0800
Message-ID: <20251124185412.24155-10-tony.luck@intel.com>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251124185412.24155-1-tony.luck@intel.com>
References: <20251124185412.24155-1-tony.luck@intel.com>

With the arrival of monitor events tied to new domains associated with a
different resource, it would be clearer if the L3 resource-specific
functions were more accurately named.

Rename three groups of functions:

Functions that allocate/free architecture per-RMID MBM state information:
	arch_domain_mbm_alloc()     -> l3_mon_domain_mbm_alloc()
	mon_domain_free()           -> l3_mon_domain_free()

Functions that allocate/free filesystem per-RMID MBM state information:
	domain_setup_mon_state()    -> domain_setup_l3_mon_state()
	domain_destroy_mon_state()  -> domain_destroy_l3_mon_state()

Initialization/exit:
	rdt_get_mon_l3_config()     -> rdt_get_l3_mon_config()
	resctrl_mon_resource_init() -> resctrl_l3_mon_resource_init()
	resctrl_mon_resource_exit() -> resctrl_l3_mon_resource_exit()

Ensure kernel-doc descriptions of these functions' return values are
present and correctly formatted.

Signed-off-by: Tony Luck
Reviewed-by: Reinette Chatre
---
 arch/x86/kernel/cpu/resctrl/internal.h |  2 +-
 fs/resctrl/internal.h                  |  6 +++---
 arch/x86/kernel/cpu/resctrl/core.c     | 20 +++++++++++---------
 arch/x86/kernel/cpu/resctrl/monitor.c  |  2 +-
 fs/resctrl/monitor.c                   |  8 ++++----
 fs/resctrl/rdtgroup.c                  | 24 ++++++++++++------------
 6 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index d6da21d4684b..ae182b5f9a3c 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -213,7 +213,7 @@ union l3_qos_abmc_cfg {
 
 void rdt_ctrl_update(void *arg);
 
-int rdt_get_mon_l3_config(struct rdt_resource *r);
+int rdt_get_l3_mon_config(struct rdt_resource *r);
 
 bool rdt_cpu_has(int flag);
 
diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
index af47b6ddef62..9768341aa21c 100644
--- a/fs/resctrl/internal.h
+++ b/fs/resctrl/internal.h
@@ -357,7 +357,9 @@ int alloc_rmid(u32 closid);
 
 void free_rmid(u32 closid, u32 rmid);
 
-void resctrl_mon_resource_exit(void);
+int resctrl_l3_mon_resource_init(void);
+
+void resctrl_l3_mon_resource_exit(void);
 
 void mon_event_count(void *info);
 
@@ -367,8 +369,6 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 		    struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp,
 		    cpumask_t *cpumask, int evtid, int first);
 
-int resctrl_mon_resource_init(void);
-
 void mbm_setup_overflow_handler(struct rdt_l3_mon_domain *dom,
 				unsigned long delay_ms,
 				int exclude_cpu);
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index cc1b846f9645..b3a2dc56155d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -368,7 +368,7 @@ static void ctrl_domain_free(struct rdt_hw_ctrl_domain *hw_dom)
 	kfree(hw_dom);
 }
 
-static void mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom)
+static void l3_mon_domain_free(struct rdt_hw_l3_mon_domain *hw_dom)
 {
 	int idx;
 
@@ -401,11 +401,13 @@ static int domain_setup_ctrlval(struct rdt_resource *r, struct rdt_ctrl_domain *
 }
 
 /**
- * arch_domain_mbm_alloc() - Allocate arch private storage for the MBM counters
+ * l3_mon_domain_mbm_alloc() - Allocate arch private storage for the MBM counters
  * @num_rmid:	The size of the MBM counter array
  * @hw_dom:	The domain that owns the allocated arrays
+ *
+ * Return: 0 for success, or -ENOMEM.
  */
-static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain *hw_dom)
+static int l3_mon_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_l3_mon_domain *hw_dom)
 {
 	size_t tsize = sizeof(*hw_dom->arch_mbm_states[0]);
 	enum resctrl_event_id eventid;
@@ -519,7 +521,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct
 	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
 	if (!ci) {
 		pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name);
-		mon_domain_free(hw_dom);
+		l3_mon_domain_free(hw_dom);
 		return;
 	}
 	d->ci_id = ci->id;
@@ -527,8 +529,8 @@ static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct
 
 	arch_mon_domain_online(r, d);
 
-	if (arch_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) {
-		mon_domain_free(hw_dom);
+	if (l3_mon_domain_mbm_alloc(r->mon.num_rmid, hw_dom)) {
+		l3_mon_domain_free(hw_dom);
 		return;
 	}
 
@@ -538,7 +540,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct
 	if (err) {
 		list_del_rcu(&d->hdr.list);
 		synchronize_rcu();
-		mon_domain_free(hw_dom);
+		l3_mon_domain_free(hw_dom);
 	}
 }
 
@@ -664,7 +666,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
 		resctrl_offline_mon_domain(r, hdr);
 		list_del_rcu(&hdr->list);
 		synchronize_rcu();
-		mon_domain_free(hw_dom);
+		l3_mon_domain_free(hw_dom);
 		break;
 	}
 	default:
@@ -917,7 +919,7 @@ static __init bool get_rdt_mon_resources(void)
 	if (!ret)
 		return false;
 
-	return !rdt_get_mon_l3_config(r);
+	return !rdt_get_l3_mon_config(r);
 }
 
 static __init void __check_quirks_intel(void)
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 04b8f1e1f314..20605212656c 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -424,7 +424,7 @@ static __init int snc_get_config(void)
 	return ret;
 }
 
-int __init rdt_get_mon_l3_config(struct rdt_resource *r)
+int __init rdt_get_l3_mon_config(struct rdt_resource *r)
 {
 	unsigned int mbm_offset = boot_cpu_data.x86_cache_mbm_width_offset;
 	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
index f90609212c86..cbd9dd5656af 100644
--- a/fs/resctrl/monitor.c
+++ b/fs/resctrl/monitor.c
@@ -1774,7 +1774,7 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_file *of, char *buf,
 }
 
 /**
- * resctrl_mon_resource_init() - Initialise global monitoring structures.
+ * resctrl_l3_mon_resource_init() - Initialise global monitoring structures.
  *
  * Allocate and initialise global monitor resources that do not belong to a
  * specific domain. i.e. the rmid_ptrs[] used for the limbo and free lists.
@@ -1783,9 +1783,9 @@ ssize_t mbm_L3_assignments_write(struct kernfs_open_file *of, char *buf,
  * Resctrl's cpuhp callbacks may be called before this point to bring a domain
  * online.
  *
- * Returns 0 for success, or -ENOMEM.
+ * Return: 0 for success, or -ENOMEM.
  */
-int resctrl_mon_resource_init(void)
+int resctrl_l3_mon_resource_init(void)
 {
 	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
 	int ret;
@@ -1835,7 +1835,7 @@ int resctrl_mon_resource_init(void)
 	return 0;
 }
 
-void resctrl_mon_resource_exit(void)
+void resctrl_l3_mon_resource_exit(void)
 {
 	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
 
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index 2ed435db1923..b57e1e78bbc2 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -4246,7 +4246,7 @@ static void rdtgroup_setup_default(void)
 	mutex_unlock(&rdtgroup_mutex);
 }
 
-static void domain_destroy_mon_state(struct rdt_l3_mon_domain *d)
+static void domain_destroy_l3_mon_state(struct rdt_l3_mon_domain *d)
 {
 	int idx;
 
@@ -4301,13 +4301,13 @@ void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *h
 			cancel_delayed_work(&d->cqm_limbo);
 	}
 
-	domain_destroy_mon_state(d);
+	domain_destroy_l3_mon_state(d);
 out_unlock:
 	mutex_unlock(&rdtgroup_mutex);
 }
 
 /**
- * domain_setup_mon_state() - Initialise domain monitoring structures.
+ * domain_setup_l3_mon_state() - Initialise domain monitoring structures.
  * @r: The resource for the newly online domain.
  * @d: The newly online domain.
  *
@@ -4315,11 +4315,11 @@ void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *h
  * Called when the first CPU of a domain comes online, regardless of whether
  * the filesystem is mounted.
  * During boot this may be called before global allocations have been made by
- * resctrl_mon_resource_init().
+ * resctrl_l3_mon_resource_init().
  *
- * Returns 0 for success, or -ENOMEM.
+ * Return: 0 for success, or -ENOMEM.
  */
-static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_l3_mon_domain *d)
+static int domain_setup_l3_mon_state(struct rdt_resource *r, struct rdt_l3_mon_domain *d)
 {
 	u32 idx_limit = resctrl_arch_system_num_rmid_idx();
 	size_t tsize = sizeof(*d->mbm_states[0]);
@@ -4386,7 +4386,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr
 		goto out_unlock;
 
 	d = container_of(hdr, struct rdt_l3_mon_domain, hdr);
-	err = domain_setup_mon_state(r, d);
+	err = domain_setup_l3_mon_state(r, d);
 	if (err)
 		goto out_unlock;
 
@@ -4503,13 +4503,13 @@ int resctrl_init(void)
 
 	io_alloc_init();
 
-	ret = resctrl_mon_resource_init();
+	ret = resctrl_l3_mon_resource_init();
 	if (ret)
 		return ret;
 
 	ret = sysfs_create_mount_point(fs_kobj, "resctrl");
 	if (ret) {
-		resctrl_mon_resource_exit();
+		resctrl_l3_mon_resource_exit();
 		return ret;
 	}
 
@@ -4544,7 +4544,7 @@ int resctrl_init(void)
 
 cleanup_mountpoint:
 	sysfs_remove_mount_point(fs_kobj, "resctrl");
-	resctrl_mon_resource_exit();
+	resctrl_l3_mon_resource_exit();
 
 	return ret;
 }
@@ -4580,7 +4580,7 @@ static bool resctrl_online_domains_exist(void)
  * When called by the architecture code, all CPUs and resctrl domains must be
  * offline. This ensures the limbo and overflow handlers are not scheduled to
  * run, meaning the data structures they access can be freed by
- * resctrl_mon_resource_exit().
+ * resctrl_l3_mon_resource_exit().
  *
  * After resctrl_exit() returns, the architecture code should return an
  * error from all resctrl_arch_ functions that can do this.
@@ -4607,5 +4607,5 @@ void resctrl_exit(void)
 	 * it can be used to umount resctrl.
 	 */
 
-	resctrl_mon_resource_exit();
+	resctrl_l3_mon_resource_exit();
 }
-- 
2.51.1
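
For reference, the kernel-doc layout that the commit message's last
paragraph refers to (a "Return:" section describing the return value)
looks roughly like the sketch below. The function, its parameter, and its
body are hypothetical and not part of resctrl; only the comment layout
matters here.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

/**
 * example_mon_state_alloc() - Allocate per-RMID monitoring state.
 * @num_rmid:	Number of RMIDs the state array must cover.
 *
 * The "Return:" line below follows the kernel-doc convention for
 * documenting a function's return values.
 *
 * Return: 0 for success, or -ENOMEM.
 */
static int example_mon_state_alloc(u32 num_rmid)
{
	/* Hypothetical allocation sized by the number of RMIDs. */
	u64 *state = kcalloc(num_rmid, sizeof(*state), GFP_KERNEL);

	if (!state)
		return -ENOMEM;

	kfree(state);	/* sketch only: nothing retains the buffer */
	return 0;
}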