From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman, James Morse, Babu Moger, Drew Fustini, Dave Martin, Anil Keshavamurthy, Chen Yu
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v6 09/30] x86,fs/resctrl: Use struct rdt_domain_hdr instead of struct rdt_mon_domain
Date: Thu, 26 Jun 2025 09:49:18 -0700
Message-ID: <20250626164941.106341-10-tony.luck@intel.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250626164941.106341-1-tony.luck@intel.com>
References: <20250626164941.106341-1-tony.luck@intel.com>

Historically all monitoring events have been associated with the L3
resource, so it made sense to use "struct rdt_mon_domain *" arguments to
functions that manipulate domains. The addition of monitor events tied to
other resources changes this assumption.

Change the calling sequence for domain addition, domain deletion, and
event reads to pass a "struct rdt_domain_hdr *" instead. This includes the
smp_call*() IPIs, where struct rmid_read now holds a pointer to struct
rdt_domain_hdr.

The mon_data structure is unchanged, but its documentation is updated to
note that mon_data::sum is only used for RDT_RESOURCE_L3.

Signed-off-by: Tony Luck
---
A minimal standalone sketch of the new rdt_domain_hdr calling convention
is appended after the patch for illustration.

 include/linux/resctrl.h               |   8 +-
 fs/resctrl/internal.h                 |  14 ++--
 arch/x86/kernel/cpu/resctrl/core.c    |   4 +-
 arch/x86/kernel/cpu/resctrl/monitor.c |  18 ++++-
 fs/resctrl/ctrlmondata.c              |  16 ++--
 fs/resctrl/monitor.c                  |  31 +++++---
 fs/resctrl/rdtgroup.c                 | 103 ++++++++++++++++++--------
 7 files changed, 130 insertions(+), 64 deletions(-)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index dc7ccd60e8c2..b332466312e1 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -452,9 +452,9 @@ int resctrl_arch_update_one(struct rdt_resource *r, struct rdt_ctrl_domain *d,
 u32 resctrl_arch_get_config(struct rdt_resource *r, struct rdt_ctrl_domain *d,
			    u32 closid, enum resctrl_conf_type type);
 int resctrl_online_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain *d);
-int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d);
+int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr);
 void resctrl_offline_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain *d);
-void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d);
+void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr);
 void resctrl_online_cpu(unsigned int cpu);
 void resctrl_offline_cpu(unsigned int cpu);

@@ -462,7 +462,7 @@ void resctrl_offline_cpu(unsigned int cpu);
 * resctrl_arch_rmid_read() - Read the eventid counter corresponding to rmid
 *			      for this resource and domain.
 * @r:		resource that the counter should be read from.
- * @d:		domain that the counter should be read from.
+ * @hdr:	Header of domain that the counter should be read from.
 * @closid:	closid that matches the rmid. Depending on the architecture, the
 *		counter may match traffic of both @closid and @rmid, or @rmid
 *		only.
@@ -483,7 +483,7 @@ void resctrl_offline_cpu(unsigned int cpu);
 * Return:
 * 0 on success, or -EIO, -EINVAL etc on error.
 */
-int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
+int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *hdr,
			   u32 closid, u32 rmid, enum resctrl_event_id eventid,
			   u64 *val, void *arch_mon_ctx);

diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
index 445a41060724..ce3d24c512e3 100644
--- a/fs/resctrl/internal.h
+++ b/fs/resctrl/internal.h
@@ -77,8 +77,8 @@ extern struct mon_evt mon_event_all[QOS_NUM_EVENTS];
 * @list:	Member of the global @mon_data_kn_priv_list list.
 * @rid:	Resource id associated with the event file.
 * @evtid:	Event id associated with the event file.
- * @sum:	Set when event must be summed across multiple
- *		domains.
+ * @sum:	Set for RDT_RESOURCE_L3 when event must be summed
+ *		across multiple domains.
 * @domid:	When @sum is zero this is the domain to which
 *		the event file belongs. When @sum is one this
 *		is the id of the L3 cache that all domains to be
@@ -101,22 +101,22 @@ struct mon_data {
 * resource group then its event count is summed with the count from all
 * its child resource groups.
 * @r:		Resource describing the properties of the event being read.
- * @d:		Domain that the counter should be read from. If NULL then sum all
+ * @hdr:	Header of domain that the counter should be read from. If NULL then sum all
 *		domains in @r sharing L3 @ci.id
 * @evtid:	Which monitor event to read.
 * @first:	Initialize MBM counter when true.
- * @ci_id:	Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains.
+ * @ci_id:	Cacheinfo id for L3. Only set when @hdr is NULL. Used when summing domains.
 * @err:	Error encountered when reading counter.
 * @val:	Returned value of event counter. If @rgrp is a parent resource group,
 *		@val includes the sum of event counts from its child resource groups.
- *		If @d is NULL, @val includes the sum of all domains in @r sharing @ci.id,
+ *		If @hdr is NULL, @val includes the sum of all domains in @r sharing @ci.id,
 *		(summed across child resource groups if @rgrp is a parent resource group).
 * @arch_mon_ctx: Hardware monitor allocated for this read request (MPAM only).
 */
 struct rmid_read {
	struct rdtgroup		*rgrp;
	struct rdt_resource	*r;
-	struct rdt_mon_domain	*d;
+	struct rdt_domain_hdr	*hdr;
	enum resctrl_event_id	evtid;
	bool			first;
	unsigned int		ci_id;
@@ -352,7 +352,7 @@ void mon_event_count(void *info);
 int rdtgroup_mondata_show(struct seq_file *m, void *arg);

 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
-		    struct rdt_mon_domain *d, struct rdtgroup *rdtgrp,
+		    struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp,
		    cpumask_t *cpumask, int evtid, int first);

 int resctrl_mon_resource_init(void);
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 2075c98aa4e7..1fecb6425b9e 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -529,7 +529,7 @@ static void l3_mon_domain_setup(int cpu, int id, struct rdt_resource *r, struct

 	list_add_tail_rcu(&d->hdr.list, add_pos);

-	err = resctrl_online_mon_domain(r, d);
+	err = resctrl_online_mon_domain(r, &d->hdr);
 	if (err) {
 		list_del_rcu(&d->hdr.list);
 		synchronize_rcu();
@@ -655,7 +655,7 @@ static void domain_remove_cpu_mon(int cpu, struct rdt_resource *r)
 	case RDT_RESOURCE_L3:
 		d = container_of(hdr, struct rdt_mon_domain, hdr);
 		hw_dom = resctrl_to_arch_mon_dom(d);
-		resctrl_offline_mon_domain(r, d);
+		resctrl_offline_mon_domain(r, hdr);
 		list_del_rcu(&d->hdr.list);
 		synchronize_rcu();
 		mon_domain_free(hw_dom);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index f01db2034d08..b31794c5dcd4 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -217,20 +217,30 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
 	return chunks >> shift;
 }

-int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
+int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain_hdr *hdr,
			   u32 unused, u32 rmid, enum resctrl_event_id eventid,
			   u64 *val, void *ignored)
 {
-	struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
-	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
-	int cpu = cpumask_any(&d->hdr.cpu_mask);
+	int cpu = cpumask_any(&hdr->cpu_mask);
+	struct rdt_hw_mon_domain *hw_dom;
+	struct rdt_hw_resource *hw_res;
 	struct arch_mbm_state *am;
+	struct rdt_mon_domain *d;
 	u64 msr_val, chunks;
 	u32 prmid;
 	int ret;

 	resctrl_arch_rmid_read_context_check();

+	if (r->rid != RDT_RESOURCE_L3)
+		return -EINVAL;
+
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		return -EINVAL;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
+	hw_dom = resctrl_to_arch_mon_dom(d);
+	hw_res = resctrl_to_arch_res(r);
 	prmid = logical_rmid_to_physical_rmid(cpu, rmid);
 	ret = __rmid_read_phys(prmid, eventid, &msr_val);
 	if (ret)
diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
index cdb4bc8baa99..1c1c0e7bbc11 100644
--- a/fs/resctrl/ctrlmondata.c
+++ b/fs/resctrl/ctrlmondata.c
@@ -547,7 +547,7 @@ struct rdt_domain_hdr *resctrl_find_domain(struct list_head *h, int id,
 }

 void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
-		    struct rdt_mon_domain *d, struct rdtgroup *rdtgrp,
+		    struct rdt_domain_hdr *hdr, struct rdtgroup *rdtgrp,
		    cpumask_t *cpumask, int evtid, int first)
 {
 	int cpu;
@@ -561,7 +561,7 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 	rr->rgrp = rdtgrp;
 	rr->evtid = evtid;
 	rr->r = r;
-	rr->d = d;
+	rr->hdr = hdr;
 	rr->first = first;
 	rr->arch_mon_ctx = resctrl_arch_mon_ctx_alloc(r, evtid);
 	if (IS_ERR(rr->arch_mon_ctx)) {
@@ -592,7 +592,6 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
 	enum resctrl_event_id evtid;
 	struct rdt_domain_hdr *hdr;
 	struct rmid_read rr = {0};
-	struct rdt_mon_domain *d;
 	struct rdtgroup *rdtgrp;
 	int domid, cpu, ret = 0;
 	struct rdt_resource *r;
@@ -617,6 +616,12 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
 	r = resctrl_arch_get_resource(resid);

 	if (md->sum) {
+		struct rdt_mon_domain *d;
+
+		if (WARN_ON_ONCE(resid != RDT_RESOURCE_L3)) {
+			ret = -EIO;
+			goto out;
+		}
 		/*
 		 * This file requires summing across all domains that share
 		 * the L3 cache id that was provided in the "domid" field of the
@@ -643,12 +648,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
 		 * the resource to find the domain with "domid".
 		 */
 		hdr = resctrl_find_domain(&r->mon_domains, domid, NULL);
-		if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)) {
+		if (!hdr || !domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, resid)) {
 			ret = -ENOENT;
 			goto out;
 		}
-		d = container_of(hdr, struct rdt_mon_domain, hdr);
-		mon_event_read(&rr, r, d, rdtgrp, &d->hdr.cpu_mask, evtid, false);
+		mon_event_read(&rr, r, hdr, rdtgrp, &hdr->cpu_mask, evtid, false);
 	}

 checkresult:
diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
index dcc6c00eb362..85fe88b965fa 100644
--- a/fs/resctrl/monitor.c
+++ b/fs/resctrl/monitor.c
@@ -159,7 +159,7 @@ void __check_limbo(struct rdt_mon_domain *d, bool force_free)
 			break;

 		entry = __rmid_entry(idx);
-		if (resctrl_arch_rmid_read(r, d, entry->closid, entry->rmid,
+		if (resctrl_arch_rmid_read(r, &d->hdr, entry->closid, entry->rmid,
					   QOS_L3_OCCUP_EVENT_ID, &val,
					   arch_mon_ctx)) {
 			rmid_dirty = true;
@@ -365,19 +365,23 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
 	int err, ret;
 	u64 tval = 0;

-	if (rr->first) {
-		resctrl_arch_reset_rmid(rr->r, rr->d, closid, rmid, rr->evtid);
-		m = get_mbm_state(rr->d, closid, rmid, rr->evtid);
+	if (rr->r->rid == RDT_RESOURCE_L3 && rr->first) {
+		if (WARN_ON_ONCE(!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN,
+							 RDT_RESOURCE_L3)))
+			return -EINVAL;
+		d = container_of(rr->hdr, struct rdt_mon_domain, hdr);
+		resctrl_arch_reset_rmid(rr->r, d, closid, rmid, rr->evtid);
+		m = get_mbm_state(d, closid, rmid, rr->evtid);
 		if (m)
 			memset(m, 0, sizeof(struct mbm_state));
 		return 0;
 	}

-	if (rr->d) {
+	if (rr->hdr) {
 		/* Reading a single domain, must be on a CPU in that domain. */
-		if (!cpumask_test_cpu(cpu, &rr->d->hdr.cpu_mask))
+		if (!cpumask_test_cpu(cpu, &rr->hdr->cpu_mask))
 			return -EINVAL;
-		rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid,
+		rr->err = resctrl_arch_rmid_read(rr->r, rr->hdr, closid, rmid,
						 rr->evtid, &tval, rr->arch_mon_ctx);
 		if (rr->err)
 			return rr->err;
@@ -387,6 +391,9 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
 		return 0;
 	}

+	if (WARN_ON_ONCE(rr->r->rid != RDT_RESOURCE_L3))
+		return -EINVAL;
+
 	/* Summing domains that share a cache, must be on a CPU for that cache. */
 	ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
 	if (!ci || ci->id != rr->ci_id)
@@ -403,7 +410,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
 	list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
 		if (d->ci_id != rr->ci_id)
 			continue;
-		err = resctrl_arch_rmid_read(rr->r, d, closid, rmid,
+		err = resctrl_arch_rmid_read(rr->r, &d->hdr, closid, rmid,
					     rr->evtid, &tval, rr->arch_mon_ctx);
 		if (!err) {
 			rr->val += tval;
@@ -432,9 +439,13 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
 static void mbm_bw_count(u32 closid, u32 rmid, struct rmid_read *rr)
 {
 	u64 cur_bw, bytes, cur_bytes;
+	struct rdt_mon_domain *d;
 	struct mbm_state *m;

-	m = get_mbm_state(rr->d, closid, rmid, rr->evtid);
+	if (WARN_ON_ONCE(!domain_header_is_valid(rr->hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3)))
+		return;
+	d = container_of(rr->hdr, struct rdt_mon_domain, hdr);
+	m = get_mbm_state(d, closid, rmid, rr->evtid);
 	if (WARN_ON_ONCE(!m))
 		return;

@@ -608,7 +619,7 @@ static void mbm_update_one_event(struct rdt_resource *r, struct rdt_mon_domain *
 	struct rmid_read rr = {0};

 	rr.r = r;
-	rr.d = d;
+	rr.hdr = &d->hdr;
 	rr.evtid = evtid;
 	rr.arch_mon_ctx = resctrl_arch_mon_ctx_alloc(rr.r, rr.evtid);
 	if (IS_ERR(rr.arch_mon_ctx)) {
diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index 05438e15e2ca..3828480e0426 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -2887,7 +2887,8 @@ static void rmdir_all_sub(void)
 * @rid:	The resource id for the event file being created.
 * @domid:	The domain id for the event file being created.
 * @mevt:	The type of event file being created.
- * @do_sum:	Whether SNC summing monitors are being created.
+ * @do_sum:	Whether SNC summing monitors are being created. Only set
+ *		when @rid == RDT_RESOURCE_L3.
 */
 static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int domid,
					 struct mon_evt *mevt,
@@ -2897,6 +2898,9 @@ static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int domid,

 	lockdep_assert_held(&rdtgroup_mutex);

+	if (WARN_ON_ONCE(do_sum && rid != RDT_RESOURCE_L3))
+		return NULL;
+
 	list_for_each_entry(priv, &mon_data_kn_priv_list, list) {
 		if (priv->rid == rid && priv->domid == domid &&
 		    priv->sum == do_sum && priv->evtid == mevt->evtid)
@@ -3024,17 +3028,27 @@ static void mon_rmdir_one_subdir(struct kernfs_node *pkn, char *name, char *subn
 * when last domain being summed is removed.
 */
 static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
-					   struct rdt_mon_domain *d)
+					   struct rdt_domain_hdr *hdr)
 {
 	struct rdtgroup *prgrp, *crgrp;
+	int domid = hdr->id;
 	char subname[32];
-	bool snc_mode;
 	char name[32];

-	snc_mode = r->mon_scope == RESCTRL_L3_NODE;
-	sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id);
-	if (snc_mode)
-		sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id);
+	if (r->rid == RDT_RESOURCE_L3) {
+		struct rdt_mon_domain *d;
+
+		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+			return;
+		d = container_of(hdr, struct rdt_mon_domain, hdr);
+
+		/* SNC mode? */
+		if (r->mon_scope == RESCTRL_L3_NODE) {
+			domid = d->ci_id;
+			sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id);
+		}
+	}
+	sprintf(name, "mon_%s_%02d", r->name, domid);

 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
 		mon_rmdir_one_subdir(prgrp->mon.mon_data_kn, name, subname);
@@ -3044,19 +3058,18 @@ static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
 	}
 }

-static int mon_add_all_files(struct kernfs_node *kn, struct rdt_mon_domain *d,
+static int mon_add_all_files(struct kernfs_node *kn, struct rdt_domain_hdr *hdr,
			     struct rdt_resource *r, struct rdtgroup *prgrp,
-			     bool do_sum)
+			     int domid, bool do_sum)
 {
 	struct rmid_read rr = {0};
 	struct mon_data *priv;
 	struct mon_evt *mevt;
-	int ret, domid;
+	int ret;

 	for_each_mon_event(mevt) {
 		if (mevt->rid != r->rid || !mevt->enabled)
 			continue;
-		domid = do_sum ? d->ci_id : d->hdr.id;
 		priv = mon_get_kn_priv(r->rid, domid, mevt, do_sum);
 		if (WARN_ON_ONCE(!priv))
 			return -EINVAL;
@@ -3065,26 +3078,38 @@ static int mon_add_all_files(struct kernfs_node *kn, struct rdt_mon_domain *d,
 		if (ret)
 			return ret;

-		if (!do_sum && resctrl_is_mbm_event(mevt->evtid))
-			mon_event_read(&rr, r, d, prgrp, &d->hdr.cpu_mask, mevt->evtid, true);
+		if (r->rid == RDT_RESOURCE_L3 && !do_sum && resctrl_is_mbm_event(mevt->evtid))
+			mon_event_read(&rr, r, hdr, prgrp, &hdr->cpu_mask, mevt->evtid, true);
 	}

 	return 0;
 }

 static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
-				struct rdt_mon_domain *d,
+				struct rdt_domain_hdr *hdr,
				struct rdt_resource *r, struct rdtgroup *prgrp)
 {
 	struct kernfs_node *kn, *ckn;
+	int domid = hdr->id;
+	bool snc_mode = 0;
 	char name[32];
-	bool snc_mode;
 	int ret = 0;

 	lockdep_assert_held(&rdtgroup_mutex);

-	snc_mode = r->mon_scope == RESCTRL_L3_NODE;
-	sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id);
+	if (r->rid == RDT_RESOURCE_L3) {
+		if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+			return -EINVAL;
+		snc_mode = r->mon_scope == RESCTRL_L3_NODE;
+		if (snc_mode) {
+			struct rdt_mon_domain *d;
+
+			d = container_of(hdr, struct rdt_mon_domain, hdr);
+			domid = d->ci_id;
+		}
+	}
+	sprintf(name, "mon_%s_%02d", r->name, domid);
+
 	kn = kernfs_find_and_get(parent_kn, name);
 	if (kn) {
 		/*
@@ -3100,13 +3125,13 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
 		ret = rdtgroup_kn_set_ugid(kn);
 		if (ret)
 			goto out_destroy;
-		ret = mon_add_all_files(kn, d, r, prgrp, snc_mode);
+		ret = mon_add_all_files(kn, hdr, r, prgrp, domid, snc_mode);
 		if (ret)
 			goto out_destroy;
 	}

 	if (snc_mode) {
-		sprintf(name, "mon_sub_%s_%02d", r->name, d->hdr.id);
+		sprintf(name, "mon_sub_%s_%02d", r->name, hdr->id);
 		ckn = kernfs_create_dir(kn, name, parent_kn->mode, prgrp);
 		if (IS_ERR(ckn)) {
 			ret = -EINVAL;
@@ -3117,7 +3142,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
 		if (ret)
 			goto out_destroy;

-		ret = mon_add_all_files(ckn, d, r, prgrp, false);
+		ret = mon_add_all_files(ckn, hdr, r, prgrp, hdr->id, false);
 		if (ret)
 			goto out_destroy;
 	}
@@ -3135,7 +3160,7 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
 * and "monitor" groups with given domain id.
 */
 static void mkdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
-					   struct rdt_mon_domain *d)
+					   struct rdt_domain_hdr *hdr)
 {
 	struct kernfs_node *parent_kn;
 	struct rdtgroup *prgrp, *crgrp;
@@ -3143,12 +3168,12 @@ static void mkdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,

 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
 		parent_kn = prgrp->mon.mon_data_kn;
-		mkdir_mondata_subdir(parent_kn, d, r, prgrp);
+		mkdir_mondata_subdir(parent_kn, hdr, r, prgrp);

 		head = &prgrp->mon.crdtgrp_list;
 		list_for_each_entry(crgrp, head, mon.crdtgrp_list) {
 			parent_kn = crgrp->mon.mon_data_kn;
-			mkdir_mondata_subdir(parent_kn, d, r, crgrp);
+			mkdir_mondata_subdir(parent_kn, hdr, r, crgrp);
 		}
 	}
 }
@@ -3157,14 +3182,14 @@ static int mkdir_mondata_subdir_alldom(struct kernfs_node *parent_kn,
					struct rdt_resource *r,
					struct rdtgroup *prgrp)
 {
-	struct rdt_mon_domain *dom;
+	struct rdt_domain_hdr *hdr;
 	int ret;

 	/* Walking r->domains, ensure it can't race with cpuhp */
 	lockdep_assert_cpus_held();

-	list_for_each_entry(dom, &r->mon_domains, hdr.list) {
-		ret = mkdir_mondata_subdir(parent_kn, dom, r, prgrp);
+	list_for_each_entry(hdr, &r->mon_domains, list) {
+		ret = mkdir_mondata_subdir(parent_kn, hdr, r, prgrp);
 		if (ret)
 			return ret;
 	}
@@ -4036,8 +4061,10 @@ void resctrl_offline_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain
 	mutex_unlock(&rdtgroup_mutex);
 }

-void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
+void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr)
 {
+	struct rdt_mon_domain *d;
+
 	mutex_lock(&rdtgroup_mutex);

 	/*
@@ -4045,11 +4072,15 @@ void resctrl_offline_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d
 	 * per domain monitor data directories.
 	 */
 	if (resctrl_mounted && resctrl_arch_mon_capable())
-		rmdir_mondata_subdir_allrdtgrp(r, d);
+		rmdir_mondata_subdir_allrdtgrp(r, hdr);

 	if (r->rid != RDT_RESOURCE_L3)
 		goto out_unlock;

+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, RDT_RESOURCE_L3))
+		goto out_unlock;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
 	if (resctrl_is_mbm_enabled())
 		cancel_delayed_work(&d->mbm_over);
 	if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID) && has_busy_rmid(d)) {
@@ -4132,12 +4163,20 @@ int resctrl_online_ctrl_domain(struct rdt_resource *r, struct rdt_ctrl_domain *d
 	return err;
 }

-int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
+int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_domain_hdr *hdr)
 {
-	int err;
+	struct rdt_mon_domain *d;
+	int err = -EINVAL;

 	mutex_lock(&rdtgroup_mutex);

+	if (r->rid != RDT_RESOURCE_L3)
+		goto mkdir;
+
+	if (!domain_header_is_valid(hdr, RESCTRL_MON_DOMAIN, r->rid))
+		goto out_unlock;
+
+	d = container_of(hdr, struct rdt_mon_domain, hdr);
 	err = domain_setup_mon_state(r, d);
 	if (err)
 		goto out_unlock;
@@ -4151,6 +4190,8 @@ int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
 	if (resctrl_is_mon_event_enabled(QOS_L3_OCCUP_EVENT_ID))
 		INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);

+mkdir:
+	err = 0;
 	/*
 	 * If the filesystem is not mounted then only the default resource group
 	 * exists. Creation of its directories is deferred until mount time
@@ -4158,7 +4199,7 @@ int resctrl_online_mon_domain(struct rdt_resource *r, struct rdt_mon_domain *d)
 	 * If resctrl is mounted, add per domain monitor data directories.
 	 */
 	if (resctrl_mounted && resctrl_arch_mon_capable())
-		mkdir_mondata_subdir_allrdtgrp(r, d);
+		mkdir_mondata_subdir_allrdtgrp(r, hdr);

 out_unlock:
 	mutex_unlock(&rdtgroup_mutex);
-- 
2.49.0
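
For readers following the conversion, here is a minimal self-contained
sketch (an illustration added in editing, not part of the patch) of the
calling convention the series moves to: callers hand around a generic
domain header, and only resource-specific code validates the header and
recovers the full monitoring domain with container_of(). The names below
(domain_hdr, mon_domain, hdr_is_valid, online_mon_domain) are simplified
stand-ins for the kernel's rdt_domain_hdr, rdt_mon_domain,
domain_header_is_valid() and resctrl_online_mon_domain(); the sketch
builds as ordinary user-space C.

#include <stddef.h>
#include <stdio.h>
#include <stdbool.h>

/* same idea as the kernel's container_of() */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

enum res_level { RES_L3, RES_OTHER };

/* stand-in for struct rdt_domain_hdr: the generic, embedded header */
struct domain_hdr {
	enum res_level rid;	/* resource this domain belongs to */
	int id;			/* domain id */
};

/* stand-in for struct rdt_mon_domain: the L3 monitoring domain */
struct mon_domain {
	struct domain_hdr hdr;	/* embedded generic header */
	int ci_id;		/* L3 cache instance id */
};

/* stand-in for domain_header_is_valid(): header must match the resource */
static bool hdr_is_valid(struct domain_hdr *hdr, enum res_level rid)
{
	return hdr && hdr->rid == rid;
}

/* mirrors the shape of the reworked resctrl_online_mon_domain() */
static int online_mon_domain(enum res_level rid, struct domain_hdr *hdr)
{
	struct mon_domain *d;

	if (rid != RES_L3)
		return 0;	/* nothing L3-specific to set up */

	if (!hdr_is_valid(hdr, RES_L3))
		return -1;	/* wrong domain type handed in */

	/* only after validation is it safe to recover the full domain */
	d = container_of(hdr, struct mon_domain, hdr);
	printf("online L3 mon domain %d (cache %d)\n", hdr->id, d->ci_id);
	return 0;
}

int main(void)
{
	struct mon_domain d = { .hdr = { .rid = RES_L3, .id = 1 }, .ci_id = 7 };

	return online_mon_domain(RES_L3, &d.hdr);
}

The point of the indirection is that code paths which are not L3-specific
never need to know the concrete domain type, which is what allows monitor
events tied to other resources to reuse the same entry points.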