From: James Clark
To: coresight@lists.linaro.org, suzuki.poulose@arm.com,
	gankulkarni@os.amperecomputing.com, mike.leach@linaro.org,
	leo.yan@linux.dev, anshuman.khandual@arm.com, jszu@nvidia.com,
	bwicaksono@nvidia.com
Cc: James Clark, Alexander Shishkin, Maxime Coquelin, Alexandre Torgue,
	John Garry, Will Deacon, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Jiri Olsa,
	Ian Rogers, Adrian Hunter, "Liang, Kan",
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-perf-users@vger.kernel.org
Subject: [PATCH v4 13/17] coresight: Make CPU id map a property of a trace ID map
Date: Tue, 25 Jun 2024 14:30:56 +0100
Message-Id: <20240625133105.671245-14-james.clark@arm.com>
In-Reply-To: <20240625133105.671245-1-james.clark@arm.com>
References: <20240625133105.671245-1-james.clark@arm.com>

The global CPU ID mappings won't work for per-sink ID maps, so move them
into the trace ID map struct.

coresight_trace_id_release_all_pending() is hard coded to operate on the
default map, but once perf sessions use their own maps the pending release
mechanism will be deleted, so it doesn't need to be extended to accept a
trace ID map argument at this point.
Signed-off-by: James Clark
---
 drivers/hwtracing/coresight/coresight-trace-id.c | 16 +++++++++-------
 include/linux/coresight.h                        |  1 +
 2 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
index 5561989a03fa..8a777c0af6ea 100644
--- a/drivers/hwtracing/coresight/coresight-trace-id.c
+++ b/drivers/hwtracing/coresight/coresight-trace-id.c
@@ -13,10 +13,12 @@
 #include "coresight-trace-id.h"
 
 /* Default trace ID map. Used in sysfs mode and for system sources */
-static struct coresight_trace_id_map id_map_default;
+static DEFINE_PER_CPU(atomic_t, id_map_default_cpu_ids) = ATOMIC_INIT(0);
+static struct coresight_trace_id_map id_map_default = {
+	.cpu_map = &id_map_default_cpu_ids
+};
 
-/* maintain a record of the mapping of IDs and pending releases per cpu */
-static DEFINE_PER_CPU(atomic_t, cpu_id) = ATOMIC_INIT(0);
+/* maintain a record of the pending releases per cpu */
 static cpumask_t cpu_id_release_pending;
 
 /* perf session active counter */
@@ -49,7 +51,7 @@ static void coresight_trace_id_dump_table(struct coresight_trace_id_map *id_map,
 /* unlocked read of current trace ID value for given CPU */
 static int _coresight_trace_id_read_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
 {
-	return atomic_read(&per_cpu(cpu_id, cpu));
+	return atomic_read(per_cpu_ptr(id_map->cpu_map, cpu));
 }
 
 /* look for next available odd ID, return 0 if none found */
@@ -145,7 +147,7 @@ static void coresight_trace_id_release_all_pending(void)
 		clear_bit(bit, id_map->pend_rel_ids);
 	}
 	for_each_cpu(cpu, &cpu_id_release_pending) {
-		atomic_set(&per_cpu(cpu_id, cpu), 0);
+		atomic_set(per_cpu_ptr(id_map_default.cpu_map, cpu), 0);
 		cpumask_clear_cpu(cpu, &cpu_id_release_pending);
 	}
 	spin_unlock_irqrestore(&id_map_lock, flags);
@@ -181,7 +183,7 @@ static int _coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map
 		goto get_cpu_id_out_unlock;
 
 	/* allocate the new id to the cpu */
-	atomic_set(&per_cpu(cpu_id, cpu), id);
+	atomic_set(per_cpu_ptr(id_map->cpu_map, cpu), id);
 
 get_cpu_id_clr_pend:
 	/* we are (re)using this ID - so ensure it is not marked for release */
@@ -215,7 +217,7 @@ static void _coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_ma
 	} else {
 		/* otherwise clear id */
 		coresight_trace_id_free(id, id_map);
-		atomic_set(&per_cpu(cpu_id, cpu), 0);
+		atomic_set(per_cpu_ptr(id_map->cpu_map, cpu), 0);
 	}
 
 	spin_unlock_irqrestore(&id_map_lock, flags);
diff --git a/include/linux/coresight.h b/include/linux/coresight.h
index c16c61a8411d..7d62b88bfb5c 100644
--- a/include/linux/coresight.h
+++ b/include/linux/coresight.h
@@ -234,6 +234,7 @@ struct coresight_sysfs_link {
 struct coresight_trace_id_map {
 	DECLARE_BITMAP(used_ids, CORESIGHT_TRACE_IDS_MAX);
 	DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
+	atomic_t __percpu *cpu_map;
 };
 
 /**
-- 
2.34.1
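
For context, a minimal sketch of how a dynamically created (e.g. per-sink)
trace ID map could populate the new cpu_map member with the kernel's
per-CPU allocator. The patch itself only wires the statically defined
DEFINE_PER_CPU storage into the default map; the helper names and error
handling below are illustrative assumptions, not part of this series:

#include <linux/atomic.h>
#include <linux/coresight.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/percpu.h>

/*
 * Illustrative only: allocate per-CPU "current trace ID" storage for a
 * dynamically created trace ID map. Function names are hypothetical.
 */
static int trace_id_map_alloc_cpu_ids(struct coresight_trace_id_map *id_map)
{
	int cpu;

	id_map->cpu_map = alloc_percpu(atomic_t);
	if (!id_map->cpu_map)
		return -ENOMEM;

	/* 0 means "no trace ID currently allocated" for this CPU */
	for_each_possible_cpu(cpu)
		atomic_set(per_cpu_ptr(id_map->cpu_map, cpu), 0);

	return 0;
}

static void trace_id_map_free_cpu_ids(struct coresight_trace_id_map *id_map)
{
	free_percpu(id_map->cpu_map);
	id_map->cpu_map = NULL;
}

The default map keeps static DEFINE_PER_CPU storage so it is usable without
any allocation; a per-sink map would presumably need an allocate/free pair
like the one above at sink creation and removal time.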