Date: Tue, 17 May 2022 22:14:34 -0000
From: "tip-bot2 for Peter Zijlstra"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: "Peter Zijlstra (Intel)", x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: perf/core] perf/x86/amd: Fix AMD BRS period adjustment
Message-ID: <165282567475.4207.12287002853279789468.tip-bot2@tip-bot2>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     3c27b0c6ea48bc61492a138c410e262735d660ab
Gitweb:        https://git.kernel.org/tip/3c27b0c6ea48bc61492a138c410e262735d660ab
Author:        Peter Zijlstra
AuthorDate:    Tue, 10 May 2022 21:22:04 +02:00
Committer:     Peter Zijlstra
CommitterDate: Wed, 18 May 2022 00:08:25 +02:00

perf/x86/amd: Fix AMD BRS period adjustment

There are two problems with the current amd_brs_adjust_period() code:

 - it isn't in fact AMD-specific and will always adjust the period;

 - it adjusts the period when it should only adjust the event count,
   resulting in reporting a short period.

Fix this by using x86_pmu.limit_period; this makes it specific to the
AMD BRS case and ensures only the event count is adjusted while the
reported period is unmodified.
Fixes: ba2fe7500845 ("perf/x86/amd: Add AMD branch sampling period adjustment")
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/events/amd/core.c   | 13 +++++++++++++
 arch/x86/events/core.c       |  7 -------
 arch/x86/events/perf_event.h | 18 ------------------
 3 files changed, 13 insertions(+), 25 deletions(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index d81eac2..3eee59c 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1255,6 +1255,18 @@ static void amd_pmu_sched_task(struct perf_event_context *ctx,
 	amd_pmu_brs_sched_task(ctx, sched_in);
 }
 
+static u64 amd_pmu_limit_period(struct perf_event *event, u64 left)
+{
+	/*
+	 * Decrease period by the depth of the BRS feature to get the last N
+	 * taken branches and approximate the desired period
+	 */
+	if (has_branch_stack(event) && left > x86_pmu.lbr_nr)
+		left -= x86_pmu.lbr_nr;
+
+	return left;
+}
+
 static __initconst const struct x86_pmu amd_pmu = {
 	.name			= "AMD",
 	.handle_irq		= amd_pmu_handle_irq,
@@ -1415,6 +1427,7 @@ static int __init amd_core_pmu_init(void)
 	if (boot_cpu_data.x86 >= 0x19 && !amd_brs_init()) {
 		x86_pmu.get_event_constraints = amd_get_event_constraints_f19h;
 		x86_pmu.sched_task = amd_pmu_sched_task;
+		x86_pmu.limit_period = amd_pmu_limit_period;
 		/*
 		 * put_event_constraints callback same as Fam17h, set above
 		 */
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index b08052b..3078889 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1375,13 +1375,6 @@ int x86_perf_event_set_period(struct perf_event *event)
 		return x86_pmu.set_topdown_event_period(event);
 
 	/*
-	 * decrease period by the depth of the BRS feature to get
-	 * the last N taken branches and approximate the desired period
-	 */
-	if (has_branch_stack(event))
-		period = amd_brs_adjust_period(period);
-
-	/*
 	 * If we are way outside a reasonable range then just skip forward:
 	 */
 	if (unlikely(left <= -period)) {
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 3b03245..21a5482 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1254,14 +1254,6 @@ static inline void amd_pmu_brs_del(struct perf_event *event)
 }
 
 void amd_pmu_brs_sched_task(struct perf_event_context *ctx, bool sched_in);
-
-static inline s64 amd_brs_adjust_period(s64 period)
-{
-	if (period > x86_pmu.lbr_nr)
-		return period - x86_pmu.lbr_nr;
-
-	return period;
-}
 #else
 static inline int amd_brs_init(void)
 {
@@ -1290,11 +1282,6 @@ static inline void amd_pmu_brs_sched_task(struct perf_event_context *ctx, bool s
 {
 }
 
-static inline s64 amd_brs_adjust_period(s64 period)
-{
-	return period;
-}
-
 static inline void amd_brs_enable_all(void)
 {
 }
@@ -1324,11 +1311,6 @@ static inline void amd_brs_enable_all(void)
 static inline void amd_brs_disable_all(void)
 {
 }
-
-static inline s64 amd_brs_adjust_period(s64 period)
-{
-	return period;
-}
#endif /* CONFIG_CPU_SUP_AMD */
 
 static inline int is_pebs_pt(struct perf_event *event)