[PATCH v2] perf/x86/amd: Warn only on new bits set

Posted by Breno Leitao 1 year, 6 months ago
Warning on every leaked bit can cause a flood of messages, triggering
various stall-warning mechanisms to fire, including CSD locks, which
makes the machine unusable.

Track the bits that are being leaked, and only warn when a new bit is
set.

This patch helps with the following issues:

1) It tells us which bits are being set, so it is easy to communicate
   them back to the vendor and do a root-cause analysis.

2) It keeps the machine usable because, in the worst case, the user
   gets fewer than 60 WARNs (one per unhandled bit).
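
As an illustration only (not part of the patch itself), the pattern reduces
to the userspace sketch below, with C11 atomics standing in for the kernel's
atomic64_t helpers and new_status_bits() as a made-up name. The not-yet-seen
bits are derived from the value returned by the fetch-or:

	#include <inttypes.h>
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Accumulates every overflow bit ever observed, like the patch's
	 * status_warned. */
	static _Atomic uint64_t seen_bits;

	/* Return only the bits of @status that have never been seen before. */
	static uint64_t new_status_bits(uint64_t status)
	{
		uint64_t prev = atomic_fetch_or(&seen_bits, status);

		return status & ~prev;
	}

	int main(void)
	{
		/* Bits 0 and 2 are new: a warning with 0x5 would fire. */
		printf("%#" PRIx64 "\n", new_status_bits(0x5));	/* 0x5 */
		/* Bit 0 was already seen, so only bit 1 is reported. */
		printf("%#" PRIx64 "\n", new_status_bits(0x3));	/* 0x2 */
		/* Nothing new: no warning at all. */
		printf("%#" PRIx64 "\n", new_status_bits(0x1));	/* 0 */
		return 0;
	}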

Suggested-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Breno Leitao <leitao@debian.org>
---
Changelog:
v2:
  * Improved the patch description, spelling out the benefits.

v1:
  * https://lore.kernel.org/all/20240524141021.3889002-1-leitao@debian.org/


 arch/x86/events/amd/core.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 920e3a640cad..577158d0c324 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -943,11 +943,12 @@ static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, u
 static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	static atomic64_t status_warned = ATOMIC64_INIT(0);
+	u64 reserved, status, mask, new_bits;
 	struct perf_sample_data data;
 	struct hw_perf_event *hwc;
 	struct perf_event *event;
 	int handled = 0, idx;
-	u64 reserved, status, mask;
 	bool pmu_enabled;
 
 	/*
@@ -1012,7 +1013,11 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
 	 * the corresponding PMCs are expected to be inactive according to the
 	 * active_mask
 	 */
-	WARN_ON(status > 0);
+	if (status > 0) {
+		new_bits = atomic64_fetch_or(status, &status_warned) ^ atomic64_read(&status_warned);
+		// A new bit was set for the very first time.
+		WARN(new_bits, "New overflows for inactive PMCs: %llx\n", new_bits);
+	}
 
 	/* Clear overflow and freeze bits */
 	amd_pmu_ack_global_status(~status);
-- 
2.43.0
Re: [PATCH v2] perf/x86/amd: Warn only on new bits set
Posted by Paul E. McKenney 1 year, 6 months ago
On Wed, Jul 31, 2024 at 08:46:51AM -0700, Breno Leitao wrote:
> Warning on every leaked bit can cause a flood of messages, triggering
> various stall-warning mechanisms to fire, including CSD locks, which
> makes the machine unusable.
> 
> Track the bits that are being leaked, and only warn when a new bit is
> set.
> 
> This patch helps with the following issues:
> 
> 1) It tells us which bits are being set, so it is easy to communicate
>    them back to the vendor and do a root-cause analysis.
> 
> 2) It keeps the machine usable because, in the worst case, the user
>    gets fewer than 60 WARNs (one per unhandled bit).
> 
> Suggested-by: Paul E. McKenney <paulmck@kernel.org>
> Reviewed-by: Sandipan Das <sandipan.das@amd.com>
> Signed-off-by: Breno Leitao <leitao@debian.org>

Nice!!!

A question about an admittedly unlikely race below.

> ---
> Changelog:
> v2:
>   * Improved the patch description, spelling out the benefits.
> 
> v1:
>   * https://lore.kernel.org/all/20240524141021.3889002-1-leitao@debian.org/
> 
> 
>  arch/x86/events/amd/core.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
> index 920e3a640cad..577158d0c324 100644
> --- a/arch/x86/events/amd/core.c
> +++ b/arch/x86/events/amd/core.c
> @@ -943,11 +943,12 @@ static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, u
>  static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
>  {
>  	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	static atomic64_t status_warned = ATOMIC64_INIT(0);
> +	u64 reserved, status, mask, new_bits;
>  	struct perf_sample_data data;
>  	struct hw_perf_event *hwc;
>  	struct perf_event *event;
>  	int handled = 0, idx;
> -	u64 reserved, status, mask;
>  	bool pmu_enabled;
>  
>  	/*
> @@ -1012,7 +1013,11 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
>  	 * the corresponding PMCs are expected to be inactive according to the
>  	 * active_mask
>  	 */
> -	WARN_ON(status > 0);
> +	if (status > 0) {
> +		new_bits = atomic64_fetch_or(status, &status_warned) ^ atomic64_read(&status_warned);

It is possible that two CPUs could execute the above line concurrently,
correct?  In that case, the reports might be a bit confused.
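
For instance, with status_warned initially zero and (made-up) statuses A on
one CPU and B on another, if both fetch_or()s land before either read-back:

	CPU A: atomic64_fetch_or(A, &status_warned) returns 0    (status_warned == A)
	CPU B: atomic64_fetch_or(B, &status_warned) returns A    (status_warned == A|B)
	CPU A: atomic64_read() returns A|B, so new_bits = 0 ^ (A|B) = A|B
	CPU B: atomic64_read() returns A|B, so new_bits = A ^ (A|B) = B & ~A

so CPU A ends up warning about bits that only CPU B ever saw.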

Why not be exact, perhaps as follows, introducing a "u64 prev_bits"?

		prev_bits = atomic64_fetch_or(status, &status_warned);
		new_bits = status & ~prev_bits;

Or, if you would like to avoid the added variable and to keep this to
a single line:

		new_bits = status & ~atomic64_fetch_or(status, &status_warned);

Or is my boolean arithmetic off this morning?  (Wouldn't be the first
time...)
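
For whatever it is worth, since the fetch_or()s are atomic, any race between
two CPUs is equivalent to one of the two orders below, and a quick userspace
check (made-up status values, plain C standing in for the kernel atomics)
reports each bit exactly once either way:

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Model one fetch_or() racing against another: "first" wins the race,
	 * "second" loses, and each CPU computes status & ~prev from the value
	 * its own fetch_or() returned.
	 */
	static void simulate(uint64_t first, uint64_t second)
	{
		uint64_t warned = 0, prev, new_first, new_second;

		prev = warned; warned |= first;		/* winning CPU's fetch_or */
		new_first = first & ~prev;

		prev = warned; warned |= second;	/* losing CPU's fetch_or */
		new_second = second & ~prev;

		/* Every status bit is reported by exactly one CPU, none twice. */
		assert((new_first | new_second) == (first | second));
		assert((new_first & new_second) == 0);

		printf("order %#llx then %#llx -> new bits %#llx and %#llx\n",
		       (unsigned long long)first, (unsigned long long)second,
		       (unsigned long long)new_first, (unsigned long long)new_second);
	}

	int main(void)
	{
		simulate(0x3, 0x6);	/* one CPU's fetch_or lands first... */
		simulate(0x6, 0x3);	/* ...or the other's does */
		return 0;
	}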

							Thanx, Paul

> +		// A new bit was set for the very first time.
> +		WARN(new_bits, "New overflows for inactive PMCs: %llx\n", new_bits);
> +	}
>  
>  	/* Clear overflow and freeze bits */
>  	amd_pmu_ack_global_status(~status);
> -- 
> 2.43.0
>