[kvm-unit-tests patch v3 3/8] x86/pmu: Fix incorrect masking of fixed counters

Dapeng Mi posted 8 patches 5 months ago
Posted by Dapeng Mi 5 months ago
From: dongsheng <dongsheng.x.zhang@intel.com>

The current implementation mistakenly limits the width of fixed counters
to the width of GP counters. Correct the logic so that fixed counters
are masked according to their own width.

Opportunistically refine the GP counter bitwidth processing code.

Signed-off-by: dongsheng <dongsheng.x.zhang@intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
---
 x86/pmu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 04946d10..44c728a5 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -556,18 +556,16 @@ static void check_counter_overflow(void)
 		int idx;
 
 		cnt.count = overflow_preset;
-		if (pmu_use_full_writes())
-			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
-
 		if (i == pmu.nr_gp_counters) {
 			if (!pmu.is_intel)
 				break;
 
 			cnt.ctr = fixed_events[0].unit_sel;
-			cnt.count = measure_for_overflow(&cnt);
-			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
+			cnt.count &= (1ull << pmu.fixed_counter_width) - 1;
 		} else {
 			cnt.ctr = MSR_GP_COUNTERx(i);
+			if (pmu_use_full_writes())
+				cnt.count &= (1ull << pmu.gp_counter_width) - 1;
 		}
 
 		if (i % 2)
-- 
2.34.1
Re: [kvm-unit-tests patch v3 3/8] x86/pmu: Fix incorrect masking of fixed counters
Posted by Sean Christopherson 2 months, 2 weeks ago
On Wed, Sep 03, 2025, Dapeng Mi wrote:
> From: dongsheng <dongsheng.x.zhang@intel.com>
> 
> The current implementation mistakenly limits the width of fixed counters
> to the width of GP counters. Correct the logic so that fixed counters
> are masked according to their own width.
> 
> Opportunistically refine the GP counter bitwidth processing code.
> 
> Signed-off-by: dongsheng <dongsheng.x.zhang@intel.com>
> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Tested-by: Yi Lai <yi1.lai@intel.com>
> ---
>  x86/pmu.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 04946d10..44c728a5 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -556,18 +556,16 @@ static void check_counter_overflow(void)
>  		int idx;
>  
>  		cnt.count = overflow_preset;
> -		if (pmu_use_full_writes())
> -			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
> -
>  		if (i == pmu.nr_gp_counters) {
>  			if (!pmu.is_intel)
>  				break;
>  
>  			cnt.ctr = fixed_events[0].unit_sel;
> -			cnt.count = measure_for_overflow(&cnt);

Per commit 7ec3b67a ("x86/pmu: Reset the expected count of the fixed counter 0
when i386"), re-measuring for the fixed counter is necessary when running a 32-bit
guest.  I didn't see failures (spotted this by inspection), but I don't see any
point in making this change without good reason.

> -			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
> +			cnt.count &= (1ull << pmu.fixed_counter_width) - 1;
>  		} else {
>  			cnt.ctr = MSR_GP_COUNTERx(i);
> +			if (pmu_use_full_writes())
> +				cnt.count &= (1ull << pmu.gp_counter_width) - 1;
>  		}
>  
>  		if (i % 2)
> -- 
> 2.34.1
>
Re: [kvm-unit-tests patch v3 3/8] x86/pmu: Fix incorrect masking of fixed counters
Posted by Mi, Dapeng 2 months, 2 weeks ago
On 11/21/2025 6:28 AM, Sean Christopherson wrote:
> On Wed, Sep 03, 2025, Dapeng Mi wrote:
>> From: dongsheng <dongsheng.x.zhang@intel.com>
>>
>> The current implementation mistakenly limits the width of fixed counters
>> to the width of GP counters. Correct the logic so that fixed counters
>> are masked according to their own width.
>>
>> Opportunistically refine the GP counter bitwidth processing code.
>>
>> Signed-off-by: dongsheng <dongsheng.x.zhang@intel.com>
>> Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
>> Tested-by: Yi Lai <yi1.lai@intel.com>
>> ---
>>  x86/pmu.c | 8 +++-----
>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>
>> diff --git a/x86/pmu.c b/x86/pmu.c
>> index 04946d10..44c728a5 100644
>> --- a/x86/pmu.c
>> +++ b/x86/pmu.c
>> @@ -556,18 +556,16 @@ static void check_counter_overflow(void)
>>  		int idx;
>>  
>>  		cnt.count = overflow_preset;
>> -		if (pmu_use_full_writes())
>> -			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
>> -
>>  		if (i == pmu.nr_gp_counters) {
>>  			if (!pmu.is_intel)
>>  				break;
>>  
>>  			cnt.ctr = fixed_events[0].unit_sel;
>> -			cnt.count = measure_for_overflow(&cnt);
> Per commit 7ec3b67a ("x86/pmu: Reset the expected count of the fixed counter 0
> when i386"), re-measuring for the fixed counter is necessary when running a 32-bit
> guest.  I didn't see failures (spotted this by inspection), but I don't see any
> point in making this change without good reason.

Thanks. I didn't realize that the 2nd measure_for_overflow() was intentionally
added ...


>
>> -			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
>> +			cnt.count &= (1ull << pmu.fixed_counter_width) - 1;
>>  		} else {
>>  			cnt.ctr = MSR_GP_COUNTERx(i);
>> +			if (pmu_use_full_writes())
>> +				cnt.count &= (1ull << pmu.gp_counter_width) - 1;
>>  		}
>>  
>>  		if (i % 2)
>> -- 
>> 2.34.1
>>