[RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask

Posted by kan.liang@linux.intel.com 1 year, 6 months ago
From: Kan Liang <kan.liang@linux.intel.com>

Different vendors may support different fields in the EVENTSEL MSR. For
example, Intel introduces the new umask2 and eq fields in the EVENTSEL
MSR starting with Perfmon version 6. However, a fixed mask,
X86_RAW_EVENT_MASK, is currently used to filter attr.config.

Introduce a new config_mask to record the actually supported EVENTSEL
bitmask. For now, only apply it to the existing code. No functional
change.

Reviewed-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/events/core.c       | 5 ++++-
 arch/x86/events/intel/core.c | 1 +
 arch/x86/events/perf_event.h | 7 +++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index d31a8cc7b626..80da99fcae6d 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -624,7 +624,7 @@ int x86_pmu_hw_config(struct perf_event *event)
 		event->hw.config |= ARCH_PERFMON_EVENTSEL_OS;
 
 	if (event->attr.type == event->pmu->type)
-		event->hw.config |= event->attr.config & X86_RAW_EVENT_MASK;
+		event->hw.config |= x86_pmu_get_event_config(event);
 
 	if (event->attr.sample_period && x86_pmu.limit_period) {
 		s64 left = event->attr.sample_period;
@@ -2098,6 +2098,9 @@ static int __init init_hw_perf_events(void)
 	if (!x86_pmu.intel_ctrl)
 		x86_pmu.intel_ctrl = x86_pmu.cntr_mask64;
 
+	if (!x86_pmu.config_mask)
+		x86_pmu.config_mask = X86_RAW_EVENT_MASK;
+
 	perf_events_lapic_init();
 	register_nmi_handler(NMI_LOCAL, perf_event_nmi_handler, 0, "PMI");
 
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 60806f373226..626e9a5e50d2 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -6144,6 +6144,7 @@ static __always_inline int intel_pmu_init_hybrid(enum hybrid_pmu_type pmus)
 		pmu->cntr_mask64 = x86_pmu.cntr_mask64;
 		pmu->fixed_cntr_mask64 = x86_pmu.fixed_cntr_mask64;
 		pmu->pebs_events_mask = intel_pmu_pebs_mask(pmu->cntr_mask64);
+		pmu->config_mask = X86_RAW_EVENT_MASK;
 		pmu->unconstrained = (struct event_constraint)
 				     __EVENT_CONSTRAINT(0, pmu->cntr_mask64,
 							0, x86_pmu_num_counters(&pmu->pmu), 0, 0);
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 66209bb2ba77..4fc72da7a7c4 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -695,6 +695,7 @@ struct x86_hybrid_pmu {
 	union perf_capabilities		intel_cap;
 	u64				intel_ctrl;
 	u64				pebs_events_mask;
+	u64				config_mask;
 	union {
 			u64		cntr_mask64;
 			unsigned long	cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
@@ -790,6 +791,7 @@ struct x86_pmu {
 	int		(*rdpmc_index)(int index);
 	u64		(*event_map)(int);
 	int		max_events;
+	u64		config_mask;
 	union {
 			u64		cntr_mask64;
 			unsigned long	cntr_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
@@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
 	return hweight64(hybrid(pmu, fixed_cntr_mask64));
 }
 
+static inline u64 x86_pmu_get_event_config(struct perf_event *event)
+{
+	return event->attr.config & hybrid(event->pmu, config_mask);
+}
+
 extern struct event_constraint emptyconstraint;
 
 extern struct event_constraint unconstrained;
-- 
2.35.1
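
For context, the hybrid() accessor used in the new x86_pmu_get_event_config()
helper resolves to the per-PMU field on hybrid systems and to the global
x86_pmu field otherwise. A simplified sketch of the effective behavior (not
the actual macro, which is more involved):

static inline u64 effective_config_mask_sketch(struct perf_event *event)
{
        /*
         * Simplified rendering of hybrid(event->pmu, config_mask):
         * on hybrid parts each struct x86_hybrid_pmu carries its own
         * config_mask; otherwise the global x86_pmu.config_mask
         * (defaulted to X86_RAW_EVENT_MASK in init_hw_perf_events())
         * is used.
         */
        if (is_hybrid() && event->pmu)
                return hybrid_pmu(event->pmu)->config_mask;

        return x86_pmu.config_mask;
}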
Re: [RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask
Posted by Peter Zijlstra 1 year, 5 months ago
On Tue, Jun 18, 2024 at 08:10:37AM -0700, kan.liang@linux.intel.com wrote:
> From: Kan Liang <kan.liang@linux.intel.com>
> 
> Different vendors may support different fields in EVENTSEL MSR, such as
> Intel would introduce new fields umask2 and eq bits in EVENTSEL MSR
> since Perfmon version 6. However, a fixed mask X86_RAW_EVENT_MASK is
> used to filter the attr.config.
> 

> @@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
>  	return hweight64(hybrid(pmu, fixed_cntr_mask64));
>  }
>  
> +static inline u64 x86_pmu_get_event_config(struct perf_event *event)
> +{
> +	return event->attr.config & hybrid(event->pmu, config_mask);
> +}

Seriously, we're going to be having such major event encoding
differences between cores on a single chip?
Re: [RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask
Posted by Liang, Kan 1 year, 5 months ago

On 2024-06-20 3:44 a.m., Peter Zijlstra wrote:
> On Tue, Jun 18, 2024 at 08:10:37AM -0700, kan.liang@linux.intel.com wrote:
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> Different vendors may support different fields in EVENTSEL MSR, such as
>> Intel would introduce new fields umask2 and eq bits in EVENTSEL MSR
>> since Perfmon version 6. However, a fixed mask X86_RAW_EVENT_MASK is
>> used to filter the attr.config.
>>
> 
>> @@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
>>  	return hweight64(hybrid(pmu, fixed_cntr_mask64));
>>  }
>>  
>> +static inline u64 x86_pmu_get_event_config(struct perf_event *event)
>> +{
>> +	return event->attr.config & hybrid(event->pmu, config_mask);
>> +}
> 
> Seriously, we're going to be having such major event encoding
> differences between cores on a single chip?

For LNL, no. But ARL-H may have an event encoding difference.
I will double check.

The problem is that there is no guarantee for future platforms.
With the CPUID leaf 0x23, all the features are enumerated per CPU.
In theory, it's possible to have a different layout of the EVENTSEL MSR
between different types of core.
If we take virtualization into account, that's even worse.

So adding the hybrid() check should be a safe way to handle it.


Thanks,
Kan
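
A minimal sketch of the per-core-type mask setup described above, assuming
the new fields are enumerated per core type in CPUID.(EAX=23H, ECX=0H):EBX.
The SKETCH_* field positions and the EBX bit assignments are assumptions for
illustration and are not taken from this patch:

/*
 * Illustrative only: widen a per-PMU config_mask from the per-CPU
 * CPUID leaf 0x23 enumeration.  Bit positions are assumed for the
 * sketch, not definitions from this series.
 */
#define SKETCH_EVENTSEL_UMASK2  (0xffULL << 40)   /* assumed field position */
#define SKETCH_EVENTSEL_EQ      (1ULL << 36)      /* assumed field position */

static void sketch_setup_config_mask(struct x86_hybrid_pmu *pmu,
                                     unsigned int cpuid_0x23_ebx)
{
        /* Start from the architectural EVENTSEL bits every core supports. */
        pmu->config_mask = X86_RAW_EVENT_MASK;

        /* Accept the new fields only on core types that enumerate them. */
        if (cpuid_0x23_ebx & BIT(0))    /* umask2 (assumed EBX bit) */
                pmu->config_mask |= SKETCH_EVENTSEL_UMASK2;
        if (cpuid_0x23_ebx & BIT(1))    /* eq (assumed EBX bit) */
                pmu->config_mask |= SKETCH_EVENTSEL_EQ;
}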
Re: [RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask
Posted by Peter Zijlstra 1 year, 5 months ago
On Thu, Jun 20, 2024 at 12:16:46PM -0400, Liang, Kan wrote:
> 
> 
> On 2024-06-20 3:44 a.m., Peter Zijlstra wrote:
> > On Tue, Jun 18, 2024 at 08:10:37AM -0700, kan.liang@linux.intel.com wrote:
> >> From: Kan Liang <kan.liang@linux.intel.com>
> >>
> >> Different vendors may support different fields in EVENTSEL MSR, such as
> >> Intel would introduce new fields umask2 and eq bits in EVENTSEL MSR
> >> since Perfmon version 6. However, a fixed mask X86_RAW_EVENT_MASK is
> >> used to filter the attr.config.
> >>
> > 
> >> @@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
> >>  	return hweight64(hybrid(pmu, fixed_cntr_mask64));
> >>  }
> >>  
> >> +static inline u64 x86_pmu_get_event_config(struct perf_event *event)
> >> +{
> >> +	return event->attr.config & hybrid(event->pmu, config_mask);
> >> +}
> > 
> > Seriously, we're going to be having such major event encoding
> > differences between cores on a single chip?
> 
> For LNL, no. But ARL-H may have an event encoding differences.
> I will double check.
> 
> The problem is that there is no guarantee for the future platforms.
> With the CPUID leaf 0x23, all the features are enumerated per CPU.
> In theory, it's possible that different layout of the EVENTSEL MSR
> between different types of core.
> If we take the virtualization into account, that's even worse.

Virt and hybrid is a trainwreck anyway :/

> It should be a safe way to add the hybrid() check.

Safe yes, sad also yes :-( It would be really nice if they could all at
least commit to the same event format. Could you please check?
Re: [RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask
Posted by Liang, Kan 1 year, 5 months ago

On 2024-06-20 12:16 p.m., Liang, Kan wrote:
> 
> 
> On 2024-06-20 3:44 a.m., Peter Zijlstra wrote:
>> On Tue, Jun 18, 2024 at 08:10:37AM -0700, kan.liang@linux.intel.com wrote:
>>> From: Kan Liang <kan.liang@linux.intel.com>
>>>
>>> Different vendors may support different fields in EVENTSEL MSR, such as
>>> Intel would introduce new fields umask2 and eq bits in EVENTSEL MSR
>>> since Perfmon version 6. However, a fixed mask X86_RAW_EVENT_MASK is
>>> used to filter the attr.config.
>>>
>>
>>> @@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
>>>  	return hweight64(hybrid(pmu, fixed_cntr_mask64));
>>>  }
>>>  
>>> +static inline u64 x86_pmu_get_event_config(struct perf_event *event)
>>> +{
>>> +	return event->attr.config & hybrid(event->pmu, config_mask);
>>> +}
>>
>> Seriously, we're going to be having such major event encoding
>> differences between cores on a single chip?
> 
> For LNL, no. But ARL-H may have an event encoding differences.
> I will double check.

There are two generations of e-core on ARL-H, and their event encodings
are different.

The new umask2 and eq fields are enumerated by CPUID.(EAX=23H,
ECX=0H):EBX. They are supported by CPU 11 but not by CPU 12.

CPU 11:
   0x00000023 0x00: eax=0x0000000f ebx=0x00000003 ecx=0x00000008 edx=0x00000000
CPU 12:
   0x00000023 0x00: eax=0x0000000b ebx=0x00000000 ecx=0x00000006 edx=0x00000000


Thanks,
Kan
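
A standalone sketch that decodes the EBX values quoted above; treating EBX
bit 0 as umask2 support and bit 1 as eq support is an assumption for
illustration:

/* Decode CPUID.(EAX=23H, ECX=0H):EBX for the two core types quoted above. */
#include <stdio.h>

static void decode_leaf23_ebx(const char *cpu, unsigned int ebx)
{
        /* Assumed bit assignments: bit 0 = umask2, bit 1 = eq. */
        printf("%s: ebx=0x%08x umask2=%s eq=%s\n", cpu, ebx,
               (ebx & 0x1) ? "yes" : "no",
               (ebx & 0x2) ? "yes" : "no");
}

int main(void)
{
        decode_leaf23_ebx("CPU 11", 0x00000003);        /* umask2=yes eq=yes */
        decode_leaf23_ebx("CPU 12", 0x00000000);        /* umask2=no  eq=no  */
        return 0;
}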
> 
> The problem is that there is no guarantee for the future platforms.
> With the CPUID leaf 0x23, all the features are enumerated per CPU.
> In theory, it's possible that different layout of the EVENTSEL MSR
> between different types of core.
> If we take the virtualization into account, that's even worse.
> 
> It should be a safe way to add the hybrid() check.
> 
> 
> Thanks,
> Kan
>
Re: [RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask
Posted by Peter Zijlstra 1 year, 5 months ago
On Fri, Jun 21, 2024 at 02:34:35PM -0400, Liang, Kan wrote:
> 
> 
> On 2024-06-20 12:16 p.m., Liang, Kan wrote:
> > 
> > 
> > On 2024-06-20 3:44 a.m., Peter Zijlstra wrote:
> >> On Tue, Jun 18, 2024 at 08:10:37AM -0700, kan.liang@linux.intel.com wrote:
> >>> From: Kan Liang <kan.liang@linux.intel.com>
> >>>
> >>> Different vendors may support different fields in EVENTSEL MSR, such as
> >>> Intel would introduce new fields umask2 and eq bits in EVENTSEL MSR
> >>> since Perfmon version 6. However, a fixed mask X86_RAW_EVENT_MASK is
> >>> used to filter the attr.config.
> >>>
> >>
> >>> @@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
> >>>  	return hweight64(hybrid(pmu, fixed_cntr_mask64));
> >>>  }
> >>>  
> >>> +static inline u64 x86_pmu_get_event_config(struct perf_event *event)
> >>> +{
> >>> +	return event->attr.config & hybrid(event->pmu, config_mask);
> >>> +}
> >>
> >> Seriously, we're going to be having such major event encoding
> >> differences between cores on a single chip?
> > 
> > For LNL, no. But ARL-H may have an event encoding differences.
> > I will double check.
> 
> There are two generations of e-core on ARL-H. The event encoding is
> different.
> 
> The new fields umask2 and eq bits are enumerated by CPUID.(EAX=23H,
> ECX=0H):EBX. They are supported by CPU 11 but not CPU 12.
> 
> CPU 11:
>    0x00000023 0x00: eax=0x0000000f ebx=0x00000003 ecx=0x00000008
> edx=0x00000000
> CPU 12:
>    0x00000023 0x00: eax=0x0000000b ebx=0x00000000 ecx=0x00000006
> edx=0x00000000
> 

*groan*...

So we're going to be having 3 PMUs on that thing I suppose. Oh well.
Re: [RESEND PATCH 05/12] perf/x86: Add config_mask to represent EVENTSEL bitmask
Posted by Liang, Kan 1 year, 5 months ago

On 2024-06-24 4:28 a.m., Peter Zijlstra wrote:
> On Fri, Jun 21, 2024 at 02:34:35PM -0400, Liang, Kan wrote:
>>
>>
>> On 2024-06-20 12:16 p.m., Liang, Kan wrote:
>>>
>>>
>>> On 2024-06-20 3:44 a.m., Peter Zijlstra wrote:
>>>> On Tue, Jun 18, 2024 at 08:10:37AM -0700, kan.liang@linux.intel.com wrote:
>>>>> From: Kan Liang <kan.liang@linux.intel.com>
>>>>>
>>>>> Different vendors may support different fields in EVENTSEL MSR, such as
>>>>> Intel would introduce new fields umask2 and eq bits in EVENTSEL MSR
>>>>> since Perfmon version 6. However, a fixed mask X86_RAW_EVENT_MASK is
>>>>> used to filter the attr.config.
>>>>>
>>>>
>>>>> @@ -1231,6 +1233,11 @@ static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
>>>>>  	return hweight64(hybrid(pmu, fixed_cntr_mask64));
>>>>>  }
>>>>>  
>>>>> +static inline u64 x86_pmu_get_event_config(struct perf_event *event)
>>>>> +{
>>>>> +	return event->attr.config & hybrid(event->pmu, config_mask);
>>>>> +}
>>>>
>>>> Seriously, we're going to be having such major event encoding
>>>> differences between cores on a single chip?
>>>
>>> For LNL, no. But ARL-H may have an event encoding differences.
>>> I will double check.
>>
>> There are two generations of e-core on ARL-H. The event encoding is
>> different.
>>
>> The new fields umask2 and eq bits are enumerated by CPUID.(EAX=23H,
>> ECX=0H):EBX. They are supported by CPU 11 but not CPU 12.
>>
>> CPU 11:
>>    0x00000023 0x00: eax=0x0000000f ebx=0x00000003 ecx=0x00000008
>> edx=0x00000000
>> CPU 12:
>>    0x00000023 0x00: eax=0x0000000b ebx=0x00000000 ecx=0x00000006
>> edx=0x00000000
>>
> 
> *groan*...
> 
> So we're going to be having 3 PMUs on that thing I suppose. Oh well.

Yes, 3 PMUs. One PMU for the p-core, and the other two PMUs for the two
different types of e-core.

Thanks,
Kan