There's no need to invoke CPUID yet another time. However, as the host CPU
policy is set up only shortly after init_intel() ran on the BSP, defer the
logic to a pre-SMP initcall. This can't be (a new) one in cpu/intel.c
though, as that's linked after acpi/cpu_idle.c (which is where we already
need the feature set). Since opt_arat is local to the cpu/ subtree,
introduce a new Intel-specific helper to hold the code needed.

Further, as we assume symmetry anyway, use setup_force_cpu_cap() and hence
limit the checking to the boot CPU.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The need to move where cpu_has_arat is checked would go away if we did
away with opt_arat (as mentioned in the previous patch), and hence could
use cpu_has_arat directly where right now XEN_ARAT is checked.
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -1666,6 +1666,9 @@ static int __init cf_check cpuidle_presm
 {
     void *cpu = (void *)(long)smp_processor_id();
 
+    if ( boot_cpu_data.vendor == X86_VENDOR_INTEL )
+        intel_init_arat();
+
     if ( !xen_cpuidle )
         return 0;
 
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -665,10 +665,6 @@ static void cf_check init_intel(struct c
 		__set_bit(X86_FEATURE_NONSTOP_TSC, c->x86_capability);
 		__set_bit(X86_FEATURE_TSC_RELIABLE, c->x86_capability);
 	}
-	if ( opt_arat &&
-	     ( c->cpuid_level >= 0x00000006 ) &&
-	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
-		__set_bit(X86_FEATURE_XEN_ARAT, c->x86_capability);
 
 	if ((opt_cpu_info && !(c->apicid & (c->x86_num_siblings - 1))) ||
 	    c == &boot_cpu_data )
@@ -690,3 +686,9 @@ const struct cpu_dev __initconst_cf_clob
 	.c_early_init	= early_init_intel,
 	.c_init		= init_intel,
 };
+
+void __init intel_init_arat(void)
+{
+    if ( opt_arat && cpu_has_arat )
+        setup_force_cpu_cap(X86_FEATURE_XEN_ARAT);
+}
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -176,6 +176,9 @@ static inline bool boot_cpu_has(unsigned
 #define cpu_has_fma4            boot_cpu_has(X86_FEATURE_FMA4)
 #define cpu_has_tbm             boot_cpu_has(X86_FEATURE_TBM)
 
+/* CPUID level 0x00000006.eax */
+#define cpu_has_arat            host_cpu_policy.basic.pm.arat
+
 /* CPUID level 0x00000006.ecx */
 #define cpu_has_aperfmperf      host_cpu_policy.basic.pm.aperfmperf
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -102,6 +102,7 @@ extern void setup_force_cpu_cap(unsigned
 extern bool is_forced_cpu_cap(unsigned int cap);
 extern void print_cpu_info(unsigned int cpu);
 extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
+extern void intel_init_arat(void);
 
 #define cpu_to_core(_cpu)   (cpu_data[_cpu].cpu_core_id)
 #define cpu_to_socket(_cpu) (cpu_data[_cpu].phys_proc_id)
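[For readers less familiar with the mechanism the patch leans on:
setup_force_cpu_cap() records a bit that gets OR'ed back into every CPU's
capability words when that CPU is identified, which is why a single check on
the boot CPU suffices once symmetry is assumed. The following is a
simplified, self-contained model of that behaviour, not the actual Xen
implementation; all names besides setup_force_cpu_cap() are stand-ins.]

```c
#include <assert.h>
#include <string.h>

#define NCAPINTS 2
#define X86_FEATURE_XEN_ARAT 33            /* synthetic bit for the sketch */

static unsigned int forced_caps[NCAPINTS];   /* bits forced on every CPU */
static unsigned int boot_cpu_caps[NCAPINTS]; /* boot CPU capability words */

static void set_cap_bit(unsigned int nr, unsigned int *words)
{
    words[nr / 32] |= 1u << (nr % 32);
}

static int test_cap_bit(unsigned int nr, const unsigned int *words)
{
    return (words[nr / 32] >> (nr % 32)) & 1;
}

/* Force a capability: recorded once, on the boot CPU ... */
static void setup_force_cpu_cap(unsigned int cap)
{
    set_cap_bit(cap, forced_caps);
    set_cap_bit(cap, boot_cpu_caps);
}

/* ... and re-applied whenever any CPU (re)derives its capability words,
 * so no per-CPU CPUID check is needed under the symmetry assumption. */
static void identify_cpu(unsigned int caps[NCAPINTS])
{
    memset(caps, 0, NCAPINTS * sizeof(caps[0])); /* fresh CPUID-derived state */
    for ( unsigned int i = 0; i < NCAPINTS; i++ )
        caps[i] |= forced_caps[i];               /* OR the forced bits back in */
}
```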
On 18/11/2025 3:08 pm, Jan Beulich wrote:
> There's no need to invoke CPUID yet another time. However, as the host CPU
> policy is set up only shortly after init_intel() ran on the BSP, defer the
> logic to a pre-SMP initcall. This can't be (a new) one in cpu/intel.c
> though, as that's linked after acpi/cpu_idle.c (which is where we already
> need the feature set). Since opt_arat is local to the cpu/ subtree,
> introduce a new Intel-specific helper to hold the code needed.
>
> Further, as we assume symmetry anyway, use setup_force_cpu_cap() and hence
> limit the checking to the boot CPU.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> The need to move where cpu_has_arat is checked would go away if we did
> away with opt_arat (as mentioned in the previous patch), and hence could
> use cpu_has_arat directly where right now XEN_ARAT is checked.
>
> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -1666,6 +1666,9 @@ static int __init cf_check cpuidle_presm
> {
> void *cpu = (void *)(long)smp_processor_id();
>
> + if ( boot_cpu_data.vendor == X86_VENDOR_INTEL )
> + intel_init_arat();
I really would prefer to avoid the need for this.

Now that microcode loading has moved to the start of day, we can drop
most of the order-of-init complexity for CPUID/etc, and I expect that
problems like this will cease to exist as a result.

Notably, we've now got no relevant difference between early_init() and
regular init(). That was a complexity we inherited from Linux.

~Andrew
On 18.11.2025 20:42, Andrew Cooper wrote:
> On 18/11/2025 3:08 pm, Jan Beulich wrote:
>> There's no need to invoke CPUID yet another time. However, as the host CPU
>> policy is set up only shortly after init_intel() ran on the BSP, defer the
>> logic to a pre-SMP initcall. This can't be (a new) one in cpu/intel.c
>> though, as that's linked after acpi/cpu_idle.c (which is where we already
>> need the feature set). Since opt_arat is local to the cpu/ subtree,
>> introduce a new Intel-specific helper to hold the code needed.
>>
>> Further, as we assume symmetry anyway, use setup_force_cpu_cap() and hence
>> limit the checking to the boot CPU.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> The need to move where cpu_has_arat is checked would go away if we did
>> away with opt_arat (as mentioned in the previous patch), and hence could
>> use cpu_has_arat directly where right now XEN_ARAT is checked.
>>
>> --- a/xen/arch/x86/acpi/cpu_idle.c
>> +++ b/xen/arch/x86/acpi/cpu_idle.c
>> @@ -1666,6 +1666,9 @@ static int __init cf_check cpuidle_presm
>> {
>> void *cpu = (void *)(long)smp_processor_id();
>>
>> + if ( boot_cpu_data.vendor == X86_VENDOR_INTEL )
>> + intel_init_arat();
>
> I really would prefer to avoid the need for this.
So would I, but ...
> Now that microcode loading has moved to the start of day, we can drop
> most of the order-of-init complexity for CPUID/etc, and I expect that
> problems like this will cease to exist as a result.
>
> Notably, we've now got no relevant difference between early_init() and
> regular init(). That was a complexity we inherited from Linux.
... I don't see how this leads to any concrete suggestion as to better
arrangements. With cpu_has_arat using the host policy now,
setup_force_cpu_cap(X86_FEATURE_XEN_ARAT) being conditional upon that
has to run _after_ calculate_host_policy(), which in turn runs after
init_intel(). We could add a .c_late_init hook to struct cpu_dev, but
doing so felt like overkill for the purpose here. Plus it feels like
that wouldn't really address your concern either.
Jan
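[Editorial note: to make the trade-off Jan mentions concrete, a .c_late_init
hook might look roughly as below. This is a hypothetical sketch: cpu_late_init(),
intel_late_init() and the flag variable are invented for illustration, and the
policy/option checks are stubbed out; only the shape of the hook is the point.]

```c
#include <assert.h>

/* Hypothetical extension of struct cpu_dev with a third hook that a
 * pre-SMP initcall would invoke after calculate_host_policy() ran. */
struct cpu_dev {
    void (*c_early_init)(void);
    void (*c_init)(void);
    void (*c_late_init)(void);  /* new: runs once the host policy exists */
};

static int xen_arat_forced;     /* stand-in for the forced XEN_ARAT cap */

static void intel_late_init(void)
{
    int opt_arat = 1, cpu_has_arat = 1;  /* stubs: assume both hold */

    if ( opt_arat && cpu_has_arat )
        xen_arat_forced = 1;    /* setup_force_cpu_cap(X86_FEATURE_XEN_ARAT) */
}

static const struct cpu_dev intel_cpu_dev = {
    .c_late_init = intel_late_init,
};

static const struct cpu_dev *this_cpu_dev = &intel_cpu_dev;

/* Would be called from a pre-SMP initcall, vendor-independently, keeping
 * the vendor check out of acpi/cpu_idle.c. */
static void cpu_late_init(void)
{
    if ( this_cpu_dev->c_late_init )
        this_cpu_dev->c_late_init();
}
```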