[XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
Posted by Sergiy Kibrik 6 months, 1 week ago
There are several places in common code, outside of arch/x86/hvm/vmx,
where cpu_has_vmx_* macros get accessed without first checking whether
VMX is present. We may want to guard these macros, as they read global
variables defined inside VMX-specific files -- so that VMX can be made
optional later on.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
Here I've tried a different approach from the previous patches [1,2] -- instead
of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:
 1) not touch the SVM part at all, because, as Andrew pointed out, those
macros are used inside arch/x86/hvm/svm only;
 2) track the several places in common code where cpu_has_vmx_* features are
checked and guard them with a cpu_has_vmx condition;
 3) for the two cpu_has_vmx_* macros that are used in common code in a somewhat
trickier way, integrate the cpu_has_vmx condition into the macros themselves,
rather than making already complex conditionals even more complicated.

This patch aims to replace [1,2] from the v1 series by doing the steps above.

 1. https://lore.kernel.org/xen-devel/20240416064402.3469959-1-Sergiy_Kibrik@epam.com/
 2. https://lore.kernel.org/xen-devel/20240416064606.3470052-1-Sergiy_Kibrik@epam.com/
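(For illustration, a condensed view of the two guarding styles used in the
hunks below -- a call-site check for the simple cases, and the check folded
into the macro itself for the two trickier ones; the identifiers are taken
verbatim from the diff:)

    /* style 2): guard the check at its call site in common code */
    if ( !cpu_has_vmx || !cpu_has_monitor_trap_flag )
        return -EOPNOTSUPP;

    /* style 3): fold the guard into the macro definition in vmcs.h */
    #define cpu_has_vmx_msr_bitmap \
        (cpu_has_vmx && vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)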
---
changes in v2:
 - do not touch SVM code and macros
 - drop vmx_ctrl_has_feature()
 - guard cpu_has_vmx_* macros in common code instead
changes in v1:
 - introduced helper routine vmx_ctrl_has_feature() and used it for all
   cpu_has_vmx_* macros
---
 xen/arch/x86/hvm/hvm.c                  | 2 +-
 xen/arch/x86/hvm/viridian/viridian.c    | 4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 4 ++--
 xen/arch/x86/traps.c                    | 5 +++--
 4 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9594e0a5c5..ab75de9779 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5180,7 +5180,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     {
         case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
         case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
-            if ( !cpu_has_monitor_trap_flag )
+            if ( !cpu_has_vmx || !cpu_has_monitor_trap_flag )
                 return -EOPNOTSUPP;
             break;
         default:
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 0496c52ed5..657c6a3ea7 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -196,7 +196,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
             res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH;
-        if ( !cpu_has_vmx_apic_reg_virt )
+        if ( !cpu_has_vmx || !cpu_has_vmx_apic_reg_virt )
             res->a |= CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
@@ -236,7 +236,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 6:
         /* Detected and in use hardware features. */
-        if ( cpu_has_vmx_virtualize_apic_accesses )
+        if ( cpu_has_vmx && cpu_has_vmx_virtualize_apic_accesses )
             res->a |= CPUID6A_APIC_OVERLAY;
         if ( cpu_has_vmx_msr_bitmap || (read_efer() & EFER_SVME) )
             res->a |= CPUID6A_MSR_BITMAPS;
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
index 58140af691..aa05f9cf6e 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -306,7 +306,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_vnmi \
     (vmx_pin_based_exec_control & PIN_BASED_VIRTUAL_NMIS)
 #define cpu_has_vmx_msr_bitmap \
-    (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
+    (cpu_has_vmx && vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
 #define cpu_has_vmx_secondary_exec_control \
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
 #define cpu_has_vmx_tertiary_exec_control \
@@ -347,7 +347,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_vmfunc \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VM_FUNCTIONS)
 #define cpu_has_vmx_virt_exceptions \
-    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
+    (cpu_has_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
 #define cpu_has_vmx_pml \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML)
 #define cpu_has_vmx_mpx \
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 7b8ee45edf..3595bb379a 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1130,7 +1130,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         if ( !is_hvm_domain(d) || subleaf != 0 )
             break;
 
-        if ( cpu_has_vmx_apic_reg_virt )
+        if ( cpu_has_vmx && cpu_has_vmx_apic_reg_virt )
             res->a |= XEN_HVM_CPUID_APIC_ACCESS_VIRT;
 
         /*
@@ -1139,7 +1139,8 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
          * and wrmsr in the guest will run without VMEXITs (see
          * vmx_vlapic_msr_changed()).
          */
-        if ( cpu_has_vmx_virtualize_x2apic_mode &&
+        if ( cpu_has_vmx &&
+             cpu_has_vmx_virtualize_x2apic_mode &&
              cpu_has_vmx_apic_reg_virt &&
              cpu_has_vmx_virtual_intr_delivery )
             res->a |= XEN_HVM_CPUID_X2APIC_VIRT;
-- 
2.25.1
Re: [XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
Posted by Stefano Stabellini 6 months, 1 week ago
On Wed, 15 May 2024, Sergiy Kibrik wrote:
> There are several places in common code, outside of arch/x86/hvm/vmx,
> where cpu_has_vmx_* macros get accessed without first checking whether
> VMX is present. We may want to guard these macros, as they read global
> variables defined inside VMX-specific files -- so that VMX can be made
> optional later on.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> ---
> Here I've tried a different approach from the previous patches [1,2] -- instead
> of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:
>  1) not touch the SVM part at all, because, as Andrew pointed out, those
> macros are used inside arch/x86/hvm/svm only;
>  2) track the several places in common code where cpu_has_vmx_* features are
> checked and guard them with a cpu_has_vmx condition;
>  3) for the two cpu_has_vmx_* macros that are used in common code in a somewhat
> trickier way, integrate the cpu_has_vmx condition into the macros themselves,
> rather than making already complex conditionals even more complicated.
> 
> This patch aims to replace [1,2] from the v1 series by doing the steps above.
> 
>  1. https://lore.kernel.org/xen-devel/20240416064402.3469959-1-Sergiy_Kibrik@epam.com/
>  2. https://lore.kernel.org/xen-devel/20240416064606.3470052-1-Sergiy_Kibrik@epam.com/

I am missing some of the previous discussions but why can't we just fix
all of the cpu_has_vmx_* #defines in vmcs.h to also check for
cpu_has_vmx?

That seems easier and simpler than adding ad-hoc checks at the invocations?
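(To make the suggestion concrete, a minimal sketch of what a centralised
check in vmcs.h might look like -- reminiscent of the vmx_ctrl_has_feature()
helper dropped after v1; the wrapper macro below is purely illustrative and
not an existing one:)

    /* hypothetical helper, applied uniformly to every feature macro */
    #define vmx_ctrl_has(ctrl, bit) (cpu_has_vmx && ((ctrl) & (bit)))

    #define cpu_has_vmx_msr_bitmap \
        vmx_ctrl_has(vmx_cpu_based_exec_control, CPU_BASED_ACTIVATE_MSR_BITMAP)
    #define cpu_has_vmx_virt_exceptions \
        vmx_ctrl_has(vmx_secondary_exec_control, SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)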


> [..]
Re: [XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
Posted by Jan Beulich 6 months, 1 week ago
On 16.05.2024 02:50, Stefano Stabellini wrote:
> On Wed, 15 May 2024, Sergiy Kibrik wrote:
>> There are several places in common code, outside of arch/x86/hvm/vmx,
>> where cpu_has_vmx_* macros get accessed without first checking whether
>> VMX is present. We may want to guard these macros, as they read global
>> variables defined inside VMX-specific files -- so that VMX can be made
>> optional later on.
>>
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> ---
>> Here I've tried a different approach from the previous patches [1,2] -- instead
>> of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:
>>  1) not touch the SVM part at all, because, as Andrew pointed out, those
>> macros are used inside arch/x86/hvm/svm only;
>>  2) track the several places in common code where cpu_has_vmx_* features are
>> checked and guard them with a cpu_has_vmx condition;
>>  3) for the two cpu_has_vmx_* macros that are used in common code in a somewhat
>> trickier way, integrate the cpu_has_vmx condition into the macros themselves,
>> rather than making already complex conditionals even more complicated.
>>
>> This patch aims to replace [1,2] from the v1 series by doing the steps above.
>>
>>  1. https://lore.kernel.org/xen-devel/20240416064402.3469959-1-Sergiy_Kibrik@epam.com/
>>  2. https://lore.kernel.org/xen-devel/20240416064606.3470052-1-Sergiy_Kibrik@epam.com/
> 
> I am missing some of the previous discussions but why can't we just fix
> all of the cpu_has_vmx_* #defines in vmcs.h to also check for
> cpu_has_vmx?
> 
> That seems easier and simpler than adding ad-hoc checks at the invocations?

I'd like to take the question one step further: following 0b5f149338e3
("x86/HVM: hide SVM/VMX when their enabling is prohibited by firmware"),
is this change needed at all? IOW, is there a path left where cpu_has_vmx
may be false, but any cpu_has_vmx_* may still yield true?

Jan
Re: [XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
Posted by Sergiy Kibrik 5 months, 4 weeks ago
16.05.24 10:32, Jan Beulich:
> On 16.05.2024 02:50, Stefano Stabellini wrote:
>> On Wed, 15 May 2024, Sergiy Kibrik wrote:
>>> There are several places in common code, outside of arch/x86/hvm/vmx,
>>> where cpu_has_vmx_* macros get accessed without first checking whether
>>> VMX is present. We may want to guard these macros, as they read global
>>> variables defined inside VMX-specific files -- so that VMX can be made
>>> optional later on.
>>>
>>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> Here I've tried a different approach from the previous patches [1,2] -- instead
>>> of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:
>>>   1) not touch the SVM part at all, because, as Andrew pointed out, those
>>> macros are used inside arch/x86/hvm/svm only;
>>>   2) track the several places in common code where cpu_has_vmx_* features are
>>> checked and guard them with a cpu_has_vmx condition;
>>>   3) for the two cpu_has_vmx_* macros that are used in common code in a somewhat
>>> trickier way, integrate the cpu_has_vmx condition into the macros themselves,
>>> rather than making already complex conditionals even more complicated.
>>>
>>> This patch aims to replace [1,2] from the v1 series by doing the steps above.
>>>
[..]
>>
>> I am missing some of the previous discussions but why can't we just fix
>> all of the cpu_has_vmx_* #defines in vmcs.h to also check for
>> cpu_has_vmx?
>>
>> That seems easier and simpler than adding ad-hoc checks at the invocations?
> 
> I'd like to take the question one step further: following 0b5f149338e3
> ("x86/HVM: hide SVM/VMX when their enabling is prohibited by firmware"),
> is this change needed at all? IOW, is there a path left where cpu_has_vmx
> may be false, but any cpu_has_vmx_* may still yield true?
> 

This change is about the exec control variables (vmx_secondary_exec_control,
vmx_pin_based_exec_control, etc.) not being built, because they live in VMX
code but are accessed from common code. The description is probably unclear
about that.
Also, the VMX-related build issues can be solved differently, without
touching the cpu_has_vmx_* macros and related logic at all.
I can move the exec control variables from vmcs.c to the common hvm.c; that
would be a simpler change, directly related to the problem I'm having.
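(For readers without the series context, a rough sketch of the build problem
being described; it assumes vmcs.c stops being compiled once VMX support is
configured out, and abridges the declarations to a single variable:)

    /* xen/arch/x86/include/asm/hvm/vmx/vmcs.h (pre-patch form), visible to
       common code */
    extern u32 vmx_cpu_based_exec_control;
    #define cpu_has_vmx_msr_bitmap \
        (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)

    /* xen/arch/x86/hvm/vmx/vmcs.c, a VMX-only translation unit */
    u32 vmx_cpu_based_exec_control __read_mostly;

With vmcs.c no longer built, any use of such a macro from common code
references a variable that is never defined; the cpu_has_vmx guards
(presumably compile-time false in a !VMX build) let the compiler drop those
accesses, while the alternative mentioned above is to define the variables in
code that is always built, such as hvm.c.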

   -Sergiy
Re: [XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
Posted by Jan Beulich 5 months, 4 weeks ago
On 29.05.2024 12:58, Sergiy Kibrik wrote:
> 16.05.24 10:32, Jan Beulich:
>> On 16.05.2024 02:50, Stefano Stabellini wrote:
>>> On Wed, 15 May 2024, Sergiy Kibrik wrote:
>>>> There are several places in common code, outside of arch/x86/hvm/vmx,
>>>> where cpu_has_vmx_* macros get accessed without first checking whether
>>>> VMX is present. We may want to guard these macros, as they read global
>>>> variables defined inside VMX-specific files -- so that VMX can be made
>>>> optional later on.
>>>>
>>>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> Here I've tried a different approach from the previous patches [1,2] -- instead
>>>> of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:
>>>>   1) not touch the SVM part at all, because, as Andrew pointed out, those
>>>> macros are used inside arch/x86/hvm/svm only;
>>>>   2) track the several places in common code where cpu_has_vmx_* features are
>>>> checked and guard them with a cpu_has_vmx condition;
>>>>   3) for the two cpu_has_vmx_* macros that are used in common code in a somewhat
>>>> trickier way, integrate the cpu_has_vmx condition into the macros themselves,
>>>> rather than making already complex conditionals even more complicated.
>>>>
>>>> This patch aims to replace [1,2] from the v1 series by doing the steps above.
>>>>
> [..]
>>>
>>> I am missing some of the previous discussions but why can't we just fix
>>> all of the cpu_has_vmx_* #defines in vmcs.h to also check for
>>> cpu_has_vmx?
>>>
>>> That seems easier and simpler than adding ad-hoc checks at the invocations?
>>
>> I'd like to take the question one step further: following 0b5f149338e3
>> ("x86/HVM: hide SVM/VMX when their enabling is prohibited by firmware"),
>> is this change needed at all? IOW, is there a path left where cpu_has_vmx
>> may be false, but any cpu_has_vmx_* may still yield true?
>>
> 
> This change is about the exec control variables (vmx_secondary_exec_control,
> vmx_pin_based_exec_control, etc.) not being built, because they live in VMX
> code but are accessed from common code. The description is probably unclear
> about that.
> Also, the VMX-related build issues can be solved differently, without
> touching the cpu_has_vmx_* macros and related logic at all.
> I can move the exec control variables from vmcs.c to the common hvm.c; that
> would be a simpler change, directly related to the problem I'm having.

That would be moving them one layer too high. Proper disentangling then
will need to wait until that data is actually part of the (host) CPU
policy. For the time being, your change may thus be acceptable, assuming
that we won't be very quick in doing said move.

Jan