From: Xenia Ragiadakou <burzalodowa@gmail.com>
Replace the cpu_has_vmx check with using_vmx(), so that VMX support is not
only checked at runtime, but the availability of vmx_vmcs_enter() and
vmx_vmcs_exit() is also ensured at build time.
Also, since using_vmx() checks CONFIG_VMX, which in turn depends on
CONFIG_HVM, the surrounding #ifdef CONFIG_HVM guards can be dropped.
Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
changes in v4:
- adjusted call to using_vmx(), as it has become an inline function
- added tag
- description changed a bit for more clarity
changes in v3:
- using_vmx() instead of IS_ENABLED(CONFIG_VMX)
- updated description
---
xen/arch/x86/traps.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index ee91fc56b1..d2af6d70d2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -676,7 +676,6 @@ void vcpu_show_execution_state(struct vcpu *v)
vcpu_pause(v); /* acceptably dangerous */
-#ifdef CONFIG_HVM
/*
* For VMX special care is needed: Reading some of the register state will
* require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
@@ -684,12 +683,11 @@ void vcpu_show_execution_state(struct vcpu *v)
* region. Despite this being a layering violation, engage the VMCS right
* here. This then also avoids doing so several times in close succession.
*/
- if ( cpu_has_vmx && is_hvm_vcpu(v) )
+ if ( using_vmx() && is_hvm_vcpu(v) )
{
ASSERT(!in_irq());
vmx_vmcs_enter(v);
}
-#endif
/* Prevent interleaving of output. */
flags = console_lock_recursive_irqsave();
@@ -714,10 +712,8 @@ void vcpu_show_execution_state(struct vcpu *v)
console_unlock_recursive_irqrestore(flags);
}
-#ifdef CONFIG_HVM
- if ( cpu_has_vmx && is_hvm_vcpu(v) )
+ if ( using_vmx() && is_hvm_vcpu(v) )
vmx_vmcs_exit(v);
-#endif
vcpu_unpause(v);
}
--
2.25.1