From: Xenia Ragiadakou <burzalodowa@gmail.com>
Replace the cpu_has_svm check with using_svm(), so that SVM support in the CPU
is not only checked at runtime, but the availability of the functions
svm_load_segs() and svm_load_segs_prefetch() is also ensured at build time.
Since SVM depends on HVM, using_svm() can be used on its own, without an
additional CONFIG_HVM guard.
Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
changes in v4:
- changed patch subject line
- adjusted call to using_svm(), as it has become an inline function
- use #ifdef CONFIG_PV
- description changed a bit for more clarity
- added tag
changes in v3:
- using_svm instead of IS_ENABLED(CONFIG_SVM)
- updated description
---
xen/arch/x86/domain.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ccadfe0c9e..05cb9f7a4c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1732,11 +1732,9 @@ static void load_segments(struct vcpu *n)
if ( !(n->arch.flags & TF_kernel_mode) )
SWAP(gsb, gss);
-#ifdef CONFIG_HVM
- if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
+ if ( using_svm() && (uregs->fs | uregs->gs) <= 3 )
fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
n->arch.pv.fs_base, gsb, gss);
-#endif
}
if ( !fs_gs_done )
@@ -2049,9 +2047,9 @@ static void __context_switch(void)
write_ptbase(n);
-#if defined(CONFIG_PV) && defined(CONFIG_HVM)
+#ifdef CONFIG_PV
/* Prefetch the VMCB if we expect to use it later in the context switch */
- if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
+ if ( using_svm() && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
svm_load_segs_prefetch();
#endif
--
2.25.1