This is another series to provide a means to render the CPU virtualisation
technology support in Xen configurable. Currently, irrespective of the target
platform, both AMD-V and Intel VT-x drivers are built. The series adds three
new Kconfig controls, ALTP2M, AMD_SVM and INTEL_VMX, that can be used to
switch to a finer-grained configuration for a given platform, and reduce dead
code. The code separation is done using the new config guards.

Major changes in this series, compared to v4, are the renaming of the config
options from SVM to AMD_SVM and from VMX to INTEL_VMX -- the way they were
named in the initial RFC series. The ioreq patch has also been reworked once
again to make it clearer and a bit simpler. More specific changes are provided
in the per-patch changelogs.

v4 series here:
https://lore.kernel.org/xen-devel/cover.1720501197.git.Sergiy_Kibrik@epam.com/

  -Sergiy

Sergiy Kibrik (6):
  x86/monitor: guard altp2m usage
  x86: introduce CONFIG_ALTP2M Kconfig option
  x86: introduce using_{svm,vmx}() helpers
  x86/vmx: guard access to cpu_has_vmx_* in common code
  x86/vpmu: guard calls to vmx/svm functions
  x86/vmx: replace CONFIG_HVM with CONFIG_INTEL_VMX in vmx.h

Xenia Ragiadakou (7):
  x86: introduce AMD-V and Intel VT-x Kconfig options
  x86/p2m: guard EPT functions with using_vmx() check
  x86/traps: guard vmx specific functions with using_vmx() check
  x86/PV: guard svm specific functions with using_svm() check
  x86/oprofile: guard svm specific symbols with CONFIG_AMD_SVM
  ioreq: do not build arch_vcpu_ioreq_completion() for non-VMX configurations
  x86/hvm: make AMD-V and Intel VT-x support configurable

 xen/Kconfig                             |  3 +++
 xen/arch/arm/ioreq.c                    |  6 -----
 xen/arch/x86/Kconfig                    | 32 ++++++++++++++++++++++++
 xen/arch/x86/cpu/vpmu_amd.c             | 11 +++++----
 xen/arch/x86/cpu/vpmu_intel.c           | 32 +++++++++++++-----------
 xen/arch/x86/domain.c                   |  8 +++---
 xen/arch/x86/hvm/Makefile               |  4 +--
 xen/arch/x86/hvm/hvm.c                  |  4 +--
 xen/arch/x86/hvm/ioreq.c                |  2 ++
 xen/arch/x86/hvm/monitor.c              |  4 ++-
 xen/arch/x86/hvm/nestedhvm.c            |  4 +--
 xen/arch/x86/include/asm/altp2m.h       |  5 +++-
 xen/arch/x86/include/asm/hvm/hvm.h      | 12 ++++++++-
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 33 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/vmx/vmx.h  |  2 +-
 xen/arch/x86/include/asm/p2m.h          | 23 +++++++++++++----
 xen/arch/x86/mm/Makefile                |  5 ++--
 xen/arch/x86/mm/hap/Makefile            |  2 +-
 xen/arch/x86/mm/p2m-basic.c             |  4 +--
 xen/arch/x86/oprofile/op_model_athlon.c |  2 +-
 xen/arch/x86/traps.c                    |  8 ++----
 xen/include/xen/ioreq.h                 | 10 ++++++++
 22 files changed, 147 insertions(+), 69 deletions(-)

-- 
2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> Introduce two new Kconfig options, AMD_SVM and INTEL_VMX, to allow code specific to each virtualization technology to be separated and, when not required, stripped. CONFIG_AMD_SVM will be used to enable virtual machine extensions on platforms that implement the AMD Virtualization Technology (AMD-V). CONFIG_INTEL_VMX will be used to enable virtual machine extensions on platforms that implement the Intel Virtualization Technology (Intel VT-x). Both features depend on HVM support. Since, at this point, disabling any of them would cause Xen to not compile, the options are enabled by default if HVM and are not selectable by the user. No functional change intended. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name SVM/VMX -> AMD_SVM/INTEL_VMX changes in v3: - tag added changes in v2: - simplify kconfig expression to def_bool HVM - keep file list in Makefile in alphabetical order --- xen/arch/x86/Kconfig | 6 ++++++ xen/arch/x86/hvm/Makefile | 4 ++-- xen/arch/x86/mm/Makefile | 3 ++- xen/arch/x86/mm/hap/Makefile | 2 +- 4 files changed, 11 insertions(+), 4 deletions(-) diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -XXX,XX +XXX,XX @@ config HVM If unsure, say Y. +config AMD_SVM + def_bool HVM + +config INTEL_VMX + def_bool HVM + config XEN_SHSTK bool "Supervisor Shadow Stacks" depends on HAS_AS_CET_SS diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/hvm/Makefile +++ b/xen/arch/x86/hvm/Makefile @@ -XXX,XX +XXX,XX @@ -obj-y += svm/ -obj-y += vmx/ +obj-$(CONFIG_AMD_SVM) += svm/ +obj-$(CONFIG_INTEL_VMX) += vmx/ obj-y += viridian/ obj-y += asid.o diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/mm/Makefile +++ b/xen/arch/x86/mm/Makefile @@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing.o obj-$(CONFIG_HVM) += nested.o obj-$(CONFIG_HVM) += p2m.o obj-y += p2m-basic.o -obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o +obj-$(CONFIG_INTEL_VMX) += p2m-ept.o +obj-$(CONFIG_HVM) += p2m-pod.o p2m-pt.o obj-y += paging.o obj-y += physmap.o diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/mm/hap/Makefile +++ b/xen/arch/x86/mm/hap/Makefile @@ -XXX,XX +XXX,XX @@ obj-y += guest_walk_2.o obj-y += guest_walk_3.o obj-y += guest_walk_4.o obj-y += nested_hap.o -obj-y += nested_ept.o +obj-$(CONFIG_INTEL_VMX) += nested_ept.o -- 2.25.1
Explicitly check whether altp2m is on for the domain when getting the altp2m index. If the explicit call to altp2m_active() always returns false, DCE will remove the call to altp2m_vcpu_idx(). p2m_get_mem_access() expects 0 as the altp2m_idx parameter when altp2m is not active or not supported, so 0 is used as the fallback value then. The purpose of this is to later be able to disable altp2m support and exclude its code from the build completely, when not supported by the target platform (as of now it's supported for VT-x only). Also all other calls to altp2m_vcpu_idx() are guarded by altp2m_active(), so this change puts usage of this routine in line with the rest of the code. Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> CC: Tamas K Lengyel <tamas@tklengyel.com> CC: Jan Beulich <jbeulich@suse.com> --- changes in v5: - changed patch description changes in v2: - patch description changed, removed VMX mentioning - guard by altp2m_active() instead of hvm_altp2m_supported() --- xen/arch/x86/hvm/monitor.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/hvm/monitor.c +++ b/xen/arch/x86/hvm/monitor.c @@ -XXX,XX +XXX,XX @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec, struct vcpu *curr = current; vm_event_request_t req = {}; paddr_t gpa = (gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK)); + unsigned int altp2m_idx = altp2m_active(curr->domain) ? + altp2m_vcpu_idx(curr) : 0; int rc; ASSERT(curr->arch.vm_event->send_event); @@ -XXX,XX +XXX,XX @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec, * p2m_get_mem_access() can fail from a invalid MFN and return -ESRCH * in which case access must be restricted. */ - rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_vcpu_idx(curr)); + rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_idx); if ( rc == -ESRCH ) access = XENMEM_access_n; -- 2.25.1
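To illustrate the dead-code-elimination reasoning above, here is a minimal standalone C sketch (hypothetical names, not Xen code; it assumes an optimising build, as Xen's is): a guard that folds to a compile-time false lets the compiler drop a call to a function that has only a declaration, with 0 left as the fallback index.

#include <stdbool.h>
#include <stdint.h>

#define FEATURE_ENABLED 0   /* stand-in for a feature compiled out via Kconfig */

static inline bool feature_active(void)
{
    return FEATURE_ENABLED;
}

/* Declaration only -- no definition has to be linked in. */
uint16_t feature_index(void);

unsigned int get_index(void)
{
    /*
     * feature_active() folds to a constant false, so the compiler drops
     * the feature_index() call; the missing definition causes no link
     * error, and 0 serves as the fallback index.
     */
    return feature_active() ? feature_index() : 0;
}

int main(void)
{
    return get_index();
}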
Add a new option to make altp2m code inclusion optional. Currently altp2m is implemented for Intel EPT only, so the option is dependent on INTEL_VMX. Also the prompt itself depends on EXPERT=y, so that the option is available for fine-tuning, if one wants to play around with it. Use this option instead of the more generic CONFIG_HVM option. That implies the possibility to build hvm code without altp2m support, hence we need to declare altp2m routines for hvm code to compile successfully (altp2m_vcpu_initialise(), altp2m_vcpu_destroy(), altp2m_vcpu_enable_ve()). Also guard altp2m routines, so that they can be disabled completely in the build -- when the target platform does not actually support altp2m (AMD-V & ARM as of now). Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> CC: Tamas K Lengyel <tamas@tklengyel.com> CC: Stefano Stabellini <sstabellini@kernel.org> --- changes in v5: - change kconfig option name VMX -> INTEL_VMX changes in v4: - move static inline stub for p2m_altp2m_check() from under CONFIG_HVM under CONFIG_ALTP2M - keep AP2MGET_prepopulate/AP2MGET_query under CONFIG_ALTP2M as Jan suggested changes in v3: - added help text - use conditional prompt depending on EXPERT=y - corrected & extended patch description - put a blank line before #ifdef CONFIG_ALTP2M - squashed in a separate patch for guarding altp2m code with CONFIG_ALTP2M option --- xen/arch/x86/Kconfig | 11 +++++++++++ xen/arch/x86/include/asm/altp2m.h | 5 ++++- xen/arch/x86/include/asm/hvm/hvm.h | 2 +- xen/arch/x86/include/asm/p2m.h | 23 ++++++++++++++++++----- xen/arch/x86/mm/Makefile | 2 +- 5 files changed, 35 insertions(+), 8 deletions(-) diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -XXX,XX +XXX,XX @@ config REQUIRE_NX was unavailable. However, if enabled, Xen will no longer boot on any CPU which is lacking NX support. +config ALTP2M + bool "Alternate P2M support" if EXPERT + default y + depends on INTEL_VMX + help + Alternate-p2m allows a guest to manage multiple p2m guest physical + "memory views" (as opposed to a single p2m). + Useful for memory introspection. + + If unsure, stay with defaults. + endmenu source "common/Kconfig" diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm/altp2m.h index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/include/asm/altp2m.h +++ b/xen/arch/x86/include/asm/altp2m.h @@ -XXX,XX +XXX,XX @@ #ifndef __ASM_X86_ALTP2M_H #define __ASM_X86_ALTP2M_H -#ifdef CONFIG_HVM +#ifdef CONFIG_ALTP2M #include <xen/types.h> #include <xen/sched.h> /* for struct vcpu, struct domain */ @@ -XXX,XX +XXX,XX @@ static inline bool altp2m_active(const struct domain *d) /* Only declaration is needed. DCE will optimise it out when linking.
*/ uint16_t altp2m_vcpu_idx(const struct vcpu *v); +void altp2m_vcpu_initialise(struct vcpu *v); +void altp2m_vcpu_destroy(struct vcpu *v); +int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn); void altp2m_vcpu_disable_ve(struct vcpu *v); #endif diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/include/asm/hvm/hvm.h +++ b/xen/arch/x86/include/asm/hvm/hvm.h @@ -XXX,XX +XXX,XX @@ static inline bool hvm_hap_supported(void) /* returns true if hardware supports alternate p2m's */ static inline bool hvm_altp2m_supported(void) { - return hvm_funcs.caps.altp2m; + return IS_ENABLED(CONFIG_ALTP2M) && hvm_funcs.caps.altp2m; } /* Returns true if we have the minimum hardware requirements for nested virt */ diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/include/asm/p2m.h +++ b/xen/arch/x86/include/asm/p2m.h @@ -XXX,XX +XXX,XX @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn) return _gfn(mfn_x(mfn)); } -#ifdef CONFIG_HVM +#ifdef CONFIG_ALTP2M #define AP2MGET_prepopulate true #define AP2MGET_query false @@ -XXX,XX +XXX,XX @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn) int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn, p2m_type_t *t, p2m_access_t *a, bool prepopulate); +#else +static inline int _altp2m_get_effective_entry(struct p2m_domain *ap2m, + gfn_t gfn, mfn_t *mfn, + p2m_type_t *t, p2m_access_t *a) +{ + ASSERT_UNREACHABLE(); + return -EOPNOTSUPP; +} +#define altp2m_get_effective_entry(ap2m, gfn, mfn, t, a, prepopulate) \ + _altp2m_get_effective_entry(ap2m, gfn, mfn, t, a) #endif /* Init the datastructures for later use by the p2m code */ @@ -XXX,XX +XXX,XX @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx) /* Switch alternate p2m for a single vcpu */ bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx); -/* Check to see if vcpu should be switched to a different p2m. */ -void p2m_altp2m_check(struct vcpu *v, uint16_t idx); - /* Flush all the alternate p2m's for a domain */ void p2m_flush_altp2m(struct domain *d); @@ -XXX,XX +XXX,XX @@ int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx, uint8_t visible); #else /* !CONFIG_HVM */ struct p2m_domain *p2m_get_altp2m(struct vcpu *v); -static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {} #endif /* CONFIG_HVM */ +#ifdef CONFIG_ALTP2M +/* Check to see if vcpu should be switched to a different p2m. */ +void p2m_altp2m_check(struct vcpu *v, uint16_t idx); +#else +static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {} +#endif + /* p2m access to IOMMU flags */ static inline unsigned int p2m_access_to_iommu_flags(p2m_access_t p2ma) { diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/mm/Makefile +++ b/xen/arch/x86/mm/Makefile @@ -XXX,XX +XXX,XX @@ obj-y += shadow/ obj-$(CONFIG_HVM) += hap/ -obj-$(CONFIG_HVM) += altp2m.o +obj-$(CONFIG_ALTP2M) += altp2m.o obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o obj-$(CONFIG_MEM_ACCESS) += mem_access.o -- 2.25.1
As we now have AMD_SVM/INTEL_VMX config options for enabling/disabling these features completely in the build, we need some build-time checks to ensure that vmx/svm code can be used and things compile. Macros cpu_has_{svm,vmx} used to be doing such checks at runtime, however they do not check if SVM/VMX support is enabled in the build. Also cpu_has_{svm,vmx} can potentially be called from non-{VMX,SVM} build yet running on {VMX,SVM}-enabled CPU, so would correctly indicate that VMX/SVM is indeed supported by CPU, but code to drive it can't be used. New routines using_{vmx,svm}() indicate that both CPU _and_ build provide corresponding technology support, while cpu_has_{vmx,svm} still remains for informational runtime purpose, just as their naming suggests. These new helpers are used right away in several sites, namely guard calls to start_nested_{svm,vmx} and start_{svm,vmx} to fix a build when INTEL_VMX=n or AMD_SVM=n. Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name SVM/VMX -> AMD_SVM/INTEL_VMX changes in v4: - make using_{vmx,svm} static inline functions instead of macros - squash patch with 2 other patches where using_{vmx,svm} are being used - changed patch description changes in v3: - introduce separate macros instead of modifying behaviour of cpu_has_{vmx,svm} --- xen/arch/x86/hvm/hvm.c | 4 ++-- xen/arch/x86/hvm/nestedhvm.c | 4 ++-- xen/arch/x86/include/asm/hvm/hvm.h | 10 ++++++++++ 3 files changed, 14 insertions(+), 4 deletions(-) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -XXX,XX +XXX,XX @@ static int __init cf_check hvm_enable(void) { const struct hvm_function_table *fns = NULL; - if ( cpu_has_vmx ) + if ( using_vmx() ) fns = start_vmx(); - else if ( cpu_has_svm ) + else if ( using_svm() ) fns = start_svm(); if ( fns == NULL ) diff --git a/xen/arch/x86/hvm/nestedhvm.c b/xen/arch/x86/hvm/nestedhvm.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/hvm/nestedhvm.c +++ b/xen/arch/x86/hvm/nestedhvm.c @@ -XXX,XX +XXX,XX @@ static int __init cf_check nestedhvm_setup(void) * done, so that if (for example) HAP is disabled, nested virt is * disabled as well. */ - if ( cpu_has_vmx ) + if ( using_vmx() ) start_nested_vmx(&hvm_funcs); - else if ( cpu_has_svm ) + else if ( using_svm() ) start_nested_svm(&hvm_funcs); return 0; diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/include/asm/hvm/hvm.h +++ b/xen/arch/x86/include/asm/hvm/hvm.h @@ -XXX,XX +XXX,XX @@ int hvm_copy_context_and_params(struct domain *dst, struct domain *src); int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value); +static inline bool using_vmx(void) +{ + return IS_ENABLED(CONFIG_INTEL_VMX) && cpu_has_vmx; +} + +static inline bool using_svm(void) +{ + return IS_ENABLED(CONFIG_AMD_SVM) && cpu_has_svm; +} + #ifdef CONFIG_HVM #define hvm_get_guest_tsc(v) hvm_get_guest_tsc_fixed(v, 0) -- 2.25.1
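As an aside, the shape of these helpers can be shown with a standalone sketch (hypothetical names, not the Xen sources; assumes an optimising build). Unlike an #ifdef, the guarded branch is still parsed and type-checked in every configuration, yet it is discarded when the build-time constant is 0, so a start_*()-style function needs only a declaration in builds where its driver is not compiled.

#include <stdbool.h>
#include <stddef.h>

#define BUILT_WITH_FEATURE_X 0        /* stand-in for IS_ENABLED(CONFIG_...) */
static bool cpu_has_feature_x;        /* stand-in for the CPUID-derived flag */

struct fn_table { int (*init)(void); };

/* Declaration only; the driver providing it may be configured out. */
const struct fn_table *start_feature_x(void);

static inline bool using_feature_x(void)
{
    /* True only if support is both compiled in and present in the CPU. */
    return BUILT_WITH_FEATURE_X && cpu_has_feature_x;
}

const struct fn_table *pick_fns(void)
{
    if ( using_feature_x() )          /* branch folds away when disabled */
        return start_feature_x();
    return NULL;
}

int main(void)
{
    return pick_fns() == NULL ? 0 : 1;
}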
From: Xenia Ragiadakou <burzalodowa@gmail.com> Replace cpu_has_vmx check with using_vmx(), so that DCE would remove calls to functions ept_p2m_init() and ept_p2m_uninit() on non-VMX build. Since currently Intel EPT implementation depends on CONFIG_INTEL_VMX config option, when VMX is off these functions are unavailable. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v5: - changed description changes in v4: - changed description a bit - added tag - adjusted call to using_vmx(), as it has become an inline function changes in v3: - using_vmx instead of IS_ENABLED(CONFIG_VMX) - updated description --- xen/arch/x86/mm/p2m-basic.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/mm/p2m-basic.c +++ b/xen/arch/x86/mm/p2m-basic.c @@ -XXX,XX +XXX,XX @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m) p2m_pod_init(p2m); p2m_nestedp2m_init(p2m); - if ( hap_enabled(d) && cpu_has_vmx ) + if ( hap_enabled(d) && using_vmx() ) ret = ept_p2m_init(p2m); else p2m_pt_init(p2m); @@ -XXX,XX +XXX,XX @@ struct p2m_domain *p2m_init_one(struct domain *d) void p2m_free_one(struct p2m_domain *p2m) { p2m_free_logdirty(p2m); - if ( hap_enabled(p2m->domain) && cpu_has_vmx ) + if ( hap_enabled(p2m->domain) && using_vmx() ) ept_p2m_uninit(p2m); free_cpumask_var(p2m->dirty_cpumask); xfree(p2m); -- 2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> Replace cpu_has_vmx check with using_vmx(), so that not only VMX support in CPU is being checked at runtime, but also at build time we ensure the availability of functions vmx_vmcs_enter() & vmx_vmcs_exit(). Also since CONFIG_VMX is checked in using_vmx and it depends on CONFIG_HVM, we can drop #ifdef CONFIG_HVM lines around using_vmx. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v4: - adjusted call to using_vmx(), as it has become an inline function - added tag - description changed a bit for more clarity changes in v3: -using_vmx instead of IS_ENABLED(CONFIG_VMX) - updated description --- xen/arch/x86/traps.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/traps.c +++ b/xen/arch/x86/traps.c @@ -XXX,XX +XXX,XX @@ void vcpu_show_execution_state(struct vcpu *v) vcpu_pause(v); /* acceptably dangerous */ -#ifdef CONFIG_HVM /* * For VMX special care is needed: Reading some of the register state will * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a @@ -XXX,XX +XXX,XX @@ void vcpu_show_execution_state(struct vcpu *v) * region. Despite this being a layering violation, engage the VMCS right * here. This then also avoids doing so several times in close succession. */ - if ( cpu_has_vmx && is_hvm_vcpu(v) ) + if ( using_vmx() && is_hvm_vcpu(v) ) { ASSERT(!in_irq()); vmx_vmcs_enter(v); } -#endif /* Prevent interleaving of output. */ flags = console_lock_recursive_irqsave(); @@ -XXX,XX +XXX,XX @@ void vcpu_show_execution_state(struct vcpu *v) console_unlock_recursive_irqrestore(flags); } -#ifdef CONFIG_HVM - if ( cpu_has_vmx && is_hvm_vcpu(v) ) + if ( using_vmx() && is_hvm_vcpu(v) ) vmx_vmcs_exit(v); -#endif vcpu_unpause(v); } -- 2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> Replace cpu_has_svm check with using_svm(), so that not only SVM support in CPU is being checked at runtime, but also at build time we ensure the availability of functions svm_load_segs() and svm_load_segs_prefetch(). Since SVM depends on HVM, it can be used alone. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v4: - changed patch subject line - adjusted call to using_svm(), as it has become an inline function - use #ifdef CONFIG_PV - description changed a bit for more clarity - added tag changes in v3: - using_svm instead of IS_ENABLED(CONFIG_SVM) - updated description --- xen/arch/x86/domain.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/domain.c +++ b/xen/arch/x86/domain.c @@ -XXX,XX +XXX,XX @@ static void load_segments(struct vcpu *n) if ( !(n->arch.flags & TF_kernel_mode) ) SWAP(gsb, gss); -#ifdef CONFIG_HVM - if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 ) + if ( using_svm() && (uregs->fs | uregs->gs) <= 3 ) fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n), n->arch.pv.fs_base, gsb, gss); -#endif } if ( !fs_gs_done ) @@ -XXX,XX +XXX,XX @@ static void __context_switch(void) write_ptbase(n); -#if defined(CONFIG_PV) && defined(CONFIG_HVM) +#ifdef CONFIG_PV /* Prefetch the VMCB if we expect to use it later in the context switch */ - if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) ) + if ( using_svm() && is_pv_64bit_domain(nd) && !is_idle_domain(nd) ) svm_load_segs_prefetch(); #endif -- 2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> The symbol svm_stgi_label is AMD-V specific so guard its usage in common code with CONFIG_AMD_SVM. Since SVM depends on HVM, it can be used alone. Also, use #ifdef instead of #if. No functional change intended. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name SVM -> AMD_SVM --- xen/arch/x86/oprofile/op_model_athlon.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xen/arch/x86/oprofile/op_model_athlon.c b/xen/arch/x86/oprofile/op_model_athlon.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/oprofile/op_model_athlon.c +++ b/xen/arch/x86/oprofile/op_model_athlon.c @@ -XXX,XX +XXX,XX @@ static int cf_check athlon_check_ctrs( struct vcpu *v = current; unsigned int const nr_ctrs = model->num_counters; -#if CONFIG_HVM +#ifdef CONFIG_AMD_SVM struct cpu_user_regs *guest_regs = guest_cpu_user_regs(); if (!guest_mode(regs) && -- 2.25.1
There're several places in common code, outside of arch/x86/hvm/vmx, where cpu_has_vmx_* get accessed without checking whether VMX supported first. These macros rely on global variables defined in vmx code, so when VMX support is disabled accesses to these variables turn into build failures. To overcome these failures, build-time check is done before accessing global variables, so that DCE would remove these variables. Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Paul Durrant <paul@xen.org> CC: Andrew Cooper <andrew.cooper3@citrix.com> CC: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name VMX -> INTEL_VMX - do not change .c files, only modify macros in vmcs.h changes in v4: - use IS_ENABLED(CONFIG_VMX) instead of using_vmx changes in v3: - using_vmx instead of cpu_has_vmx - clarify description on why this change needed --- xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 33 ++++++++++++++++--------- 1 file changed, 22 insertions(+), 11 deletions(-) diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h @@ -XXX,XX +XXX,XX @@ extern u64 vmx_ept_vpid_cap; #define cpu_has_wbinvd_exiting \ (vmx_secondary_exec_control & SECONDARY_EXEC_WBINVD_EXITING) #define cpu_has_vmx_virtualize_apic_accesses \ - (vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) #define cpu_has_vmx_tpr_shadow \ (vmx_cpu_based_exec_control & CPU_BASED_TPR_SHADOW) #define cpu_has_vmx_vnmi \ (vmx_pin_based_exec_control & PIN_BASED_VIRTUAL_NMIS) #define cpu_has_vmx_msr_bitmap \ - (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP) #define cpu_has_vmx_secondary_exec_control \ (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) #define cpu_has_vmx_tertiary_exec_control \ @@ -XXX,XX +XXX,XX @@ extern u64 vmx_ept_vpid_cap; #define cpu_has_vmx_dt_exiting \ (vmx_secondary_exec_control & SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING) #define cpu_has_vmx_rdtscp \ - (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_RDTSCP) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_RDTSCP) #define cpu_has_vmx_vpid \ (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID) #define cpu_has_monitor_trap_flag \ - (vmx_cpu_based_exec_control & CPU_BASED_MONITOR_TRAP_FLAG) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_cpu_based_exec_control & CPU_BASED_MONITOR_TRAP_FLAG) #define cpu_has_vmx_pat \ (vmx_vmentry_control & VM_ENTRY_LOAD_GUEST_PAT) #define cpu_has_vmx_efer \ @@ -XXX,XX +XXX,XX @@ extern u64 vmx_ept_vpid_cap; #define cpu_has_vmx_ple \ (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING) #define cpu_has_vmx_invpcid \ - (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_INVPCID) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_INVPCID) #define cpu_has_vmx_apic_reg_virt \ - (vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT) #define cpu_has_vmx_virtual_intr_delivery \ - (vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + 
vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) #define cpu_has_vmx_virtualize_x2apic_mode \ - (vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) #define cpu_has_vmx_posted_intr_processing \ (vmx_pin_based_exec_control & PIN_BASED_POSTED_INTERRUPT) #define cpu_has_vmx_vmcs_shadowing \ @@ -XXX,XX +XXX,XX @@ extern u64 vmx_ept_vpid_cap; #define cpu_has_vmx_vmfunc \ (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VM_FUNCTIONS) #define cpu_has_vmx_virt_exceptions \ - (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS) #define cpu_has_vmx_pml \ (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML) #define cpu_has_vmx_mpx \ - ((vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) && \ + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + (vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) && \ (vmx_vmentry_control & VM_ENTRY_LOAD_BNDCFGS)) #define cpu_has_vmx_xsaves \ - (vmx_secondary_exec_control & SECONDARY_EXEC_XSAVES) + (IS_ENABLED(CONFIG_INTEL_VMX) && \ + vmx_secondary_exec_control & SECONDARY_EXEC_XSAVES) #define cpu_has_vmx_tsc_scaling \ (vmx_secondary_exec_control & SECONDARY_EXEC_TSC_SCALING) #define cpu_has_vmx_bus_lock_detection \ -- 2.25.1
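The same folding works for data references, which is what makes guarding the macros themselves sufficient. A standalone sketch (hypothetical names, not the Xen headers; assumes an optimising build) of how a leading compile-time constant lets the compiler drop the read of a variable whose definition lives in an object file that is no longer built:

#include <stdbool.h>
#include <stdint.h>

#define BUILT_WITH_FEATURE_X 0        /* stand-in for IS_ENABLED(CONFIG_INTEL_VMX) */

/* Pretend this is defined in a driver object that is not built. */
extern uint32_t feature_x_exec_control;

#define CTRL_SOME_BIT 0x0004U

/* Guarded the same way as the cpu_has_vmx_* macros in the patch. */
#define cpu_has_some_bit \
    (BUILT_WITH_FEATURE_X && (feature_x_exec_control & CTRL_SOME_BIT))

bool some_bit_available(void)
{
    /*
     * The leading constant folds the whole expression to false, so the
     * read of feature_x_exec_control is dropped and its (missing)
     * definition is never referenced at link time.
     */
    return cpu_has_some_bit;
}

int main(void)
{
    return some_bit_available();
}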
If VMX/SVM disabled in the build, we may still want to have vPMU drivers for PV guests. Yet in such case before using VMX/SVM features and functions we have to explicitly check if they're available in the build. For this purpose (and also not to complicate conditionals) two helpers introduced -- is_{vmx,svm}_vcpu(v) that check both HVM & VMX/SVM conditions at the same time, and they replace is_hvm_vcpu(v) macro in Intel/AMD PMU drivers. Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> CC: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name SVM/VMX -> AMD_SVM/INTEL_VMX - replace is_hvm_vcpu() with is_{svm,vmx}_vcpu() changes in v4: - use IS_ENABLED(CONFIG_{VMX,SVM}) instead of using_{vmx,svm} - fix typo changes in v3: - introduced macro is_{vmx,svm}_vcpu(v) - changed description - reordered patch, do not modify conditionals w/ cpu_has_vmx_msr_bitmap check --- xen/arch/x86/cpu/vpmu_amd.c | 11 ++++++----- xen/arch/x86/cpu/vpmu_intel.c | 32 +++++++++++++++++--------------- 2 files changed, 23 insertions(+), 20 deletions(-) diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/cpu/vpmu_amd.c +++ b/xen/arch/x86/cpu/vpmu_amd.c @@ -XXX,XX +XXX,XX @@ #define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT)) #define set_guest_mode(msr) ((msr) |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT)) #define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH - 1)))) +#define is_svm_vcpu(v) (IS_ENABLED(CONFIG_AMD_SVM) && is_hvm_vcpu(v)) static unsigned int __read_mostly num_counters; static const u32 __read_mostly *counters; @@ -XXX,XX +XXX,XX @@ static int cf_check amd_vpmu_save(struct vcpu *v, bool to_guest) context_save(v); - if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) && + if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) ) amd_vpmu_unset_msr_bitmap(v); @@ -XXX,XX +XXX,XX @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) return -EINVAL; /* For all counters, enable guest only mode for HVM guest */ - if ( is_hvm_vcpu(v) && (type == MSR_TYPE_CTRL) && + if ( is_svm_vcpu(v) && (type == MSR_TYPE_CTRL) && !is_guest_mode(msr_content) ) { set_guest_mode(msr_content); @@ -XXX,XX +XXX,XX @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) return 0; vpmu_set(vpmu, VPMU_RUNNING); - if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) ) + if ( is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) ) amd_vpmu_set_msr_bitmap(v); } @@ -XXX,XX +XXX,XX @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) ) { vpmu_reset(vpmu, VPMU_RUNNING); - if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) ) + if ( is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) ) amd_vpmu_unset_msr_bitmap(v); release_pmu_ownership(PMU_OWNER_HVM); } @@ -XXX,XX +XXX,XX @@ static void cf_check amd_vpmu_destroy(struct vcpu *v) { struct vpmu_struct *vpmu = vcpu_vpmu(v); - if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) ) + if ( is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) ) amd_vpmu_unset_msr_bitmap(v); xfree(vpmu->context); diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/cpu/vpmu_intel.c +++ b/xen/arch/x86/cpu/vpmu_intel.c @@ -XXX,XX +XXX,XX @@ #define MSR_PMC_ALIAS_MASK (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0)) static bool __read_mostly full_width_write; +#define is_vmx_vcpu(v) 
(IS_ENABLED(CONFIG_INTEL_VMX) && is_hvm_vcpu(v)) + /* * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed * counters. 4 bits for every counter. @@ -XXX,XX +XXX,XX @@ static inline void __core2_vpmu_save(struct vcpu *v) rdmsrl(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control); } - if ( !is_hvm_vcpu(v) ) + if ( !is_vmx_vcpu(v) ) rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status); /* Save MSR to private context to make it fork-friendly */ - else if ( mem_sharing_enabled(v->domain) ) + else if ( is_vmx_vcpu(v) && mem_sharing_enabled(v->domain) ) vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, &core2_vpmu_cxt->global_ctrl); } @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_save(struct vcpu *v, bool to_guest) { struct vpmu_struct *vpmu = vcpu_vpmu(v); - if ( !is_hvm_vcpu(v) ) + if ( !is_vmx_vcpu(v) ) wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0); if ( !vpmu_are_all_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) ) @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_save(struct vcpu *v, bool to_guest) __core2_vpmu_save(v); /* Unset PMU MSR bitmap to trap lazy load. */ - if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) && + if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_vmx_vcpu(v) && cpu_has_vmx_msr_bitmap ) core2_vpmu_unset_msr_bitmap(v); @@ -XXX,XX +XXX,XX @@ static inline void __core2_vpmu_load(struct vcpu *v) if ( vpmu_is_set(vcpu_vpmu(v), VPMU_CPU_HAS_DS) ) wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area); - if ( !is_hvm_vcpu(v) ) + if ( !is_vmx_vcpu(v) ) { wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl); core2_vpmu_cxt->global_ovf_ctrl = 0; wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl); } /* Restore MSR from context when used with a fork */ - else if ( mem_sharing_is_fork(v->domain) ) + else if ( is_vmx_vcpu(v) && mem_sharing_is_fork(v->domain) ) vmx_write_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl); } @@ -XXX,XX +XXX,XX @@ static int core2_vpmu_verify(struct vcpu *v) } if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) && - !(is_hvm_vcpu(v) + !(is_vmx_vcpu(v) ? is_canonical_address(core2_vpmu_cxt->ds_area) : __addr_ok(core2_vpmu_cxt->ds_area)) ) return -EINVAL; @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_alloc_resource(struct vcpu *v) if ( !acquire_pmu_ownership(PMU_OWNER_HVM) ) return 0; - if ( is_hvm_vcpu(v) ) + if ( is_vmx_vcpu(v) ) { if ( vmx_add_host_load_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, 0) ) goto out_err; @@ -XXX,XX +XXX,XX @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index) __core2_vpmu_load(current); vpmu_set(vpmu, VPMU_CONTEXT_LOADED); - if ( is_hvm_vcpu(current) && cpu_has_vmx_msr_bitmap ) + if ( is_vmx_vcpu(current) && cpu_has_vmx_msr_bitmap ) core2_vpmu_set_msr_bitmap(current); } return 1; @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) return -EINVAL; if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) ) { - if ( !(is_hvm_vcpu(v) ? is_canonical_address(msr_content) + if ( !(is_vmx_vcpu(v) ? 
is_canonical_address(msr_content) : __addr_ok(msr_content)) ) { gdprintk(XENLOG_WARNING, @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) if ( msr_content & fixed_ctrl_mask ) return -EINVAL; - if ( is_hvm_vcpu(v) ) + if ( is_vmx_vcpu(v) ) vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, &core2_vpmu_cxt->global_ctrl); else @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) if ( blocked ) return -EINVAL; - if ( is_hvm_vcpu(v) ) + if ( is_vmx_vcpu(v) ) vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, &core2_vpmu_cxt->global_ctrl); else @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content) wrmsrl(msr, msr_content); else { - if ( is_hvm_vcpu(v) ) + if ( is_vmx_vcpu(v) ) vmx_write_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, msr_content); else wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content); @@ -XXX,XX +XXX,XX @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content) *msr_content = core2_vpmu_cxt->global_status; break; case MSR_CORE_PERF_GLOBAL_CTRL: - if ( is_hvm_vcpu(v) ) + if ( is_vmx_vcpu(v) ) vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, msr_content); else rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content); @@ -XXX,XX +XXX,XX @@ static void cf_check core2_vpmu_destroy(struct vcpu *v) vpmu->context = NULL; xfree(vpmu->priv_context); vpmu->priv_context = NULL; - if ( is_hvm_vcpu(v) && cpu_has_vmx_msr_bitmap ) + if ( is_vmx_vcpu(v) && cpu_has_vmx_msr_bitmap ) core2_vpmu_unset_msr_bitmap(v); release_pmu_ownership(PMU_OWNER_HVM); vpmu_clear(vpmu); -- 2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> VIO_realmode_completion is specific to vmx realmode and thus the function arch_vcpu_ioreq_completion() has actual handling work only in VMX-enabled build, as for the rest x86 and ARM build configurations it is basically a stub. Here a separate configuration option ARCH_IOREQ_COMPLETION introduced that tells whether the platform we're building for requires any specific ioreq completion handling. As of now only VMX has such requirement, so the option is selected by INTEL_VMX, for other configurations a generic default stub is provided (it is ARM's version of arch_vcpu_ioreq_completion() moved to common header). Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> CC: Julien Grall <julien@xen.org> CC: Jan Beulich <jbeulich@suse.com> --- changes in v5: - introduce ARCH_IOREQ_COMPLETION option & put arch_vcpu_ioreq_completion() under it - description changed changes in v4: - move whole arch_vcpu_ioreq_completion() under CONFIG_VMX and remove ARM's variant of this handler, as Julien suggested changes in v1: - put VIO_realmode_completion enum under #ifdef CONFIG_VMX --- xen/Kconfig | 3 +++ xen/arch/arm/ioreq.c | 6 ------ xen/arch/x86/Kconfig | 1 + xen/arch/x86/hvm/ioreq.c | 2 ++ xen/include/xen/ioreq.h | 10 ++++++++++ 5 files changed, 16 insertions(+), 6 deletions(-) diff --git a/xen/Kconfig b/xen/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/Kconfig +++ b/xen/Kconfig @@ -XXX,XX +XXX,XX @@ config LTO config ARCH_SUPPORTS_INT128 bool +config ARCH_IOREQ_COMPLETION + bool + source "Kconfig.debug" diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/arm/ioreq.c +++ b/xen/arch/arm/ioreq.c @@ -XXX,XX +XXX,XX @@ bool arch_ioreq_complete_mmio(void) return false; } -bool arch_vcpu_ioreq_completion(enum vio_completion completion) -{ - ASSERT_UNREACHABLE(); - return true; -} - /* * The "legacy" mechanism of mapping magic pages for the IOREQ servers * is x86 specific, so the following hooks don't need to be implemented on Arm: diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -XXX,XX +XXX,XX @@ config AMD_SVM config INTEL_VMX def_bool HVM + select ARCH_IOREQ_COMPLETION config XEN_SHSTK bool "Supervisor Shadow Stacks" diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -XXX,XX +XXX,XX @@ bool arch_ioreq_complete_mmio(void) return handle_mmio(); } +#ifdef CONFIG_ARCH_IOREQ_COMPLETION bool arch_vcpu_ioreq_completion(enum vio_completion completion) { switch ( completion ) @@ -XXX,XX +XXX,XX @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion) return true; } +#endif static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s) { diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index XXXXXXX..XXXXXXX 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -XXX,XX +XXX,XX @@ void ioreq_domain_init(struct domain *d); int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op); bool arch_ioreq_complete_mmio(void); + +#ifdef CONFIG_ARCH_IOREQ_COMPLETION bool arch_vcpu_ioreq_completion(enum vio_completion completion); +#else +static inline bool arch_vcpu_ioreq_completion(enum vio_completion completion) +{ + ASSERT_UNREACHABLE(); + return true; +} +#endif + int arch_ioreq_server_map_pages(struct ioreq_server 
*s); void arch_ioreq_server_unmap_pages(struct ioreq_server *s); void arch_ioreq_server_enable(struct ioreq_server *s); -- 2.25.1
Now that we have a separate config option for VMX, which itself depends on CONFIG_HVM, we need to use it to provide vmx_pi_hooks_{assign,deassign} stubs for the case when VMX is disabled while HVM is enabled. Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name VMX -> INTEL_VMX changes in v4: - added tag changes in v3: - use CONFIG_VMX instead of CONFIG_HVM to provide stubs, instead of guarding calls to vmx_pi_hooks_{assign,deassign} in iommu/vt-d code --- xen/arch/x86/include/asm/hvm/vmx/vmx.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/include/asm/hvm/vmx/vmx.h index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h @@ -XXX,XX +XXX,XX @@ void vmx_pi_desc_fixup(unsigned int cpu); void vmx_sync_exit_bitmap(struct vcpu *v); -#ifdef CONFIG_HVM +#ifdef CONFIG_INTEL_VMX void vmx_pi_hooks_assign(struct domain *d); void vmx_pi_hooks_deassign(struct domain *d); #else -- 2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> Provide the user with configuration control over the cpu virtualization support in Xen by making AMD_SVM and INTEL_VMX options user selectable. To preserve the current default behavior, both options depend on HVM and default to value of HVM. To prevent users from unknowingly disabling virtualization support, make the controls user selectable only if EXPERT is enabled. No functional change intended. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v5: - change kconfig option name SVM/VMX -> AMD_SVM/INTEL_VMX changes in v3: - only tags added changes in v2: - remove dependency of build options IOMMU/AMD_IOMMU on VMX/SVM options --- xen/arch/x86/Kconfig | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -XXX,XX +XXX,XX @@ config HVM If unsure, say Y. config AMD_SVM - def_bool HVM + bool "AMD-V" if EXPERT + depends on HVM + default HVM + help + Enables virtual machine extensions on platforms that implement the + AMD Virtualization Technology (AMD-V). + If your system includes a processor with AMD-V support, say Y. + If in doubt, say Y. config INTEL_VMX - def_bool HVM + bool "Intel VT-x" if EXPERT + depends on HVM + default HVM select ARCH_IOREQ_COMPLETION + help + Enables virtual machine extensions on platforms that implement the + Intel Virtualization Technology (Intel VT-x). + If your system includes a processor with Intel VT-x support, say Y. + If in doubt, say Y. config XEN_SHSTK bool "Supervisor Shadow Stacks" -- 2.25.1
These are the final two patches of the series for making VMX/SVM support in Xen
configurable:
https://lore.kernel.org/xen-devel/cover.1723110344.git.Sergiy_Kibrik@epam.com/

Minor changes compared to v6; changelogs are provided per patch.

  -Sergiy

Xenia Ragiadakou (2):
  ioreq: do not build arch_vcpu_ioreq_completion() for non-VMX configurations
  x86/hvm: make AMD-V and Intel VT-x support configurable

 xen/Kconfig              |  7 +++++++
 xen/arch/arm/ioreq.c     |  6 ------
 xen/arch/x86/Kconfig     | 19 +++++++++++++++++--
 xen/arch/x86/hvm/ioreq.c |  2 ++
 xen/include/xen/ioreq.h  | 10 ++++++++++
 5 files changed, 36 insertions(+), 8 deletions(-)

-- 
2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> VIO_realmode_completion is specific to vmx realmode and thus the function arch_vcpu_ioreq_completion() has actual handling work only in a VMX-enabled build; for the rest of the x86 and ARM build configurations it is basically a stub. Here a separate configuration option, ARCH_VCPU_IOREQ_COMPLETION, is introduced that tells whether the platform we're building for requires any specific ioreq completion handling. As of now only VMX has such a requirement, so the option is selected by INTEL_VMX; for other configurations a generic default stub is provided (it is ARM's version of arch_vcpu_ioreq_completion() moved to a common header). Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Acked-by: Jan Beulich <jbeulich@suse.com> CC: Julien Grall <julien@xen.org> --- changes in v7: - comment in Kconfig adjusted - fixed patch description - updated tags changes in v6: - rename option ARCH_IOREQ_COMPLETION -> ARCH_VCPU_IOREQ_COMPLETION - put a comment with brief option's description changes in v5: - introduce ARCH_IOREQ_COMPLETION option & put arch_vcpu_ioreq_completion() under it - description changed --- xen/Kconfig | 7 +++++++ xen/arch/arm/ioreq.c | 6 ------ xen/arch/x86/Kconfig | 1 + xen/arch/x86/hvm/ioreq.c | 2 ++ xen/include/xen/ioreq.h | 10 ++++++++++ 5 files changed, 20 insertions(+), 6 deletions(-) diff --git a/xen/Kconfig b/xen/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/Kconfig +++ b/xen/Kconfig @@ -XXX,XX +XXX,XX @@ config LTO config ARCH_SUPPORTS_INT128 bool +# +# For platforms that require specific handling of per-vCPU ioreq completion +# events +# +config ARCH_VCPU_IOREQ_COMPLETION + bool + source "Kconfig.debug" diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/arm/ioreq.c +++ b/xen/arch/arm/ioreq.c @@ -XXX,XX +XXX,XX @@ bool arch_ioreq_complete_mmio(void) return false; } -bool arch_vcpu_ioreq_completion(enum vio_completion completion) -{ - ASSERT_UNREACHABLE(); - return true; -} - /* * The "legacy" mechanism of mapping magic pages for the IOREQ servers * is x86 specific, so the following hooks don't need to be implemented on Arm: diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -XXX,XX +XXX,XX @@ config AMD_SVM config INTEL_VMX def_bool HVM + select ARCH_VCPU_IOREQ_COMPLETION config XEN_SHSTK bool "Supervisor Shadow Stacks" diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -XXX,XX +XXX,XX @@ bool arch_ioreq_complete_mmio(void) return handle_mmio(); } +#ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION bool arch_vcpu_ioreq_completion(enum vio_completion completion) { switch ( completion ) @@ -XXX,XX +XXX,XX @@ bool arch_vcpu_ioreq_completion(enum vio_completion completion) return true; } +#endif static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s) { diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index XXXXXXX..XXXXXXX 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -XXX,XX +XXX,XX @@ void ioreq_domain_init(struct domain *d); int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op); bool arch_ioreq_complete_mmio(void); + +#ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION bool arch_vcpu_ioreq_completion(enum vio_completion completion); +#else +static inline bool
arch_vcpu_ioreq_completion(enum vio_completion completion) +{ + ASSERT_UNREACHABLE(); + return true; +} +#endif + int arch_ioreq_server_map_pages(struct ioreq_server *s); void arch_ioreq_server_unmap_pages(struct ioreq_server *s); void arch_ioreq_server_enable(struct ioreq_server *s); -- 2.25.1
From: Xenia Ragiadakou <burzalodowa@gmail.com> Provide the user with configuration control over the cpu virtualization support in Xen by making AMD_SVM and INTEL_VMX options user selectable. To preserve the current default behavior, both options depend on HVM and default to value of HVM. To prevent users from unknowingly disabling virtualization support, make the controls user selectable only if EXPERT is enabled. No functional change intended. Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Acked-by: Jan Beulich <jbeulich@suse.com> --- changes in v6: - "default y" instead of "default HVM" changes in v5: - change kconfig option name SVM/VMX -> AMD_SVM/INTEL_VMX changes in v3: - only tags added --- xen/arch/x86/Kconfig | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index XXXXXXX..XXXXXXX 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -XXX,XX +XXX,XX @@ config HVM If unsure, say Y. config AMD_SVM - def_bool HVM + bool "AMD-V" if EXPERT + depends on HVM + default y + help + Enables virtual machine extensions on platforms that implement the + AMD Virtualization Technology (AMD-V). + If your system includes a processor with AMD-V support, say Y. + If in doubt, say Y. config INTEL_VMX - def_bool HVM + bool "Intel VT-x" if EXPERT + depends on HVM + default y select ARCH_VCPU_IOREQ_COMPLETION + help + Enables virtual machine extensions on platforms that implement the + Intel Virtualization Technology (Intel VT-x). + If your system includes a processor with Intel VT-x support, say Y. + If in doubt, say Y. config XEN_SHSTK bool "Supervisor Shadow Stacks" -- 2.25.1