[XEN][PATCH v5] x86: make Viridian support optional
Posted by Grygorii Strashko 6 days, 4 hours ago
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Add a config option VIRIDIAN that covers the Viridian (Hyper-V enlightenments)
code within HVM. Calls to viridian functions are guarded by is_viridian_domain()
and related macros. Having this option may be beneficial by reducing the code
footprint of systems that do not use Hyper-V.

[grygorii_strashko@epam.com: fixed NULL pointer deref in
viridian_save_domain_ctxt(); stub viridian_vcpu/domain_init/deinit()]
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Signed-off-by: Grygorii Strashko <grygorii_strashko@epam.com>
---
changes in v6:
- add stubs for viridian_vcpu/domain_init/deinit()
- update Kconfig description
- make set(HVM_PARAM_VIRIDIAN) return -ENODEV
  if (!IS_ENABLED(CONFIG_VIRIDIAN) && value)

changes in v5:
- drop "depends on AMD_SVM || INTEL_VMX"
- return -EILSEQ from viridian_load_x() if !VIRIDIAN

changes in v4:
- s/HVM_VIRIDIAN/VIRIDIAN
- add "depends on AMD_SVM || INTEL_VMX"
- add guard !is_viridian_vcpu() checks in viridian_load_vcpu_ctxt/viridian_load_domain_ctxt

changes in v3:
- fixed NULL pointer deref in viridian_save_domain_ctxt() reported for v2,
  which caused v2 revert by commit 1fffcf10cd71 ("Revert "x86: make Viridian
  support optional"")

v5: https://patchwork.kernel.org/project/xen-devel/patch/20250930125215.1087214-1-grygorii_strashko@epam.com/
v4: https://patchwork.kernel.org/project/xen-devel/patch/20250919163139.2821531-1-grygorii_strashko@epam.com/
v3: https://patchwork.kernel.org/project/xen-devel/patch/20250916134114.2214104-1-grygorii_strashko@epam.com/
v2: https://patchwork.kernel.org/project/xen-devel/patch/20250321092633.3982645-1-Sergiy_Kibrik@epam.com/

 xen/arch/x86/hvm/Kconfig                | 10 ++++++++++
 xen/arch/x86/hvm/Makefile               |  2 +-
 xen/arch/x86/hvm/hvm.c                  |  5 +++--
 xen/arch/x86/hvm/viridian/viridian.c    | 14 ++++++++++----
 xen/arch/x86/hvm/vlapic.c               | 11 +++++++----
 xen/arch/x86/include/asm/hvm/domain.h   |  2 ++
 xen/arch/x86/include/asm/hvm/hvm.h      |  3 ++-
 xen/arch/x86/include/asm/hvm/vcpu.h     |  2 ++
 xen/arch/x86/include/asm/hvm/viridian.h | 15 +++++++++++++++
 9 files changed, 52 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/Kconfig b/xen/arch/x86/hvm/Kconfig
index f223104be03c..625262f97f43 100644
--- a/xen/arch/x86/hvm/Kconfig
+++ b/xen/arch/x86/hvm/Kconfig
@@ -62,6 +62,16 @@ config ALTP2M
 
 	  If unsure, stay with defaults.
 
+config VIRIDIAN
+	bool "Hyper-V enlightenments for guests" if EXPERT
+	default y
+	help
+	  Support optimizations for Hyper-V guests such as hypercalls, efficient
+	  timers and interrupt handling. This is to improve performance and
+	  compatibility of Windows VMs.
+
+	  If unsure, say Y.
+
 config MEM_PAGING
 	bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED
 	depends on VM_EVENT
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 6ec2c8f2db56..736eb3f966e9 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,6 +1,6 @@
 obj-$(CONFIG_AMD_SVM) += svm/
 obj-$(CONFIG_INTEL_VMX) += vmx/
-obj-y += viridian/
+obj-$(CONFIG_VIRIDIAN) += viridian/
 
 obj-y += asid.o
 obj-y += dm.o
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3a30404d9940..6e0460984c09 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4292,8 +4292,9 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
             rc = -EINVAL;
         break;
     case HVM_PARAM_VIRIDIAN:
-        if ( (value & ~HVMPV_feature_mask) ||
-             !(value & HVMPV_base_freq) )
+        if ( !IS_ENABLED(CONFIG_VIRIDIAN) && value )
+            rc = -ENODEV;
+        else if ( (value & ~HVMPV_feature_mask) || !(value & HVMPV_base_freq) )
             rc = -EINVAL;
         break;
     case HVM_PARAM_IDENT_PT:
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index f79cffcb3767..b935803700fd 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -1097,14 +1097,14 @@ static int cf_check viridian_save_domain_ctxt(
 {
     const struct domain *d = v->domain;
     const struct viridian_domain *vd = d->arch.hvm.viridian;
-    struct hvm_viridian_domain_context ctxt = {
-        .hypercall_gpa = vd->hypercall_gpa.raw,
-        .guest_os_id = vd->guest_os_id.raw,
-    };
+    struct hvm_viridian_domain_context ctxt = {};
 
     if ( !is_viridian_domain(d) )
         return 0;
 
+    ctxt.hypercall_gpa = vd->hypercall_gpa.raw;
+    ctxt.guest_os_id = vd->guest_os_id.raw;
+
     viridian_time_save_domain_ctxt(d, &ctxt);
     viridian_synic_save_domain_ctxt(d, &ctxt);
 
@@ -1117,6 +1117,9 @@ static int cf_check viridian_load_domain_ctxt(
     struct viridian_domain *vd = d->arch.hvm.viridian;
     struct hvm_viridian_domain_context ctxt;
 
+    if ( !is_viridian_domain(d) )
+        return -EILSEQ;
+
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
@@ -1153,6 +1156,9 @@ static int cf_check viridian_load_vcpu_ctxt(
     struct vcpu *v;
     struct hvm_viridian_vcpu_context ctxt;
 
+    if ( !is_viridian_domain(d) )
+        return -EILSEQ;
+
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
     {
         dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vcpu%u\n",
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 4121285daef8..b315e56d3f18 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -447,7 +447,8 @@ void vlapic_EOI_set(struct vlapic *vlapic)
      * priority vector and then recurse to handle the lower priority
      * vector.
      */
-    bool missed_eoi = viridian_apic_assist_completed(v);
+    bool missed_eoi = has_viridian_apic_assist(v->domain) &&
+                      viridian_apic_assist_completed(v);
     int vector;
 
  again:
@@ -463,7 +464,7 @@ void vlapic_EOI_set(struct vlapic *vlapic)
      * NOTE: It is harmless to call viridian_apic_assist_clear() on a
      *       recursion, even though it is not necessary.
      */
-    if ( !missed_eoi )
+    if ( has_viridian_apic_assist(v->domain) && !missed_eoi )
         viridian_apic_assist_clear(v);
 
     vlapic_clear_vector(vector, &vlapic->regs->data[APIC_ISR]);
@@ -1375,7 +1376,8 @@ int vlapic_has_pending_irq(struct vcpu *v)
      * If so, we need to emulate the EOI here before comparing ISR
      * with IRR.
      */
-    if ( viridian_apic_assist_completed(v) )
+    if ( has_viridian_apic_assist(v->domain) &&
+         viridian_apic_assist_completed(v) )
         vlapic_EOI_set(vlapic);
 
     isr = vlapic_find_highest_isr(vlapic);
@@ -1388,7 +1390,8 @@ int vlapic_has_pending_irq(struct vcpu *v)
     if ( isr >= 0 &&
          (irr & 0xf0) <= (isr & 0xf0) )
     {
-        viridian_apic_assist_clear(v);
+        if ( has_viridian_apic_assist(v->domain) )
+            viridian_apic_assist_clear(v);
         return -1;
     }
 
diff --git a/xen/arch/x86/include/asm/hvm/domain.h b/xen/arch/x86/include/asm/hvm/domain.h
index 333501d5f2ac..95d9336a28f0 100644
--- a/xen/arch/x86/include/asm/hvm/domain.h
+++ b/xen/arch/x86/include/asm/hvm/domain.h
@@ -111,7 +111,9 @@ struct hvm_domain {
     /* hypervisor intercepted msix table */
     struct list_head       msixtbl_list;
 
+#ifdef CONFIG_VIRIDIAN
     struct viridian_domain *viridian;
+#endif
 
     /*
      * TSC value that VCPUs use to calculate their tsc_offset value.
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 838ad5b59eb0..6f174ef658f1 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -509,7 +509,8 @@ hvm_get_cpl(struct vcpu *v)
     (has_hvm_params(d) ? (d)->arch.hvm.params[HVM_PARAM_VIRIDIAN] : 0)
 
 #define is_viridian_domain(d) \
-    (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
+    (IS_ENABLED(CONFIG_VIRIDIAN) && \
+     is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
 
 #define is_viridian_vcpu(v) \
     is_viridian_domain((v)->domain)
diff --git a/xen/arch/x86/include/asm/hvm/vcpu.h b/xen/arch/x86/include/asm/hvm/vcpu.h
index 924af890c5b2..9ed9eaff3bc5 100644
--- a/xen/arch/x86/include/asm/hvm/vcpu.h
+++ b/xen/arch/x86/include/asm/hvm/vcpu.h
@@ -176,7 +176,9 @@ struct hvm_vcpu {
     /* Pending hw/sw interrupt (.vector = -1 means nothing pending). */
     struct x86_event     inject_event;
 
+#ifdef CONFIG_VIRIDIAN
     struct viridian_vcpu *viridian;
+#endif
 };
 
 #endif /* __ASM_X86_HVM_VCPU_H__ */
diff --git a/xen/arch/x86/include/asm/hvm/viridian.h b/xen/arch/x86/include/asm/hvm/viridian.h
index 47c9d13841ac..07ea95d4ae6e 100644
--- a/xen/arch/x86/include/asm/hvm/viridian.h
+++ b/xen/arch/x86/include/asm/hvm/viridian.h
@@ -86,11 +86,26 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_domain_freeze(const struct domain *d);
 void viridian_time_domain_thaw(const struct domain *d);
 
+#if defined(CONFIG_VIRIDIAN)
 int viridian_vcpu_init(struct vcpu *v);
 int viridian_domain_init(struct domain *d);
 
 void viridian_vcpu_deinit(struct vcpu *v);
 void viridian_domain_deinit(struct domain *d);
+#else
+static inline int viridian_vcpu_init(struct vcpu *v)
+{
+    return 0;
+}
+
+static inline int viridian_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+static inline void viridian_vcpu_deinit(struct vcpu *v) {}
+static inline void viridian_domain_deinit(struct domain *d) {}
+#endif
 
 void viridian_apic_assist_set(const struct vcpu *v);
 bool viridian_apic_assist_completed(const struct vcpu *v);
-- 
2.34.1
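
A note on configuring the new option: since the prompt is declared as
"bool ... if EXPERT" with "default y", it only becomes user-visible in
EXPERT configurations and stays enabled otherwise. A .config fragment
for a build with Viridian support compiled out would therefore look
roughly like this (illustrative only, not part of the patch):

    CONFIG_EXPERT=y
    # CONFIG_VIRIDIAN is not set
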
Re: [XEN][PATCH v5] x86: make Viridian support optional
Posted by Jason Andryuk 5 days ago
On 2025-10-23 11:18, Grygorii Strashko wrote:
> From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> 
> Add config option VIRIDIAN that covers viridian code within HVM.
> Calls to viridian functions guarded by is_viridian_domain() and related macros.
> Having this option may be beneficial by reducing code footprint for systems
> that are not using Hyper-V.
> 
> [grygorii_strashko@epam.com: fixed NULL pointer deref in
> viridian_save_domain_ctxt(); stub viridian_vcpu/domain_init/deinit()]
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> Signed-off-by: Grygorii Strashko <grygorii_strashko@epam.com>
> ---

> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index f79cffcb3767..b935803700fd 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c

> @@ -1153,6 +1156,9 @@ static int cf_check viridian_load_vcpu_ctxt(
>       struct vcpu *v;
>       struct hvm_viridian_vcpu_context ctxt;
>   
> +    if ( !is_viridian_domain(d) )
> +        return -EILSEQ;

Given:

  #define is_viridian_domain(d) \
     (IS_ENABLED(CONFIG_VIRIDIAN) && \
      is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))

CONFIG_VIRIDIAN=n is okay because of the IS_ENABLED.

For CONFIG_VIRIDIAN=y && a viridian domain, is HVM_PARAM_VIRIDIAN 
guaranteed to be loaded before viridian_load_vcpu_ctxt() is called, so 
that HVMPV_base_freq can be checked properly?  I don't know, but it 
seems a little fragile if this relies on implicit ordering.  Maybe just do:

if ( !IS_ENABLED(CONFIG_VIRIDIAN) )
     return -EILSEQ;

?

Everything else looks good.

Thanks,
Jason

> +
>       if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
>       {
>           dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vcpu%u\n",
Re: [XEN][PATCH v5] x86: make Viridian support optional
Posted by Grygorii Strashko 23 hours ago
Hi Jason,

On 24.10.25 21:55, Jason Andryuk wrote:
> On 2025-10-23 11:18, Grygorii Strashko wrote:
>> From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>>
>> Add config option VIRIDIAN that covers viridian code within HVM.
>> Calls to viridian functions guarded by is_viridian_domain() and related macros.
>> Having this option may be beneficial by reducing code footprint for systems
>> that are not using Hyper-V.
>>
>> [grygorii_strashko@epam.com: fixed NULL pointer deref in
>> viridian_save_domain_ctxt(); stub viridian_vcpu/domain_init/deinit()]
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>> Signed-off-by: Grygorii Strashko <grygorii_strashko@epam.com>
>> ---
> 
>> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
>> index f79cffcb3767..b935803700fd 100644
>> --- a/xen/arch/x86/hvm/viridian/viridian.c
>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> 
>> @@ -1153,6 +1156,9 @@ static int cf_check viridian_load_vcpu_ctxt(
>>       struct vcpu *v;
>>       struct hvm_viridian_vcpu_context ctxt;
>> +    if ( !is_viridian_domain(d) )
>> +        return -EILSEQ;
> 
> Given:
> 
>   #define is_viridian_domain(d) \
>      (IS_ENABLED(CONFIG_VIRIDIAN) && \
>       is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
> 
> CONFIG_VIRIDIAN=n is okay because of the IS_ENABLED.
> 
> For CONFIG_VIRIDIAN=y && a viridian domain, is HVM_PARAM_VIRIDIAN guaranteed to be loaded before viridian_load_vcpu_ctxt() is called, so that HVMPV_base_freq can be checked properly?  I don't know, but it seems a little fragile if this relies on implicit ordering.  Maybe just do:
> 
> if ( !IS_ENABLED(CONFIG_VIRIDIAN) )
>      return -EILSEQ;
> 
> ?

Should it be done the same way for viridian_load_domain_ctxt() also?


> 
> Everything else looks good.
> 
> Thanks,
> Jason
> 
>> +
>>       if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
>>       {
>>           dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vcpu%u\n",
> 

-- 
Best regards,
-grygorii


Re: [XEN][PATCH v5] x86: make Viridian support optional
Posted by Jason Andryuk 22 hours ago
On 2025-10-28 16:17, Grygorii Strashko wrote:
> Hi Jason,
> 
> On 24.10.25 21:55, Jason Andryuk wrote:
>> On 2025-10-23 11:18, Grygorii Strashko wrote:
>>> From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>>>
>>> Add config option VIRIDIAN that covers viridian code within HVM.
>>> Calls to viridian functions guarded by is_viridian_domain() and 
>>> related macros.
>>> Having this option may be beneficial by reducing code footprint for 
>>> systems
>>> that are not using Hyper-V.
>>>
>>> [grygorii_strashko@epam.com: fixed NULL pointer deref in
>>> viridian_save_domain_ctxt(); stub viridian_vcpu/domain_init/deinit()]
>>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>>> Signed-off-by: Grygorii Strashko <grygorii_strashko@epam.com>
>>> ---
>>
>>> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/ 
>>> viridian/viridian.c
>>> index f79cffcb3767..b935803700fd 100644
>>> --- a/xen/arch/x86/hvm/viridian/viridian.c
>>> +++ b/xen/arch/x86/hvm/viridian/viridian.c
>>
>>> @@ -1153,6 +1156,9 @@ static int cf_check viridian_load_vcpu_ctxt(
>>>       struct vcpu *v;
>>>       struct hvm_viridian_vcpu_context ctxt;
>>> +    if ( !is_viridian_domain(d) )
>>> +        return -EILSEQ;
>>
>> Given:
>>
>>   #define is_viridian_domain(d) \
>>      (IS_ENABLED(CONFIG_VIRIDIAN) && \
>>       is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
>>
>> CONFIG_VIRIDIAN=n is okay because of the IS_ENABLED.
>>
>> For CONFIG_VIRIDIAN=y && a viridian domain, is HVM_PARAM_VIRIDIAN 
>> guaranteed to be loaded before viridian_load_vcpu_ctxt() is called, so 
>> that HVMPV_base_freq can be checked properly?  I don't know, but it 
>> seems a little fragile if this relies on implicit ordering.  Maybe 
>> just do:
>>
>> if ( !IS_ENABLED(CONFIG_VIRIDIAN) )
>>      return -EILSEQ;
>>
>> ?
> 
> Should it be done the same way for viridian_load_domain_ctxt() also?

Yes, I think so.  Thanks for catching that.

Regards,
Jason
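
If that suggestion is adopted for both restore handlers, the two hunks
from the patch would presumably end up along these lines in a follow-up
revision (a sketch only, not a posted patch), in both
viridian_load_domain_ctxt() and viridian_load_vcpu_ctxt():

-    if ( !is_viridian_domain(d) )
+    if ( !IS_ENABLED(CONFIG_VIRIDIAN) )
         return -EILSEQ;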