The memory access feature is built on the vm_event subsystem, which may
be disabled in the future. So a few switch cases in do_altp2m_op(), such
as HVMOP_altp2m_set_mem_access, need a vm_event_is_enabled() check to
pass compilation when ALTP2M=y and VM_EVENT=n (hence MEM_ACCESS=n).
Function p2m_mem_access_check() also needs a stub when VM_EVENT=n to
pass compilation.
Although the local variable "req_ptr" remains NULL throughout its
lifetime with the NULL assignment in place, we would still face a
runtime undefined behaviour report when CONFIG_UBSAN is on. So
strengthen the condition check by adding vm_event_is_enabled() for this
special case.
Signed-off-by: Penny Zheng <Penny.Zheng@amd.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v1 -> v3:
- a comment next to the excessive condition
- use vm_event_is_enabled() instead
- avoid heavy churn by using the inverted condition plus break
---
v3 -> v4:
- refine comment
---
xen/arch/x86/hvm/hvm.c | 26 +++++++++++++++++++++++++-
xen/arch/x86/include/asm/mem_access.h | 10 ++++++++++
2 files changed, 35 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 07e54890d9..b34cd29629 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -52,6 +52,7 @@
#include <asm/i387.h>
#include <asm/mc146818rtc.h>
#include <asm/mce.h>
+#include <asm/mem_access.h>
#include <asm/monitor.h>
#include <asm/msr.h>
#include <asm/mtrr.h>
@@ -2082,7 +2083,12 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
#endif
}
- if ( req_ptr )
+ /*
+ * req_ptr being constant NULL when !CONFIG_VM_EVENT, CONFIG_UBSAN=y
+ * builds have been observed to still hit undefined-ness at runtime.
+ * Hence do a seemingly redundant vm_event_is_enabled() check here.
+ */
+ if ( req_ptr && vm_event_is_enabled(curr) )
{
if ( monitor_traps(curr, sync, req_ptr) < 0 )
rc = 0;
@@ -4804,6 +4810,12 @@ static int do_altp2m_op(
break;
case HVMOP_altp2m_set_mem_access:
+ if ( !vm_event_is_enabled(current) )
+ {
+ rc = -EOPNOTSUPP;
+ break;
+ }
+
if ( a.u.mem_access.pad )
rc = -EINVAL;
else
@@ -4813,6 +4825,12 @@ static int do_altp2m_op(
break;
case HVMOP_altp2m_set_mem_access_multi:
+ if ( !vm_event_is_enabled(current) )
+ {
+ rc = -EOPNOTSUPP;
+ break;
+ }
+
if ( a.u.set_mem_access_multi.pad ||
a.u.set_mem_access_multi.opaque > a.u.set_mem_access_multi.nr )
{
@@ -4844,6 +4862,12 @@ static int do_altp2m_op(
break;
case HVMOP_altp2m_get_mem_access:
+ if ( !vm_event_is_enabled(current) )
+ {
+ rc = -EOPNOTSUPP;
+ break;
+ }
+
if ( a.u.mem_access.pad )
rc = -EINVAL;
else
diff --git a/xen/arch/x86/include/asm/mem_access.h b/xen/arch/x86/include/asm/mem_access.h
index 257ed33de1..790bed81e8 100644
--- a/xen/arch/x86/include/asm/mem_access.h
+++ b/xen/arch/x86/include/asm/mem_access.h
@@ -14,6 +14,7 @@
#ifndef __ASM_X86_MEM_ACCESS_H__
#define __ASM_X86_MEM_ACCESS_H__
+#ifdef CONFIG_VM_EVENT
/*
* Setup vm_event request based on the access (gla is -1ull if not available).
* Handles the rw2rx conversion. Boolean return value indicates if event type
@@ -25,6 +26,15 @@
bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
struct npfec npfec,
struct vm_event_st **req_ptr);
+#else
+static inline bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
+ struct npfec npfec,
+ struct vm_event_st **req_ptr)
+{
+ *req_ptr = NULL;
+ return false;
+}
+#endif /* CONFIG_VM_EVENT */
/* Check for emulation and mark vcpu for skipping one instruction
* upon rescheduling if required. */
--
2.34.1
Hi Tamas,
May I ask for a review of this commit?
Many thanks,
Penny Zheng
> -----Original Message-----
> From: Penny, Zheng <penny.zheng@amd.com>
> Sent: Thursday, January 15, 2026 5:29 PM
> To: xen-devel@lists.xenproject.org; Andryuk, Jason <Jason.Andryuk@amd.com>
> Cc: Huang, Ray <Ray.Huang@amd.com>; Penny, Zheng
> <penny.zheng@amd.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Tamas
> K Lengyel <tamas@tklengyel.com>; Alexandru Isaila <aisaila@bitdefender.com>;
> Petre Pircalabu <ppircalabu@bitdefender.com>
> Subject: [PATCH v4 5/6] xen/mem_access: wrap memory access when
> VM_EVENT=n
On 2026-01-15 04:28, Penny Zheng wrote:
> Feature memory access is based on vm event subsystem, and it could be disabled
> in the future. So a few switch-blocks in do_altp2m_op() need
> vm_event_is_enabled() condition check to pass compilation when ALTP2M=y and
> VM_EVENT=n(, hence MEM_ACCESS=n), like HVMOP_altp2m_set_mem_access, etc.
> Function p2m_mem_access_check() still needs stub when VM_EVENT=n to
> pass compilation.
> Although local variable "req_ptr" still remains NULL throughout its lifetime,
> with the change of NULL assignment, we will face runtime undefined error only
> when CONFIG_USBAN is on. So we strengthen the condition check via adding
> vm_event_is_enabled() for the special case.
>
> Signed-off-by: Penny Zheng <Penny.Zheng@amd.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
On Thu, Jan 15, 2026 at 10:21 AM Jason Andryuk <jason.andryuk@amd.com> wrote:
> On 2026-01-15 04:28, Penny Zheng wrote:
> > Feature memory access is based on vm event subsystem, and it could be
> > disabled in the future. So a few switch-blocks in do_altp2m_op() need
> > vm_event_is_enabled() condition check to pass compilation when ALTP2M=y
> > and VM_EVENT=n(, hence MEM_ACCESS=n), like HVMOP_altp2m_set_mem_access, etc.
> > Function p2m_mem_access_check() still needs stub when VM_EVENT=n to
> > pass compilation.
> > Although local variable "req_ptr" still remains NULL throughout its
> > lifetime, with the change of NULL assignment, we will face runtime
> > undefined error only when CONFIG_USBAN is on. So we strengthen the
> > condition check via adding vm_event_is_enabled() for the special case.
> >
> > Signed-off-by: Penny Zheng <Penny.Zheng@amd.com>
> > Acked-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>

Acked-by: Tamas K Lengyel <tamas@tklengyel.com>