A/D bit writes (on page walks) can be considered benign by an introspection
agent, so receiving vm_events for them is a pessimization. We try here to
optimize by filtering these events out.
Currently we fully emulate the instruction at RIP when the hardware sees
an EPT fault with npfec.kind != npfec_kind_with_gla. This is, however,
incorrect, because the instruction at RIP might legitimately cause an
EPT fault of its own while accessing a _different_ page from the original
one, on which the A/D bits were set.
The solution is to perform the whole emulation, ignoring EPT restrictions
for the walk part while taking them into account for the "actual" emulation
of the instruction at RIP. When we send out a vm_event, we don't want the
emulation to complete, since in that case we wouldn't be able to veto
whatever it is doing; we couldn't actually prevent any malicious activity,
only report on it.
When we hit a "send-vm_event" case while emulating, we therefore first send
the event out and then suspend the emulation (return X86EMUL_RETRY).
Once the introspection agent has treated the event and resumed the guest,
hvm_vm_event_do_resume() is called again; there, the instruction at RIP
will be fully emulated (with the EPT ignored) if the introspection
application allows it, and the guest will continue to run past the
instruction.
A common example: the hardware exits because of an EPT fault caused by a
page walk, and p2m_mem_access_check() decides whether to send a vm_event.
If that vm_event is sent and, once treated, the instruction at RIP is run,
the instruction might itself hit a protected page and provoke a second
vm_event.
If npfec.kind == npfec_kind_in_gpt and d->arch.monitor.inguest_pagefault_disabled
is true, we are in the page-walk case and can apply this optimization:
emulate the page walk while ignoring the EPT, but honour the EPT for the
emulation of the instruction itself.
Without the optimization we get two EPT events; with it we get at most one,
and only if the instruction at RIP itself triggers an EPT violation.
We use hvmemul_map_linear_addr() to intercept write accesses and
__hvm_copy() to intercept exec and read accesses.

hvm_monitor_check_p2m() can return false if there was no violation, or if
there was an error from monitor_traps() or p2m_get_mem_access().
-ESRCH from p2m_get_mem_access() is treated as restricted access.

NOTE: hvm_monitor_check_p2m() assumes the caller will check
arch.vm_event->send_event.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V10:
- Use local var curr instead of current
- Add a new member in enum hvm_translation_result
- Check HVMCOPY_linear flag in __hvm_copy()
- Consider -ESRCH from p2m_get_mem_access() as restricted
access.
---
xen/arch/x86/hvm/emulate.c | 18 ++++++-
xen/arch/x86/hvm/hvm.c | 13 +++++
xen/arch/x86/hvm/monitor.c | 80 +++++++++++++++++++++++++++++++
xen/arch/x86/mm/mem_access.c | 8 +++-
xen/include/asm-x86/hvm/monitor.h | 3 ++
xen/include/asm-x86/hvm/support.h | 1 +
xen/include/asm-x86/vm_event.h | 2 +
7 files changed, 123 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 36bcb526d3..ddf24a83c1 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -548,6 +548,7 @@ static void *hvmemul_map_linear_addr(
unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
(linear >> PAGE_SHIFT) + 1;
unsigned int i;
+ gfn_t gfn;
/*
* mfn points to the next free slot. All used slots have a page reference
@@ -582,7 +583,7 @@ static void *hvmemul_map_linear_addr(
ASSERT(mfn_x(*mfn) == 0);
res = hvm_translate_get_page(curr, addr, true, pfec,
- &pfinfo, &page, NULL, &p2mt);
+ &pfinfo, &page, &gfn, &p2mt);
switch ( res )
{
@@ -601,6 +602,7 @@ static void *hvmemul_map_linear_addr(
case HVMTRANS_gfn_paged_out:
case HVMTRANS_gfn_shared:
+ case HVMTRANS_bad_gfn_access:
err = ERR_PTR(~X86EMUL_RETRY);
goto out;
@@ -626,6 +628,14 @@ static void *hvmemul_map_linear_addr(
ASSERT(p2mt == p2m_ram_logdirty || !p2m_is_readonly(p2mt));
}
+
+ if ( unlikely(curr->arch.vm_event) &&
+ curr->arch.vm_event->send_event &&
+ hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
+ {
+ err = ERR_PTR(~X86EMUL_RETRY);
+ goto out;
+ }
}
/* Entire access within a single frame? */
@@ -1141,6 +1151,7 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
case HVMTRANS_gfn_paged_out:
case HVMTRANS_gfn_shared:
+ case HVMTRANS_bad_gfn_access:
return X86EMUL_RETRY;
}
@@ -1192,6 +1203,7 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
case HVMTRANS_gfn_paged_out:
case HVMTRANS_gfn_shared:
+ case HVMTRANS_bad_gfn_access:
return X86EMUL_RETRY;
}
@@ -1852,6 +1864,8 @@ static int hvmemul_rep_movs(
xfree(buf);
+ ASSERT(rc != HVMTRANS_bad_gfn_access);
+
if ( rc == HVMTRANS_gfn_paged_out )
return X86EMUL_RETRY;
if ( rc == HVMTRANS_gfn_shared )
@@ -1964,6 +1978,8 @@ static int hvmemul_rep_stos(
if ( buf != p_data )
xfree(buf);
+ ASSERT(rc != HVMTRANS_bad_gfn_access);
+
switch ( rc )
{
case HVMTRANS_gfn_paged_out:
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index fdb1e17f59..4cc077bb3f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3236,6 +3236,19 @@ static enum hvm_translation_result __hvm_copy(
return HVMTRANS_bad_gfn_to_mfn;
}
+ /*
+ * In case a vm event was sent return paged_out so the emulation will
+ * stop with no side effect
+ */
+ if ( (flags & HVMCOPY_linear) &&
+ unlikely(v->arch.vm_event) &&
+ v->arch.vm_event->send_event &&
+ hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
+ {
+ put_page(page);
+ return HVMTRANS_bad_gfn_access;
+ }
+
p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
if ( flags & HVMCOPY_to_guest )
diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index 2a41ccc930..7ca7f098c0 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -23,8 +23,10 @@
*/
#include <xen/vm_event.h>
+#include <xen/mem_access.h>
#include <xen/monitor.h>
#include <asm/hvm/monitor.h>
+#include <asm/altp2m.h>
#include <asm/monitor.h>
#include <asm/paging.h>
#include <asm/vm_event.h>
@@ -215,6 +217,84 @@ void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
monitor_traps(current, 1, &req);
}
+/*
+ * Send memory access vm_events based on pfec. Returns true if the event was
+ * sent; false on a p2m_get_mem_access() error, no violation, or an event
+ * send error. Assumes the caller will check arch.vm_event->send_event.
+ */
+bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
+ uint16_t kind)
+{
+ xenmem_access_t access;
+ struct vcpu *curr = current;
+ vm_event_request_t req = {};
+ paddr_t gpa = (gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK));
+ int rc;
+
+ ASSERT(curr->arch.vm_event->send_event);
+
+ curr->arch.vm_event->send_event = false;
+
+ /*
+ * p2m_get_mem_access() can fail for an invalid MFN and return -ESRCH,
+ * in which case access must be restricted.
+ */
+ rc = p2m_get_mem_access(curr->domain, gfn, &access, altp2m_vcpu_idx(curr));
+
+ if ( rc == -ESRCH )
+ access = XENMEM_access_n;
+ else if ( rc )
+ return false;
+
+ switch ( access )
+ {
+ case XENMEM_access_x:
+ case XENMEM_access_rx:
+ if ( pfec & PFEC_write_access )
+ req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
+ break;
+
+ case XENMEM_access_w:
+ case XENMEM_access_rw:
+ if ( pfec & PFEC_insn_fetch )
+ req.u.mem_access.flags = MEM_ACCESS_X;
+ break;
+
+ case XENMEM_access_r:
+ case XENMEM_access_n:
+ if ( pfec & PFEC_write_access )
+ req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
+ if ( pfec & PFEC_insn_fetch )
+ req.u.mem_access.flags |= MEM_ACCESS_X;
+ break;
+
+ case XENMEM_access_wx:
+ case XENMEM_access_rwx:
+ case XENMEM_access_rx2rw:
+ case XENMEM_access_n2rwx:
+ case XENMEM_access_default:
+ break;
+ }
+
+ if ( !req.u.mem_access.flags )
+ return false; /* no violation */
+
+ if ( kind == npfec_kind_with_gla )
+ req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA |
+ MEM_ACCESS_GLA_VALID;
+ else if ( kind == npfec_kind_in_gpt )
+ req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT |
+ MEM_ACCESS_GLA_VALID;
+
+
+ req.reason = VM_EVENT_REASON_MEM_ACCESS;
+ req.u.mem_access.gfn = gfn_x(gfn);
+ req.u.mem_access.gla = gla;
+ req.u.mem_access.offset = gpa & ~PAGE_MASK;
+
+ return monitor_traps(curr, true, &req) >= 0;
+}
+
/*
* Local variables:
* mode: C
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 0144f92b98..94c7f2a80c 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -210,10 +210,16 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
return true;
}
}
+
+ /*
+ * Try to avoid sending a mem event. Suppress events caused by page-walks
+ * by emulating but still checking mem_access violations.
+ */
if ( vm_event_check_ring(d->vm_event_monitor) &&
d->arch.monitor.inguest_pagefault_disabled &&
- npfec.kind != npfec_kind_with_gla ) /* don't send a mem_event */
+ npfec.kind == npfec_kind_in_gpt )
{
+ v->arch.vm_event->send_event = true;
hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op, X86_EVENT_NO_EC);
return true;
diff --git a/xen/include/asm-x86/hvm/monitor.h b/xen/include/asm-x86/hvm/monitor.h
index f1af4f812a..325b44674d 100644
--- a/xen/include/asm-x86/hvm/monitor.h
+++ b/xen/include/asm-x86/hvm/monitor.h
@@ -49,6 +49,9 @@ void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
unsigned int err, uint64_t cr2);
bool hvm_monitor_emul_unimplemented(void);
+bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
+ uint16_t kind);
+
#endif /* __ASM_X86_HVM_MONITOR_H__ */
/*
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index e989aa7349..509c586727 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -61,6 +61,7 @@ enum hvm_translation_result {
HVMTRANS_unhandleable,
HVMTRANS_gfn_paged_out,
HVMTRANS_gfn_shared,
+ HVMTRANS_bad_gfn_access,
};
/*
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 23e655710b..66db9e1e25 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -36,6 +36,8 @@ struct arch_vm_event {
bool set_gprs;
/* A sync vm_event has been sent and we're not done handling it. */
bool sync_event;
+ /* Send mem access events from emulator */
+ bool send_event;
};
int vm_event_init_domain(struct domain *d);
--
2.17.1
> -----Original Message-----
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index fdb1e17f59..4cc077bb3f 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3236,6 +3236,19 @@ static enum hvm_translation_result __hvm_copy(
> return HVMTRANS_bad_gfn_to_mfn;
> }
>
> + /*
> + * In case a vm event was sent return paged_out so the emulation will
> + * stop with no side effect
> + */
> + if ( (flags & HVMCOPY_linear) &&
> + unlikely(v->arch.vm_event) &&
> + v->arch.vm_event->send_event &&
> + hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
> + {
> + put_page(page);
> + return HVMTRANS_bad_gfn_access;
This doesn't match the comment above. Did you mean to return HVMTRANS_gfn_paged_out? I'm guessing not, in which case the comment needs to be fixed.
Paul
On 19.09.2019 17:09, Paul Durrant wrote:
>> -----Original Message-----
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index fdb1e17f59..4cc077bb3f 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3236,6 +3236,19 @@ static enum hvm_translation_result __hvm_copy(
>> return HVMTRANS_bad_gfn_to_mfn;
>> }
>>
>> + /*
>> + * In case a vm event was sent return paged_out so the emulation will
>> + * stop with no side effect
>> + */
>> + if ( (flags & HVMCOPY_linear) &&
>> + unlikely(v->arch.vm_event) &&
>> + v->arch.vm_event->send_event &&
>> + hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
>> + {
>> + put_page(page);
>> + return HVMTRANS_bad_gfn_access;
>
> This doesn't match the comment above. Did you mean to return HVMTRANS_gfn_paged_out? I'm guessing not, in which case the comment needs to be fixed.
Yes, it seems I missed that, but given that the return value's name will
change I will have the comment fixed in the next version. Thanks for
pointing this out.
Alex
On 19.09.2019 15:03, Alexandru Stefan ISAILA wrote:
> @@ -601,6 +602,7 @@ static void *hvmemul_map_linear_addr(
>
> case HVMTRANS_gfn_paged_out:
> case HVMTRANS_gfn_shared:
> + case HVMTRANS_bad_gfn_access:
> err = ERR_PTR(~X86EMUL_RETRY);
> goto out;
This looks pretty suspicious now - why would (without knowing all
the background) "bad access" translate into "retry"? While you did
post the suggested name before, it's nevertheless pretty clear now
that it needs changing. Perhaps HVMTRANS_need_retry or some such?
> @@ -1852,6 +1864,8 @@ static int hvmemul_rep_movs(
>
> xfree(buf);
>
> + ASSERT(rc != HVMTRANS_bad_gfn_access);
> +
> if ( rc == HVMTRANS_gfn_paged_out )
> return X86EMUL_RETRY;
> if ( rc == HVMTRANS_gfn_shared )
> @@ -1964,6 +1978,8 @@ static int hvmemul_rep_stos(
> if ( buf != p_data )
> xfree(buf);
>
> + ASSERT(rc != HVMTRANS_bad_gfn_access);
> +
> switch ( rc )
> {
> case HVMTRANS_gfn_paged_out:
These are changes to places that, as was pointed out before, do consume
HVMTRANS_* return values. Did you go through and check nothing else
needs adjustment? You don't say anything in this regard in the
description. For example, if shadow's hvm_read() would get to see
the new value, it would fall out of its switch() into a BUG().
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3236,6 +3236,19 @@ static enum hvm_translation_result __hvm_copy(
> return HVMTRANS_bad_gfn_to_mfn;
> }
>
> + /*
> + * In case a vm event was sent return paged_out so the emulation will
> + * stop with no side effect
> + */
> + if ( (flags & HVMCOPY_linear) &&
> + unlikely(v->arch.vm_event) &&
> + v->arch.vm_event->send_event &&
> + hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
In such a sequence of checks with _some_ part using unlikely() I
think it would be better to have the unlikely() one first (unless
it's a relatively expensive check, which isn't the case here), to
have as little as possible unnecessary computations / branches in
the common (fast path) case.
Furthermore while you now restrict the check to linear address
based accesses, other than the description says (or at least
implies) you do not restrict it to read and exec accesses. It's
not clear to me whether that's intentional, yet it affects which
hvm_copy_*_linear() callers need auditing.
Finally, what about ->arch.vm_event->send_event remaining set
after hvm_emulate_one_vm_event(), because hvm_monitor_check_p2m()
(the only place where the flag would get cleared) was never hit
in the process? And what about an instruction accessing two (or
more) distinct addresses? The flag would be clear after the first
one was checked afaict.
Jan
On 19.09.2019 16:59, Jan Beulich wrote:
> On 19.09.2019 15:03, Alexandru Stefan ISAILA wrote:
>> @@ -601,6 +602,7 @@ static void *hvmemul_map_linear_addr(
>>
>> case HVMTRANS_gfn_paged_out:
>> case HVMTRANS_gfn_shared:
>> + case HVMTRANS_bad_gfn_access:
>> err = ERR_PTR(~X86EMUL_RETRY);
>> goto out;
>
> This looks pretty suspicious now - why would (without knowing all
> the background) "bad access" translate into "retry". While you did
> post the suggested name before, it's nevertheless pretty clear now
> that it needs changing. Perhaps HVMTRANS_need_retry or some such?
It's fine by me, I will change the name to HVMTRANS_need_retry.
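I.e. the enum in xen/include/asm-x86/hvm/support.h would then read
(sketch only; the pre-existing members stay unchanged):

    enum hvm_translation_result {
        HVMTRANS_okay,
        HVMTRANS_bad_linear_to_gfn,
        HVMTRANS_bad_gfn_to_mfn,
        HVMTRANS_unhandleable,
        HVMTRANS_gfn_paged_out,
        HVMTRANS_gfn_shared,
        HVMTRANS_need_retry,    /* was HVMTRANS_bad_gfn_access in this version */
    };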
>
>> @@ -1852,6 +1864,8 @@ static int hvmemul_rep_movs(
>>
>> xfree(buf);
>>
>> + ASSERT(rc != HVMTRANS_bad_gfn_access);
>> +
>> if ( rc == HVMTRANS_gfn_paged_out )
>> return X86EMUL_RETRY;
>> if ( rc == HVMTRANS_gfn_shared )
>> @@ -1964,6 +1978,8 @@ static int hvmemul_rep_stos(
>> if ( buf != p_data )
>> xfree(buf);
>>
>> + ASSERT(rc != HVMTRANS_bad_gfn_access);
>> +
>> switch ( rc )
>> {
>> case HVMTRANS_gfn_paged_out:
>
> These are changes to places that were pointed out before do consume
> HVMTRANS_* return values. Did you go through and check nothing else
> needs adjustment? You don't say anything in this regard in the
> description. For example, if shadow's hvm_read() would get to see
> the new value, it would fall out of its switch() into a BUG().
>
Yes, you are right, the only thing that saves shadow from raising a
BUG() is the send_event flag. For safety I will do a complete check of
all the places that consume the new return value.
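For shadow's hvm_read() that would presumably mean adding a case along
these lines (untested sketch; whether returning RETRY is actually the
right action there still needs deciding):

    /* in shadow's hvm_read(), so the new value no longer falls through
     * to the BUG() after the switch(): */
    case HVMTRANS_need_retry:
        return X86EMUL_RETRY;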
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3236,6 +3236,19 @@ static enum hvm_translation_result __hvm_copy(
>> return HVMTRANS_bad_gfn_to_mfn;
>> }
>>
>> + /*
>> + * In case a vm event was sent return paged_out so the emulation will
>> + * stop with no side effect
>> + */
>> + if ( (flags & HVMCOPY_linear) &&
>> + unlikely(v->arch.vm_event) &&
>> + v->arch.vm_event->send_event &&
>> + hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
>
> In such a sequence of checks with _some_ part using unlikely() I
> think it would be better to have the unlikely() one first (unless
> it's a relatively expensive check, which isn't the case here), to
> have as little as possible unnecessary computations / branches in
> the common (fast path) case.
I will change the order in the next version.
>
> Furthermore while you now restrict the check to linear address
> based accesses, other than the description says (or at least
> implies) you do not restrict it to read and exec accesses. It's
> not clear to me whether that's intentional, yet it affects which
> hvm_copy_*_linear() callers need auditing.
The pfec var is checked in hvm_monitor_check_p2m(). If you think it is
necessary I can add one more check here for (pfec & (PFEC_insn_fetch |
PFEC_write_access)).
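Combined with the reordering agreed above, the check in __hvm_copy()
would then look something like (untested):

    if ( unlikely(v->arch.vm_event) &&
         v->arch.vm_event->send_event &&
         (flags & HVMCOPY_linear) &&
         /* proposed extra restriction on the access types considered */
         (pfec & (PFEC_insn_fetch | PFEC_write_access)) &&
         hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
    {
        put_page(page);
        return HVMTRANS_need_retry;
    }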
>
> Finally, what about ->arch.vm_event->send_event remaining set
> after hvm_emulate_one_vm_event(), because hvm_monitor_check_p2m()
> (the only place where the flag would get cleared) was never hit
> in the process?
Thanks for pointing this out; indeed it's a problem here. A solution
could be to move the send_event = false; after hvm_emulate_one_vm_event()
has finished, and to state in the comment that the caller is in charge
of setting and clearing the flag.
Or just have it in both places.
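I.e. in p2m_mem_access_check(), for the first variant (sketch):

    v->arch.vm_event->send_event = true;
    hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op,
                             X86_EVENT_NO_EC);
    /* clear unconditionally, even if hvm_monitor_check_p2m() never ran */
    v->arch.vm_event->send_event = false;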
> And what about an instruction accessing two (or
> more) distinct addresses? The flag would be clear after the first
> one was checked afaict.
There is no problem here because emulation will stop after the first bad
access, so there is no need to continue.
Regards,
Alex
On 20.09.2019 10:06, Alexandru Stefan ISAILA wrote:
> On 19.09.2019 16:59, Jan Beulich wrote:
>> Furthermore while you now restrict the check to linear address
>> based accesses, other than the description says (or at least
>> implies) you do not restrict it to read and exec accesses. It's
>> not clear to me whether that's intentional, yet it affects which
>> hvm_copy_*_linear() callers need auditing.
>
> The pfec var is checked in hvm_monitor_check_p2m(). If you think it is
> necessary I can add one more check here for (pfec & (PFEC_insn_fetch |
> PFEC_write_access)).

hvm_monitor_check_p2m() gets called from two places, so a check there
won't help (afaict). The question whether to put an additional check
here depends on whether, as the description says, you really only want
to handle read/exec accesses here, or whether you also want to cover
write ones (in which case the description should be adjusted so that
it's not misleading).

>> Finally, what about ->arch.vm_event->send_event remaining set
>> after hvm_emulate_one_vm_event(), because hvm_monitor_check_p2m()
>> (the only place where the flag would get cleared) was never hit
>> in the process?
>
> Thanks for pointing this out, indeed it's a problem here. A solution can
> be to move send_event = false; after hvm_emulate_one_vm_event() is
> finished. And state in the comment that the user is in charge of
> enabling and disabling the flag.
> Or just have it in both places.

For this aspect alone I think you want it in both places, but ...

>> And what about an instruction accessing two (or
>> more) distinct addresses? The flag would be clear after the first
>> one was checked afaict.
>
> There is no problem here because emulation will stop after the first bad
> access so there is no need to continue.

... for this moving it may indeed be necessary. I have to admit that I
don't follow your reply here: The flag will also be clear after the
first good access (afaict), and hence further accesses by the same insn
won't make it into hvm_monitor_check_p2m() at all.

Jan
On 20.09.2019 11:24, Jan Beulich wrote:
> On 20.09.2019 10:06, Alexandru Stefan ISAILA wrote:
>> On 19.09.2019 16:59, Jan Beulich wrote:
>>> Furthermore while you now restrict the check to linear address
>>> based accesses, other than the description says (or at least
>>> implies) you do not restrict it to read and exec accesses. It's
>>> not clear to me whether that's intentional, yet it affects which
>>> hvm_copy_*_linear() callers need auditing.
>>
>> The pfec var is checked in hvm_monitor_check_p2m(). If you think it is
>> necessary I can add one more check here for (pfec & (PFEC_insn_fetch |
>> PFEC_write_access)).
>
> hvm_monitor_check_p2m() gets called from two places, so a check
> there won't help (afaict). The question whether to put an
> additional check here depends on whether, as the description
> says, you really only want to handle read/exec accesses here,
> or whether you also want to cover write ones (in which case the
> description should be adjusted so that it's not misleading).

Indeed covering write access here as well as in
hvmemul_map_linear_addr() is needed. I will adjust the comment.

>>> Finally, what about ->arch.vm_event->send_event remaining set
>>> after hvm_emulate_one_vm_event(), because hvm_monitor_check_p2m()
>>> (the only place where the flag would get cleared) was never hit
>>> in the process?
>>
>> Thanks for pointing this out, indeed it's a problem here. A solution can
>> be to move send_event = false; after hvm_emulate_one_vm_event() is
>> finished. And state in the comment that the user is in charge of
>> enabling and disabling the flag.
>> Or just have it in both places.
>
> For this aspect alone I think you want it in both places, but ...
>
>>> And what about an instruction accessing two (or
>>> more) distinct addresses? The flag would be clear after the first
>>> one was checked afaict.
>>
>> There is no problem here because emulation will stop after the first bad
>> access so there is no need to continue.
>
> ... for this moving it may indeed be necessary. I have to admit
> that I don't follow your reply here: The flag will also be clear
> after the first good access (afaict), and hence further accesses
> by the same insn won't make it into hvm_monitor_check_p2m() at all.

Ok, I will move it from hvm_monitor_check_p2m() and adjust the comment
accordingly.

Thanks for the review,
Alex