When instrumenting memory accesses for plugins, we force memory
accesses to use the slow MMU path [1]. This creates a situation where
we end up calling ptw_setl_slow. Since this function is called while
we are inside cpu_exec, the start_exclusive() it performs then hangs.
This exclusive section was originally introduced for security
reasons [2].

I suspect this code path was never triggered before: ptw_setl_slow is
always called transitively from cpu_exec, so any call would have
resulted in a hang.
[1] https://gitlab.com/qemu-project/qemu/-/commit/6d03226b42247b68ab2f0b3663e0f624335a4055
[2] https://gitlab.com/qemu-project/qemu/-/issues/279
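
The fix makes the vCPU leave the cpu_exec region before taking the
exclusive section and re-enter it afterwards. A minimal sketch of that
pattern, using the same QEMU APIs as the diff below (the PTE
read-modify-write in between is elided):

    cpu_exec_end(current_cpu);   /* leave the cpu_exec region first */
    start_exclusive();           /* safe: this vCPU is no longer "running" */
    /* ... read-modify-write the PTE under exclusivity ... */
    end_exclusive();
    cpu_exec_start(current_cpu); /* re-enter the cpu_exec region */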
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/2566
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
target/i386/tcg/sysemu/excp_helper.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
index da187c8792a..e0436c7a672 100644
--- a/target/i386/tcg/sysemu/excp_helper.c
+++ b/target/i386/tcg/sysemu/excp_helper.c
@@ -107,6 +107,9 @@ static bool ptw_setl_slow(const PTETranslate *in, uint32_t old, uint32_t new)
{
uint32_t cmp;
+    /* We are in cpu_exec, and start_exclusive can't be called directly. */
+ g_assert(current_cpu && current_cpu->running);
+ cpu_exec_end(current_cpu);
/* Does x86 really perform a rmw cycle on mmio for ptw? */
start_exclusive();
cmp = cpu_ldl_mmuidx_ra(in->env, in->gaddr, in->ptw_idx, 0);
@@ -114,6 +117,7 @@ static bool ptw_setl_slow(const PTETranslate *in, uint32_t old, uint32_t new)
cpu_stl_mmuidx_ra(in->env, in->gaddr, new, in->ptw_idx, 0);
}
end_exclusive();
+ cpu_exec_start(current_cpu);
return cmp == old;
}
--
2.39.5