Mahmoud Mandour <ma.mandourr@gmail.com> writes:
> It's not necessary to lock the address translation portion of the
> vcpu_mem_access callback.
>
> Signed-off-by: Mahmoud Mandour <ma.mandourr@gmail.com>
> ---
> contrib/plugins/cache.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/contrib/plugins/cache.c b/contrib/plugins/cache.c
> index 4a71602639..695fb969dc 100644
> --- a/contrib/plugins/cache.c
> +++ b/contrib/plugins/cache.c
> @@ -355,15 +355,14 @@ static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
>      struct qemu_plugin_hwaddr *hwaddr;
>      InsnData *insn;
>
> -    g_mutex_lock(&mtx);
>      hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
>      if (hwaddr && qemu_plugin_hwaddr_is_io(hwaddr)) {
> -        g_mutex_unlock(&mtx);
>          return;
>      }
>
>      effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;
>
> +    g_mutex_lock(&mtx);
>      if (!access_cache(dcache, effective_addr)) {
>          insn = (InsnData *) userdata;
>          insn->dmisses++;
This is fine, but I see an exit leg creep in later which I think we can
eliminate; I'll comment on that there.
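
For anyone following along, the shape of the function after this change is
roughly the following. This is only a sketch based on the quoted hunk; the
miss accounting under the lock and the final unlock are abbreviated and
assumed from the existing plugin code rather than shown in the hunk:

#include <glib.h>          /* g_mutex_lock()/g_mutex_unlock() */
#include <qemu-plugin.h>   /* qemu_plugin_get_hwaddr() and friends */

static void vcpu_mem_access(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
                            uint64_t vaddr, void *userdata)
{
    uint64_t effective_addr;
    struct qemu_plugin_hwaddr *hwaddr;
    InsnData *insn;

    /* Address translation is per-access and touches no shared plugin
     * state, so it can run outside the critical section. */
    hwaddr = qemu_plugin_get_hwaddr(info, vaddr);
    if (hwaddr && qemu_plugin_hwaddr_is_io(hwaddr)) {
        return;   /* IO accesses are not modelled */
    }

    effective_addr = hwaddr ? qemu_plugin_hwaddr_phys_addr(hwaddr) : vaddr;

    /* Only the shared cache model needs the mutex. */
    g_mutex_lock(&mtx);
    if (!access_cache(dcache, effective_addr)) {
        insn = (InsnData *) userdata;
        insn->dmisses++;
        /* ... remaining miss accounting as in the existing code ... */
    }
    g_mutex_unlock(&mtx);   /* assumed: unlock at the end as before */
}
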
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
--
Alex Bennée