Just a little precursor re-factoring before I was going to add a trace
point:
- single return point, defaulting to tcg_ctx.code_gen_epilogue
- move cs_base, pc and flags inside the jump cache hit scope
- calculate the tb_jmp_cache hash once
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
tcg-runtime.c | 35 +++++++++++++++++++----------------
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/tcg-runtime.c b/tcg-runtime.c
index 7fa90ce508..f4bfa9cea6 100644
--- a/tcg-runtime.c
+++ b/tcg-runtime.c
@@ -147,30 +147,33 @@ uint64_t HELPER(ctpop_i64)(uint64_t arg)
void *HELPER(lookup_tb_ptr)(CPUArchState *env, target_ulong addr)
{
CPUState *cpu = ENV_GET_CPU(env);
+ unsigned int addr_hash = tb_jmp_cache_hash_func(addr);
+ void *code_ptr = NULL;
TranslationBlock *tb;
- target_ulong cs_base, pc;
- uint32_t flags;
- tb = atomic_rcu_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(addr)]);
+ tb = atomic_rcu_read(&cpu->tb_jmp_cache[addr_hash]);
if (likely(tb)) {
+ target_ulong cs_base, pc;
+ uint32_t flags;
+
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
+
if (likely(tb->pc == addr && tb->cs_base == cs_base &&
tb->flags == flags)) {
- goto found;
- }
- tb = tb_htable_lookup(cpu, addr, cs_base, flags);
- if (likely(tb)) {
- atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(addr)], tb);
- goto found;
+ code_ptr = tb->tc_ptr;
+ } else {
+ /* If we didn't find it in the jmp_cache we still might
+ * find it in the global tb_htable
+ */
+ tb = tb_htable_lookup(cpu, addr, cs_base, flags);
+ if (likely(tb)) {
+ atomic_set(&cpu->tb_jmp_cache[addr_hash], tb);
+ code_ptr = tb->tc_ptr;
+ }
}
}
- return tcg_ctx.code_gen_epilogue;
- found:
- qemu_log_mask_and_addr(CPU_LOG_EXEC, addr,
- "Chain %p [%d: " TARGET_FMT_lx "] %s\n",
- tb->tc_ptr, cpu->cpu_index, addr,
- lookup_symbol(addr));
- return tb->tc_ptr;
+
+ return code_ptr ? code_ptr : tcg_ctx.code_gen_epilogue;
}
void HELPER(exit_atomic)(CPUArchState *env)
--
2.13.0
On 06/14/2017 07:02 AM, Alex Bennée wrote:
> Just a little precursor re-factoring before I was going to add a trace
> point:
>
> - single return point, defaulting to tcg_ctx.code_gen_epilogue
Why? And if you're going to do that, why not init ret = epilogue and avoid the
null test at the end?
> - move cs_base, pc and flags inside the jump cache hit scope
Funny story. While looking at this again, I notice that there's no reason to
avoid, and every reason not to avoid, calling tb_htable_lookup when
tb_jmp_cache is empty. So I'm now doing
tb = atomic_rcu_read(...);
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
if (!(tb && tb->pc == addr && ...)) {
tb = tb_htable_lookup(...);
...
> - calculate the tb_jmp_cache hash once
I did look once and the compiler is doing the CSE. But it does look cleaner.
r~
Richard Henderson <rth@twiddle.net> writes:
> On 06/14/2017 07:02 AM, Alex Bennée wrote:
>> Just a little precursor re-factoring before I was going to add a trace
>> point:
>>
>> - single return point, defaulting to tcg_ctx.code_gen_epilogue
>
> Why? And if you're going to do that, why not init ret = epilogue and
> avoid the null test at the end?
That would be better. I think the NULL is left over from my original
debugging when I had added a trace point and wanted to see when the
lookup had failed.
>
>> - move cs_base, pc and flags inside the jump cache hit scope
>
> Funny story. While looking at this again, I notice that there's no
> reason to avoid, and every reason not to avoid, calling
> tb_htable_lookup when tb_jmp_cache is empty. So I'm now doing
>
> tb = atomic_rcu_read(...);
> cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>
> if (!(tb && tb->pc == addr && ...)) {
> tb = tb_htable_lookup(...);
> ...
OK I can fix on the next iteration.
>> - calculate the tb_jmp_cache hash once
>
> I did look once and the compiler is doing the CSE. But it does look cleaner.
>
>
> r~
--
Alex Bennée