Hello,
I would like to request feedback on the following patch, which I do
not believe should be applied to master as-is. The idea is to avoid
gathering the full CPU state in the fast path of an indirect branch
lookup when running in user mode on a platform where the flags can
only be changed in privileged mode. I believe this holds in the
AArch64 scenario that I care about, but it is clearly not true in
general. I'm particularly seeking feedback on how to clean this up
into a version that checks the correct necessary and sufficient
conditions, so that every target that can benefit from it is able to
do so.
On the workload that I am targeting (aarch64 on x86), this patch
reduces execution wall time by approximately 20%, and eliminates
indirect branch lookups from the hot stack traces entirely.
Thank you,
--Owen
From 3d96db17d3baacb92ef1bc5e70ef06b97d06a0ae Mon Sep 17 00:00:00 2001
From: Owen Anderson <oanderso@google.com>
Date: Tue, 29 Sep 2020 13:47:00 -0700
Subject: [RFC] Don't look up the full CPU state in the indirect branch fast
path on AArch64 when running in user mode.
Most of the CPU state can't be changed in user mode, so this is useless work.
Signed-off-by: Owen Anderson <oanderso@google.com>
---
include/exec/tb-lookup.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/include/exec/tb-lookup.h b/include/exec/tb-lookup.h
index 9cf475bb03..f4ea0eb4c0 100644
--- a/include/exec/tb-lookup.h
+++ b/include/exec/tb-lookup.h
@@ -25,7 +25,15 @@ tb_lookup__cpu_state(CPUState *cpu, target_ulong *pc, target_ulong *cs_base,
     TranslationBlock *tb;
     uint32_t hash;

+#if !defined(TARGET_ARM) || !defined(CONFIG_USER_ONLY)
     cpu_get_tb_cpu_state(env, pc, cs_base, flags);
+#else
+    if (is_a64(env)) {
+        *pc = env->pc;
+    } else {
+        *pc = env->regs[15];
+    }
+#endif
     hash = tb_jmp_cache_hash_func(*pc);
     tb = qatomic_rcu_read(&cpu->tb_jmp_cache[hash]);
@@ -34,12 +42,19 @@ tb_lookup__cpu_state(CPUState *cpu, target_ulong *pc, target_ulong *cs_base,

     if (likely(tb &&
                tb->pc == *pc &&
+#if !defined(TARGET_ARM) || !defined(CONFIG_USER_ONLY)
                tb->cs_base == *cs_base &&
                tb->flags == *flags &&
+#endif
                tb->trace_vcpu_dstate == *cpu->trace_dstate &&
                (tb_cflags(tb) & (CF_HASH_MASK | CF_INVALID)) == cf_mask)) {
         return tb;
     }
+
+#ifdef CONFIG_USER_ONLY
+    cpu_get_tb_cpu_state(env, pc, cs_base, flags);
+#endif
+
     tb = tb_htable_lookup(cpu, *pc, *cs_base, *flags, cf_mask);
     if (tb == NULL) {
         return NULL;
--
2.28.0.709.gb0816b6eb0-goog
Ping.

I'd like to get feedback on how/whether this could be developed into a
landable version.

Thanks,

--Owen

On Tue, Sep 29, 2020 at 2:32 PM Owen Anderson <oanderso@google.com> wrote:
> [snip: original RFC quoted in full]
Ping.

On Mon, Oct 12, 2020 at 1:52 PM Owen Anderson <oanderso@google.com> wrote:
> Ping.
>
> I'd like to get feedback on how/whether this could be developed into a
> landable version.
>
> [snip: original RFC quoted in full]

--
--Owen
On 9/29/20 2:32 PM, Owen Anderson wrote:
> [snip]
>
> On the workload that I am targeting (aarch64 on x86), this patch
> reduces execution wall time by approximately 20%, and eliminates
> indirect branch lookups from the hot stack traces entirely.

(1) What qemu version are you looking at and,
(2) Do you have --enable-tcg-debug enabled?

Because you should not be seeing anything even close to 20% overhead.

In e979972a6a1 (included in qemu 4.2), the AArch64 path is

    uint32_t flags = env->hflags;

    *cs_base = 0;
    if (FIELD_EX32(flags, TBFLAG_ANY, AARCH64_STATE)) {
        *pc = env->pc;
        if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
            flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
        }
        pstate_for_ss = env->pstate;
    }
    if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE)
        && (pstate_for_ss & PSTATE_SS)) {
        flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
    }
    *pflags = flags;

With --enable-tcg-debug, there is an additional step wherein we
validate that env->hflags has the correct value. Which has caught a
number of bugs.

With a silly testcase like so:

    for (int x = 0; x < 10000000; ++x) {
        void *tmp;
        asm volatile("adr %0,1f; br %0; 1:" : "=r"(tmp));
    }

I see cpu_get_tb_cpu_state no higher than 10% of the total runtime.
Which, I admit, is higher than I expected, but still nothing like what
you're reporting.
And a "reasonable" test case should surely have a lower proportion of
indirect branches per dynamic instruction.

> +#if !defined(TARGET_ARM) || !defined(CONFIG_USER_ONLY)
>      cpu_get_tb_cpu_state(env, pc, cs_base, flags);
> +#else
> +    if (is_a64(env)) {
> +        *pc = env->pc;
> +    } else {
> +        *pc = env->regs[15];
> +    }
> +#endif

...

> +#if !defined(TARGET_ARM) || !defined(CONFIG_USER_ONLY)
>      tb->cs_base == *cs_base &&
>      tb->flags == *flags &&
> +#endif

This is assuming that all TBs have the same flags, and thus that the
flags don't need comparing. Which is false, even for CONFIG_USER_ONLY.
I would guess that testing -cpu cortex-a57 does not use any of the
bits that might change, but post-v8.2 features will: SVE, BTI, MTE.

So, this change breaks stuff.

r~
On Mon, Oct 19, 2020 at 11:22 AM Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> (1) What qemu version are you looking at and,
> (2) Do you have --enable-tcg-debug enabled?

My use case is a large automated testing environment for large C++
binaries with heavy use of virtual dispatch. The binaries are
generally not built at high optimization levels (-O0 or -O1), so it's
not very surprising to me that indirect branches are more dominant in
this workload.

My use case is currently using QEMU 4.0, but we will be moving to QEMU
4.2 soon. I do not have --enable-tcg-debug enabled.

e979972a6a1 does look promising, and it might deliver increased
performance for our use case. It looks like the code in 4.0 does a lot
more work gathering the flags values from a variety of places.

--Owen
On 10/19/20 3:44 PM, Owen Anderson wrote:
> My use case is currently using QEMU 4.0, but we will be moving to QEMU
> 4.2 soon. I do not have --enable-tcg-debug enabled.
>
> e979972a6a1 does look promising, and like it might deliver increased
> performance for our use case. It looks like the code in 4.0 is doing a
> lot more work gathering the flags values from a variety of places.

Yes, before 4.2 we did a *lot* more work gathering flags, and the
overhead you see roughly corresponds with what I saw.

r~