The following changes since commit 627634031092e1514f363fd8659a579398de0f0e:

  Merge tag 'buildsys-qom-qdev-ui-20230227' of https://github.com/philmd/qemu into staging (2023-02-28 15:09:18 +0000)

are available in the Git repository at:

  https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20230228

for you to fetch changes up to c7fbf10db8718d2eba87712bc3410b671157a377:

  tcg: Update docs/devel/tcg-ops.rst for temporary changes (2023-02-28 10:36:19 -1000)

----------------------------------------------------------------
helper-head: Add fpu/softfloat-types.h
softmmu: Use memmove in flatview_write_continue
tcg: Add 'size' param to probe_access_flags, probe_access_full
tcg: Convert TARGET_TB_PCREL to CF_PCREL
tcg: Simplify temporary lifetimes for translators

----------------------------------------------------------------
Akihiko Odaki (1):
      softmmu: Use memmove in flatview_write_continue

Anton Johansson (27):
      include/exec: Introduce `CF_PCREL`
      target/i386: set `CF_PCREL` in `x86_cpu_realizefn`
      target/arm: set `CF_PCREL` in `arm_cpu_realizefn`
      accel/tcg: Replace `TARGET_TB_PCREL` with `CF_PCREL`
      include/exec: Replace `TARGET_TB_PCREL` with `CF_PCREL`
      target/arm: Replace `TARGET_TB_PCREL` with `CF_PCREL`
      target/i386: Replace `TARGET_TB_PCREL` with `CF_PCREL`
      include/exec: Remove `TARGET_TB_PCREL` define
      target/arm: Remove `TARGET_TB_PCREL` define
      target/i386: Remove `TARGET_TB_PCREL` define
      accel/tcg: Move jmp-cache `CF_PCREL` checks to caller
      accel/tcg: Replace `tb_pc()` with `tb->pc`
      target/tricore: Replace `tb_pc()` with `tb->pc`
      target/sparc: Replace `tb_pc()` with `tb->pc`
      target/sh4: Replace `tb_pc()` with `tb->pc`
      target/rx: Replace `tb_pc()` with `tb->pc`
      target/riscv: Replace `tb_pc()` with `tb->pc`
      target/openrisc: Replace `tb_pc()` with `tb->pc`
      target/mips: Replace `tb_pc()` with `tb->pc`
      target/microblaze: Replace `tb_pc()` with `tb->pc`
      target/loongarch: Replace `tb_pc()` with `tb->pc`
      target/i386: Replace `tb_pc()` with `tb->pc`
      target/hppa: Replace `tb_pc()` with `tb->pc`
      target/hexagon: Replace `tb_pc()` with `tb->pc`
      target/avr: Replace `tb_pc()` with `tb->pc`
      target/arm: Replace `tb_pc()` with `tb->pc`
      include/exec: Remove `tb_pc()`

Daniel Henrique Barboza (1):
      accel/tcg: Add 'size' param to probe_access_flags()

Philippe Mathieu-Daudé (1):
      exec/helper-head: Include missing "fpu/softfloat-types.h" header

Richard Henderson (32):
      accel/tcg: Add 'size' param to probe_access_full
      tcg: Adjust TCGContext.temps_in_use check
      accel/tcg: Pass max_insn to gen_intermediate_code by pointer
      accel/tcg: Use more accurate max_insns for tb_overflow
      tcg: Remove branch-to-next regardless of reference count
      tcg: Rename TEMP_LOCAL to TEMP_TB
      tcg: Use noinline for major tcg_gen_code subroutines
      tcg: Add liveness_pass_0
      tcg: Remove TEMP_NORMAL
      tcg: Pass TCGTempKind to tcg_temp_new_internal
      tcg: Use tcg_constant_i32 in tcg_gen_io_start
      tcg: Add tcg_gen_movi_ptr
      tcg: Add tcg_temp_ebb_new_{i32,i64,ptr}
      tcg: Use tcg_temp_ebb_new_* in tcg/
      tcg: Use tcg_constant_ptr in do_dup
      accel/tcg/plugin: Use tcg_temp_ebb_*
      accel/tcg/plugin: Tidy plugin_gen_disable_mem_helpers
      tcg: Don't re-use TEMP_TB temporaries
      tcg: Change default temp lifetime to TEMP_TB
      target/arm: Drop copies in gen_sve_{ldr,str}
      target/arm: Don't use tcg_temp_local_new_*
      target/cris: Don't use tcg_temp_local_new
      target/hexagon: Don't use tcg_temp_local_new_*
      target/hexagon/idef-parser: Drop gen_tmp_local
      target/hppa: Don't use tcg_temp_local_new
      target/i386: Don't use tcg_temp_local_new
      target/mips: Don't use tcg_temp_local_new
      target/ppc: Don't use tcg_temp_local_new
      target/xtensa: Don't use tcg_temp_local_new_*
      exec/gen-icount: Don't use tcg_temp_local_new_i32
      tcg: Remove tcg_temp_local_new_*, tcg_const_local_*
      tcg: Update docs/devel/tcg-ops.rst for temporary changes

 docs/devel/tcg-ops.rst                      | 230 +++++++++++++----------
 target/hexagon/idef-parser/README.rst       |   4 +-
 accel/tcg/internal.h                        |  10 +-
 accel/tcg/tb-jmp-cache.h                    |  42 +----
 include/exec/cpu-defs.h                     |   3 -
 include/exec/exec-all.h                     |  26 +--
 include/exec/gen-icount.h                   |  12 +-
 include/exec/helper-head.h                  |   2 +
 include/exec/translator.h                   |   4 +-
 include/tcg/tcg-op.h                        |   7 +-
 include/tcg/tcg.h                           |  64 ++++---
 target/arm/cpu-param.h                      |   2 -
 target/arm/tcg/translate-a64.h              |   1 -
 target/arm/tcg/translate.h                  |   2 +-
 target/hexagon/gen_tcg.h                    |   4 +-
 target/i386/cpu-param.h                     |   4 -
 accel/stubs/tcg-stub.c                      |   2 +-
 accel/tcg/cpu-exec.c                        |  62 ++++--
 accel/tcg/cputlb.c                          |  21 ++-
 accel/tcg/perf.c                            |   2 +-
 accel/tcg/plugin-gen.c                      |  32 ++--
 accel/tcg/tb-maint.c                        |  10 +-
 accel/tcg/translate-all.c                   |  18 +-
 accel/tcg/translator.c                      |   6 +-
 accel/tcg/user-exec.c                       |   5 +-
 semihosting/uaccess.c                       |   2 +-
 softmmu/physmem.c                           |   2 +-
 target/alpha/translate.c                    |   2 +-
 target/arm/cpu.c                            |  17 +-
 target/arm/ptw.c                            |   4 +-
 target/arm/tcg/mte_helper.c                 |   4 +-
 target/arm/tcg/sve_helper.c                 |   4 +-
 target/arm/tcg/translate-a64.c              |  16 +-
 target/arm/tcg/translate-sve.c              |  38 +---
 target/arm/tcg/translate.c                  |  14 +-
 target/avr/cpu.c                            |   3 +-
 target/avr/translate.c                      |   2 +-
 target/cris/translate.c                     |   8 +-
 target/hexagon/cpu.c                        |   4 +-
 target/hexagon/genptr.c                     |  16 +-
 target/hexagon/idef-parser/parser-helpers.c |  26 +--
 target/hexagon/translate.c                  |   4 +-
 target/hppa/cpu.c                           |   8 +-
 target/hppa/translate.c                     |   5 +-
 target/i386/cpu.c                           |   5 +
 target/i386/helper.c                        |   2 +-
 target/i386/tcg/sysemu/excp_helper.c        |   4 +-
 target/i386/tcg/tcg-cpu.c                   |   8 +-
 target/i386/tcg/translate.c                 |  55 +++---
 target/loongarch/cpu.c                      |   6 +-
 target/loongarch/translate.c                |   2 +-
 target/m68k/translate.c                     |   2 +-
 target/microblaze/cpu.c                     |   4 +-
 target/microblaze/translate.c               |   2 +-
 target/mips/tcg/exception.c                 |   3 +-
 target/mips/tcg/sysemu/special_helper.c     |   2 +-
 target/mips/tcg/translate.c                 |  59 ++----
 target/nios2/translate.c                    |   2 +-
 target/openrisc/cpu.c                       |   4 +-
 target/openrisc/translate.c                 |   2 +-
 target/ppc/translate.c                      |   8 +-
 target/riscv/cpu.c                          |   7 +-
 target/riscv/translate.c                    |   2 +-
 target/rx/cpu.c                             |   3 +-
 target/rx/translate.c                       |   2 +-
 target/s390x/tcg/mem_helper.c               |   2 +-
 target/s390x/tcg/translate.c                |   2 +-
 target/sh4/cpu.c                            |   6 +-
 target/sh4/translate.c                      |   2 +-
 target/sparc/cpu.c                          |   4 +-
 target/sparc/translate.c                    |   2 +-
 target/tricore/cpu.c                        |   3 +-
 target/tricore/translate.c                  |   2 +-
 target/xtensa/translate.c                   |  18 +-
 tcg/optimize.c                              |   2 +-
 tcg/tcg-op-gvec.c                           | 189 ++++++++++---------
 tcg/tcg-op.c                                | 258 ++++++++++++-------------
 tcg/tcg.c                                   | 280 ++++++++++++++++------------
 target/cris/translate_v10.c.inc             |  10 +-
 target/mips/tcg/nanomips_translate.c.inc    |   4 +-
 target/ppc/translate/spe-impl.c.inc         |   8 +-
 target/ppc/translate/vmx-impl.c.inc         |   4 +-
 target/hexagon/README                       |   8 +-
 target/hexagon/gen_tcg_funcs.py             |  18 +-
 84 files changed, 870 insertions(+), 890 deletions(-)

Subject: exec/helper-head: Include missing "fpu/softfloat-types.h" header

From: Philippe Mathieu-Daudé <philmd@linaro.org>

'dh_ctype_f32' is defined as 'float32', itself declared in
"fpu/softfloat-types.h". Include this header to avoid the following
error when refactoring other headers:

  In file included from include/exec/helper-proto.h:7,
                   from include/tcg/tcg-op.h:29,
                   from ../../tcg/tcg-op-vec.c:22:
  include/exec/helper-head.h:44:22: error: unknown type name ‘float32’; did you mean ‘_Float32’?
     44 | #define dh_ctype_f32 float32
        |                      ^~~~~~~

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20221216225202.25664-1-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/helper-head.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/helper-head.h
+++ b/include/exec/helper-head.h
@@ -XXX,XX +XXX,XX @@
 #ifndef EXEC_HELPER_HEAD_H
 #define EXEC_HELPER_HEAD_H

+#include "fpu/softfloat-types.h"
+
 #define HELPER(name) glue(helper_, name)

 /* Some types that make sense in C, but not for TCG. */
--
2.34.1

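A minimal standalone reproduction of this class of failure (illustrative
only, not QEMU code; the typedef below stands in for what
"fpu/softfloat-types.h" provides, the macro for what helper-head.h defines):

    /* typedef_order.c -- build with: cc -c typedef_order.c */

    /* Without this typedef in scope, i.e. without including the header
     * that provides it before the macro is expanded, the compiler
     * reports exactly "error: unknown type name 'float32'". */
    typedef unsigned int float32;   /* stand-in for softfloat-types.h */

    #define dh_ctype_f32 float32    /* stand-in for helper-head.h */

    dh_ctype_f32 value;             /* expands to: float32 value; */

    int main(void) { return (int)value; }
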
Subject: softmmu: Use memmove in flatview_write_continue

From: Akihiko Odaki <akihiko.odaki@daynix.com>

We found a case where the source passed to flatview_write_continue() may
overlap with the destination when fuzzing igb, a newly proposed network
device, with sanitizers.

igb uses pci_dma_map() to get the Tx packet, and pci_dma_write() to write
the Rx buffer. While pci_dma_write() is usually used to write data from
memory not mapped to the guest, if igb is configured to perform loopback,
the data will be sourced from guest memory. The source and destination can
overlap, and the use of memcpy() would be invalid in such a case.

While we do not really have to deal with such an invalid request for igb,
detecting the overlap in igb code beforehand requires complex code, and
only covers this specific case. Instead, just replace memcpy() with
memmove() to tolerate overlaps. Using memmove() will slightly hurt
performance, as it must check for overlap before using SIMD instructions
for copying, but the cost should be negligible considering the inherent
complexity of flatview_write_continue().

The test cases generated by the fuzzer are available at:
https://patchew.org/QEMU/20230129053316.1071513-1-alxndr@bu.edu/

The fixed test case is:
fuzz/crash_47dfe62d9f911bf523ff48cd441b61c0013ed805

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Alexander Bulekov <alxndr@bu.edu>
Acked-by: David Hildenbrand <david@redhat.com>
Message-Id: <20230131030155.18932-1-akihiko.odaki@daynix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 softmmu/physmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
         } else {
             /* RAM case */
             ram_ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
-            memcpy(ram_ptr, buf, l);
+            memmove(ram_ptr, buf, l);
             invalidate_and_set_dirty(mr, addr1, l);
         }
--
2.34.1

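For reference, a standalone sketch (not part of the patch) of the
distinction: overlapping source and destination are undefined behaviour
for memcpy() but well-defined for memmove():

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[16] = "abcdefgh";

        /* Shift the string right by two bytes within the same buffer.
         * Source (buf) and destination (buf + 2) overlap, so memcpy()
         * would be undefined behaviour here; memmove() copies as if
         * through a temporary buffer and is always safe. */
        memmove(buf + 2, buf, strlen(buf) + 1);
        printf("%s\n", buf);    /* prints "ababcdefgh" */
        return 0;
    }
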
Subject: accel/tcg: Add 'size' param to probe_access_flags()

From: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

probe_access_flags() as it is today uses probe_access_full(), which in
turn uses probe_access_internal() with size = 0. probe_access_internal()
then uses the size to call the tlb_fill() callback for the given CPU.
This size param ('fault_size' as probe_access_internal() calls it) is
ignored by most existing .tlb_fill callback implementations, e.g.
arm_cpu_tlb_fill(), ppc_cpu_tlb_fill(), x86_cpu_tlb_fill() and
mips_cpu_tlb_fill(), to name a few.

But RISC-V's riscv_cpu_tlb_fill() actually uses it. The 'size' parameter
is used to check for PMP (Physical Memory Protection) access. This is
necessary because PMP does not make any guarantees about all the bytes
of the same page having the same permissions, i.e. the same page can
have different PMP properties, so we're forced to make sub-page range
checks. To allow RISC-V emulation to do a probe_access_flags() that
covers PMP, we need to either add a 'size' param to the existing
probe_access_flags() or create a new interface (e.g.
probe_access_range_flags).

There are quite a few probe_* APIs already, so let's add a 'size' param
to probe_access_flags() and re-use this API. This is done by open coding
what probe_access_full() does inside probe_access_flags() and passing
the 'size' param to probe_access_internal(). Existing
probe_access_flags() callers use size = 0 so as not to change their
current API usage. 'size' is asserted to enforce single-page access like
probe_access() already does.

No behavioral changes intended.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Message-Id: <20230223234427.521114-2-dbarboza@ventanamicro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h       |  3 ++-
 accel/stubs/tcg-stub.c        |  2 +-
 accel/tcg/cputlb.c            | 17 ++++++++++++++---
 accel/tcg/user-exec.c         |  5 +++--
 semihosting/uaccess.c         |  2 +-
 target/arm/ptw.c              |  2 +-
 target/arm/tcg/sve_helper.c   |  2 +-
 target/s390x/tcg/mem_helper.c |  2 +-
 8 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ static inline void *probe_read(CPUArchState *env, target_ulong addr, int size,
  * probe_access_flags:
  * @env: CPUArchState
  * @addr: guest virtual address to look up
+ * @size: size of the access
  * @access_type: read, write or execute permission
  * @mmu_idx: MMU index to use for lookup
  * @nonfault: suppress the fault
@@ -XXX,XX +XXX,XX @@ static inline void *probe_read(CPUArchState *env, target_ulong addr, int size,
  * Do handle clean pages, so exclude TLB_NOTDIRY from the returned flags.
  * For simplicity, all "mmio-like" flags are folded to TLB_MMIO.
  */
-int probe_access_flags(CPUArchState *env, target_ulong addr,
+int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr);

diff --git a/accel/stubs/tcg-stub.c b/accel/stubs/tcg-stub.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/stubs/tcg-stub.c
+++ b/accel/stubs/tcg-stub.c
@@ -XXX,XX +XXX,XX @@ void tcg_flush_jmp_cache(CPUState *cpu)
 {
 }

-int probe_access_flags(CPUArchState *env, target_ulong addr,
+int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr)
 {
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ int probe_access_full(CPUArchState *env, target_ulong addr,
     return flags;
 }

-int probe_access_flags(CPUArchState *env, target_ulong addr,
+int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr)
 {
     CPUTLBEntryFull *full;
+    int flags;

-    return probe_access_full(env, addr, access_type, mmu_idx,
-                             nonfault, phost, &full, retaddr);
+    g_assert(-(addr | TARGET_PAGE_MASK) >= size);
+
+    flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
+                                  nonfault, phost, &full, retaddr);
+
+    /* Handle clean RAM pages. */
+    if (unlikely(flags & TLB_NOTDIRTY)) {
+        notdirty_write(env_cpu(env), addr, 1, full, retaddr);
+        flags &= ~TLB_NOTDIRTY;
+    }
+
+    return flags;
 }

 void *probe_access(CPUArchState *env, target_ulong addr, int size,
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     cpu_loop_exit_sigsegv(env_cpu(env), addr, access_type, maperr, ra);
 }

-int probe_access_flags(CPUArchState *env, target_ulong addr,
+int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t ra)
 {
     int flags;

-    flags = probe_access_internal(env, addr, 0, access_type, nonfault, ra);
+    g_assert(-(addr | TARGET_PAGE_MASK) >= size);
+    flags = probe_access_internal(env, addr, size, access_type, nonfault, ra);
     *phost = flags ? NULL : g2h(env_cpu(env), addr);
     return flags;
 }
diff --git a/semihosting/uaccess.c b/semihosting/uaccess.c
index XXXXXXX..XXXXXXX 100644
--- a/semihosting/uaccess.c
+++ b/semihosting/uaccess.c
@@ -XXX,XX +XXX,XX @@ ssize_t softmmu_strlen_user(CPUArchState *env, target_ulong addr)
         /* Find the number of bytes remaining in the page. */
         left_in_page = TARGET_PAGE_SIZE - (addr & ~TARGET_PAGE_MASK);

-        flags = probe_access_flags(env, addr, MMU_DATA_LOAD,
+        flags = probe_access_flags(env, addr, 0, MMU_DATA_LOAD,
                                    mmu_idx, true, &h, 0);
         if (flags & TLB_INVALID_MASK) {
             return -1;
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
         void *discard;

         env->tlb_fi = fi;
-        flags = probe_access_flags(env, ptw->out_virt, MMU_DATA_STORE,
+        flags = probe_access_flags(env, ptw->out_virt, 0, MMU_DATA_STORE,
                                    arm_to_core_mmu_idx(ptw->in_ptw_idx),
                                    true, &discard, 0);
         env->tlb_fi = NULL;
diff --git a/target/arm/tcg/sve_helper.c b/target/arm/tcg/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sve_helper.c
+++ b/target/arm/tcg/sve_helper.c
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
     addr = useronly_clean_ptr(addr);

 #ifdef CONFIG_USER_ONLY
-    flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
+    flags = probe_access_flags(env, addr, 0, access_type, mmu_idx, nofault,
                                &info->host, retaddr);
 #else
     CPUTLBEntryFull *full;
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -XXX,XX +XXX,XX @@ static inline int s390_probe_access(CPUArchState *env, target_ulong addr,
                                     int mmu_idx, bool nonfault,
                                     void **phost, uintptr_t ra)
 {
-    int flags = probe_access_flags(env, addr, access_type, mmu_idx,
+    int flags = probe_access_flags(env, addr, 0, access_type, mmu_idx,
                                    nonfault, phost, ra);

     if (unlikely(flags & TLB_INVALID_MASK)) {
--
2.34.1

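The new g_assert() uses a two's-complement identity: with
TARGET_PAGE_MASK = ~(page_size - 1), the value -(addr | TARGET_PAGE_MASK)
is the number of bytes from addr to the end of its page, so the assert
rejects any probe that would cross a page boundary. A standalone sketch
of the identity (assuming a 4 KiB page; not QEMU code):

    #include <assert.h>
    #include <inttypes.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL
    #define PAGE_MASK (~(PAGE_SIZE - 1))   /* mirrors TARGET_PAGE_MASK */

    int main(void)
    {
        for (uint64_t addr = 0; addr < 3 * PAGE_SIZE; addr += 777) {
            /* All bits above the page offset become 1, so negating
             * yields page_size - offset: the bytes left in the page. */
            uint64_t left = -(addr | PAGE_MASK);
            assert(left == PAGE_SIZE - (addr & (PAGE_SIZE - 1)));
            printf("addr=%#" PRIx64 "  bytes left in page=%" PRIu64 "\n",
                   addr, left);
        }
        return 0;
    }
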
Subject: accel/tcg: Add 'size' param to probe_access_full

Change to match the recent change to probe_access_flags.
All existing callers updated to supply 0, so no change in behaviour.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h              | 2 +-
 accel/tcg/cputlb.c                   | 4 ++--
 target/arm/ptw.c                     | 2 +-
 target/arm/tcg/mte_helper.c          | 4 ++--
 target/arm/tcg/sve_helper.c          | 2 +-
 target/arm/tcg/translate-a64.c       | 2 +-
 target/i386/tcg/sysemu/excp_helper.c | 4 ++--
 7 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ int probe_access_flags(CPUArchState *env, target_ulong addr, int size,
  * and must be consumed or copied immediately, before any further
  * access or changes to TLB @mmu_idx.
  */
-int probe_access_full(CPUArchState *env, target_ulong addr,
+int probe_access_full(CPUArchState *env, target_ulong addr, int size,
                       MMUAccessType access_type, int mmu_idx,
                       bool nonfault, void **phost,
                       CPUTLBEntryFull **pfull, uintptr_t retaddr);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     return flags;
 }

-int probe_access_full(CPUArchState *env, target_ulong addr,
+int probe_access_full(CPUArchState *env, target_ulong addr, int size,
                       MMUAccessType access_type, int mmu_idx,
                       bool nonfault, void **phost, CPUTLBEntryFull **pfull,
                       uintptr_t retaddr)
 {
-    int flags = probe_access_internal(env, addr, 0, access_type, mmu_idx,
+    int flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
                                       nonfault, phost, pfull, retaddr);

     /* Handle clean RAM pages. */
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         int flags;

         env->tlb_fi = fi;
-        flags = probe_access_full(env, addr, MMU_DATA_LOAD,
+        flags = probe_access_full(env, addr, 0, MMU_DATA_LOAD,
                                   arm_to_core_mmu_idx(s2_mmu_idx),
                                   true, &ptw->out_host, &full, 0);
         env->tlb_fi = NULL;
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      * valid. Indicate to probe_access_flags no-fault, then assert that
      * we received a valid page.
      */
-    flags = probe_access_full(env, ptr, ptr_access, ptr_mmu_idx,
+    flags = probe_access_full(env, ptr, 0, ptr_access, ptr_mmu_idx,
                               ra == 0, &host, &full, ra);
     assert(!(flags & TLB_INVALID_MASK));

@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      */
     in_page = -(ptr | TARGET_PAGE_MASK);
     if (unlikely(ptr_size > in_page)) {
-        flags |= probe_access_full(env, ptr + in_page, ptr_access,
+        flags |= probe_access_full(env, ptr + in_page, 0, ptr_access,
                                    ptr_mmu_idx, ra == 0, &host, &full, ra);
         assert(!(flags & TLB_INVALID_MASK));
     }
diff --git a/target/arm/tcg/sve_helper.c b/target/arm/tcg/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sve_helper.c
+++ b/target/arm/tcg/sve_helper.c
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
                                &info->host, retaddr);
 #else
     CPUTLBEntryFull *full;
-    flags = probe_access_full(env, addr, access_type, mmu_idx, nofault,
+    flags = probe_access_full(env, addr, 0, access_type, mmu_idx, nofault,
                               &info->host, &full, retaddr);
 #endif
     info->flags = flags;
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
      * that the TLB entry must be present and valid, and thus this
      * access will never raise an exception.
      */
-    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
+    flags = probe_access_full(env, addr, 0, MMU_INST_FETCH, mmu_idx,
                               false, &host, &full, 0);
     assert(!(flags & TLB_INVALID_MASK));

diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/sysemu/excp_helper.c
+++ b/target/i386/tcg/sysemu/excp_helper.c
@@ -XXX,XX +XXX,XX @@ static bool ptw_translate(PTETranslate *inout, hwaddr addr)
     int flags;

     inout->gaddr = addr;
-    flags = probe_access_full(inout->env, addr, MMU_DATA_STORE,
+    flags = probe_access_full(inout->env, addr, 0, MMU_DATA_STORE,
                               inout->ptw_idx, true, &inout->haddr, &full, 0);

     if (unlikely(flags & TLB_INVALID_MASK)) {
@@ -XXX,XX +XXX,XX @@ do_check_protect_pse36:
     CPUTLBEntryFull *full;
     int flags, nested_page_size;

-    flags = probe_access_full(env, paddr, access_type,
+    flags = probe_access_full(env, paddr, 0, access_type,
                               MMU_NESTED_IDX, true,
                               &pte_trans.haddr, &full, 0);
     if (unlikely(flags & TLB_INVALID_MASK)) {
--
2.34.1

Subject: include/exec: Introduce `CF_PCREL`

From: Anton Johansson <anjo@rev.ng>

Adds a new bit to TranslationBlock.cflags denoting whether or not the
instructions of a given translation block are pc-relative. This flag
aims to replace the macro `TARGET_TB_PCREL`.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-2-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ struct TranslationBlock {
 #define CF_INVALID       0x00040000 /* TB is stale. Set with @jmp_lock held */
 #define CF_PARALLEL      0x00080000 /* Generate code for a parallel context */
 #define CF_NOIRQ         0x00100000 /* Generate an uninterruptible TB */
+#define CF_PCREL         0x00200000 /* Opcodes in TB are PC-relative */
 #define CF_CLUSTER_MASK  0xff000000 /* Top 8 bits are cluster ID */
 #define CF_CLUSTER_SHIFT 24

--
2.34.1

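Since cflags is a plain bitmask, the new flag composes with the existing
bits and leaves the cluster-ID field untouched; a standalone sketch
(values copied from the hunk above, otherwise illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define CF_NOIRQ        0x00100000u
    #define CF_PCREL        0x00200000u   /* the new bit */
    #define CF_CLUSTER_MASK 0xff000000u

    int main(void)
    {
        uint32_t cflags = CF_NOIRQ | CF_PCREL | (1u << 24);

        /* Testing the new flag is an ordinary mask check and does not
         * disturb neighbouring bits or the cluster-ID field. */
        printf("pcrel=%d cluster=%u\n",
               (cflags & CF_PCREL) != 0,
               (cflags & CF_CLUSTER_MASK) >> 24);
        return 0;
    }
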
Subject: target/i386: set `CF_PCREL` in `x86_cpu_realizefn`

From: Anton Johansson <anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-3-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/cpu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
     static bool ht_warned;
     unsigned requested_lbr_fmt;

+    /* Use pc-relative instructions in system-mode */
+#ifndef CONFIG_USER_ONLY
+    cs->tcg_cflags |= CF_PCREL;
+#endif
+
     if (cpu->apic_id == UNASSIGNED_APIC_ID) {
         error_setg(errp, "apic-id property was not initialized properly");
         return;
--
2.34.1

Subject: target/arm: set `CF_PCREL` in `arm_cpu_realizefn`

From: Anton Johansson <anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     Error *local_err = NULL;
     bool no_aa32 = false;

+    /* Use pc-relative instructions in system-mode */
+#ifndef CONFIG_USER_ONLY
+    cs->tcg_cflags |= CF_PCREL;
+#endif
+
     /* If we needed to query the host kernel for the CPU features
      * then it's possible that might have failed in the initfn, but
      * this is the first point where we can report it.
--
2.34.1

Subject: accel/tcg: Replace `TARGET_TB_PCREL` with `CF_PCREL`

From: Anton Johansson <anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-5-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/internal.h      | 10 ++++----
 accel/tcg/tb-jmp-cache.h  | 48 +++++++++++++++++++--------------------
 accel/tcg/cpu-exec.c      |  8 +++----
 accel/tcg/perf.c          |  2 +-
 accel/tcg/tb-maint.c      |  8 +++----
 accel/tcg/translate-all.c | 14 ++++++------
 6 files changed, 44 insertions(+), 46 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -XXX,XX +XXX,XX @@ void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
 /* Return the current PC from CPU, which may be cached in TB. */
 static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
 {
-#if TARGET_TB_PCREL
-    return cpu->cc->get_pc(cpu);
-#else
-    return tb_pc(tb);
-#endif
+    if (tb_cflags(tb) & CF_PCREL) {
+        return cpu->cc->get_pc(cpu);
+    } else {
+        return tb_pc(tb);
+    }
 }

 extern int64_t max_delay;
diff --git a/accel/tcg/tb-jmp-cache.h b/accel/tcg/tb-jmp-cache.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tb-jmp-cache.h
+++ b/accel/tcg/tb-jmp-cache.h
@@ -XXX,XX +XXX,XX @@

 /*
  * Accessed in parallel; all accesses to 'tb' must be atomic.
- * For TARGET_TB_PCREL, accesses to 'pc' must be protected by
- * a load_acquire/store_release to 'tb'.
+ * For CF_PCREL, accesses to 'pc' must be protected by a
+ * load_acquire/store_release to 'tb'.
  */
 struct CPUJumpCache {
     struct rcu_head rcu;
     struct {
         TranslationBlock *tb;
-#if TARGET_TB_PCREL
         target_ulong pc;
-#endif
     } array[TB_JMP_CACHE_SIZE];
 };

 static inline TranslationBlock *
-tb_jmp_cache_get_tb(CPUJumpCache *jc, uint32_t hash)
+tb_jmp_cache_get_tb(CPUJumpCache *jc, uint32_t cflags, uint32_t hash)
 {
-#if TARGET_TB_PCREL
-    /* Use acquire to ensure current load of pc from jc. */
-    return qatomic_load_acquire(&jc->array[hash].tb);
-#else
-    /* Use rcu_read to ensure current load of pc from *tb. */
-    return qatomic_rcu_read(&jc->array[hash].tb);
-#endif
+    if (cflags & CF_PCREL) {
+        /* Use acquire to ensure current load of pc from jc. */
+        return qatomic_load_acquire(&jc->array[hash].tb);
+    } else {
+        /* Use rcu_read to ensure current load of pc from *tb. */
+        return qatomic_rcu_read(&jc->array[hash].tb);
+    }
 }

 static inline target_ulong
 tb_jmp_cache_get_pc(CPUJumpCache *jc, uint32_t hash, TranslationBlock *tb)
 {
-#if TARGET_TB_PCREL
-    return jc->array[hash].pc;
-#else
-    return tb_pc(tb);
-#endif
+    if (tb_cflags(tb) & CF_PCREL) {
+        return jc->array[hash].pc;
+    } else {
+        return tb_pc(tb);
+    }
 }

 static inline void
 tb_jmp_cache_set(CPUJumpCache *jc, uint32_t hash,
                  TranslationBlock *tb, target_ulong pc)
 {
-#if TARGET_TB_PCREL
-    jc->array[hash].pc = pc;
-    /* Use store_release on tb to ensure pc is written first. */
-    qatomic_store_release(&jc->array[hash].tb, tb);
-#else
-    /* Use the pc value already stored in tb->pc. */
-    qatomic_set(&jc->array[hash].tb, tb);
-#endif
+    if (tb_cflags(tb) & CF_PCREL) {
+        jc->array[hash].pc = pc;
+        /* Use store_release on tb to ensure pc is written first. */
+        qatomic_store_release(&jc->array[hash].tb, tb);
+    } else {
+        /* Use the pc value already stored in tb->pc. */
+        qatomic_set(&jc->array[hash].tb, tb);
+    }
 }

 #endif /* ACCEL_TCG_TB_JMP_CACHE_H */
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
     const TranslationBlock *tb = p;
     const struct tb_desc *desc = d;

-    if ((TARGET_TB_PCREL || tb_pc(tb) == desc->pc) &&
+    if ((tb_cflags(tb) & CF_PCREL || tb_pc(tb) == desc->pc) &&
         tb_page_addr0(tb) == desc->page_addr0 &&
         tb->cs_base == desc->cs_base &&
         tb->flags == desc->flags &&
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
         return NULL;
     }
     desc.page_addr0 = phys_pc;
-    h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : pc),
+    h = tb_hash_func(phys_pc, (cflags & CF_PCREL ? 0 : pc),
                      flags, cflags, *cpu->trace_dstate);
     return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
 }
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,

     hash = tb_jmp_cache_hash_func(pc);
     jc = cpu->tb_jmp_cache;
-    tb = tb_jmp_cache_get_tb(jc, hash);
+    tb = tb_jmp_cache_get_tb(jc, cflags, hash);

     if (likely(tb &&
                tb_jmp_cache_get_pc(jc, hash, tb) == pc &&
@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
         if (cc->tcg_ops->synchronize_from_tb) {
             cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
         } else {
-            assert(!TARGET_TB_PCREL);
+            tcg_debug_assert(!(tb_cflags(last_tb) & CF_PCREL));
             assert(cc->set_pc);
             cc->set_pc(cpu, tb_pc(last_tb));
         }
diff --git a/accel/tcg/perf.c b/accel/tcg/perf.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/perf.c
+++ b/accel/tcg/perf.c
@@ -XXX,XX +XXX,XX @@ void perf_report_code(uint64_t guest_pc, TranslationBlock *tb,
     for (insn = 0; insn < tb->icount; insn++) {
         /* FIXME: This replicates the restore_state_to_opc() logic. */
         q[insn].address = tcg_ctx->gen_insn_data[insn][0];
-        if (TARGET_TB_PCREL) {
+        if (tb_cflags(tb) & CF_PCREL) {
             q[insn].address |= (guest_pc & TARGET_PAGE_MASK);
         } else {
 #if defined(TARGET_I386)
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -XXX,XX +XXX,XX @@ static bool tb_cmp(const void *ap, const void *bp)
     const TranslationBlock *a = ap;
     const TranslationBlock *b = bp;

-    return ((TARGET_TB_PCREL || tb_pc(a) == tb_pc(b)) &&
+    return ((tb_cflags(a) & CF_PCREL || tb_pc(a) == tb_pc(b)) &&
            a->cs_base == b->cs_base &&
            a->flags == b->flags &&
            (tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
@@ -XXX,XX +XXX,XX @@ static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
 {
     CPUState *cpu;

-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(tb) & CF_PCREL) {
         /* A TB may be at any virtual address */
         CPU_FOREACH(cpu) {
             tcg_flush_jmp_cache(cpu);
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)

     /* remove the TB from the hash list */
     phys_pc = tb_page_addr0(tb);
-    h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
+    h = tb_hash_func(phys_pc, (orig_cflags & CF_PCREL ? 0 : tb_pc(tb)),
                      tb->flags, orig_cflags, tb->trace_vcpu_dstate);
     if (!qht_remove(&tb_ctx.htable, tb, h)) {
         return;
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     tb_record(tb, p, p2);

     /* add in the hash table */
-    h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
+    h = tb_hash_func(phys_pc, (tb->cflags & CF_PCREL ? 0 : tb_pc(tb)),
                      tb->flags, tb->cflags, tb->trace_vcpu_dstate);
     qht_insert(&tb_ctx.htable, tb, h, &existing_tb);

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static int encode_search(TranslationBlock *tb, uint8_t *block)

         for (j = 0; j < TARGET_INSN_START_WORDS; ++j) {
             if (i == 0) {
-                prev = (!TARGET_TB_PCREL && j == 0 ? tb_pc(tb) : 0);
+                prev = (!(tb_cflags(tb) & CF_PCREL) && j == 0 ? tb_pc(tb) : 0);
             } else {
                 prev = tcg_ctx->gen_insn_data[i - 1][j];
             }
@@ -XXX,XX +XXX,XX @@ static int cpu_unwind_data_from_tb(TranslationBlock *tb, uintptr_t host_pc,
     }

     memset(data, 0, sizeof(uint64_t) * TARGET_INSN_START_WORDS);
-    if (!TARGET_TB_PCREL) {
+    if (!(tb_cflags(tb) & CF_PCREL)) {
         data[0] = tb_pc(tb);
     }

@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,

     gen_code_buf = tcg_ctx->code_gen_ptr;
     tb->tc.ptr = tcg_splitwx_to_rx(gen_code_buf);
-#if !TARGET_TB_PCREL
-    tb->pc = pc;
-#endif
+    if (!(cflags & CF_PCREL)) {
+        tb->pc = pc;
+    }
     tb->cs_base = cs_base;
     tb->flags = flags;
     tb->cflags = cflags;
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
     tb->tc.size = gen_code_size;

     /*
-     * For TARGET_TB_PCREL, attribute all executions of the generated
-     * code to its first mapping.
+     * For CF_PCREL, attribute all executions of the generated code
+     * to its first mapping.
      */
     perf_report_code(pc, tb, tcg_splitwx_to_rx(gen_code_buf));

--
2.34.1

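The pc/tb pairing in tb_jmp_cache_set() and tb_jmp_cache_get_tb() above
is a publish/consume pattern. A standalone sketch of the ordering
argument, using C11 atomics in place of QEMU's qatomic_* wrappers
(illustrative, not QEMU code):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stddef.h>

    struct entry {
        _Atomic(void *) tb;   /* publication flag: non-NULL means valid */
        uint64_t pc;          /* payload ordered by the tb store/load */
    };

    /* Writer: fill the payload first, then publish with release so the
     * pc store cannot be reordered after the tb store. */
    static void publish(struct entry *e, void *tb, uint64_t pc)
    {
        e->pc = pc;
        atomic_store_explicit(&e->tb, tb, memory_order_release);
    }

    /* Reader: load tb with acquire; if non-NULL, the matching release
     * guarantees the pc read below observes the writer's value. */
    static void *consume(struct entry *e, uint64_t *pc_out)
    {
        void *tb = atomic_load_explicit(&e->tb, memory_order_acquire);
        if (tb != NULL) {
            *pc_out = e->pc;
        }
        return tb;
    }

    int main(void)
    {
        struct entry e = { NULL, 0 };
        uint64_t pc = 0;
        publish(&e, &e, 0x1000);
        return (consume(&e, &pc) && pc == 0x1000) ? 0 : 1;
    }
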
Subject: include/exec: Replace `TARGET_TB_PCREL` with `CF_PCREL`

From: Anton Johansson <anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-6-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ struct tb_tc {
 };

 struct TranslationBlock {
-#if !TARGET_TB_PCREL
     /*
      * Guest PC corresponding to this block. This must be the true
      * virtual address. Therefore e.g. x86 stores EIP + CS_BASE, and
      * targets like Arm, MIPS, HP-PA, which reuse low bits for ISA or
      * privilege, must store those bits elsewhere.
      *
-     * If TARGET_TB_PCREL, the opcodes for the TranslationBlock are
-     * written such that the TB is associated only with the physical
-     * page and may be run in any virtual address context. In this case,
-     * PC must always be taken from ENV in a target-specific manner.
+     * If CF_PCREL, the opcodes for the TranslationBlock are written
+     * such that the TB is associated only with the physical page and
+     * may be run in any virtual address context. In this case, PC
+     * must always be taken from ENV in a target-specific manner.
      * Unwind information is taken as offsets from the page, to be
      * deposited into the "current" PC.
      */
     target_ulong pc;
-#endif

     /*
      * Target-specific data associated with the TranslationBlock, e.g.:
@@ -XXX,XX +XXX,XX @@ struct TranslationBlock {
     uintptr_t jmp_dest[2];
 };

-/* Hide the read to avoid ifdefs for TARGET_TB_PCREL. */
-static inline target_ulong tb_pc(const TranslationBlock *tb)
-{
-#if TARGET_TB_PCREL
-    qemu_build_not_reached();
-#else
-    return tb->pc;
-#endif
-}
-
 /* Hide the qatomic_read to make code a little easier on the eyes */
 static inline uint32_t tb_cflags(const TranslationBlock *tb)
 {
     return qatomic_read(&tb->cflags);
 }

+/* Hide the read to avoid ifdefs for CF_PCREL. */
+static inline target_ulong tb_pc(const TranslationBlock *tb)
+{
+    assert(!(tb_cflags(tb) & CF_PCREL));
+    return tb->pc;
+}
+
 static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb)
 {
 #ifdef CONFIG_USER_ONLY
--
2.34.1

New patch
1
From: Anton Johansson via <qemu-devel@nongnu.org>
1
2
3
Signed-off-by: Anton Johansson <anjo@rev.ng>
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Message-Id: <20230227135202.9710-7-anjo@rev.ng>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
target/arm/tcg/translate.h | 2 +-
9
target/arm/cpu.c | 8 ++++----
10
target/arm/tcg/translate-a64.c | 8 ++++----
11
target/arm/tcg/translate.c | 6 +++---
12
4 files changed, 12 insertions(+), 12 deletions(-)
13
14
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/tcg/translate.h
17
+++ b/target/arm/tcg/translate.h
18
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
19
/* The address of the current instruction being translated. */
20
target_ulong pc_curr;
21
/*
22
- * For TARGET_TB_PCREL, the full value of cpu_pc is not known
23
+ * For CF_PCREL, the full value of cpu_pc is not known
24
* (although the page offset is known). For convenience, the
25
* translation loop uses the full virtual address that triggered
26
* the translation, from base.pc_start through pc_curr.
27
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.c
30
+++ b/target/arm/cpu.c
31
@@ -XXX,XX +XXX,XX @@ static vaddr arm_cpu_get_pc(CPUState *cs)
32
void arm_cpu_synchronize_from_tb(CPUState *cs,
33
const TranslationBlock *tb)
34
{
35
- /* The program counter is always up to date with TARGET_TB_PCREL. */
36
- if (!TARGET_TB_PCREL) {
37
+ /* The program counter is always up to date with CF_PCREL. */
38
+ if (!(tb_cflags(tb) & CF_PCREL)) {
39
CPUARMState *env = cs->env_ptr;
40
/*
41
* It's OK to look at env for the current mode here, because it's
42
@@ -XXX,XX +XXX,XX @@ void arm_restore_state_to_opc(CPUState *cs,
43
CPUARMState *env = cs->env_ptr;
44
45
if (is_a64(env)) {
46
- if (TARGET_TB_PCREL) {
47
+ if (tb_cflags(tb) & CF_PCREL) {
48
env->pc = (env->pc & TARGET_PAGE_MASK) | data[0];
49
} else {
50
env->pc = data[0];
51
@@ -XXX,XX +XXX,XX @@ void arm_restore_state_to_opc(CPUState *cs,
52
env->condexec_bits = 0;
53
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
54
} else {
55
- if (TARGET_TB_PCREL) {
56
+ if (tb_cflags(tb) & CF_PCREL) {
57
env->regs[15] = (env->regs[15] & TARGET_PAGE_MASK) | data[0];
58
} else {
59
env->regs[15] = data[0];
60
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/tcg/translate-a64.c
63
+++ b/target/arm/tcg/translate-a64.c
64
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
65
static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
66
{
67
assert(s->pc_save != -1);
68
- if (TARGET_TB_PCREL) {
69
+ if (tb_cflags(s->base.tb) & CF_PCREL) {
70
tcg_gen_addi_i64(dest, cpu_pc, (s->pc_curr - s->pc_save) + diff);
71
} else {
72
tcg_gen_movi_i64(dest, s->pc_curr + diff);
73
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
74
* update to pc to the unlinked path. A long chain of links
75
* can thus avoid many updates to the PC.
76
*/
77
- if (TARGET_TB_PCREL) {
78
+ if (tb_cflags(s->base.tb) & CF_PCREL) {
79
gen_a64_update_pc(s, diff);
80
tcg_gen_goto_tb(n);
81
} else {
82
@@ -XXX,XX +XXX,XX @@ static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
83
if (page) {
84
/* ADRP (page based) */
85
offset <<= 12;
86
- /* The page offset is ok for TARGET_TB_PCREL. */
87
+ /* The page offset is ok for CF_PCREL. */
88
offset -= s->pc_curr & 0xfff;
89
}
90
91
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
92
DisasContext *dc = container_of(dcbase, DisasContext, base);
93
target_ulong pc_arg = dc->base.pc_next;
94
95
- if (TARGET_TB_PCREL) {
96
+ if (tb_cflags(dcbase->tb) & CF_PCREL) {
97
pc_arg &= ~TARGET_PAGE_MASK;
98
}
99
tcg_gen_insn_start(pc_arg, 0, 0);
100
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/target/arm/tcg/translate.c
103
+++ b/target/arm/tcg/translate.c
104
@@ -XXX,XX +XXX,XX @@ static target_long jmp_diff(DisasContext *s, target_long diff)
105
static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
106
{
107
assert(s->pc_save != -1);
108
- if (TARGET_TB_PCREL) {
109
+ if (tb_cflags(s->base.tb) & CF_PCREL) {
110
tcg_gen_addi_i32(var, cpu_R[15], (s->pc_curr - s->pc_save) + diff);
111
} else {
112
tcg_gen_movi_i32(var, s->pc_curr + diff);
113
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, target_long diff)
114
* update to pc to the unlinked path. A long chain of links
115
* can thus avoid many updates to the PC.
116
*/
117
- if (TARGET_TB_PCREL) {
118
+ if (tb_cflags(s->base.tb) & CF_PCREL) {
119
gen_update_pc(s, diff);
120
tcg_gen_goto_tb(n);
121
} else {
122
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
123
uint32_t condexec_bits;
124
target_ulong pc_arg = dc->base.pc_next;
125
126
- if (TARGET_TB_PCREL) {
127
+ if (tb_cflags(dcbase->tb) & CF_PCREL) {
128
pc_arg &= ~TARGET_PAGE_MASK;
129
}
130
if (dc->eci) {
131
--
132
2.34.1
133
134
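gen_pc_plus_diff() relies on the translator's pc_save bookkeeping:
instead of an absolute move into the PC register, it emits an add of
(pc_curr - pc_save) + diff, so the generated code never contains an
absolute PC and stays valid wherever the page is mapped. A standalone
sketch of the arithmetic (illustrative values, not TCG code):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cpu_pc  = 0x40001000;  /* runtime value of the guest PC */
        uint64_t pc_save = 0x40001000;  /* last value the TB wrote to it */
        uint64_t pc_curr = 0x40001008;  /* insn currently being translated */
        int64_t  diff    = 4;           /* branch offset from pc_curr */

        /* Equivalent of:
         *   tcg_gen_addi_i64(cpu_pc, cpu_pc, (pc_curr - pc_save) + diff);
         * The delta is a translation-time constant, so no absolute
         * address is baked into the translated code. */
        cpu_pc += (pc_curr - pc_save) + diff;
        printf("pc = %#" PRIx64 "\n", cpu_pc);   /* 0x4000100c */
        return 0;
    }
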
Subject: target/i386: Replace `TARGET_TB_PCREL` with `CF_PCREL`

From: Anton Johansson <anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-8-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/helper.c        |  2 +-
 target/i386/tcg/tcg-cpu.c   |  6 +++---
 target/i386/tcg/translate.c | 26 +++++++++++++-------------
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/target/i386/helper.c b/target/i386/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -XXX,XX +XXX,XX @@ static inline target_ulong get_memio_eip(CPUX86State *env)
     }

     /* Per x86_restore_state_to_opc. */
-    if (TARGET_TB_PCREL) {
+    if (cs->tcg_cflags & CF_PCREL) {
         return (env->eip & TARGET_PAGE_MASK) | data[0];
     } else {
         return data[0] - env->segs[R_CS].base;
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_exec_exit(CPUState *cs)
 static void x86_cpu_synchronize_from_tb(CPUState *cs,
                                         const TranslationBlock *tb)
 {
-    /* The instruction pointer is always up to date with TARGET_TB_PCREL. */
-    if (!TARGET_TB_PCREL) {
+    /* The instruction pointer is always up to date with CF_PCREL. */
+    if (!(tb_cflags(tb) & CF_PCREL)) {
         CPUX86State *env = cs->env_ptr;
         env->eip = tb_pc(tb) - tb->cs_base;
     }
@@ -XXX,XX +XXX,XX @@ static void x86_restore_state_to_opc(CPUState *cs,
     CPUX86State *env = &cpu->env;
     int cc_op = data[1];

-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(tb) & CF_PCREL) {
         env->eip = (env->eip & TARGET_PAGE_MASK) | data[0];
     } else {
         env->eip = data[0] - tb->cs_base;
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_op_st_rm_T0_A0(DisasContext *s, int idx, int d)
 static void gen_update_eip_cur(DisasContext *s)
 {
     assert(s->pc_save != -1);
-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(s->base.tb) & CF_PCREL) {
         tcg_gen_addi_tl(cpu_eip, cpu_eip, s->base.pc_next - s->pc_save);
     } else {
         tcg_gen_movi_tl(cpu_eip, s->base.pc_next - s->cs_base);
@@ -XXX,XX +XXX,XX @@ static void gen_update_eip_cur(DisasContext *s)
 static void gen_update_eip_next(DisasContext *s)
 {
     assert(s->pc_save != -1);
-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(s->base.tb) & CF_PCREL) {
         tcg_gen_addi_tl(cpu_eip, cpu_eip, s->pc - s->pc_save);
     } else {
         tcg_gen_movi_tl(cpu_eip, s->pc - s->cs_base);
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 eip_next_i32(DisasContext *s)
     if (CODE64(s)) {
         return tcg_constant_i32(-1);
     }
-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(s->base.tb) & CF_PCREL) {
         TCGv_i32 ret = tcg_temp_new_i32();
         tcg_gen_trunc_tl_i32(ret, cpu_eip);
         tcg_gen_addi_i32(ret, ret, s->pc - s->pc_save);
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 eip_next_i32(DisasContext *s)
 static TCGv eip_next_tl(DisasContext *s)
 {
     assert(s->pc_save != -1);
-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(s->base.tb) & CF_PCREL) {
         TCGv ret = tcg_temp_new();
         tcg_gen_addi_tl(ret, cpu_eip, s->pc - s->pc_save);
         return ret;
@@ -XXX,XX +XXX,XX @@ static TCGv eip_next_tl(DisasContext *s)
 static TCGv eip_cur_tl(DisasContext *s)
 {
     assert(s->pc_save != -1);
-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(s->base.tb) & CF_PCREL) {
         TCGv ret = tcg_temp_new();
         tcg_gen_addi_tl(ret, cpu_eip, s->base.pc_next - s->pc_save);
         return ret;
@@ -XXX,XX +XXX,XX @@ static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
     tcg_temp_free_i32(t0);
     tcg_temp_free_i32(t1);

-    /* The CC_OP value is no longer predictable.  */
+    /* The CC_OP value is no longer predictable. */
     set_cc_op(s, CC_OP_DYNAMIC);
 }

@@ -XXX,XX +XXX,XX @@ static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
         gen_op_ld_v(s, ot, s->T0, s->A0);
     else
         gen_op_mov_v_reg(s, ot, s->T0, op1);
-
+
     if (is_right) {
         switch (ot) {
         case MO_8:
@@ -XXX,XX +XXX,XX @@ static TCGv gen_lea_modrm_1(DisasContext *s, AddressParts a, bool is_vsib)
         ea = cpu_regs[a.base];
     }
     if (!ea) {
-        if (TARGET_TB_PCREL && a.base == -2) {
+        if (tb_cflags(s->base.tb) & CF_PCREL && a.base == -2) {
             /* With cpu_eip ~= pc_save, the expression is pc-relative. */
             tcg_gen_addi_tl(s->A0, cpu_eip, a.disp - s->pc_save);
         } else {
@@ -XXX,XX +XXX,XX @@ static void gen_jmp_rel(DisasContext *s, MemOp ot, int diff, int tb_num)
     if (!CODE64(s)) {
         if (ot == MO_16) {
             mask = 0xffff;
-            if (TARGET_TB_PCREL && CODE32(s)) {
+            if (tb_cflags(s->base.tb) & CF_PCREL && CODE32(s)) {
                 use_goto_tb = false;
             }
         } else {
@@ -XXX,XX +XXX,XX @@ static void gen_jmp_rel(DisasContext *s, MemOp ot, int diff, int tb_num)
         gen_update_cc_op(s);
         set_cc_op(s, CC_OP_DYNAMIC);

-        if (TARGET_TB_PCREL) {
+        if (tb_cflags(s->base.tb) & CF_PCREL) {
             tcg_gen_addi_tl(cpu_eip, cpu_eip, new_pc - s->pc_save);
             /*
              * If we can prove the branch does not leave the page and we have
@@ -XXX,XX +XXX,XX @@ static void gen_jmp_rel(DisasContext *s, MemOp ot, int diff, int tb_num)
                translator_use_goto_tb(&s->base, new_eip + s->cs_base)) {
         /* jump to same page: we can use a direct jump */
         tcg_gen_goto_tb(tb_num);
-        if (!TARGET_TB_PCREL) {
+        if (!(tb_cflags(s->base.tb) & CF_PCREL)) {
             tcg_gen_movi_tl(cpu_eip, new_eip);
         }
         tcg_gen_exit_tb(s->base.tb, tb_num);
         s->base.is_jmp = DISAS_NORETURN;
     } else {
-        if (!TARGET_TB_PCREL) {
+        if (!(tb_cflags(s->base.tb) & CF_PCREL)) {
             tcg_gen_movi_tl(cpu_eip, new_eip);
         }
         if (s->jmp_opt) {
@@ -XXX,XX +XXX,XX @@ static void i386_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
     target_ulong pc_arg = dc->base.pc_next;

     dc->prev_insn_end = tcg_last_op();
-    if (TARGET_TB_PCREL) {
+    if (tb_cflags(dcbase->tb) & CF_PCREL) {
         pc_arg -= dc->cs_base;
         pc_arg &= ~TARGET_PAGE_MASK;
     }
--
2.34.1

Subject: include/exec: Remove `TARGET_TB_PCREL` define

From: Anton Johansson <anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-9-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-defs.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@
 # error TARGET_PAGE_BITS must be defined in cpu-param.h
 # endif
 #endif
-#ifndef TARGET_TB_PCREL
-# define TARGET_TB_PCREL 0
-#endif

 #define TARGET_LONG_SIZE (TARGET_LONG_BITS / 8)

--
2.34.1

diff view generated by jsdifflib
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-10-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/cpu-param.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
# define TARGET_PAGE_BITS_VARY
# define TARGET_PAGE_BITS_MIN 10

-# define TARGET_TB_PCREL 1
-
/*
* Cache the attrs and shareability fields from the page table entry.
*
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-11-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/i386/cpu-param.h | 4 ----
1 file changed, 4 deletions(-)

diff --git a/target/i386/cpu-param.h b/target/i386/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu-param.h
+++ b/target/i386/cpu-param.h
@@ -XXX,XX +XXX,XX @@
#define TARGET_PAGE_BITS 12
#define NB_MMU_MODES 5

-#ifndef CONFIG_USER_ONLY
-# define TARGET_TB_PCREL 1
-#endif
-
#endif
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

tb-jmp-cache.h contains a few small functions that only exist to hide a
CF_PCREL check; however, the caller often already performs such a check.

This patch moves CF_PCREL checks from the callee to the caller, and also
removes these functions, which now only hide an access of the jmp-cache.
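The subtle part that the callers now have to spell out is the
release/acquire pairing between the pc and tb fields of a jump-cache
entry. A minimal self-contained sketch of that protocol (Entry, publish
and lookup are illustrative names here, not the QEMU API):

    #include <stdint.h>

    /* Two-word cache entry published with store-release and read with
     * load-acquire, mirroring the CF_PCREL path in the diff below.
     */
    typedef struct {
        uint64_t pc;
        void *tb;
    } Entry;

    static void publish(Entry *e, uint64_t pc, void *tb)
    {
        e->pc = pc;
        /* Release: the store to 'pc' is visible before 'tb'. */
        __atomic_store_n(&e->tb, tb, __ATOMIC_RELEASE);
    }

    static void *lookup(Entry *e, uint64_t pc)
    {
        /* Acquire: if we observe 'tb', we also observe the matching 'pc'. */
        void *tb = __atomic_load_n(&e->tb, __ATOMIC_ACQUIRE);
        return (tb != NULL && e->pc == pc) ? tb : NULL;
    }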
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-12-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/tb-jmp-cache.h | 36 ---------------------------
accel/tcg/cpu-exec.c | 54 +++++++++++++++++++++++++++++-----------
2 files changed, 40 insertions(+), 50 deletions(-)

diff --git a/accel/tcg/tb-jmp-cache.h b/accel/tcg/tb-jmp-cache.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tb-jmp-cache.h
+++ b/accel/tcg/tb-jmp-cache.h
@@ -XXX,XX +XXX,XX @@ struct CPUJumpCache {
} array[TB_JMP_CACHE_SIZE];
};

-static inline TranslationBlock *
-tb_jmp_cache_get_tb(CPUJumpCache *jc, uint32_t cflags, uint32_t hash)
-{
- if (cflags & CF_PCREL) {
- /* Use acquire to ensure current load of pc from jc. */
- return qatomic_load_acquire(&jc->array[hash].tb);
- } else {
- /* Use rcu_read to ensure current load of pc from *tb. */
- return qatomic_rcu_read(&jc->array[hash].tb);
- }
-}
-
-static inline target_ulong
-tb_jmp_cache_get_pc(CPUJumpCache *jc, uint32_t hash, TranslationBlock *tb)
-{
- if (tb_cflags(tb) & CF_PCREL) {
- return jc->array[hash].pc;
- } else {
- return tb_pc(tb);
- }
-}
-
-static inline void
-tb_jmp_cache_set(CPUJumpCache *jc, uint32_t hash,
- TranslationBlock *tb, target_ulong pc)
-{
- if (tb_cflags(tb) & CF_PCREL) {
- jc->array[hash].pc = pc;
- /* Use store_release on tb to ensure pc is written first. */
- qatomic_store_release(&jc->array[hash].tb, tb);
- } else{
- /* Use the pc value already stored in tb->pc. */
- qatomic_set(&jc->array[hash].tb, tb);
- }
-}
-
#endif /* ACCEL_TCG_TB_JMP_CACHE_H */
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,

hash = tb_jmp_cache_hash_func(pc);
jc = cpu->tb_jmp_cache;
- tb = tb_jmp_cache_get_tb(jc, cflags, hash);

- if (likely(tb &&
- tb_jmp_cache_get_pc(jc, hash, tb) == pc &&
- tb->cs_base == cs_base &&
- tb->flags == flags &&
- tb->trace_vcpu_dstate == *cpu->trace_dstate &&
- tb_cflags(tb) == cflags)) {
- return tb;
+ if (cflags & CF_PCREL) {
+ /* Use acquire to ensure current load of pc from jc. */
+ tb = qatomic_load_acquire(&jc->array[hash].tb);
+
+ if (likely(tb &&
+ jc->array[hash].pc == pc &&
+ tb->cs_base == cs_base &&
+ tb->flags == flags &&
+ tb->trace_vcpu_dstate == *cpu->trace_dstate &&
+ tb_cflags(tb) == cflags)) {
+ return tb;
+ }
+ tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
+ if (tb == NULL) {
+ return NULL;
+ }
+ jc->array[hash].pc = pc;
+ /* Use store_release on tb to ensure pc is written first. */
+ qatomic_store_release(&jc->array[hash].tb, tb);
+ } else {
+ /* Use rcu_read to ensure current load of pc from *tb. */
+ tb = qatomic_rcu_read(&jc->array[hash].tb);
+
+ if (likely(tb &&
+ tb_pc(tb) == pc &&
+ tb->cs_base == cs_base &&
+ tb->flags == flags &&
+ tb->trace_vcpu_dstate == *cpu->trace_dstate &&
+ tb_cflags(tb) == cflags)) {
+ return tb;
+ }
+ tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
+ if (tb == NULL) {
+ return NULL;
+ }
+ /* Use the pc value already stored in tb->pc. */
+ qatomic_set(&jc->array[hash].tb, tb);
}
- tb = tb_htable_lookup(cpu, pc, cs_base, flags, cflags);
- if (tb == NULL) {
- return NULL;
- }
- tb_jmp_cache_set(jc, hash, tb, pc);
+
return tb;
}

@@ -XXX,XX +XXX,XX @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
* for the fast lookup
*/
h = tb_jmp_cache_hash_func(pc);
- tb_jmp_cache_set(cpu->tb_jmp_cache, h, tb, pc);
+ /* Use the pc value already stored in tb->pc. */
+ qatomic_set(&cpu->tb_jmp_cache->array[h].tb, tb);
}

#ifndef CONFIG_USER_ONLY
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-13-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/internal.h | 2 +-
accel/tcg/cpu-exec.c | 6 +++---
accel/tcg/tb-maint.c | 8 ++++----
accel/tcg/translate-all.c | 4 ++--
4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -XXX,XX +XXX,XX @@ static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
if (tb_cflags(tb) & CF_PCREL) {
return cpu->cc->get_pc(cpu);
} else {
- return tb_pc(tb);
+ return tb->pc;
}
}

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
const TranslationBlock *tb = p;
const struct tb_desc *desc = d;

- if ((tb_cflags(tb) & CF_PCREL || tb_pc(tb) == desc->pc) &&
+ if ((tb_cflags(tb) & CF_PCREL || tb->pc == desc->pc) &&
tb_page_addr0(tb) == desc->page_addr0 &&
tb->cs_base == desc->cs_base &&
tb->flags == desc->flags &&
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
tb = qatomic_rcu_read(&jc->array[hash].tb);

if (likely(tb &&
- tb_pc(tb) == pc &&
+ tb->pc == pc &&
tb->cs_base == cs_base &&
tb->flags == flags &&
tb->trace_vcpu_dstate == *cpu->trace_dstate &&
@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
} else {
tcg_debug_assert(!(tb_cflags(last_tb) & CF_PCREL));
assert(cc->set_pc);
- cc->set_pc(cpu, tb_pc(last_tb));
+ cc->set_pc(cpu, last_tb->pc);
}
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
target_ulong pc = log_pc(cpu, last_tb);
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -XXX,XX +XXX,XX @@ static bool tb_cmp(const void *ap, const void *bp)
const TranslationBlock *a = ap;
const TranslationBlock *b = bp;

- return ((tb_cflags(a) & CF_PCREL || tb_pc(a) == tb_pc(b)) &&
+ return ((tb_cflags(a) & CF_PCREL || a->pc == b->pc) &&
a->cs_base == b->cs_base &&
a->flags == b->flags &&
(tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
@@ -XXX,XX +XXX,XX @@ static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
tcg_flush_jmp_cache(cpu);
}
} else {
- uint32_t h = tb_jmp_cache_hash_func(tb_pc(tb));
+ uint32_t h = tb_jmp_cache_hash_func(tb->pc);

CPU_FOREACH(cpu) {
CPUJumpCache *jc = cpu->tb_jmp_cache;
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)

/* remove the TB from the hash list */
phys_pc = tb_page_addr0(tb);
- h = tb_hash_func(phys_pc, (orig_cflags & CF_PCREL ? 0 : tb_pc(tb)),
+ h = tb_hash_func(phys_pc, (orig_cflags & CF_PCREL ? 0 : tb->pc),
tb->flags, orig_cflags, tb->trace_vcpu_dstate);
if (!qht_remove(&tb_ctx.htable, tb, h)) {
return;
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_record(tb, p, p2);

/* add in the hash table */
- h = tb_hash_func(phys_pc, (tb->cflags & CF_PCREL ? 0 : tb_pc(tb)),
+ h = tb_hash_func(phys_pc, (tb->cflags & CF_PCREL ? 0 : tb->pc),
tb->flags, tb->cflags, tb->trace_vcpu_dstate);
qht_insert(&tb_ctx.htable, tb, h, &existing_tb);

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static int encode_search(TranslationBlock *tb, uint8_t *block)

for (j = 0; j < TARGET_INSN_START_WORDS; ++j) {
if (i == 0) {
- prev = (!(tb_cflags(tb) & CF_PCREL) && j == 0 ? tb_pc(tb) : 0);
+ prev = (!(tb_cflags(tb) & CF_PCREL) && j == 0 ? tb->pc : 0);
} else {
prev = tcg_ctx->gen_insn_data[i - 1][j];
}
@@ -XXX,XX +XXX,XX @@ static int cpu_unwind_data_from_tb(TranslationBlock *tb, uintptr_t host_pc,

memset(data, 0, sizeof(uint64_t) * TARGET_INSN_START_WORDS);
if (!(tb_cflags(tb) & CF_PCREL)) {
- data[0] = tb_pc(tb);
+ data[0] = tb->pc;
}

/*
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-14-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/tricore/cpu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/cpu.c
+++ b/target/tricore/cpu.c
@@ -XXX,XX +XXX,XX @@ static void tricore_cpu_synchronize_from_tb(CPUState *cs,
TriCoreCPU *cpu = TRICORE_CPU(cs);
CPUTriCoreState *env = &cpu->env;

- env->PC = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ env->PC = tb->pc;
}

static void tricore_restore_state_to_opc(CPUState *cs,
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-15-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/sparc/cpu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "exec/exec-all.h"
#include "hw/qdev-properties.h"
#include "qapi/visitor.h"
+#include "tcg/tcg.h"

//#define DEBUG_FEATURES

@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_synchronize_from_tb(CPUState *cs,
{
SPARCCPU *cpu = SPARC_CPU(cs);

- cpu->env.pc = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ cpu->env.pc = tb->pc;
cpu->env.npc = tb->cs_base;
}

--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-16-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/sh4/cpu.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "migration/vmstate.h"
#include "exec/exec-all.h"
#include "fpu/softfloat-helpers.h"
+#include "tcg/tcg.h"

static void superh_cpu_set_pc(CPUState *cs, vaddr value)
{
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_synchronize_from_tb(CPUState *cs,
{
SuperHCPU *cpu = SUPERH_CPU(cs);

- cpu->env.pc = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ cpu->env.pc = tb->pc;
cpu->env.flags = tb->flags & TB_FLAG_ENVFLAGS_MASK;
}

@@ -XXX,XX +XXX,XX @@ static bool superh_io_recompile_replay_branch(CPUState *cs,
CPUSH4State *env = &cpu->env;

if ((env->flags & (TB_FLAG_DELAY_SLOT | TB_FLAG_DELAY_SLOT_COND))
- && env->pc != tb_pc(tb)) {
+ && !(cs->tcg_cflags & CF_PCREL) && env->pc != tb->pc) {
env->pc -= 2;
env->flags &= ~(TB_FLAG_DELAY_SLOT | TB_FLAG_DELAY_SLOT_COND);
return true;
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-17-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/rx/cpu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_synchronize_from_tb(CPUState *cs,
{
RXCPU *cpu = RX_CPU(cs);

- cpu->env.pc = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ cpu->env.pc = tb->pc;
}

static void rx_restore_state_to_opc(CPUState *cs,
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-18-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/riscv/cpu.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "fpu/softfloat-helpers.h"
#include "sysemu/kvm.h"
#include "kvm_riscv.h"
+#include "tcg/tcg.h"

/* RISC-V CPU definitions */

@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_synchronize_from_tb(CPUState *cs,
CPURISCVState *env = &cpu->env;
RISCVMXL xl = FIELD_EX32(tb->flags, TB_FLAGS, XL);

+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+
if (xl == MXL_RV32) {
- env->pc = (int32_t)tb_pc(tb);
+ env->pc = (int32_t) tb->pc;
} else {
- env->pc = tb_pc(tb);
+ env->pc = tb->pc;
}
}

--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-19-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/openrisc/cpu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/qemu-print.h"
#include "cpu.h"
#include "exec/exec-all.h"
+#include "tcg/tcg.h"

static void openrisc_cpu_set_pc(CPUState *cs, vaddr value)
{
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_synchronize_from_tb(CPUState *cs,
{
OpenRISCCPU *cpu = OPENRISC_CPU(cs);

- cpu->env.pc = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ cpu->env.pc = tb->pc;
}

static void openrisc_restore_state_to_opc(CPUState *cs,
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-20-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/mips/tcg/exception.c | 3 ++-
target/mips/tcg/sysemu/special_helper.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/tcg/exception.c b/target/mips/tcg/exception.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/exception.c
+++ b/target/mips/tcg/exception.c
@@ -XXX,XX +XXX,XX @@ void mips_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb)
MIPSCPU *cpu = MIPS_CPU(cs);
CPUMIPSState *env = &cpu->env;

- env->active_tc.PC = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ env->active_tc.PC = tb->pc;
env->hflags &= ~MIPS_HFLAG_BMASK;
env->hflags |= tb->flags & MIPS_HFLAG_BMASK;
}
diff --git a/target/mips/tcg/sysemu/special_helper.c b/target/mips/tcg/sysemu/special_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/sysemu/special_helper.c
+++ b/target/mips/tcg/sysemu/special_helper.c
@@ -XXX,XX +XXX,XX @@ bool mips_io_recompile_replay_branch(CPUState *cs, const TranslationBlock *tb)
CPUMIPSState *env = &cpu->env;

if ((env->hflags & MIPS_HFLAG_BMASK) != 0
- && env->active_tc.PC != tb_pc(tb)) {
+ && !(cs->tcg_cflags & CF_PCREL) && env->active_tc.PC != tb->pc) {
env->active_tc.PC -= (env->hflags & MIPS_HFLAG_B16 ? 2 : 4);
env->hflags &= ~MIPS_HFLAG_BMASK;
return true;
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-21-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/microblaze/cpu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "exec/exec-all.h"
#include "exec/gdbstub.h"
#include "fpu/softfloat-helpers.h"
+#include "tcg/tcg.h"

static const struct {
const char *name;
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_synchronize_from_tb(CPUState *cs,
{
MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);

- cpu->env.pc = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ cpu->env.pc = tb->pc;
cpu->env.iflags = tb->flags & IFLAGS_TB_MASK;
}

--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-22-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/loongarch/cpu.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/cpu.c
+++ b/target/loongarch/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "fpu/softfloat-helpers.h"
#include "cpu-csr.h"
#include "sysemu/reset.h"
+#include "tcg/tcg.h"

const char * const regnames[32] = {
"r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
@@ -XXX,XX +XXX,XX @@ static void loongarch_cpu_synchronize_from_tb(CPUState *cs,
LoongArchCPU *cpu = LOONGARCH_CPU(cs);
CPULoongArchState *env = &cpu->env;

- env->pc = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ env->pc = tb->pc;
}

static void loongarch_restore_state_to_opc(CPUState *cs,
@@ -XXX,XX +XXX,XX @@ static ObjectClass *loongarch_cpu_class_by_name(const char *cpu_model)

oc = object_class_by_name(cpu_model);
if (!oc) {
- g_autofree char *typename
+ g_autofree char *typename
= g_strdup_printf(LOONGARCH_CPU_TYPE_NAME("%s"), cpu_model);
oc = object_class_by_name(typename);
if (!oc) {
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-23-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/i386/tcg/tcg-cpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_synchronize_from_tb(CPUState *cs,
/* The instruction pointer is always up to date with CF_PCREL. */
if (!(tb_cflags(tb) & CF_PCREL)) {
CPUX86State *env = cs->env_ptr;
- env->eip = tb_pc(tb) - tb->cs_base;
+ env->eip = tb->pc - tb->cs_base;
}
}

--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-24-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/hppa/cpu.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/module.h"
#include "exec/exec-all.h"
#include "fpu/softfloat.h"
-
+#include "tcg/tcg.h"

static void hppa_cpu_set_pc(CPUState *cs, vaddr value)
{
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
{
HPPACPU *cpu = HPPA_CPU(cs);

+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+
#ifdef CONFIG_USER_ONLY
- cpu->env.iaoq_f = tb_pc(tb);
+ cpu->env.iaoq_f = tb->pc;
cpu->env.iaoq_b = tb->cs_base;
#else
/* Recover the IAOQ values from the GVA + PRIV. */
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
int32_t diff = cs_base;

cpu->env.iasq_f = iasq_f;
- cpu->env.iaoq_f = (tb_pc(tb) & ~iasq_f) + priv;
+ cpu->env.iaoq_f = (tb->pc & ~iasq_f) + priv;
if (diff) {
cpu->env.iaoq_b = cpu->env.iaoq_f + diff;
}
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Message-Id: <20230227135202.9710-25-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/hexagon/cpu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -XXX,XX +XXX,XX @@
#include "qapi/error.h"
#include "hw/qdev-properties.h"
#include "fpu/softfloat-helpers.h"
+#include "tcg/tcg.h"

static void hexagon_v67_cpu_init(Object *obj)
{
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_synchronize_from_tb(CPUState *cs,
{
HexagonCPU *cpu = HEXAGON_CPU(cs);
CPUHexagonState *env = &cpu->env;
- env->gpr[HEX_REG_PC] = tb_pc(tb);
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ env->gpr[HEX_REG_PC] = tb->pc;
}

static bool hexagon_cpu_has_work(CPUState *cs)
--
2.34.1

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-26-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/avr/cpu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/avr/cpu.c b/target/avr/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/avr/cpu.c
+++ b/target/avr/cpu.c
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_synchronize_from_tb(CPUState *cs,
AVRCPU *cpu = AVR_CPU(cs);
CPUAVRState *env = &cpu->env;

- env->pc_w = tb_pc(tb) / 2; /* internally PC points to words */
+ tcg_debug_assert(!(cs->tcg_cflags & CF_PCREL));
+ env->pc_w = tb->pc / 2; /* internally PC points to words */
}

static void avr_restore_state_to_opc(CPUState *cs,
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-27-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/arm/cpu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_synchronize_from_tb(CPUState *cs,
* never possible for an AArch64 TB to chain to an AArch32 TB.
*/
if (is_a64(env)) {
- env->pc = tb_pc(tb);
+ env->pc = tb->pc;
} else {
- env->regs[15] = tb_pc(tb);
+ env->regs[15] = tb->pc;
}
}
}
--
2.34.1
New patch

From: Anton Johansson via <qemu-devel@nongnu.org>

Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230227135202.9710-28-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/exec-all.h | 7 -------
1 file changed, 7 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t tb_cflags(const TranslationBlock *tb)
return qatomic_read(&tb->cflags);
}

-/* Hide the read to avoid ifdefs for CF_PCREL. */
-static inline target_ulong tb_pc(const TranslationBlock *tb)
-{
- assert(!(tb_cflags(tb) & CF_PCREL));
- return tb->pc;
-}
-
static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb)
{
#ifdef CONFIG_USER_ONLY
--
2.34.1
New patch

Change the temps_in_use check to use assert not fprintf.
Move the assert for double-free before the check for count,
since that is the more immediate problem.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ void tcg_temp_free_internal(TCGTemp *ts)
g_assert_not_reached();
}

-#if defined(CONFIG_DEBUG_TCG)
- s->temps_in_use--;
- if (s->temps_in_use < 0) {
- fprintf(stderr, "More temporaries freed than allocated!\n");
- }
-#endif
-
tcg_debug_assert(ts->temp_allocated != 0);
ts->temp_allocated = 0;

+#if defined(CONFIG_DEBUG_TCG)
+ assert(s->temps_in_use > 0);
+ s->temps_in_use--;
+#endif
+
idx = temp_idx(ts);
k = ts->base_type + (ts->kind == TEMP_NORMAL ? 0 : TCG_TYPE_COUNT);
set_bit(idx, s->free_temps[k].l);
--
2.34.1
New patch

In preparation for returning the number of insns generated
via the same pointer. Adjust only the prototypes so far.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/translator.h | 4 ++--
accel/tcg/translate-all.c | 2 +-
accel/tcg/translator.c | 4 ++--
target/alpha/translate.c | 2 +-
target/arm/tcg/translate.c | 2 +-
target/avr/translate.c | 2 +-
target/cris/translate.c | 2 +-
target/hexagon/translate.c | 2 +-
target/hppa/translate.c | 2 +-
target/i386/tcg/translate.c | 2 +-
target/loongarch/translate.c | 2 +-
target/m68k/translate.c | 2 +-
target/microblaze/translate.c | 2 +-
target/mips/tcg/translate.c | 2 +-
target/nios2/translate.c | 2 +-
target/openrisc/translate.c | 2 +-
target/ppc/translate.c | 2 +-
target/riscv/translate.c | 2 +-
target/rx/translate.c | 2 +-
target/s390x/tcg/translate.c | 2 +-
target/sh4/translate.c | 2 +-
target/sparc/translate.c | 2 +-
target/tricore/translate.c | 2 +-
target/xtensa/translate.c | 2 +-
24 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/exec/translator.h b/include/exec/translator.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/translator.h
+++ b/include/exec/translator.h
@@ -XXX,XX +XXX,XX @@
* This function must be provided by the target, which should create
* the target-specific DisasContext, and then invoke translator_loop.
*/
-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc);

/**
@@ -XXX,XX +XXX,XX @@ typedef struct TranslatorOps {
* - When single-stepping is enabled (system-wide or on the current vCPU).
* - When too many instructions have been translated.
*/
-void translator_loop(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc,
const TranslatorOps *ops, DisasContextBase *db);

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static int setjmp_gen_code(CPUArchState *env, TranslationBlock *tb,
tcg_func_start(tcg_ctx);

tcg_ctx->cpu = env_cpu(env);
- gen_intermediate_code(env_cpu(env), tb, *max_insns, pc, host_pc);
+ gen_intermediate_code(env_cpu(env), tb, max_insns, pc, host_pc);
assert(tb->size != 0);
tcg_ctx->cpu = NULL;
*max_insns = tb->icount;
diff --git a/accel/tcg/translator.c b/accel/tcg/translator.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translator.c
+++ b/accel/tcg/translator.c
@@ -XXX,XX +XXX,XX @@ bool translator_use_goto_tb(DisasContextBase *db, target_ulong dest)
return ((db->pc_first ^ dest) & TARGET_PAGE_MASK) == 0;
}

-void translator_loop(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc,
const TranslatorOps *ops, DisasContextBase *db)
{
@@ -XXX,XX +XXX,XX @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int max_insns,
db->pc_next = pc;
db->is_jmp = DISAS_NEXT;
db->num_insns = 0;
- db->max_insns = max_insns;
+ db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->host_addr[0] = host_pc;
db->host_addr[1] = NULL;
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps alpha_tr_ops = {
.disas_log = alpha_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.c
+++ b/target/arm/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps thumb_translator_ops = {
};

/* generate intermediate code for basic block 'tb'. */
-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc = { };
diff --git a/target/avr/translate.c b/target/avr/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/avr/translate.c
+++ b/target/avr/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps avr_tr_ops = {
.disas_log = avr_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc = { };
diff --git a/target/cris/translate.c b/target/cris/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/cris/translate.c
+++ b/target/cris/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps cris_tr_ops = {
.disas_log = cris_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/hexagon/translate.c b/target/hexagon/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/translate.c
+++ b/target/hexagon/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps hexagon_tr_ops = {
.disas_log = hexagon_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps hppa_tr_ops = {
.disas_log = hppa_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps i386_tr_ops = {
};

/* generate intermediate code for basic block 'tb'. */
-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/loongarch/translate.c b/target/loongarch/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/translate.c
+++ b/target/loongarch/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps loongarch_tr_ops = {
.disas_log = loongarch_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps m68k_tr_ops = {
.disas_log = m68k_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps mb_tr_ops = {
.disas_log = mb_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/translate.c
+++ b/target/mips/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps mips_tr_ops = {
.disas_log = mips_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/nios2/translate.c b/target/nios2/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/nios2/translate.c
+++ b/target/nios2/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps nios2_tr_ops = {
.disas_log = nios2_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps openrisc_tr_ops = {
.disas_log = openrisc_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps ppc_tr_ops = {
.disas_log = ppc_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps riscv_tr_ops = {
.disas_log = riscv_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/rx/translate.c b/target/rx/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/translate.c
+++ b/target/rx/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps rx_tr_ops = {
.disas_log = rx_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/s390x/tcg/translate.c b/target/s390x/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/tcg/translate.c
+++ b/target/s390x/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps s390x_tr_ops = {
.disas_log = s390x_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc;
diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps sh4_tr_ops = {
.disas_log = sh4_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps sparc_tr_ops = {
.disas_log = sparc_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc = {};
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps tricore_tr_ops = {
};


-void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cs, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext ctx;
diff --git a/target/xtensa/translate.c b/target/xtensa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/translate.c
+++ b/target/xtensa/translate.c
@@ -XXX,XX +XXX,XX @@ static const TranslatorOps xtensa_translator_ops = {
.disas_log = xtensa_tr_disas_log,
};

-void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns,
+void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int *max_insns,
target_ulong pc, void *host_pc)
{
DisasContext dc = {};
--
2.34.1

Write back the number of insns that we attempt to translate,
so that if we longjmp out we have a more accurate limit for
the next attempt. This results in fewer restarts when some
limit is consumed by few instructions.
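The effect is easiest to see against the retry loop that drives code
generation. A condensed, hypothetical sketch (generate() stands in for
gen_intermediate_code(), which longjmps out when the code buffer fills;
this is not the actual translate-all.c code):

    #include <setjmp.h>

    extern jmp_buf restart;                /* longjmp target on buffer overflow */
    extern void generate(int *max_insns);  /* stand-in for gen_intermediate_code */

    void translate(void)
    {
        int max_insns = 512;

        if (setjmp(restart)) {
            /*
             * A previous attempt overflowed the buffer.  Because the
             * callee wrote the attempted insn count back through the
             * pointer, max_insns has already shrunk to what was tried,
             * not the original oversized limit.
             */
        }
        generate(&max_insns);
    }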
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/translator.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/translator.c b/accel/tcg/translator.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translator.c
+++ b/accel/tcg/translator.c
@@ -XXX,XX +XXX,XX @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);

while (true) {
- db->num_insns++;
+ *max_insns = ++db->num_insns;
ops->insn_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */

--
2.34.1
New patch

Just because the label reference count is more than 1 does
not mean we cannot remove a branch-to-next. By doing this
first, the label reference count may drop to 0, and then
the label itself gets removed as before.
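The newly handled shape, in TCG-op notation (an illustrative stream, not
output from an actual translation; $L1 is also referenced by an earlier
brcond, so its reference count is 2):

    brcond_i32 t0, t1, eq, $L1     first reference to $L1
    ...
    br $L1                         branch-to-next: now removed
    set_label $L1                  refs > 1, so the label itself stays

Removing the br decrements the label's reference count, so in the old
refs == 1 case the count drops to 0 and the existing label-removal path
still reclaims the label, while the refs > 1 case above now loses its
redundant branch too.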
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ TCGOp *tcg_op_insert_after(TCGContext *s, TCGOp *old_op,
/* Reachable analysis : remove unreachable code. */
static void reachable_code_pass(TCGContext *s)
{
- TCGOp *op, *op_next;
+ TCGOp *op, *op_next, *op_prev;
bool dead = false;

QTAILQ_FOREACH_SAFE(op, &s->ops, link, op_next) {
@@ -XXX,XX +XXX,XX @@ static void reachable_code_pass(TCGContext *s)
switch (op->opc) {
case INDEX_op_set_label:
label = arg_label(op->args[0]);
+
+ /*
+ * Optimization can fold conditional branches to unconditional.
+ * If we find a label which is preceded by an unconditional
+ * branch to next, remove the branch. We couldn't do this when
+ * processing the branch because any dead code between the branch
+ * and label had not yet been removed.
+ */
+ op_prev = QTAILQ_PREV(op, link);
+ if (op_prev->opc == INDEX_op_br &&
+ label == arg_label(op_prev->args[0])) {
+ tcg_op_remove(s, op_prev);
+ /* Fall through means insns become live again. */
+ dead = false;
+ }
+
if (label->refs == 0) {
/*
* While there is an occasional backward branch, virtually
@@ -XXX,XX +XXX,XX @@ static void reachable_code_pass(TCGContext *s)
/* Once we see a label, insns become live again. */
dead = false;
remove = false;
-
- /*
- * Optimization can fold conditional branches to unconditional.
- * If we find a label with one reference which is preceded by
- * an unconditional branch to it, remove both. This needed to
- * wait until the dead code in between them was removed.
- */
- if (label->refs == 1) {
- TCGOp *op_prev = QTAILQ_PREV(op, link);
- if (op_prev->opc == INDEX_op_br &&
- label == arg_label(op_prev->args[0])) {
- tcg_op_remove(s, op_prev);
- remove = true;
- }
- }
}
break;

--
2.34.1

Use TEMP_TB as that is more explicit about the default
lifetime of the data. While "global" and "local" used
to be contrasting, we have more lifetimes than that now.

Do not yet rename tcg_temp_local_new_*, just the enum.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/tcg.h | 12 ++++++++----
tcg/optimize.c | 2 +-
tcg/tcg.c | 18 +++++++++---------
3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ typedef enum TCGTempVal {
typedef enum TCGTempKind {
/* Temp is dead at the end of all basic blocks. */
TEMP_NORMAL,
- /* Temp is live across conditional branch, but dead otherwise. */
+ /*
+ * Temp is dead at the end of the extended basic block (EBB),
+ * the single-entry multiple-exit region that falls through
+ * conditional branches.
+ */
TEMP_EBB,
- /* Temp is saved across basic blocks but dead at the end of TBs. */
- TEMP_LOCAL,
- /* Temp is saved across both basic blocks and translation blocks. */
+ /* Temp is live across the entire translation block, but dead at end. */
+ TEMP_TB,
+ /* Temp is live across the entire translation block, and between them. */
TEMP_GLOBAL,
/* Temp is in a fixed register. */
TEMP_FIXED,
diff --git a/tcg/optimize.c b/tcg/optimize.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -XXX,XX +XXX,XX @@ static TCGTemp *find_better_copy(TCGContext *s, TCGTemp *ts)
} else if (i->kind > ts->kind) {
if (i->kind == TEMP_GLOBAL) {
g = i;
- } else if (i->kind == TEMP_LOCAL) {
+ } else if (i->kind == TEMP_TB) {
l = i;
}
}
}
diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ TCGTemp *tcg_global_mem_new_internal(TCGType type, TCGv_ptr base,
TCGTemp *tcg_temp_new_internal(TCGType type, bool temp_local)
{
TCGContext *s = tcg_ctx;
- TCGTempKind kind = temp_local ? TEMP_LOCAL : TEMP_NORMAL;
+ TCGTempKind kind = temp_local ? TEMP_TB : TEMP_NORMAL;
TCGTemp *ts;
int idx, k;

@@ -XXX,XX +XXX,XX @@ void tcg_temp_free_internal(TCGTemp *ts)
*/
return;
case TEMP_NORMAL:
- case TEMP_LOCAL:
+ case TEMP_TB:
break;
default:
g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_start(TCGContext *s)
case TEMP_EBB:
val = TEMP_VAL_DEAD;
/* fall through */
- case TEMP_LOCAL:
+ case TEMP_TB:
ts->mem_allocated = 0;
break;
default:
@@ -XXX,XX +XXX,XX @@ static char *tcg_get_arg_str_ptr(TCGContext *s, char *buf, int buf_size,
case TEMP_GLOBAL:
pstrcpy(buf, buf_size, ts->name);
break;
- case TEMP_LOCAL:
+ case TEMP_TB:
snprintf(buf, buf_size, "loc%d", idx - s->nb_globals);
break;
case TEMP_EBB:
@@ -XXX,XX +XXX,XX @@ static void la_bb_end(TCGContext *s, int ng, int nt)
switch (ts->kind) {
case TEMP_FIXED:
case TEMP_GLOBAL:
- case TEMP_LOCAL:
+ case TEMP_TB:
state = TS_DEAD | TS_MEM;
break;
case TEMP_NORMAL:
@@ -XXX,XX +XXX,XX @@ static void la_bb_sync(TCGContext *s, int ng, int nt)
int state;

switch (ts->kind) {
- case TEMP_LOCAL:
+ case TEMP_TB:
state = ts->state;
ts->state = state | TS_MEM;
if (state != TS_DEAD) {
@@ -XXX,XX +XXX,XX @@ static void temp_free_or_dead(TCGContext *s, TCGTemp *ts, int free_or_dead)
case TEMP_FIXED:
return;
case TEMP_GLOBAL:
- case TEMP_LOCAL:
+ case TEMP_TB:
new_type = TEMP_VAL_MEM;
break;
case TEMP_NORMAL:
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_bb_end(TCGContext *s, TCGRegSet allocated_regs)
TCGTemp *ts = &s->temps[i];

switch (ts->kind) {
- case TEMP_LOCAL:
+ case TEMP_TB:
temp_save(s, ts, allocated_regs);
break;
case TEMP_NORMAL:
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_cbranch(TCGContext *s, TCGRegSet allocated_regs)
* Keep tcg_debug_asserts for safety.
*/
switch (ts->kind) {
- case TEMP_LOCAL:
+ case TEMP_TB:
tcg_debug_assert(ts->val_type != TEMP_VAL_REG || ts->mem_coherent);
break;
case TEMP_NORMAL:
--
2.34.1
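For readers new to the term, an extended basic block is a single-entry,
multiple-exit region: conditional branches may exit it, but nothing jumps
into its middle, and it ends at a label or an unconditional branch. A
hypothetical op stream, for illustration only:

    set_label $L0                    a new EBB begins at the label
    mov_i32 t0, x
    brcond_i32 t0, y, lt, $L2        an exit, but fall-through stays in the EBB
    add_i32 t0, t0, z
    br $L1                           the EBB ends here
    set_label $L1                    the next EBB begins

A TEMP_EBB value is only guaranteed to survive until the end of such a
region; a TEMP_TB value survives to the end of the whole translation block.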
This makes it easier to assign blame with perf.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tcg.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ TCGOp *tcg_op_insert_after(TCGContext *s, TCGOp *old_op,
 }

 /* Reachable analysis : remove unreachable code. */
-static void reachable_code_pass(TCGContext *s)
+static void __attribute__((noinline))
+reachable_code_pass(TCGContext *s)
 {
     TCGOp *op, *op_next, *op_prev;
     bool dead = false;
@@ -XXX,XX +XXX,XX @@ static void la_cross_call(TCGContext *s, int nt)
 /* Liveness analysis : update the opc_arg_life array to tell if a
    given input arguments is dead. Instructions updating dead
    temporaries are removed. */
-static void liveness_pass_1(TCGContext *s)
+static void __attribute__((noinline))
+liveness_pass_1(TCGContext *s)
 {
     int nb_globals = s->nb_globals;
     int nb_temps = s->nb_temps;
@@ -XXX,XX +XXX,XX @@ static void liveness_pass_1(TCGContext *s)
 }

 /* Liveness analysis: Convert indirect regs to direct temporaries. */
-static bool liveness_pass_2(TCGContext *s)
+static bool __attribute__((noinline))
+liveness_pass_2(TCGContext *s)
 {
     int nb_globals = s->nb_globals;
     int nb_temps, i;
--
2.34.1
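
Note: a minimal illustration of the effect, with a hypothetical function that
is not from the patch. A static function with a single caller is typically
inlined, and perf then charges its samples to the caller; the attribute keeps
the pass visible as its own symbol:

    /* Kept out of line so 'perf report' attributes cycles to scale_all. */
    static void __attribute__((noinline))
    scale_all(int *v, size_t n)
    {
        for (size_t i = 0; i < n; ++i) {
            v[i] *= 3;
        }
    }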
We will want to be able to flush a tlb without resizing.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
     }
 }

-static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
+static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
 {
-    CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
-    CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];
-
-    tlb_mmu_resize_locked(desc, fast);
     desc->n_used_entries = 0;
     desc->large_page_addr = -1;
     desc->large_page_mask = -1;
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
     memset(desc->vtable, -1, sizeof(desc->vtable));
 }

+static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
+{
+    CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
+    CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];
+
+    tlb_mmu_resize_locked(desc, fast);
+    tlb_mmu_flush_locked(desc, fast);
+}
+
 static inline void tlb_n_used_entries_inc(CPUArchState *env, uintptr_t mmu_idx)
 {
     env_tlb(env)->d[mmu_idx].n_used_entries++;
--
2.20.1

Attempt to reduce the lifetime of TEMP_TB.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tcg.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ static void la_cross_call(TCGContext *s, int nt)
     }
 }

+/*
+ * Liveness analysis: Verify the lifetime of TEMP_TB, and reduce
+ * to TEMP_EBB, if possible.
+ */
+static void __attribute__((noinline))
+liveness_pass_0(TCGContext *s)
+{
+    void * const multiple_ebb = (void *)(uintptr_t)-1;
+    int nb_temps = s->nb_temps;
+    TCGOp *op, *ebb;
+
+    for (int i = s->nb_globals; i < nb_temps; ++i) {
+        s->temps[i].state_ptr = NULL;
+    }
+
+    /*
+     * Represent each EBB by the op at which it begins.  In the case of
+     * the first EBB, this is the first op, otherwise it is a label.
+     * Collect the uses of each TEMP_TB: NULL for unused, EBB for use
+     * within a single EBB, else MULTIPLE_EBB.
+     */
+    ebb = QTAILQ_FIRST(&s->ops);
+    QTAILQ_FOREACH(op, &s->ops, link) {
+        const TCGOpDef *def;
+        int nb_oargs, nb_iargs;
+
+        switch (op->opc) {
+        case INDEX_op_set_label:
+            ebb = op;
+            continue;
+        case INDEX_op_discard:
+            continue;
+        case INDEX_op_call:
+            nb_oargs = TCGOP_CALLO(op);
+            nb_iargs = TCGOP_CALLI(op);
+            break;
+        default:
+            def = &tcg_op_defs[op->opc];
+            nb_oargs = def->nb_oargs;
+            nb_iargs = def->nb_iargs;
+            break;
+        }
+
+        for (int i = 0; i < nb_oargs + nb_iargs; ++i) {
+            TCGTemp *ts = arg_temp(op->args[i]);
+
+            if (ts->kind != TEMP_TB) {
+                continue;
+            }
+            if (ts->state_ptr == NULL) {
+                ts->state_ptr = ebb;
+            } else if (ts->state_ptr != ebb) {
+                ts->state_ptr = multiple_ebb;
+            }
+        }
+    }
+
+    /*
+     * For TEMP_TB that turned out not to be used beyond one EBB,
+     * reduce the liveness to TEMP_EBB.
+     */
+    for (int i = s->nb_globals; i < nb_temps; ++i) {
+        TCGTemp *ts = &s->temps[i];
+        if (ts->kind == TEMP_TB && ts->state_ptr != multiple_ebb) {
+            ts->kind = TEMP_EBB;
+        }
+    }
+}
+
 /* Liveness analysis : update the opc_arg_life array to tell if a
    given input arguments is dead. Instructions updating dead
    temporaries are removed. */
@@ -XXX,XX +XXX,XX @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb, target_ulong pc_start)
 #endif

     reachable_code_pass(s);
+    liveness_pass_0(s);
     liveness_pass_1(s);

     if (s->nb_indirects > 0) {
--
2.34.1
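
Note: the core of liveness_pass_0 above is a single linear scan that
classifies each TEMP_TB temp as unused, used within one EBB, or used across
EBBs, using the first op of an EBB as that EBB's identity and a sentinel
pointer for "more than one". A self-contained sketch of the same bookkeeping,
with hypothetical stand-in types rather than the TCG data structures:

    #include <stddef.h>

    struct temp {
        const void *seen_in;   /* NULL = unused; else the first EBB seen */
        int multiple;          /* set once used in a second EBB */
    };

    /* Record one use of t inside the EBB identified by ebb. */
    static void note_use(struct temp *t, const void *ebb)
    {
        if (t->seen_in == NULL) {
            t->seen_in = ebb;          /* first use: remember its EBB */
        } else if (t->seen_in != ebb) {
            t->multiple = 1;           /* plays the MULTIPLE_EBB role */
        }
    }

Any temp whose multiple flag is still clear after the scan can be demoted to
the cheaper EBB lifetime, which is exactly what the final loop in the patch
does with ts->kind.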
TEMP_NORMAL is a subset of TEMP_EBB.  Promote single basic
block temps to single extended basic block.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg.h |  2 --
 tcg/tcg.c         | 19 +++----------------
 2 files changed, 3 insertions(+), 18 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ typedef enum TCGTempVal {
 } TCGTempVal;

 typedef enum TCGTempKind {
-    /* Temp is dead at the end of all basic blocks. */
-    TEMP_NORMAL,
     /*
      * Temp is dead at the end of the extended basic block (EBB),
      * the single-entry multiple-exit region that falls through
diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ TCGTemp *tcg_global_mem_new_internal(TCGType type, TCGv_ptr base,
 TCGTemp *tcg_temp_new_internal(TCGType type, bool temp_local)
 {
     TCGContext *s = tcg_ctx;
-    TCGTempKind kind = temp_local ? TEMP_TB : TEMP_NORMAL;
+    TCGTempKind kind = temp_local ? TEMP_TB : TEMP_EBB;
     TCGTemp *ts;
     int idx, k;

@@ -XXX,XX +XXX,XX @@ void tcg_temp_free_internal(TCGTemp *ts)
          * silently ignore free.
          */
         return;
-    case TEMP_NORMAL:
+    case TEMP_EBB:
     case TEMP_TB:
         break;
     default:
@@ -XXX,XX +XXX,XX @@ void tcg_temp_free_internal(TCGTemp *ts)
 #endif

     idx = temp_idx(ts);
-    k = ts->base_type + (ts->kind == TEMP_NORMAL ? 0 : TCG_TYPE_COUNT);
+    k = ts->base_type + (ts->kind == TEMP_EBB ? 0 : TCG_TYPE_COUNT);
     set_bit(idx, s->free_temps[k].l);
 }

@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_start(TCGContext *s)
         break;
     case TEMP_GLOBAL:
         break;
-    case TEMP_NORMAL:
     case TEMP_EBB:
         val = TEMP_VAL_DEAD;
         /* fall through */
@@ -XXX,XX +XXX,XX @@ static char *tcg_get_arg_str_ptr(TCGContext *s, char *buf, int buf_size,
         snprintf(buf, buf_size, "loc%d", idx - s->nb_globals);
         break;
     case TEMP_EBB:
-        snprintf(buf, buf_size, "ebb%d", idx - s->nb_globals);
-        break;
-    case TEMP_NORMAL:
         snprintf(buf, buf_size, "tmp%d", idx - s->nb_globals);
         break;
     case TEMP_CONST:
@@ -XXX,XX +XXX,XX @@ static void la_bb_end(TCGContext *s, int ng, int nt)
     case TEMP_TB:
         state = TS_DEAD | TS_MEM;
         break;
-    case TEMP_NORMAL:
     case TEMP_EBB:
     case TEMP_CONST:
         state = TS_DEAD;
@@ -XXX,XX +XXX,XX @@ static void la_bb_sync(TCGContext *s, int ng, int nt)
             continue;
         }
         break;
-    case TEMP_NORMAL:
-        s->temps[i].state = TS_DEAD;
-        break;
     case TEMP_EBB:
     case TEMP_CONST:
         continue;
@@ -XXX,XX +XXX,XX @@ static void temp_free_or_dead(TCGContext *s, TCGTemp *ts, int free_or_dead)
     case TEMP_TB:
         new_type = TEMP_VAL_MEM;
         break;
-    case TEMP_NORMAL:
     case TEMP_EBB:
         new_type = free_or_dead < 0 ? TEMP_VAL_MEM : TEMP_VAL_DEAD;
         break;
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_bb_end(TCGContext *s, TCGRegSet allocated_regs)
     case TEMP_TB:
         temp_save(s, ts, allocated_regs);
         break;
-    case TEMP_NORMAL:
     case TEMP_EBB:
         /* The liveness analysis already ensures that temps are dead.
            Keep an tcg_debug_assert for safety. */
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_cbranch(TCGContext *s, TCGRegSet allocated_regs)
     case TEMP_TB:
         tcg_debug_assert(ts->val_type != TEMP_VAL_REG || ts->mem_coherent);
         break;
-    case TEMP_NORMAL:
-        tcg_debug_assert(ts->val_type == TEMP_VAL_DEAD);
-        break;
     case TEMP_EBB:
     case TEMP_CONST:
         break;
--
2.34.1
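
Note: with TEMP_NORMAL folded into TEMP_EBB, the remaining kinds order
cleanly by lifetime. A condensed paraphrase of the enum, with one-line
summaries standing in for the full comments in include/tcg/tcg.h:

    typedef enum TCGTempKind {
        TEMP_EBB,    /* dead at the end of the extended basic block */
        TEMP_TB,     /* dead at the end of the translation block */
        TEMP_GLOBAL, /* live across TBs, backed by guest state in memory */
        TEMP_FIXED,  /* permanently held in a host register */
        TEMP_CONST,  /* read-only interned constant */
    } TCGTempKind;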
Do not call get_clock_realtime() in tlb_mmu_resize_locked,
but hoist outside of any loop over a set of tlbs.  There are
only two (indirect) callers, tlb_flush_by_mmuidx_async_work
and tlb_flush_page_locked, so not onerous.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,
  * high), since otherwise we are likely to have a significant amount of
  * conflict misses.
  */
-static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
+static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
+                                  int64_t now)
 {
     size_t old_size = tlb_n_entries(fast);
     size_t rate;
     size_t new_size = old_size;
-    int64_t now = get_clock_realtime();
     int64_t window_len_ms = 100;
     int64_t window_len_ns = window_len_ms * 1000 * 1000;
     bool window_expired = now > desc->window_begin_ns + window_len_ns;
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
     memset(desc->vtable, -1, sizeof(desc->vtable));
 }

-static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
+static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx,
+                                        int64_t now)
 {
     CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
     CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];

-    tlb_mmu_resize_locked(desc, fast);
+    tlb_mmu_resize_locked(desc, fast, now);
     tlb_mmu_flush_locked(desc, fast);
 }

@@ -XXX,XX +XXX,XX @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
     CPUArchState *env = cpu->env_ptr;
     uint16_t asked = data.host_int;
     uint16_t all_dirty, work, to_clean;
+    int64_t now = get_clock_realtime();

     assert_cpu_is_self(cpu);

@@ -XXX,XX +XXX,XX @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)

     for (work = to_clean; work != 0; work &= work - 1) {
         int mmu_idx = ctz32(work);
-        tlb_flush_one_mmuidx_locked(env, mmu_idx);
+        tlb_flush_one_mmuidx_locked(env, mmu_idx, now);
     }

     qemu_spin_unlock(&env_tlb(env)->c.lock);
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_locked(CPUArchState *env, int midx,
         tlb_debug("forcing full flush midx %d ("
                   TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
                   midx, lp_addr, lp_mask);
-        tlb_flush_one_mmuidx_locked(env, midx);
+        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
     } else {
         if (tlb_flush_entry_locked(tlb_entry(env, midx, page), page)) {
             tlb_n_used_entries_dec(env, midx);
--
2.20.1

While the argument can only be TEMP_EBB or TEMP_TB,
it's more obvious this way.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg.h | 18 +++++++++---------
 tcg/tcg.c         |  8 ++++----
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ void tcg_set_frame(TCGContext *s, TCGReg reg, intptr_t start, intptr_t size);

 TCGTemp *tcg_global_mem_new_internal(TCGType, TCGv_ptr,
                                      intptr_t, const char *);
-TCGTemp *tcg_temp_new_internal(TCGType, bool);
+TCGTemp *tcg_temp_new_internal(TCGType, TCGTempKind);
 void tcg_temp_free_internal(TCGTemp *);
 TCGv_vec tcg_temp_new_vec(TCGType type);
 TCGv_vec tcg_temp_new_vec_matching(TCGv_vec match);
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 tcg_global_mem_new_i32(TCGv_ptr reg, intptr_t offset,

 static inline TCGv_i32 tcg_temp_new_i32(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, false);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_EBB);
     return temp_tcgv_i32(t);
 }

 static inline TCGv_i32 tcg_temp_local_new_i32(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, true);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_TB);
     return temp_tcgv_i32(t);
 }

@@ -XXX,XX +XXX,XX @@ static inline TCGv_i64 tcg_global_mem_new_i64(TCGv_ptr reg, intptr_t offset,

 static inline TCGv_i64 tcg_temp_new_i64(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, false);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_EBB);
     return temp_tcgv_i64(t);
 }

 static inline TCGv_i64 tcg_temp_local_new_i64(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, true);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_TB);
     return temp_tcgv_i64(t);
 }

 static inline TCGv_i128 tcg_temp_new_i128(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, false);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_EBB);
     return temp_tcgv_i128(t);
 }

 static inline TCGv_i128 tcg_temp_local_new_i128(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, true);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_TB);
     return temp_tcgv_i128(t);
 }

@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr tcg_global_mem_new_ptr(TCGv_ptr reg, intptr_t offset,

 static inline TCGv_ptr tcg_temp_new_ptr(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, false);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_EBB);
     return temp_tcgv_ptr(t);
 }

 static inline TCGv_ptr tcg_temp_local_new_ptr(void)
 {
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, true);
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_TB);
     return temp_tcgv_ptr(t);
 }

diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ TCGTemp *tcg_global_mem_new_internal(TCGType type, TCGv_ptr base,
     return ts;
 }

-TCGTemp *tcg_temp_new_internal(TCGType type, bool temp_local)
+TCGTemp *tcg_temp_new_internal(TCGType type, TCGTempKind kind)
 {
     TCGContext *s = tcg_ctx;
-    TCGTempKind kind = temp_local ? TEMP_TB : TEMP_EBB;
+    bool temp_local = kind == TEMP_TB;
     TCGTemp *ts;
     int idx, k;

@@ -XXX,XX +XXX,XX @@ TCGv_vec tcg_temp_new_vec(TCGType type)
     }
 }
 #endif

-    t = tcg_temp_new_internal(type, 0);
+    t = tcg_temp_new_internal(type, TEMP_EBB);
     return temp_tcgv_vec(t);
 }

@@ -XXX,XX +XXX,XX @@ TCGv_vec tcg_temp_new_vec_matching(TCGv_vec match)

     tcg_debug_assert(t->temp_allocated != 0);

-    t = tcg_temp_new_internal(t->base_type, 0);
+    t = tcg_temp_new_internal(t->base_type, TEMP_EBB);
     return temp_tcgv_vec(t);
 }

--
2.34.1

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/gen-icount.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/exec/gen-icount.h b/include/exec/gen-icount.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/gen-icount.h
+++ b/include/exec/gen-icount.h
@@ -XXX,XX +XXX,XX @@ static TCGOp *icount_start_insn;

 static inline void gen_io_start(void)
 {
-    TCGv_i32 tmp = tcg_const_i32(1);
-    tcg_gen_st_i32(tmp, cpu_env,
+    tcg_gen_st_i32(tcg_constant_i32(1), cpu_env,
                    offsetof(ArchCPU, parent_obj.can_do_io) -
                    offsetof(ArchCPU, env));
-    tcg_temp_free_i32(tmp);
 }

 static inline void gen_tb_start(const TranslationBlock *tb)
--
2.34.1

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg-op.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg-op.h
+++ b/include/tcg/tcg-op.h
@@ -XXX,XX +XXX,XX @@ static inline void tcg_gen_mov_ptr(TCGv_ptr d, TCGv_ptr s)
     glue(tcg_gen_mov_,PTR)((NAT)d, (NAT)s);
 }

+static inline void tcg_gen_movi_ptr(TCGv_ptr d, intptr_t s)
+{
+    glue(tcg_gen_movi_,PTR)((NAT)d, s);
+}
+
 static inline void tcg_gen_brcondi_ptr(TCGCond cond, TCGv_ptr a,
                                        intptr_t b, TCGLabel *label)
 {
--
2.34.1

We do not need the entire CPUArchState to compute these values.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ QEMU_BUILD_BUG_ON(sizeof(target_ulong) > sizeof(run_on_cpu_data));
 QEMU_BUILD_BUG_ON(NB_MMU_MODES > 16);
 #define ALL_MMUIDX_BITS ((1 << NB_MMU_MODES) - 1)

-static inline size_t tlb_n_entries(CPUArchState *env, uintptr_t mmu_idx)
+static inline size_t tlb_n_entries(CPUTLBDescFast *fast)
 {
-    return (env_tlb(env)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS) + 1;
+    return (fast->mask >> CPU_TLB_ENTRY_BITS) + 1;
 }

-static inline size_t sizeof_tlb(CPUArchState *env, uintptr_t mmu_idx)
+static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
 {
-    return env_tlb(env)->f[mmu_idx].mask + (1 << CPU_TLB_ENTRY_BITS);
+    return fast->mask + (1 << CPU_TLB_ENTRY_BITS);
 }

 static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,
@@ -XXX,XX +XXX,XX @@ static void tlb_dyn_init(CPUArchState *env)
 static void tlb_mmu_resize_locked(CPUArchState *env, int mmu_idx)
 {
     CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
-    size_t old_size = tlb_n_entries(env, mmu_idx);
+    size_t old_size = tlb_n_entries(&env_tlb(env)->f[mmu_idx]);
     size_t rate;
     size_t new_size = old_size;
     int64_t now = get_clock_realtime();
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
     env_tlb(env)->d[mmu_idx].large_page_addr = -1;
     env_tlb(env)->d[mmu_idx].large_page_mask = -1;
     env_tlb(env)->d[mmu_idx].vindex = 0;
-    memset(env_tlb(env)->f[mmu_idx].table, -1, sizeof_tlb(env, mmu_idx));
+    memset(env_tlb(env)->f[mmu_idx].table, -1,
+           sizeof_tlb(&env_tlb(env)->f[mmu_idx]));
     memset(env_tlb(env)->d[mmu_idx].vtable, -1,
            sizeof(env_tlb(env)->d[0].vtable));
 }
@@ -XXX,XX +XXX,XX @@ void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
     qemu_spin_lock(&env_tlb(env)->c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         unsigned int i;
-        unsigned int n = tlb_n_entries(env, mmu_idx);
+        unsigned int n = tlb_n_entries(&env_tlb(env)->f[mmu_idx]);

         for (i = 0; i < n; i++) {
             tlb_reset_dirty_range_locked(&env_tlb(env)->f[mmu_idx].table[i],
--
2.20.1

TCG internals will want to be able to allocate and reuse
explicitly life-limited temporaries.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 tcg_global_mem_new_i32(TCGv_ptr reg, intptr_t offset,
     return temp_tcgv_i32(t);
 }

+/* Used only by tcg infrastructure: tcg-op.c or plugin-gen.c */
+static inline TCGv_i32 tcg_temp_ebb_new_i32(void)
+{
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_EBB);
+    return temp_tcgv_i32(t);
+}
+
 static inline TCGv_i32 tcg_temp_new_i32(void)
 {
     TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_EBB);
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i64 tcg_global_mem_new_i64(TCGv_ptr reg, intptr_t offset,
     return temp_tcgv_i64(t);
 }

+/* Used only by tcg infrastructure: tcg-op.c or plugin-gen.c */
+static inline TCGv_i64 tcg_temp_ebb_new_i64(void)
+{
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_EBB);
+    return temp_tcgv_i64(t);
+}
+
 static inline TCGv_i64 tcg_temp_new_i64(void)
 {
     TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_EBB);
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i64 tcg_temp_local_new_i64(void)
     return temp_tcgv_i64(t);
 }

+/* Used only by tcg infrastructure: tcg-op.c or plugin-gen.c */
+static inline TCGv_i128 tcg_temp_ebb_new_i128(void)
+{
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_EBB);
+    return temp_tcgv_i128(t);
+}
+
 static inline TCGv_i128 tcg_temp_new_i128(void)
 {
     TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_EBB);
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr tcg_global_mem_new_ptr(TCGv_ptr reg, intptr_t offset,
     return temp_tcgv_ptr(t);
 }

+/* Used only by tcg infrastructure: tcg-op.c or plugin-gen.c */
+static inline TCGv_ptr tcg_temp_ebb_new_ptr(void)
+{
+    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_EBB);
+    return temp_tcgv_ptr(t);
+}
+
 static inline TCGv_ptr tcg_temp_new_ptr(void)
 {
     TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_EBB);
--
2.34.1
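
Note: a minimal usage sketch for the new allocators, with hypothetical src
and dst operands; the real call-site conversions follow in the next patch:

    /* Scratch confined to one extended basic block: use the EBB variant. */
    TCGv_i64 t = tcg_temp_ebb_new_i64();
    tcg_gen_shri_i64(t, src, 32);
    tcg_gen_add_i64(dst, dst, t);
    tcg_temp_free_i64(t);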
All of these have obvious and quite local scope.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tcg-op-gvec.c | 186 ++++++++++++++++-----------------
 tcg/tcg-op.c      | 258 +++++++++++++++++++++++-----------------------
 tcg/tcg.c         |   2 +-
 3 files changed, 223 insertions(+), 223 deletions(-)

diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_2_ool(uint32_t dofs, uint32_t aofs,
     TCGv_ptr a0, a1;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_2i_ool(uint32_t dofs, uint32_t aofs, TCGv_i64 c,
     TCGv_ptr a0, a1;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_3_ool(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_4_ool(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_5_ool(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3, a4;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
-    a4 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();
+    a4 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_2_ptr(uint32_t dofs, uint32_t aofs,
     TCGv_ptr a0, a1;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_3_ptr(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_4_ptr(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_5_ptr(uint32_t dofs, uint32_t aofs, uint32_t bofs,
     TCGv_ptr a0, a1, a2, a3, a4;
     TCGv_i32 desc = tcg_constant_i32(simd_desc(oprsz, maxsz, data));

-    a0 = tcg_temp_new_ptr();
-    a1 = tcg_temp_new_ptr();
-    a2 = tcg_temp_new_ptr();
-    a3 = tcg_temp_new_ptr();
-    a4 = tcg_temp_new_ptr();
+    a0 = tcg_temp_ebb_new_ptr();
+    a1 = tcg_temp_ebb_new_ptr();
+    a2 = tcg_temp_ebb_new_ptr();
+    a3 = tcg_temp_ebb_new_ptr();
+    a4 = tcg_temp_ebb_new_ptr();

     tcg_gen_addi_ptr(a0, cpu_env, dofs);
     tcg_gen_addi_ptr(a1, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
            be simple enough.  */
         if (TCG_TARGET_REG_BITS == 64
             && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
-            t_64 = tcg_temp_new_i64();
+            t_64 = tcg_temp_ebb_new_i64();
             tcg_gen_extu_i32_i64(t_64, in_32);
             tcg_gen_dup_i64(vece, t_64, t_64);
         } else {
-            t_32 = tcg_temp_new_i32();
+            t_32 = tcg_temp_ebb_new_i32();
             tcg_gen_dup_i32(vece, t_32, in_32);
         }
     } else if (in_64) {
         /* We are given a 64-bit variable input. */
-        t_64 = tcg_temp_new_i64();
+        t_64 = tcg_temp_ebb_new_i64();
         tcg_gen_dup_i64(vece, t_64, in_64);
     } else {
         /* We are given a constant input. */
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     }

     /* Otherwise implement out of line. */
-    t_ptr = tcg_temp_new_ptr();
+    t_ptr = tcg_temp_ebb_new_ptr();
     tcg_gen_addi_ptr(t_ptr, cpu_env, dofs);

     /*
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
         if (in_32) {
             t_val = in_32;
         } else if (in_64) {
-            t_val = tcg_temp_new_i32();
+            t_val = tcg_temp_ebb_new_i32();
             tcg_gen_extrl_i64_i32(t_val, in_64);
         } else {
             t_val = tcg_constant_i32(in_c);
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     if (in_32) {
         fns[vece](t_ptr, t_desc, in_32);
     } else if (in_64) {
-        t_32 = tcg_temp_new_i32();
+        t_32 = tcg_temp_ebb_new_i32();
         tcg_gen_extrl_i64_i32(t_32, in_64);
         fns[vece](t_ptr, t_desc, t_32);
         tcg_temp_free_i32(t_32);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             do_dup_store(type, dofs, oprsz, maxsz, t_vec);
             tcg_temp_free_vec(t_vec);
         } else if (vece <= MO_32) {
-            TCGv_i32 in = tcg_temp_new_i32();
+            TCGv_i32 in = tcg_temp_ebb_new_i32();
             switch (vece) {
             case MO_8:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             do_dup(vece, dofs, oprsz, maxsz, in, NULL, 0);
             tcg_temp_free_i32(in);
         } else {
-            TCGv_i64 in = tcg_temp_new_i64();
+            TCGv_i64 in = tcg_temp_ebb_new_i64();
             tcg_gen_ld_i64(in, cpu_env, aofs);
             do_dup(vece, dofs, oprsz, maxsz, NULL, in, 0);
             tcg_temp_free_i64(in);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             }
             tcg_temp_free_vec(in);
         } else {
-            TCGv_i64 in0 = tcg_temp_new_i64();
-            TCGv_i64 in1 = tcg_temp_new_i64();
+            TCGv_i64 in0 = tcg_temp_ebb_new_i64();
+            TCGv_i64 in1 = tcg_temp_ebb_new_i64();

             tcg_gen_ld_i64(in0, cpu_env, aofs);
             tcg_gen_ld_i64(in1, cpu_env, aofs + 8);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             int j;

             for (j = 0; j < 4; ++j) {
-                in[j] = tcg_temp_new_i64();
+                in[j] = tcg_temp_ebb_new_i64();
                 tcg_gen_ld_i64(in[j], cpu_env, aofs + j * 8);
             }
             for (i = (aofs == dofs) * 32; i < oprsz; i += 32) {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_not(unsigned vece, uint32_t dofs, uint32_t aofs,
    the 64-bit operation. */
 static void gen_addv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();

     tcg_gen_andc_i64(t1, a, m);
     tcg_gen_andc_i64(t2, b, m);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 void tcg_gen_vec_add8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
     TCGv_i32 m = tcg_constant_i32((int32_t)dup_const(MO_8, 0x80));
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
-    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t3 = tcg_temp_ebb_new_i32();

     tcg_gen_andc_i32(t1, a, m);
     tcg_gen_andc_i32(t2, b, m);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_add16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t1, a, ~0xffff);
     tcg_gen_add_i32(t2, a, b);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_add16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 void tcg_gen_vec_add32_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t1, a, ~0xffffffffull);
     tcg_gen_add_i64(t2, a, b);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
    Compare gen_addv_mask above.  */
 static void gen_subv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();

     tcg_gen_or_i64(t1, a, m);
     tcg_gen_andc_i64(t2, b, m);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 void tcg_gen_vec_sub8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
     TCGv_i32 m = tcg_constant_i32((int32_t)dup_const(MO_8, 0x80));
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
-    TCGv_i32 t3 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t3 = tcg_temp_ebb_new_i32();

     tcg_gen_or_i32(t1, a, m);
     tcg_gen_andc_i32(t2, b, m);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_sub16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t1 = tcg_temp_new_i32();
-    TCGv_i32 t2 = tcg_temp_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t2 = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t1, b, ~0xffff);
     tcg_gen_sub_i32(t2, a, b);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sub16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 void tcg_gen_vec_sub32_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t1, b, ~0xffffffffull);
     tcg_gen_sub_i64(t2, a, b);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
    Compare gen_subv_mask above.  */
 static void gen_negv_mask(TCGv_i64 d, TCGv_i64 b, TCGv_i64 m)
 {
-    TCGv_i64 t2 = tcg_temp_new_i64();
-    TCGv_i64 t3 = tcg_temp_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t3 = tcg_temp_ebb_new_i64();

     tcg_gen_andc_i64(t3, m, b);
     tcg_gen_andc_i64(t2, b, m);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b)

 void tcg_gen_vec_neg32_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 t1 = tcg_temp_new_i64();
-    TCGv_i64 t2 = tcg_temp_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t2 = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t1, b, ~0xffffffffull);
     tcg_gen_neg_i64(t2, b);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,

 static void gen_absv_mask(TCGv_i64 d, TCGv_i64 b, unsigned vece)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();
     int nbit = 8 << vece;

     /* Create -1 for each negative element.  */
@@ -XXX,XX +XXX,XX @@ static const GVecGen2s gop_ands = {
 void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
+    TCGv_i64 tmp = tcg_temp_ebb_new_i64();
     tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ands);
     tcg_temp_free_i64(tmp);
@@ -XXX,XX +XXX,XX @@ static const GVecGen2s gop_xors = {
 void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
                        TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
+    TCGv_i64 tmp = tcg_temp_ebb_new_i64();
     tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_xors);
     tcg_temp_free_i64(tmp);
@@ -XXX,XX +XXX,XX @@ static const GVecGen2s gop_ors = {
 void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
                       TCGv_i64 c, uint32_t oprsz, uint32_t maxsz)
 {
-    TCGv_i64 tmp = tcg_temp_new_i64();
+    TCGv_i64 tmp = tcg_temp_ebb_new_i64();
     tcg_gen_dup_i64(vece, tmp, c);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, tmp, &gop_ors);
     tcg_temp_free_i64(tmp);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
     uint64_t s_mask = dup_const(MO_8, 0x80 >> c);
     uint64_t c_mask = dup_const(MO_8, 0xff >> c);
-    TCGv_i64 s = tcg_temp_new_i64();
+    TCGv_i64 s = tcg_temp_ebb_new_i64();

     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
     uint64_t s_mask = dup_const(MO_16, 0x8000 >> c);
     uint64_t c_mask = dup_const(MO_16, 0xffff >> c);
-    TCGv_i64 s = tcg_temp_new_i64();
+    TCGv_i64 s = tcg_temp_ebb_new_i64();

     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sar8i_i32(TCGv_i32 d, TCGv_i32 a, int32_t c)
 {
     uint32_t s_mask = dup_const(MO_8, 0x80 >> c);
     uint32_t c_mask = dup_const(MO_8, 0xff >> c);
-    TCGv_i32 s = tcg_temp_new_i32();
+    TCGv_i32 s = tcg_temp_ebb_new_i32();

     tcg_gen_shri_i32(d, a, c);
     tcg_gen_andi_i32(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_vec_sar16i_i32(TCGv_i32 d, TCGv_i32 a, int32_t c)
 {
     uint32_t s_mask = dup_const(MO_16, 0x8000 >> c);
     uint32_t c_mask = dup_const(MO_16, 0xffff >> c);
-    TCGv_i32 s = tcg_temp_new_i32();
+    TCGv_i32 s = tcg_temp_ebb_new_i32();

     tcg_gen_shri_i32(d, a, c);
     tcg_gen_andi_i32(s, d, s_mask);  /* isolate (shifted) sign bit */
@@ -XXX,XX +XXX,XX @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
         TCGv_vec v_shift = tcg_temp_new_vec(type);

         if (vece == MO_64) {
-            TCGv_i64 sh64 = tcg_temp_new_i64();
+            TCGv_i64 sh64 = tcg_temp_ebb_new_i64();
             tcg_gen_extu_i32_i64(sh64, shift);
             tcg_gen_dup_i64_vec(MO_64, v_shift, sh64);
             tcg_temp_free_i64(sh64);
@@ -XXX,XX +XXX,XX @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
         if (vece == MO_32 && check_size_impl(oprsz, 4)) {
             expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
         } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
-            TCGv_i64 sh64 = tcg_temp_new_i64();
+            TCGv_i64 sh64 = tcg_temp_ebb_new_i64();
             tcg_gen_extu_i32_i64(sh64, shift);
             expand_2s_i64(dofs, aofs, oprsz, sh64, false, g->fni8);
             tcg_temp_free_i64(sh64);
         } else {
-            TCGv_ptr a0 = tcg_temp_new_ptr();
-            TCGv_ptr a1 = tcg_temp_new_ptr();
-            TCGv_i32 desc = tcg_temp_new_i32();
+            TCGv_ptr a0 = tcg_temp_ebb_new_ptr();
+            TCGv_ptr a1 = tcg_temp_ebb_new_ptr();
+            TCGv_i32 desc = tcg_temp_ebb_new_i32();

             tcg_gen_shli_i32(desc, shift, SIMD_DATA_SHIFT);
             tcg_gen_ori_i32(desc, desc, simd_desc(oprsz, maxsz, 0));
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_shlv_mod_vec(unsigned vece, TCGv_vec d,

 static void tcg_gen_shl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_shl_i32(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_shl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 static void tcg_gen_shl_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_shl_i64(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_shrv_mod_vec(unsigned vece, TCGv_vec d,

 static void tcg_gen_shr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_shr_i32(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_shr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 static void tcg_gen_shr_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_shr_i64(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_sarv_mod_vec(unsigned vece, TCGv_vec d,

 static void tcg_gen_sar_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_sar_i32(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_sar_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 static void tcg_gen_sar_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_sar_i64(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_rotlv_mod_vec(unsigned vece, TCGv_vec d,

 static void tcg_gen_rotl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_rotl_i32(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_rotl_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 static void tcg_gen_rotl_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_rotl_i64(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_rotrv_mod_vec(unsigned vece, TCGv_vec d,

 static void tcg_gen_rotr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();

     tcg_gen_andi_i32(t, b, 31);
     tcg_gen_rotr_i32(d, a, t);
@@ -XXX,XX +XXX,XX @@ static void tcg_gen_rotr_mod_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)

 static void tcg_gen_rotr_mod_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();

     tcg_gen_andi_i64(t, b, 63);
     tcg_gen_rotr_i64(d, a, t);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_rotrv(unsigned vece, uint32_t dofs, uint32_t aofs,
 static void expand_cmp_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                            uint32_t oprsz, TCGCond cond)
 {
-    TCGv_i32 t0 = tcg_temp_new_i32();
-    TCGv_i32 t1 = tcg_temp_new_i32();
+    TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+    TCGv_i32 t1 = tcg_temp_ebb_new_i32();
     uint32_t i;

     for (i = 0; i < oprsz; i += 4) {
@@ -XXX,XX +XXX,XX @@ static void expand_cmp_i32(uint32_t dofs, uint32_t aofs, uint32_t bofs,
 static void expand_cmp_i64(uint32_t dofs, uint32_t aofs, uint32_t bofs,
                            uint32_t oprsz, TCGCond cond)
 {
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+    TCGv_i64 t1 = tcg_temp_ebb_new_i64();
     uint32_t i;

     for (i = 0; i < oprsz; i += 8) {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,

 static void tcg_gen_bitsel_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 c)
 {
-    TCGv_i64 t = tcg_temp_new_i64();
+    TCGv_i64 t = tcg_temp_ebb_new_i64();

     tcg_gen_and_i64(t, b, a);
     tcg_gen_andc_i64(d, c, a);
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -XXX,XX +XXX,XX @@ void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_div_i32) {
         tcg_gen_op3_i32(INDEX_op_div_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_sari_i32(t0, arg1, 31);
         tcg_gen_op5_i32(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_rem_i32) {
         tcg_gen_op3_i32(INDEX_op_rem_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_div_i32, t0, arg1, arg2);
         tcg_gen_mul_i32(t0, t0, arg2);
         tcg_gen_sub_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_sari_i32(t0, arg1, 31);
         tcg_gen_op5_i32(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_div_i32) {
         tcg_gen_op3_i32(INDEX_op_divu_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_movi_i32(t0, 0);
         tcg_gen_op5_i32(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_rem_i32) {
         tcg_gen_op3_i32(INDEX_op_remu_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_divu_i32, t0, arg1, arg2);
         tcg_gen_mul_i32(t0, t0, arg2);
         tcg_gen_sub_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
     } else if (TCG_TARGET_HAS_div2_i32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_movi_i32(t0, 0);
         tcg_gen_op5_i32(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_andc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_andc_i32) {
         tcg_gen_op3_i32(INDEX_op_andc_i32, ret, arg1, arg2);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_not_i32(t0, arg2);
         tcg_gen_and_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_orc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_orc_i32) {
         tcg_gen_op3_i32(INDEX_op_orc_i32, ret, arg1, arg2);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_not_i32(t0, arg2);
         tcg_gen_or_i32(ret, arg1, t0);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_clz_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_clz_i32) {
         tcg_gen_op3_i32(INDEX_op_clz_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_clz_i64) {
-        TCGv_i64 t1 = tcg_temp_new_i64();
-        TCGv_i64 t2 = tcg_temp_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t2 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t1, arg1);
         tcg_gen_extu_i32_i64(t2, arg2);
         tcg_gen_addi_i64(t2, t2, 32);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctz_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_ctz_i32) {
         tcg_gen_op3_i32(INDEX_op_ctz_i32, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_ctz_i64) {
-        TCGv_i64 t1 = tcg_temp_new_i64();
-        TCGv_i64 t2 = tcg_temp_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t2 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t1, arg1);
         tcg_gen_extu_i32_i64(t2, arg2);
         tcg_gen_ctz_i64(t1, t1, t2);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctz_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
                || TCG_TARGET_HAS_ctpop_i64
                || TCG_TARGET_HAS_clz_i32
                || TCG_TARGET_HAS_clz_i64) {
-        TCGv_i32 z, t = tcg_temp_new_i32();
+        TCGv_i32 z, t = tcg_temp_ebb_new_i32();

         if (TCG_TARGET_HAS_ctpop_i32 || TCG_TARGET_HAS_ctpop_i64) {
             tcg_gen_subi_i32(t, arg1, 1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctzi_i32(TCGv_i32 ret, TCGv_i32 arg1, uint32_t arg2)
 {
     if (!TCG_TARGET_HAS_ctz_i32 && TCG_TARGET_HAS_ctpop_i32 && arg2 == 32) {
         /* This equivalence has the advantage of not requiring a fixup.  */
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_subi_i32(t, arg1, 1);
         tcg_gen_andc_i32(t, t, arg1);
         tcg_gen_ctpop_i32(ret, t);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctzi_i32(TCGv_i32 ret, TCGv_i32 arg1, uint32_t arg2)
 void tcg_gen_clrsb_i32(TCGv_i32 ret, TCGv_i32 arg)
 {
     if (TCG_TARGET_HAS_clz_i32) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_sari_i32(t, arg, 31);
         tcg_gen_xor_i32(t, t, arg);
         tcg_gen_clzi_i32(t, t, 32);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctpop_i32(TCGv_i32 ret, TCGv_i32 arg1)
     if (TCG_TARGET_HAS_ctpop_i32) {
         tcg_gen_op2_i32(INDEX_op_ctpop_i32, ret, arg1);
     } else if (TCG_TARGET_HAS_ctpop_i64) {
-        TCGv_i64 t = tcg_temp_new_i64();
+        TCGv_i64 t = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t, arg1);
         tcg_gen_ctpop_i64(t, t);
         tcg_gen_extrl_i64_i32(ret, t);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rotl_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     } else {
         TCGv_i32 t0, t1;

-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
         tcg_gen_shl_i32(t0, arg1, arg2);
         tcg_gen_subfi_i32(t1, 32, arg2);
         tcg_gen_shr_i32(t1, arg1, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
         tcg_gen_rotl_i32(ret, arg1, tcg_constant_i32(arg2));
     } else {
         TCGv_i32 t0, t1;
-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
         tcg_gen_shli_i32(t0, arg1, arg2);
         tcg_gen_shri_i32(t1, arg1, 32 - arg2);
         tcg_gen_or_i32(ret, t0, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rotr_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
     } else {
         TCGv_i32 t0, t1;

-        t0 = tcg_temp_new_i32();
-        t1 = tcg_temp_new_i32();
+        t0 = tcg_temp_ebb_new_i32();
+        t1 = tcg_temp_ebb_new_i32();
         tcg_gen_shr_i32(t0, arg1, arg2);
         tcg_gen_subfi_i32(t1, 32, arg2);
         tcg_gen_shl_i32(t1, arg1, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2,
         return;
     }

-    t1 = tcg_temp_new_i32();
+    t1 = tcg_temp_ebb_new_i32();

     if (TCG_TARGET_HAS_extract2_i32) {
         if (ofs + len == 32) {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_extract2_i32(TCGv_i32 ret, TCGv_i32 al, TCGv_i32 ah,
     } else if (TCG_TARGET_HAS_extract2_i32) {
         tcg_gen_op4i_i32(INDEX_op_extract2_i32, ret, al, ah, ofs);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
         tcg_gen_shri_i32(t0, al, ofs);
         tcg_gen_deposit_i32(ret, t0, ah, 32 - ofs, ofs);
         tcg_temp_free_i32(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_movcond_i32(TCGCond cond, TCGv_i32 ret, TCGv_i32 c1,
     } else if (TCG_TARGET_HAS_movcond_i32) {
         tcg_gen_op6i_i32(INDEX_op_movcond_i32, ret, c1, c2, v1, v2, cond);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
         tcg_gen_setcond_i32(cond, t0, c1, c2);
         tcg_gen_neg_i32(t0, t0);
         tcg_gen_and_i32(t1, v1, t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_add2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 al,
     if (TCG_TARGET_HAS_add2_i32) {
         tcg_gen_op6_i32(INDEX_op_add2_i32, rl, rh, al, ah, bl, bh);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_concat_i32_i64(t0, al, ah);
         tcg_gen_concat_i32_i64(t1, bl, bh);
         tcg_gen_add_i64(t0, t0, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_sub2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 al,
     if (TCG_TARGET_HAS_sub2_i32) {
         tcg_gen_op6_i32(INDEX_op_sub2_i32, rl, rh, al, ah, bl, bh);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_concat_i32_i64(t0, al, ah);
         tcg_gen_concat_i32_i64(t1, bl, bh);
         tcg_gen_sub_i64(t0, t0, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_mulu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_mulu2_i32) {
         tcg_gen_op4_i32(INDEX_op_mulu2_i32, rl, rh, arg1, arg2);
     } else if (TCG_TARGET_HAS_muluh_i32) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_mul_i32, t, arg1, arg2);
         tcg_gen_op3_i32(INDEX_op_muluh_i32, rh, arg1, arg2);
         tcg_gen_mov_i32(rl, t);
         tcg_temp_free_i32(t);
     } else if (TCG_TARGET_REG_BITS == 64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_extu_i32_i64(t0, arg1);
         tcg_gen_extu_i32_i64(t1, arg2);
         tcg_gen_mul_i64(t0, t0, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
     if (TCG_TARGET_HAS_muls2_i32) {
         tcg_gen_op4_i32(INDEX_op_muls2_i32, rl, rh, arg1, arg2);
     } else if (TCG_TARGET_HAS_mulsh_i32) {
-        TCGv_i32 t = tcg_temp_new_i32();
+        TCGv_i32 t = tcg_temp_ebb_new_i32();
         tcg_gen_op3_i32(INDEX_op_mul_i32, t, arg1, arg2);
         tcg_gen_op3_i32(INDEX_op_mulsh_i32, rh, arg1, arg2);
         tcg_gen_mov_i32(rl, t);
         tcg_temp_free_i32(t);
     } else if (TCG_TARGET_REG_BITS == 32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
-        TCGv_i32 t2 = tcg_temp_new_i32();
-        TCGv_i32 t3 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t2 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t3 = tcg_temp_ebb_new_i32();
         tcg_gen_mulu2_i32(t0, t1, arg1, arg2);
         /* Adjust for negative inputs.  */
         tcg_gen_sari_i32(t2, arg1, 31);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
         tcg_temp_free_i32(t2);
         tcg_temp_free_i32(t3);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_ext_i32_i64(t0, arg1);
         tcg_gen_ext_i32_i64(t1, arg2);
         tcg_gen_mul_i64(t0, t0, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_muls2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
 void tcg_gen_mulsu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
 {
     if (TCG_TARGET_REG_BITS == 32) {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
-        TCGv_i32 t2 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t2 = tcg_temp_ebb_new_i32();
         tcg_gen_mulu2_i32(t0, t1, arg1, arg2);
         /* Adjust for negative input for the signed arg1.  */
         tcg_gen_sari_i32(t2, arg1, 31);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_mulsu2_i32(TCGv_i32 rl, TCGv_i32 rh, TCGv_i32 arg1, TCGv_i32 arg2)
         tcg_temp_free_i32(t1);
         tcg_temp_free_i32(t2);
     } else {
-        TCGv_i64 t0 = tcg_temp_new_i64();
-        TCGv_i64 t1 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
+        TCGv_i64 t1 = tcg_temp_ebb_new_i64();
         tcg_gen_ext_i32_i64(t0, arg1);
         tcg_gen_extu_i32_i64(t1, arg2);
         tcg_gen_mul_i64(t0, t0, t1);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap16_i32(TCGv_i32 ret, TCGv_i32 arg, int flags)
     if (TCG_TARGET_HAS_bswap16_i32) {
         tcg_gen_op3i_i32(INDEX_op_bswap16_i32, ret, arg, flags);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();

         tcg_gen_shri_i32(t0, arg, 8);
         if (!(flags & TCG_BSWAP_IZ)) {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap32_i32(TCGv_i32 ret, TCGv_i32 arg)
     if (TCG_TARGET_HAS_bswap32_i32) {
         tcg_gen_op3i_i32(INDEX_op_bswap32_i32, ret, arg, 0);
     } else {
-        TCGv_i32 t0 = tcg_temp_new_i32();
-        TCGv_i32 t1 = tcg_temp_new_i32();
+        TCGv_i32 t0 = tcg_temp_ebb_new_i32();
+        TCGv_i32 t1 = tcg_temp_ebb_new_i32();
         TCGv_i32 t2 = tcg_constant_i32(0x00ff00ff);

                                         /* arg = abcd */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umax_i32(TCGv_i32 ret, TCGv_i32 a, TCGv_i32 b)

 void tcg_gen_abs_i32(TCGv_i32 ret, TCGv_i32 a)
 {
-    TCGv_i32 t = tcg_temp_new_i32();
+    TCGv_i32 t = tcg_temp_ebb_new_i32();

     tcg_gen_sari_i32(t, a, 31);
     tcg_gen_xor_i32(ret, a, t);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_mul_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     TCGv_i64 t0;
     TCGv_i32 t1;

-    t0 = tcg_temp_new_i64();
-    t1 = tcg_temp_new_i32();
+    t0 = tcg_temp_ebb_new_i64();
+    t1 = tcg_temp_ebb_new_i32();

     tcg_gen_mulu2_i32(TCGV_LOW(t0), TCGV_HIGH(t0),
                       TCGV_LOW(arg1), TCGV_LOW(arg2));
@@ -XXX,XX +XXX,XX @@ static inline void tcg_gen_shifti_i64(TCGv_i64 ret, TCGv_i64 arg1,
             tcg_gen_extract2_i32(TCGV_HIGH(ret),
                                  TCGV_LOW(arg1), TCGV_HIGH(arg1), 32 - c);
         } else {
-            TCGv_i32 t0 = tcg_temp_new_i32();
+            TCGv_i32 t0 = tcg_temp_ebb_new_i32();
             tcg_gen_shri_i32(t0, TCGV_LOW(arg1), 32 - c);
             tcg_gen_deposit_i32(TCGV_HIGH(ret), t0,
                                 TCGV_HIGH(arg1), c, 32 - c);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_div_i64) {
         tcg_gen_op3_i64(INDEX_op_div_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_sari_i64(t0, arg1, 63);
         tcg_gen_op5_i64(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_rem_i64) {
         tcg_gen_op3_i64(INDEX_op_rem_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_op3_i64(INDEX_op_div_i64, t0, arg1, arg2);
         tcg_gen_mul_i64(t0, t0, arg2);
         tcg_gen_sub_i64(ret, arg1, t0);
         tcg_temp_free_i64(t0);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_sari_i64(t0, arg1, 63);
         tcg_gen_op5_i64(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_div_i64) {
         tcg_gen_op3_i64(INDEX_op_divu_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div2_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_movi_i64(t0, 0);
         tcg_gen_op5_i64(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2);
         tcg_temp_free_i64(t0);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
     if (TCG_TARGET_HAS_rem_i64) {
         tcg_gen_op3_i64(INDEX_op_remu_i64, ret, arg1, arg2);
     } else if (TCG_TARGET_HAS_div_i64) {
-        TCGv_i64 t0 = tcg_temp_new_i64();
+        TCGv_i64 t0 = tcg_temp_ebb_new_i64();
         tcg_gen_op3_i64(INDEX_op_divu_i64, t0, arg1, arg2);
943
tcg_gen_mul_i64(t0, t0, arg2);
944
tcg_gen_sub_i64(ret, arg1, t0);
945
tcg_temp_free_i64(t0);
946
} else if (TCG_TARGET_HAS_div2_i64) {
947
- TCGv_i64 t0 = tcg_temp_new_i64();
948
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
949
tcg_gen_movi_i64(t0, 0);
950
tcg_gen_op5_i64(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2);
951
tcg_temp_free_i64(t0);
952
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap16_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
953
} else if (TCG_TARGET_HAS_bswap16_i64) {
954
tcg_gen_op3i_i64(INDEX_op_bswap16_i64, ret, arg, flags);
955
} else {
956
- TCGv_i64 t0 = tcg_temp_new_i64();
957
- TCGv_i64 t1 = tcg_temp_new_i64();
958
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
959
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
960
961
tcg_gen_shri_i64(t0, arg, 8);
962
if (!(flags & TCG_BSWAP_IZ)) {
963
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap32_i64(TCGv_i64 ret, TCGv_i64 arg, int flags)
964
} else if (TCG_TARGET_HAS_bswap32_i64) {
965
tcg_gen_op3i_i64(INDEX_op_bswap32_i64, ret, arg, flags);
966
} else {
967
- TCGv_i64 t0 = tcg_temp_new_i64();
968
- TCGv_i64 t1 = tcg_temp_new_i64();
969
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
970
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
971
TCGv_i64 t2 = tcg_constant_i64(0x00ff00ff);
972
973
/* arg = xxxxabcd */
974
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
975
{
976
if (TCG_TARGET_REG_BITS == 32) {
977
TCGv_i32 t0, t1;
978
- t0 = tcg_temp_new_i32();
979
- t1 = tcg_temp_new_i32();
980
+ t0 = tcg_temp_ebb_new_i32();
981
+ t1 = tcg_temp_ebb_new_i32();
982
983
tcg_gen_bswap32_i32(t0, TCGV_LOW(arg));
984
tcg_gen_bswap32_i32(t1, TCGV_HIGH(arg));
985
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
986
} else if (TCG_TARGET_HAS_bswap64_i64) {
987
tcg_gen_op3i_i64(INDEX_op_bswap64_i64, ret, arg, 0);
988
} else {
989
- TCGv_i64 t0 = tcg_temp_new_i64();
990
- TCGv_i64 t1 = tcg_temp_new_i64();
991
- TCGv_i64 t2 = tcg_temp_new_i64();
992
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
993
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
994
+ TCGv_i64 t2 = tcg_temp_ebb_new_i64();
995
996
/* arg = abcdefgh */
997
tcg_gen_movi_i64(t2, 0x00ff00ff00ff00ffull);
998
@@ -XXX,XX +XXX,XX @@ void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
999
void tcg_gen_hswap_i64(TCGv_i64 ret, TCGv_i64 arg)
1000
{
1001
uint64_t m = 0x0000ffff0000ffffull;
1002
- TCGv_i64 t0 = tcg_temp_new_i64();
1003
- TCGv_i64 t1 = tcg_temp_new_i64();
1004
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1005
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1006
1007
/* See include/qemu/bitops.h, hswap64. */
1008
tcg_gen_rotli_i64(t1, arg, 32);
1009
@@ -XXX,XX +XXX,XX @@ void tcg_gen_andc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
1010
} else if (TCG_TARGET_HAS_andc_i64) {
1011
tcg_gen_op3_i64(INDEX_op_andc_i64, ret, arg1, arg2);
1012
} else {
1013
- TCGv_i64 t0 = tcg_temp_new_i64();
1014
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1015
tcg_gen_not_i64(t0, arg2);
1016
tcg_gen_and_i64(ret, arg1, t0);
1017
tcg_temp_free_i64(t0);
1018
@@ -XXX,XX +XXX,XX @@ void tcg_gen_orc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
1019
} else if (TCG_TARGET_HAS_orc_i64) {
1020
tcg_gen_op3_i64(INDEX_op_orc_i64, ret, arg1, arg2);
1021
} else {
1022
- TCGv_i64 t0 = tcg_temp_new_i64();
1023
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1024
tcg_gen_not_i64(t0, arg2);
1025
tcg_gen_or_i64(ret, arg1, t0);
1026
tcg_temp_free_i64(t0);
1027
@@ -XXX,XX +XXX,XX @@ void tcg_gen_clzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
1028
if (TCG_TARGET_REG_BITS == 32
1029
&& TCG_TARGET_HAS_clz_i32
1030
&& arg2 <= 0xffffffffu) {
1031
- TCGv_i32 t = tcg_temp_new_i32();
1032
+ TCGv_i32 t = tcg_temp_ebb_new_i32();
1033
tcg_gen_clzi_i32(t, TCGV_LOW(arg1), arg2 - 32);
1034
tcg_gen_addi_i32(t, t, 32);
1035
tcg_gen_clz_i32(TCGV_LOW(ret), TCGV_HIGH(arg1), t);
1036
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctz_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
1037
if (TCG_TARGET_HAS_ctz_i64) {
1038
tcg_gen_op3_i64(INDEX_op_ctz_i64, ret, arg1, arg2);
1039
} else if (TCG_TARGET_HAS_ctpop_i64 || TCG_TARGET_HAS_clz_i64) {
1040
- TCGv_i64 z, t = tcg_temp_new_i64();
1041
+ TCGv_i64 z, t = tcg_temp_ebb_new_i64();
1042
1043
if (TCG_TARGET_HAS_ctpop_i64) {
1044
tcg_gen_subi_i64(t, arg1, 1);
1045
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
1046
if (TCG_TARGET_REG_BITS == 32
1047
&& TCG_TARGET_HAS_ctz_i32
1048
&& arg2 <= 0xffffffffu) {
1049
- TCGv_i32 t32 = tcg_temp_new_i32();
1050
+ TCGv_i32 t32 = tcg_temp_ebb_new_i32();
1051
tcg_gen_ctzi_i32(t32, TCGV_HIGH(arg1), arg2 - 32);
1052
tcg_gen_addi_i32(t32, t32, 32);
1053
tcg_gen_ctz_i32(TCGV_LOW(ret), TCGV_LOW(arg1), t32);
1054
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
1055
&& TCG_TARGET_HAS_ctpop_i64
1056
&& arg2 == 64) {
1057
/* This equivalence has the advantage of not requiring a fixup. */
1058
- TCGv_i64 t = tcg_temp_new_i64();
1059
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1060
tcg_gen_subi_i64(t, arg1, 1);
1061
tcg_gen_andc_i64(t, t, arg1);
1062
tcg_gen_ctpop_i64(ret, t);
1063
@@ -XXX,XX +XXX,XX @@ void tcg_gen_ctzi_i64(TCGv_i64 ret, TCGv_i64 arg1, uint64_t arg2)
1064
void tcg_gen_clrsb_i64(TCGv_i64 ret, TCGv_i64 arg)
1065
{
1066
if (TCG_TARGET_HAS_clz_i64 || TCG_TARGET_HAS_clz_i32) {
1067
- TCGv_i64 t = tcg_temp_new_i64();
1068
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1069
tcg_gen_sari_i64(t, arg, 63);
1070
tcg_gen_xor_i64(t, t, arg);
1071
tcg_gen_clzi_i64(t, t, 64);
1072
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rotl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
1073
tcg_gen_op3_i64(INDEX_op_rotl_i64, ret, arg1, arg2);
1074
} else {
1075
TCGv_i64 t0, t1;
1076
- t0 = tcg_temp_new_i64();
1077
- t1 = tcg_temp_new_i64();
1078
+ t0 = tcg_temp_ebb_new_i64();
1079
+ t1 = tcg_temp_ebb_new_i64();
1080
tcg_gen_shl_i64(t0, arg1, arg2);
1081
tcg_gen_subfi_i64(t1, 64, arg2);
1082
tcg_gen_shr_i64(t1, arg1, t1);
1083
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
1084
tcg_gen_rotl_i64(ret, arg1, tcg_constant_i64(arg2));
1085
} else {
1086
TCGv_i64 t0, t1;
1087
- t0 = tcg_temp_new_i64();
1088
- t1 = tcg_temp_new_i64();
1089
+ t0 = tcg_temp_ebb_new_i64();
1090
+ t1 = tcg_temp_ebb_new_i64();
1091
tcg_gen_shli_i64(t0, arg1, arg2);
1092
tcg_gen_shri_i64(t1, arg1, 64 - arg2);
1093
tcg_gen_or_i64(ret, t0, t1);
1094
@@ -XXX,XX +XXX,XX @@ void tcg_gen_rotr_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
1095
tcg_gen_op3_i64(INDEX_op_rotr_i64, ret, arg1, arg2);
1096
} else {
1097
TCGv_i64 t0, t1;
1098
- t0 = tcg_temp_new_i64();
1099
- t1 = tcg_temp_new_i64();
1100
+ t0 = tcg_temp_ebb_new_i64();
1101
+ t1 = tcg_temp_ebb_new_i64();
1102
tcg_gen_shr_i64(t0, arg1, arg2);
1103
tcg_gen_subfi_i64(t1, 64, arg2);
1104
tcg_gen_shl_i64(t1, arg1, t1);
1105
@@ -XXX,XX +XXX,XX @@ void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2,
1106
}
1107
}
1108
1109
- t1 = tcg_temp_new_i64();
1110
+ t1 = tcg_temp_ebb_new_i64();
1111
1112
if (TCG_TARGET_HAS_extract2_i64) {
1113
if (ofs + len == 64) {
1114
@@ -XXX,XX +XXX,XX @@ void tcg_gen_sextract_i64(TCGv_i64 ret, TCGv_i64 arg,
1115
tcg_gen_sextract_i32(TCGV_HIGH(ret), TCGV_HIGH(arg), 0, len - 32);
1116
return;
1117
} else if (len > 32) {
1118
- TCGv_i32 t = tcg_temp_new_i32();
1119
+ TCGv_i32 t = tcg_temp_ebb_new_i32();
1120
/* Extract the bits for the high word normally. */
1121
tcg_gen_sextract_i32(t, TCGV_HIGH(arg), ofs + 32, len - 32);
1122
/* Shift the field down for the low part. */
1123
@@ -XXX,XX +XXX,XX @@ void tcg_gen_extract2_i64(TCGv_i64 ret, TCGv_i64 al, TCGv_i64 ah,
1124
} else if (TCG_TARGET_HAS_extract2_i64) {
1125
tcg_gen_op4i_i64(INDEX_op_extract2_i64, ret, al, ah, ofs);
1126
} else {
1127
- TCGv_i64 t0 = tcg_temp_new_i64();
1128
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1129
tcg_gen_shri_i64(t0, al, ofs);
1130
tcg_gen_deposit_i64(ret, t0, ah, 64 - ofs, ofs);
1131
tcg_temp_free_i64(t0);
1132
@@ -XXX,XX +XXX,XX @@ void tcg_gen_movcond_i64(TCGCond cond, TCGv_i64 ret, TCGv_i64 c1,
1133
} else if (cond == TCG_COND_NEVER) {
1134
tcg_gen_mov_i64(ret, v2);
1135
} else if (TCG_TARGET_REG_BITS == 32) {
1136
- TCGv_i32 t0 = tcg_temp_new_i32();
1137
- TCGv_i32 t1 = tcg_temp_new_i32();
1138
+ TCGv_i32 t0 = tcg_temp_ebb_new_i32();
1139
+ TCGv_i32 t1 = tcg_temp_ebb_new_i32();
1140
tcg_gen_op6i_i32(INDEX_op_setcond2_i32, t0,
1141
TCGV_LOW(c1), TCGV_HIGH(c1),
1142
TCGV_LOW(c2), TCGV_HIGH(c2), cond);
1143
@@ -XXX,XX +XXX,XX @@ void tcg_gen_movcond_i64(TCGCond cond, TCGv_i64 ret, TCGv_i64 c1,
1144
} else if (TCG_TARGET_HAS_movcond_i64) {
1145
tcg_gen_op6i_i64(INDEX_op_movcond_i64, ret, c1, c2, v1, v2, cond);
1146
} else {
1147
- TCGv_i64 t0 = tcg_temp_new_i64();
1148
- TCGv_i64 t1 = tcg_temp_new_i64();
1149
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1150
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1151
tcg_gen_setcond_i64(cond, t0, c1, c2);
1152
tcg_gen_neg_i64(t0, t0);
1153
tcg_gen_and_i64(t1, v1, t0);
1154
@@ -XXX,XX +XXX,XX @@ void tcg_gen_add2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
1155
if (TCG_TARGET_HAS_add2_i64) {
1156
tcg_gen_op6_i64(INDEX_op_add2_i64, rl, rh, al, ah, bl, bh);
1157
} else {
1158
- TCGv_i64 t0 = tcg_temp_new_i64();
1159
- TCGv_i64 t1 = tcg_temp_new_i64();
1160
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1161
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1162
tcg_gen_add_i64(t0, al, bl);
1163
tcg_gen_setcond_i64(TCG_COND_LTU, t1, t0, al);
1164
tcg_gen_add_i64(rh, ah, bh);
1165
@@ -XXX,XX +XXX,XX @@ void tcg_gen_sub2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 al,
1166
if (TCG_TARGET_HAS_sub2_i64) {
1167
tcg_gen_op6_i64(INDEX_op_sub2_i64, rl, rh, al, ah, bl, bh);
1168
} else {
1169
- TCGv_i64 t0 = tcg_temp_new_i64();
1170
- TCGv_i64 t1 = tcg_temp_new_i64();
1171
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1172
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1173
tcg_gen_sub_i64(t0, al, bl);
1174
tcg_gen_setcond_i64(TCG_COND_LTU, t1, al, bl);
1175
tcg_gen_sub_i64(rh, ah, bh);
1176
@@ -XXX,XX +XXX,XX @@ void tcg_gen_mulu2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
1177
if (TCG_TARGET_HAS_mulu2_i64) {
1178
tcg_gen_op4_i64(INDEX_op_mulu2_i64, rl, rh, arg1, arg2);
1179
} else if (TCG_TARGET_HAS_muluh_i64) {
1180
- TCGv_i64 t = tcg_temp_new_i64();
1181
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1182
tcg_gen_op3_i64(INDEX_op_mul_i64, t, arg1, arg2);
1183
tcg_gen_op3_i64(INDEX_op_muluh_i64, rh, arg1, arg2);
1184
tcg_gen_mov_i64(rl, t);
1185
tcg_temp_free_i64(t);
1186
} else {
1187
- TCGv_i64 t0 = tcg_temp_new_i64();
1188
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1189
tcg_gen_mul_i64(t0, arg1, arg2);
1190
gen_helper_muluh_i64(rh, arg1, arg2);
1191
tcg_gen_mov_i64(rl, t0);
1192
@@ -XXX,XX +XXX,XX @@ void tcg_gen_muls2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
1193
if (TCG_TARGET_HAS_muls2_i64) {
1194
tcg_gen_op4_i64(INDEX_op_muls2_i64, rl, rh, arg1, arg2);
1195
} else if (TCG_TARGET_HAS_mulsh_i64) {
1196
- TCGv_i64 t = tcg_temp_new_i64();
1197
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1198
tcg_gen_op3_i64(INDEX_op_mul_i64, t, arg1, arg2);
1199
tcg_gen_op3_i64(INDEX_op_mulsh_i64, rh, arg1, arg2);
1200
tcg_gen_mov_i64(rl, t);
1201
tcg_temp_free_i64(t);
1202
} else if (TCG_TARGET_HAS_mulu2_i64 || TCG_TARGET_HAS_muluh_i64) {
1203
- TCGv_i64 t0 = tcg_temp_new_i64();
1204
- TCGv_i64 t1 = tcg_temp_new_i64();
1205
- TCGv_i64 t2 = tcg_temp_new_i64();
1206
- TCGv_i64 t3 = tcg_temp_new_i64();
1207
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1208
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1209
+ TCGv_i64 t2 = tcg_temp_ebb_new_i64();
1210
+ TCGv_i64 t3 = tcg_temp_ebb_new_i64();
1211
tcg_gen_mulu2_i64(t0, t1, arg1, arg2);
1212
/* Adjust for negative inputs. */
1213
tcg_gen_sari_i64(t2, arg1, 63);
1214
@@ -XXX,XX +XXX,XX @@ void tcg_gen_muls2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
1215
tcg_temp_free_i64(t2);
1216
tcg_temp_free_i64(t3);
1217
} else {
1218
- TCGv_i64 t0 = tcg_temp_new_i64();
1219
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1220
tcg_gen_mul_i64(t0, arg1, arg2);
1221
gen_helper_mulsh_i64(rh, arg1, arg2);
1222
tcg_gen_mov_i64(rl, t0);
1223
@@ -XXX,XX +XXX,XX @@ void tcg_gen_muls2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
1224
1225
void tcg_gen_mulsu2_i64(TCGv_i64 rl, TCGv_i64 rh, TCGv_i64 arg1, TCGv_i64 arg2)
1226
{
1227
- TCGv_i64 t0 = tcg_temp_new_i64();
1228
- TCGv_i64 t1 = tcg_temp_new_i64();
1229
- TCGv_i64 t2 = tcg_temp_new_i64();
1230
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1231
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1232
+ TCGv_i64 t2 = tcg_temp_ebb_new_i64();
1233
tcg_gen_mulu2_i64(t0, t1, arg1, arg2);
1234
/* Adjust for negative input for the signed arg1. */
1235
tcg_gen_sari_i64(t2, arg1, 63);
1236
@@ -XXX,XX +XXX,XX @@ void tcg_gen_umax_i64(TCGv_i64 ret, TCGv_i64 a, TCGv_i64 b)
1237
1238
void tcg_gen_abs_i64(TCGv_i64 ret, TCGv_i64 a)
1239
{
1240
- TCGv_i64 t = tcg_temp_new_i64();
1241
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1242
1243
tcg_gen_sari_i64(t, a, 63);
1244
tcg_gen_xor_i64(ret, a, t);
1245
@@ -XXX,XX +XXX,XX @@ void tcg_gen_extrh_i64_i32(TCGv_i32 ret, TCGv_i64 arg)
1246
tcg_gen_op2(INDEX_op_extrh_i64_i32,
1247
tcgv_i32_arg(ret), tcgv_i64_arg(arg));
1248
} else {
1249
- TCGv_i64 t = tcg_temp_new_i64();
1250
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1251
tcg_gen_shri_i64(t, arg, 32);
1252
tcg_gen_mov_i32(ret, (TCGv_i32)t);
1253
tcg_temp_free_i64(t);
1254
@@ -XXX,XX +XXX,XX @@ void tcg_gen_concat_i32_i64(TCGv_i64 dest, TCGv_i32 low, TCGv_i32 high)
1255
return;
1256
}
1257
1258
- tmp = tcg_temp_new_i64();
1259
+ tmp = tcg_temp_ebb_new_i64();
1260
/* These extensions are only needed for type correctness.
1261
We may be able to do better given target specific information. */
1262
tcg_gen_extu_i32_i64(tmp, high);
1263
@@ -XXX,XX +XXX,XX @@ void tcg_gen_lookup_and_goto_ptr(void)
1264
}
1265
1266
plugin_gen_disable_mem_helpers();
1267
- ptr = tcg_temp_new_ptr();
1268
+ ptr = tcg_temp_ebb_new_ptr();
1269
gen_helper_lookup_tb_ptr(ptr, cpu_env);
1270
tcg_gen_op1i(INDEX_op_goto_ptr, tcgv_ptr_arg(ptr));
1271
tcg_temp_free_ptr(ptr);
1272
@@ -XXX,XX +XXX,XX @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
1273
oi = make_memop_idx(memop, idx);
1274
1275
if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
1276
- swap = tcg_temp_new_i32();
1277
+ swap = tcg_temp_ebb_new_i32();
1278
switch (memop & MO_SIZE) {
1279
case MO_16:
1280
tcg_gen_bswap16_i32(swap, val, 0);
1281
@@ -XXX,XX +XXX,XX @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
1282
oi = make_memop_idx(memop, idx);
1283
1284
if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
1285
- swap = tcg_temp_new_i64();
1286
+ swap = tcg_temp_ebb_new_i64();
1287
switch (memop & MO_SIZE) {
1288
case MO_16:
1289
tcg_gen_bswap16_i64(swap, val, 0);
1290
@@ -XXX,XX +XXX,XX @@ void tcg_gen_qemu_st_i128(TCGv_i128 val, TCGv addr, TCGArg idx, MemOp memop)
1291
1292
addr_p8 = tcg_temp_new();
1293
if ((mop[0] ^ memop) & MO_BSWAP) {
1294
- TCGv_i64 t = tcg_temp_new_i64();
1295
+ TCGv_i64 t = tcg_temp_ebb_new_i64();
1296
1297
tcg_gen_bswap64_i64(t, x);
1298
gen_ldst_i64(INDEX_op_qemu_st_i64, t, addr, mop[0], idx);
1299
@@ -XXX,XX +XXX,XX @@ static void * const table_cmpxchg[(MO_SIZE | MO_BSWAP) + 1] = {
1300
void tcg_gen_nonatomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
1301
TCGv_i32 newv, TCGArg idx, MemOp memop)
1302
{
1303
- TCGv_i32 t1 = tcg_temp_new_i32();
1304
- TCGv_i32 t2 = tcg_temp_new_i32();
1305
+ TCGv_i32 t1 = tcg_temp_ebb_new_i32();
1306
+ TCGv_i32 t2 = tcg_temp_ebb_new_i32();
1307
1308
tcg_gen_ext_i32(t2, cmpv, memop & MO_SIZE);
1309
1310
@@ -XXX,XX +XXX,XX @@ void tcg_gen_nonatomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
1311
return;
1312
}
1313
1314
- t1 = tcg_temp_new_i64();
1315
- t2 = tcg_temp_new_i64();
1316
+ t1 = tcg_temp_ebb_new_i64();
1317
+ t2 = tcg_temp_ebb_new_i64();
1318
1319
tcg_gen_ext_i64(t2, cmpv, memop & MO_SIZE);
1320
1321
@@ -XXX,XX +XXX,XX @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
1322
tcg_gen_movi_i32(TCGV_HIGH(retv), 0);
1323
}
1324
} else {
1325
- TCGv_i32 c32 = tcg_temp_new_i32();
1326
- TCGv_i32 n32 = tcg_temp_new_i32();
1327
- TCGv_i32 r32 = tcg_temp_new_i32();
1328
+ TCGv_i32 c32 = tcg_temp_ebb_new_i32();
1329
+ TCGv_i32 n32 = tcg_temp_ebb_new_i32();
1330
+ TCGv_i32 r32 = tcg_temp_ebb_new_i32();
1331
1332
tcg_gen_extrl_i64_i32(c32, cmpv);
1333
tcg_gen_extrl_i64_i32(n32, newv);
1334
@@ -XXX,XX +XXX,XX @@ void tcg_gen_nonatomic_cmpxchg_i128(TCGv_i128 retv, TCGv addr, TCGv_i128 cmpv,
1335
1336
gen(retv, cpu_env, addr, cmpv, newv, tcg_constant_i32(oi));
1337
} else {
1338
- TCGv_i128 oldv = tcg_temp_new_i128();
1339
- TCGv_i128 tmpv = tcg_temp_new_i128();
1340
- TCGv_i64 t0 = tcg_temp_new_i64();
1341
- TCGv_i64 t1 = tcg_temp_new_i64();
1342
+ TCGv_i128 oldv = tcg_temp_ebb_new_i128();
1343
+ TCGv_i128 tmpv = tcg_temp_ebb_new_i128();
1344
+ TCGv_i64 t0 = tcg_temp_ebb_new_i64();
1345
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1346
TCGv_i64 z = tcg_constant_i64(0);
1347
1348
tcg_gen_qemu_ld_i128(oldv, addr, idx, memop);
1349
@@ -XXX,XX +XXX,XX @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
1350
TCGArg idx, MemOp memop, bool new_val,
1351
void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
1352
{
1353
- TCGv_i32 t1 = tcg_temp_new_i32();
1354
- TCGv_i32 t2 = tcg_temp_new_i32();
1355
+ TCGv_i32 t1 = tcg_temp_ebb_new_i32();
1356
+ TCGv_i32 t2 = tcg_temp_ebb_new_i32();
1357
1358
memop = tcg_canonicalize_memop(memop, 0, 0);
1359
1360
@@ -XXX,XX +XXX,XX @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
1361
TCGArg idx, MemOp memop, bool new_val,
1362
void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
1363
{
1364
- TCGv_i64 t1 = tcg_temp_new_i64();
1365
- TCGv_i64 t2 = tcg_temp_new_i64();
1366
+ TCGv_i64 t1 = tcg_temp_ebb_new_i64();
1367
+ TCGv_i64 t2 = tcg_temp_ebb_new_i64();
1368
1369
memop = tcg_canonicalize_memop(memop, 1, 0);
1370
1371
@@ -XXX,XX +XXX,XX @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
1372
tcg_gen_movi_i64(ret, 0);
1373
#endif /* CONFIG_ATOMIC64 */
1374
} else {
1375
- TCGv_i32 v32 = tcg_temp_new_i32();
1376
- TCGv_i32 r32 = tcg_temp_new_i32();
1377
+ TCGv_i32 v32 = tcg_temp_ebb_new_i32();
1378
+ TCGv_i32 r32 = tcg_temp_ebb_new_i32();
1379
1380
tcg_gen_extrl_i64_i32(v32, val);
1381
do_atomic_op_i32(r32, addr, v32, idx, memop & ~MO_SIGN, table);
1382
diff --git a/tcg/tcg.c b/tcg/tcg.c
1383
index XXXXXXX..XXXXXXX 100644
1384
--- a/tcg/tcg.c
1385
+++ b/tcg/tcg.c
1386
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
1387
case TCG_CALL_ARG_EXTEND_U:
1388
case TCG_CALL_ARG_EXTEND_S:
1389
{
1390
- TCGv_i64 temp = tcg_temp_new_i64();
1391
+ TCGv_i64 temp = tcg_temp_ebb_new_i64();
1392
TCGv_i32 orig = temp_tcgv_i32(ts);
1393
1394
if (loc->kind == TCG_CALL_ARG_EXTEND_S) {
1395
--
1396
2.34.1
1397
1398
1
The accel_initialised variable no longer has any setters.
1
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
3
Fixes: 6f6e1698a68c
4
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
4
---
10
vl.c | 3 +--
5
tcg/tcg-op-gvec.c | 3 +--
11
1 file changed, 1 insertion(+), 2 deletions(-)
6
1 file changed, 1 insertion(+), 2 deletions(-)
12
7
13
diff --git a/vl.c b/vl.c
8
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
14
index XXXXXXX..XXXXXXX 100644
9
index XXXXXXX..XXXXXXX 100644
15
--- a/vl.c
10
--- a/tcg/tcg-op-gvec.c
16
+++ b/vl.c
11
+++ b/tcg/tcg-op-gvec.c
17
@@ -XXX,XX +XXX,XX @@ static void configure_accelerators(const char *progname)
12
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
18
{
13
* stores through to memset.
19
const char *accel;
14
*/
20
char **accel_list, **tmp;
15
if (oprsz == maxsz && vece == MO_8) {
21
- bool accel_initialised = false;
16
- TCGv_ptr t_size = tcg_const_ptr(oprsz);
22
bool init_failed = false;
17
+ TCGv_ptr t_size = tcg_constant_ptr(oprsz);
23
18
TCGv_i32 t_val;
24
qemu_opts_foreach(qemu_find_opts("icount"),
19
25
@@ -XXX,XX +XXX,XX @@ static void configure_accelerators(const char *progname)
20
if (in_32) {
26
21
@@ -XXX,XX +XXX,XX @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
27
accel_list = g_strsplit(accel, ":", 0);
22
if (in_64) {
28
23
tcg_temp_free_i32(t_val);
29
- for (tmp = accel_list; !accel_initialised && tmp && *tmp; tmp++) {
24
}
30
+ for (tmp = accel_list; tmp && *tmp; tmp++) {
25
- tcg_temp_free_ptr(t_size);
31
/*
26
tcg_temp_free_ptr(t_ptr);
32
* Filter invalid accelerators here, to prevent obscenities
27
return;
33
* such as "-machine accel=tcg,,thread=single".
28
}
34
--
29
--
35
2.20.1
30
2.34.1
36
31
37
32
New patch
1
All of these uses have quite local scope.
2
Avoid tcg_const_*, because we haven't added a corresponding
3
interface for TEMP_EBB. Use explicit tcg_gen_movi_* instead.
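A sketch of the conversion pattern, for illustration only (the exact
call sites are in the hunks below):

    /* Before: tcg_const_ptr() allocated a temp and initialized it. */
    TCGv_ptr udata = tcg_const_ptr(NULL);

    /* After: allocate an EBB temp, then initialize it explicitly. */
    TCGv_ptr udata = tcg_temp_ebb_new_ptr();
    tcg_gen_movi_ptr(udata, 0);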
1
4
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
accel/tcg/plugin-gen.c | 24 ++++++++++++++----------
9
1 file changed, 14 insertions(+), 10 deletions(-)
10
11
diff --git a/accel/tcg/plugin-gen.c b/accel/tcg/plugin-gen.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/accel/tcg/plugin-gen.c
14
+++ b/accel/tcg/plugin-gen.c
15
@@ -XXX,XX +XXX,XX @@ void HELPER(plugin_vcpu_mem_cb)(unsigned int vcpu_index,
16
17
static void do_gen_mem_cb(TCGv vaddr, uint32_t info)
18
{
19
- TCGv_i32 cpu_index = tcg_temp_new_i32();
20
- TCGv_i32 meminfo = tcg_const_i32(info);
21
- TCGv_i64 vaddr64 = tcg_temp_new_i64();
22
- TCGv_ptr udata = tcg_const_ptr(NULL);
23
+ TCGv_i32 cpu_index = tcg_temp_ebb_new_i32();
24
+ TCGv_i32 meminfo = tcg_temp_ebb_new_i32();
25
+ TCGv_i64 vaddr64 = tcg_temp_ebb_new_i64();
26
+ TCGv_ptr udata = tcg_temp_ebb_new_ptr();
27
28
+ tcg_gen_movi_i32(meminfo, info);
29
+ tcg_gen_movi_ptr(udata, 0);
30
tcg_gen_ld_i32(cpu_index, cpu_env,
31
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
32
tcg_gen_extu_tl_i64(vaddr64, vaddr);
33
@@ -XXX,XX +XXX,XX @@ static void do_gen_mem_cb(TCGv vaddr, uint32_t info)
34
35
static void gen_empty_udata_cb(void)
36
{
37
- TCGv_i32 cpu_index = tcg_temp_new_i32();
38
- TCGv_ptr udata = tcg_const_ptr(NULL); /* will be overwritten later */
39
+ TCGv_i32 cpu_index = tcg_temp_ebb_new_i32();
40
+ TCGv_ptr udata = tcg_temp_ebb_new_ptr();
41
42
+ tcg_gen_movi_ptr(udata, 0);
43
tcg_gen_ld_i32(cpu_index, cpu_env,
44
-offsetof(ArchCPU, env) + offsetof(CPUState, cpu_index));
45
gen_helper_plugin_vcpu_udata_cb(cpu_index, udata);
46
@@ -XXX,XX +XXX,XX @@ static void gen_empty_udata_cb(void)
47
*/
48
static void gen_empty_inline_cb(void)
49
{
50
- TCGv_i64 val = tcg_temp_new_i64();
51
- TCGv_ptr ptr = tcg_const_ptr(NULL); /* overwritten later */
52
+ TCGv_i64 val = tcg_temp_ebb_new_i64();
53
+ TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
54
55
+ tcg_gen_movi_ptr(ptr, 0);
56
tcg_gen_ld_i64(val, ptr, 0);
57
/* pass an immediate != 0 so that it doesn't get optimized away */
58
tcg_gen_addi_i64(val, val, 0xdeadface);
59
@@ -XXX,XX +XXX,XX @@ static void gen_empty_mem_cb(TCGv addr, uint32_t info)
60
*/
61
static void gen_empty_mem_helper(void)
62
{
63
- TCGv_ptr ptr;
64
+ TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
65
66
- ptr = tcg_const_ptr(NULL);
67
+ tcg_gen_movi_ptr(ptr, 0);
68
tcg_gen_st_ptr(ptr, cpu_env, offsetof(CPUState, plugin_mem_cbs) -
69
offsetof(ArchCPU, env));
70
tcg_temp_free_ptr(ptr);
71
--
72
2.34.1
73
74
New patch
1
Here we are creating a temp whose value needs to be replaced,
2
but always storing NULL into CPUState.plugin_mem_cbs.
3
Use tcg_constant_ptr(0) explicitly.
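A minimal before/after sketch of the idea; "off" here stands in for
the real offsetof() expression used in the hunk below:

    /* Before: allocate a temp, initialize it, store it, free it. */
    TCGv_ptr ptr = tcg_temp_ebb_new_ptr();
    tcg_gen_movi_ptr(ptr, 0);
    tcg_gen_st_ptr(ptr, cpu_env, off);
    tcg_temp_free_ptr(ptr);

    /* After: a constant temp needs no initialization and no free. */
    tcg_gen_st_ptr(tcg_constant_ptr(NULL), cpu_env, off);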
1
4
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
accel/tcg/plugin-gen.c | 8 ++------
9
1 file changed, 2 insertions(+), 6 deletions(-)
10
11
diff --git a/accel/tcg/plugin-gen.c b/accel/tcg/plugin-gen.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/accel/tcg/plugin-gen.c
14
+++ b/accel/tcg/plugin-gen.c
15
@@ -XXX,XX +XXX,XX @@ static void inject_mem_disable_helper(struct qemu_plugin_insn *plugin_insn,
16
/* called before finishing a TB with exit_tb, goto_tb or goto_ptr */
17
void plugin_gen_disable_mem_helpers(void)
18
{
19
- TCGv_ptr ptr;
20
-
21
/*
22
* We could emit the clearing unconditionally and be done. However, this can
23
* be wasteful if for instance plugins don't track memory accesses, or if
24
@@ -XXX,XX +XXX,XX @@ void plugin_gen_disable_mem_helpers(void)
25
if (!tcg_ctx->plugin_tb->mem_helper) {
26
return;
27
}
28
- ptr = tcg_const_ptr(NULL);
29
- tcg_gen_st_ptr(ptr, cpu_env, offsetof(CPUState, plugin_mem_cbs) -
30
- offsetof(ArchCPU, env));
31
- tcg_temp_free_ptr(ptr);
32
+ tcg_gen_st_ptr(tcg_constant_ptr(NULL), cpu_env,
33
+ offsetof(CPUState, plugin_mem_cbs) - offsetof(ArchCPU, env));
34
}
35
36
static void plugin_gen_tb_udata(const struct qemu_plugin_tb *ptb,
37
--
38
2.34.1
39
40
1
No functional change, but the smaller expressions make
1
Reusing TEMP_TB interferes with detecting whether the
2
the code easier to read.
2
temp can be adjusted to TEMP_EBB.
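In outline, simplified from the tcg.c hunks below, the free list loses
its kind dimension and only TEMP_EBB temps are ever recycled:

    /* Before: one free list per (type, kind) pair. */
    k = type + (kind == TEMP_TB ? TCG_TYPE_COUNT : 0);
    idx = find_first_bit(s->free_temps[k].l, TCG_MAX_TEMPS);

    /* After: TEMP_TB temps are always freshly allocated; only
     * TEMP_EBB consults the per-type free list.
     */
    if (kind == TEMP_EBB) {
        idx = find_first_bit(s->free_temps[type].l, TCG_MAX_TEMPS);
    }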
3
3
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
6
---
9
accel/tcg/cputlb.c | 19 ++++++++++---------
7
include/tcg/tcg.h | 2 +-
10
1 file changed, 10 insertions(+), 9 deletions(-)
8
tcg/tcg.c | 101 ++++++++++++++++++++++++----------------------
9
2 files changed, 53 insertions(+), 50 deletions(-)
11
10
12
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
11
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
13
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
14
--- a/accel/tcg/cputlb.c
13
--- a/include/tcg/tcg.h
15
+++ b/accel/tcg/cputlb.c
14
+++ b/include/tcg/tcg.h
16
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
15
@@ -XXX,XX +XXX,XX @@ struct TCGContext {
17
16
#endif
18
static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
17
18
GHashTable *const_table[TCG_TYPE_COUNT];
19
- TCGTempSet free_temps[TCG_TYPE_COUNT * 2];
20
+ TCGTempSet free_temps[TCG_TYPE_COUNT];
21
TCGTemp temps[TCG_MAX_TEMPS]; /* globals first, temps after */
22
23
QTAILQ_HEAD(, TCGOp) ops, free_ops;
24
diff --git a/tcg/tcg.c b/tcg/tcg.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/tcg/tcg.c
27
+++ b/tcg/tcg.c
28
@@ -XXX,XX +XXX,XX @@ TCGTemp *tcg_global_mem_new_internal(TCGType type, TCGv_ptr base,
29
TCGTemp *tcg_temp_new_internal(TCGType type, TCGTempKind kind)
19
{
30
{
20
- tlb_mmu_resize_locked(&env_tlb(env)->d[mmu_idx], &env_tlb(env)->f[mmu_idx]);
31
TCGContext *s = tcg_ctx;
21
- env_tlb(env)->d[mmu_idx].n_used_entries = 0;
32
- bool temp_local = kind == TEMP_TB;
22
- env_tlb(env)->d[mmu_idx].large_page_addr = -1;
33
TCGTemp *ts;
23
- env_tlb(env)->d[mmu_idx].large_page_mask = -1;
34
- int idx, k;
24
- env_tlb(env)->d[mmu_idx].vindex = 0;
35
+ int n;
25
- memset(env_tlb(env)->f[mmu_idx].table, -1,
36
26
- sizeof_tlb(&env_tlb(env)->f[mmu_idx]));
37
- k = type + (temp_local ? TCG_TYPE_COUNT : 0);
27
- memset(env_tlb(env)->d[mmu_idx].vtable, -1,
38
- idx = find_first_bit(s->free_temps[k].l, TCG_MAX_TEMPS);
28
- sizeof(env_tlb(env)->d[0].vtable));
39
- if (idx < TCG_MAX_TEMPS) {
29
+ CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
40
- /* There is already an available temp with the right type. */
30
+ CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];
41
- clear_bit(idx, s->free_temps[k].l);
42
+ if (kind == TEMP_EBB) {
43
+ int idx = find_first_bit(s->free_temps[type].l, TCG_MAX_TEMPS);
44
45
- ts = &s->temps[idx];
46
- ts->temp_allocated = 1;
47
- tcg_debug_assert(ts->base_type == type);
48
- tcg_debug_assert(ts->kind == kind);
49
- } else {
50
- int i, n;
51
+ if (idx < TCG_MAX_TEMPS) {
52
+ /* There is already an available temp with the right type. */
53
+ clear_bit(idx, s->free_temps[type].l);
54
55
- switch (type) {
56
- case TCG_TYPE_I32:
57
- case TCG_TYPE_V64:
58
- case TCG_TYPE_V128:
59
- case TCG_TYPE_V256:
60
- n = 1;
61
- break;
62
- case TCG_TYPE_I64:
63
- n = 64 / TCG_TARGET_REG_BITS;
64
- break;
65
- case TCG_TYPE_I128:
66
- n = 128 / TCG_TARGET_REG_BITS;
67
- break;
68
- default:
69
- g_assert_not_reached();
70
+ ts = &s->temps[idx];
71
+ ts->temp_allocated = 1;
72
+ tcg_debug_assert(ts->base_type == type);
73
+ tcg_debug_assert(ts->kind == kind);
74
+ goto done;
75
}
76
+ } else {
77
+ tcg_debug_assert(kind == TEMP_TB);
78
+ }
79
80
- ts = tcg_temp_alloc(s);
81
- ts->base_type = type;
82
- ts->temp_allocated = 1;
83
- ts->kind = kind;
84
+ switch (type) {
85
+ case TCG_TYPE_I32:
86
+ case TCG_TYPE_V64:
87
+ case TCG_TYPE_V128:
88
+ case TCG_TYPE_V256:
89
+ n = 1;
90
+ break;
91
+ case TCG_TYPE_I64:
92
+ n = 64 / TCG_TARGET_REG_BITS;
93
+ break;
94
+ case TCG_TYPE_I128:
95
+ n = 128 / TCG_TARGET_REG_BITS;
96
+ break;
97
+ default:
98
+ g_assert_not_reached();
99
+ }
100
101
- if (n == 1) {
102
- ts->type = type;
103
- } else {
104
- ts->type = TCG_TYPE_REG;
105
+ ts = tcg_temp_alloc(s);
106
+ ts->base_type = type;
107
+ ts->temp_allocated = 1;
108
+ ts->kind = kind;
109
110
- for (i = 1; i < n; ++i) {
111
- TCGTemp *ts2 = tcg_temp_alloc(s);
112
+ if (n == 1) {
113
+ ts->type = type;
114
+ } else {
115
+ ts->type = TCG_TYPE_REG;
116
117
- tcg_debug_assert(ts2 == ts + i);
118
- ts2->base_type = type;
119
- ts2->type = TCG_TYPE_REG;
120
- ts2->temp_allocated = 1;
121
- ts2->temp_subindex = i;
122
- ts2->kind = kind;
123
- }
124
+ for (int i = 1; i < n; ++i) {
125
+ TCGTemp *ts2 = tcg_temp_alloc(s);
31
+
126
+
32
+ tlb_mmu_resize_locked(desc, fast);
127
+ tcg_debug_assert(ts2 == ts + i);
33
+ desc->n_used_entries = 0;
128
+ ts2->base_type = type;
34
+ desc->large_page_addr = -1;
129
+ ts2->type = TCG_TYPE_REG;
35
+ desc->large_page_mask = -1;
130
+ ts2->temp_allocated = 1;
36
+ desc->vindex = 0;
131
+ ts2->temp_subindex = i;
37
+ memset(fast->table, -1, sizeof_tlb(fast));
132
+ ts2->kind = kind;
38
+ memset(desc->vtable, -1, sizeof(desc->vtable));
133
}
134
}
135
136
+ done:
137
#if defined(CONFIG_DEBUG_TCG)
138
s->temps_in_use++;
139
#endif
140
@@ -XXX,XX +XXX,XX @@ TCGv_vec tcg_temp_new_vec_matching(TCGv_vec match)
141
void tcg_temp_free_internal(TCGTemp *ts)
142
{
143
TCGContext *s = tcg_ctx;
144
- int k, idx;
145
146
switch (ts->kind) {
147
case TEMP_CONST:
148
@@ -XXX,XX +XXX,XX @@ void tcg_temp_free_internal(TCGTemp *ts)
149
s->temps_in_use--;
150
#endif
151
152
- idx = temp_idx(ts);
153
- k = ts->base_type + (ts->kind == TEMP_EBB ? 0 : TCG_TYPE_COUNT);
154
- set_bit(idx, s->free_temps[k].l);
155
+ if (ts->kind == TEMP_EBB) {
156
+ int idx = temp_idx(ts);
157
+ set_bit(idx, s->free_temps[ts->base_type].l);
158
+ }
39
}
159
}
40
160
41
static inline void tlb_n_used_entries_inc(CPUArchState *env, uintptr_t mmu_idx)
161
TCGTemp *tcg_constant_internal(TCGType type, int64_t val)
42
--
162
--
43
2.20.1
163
2.34.1
44
164
45
165
1
In target/arm we will shortly have "too many" mmu_idx.
1
Guest front-ends now get temps that span the lifetime of
2
The current minimum barrier is caused by the way in which
2
the translation block by default, which avoids accidentally
3
tlb_flush_page_by_mmuidx is coded.
3
using the temp across branches and invalidating the data.
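A hedged illustration of the hazard this avoids; "flag" and "dest" are
hypothetical values that would exist in a real front-end:

    TCGv_i32 val = tcg_temp_new_i32();    /* now TEMP_TB by default */
    TCGLabel *over = gen_new_label();

    tcg_gen_movi_i32(val, 1);
    tcg_gen_brcondi_i32(TCG_COND_EQ, flag, 0, over);
    tcg_gen_addi_i32(val, val, 1);
    gen_set_label(over);
    /* A TEMP_EBB temp would be dead here, because the brcond ended
     * the extended basic block; a TEMP_TB temp stays valid to the
     * end of the translation block.
     */
    tcg_gen_mov_i32(dest, val);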
4
4
5
We can remove this limitation by allocating memory for
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
consumption by the worker. Let us assume that this is
7
the unlikely case, as it is for the majority
8
of targets which have so far satisfied the BUILD_BUG_ON,
9
and only allocate memory when necessary.
10
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
---
7
---
14
accel/tcg/cputlb.c | 167 +++++++++++++++++++++++++++++++++++----------
8
include/tcg/tcg.h | 8 ++++----
15
1 file changed, 132 insertions(+), 35 deletions(-)
9
1 file changed, 4 insertions(+), 4 deletions(-)
16
10
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
11
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
18
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
13
--- a/include/tcg/tcg.h
20
+++ b/accel/tcg/cputlb.c
14
+++ b/include/tcg/tcg.h
21
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_locked(CPUArchState *env, int midx,
15
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 tcg_temp_ebb_new_i32(void)
22
}
16
17
static inline TCGv_i32 tcg_temp_new_i32(void)
18
{
19
- TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_EBB);
20
+ TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_TB);
21
return temp_tcgv_i32(t);
23
}
22
}
24
23
25
-/* As we are going to hijack the bottom bits of the page address for a
24
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i64 tcg_temp_ebb_new_i64(void)
26
- * mmuidx bit mask we need to fail to build if we can't do that
25
27
+/**
26
static inline TCGv_i64 tcg_temp_new_i64(void)
28
+ * tlb_flush_page_by_mmuidx_async_0:
29
+ * @cpu: cpu on which to flush
30
+ * @addr: page of virtual address to flush
31
+ * @idxmap: set of mmu_idx to flush
32
+ *
33
+ * Helper for tlb_flush_page_by_mmuidx and friends, flush one page
34
+ * at @addr from the tlbs indicated by @idxmap from @cpu.
35
*/
36
-QEMU_BUILD_BUG_ON(NB_MMU_MODES > TARGET_PAGE_BITS_MIN);
37
-
38
-static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
39
- run_on_cpu_data data)
40
+static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
41
+ target_ulong addr,
42
+ uint16_t idxmap)
43
{
27
{
44
CPUArchState *env = cpu->env_ptr;
28
- TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_EBB);
45
- target_ulong addr_and_mmuidx = (target_ulong) data.target_ptr;
29
+ TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_TB);
46
- target_ulong addr = addr_and_mmuidx & TARGET_PAGE_MASK;
30
return temp_tcgv_i64(t);
47
- unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
48
int mmu_idx;
49
50
assert_cpu_is_self(cpu);
51
52
- tlb_debug("page addr:" TARGET_FMT_lx " mmu_map:0x%lx\n",
53
- addr, mmu_idx_bitmap);
54
+ tlb_debug("page addr:" TARGET_FMT_lx " mmu_map:0x%x\n", addr, idxmap);
55
56
qemu_spin_lock(&env_tlb(env)->c.lock);
57
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
58
- if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
59
+ if ((idxmap >> mmu_idx) & 1) {
60
tlb_flush_page_locked(env, mmu_idx, addr);
61
}
62
}
63
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
64
tb_flush_jmp_cache(cpu, addr);
65
}
31
}
66
32
67
+/**
33
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i128 tcg_temp_ebb_new_i128(void)
68
+ * tlb_flush_page_by_mmuidx_async_1:
34
69
+ * @cpu: cpu on which to flush
35
static inline TCGv_i128 tcg_temp_new_i128(void)
70
+ * @data: encoded addr + idxmap
71
+ *
72
+ * Helper for tlb_flush_page_by_mmuidx and friends, called through
73
+ * async_run_on_cpu. The idxmap parameter is encoded in the page
74
+ * offset of the target_ptr field. This limits the set of mmu_idx
75
+ * that can be passed via this method.
76
+ */
77
+static void tlb_flush_page_by_mmuidx_async_1(CPUState *cpu,
78
+ run_on_cpu_data data)
79
+{
80
+ target_ulong addr_and_idxmap = (target_ulong) data.target_ptr;
81
+ target_ulong addr = addr_and_idxmap & TARGET_PAGE_MASK;
82
+ uint16_t idxmap = addr_and_idxmap & ~TARGET_PAGE_MASK;
83
+
84
+ tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
85
+}
86
+
87
+typedef struct {
88
+ target_ulong addr;
89
+ uint16_t idxmap;
90
+} TLBFlushPageByMMUIdxData;
91
+
92
+/**
93
+ * tlb_flush_page_by_mmuidx_async_2:
94
+ * @cpu: cpu on which to flush
95
+ * @data: allocated addr + idxmap
96
+ *
97
+ * Helper for tlb_flush_page_by_mmuidx and friends, called through
98
+ * async_run_on_cpu. The addr+idxmap parameters are stored in a
99
+ * TLBFlushPageByMMUIdxData structure that has been allocated
100
+ * specifically for this helper. Free the structure when done.
101
+ */
102
+static void tlb_flush_page_by_mmuidx_async_2(CPUState *cpu,
103
+ run_on_cpu_data data)
104
+{
105
+ TLBFlushPageByMMUIdxData *d = data.host_ptr;
106
+
107
+ tlb_flush_page_by_mmuidx_async_0(cpu, d->addr, d->idxmap);
108
+ g_free(d);
109
+}
110
+
111
void tlb_flush_page_by_mmuidx(CPUState *cpu, target_ulong addr, uint16_t idxmap)
112
{
36
{
113
- target_ulong addr_and_mmu_idx;
37
- TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_EBB);
114
-
38
+ TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_TB);
115
tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%" PRIx16 "\n", addr, idxmap);
39
return temp_tcgv_i128(t);
116
117
/* This should already be page aligned */
118
- addr_and_mmu_idx = addr & TARGET_PAGE_MASK;
119
- addr_and_mmu_idx |= idxmap;
120
+ addr &= TARGET_PAGE_MASK;
121
122
- if (!qemu_cpu_is_self(cpu)) {
123
- async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_work,
124
- RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
125
+ if (qemu_cpu_is_self(cpu)) {
126
+ tlb_flush_page_by_mmuidx_async_0(cpu, addr, idxmap);
127
+ } else if (idxmap < TARGET_PAGE_SIZE) {
128
+ /*
129
+ * Most targets have only a few mmu_idx. In the case where
130
+ * we can stuff idxmap into the low TARGET_PAGE_BITS, avoid
131
+ * allocating memory for this operation.
132
+ */
133
+ async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_1,
134
+ RUN_ON_CPU_TARGET_PTR(addr | idxmap));
135
} else {
136
- tlb_flush_page_by_mmuidx_async_work(
137
- cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
138
+ TLBFlushPageByMMUIdxData *d = g_new(TLBFlushPageByMMUIdxData, 1);
139
+
140
+ /* Otherwise allocate a structure, freed by the worker. */
141
+ d->addr = addr;
142
+ d->idxmap = idxmap;
143
+ async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_2,
144
+ RUN_ON_CPU_HOST_PTR(d));
145
}
146
}
40
}
147
41
148
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page(CPUState *cpu, target_ulong addr)
42
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr tcg_temp_ebb_new_ptr(void)
149
void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, target_ulong addr,
43
150
uint16_t idxmap)
44
static inline TCGv_ptr tcg_temp_new_ptr(void)
151
{
45
{
152
- const run_on_cpu_func fn = tlb_flush_page_by_mmuidx_async_work;
46
- TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_EBB);
153
- target_ulong addr_and_mmu_idx;
47
+ TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_TB);
154
-
48
return temp_tcgv_ptr(t);
155
tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%"PRIx16"\n", addr, idxmap);
156
157
/* This should already be page aligned */
158
- addr_and_mmu_idx = addr & TARGET_PAGE_MASK;
159
- addr_and_mmu_idx |= idxmap;
160
+ addr &= TARGET_PAGE_MASK;
161
162
- flush_all_helper(src_cpu, fn, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
163
- fn(src_cpu, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
164
+ /*
165
+ * Allocate memory to hold addr+idxmap only when needed.
166
+ * See tlb_flush_page_by_mmuidx for details.
167
+ */
168
+ if (idxmap < TARGET_PAGE_SIZE) {
169
+ flush_all_helper(src_cpu, tlb_flush_page_by_mmuidx_async_1,
170
+ RUN_ON_CPU_TARGET_PTR(addr | idxmap));
171
+ } else {
172
+ CPUState *dst_cpu;
173
+
174
+ /* Allocate a separate data block for each destination cpu. */
175
+ CPU_FOREACH(dst_cpu) {
176
+ if (dst_cpu != src_cpu) {
177
+ TLBFlushPageByMMUIdxData *d
178
+ = g_new(TLBFlushPageByMMUIdxData, 1);
179
+
180
+ d->addr = addr;
181
+ d->idxmap = idxmap;
182
+ async_run_on_cpu(dst_cpu, tlb_flush_page_by_mmuidx_async_2,
183
+ RUN_ON_CPU_HOST_PTR(d));
184
+ }
185
+ }
186
+ }
187
+
188
+ tlb_flush_page_by_mmuidx_async_0(src_cpu, addr, idxmap);
189
}
49
}
190
50
191
void tlb_flush_page_all_cpus(CPUState *src, target_ulong addr)
192
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
193
target_ulong addr,
194
uint16_t idxmap)
195
{
196
- const run_on_cpu_func fn = tlb_flush_page_by_mmuidx_async_work;
197
- target_ulong addr_and_mmu_idx;
198
-
199
tlb_debug("addr: "TARGET_FMT_lx" mmu_idx:%"PRIx16"\n", addr, idxmap);
200
201
/* This should already be page aligned */
202
- addr_and_mmu_idx = addr & TARGET_PAGE_MASK;
203
- addr_and_mmu_idx |= idxmap;
204
+ addr &= TARGET_PAGE_MASK;
205
206
- flush_all_helper(src_cpu, fn, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
207
- async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_TARGET_PTR(addr_and_mmu_idx));
208
+ /*
209
+ * Allocate memory to hold addr+idxmap only when needed.
210
+ * See tlb_flush_page_by_mmuidx for details.
211
+ */
212
+ if (idxmap < TARGET_PAGE_SIZE) {
213
+ flush_all_helper(src_cpu, tlb_flush_page_by_mmuidx_async_1,
214
+ RUN_ON_CPU_TARGET_PTR(addr | idxmap));
215
+ async_safe_run_on_cpu(src_cpu, tlb_flush_page_by_mmuidx_async_1,
216
+ RUN_ON_CPU_TARGET_PTR(addr | idxmap));
217
+ } else {
218
+ CPUState *dst_cpu;
219
+ TLBFlushPageByMMUIdxData *d;
220
+
221
+ /* Allocate a separate data block for each destination cpu. */
222
+ CPU_FOREACH(dst_cpu) {
223
+ if (dst_cpu != src_cpu) {
224
+ d = g_new(TLBFlushPageByMMUIdxData, 1);
225
+ d->addr = addr;
226
+ d->idxmap = idxmap;
227
+ async_run_on_cpu(dst_cpu, tlb_flush_page_by_mmuidx_async_2,
228
+ RUN_ON_CPU_HOST_PTR(d));
229
+ }
230
+ }
231
+
232
+ d = g_new(TLBFlushPageByMMUIdxData, 1);
233
+ d->addr = addr;
234
+ d->idxmap = idxmap;
235
+ async_safe_run_on_cpu(src_cpu, tlb_flush_page_by_mmuidx_async_2,
236
+ RUN_ON_CPU_HOST_PTR(d));
237
+ }
238
}
239
240
void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
241
--
51
--
242
2.20.1
52
2.34.1
243
53
244
54
New patch
1
Since we now get TEMP_TB temporaries by default, we no longer
2
need to make copies across these loops. These were the only
3
uses of new_tmp_a64_local(), so remove that as well.
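Roughly, the pattern that disappears (a sketch using the names from
the gen_sve_ldr/gen_sve_str hunks below):

    /* Before: copy the address into a local temp so that it
     * survives the loop's conditional branch.
     */
    TCGv_i64 t0 = clean_addr;
    clean_addr = new_tmp_a64_local(s);
    tcg_gen_mov_i64(clean_addr, t0);
    gen_set_label(loop);
    /* ... loop body ... */
    tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);

    /* After: clean_addr is already a TEMP_TB temp, so it stays live
     * across the branch back to "loop" and the copy can be dropped.
     */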
1
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
target/arm/tcg/translate-a64.h | 1 -
9
target/arm/tcg/translate-a64.c | 6 ------
10
target/arm/tcg/translate-sve.c | 32 --------------------------------
11
3 files changed, 39 deletions(-)
12
13
diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/tcg/translate-a64.h
16
+++ b/target/arm/tcg/translate-a64.h
17
@@ -XXX,XX +XXX,XX @@
18
#define TARGET_ARM_TRANSLATE_A64_H
19
20
TCGv_i64 new_tmp_a64(DisasContext *s);
21
-TCGv_i64 new_tmp_a64_local(DisasContext *s);
22
TCGv_i64 new_tmp_a64_zero(DisasContext *s);
23
TCGv_i64 cpu_reg(DisasContext *s, int reg);
24
TCGv_i64 cpu_reg_sp(DisasContext *s, int reg);
25
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/tcg/translate-a64.c
28
+++ b/target/arm/tcg/translate-a64.c
29
@@ -XXX,XX +XXX,XX @@ TCGv_i64 new_tmp_a64(DisasContext *s)
30
return s->tmp_a64[s->tmp_a64_count++] = tcg_temp_new_i64();
31
}
32
33
-TCGv_i64 new_tmp_a64_local(DisasContext *s)
34
-{
35
- assert(s->tmp_a64_count < TMP_A64_MAX);
36
- return s->tmp_a64[s->tmp_a64_count++] = tcg_temp_local_new_i64();
37
-}
38
-
39
TCGv_i64 new_tmp_a64_zero(DisasContext *s)
40
{
41
TCGv_i64 t = new_tmp_a64(s);
42
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/tcg/translate-sve.c
45
+++ b/target/arm/tcg/translate-sve.c
46
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
47
TCGLabel *loop = gen_new_label();
48
TCGv_ptr tp, i = tcg_const_local_ptr(0);
49
50
- /* Copy the clean address into a local temp, live across the loop. */
51
- t0 = clean_addr;
52
- clean_addr = new_tmp_a64_local(s);
53
- tcg_gen_mov_i64(clean_addr, t0);
54
-
55
- if (base != cpu_env) {
56
- TCGv_ptr b = tcg_temp_local_new_ptr();
57
- tcg_gen_mov_ptr(b, base);
58
- base = b;
59
- }
60
-
61
gen_set_label(loop);
62
63
t0 = tcg_temp_new_i64();
64
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
65
66
tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
67
tcg_temp_free_ptr(i);
68
-
69
- if (base != cpu_env) {
70
- tcg_temp_free_ptr(base);
71
- assert(len_remain == 0);
72
- }
73
}
74
75
/*
76
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
77
TCGLabel *loop = gen_new_label();
78
TCGv_ptr tp, i = tcg_const_local_ptr(0);
79
80
- /* Copy the clean address into a local temp, live across the loop. */
81
- t0 = clean_addr;
82
- clean_addr = new_tmp_a64_local(s);
83
- tcg_gen_mov_i64(clean_addr, t0);
84
-
85
- if (base != cpu_env) {
86
- TCGv_ptr b = tcg_temp_local_new_ptr();
87
- tcg_gen_mov_ptr(b, base);
88
- base = b;
89
- }
90
-
91
gen_set_label(loop);
92
93
t0 = tcg_temp_new_i64();
94
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
95
96
tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
97
tcg_temp_free_ptr(i);
98
-
99
- if (base != cpu_env) {
100
- tcg_temp_free_ptr(base);
101
- assert(len_remain == 0);
102
- }
103
}
104
105
/* Predicate register stores can be any multiple of 2. */
106
--
107
2.34.1
New patch
1
Since tcg_temp_new_* is now identical, use those.
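The mechanical shape of the change, as a sketch:

    TCGv_i32 addr = tcg_temp_local_new_i32();   /* before */
    TCGv_i32 addr = tcg_temp_new_i32();         /* after: same kind */

Both now allocate a TEMP_TB temporary, so the shorter spelling is
preferred.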
1
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
---
6
target/arm/tcg/translate-sve.c | 6 +++---
7
target/arm/tcg/translate.c | 6 +++---
8
2 files changed, 6 insertions(+), 6 deletions(-)
9
10
diff --git a/target/arm/tcg/translate-sve.c b/target/arm/tcg/translate-sve.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/tcg/translate-sve.c
13
+++ b/target/arm/tcg/translate-sve.c
14
@@ -XXX,XX +XXX,XX @@ static bool do_clast_vector(DisasContext *s, arg_rprr_esz *a, bool before)
15
return true;
16
}
17
18
- last = tcg_temp_local_new_i32();
19
+ last = tcg_temp_new_i32();
20
over = gen_new_label();
21
22
find_last_active(s, last, esz, a->pg);
23
@@ -XXX,XX +XXX,XX @@ void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
24
tcg_temp_free_i64(t0);
25
} else {
26
TCGLabel *loop = gen_new_label();
27
- TCGv_ptr tp, i = tcg_const_local_ptr(0);
28
+ TCGv_ptr tp, i = tcg_const_ptr(0);
29
30
gen_set_label(loop);
31
32
@@ -XXX,XX +XXX,XX @@ void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
33
tcg_temp_free_i64(t0);
34
} else {
35
TCGLabel *loop = gen_new_label();
36
- TCGv_ptr tp, i = tcg_const_local_ptr(0);
37
+ TCGv_ptr tp, i = tcg_const_ptr(0);
38
39
gen_set_label(loop);
40
41
diff --git a/target/arm/tcg/translate.c b/target/arm/tcg/translate.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/tcg/translate.c
44
+++ b/target/arm/tcg/translate.c
45
@@ -XXX,XX +XXX,XX @@ static bool op_strex(DisasContext *s, arg_STREX *a, MemOp mop, bool rel)
46
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
47
}
48
49
- addr = tcg_temp_local_new_i32();
50
+ addr = tcg_temp_new_i32();
51
load_reg_var(s, addr, a->rn);
52
tcg_gen_addi_i32(addr, addr, a->imm);
53
54
@@ -XXX,XX +XXX,XX @@ static bool op_ldrex(DisasContext *s, arg_LDREX *a, MemOp mop, bool acq)
55
return true;
56
}
57
58
- addr = tcg_temp_local_new_i32();
59
+ addr = tcg_temp_new_i32();
60
load_reg_var(s, addr, a->rn);
61
tcg_gen_addi_i32(addr, addr, a->imm);
62
63
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
64
* Decrement by 1 << (4 - LTPSIZE). We need to use a TCG local
65
* so that decr stays live after the brcondi.
66
*/
67
- TCGv_i32 decr = tcg_temp_local_new_i32();
68
+ TCGv_i32 decr = tcg_temp_new_i32();
69
TCGv_i32 ltpsize = load_cpu_field(v7m.ltpsize);
70
tcg_gen_sub_i32(decr, tcg_constant_i32(4), ltpsize);
71
tcg_gen_shl_i32(decr, tcg_constant_i32(1), decr);
72
--
73
2.34.1
74
75
New patch
1
Since tcg_temp_new is now identical, use that.
1
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
---
6
target/cris/translate.c | 6 +++---
7
target/cris/translate_v10.c.inc | 10 +++++-----
8
2 files changed, 8 insertions(+), 8 deletions(-)
9
10
diff --git a/target/cris/translate.c b/target/cris/translate.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/cris/translate.c
13
+++ b/target/cris/translate.c
14
@@ -XXX,XX +XXX,XX @@ static int dec_bound_r(CPUCRISState *env, DisasContext *dc)
15
LOG_DIS("bound.%c $r%u, $r%u\n",
16
memsize_char(size), dc->op1, dc->op2);
17
cris_cc_mask(dc, CC_MASK_NZ);
18
- l0 = tcg_temp_local_new();
19
+ l0 = tcg_temp_new();
20
dec_prep_move_r(dc, dc->op1, dc->op2, size, 0, l0);
21
cris_alu(dc, CC_OP_BOUND, cpu_R[dc->op2], cpu_R[dc->op2], l0, 4);
22
tcg_temp_free(l0);
23
@@ -XXX,XX +XXX,XX @@ static int dec_bound_m(CPUCRISState *env, DisasContext *dc)
24
dc->op1, dc->postinc ? "+]" : "]",
25
dc->op2);
26
27
- l[0] = tcg_temp_local_new();
28
- l[1] = tcg_temp_local_new();
29
+ l[0] = tcg_temp_new();
30
+ l[1] = tcg_temp_new();
31
insn_len = dec_prep_alu_m(env, dc, 0, memsize, l[0], l[1]);
32
cris_cc_mask(dc, CC_MASK_NZ);
33
cris_alu(dc, CC_OP_BOUND, cpu_R[dc->op2], l[0], l[1], 4);
34
diff --git a/target/cris/translate_v10.c.inc b/target/cris/translate_v10.c.inc
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/cris/translate_v10.c.inc
37
+++ b/target/cris/translate_v10.c.inc
38
@@ -XXX,XX +XXX,XX @@ static void gen_store_v10_conditional(DisasContext *dc, TCGv addr, TCGv val,
39
unsigned int size, int mem_index)
40
{
41
TCGLabel *l1 = gen_new_label();
42
- TCGv taddr = tcg_temp_local_new();
43
- TCGv tval = tcg_temp_local_new();
44
- TCGv t1 = tcg_temp_local_new();
45
+ TCGv taddr = tcg_temp_new();
46
+ TCGv tval = tcg_temp_new();
47
+ TCGv t1 = tcg_temp_new();
48
dc->postinc = 0;
49
cris_evaluate_flags(dc);
50
51
@@ -XXX,XX +XXX,XX @@ static void dec10_reg_bound(DisasContext *dc, int size)
52
{
53
TCGv t;
54
55
- t = tcg_temp_local_new();
56
+ t = tcg_temp_new();
57
t_gen_zext(t, cpu_R[dc->src], size);
58
cris_alu(dc, CC_OP_BOUND, cpu_R[dc->dst], cpu_R[dc->dst], t, 4);
59
tcg_temp_free(t);
60
@@ -XXX,XX +XXX,XX @@ static int dec10_ind_bound(CPUCRISState *env, DisasContext *dc,
61
int rd = dc->dst;
62
TCGv t;
63
64
- t = tcg_temp_local_new();
65
+ t = tcg_temp_new();
66
insn_len += dec10_prep_move_m(env, dc, 0, size, t);
67
cris_alu(dc, CC_OP_BOUND, cpu_R[dc->dst], cpu_R[rd], t, 4);
68
if (dc->dst == 15) {
69
--
70
2.34.1
71
72
Since tcg_temp_new_* is now identical, use those.

Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/hexagon/idef-parser/README.rst       |  4 ++--
 target/hexagon/gen_tcg.h                    |  4 ++--
 target/hexagon/genptr.c                     | 16 ++++++++--------
 target/hexagon/idef-parser/parser-helpers.c |  4 ++--
 target/hexagon/translate.c                  |  2 +-
 target/hexagon/README                       |  8 ++++----
 target/hexagon/gen_tcg_funcs.py             | 18 +++++++-----------
 7 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/target/hexagon/idef-parser/README.rst b/target/hexagon/idef-parser/README.rst
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/idef-parser/README.rst
+++ b/target/hexagon/idef-parser/README.rst
@@ -XXX,XX +XXX,XX @@ generators the previous declarations are mapped to

 ::

-   int var1;          -> TCGv_i32 var1 = tcg_temp_local_new_i32();
+   int var1;          -> TCGv_i32 var1 = tcg_temp_new_i32();

-   int var2 = 0;      -> TCGv_i32 var1 = tcg_temp_local_new_i32();
+   int var2 = 0;      -> TCGv_i32 var1 = tcg_temp_new_i32();
                          tcg_gen_movi_i32(j, ((int64_t) 0ULL));

 which are later automatically freed at the end of the function they're declared
diff --git a/target/hexagon/gen_tcg.h b/target/hexagon/gen_tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/gen_tcg.h
+++ b/target/hexagon/gen_tcg.h
@@ -XXX,XX +XXX,XX @@
  */
 #define fGEN_TCG_PRED_LOAD(GET_EA, PRED, SIZE, SIGN) \
     do { \
-        TCGv LSB = tcg_temp_local_new(); \
+        TCGv LSB = tcg_temp_new(); \
         TCGLabel *label = gen_new_label(); \
         tcg_gen_movi_tl(EA, 0); \
         PRED; \
@@ -XXX,XX +XXX,XX @@
 /* Predicated loads into a register pair */
 #define fGEN_TCG_PRED_LOAD_PAIR(GET_EA, PRED) \
     do { \
-        TCGv LSB = tcg_temp_local_new(); \
+        TCGv LSB = tcg_temp_new(); \
         TCGLabel *label = gen_new_label(); \
         tcg_gen_movi_tl(EA, 0); \
         PRED; \
diff --git a/target/hexagon/genptr.c b/target/hexagon/genptr.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/genptr.c
+++ b/target/hexagon/genptr.c
@@ -XXX,XX +XXX,XX @@ static void gen_cond_call(DisasContext *ctx, TCGv pred,
                           TCGCond cond, int pc_off)
 {
     TCGv next_PC;
-    TCGv lsb = tcg_temp_local_new();
+    TCGv lsb = tcg_temp_new();
     TCGLabel *skip = gen_new_label();
     tcg_gen_andi_tl(lsb, pred, 1);
     gen_write_new_pc_pcrel(ctx, pc_off, cond, lsb);
@@ -XXX,XX +XXX,XX @@ static void gen_cond_call(DisasContext *ctx, TCGv pred,

 static void gen_endloop0(DisasContext *ctx)
 {
-    TCGv lpcfg = tcg_temp_local_new();
+    TCGv lpcfg = tcg_temp_new();

     GET_USR_FIELD(USR_LPCFG, lpcfg);

@@ -XXX,XX +XXX,XX @@ static void gen_sar(TCGv dst, TCGv src, TCGv shift_amt)
 /* Bidirectional shift right with saturation */
 static void gen_asr_r_r_sat(TCGv RdV, TCGv RsV, TCGv RtV)
 {
-    TCGv shift_amt = tcg_temp_local_new();
+    TCGv shift_amt = tcg_temp_new();
     TCGLabel *positive = gen_new_label();
     TCGLabel *done = gen_new_label();

@@ -XXX,XX +XXX,XX @@ static void gen_asr_r_r_sat(TCGv RdV, TCGv RsV, TCGv RtV)
 /* Bidirectional shift left with saturation */
 static void gen_asl_r_r_sat(TCGv RdV, TCGv RsV, TCGv RtV)
 {
-    TCGv shift_amt = tcg_temp_local_new();
+    TCGv shift_amt = tcg_temp_new();
     TCGLabel *positive = gen_new_label();
     TCGLabel *done = gen_new_label();

@@ -XXX,XX +XXX,XX @@ static void gen_log_vreg_write(DisasContext *ctx, intptr_t srcoff, int num,
     intptr_t dstoff;

     if (is_predicated) {
-        TCGv cancelled = tcg_temp_local_new();
+        TCGv cancelled = tcg_temp_new();
         label_end = gen_new_label();

         /* Don't do anything if the slot was cancelled */
@@ -XXX,XX +XXX,XX @@ static void gen_log_qreg_write(intptr_t srcoff, int num, int vnew,
     intptr_t dstoff;

     if (is_predicated) {
-        TCGv cancelled = tcg_temp_local_new();
+        TCGv cancelled = tcg_temp_new();
         label_end = gen_new_label();

         /* Don't do anything if the slot was cancelled */
@@ -XXX,XX +XXX,XX @@ void gen_satu_i64_ovfl(TCGv ovfl, TCGv_i64 dest, TCGv_i64 source, int width)
 /* Implements the fADDSAT64 macro in TCG */
 void gen_add_sat_i64(TCGv_i64 ret, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 sum = tcg_temp_local_new_i64();
+    TCGv_i64 sum = tcg_temp_new_i64();
     TCGv_i64 xor = tcg_temp_new_i64();
     TCGv_i64 cond1 = tcg_temp_new_i64();
-    TCGv_i64 cond2 = tcg_temp_local_new_i64();
+    TCGv_i64 cond2 = tcg_temp_new_i64();
     TCGv_i64 cond3 = tcg_temp_new_i64();
     TCGv_i64 mask = tcg_constant_i64(0x8000000000000000ULL);
     TCGv_i64 max_pos = tcg_constant_i64(0x7FFFFFFFFFFFFFFFLL);
diff --git a/target/hexagon/idef-parser/parser-helpers.c b/target/hexagon/idef-parser/parser-helpers.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/idef-parser/parser-helpers.c
+++ b/target/hexagon/idef-parser/parser-helpers.c
@@ -XXX,XX +XXX,XX @@ HexValue gen_tmp_local(Context *c,
     rvalue.is_manual = false;
     rvalue.tmp.index = c->inst.tmp_count;
     OUT(c, locp, "TCGv_i", &bit_width, " tmp_", &c->inst.tmp_count,
-        " = tcg_temp_local_new_i", &bit_width, "();\n");
+        " = tcg_temp_new_i", &bit_width, "();\n");
     c->inst.tmp_count++;
     return rvalue;
 }
@@ -XXX,XX +XXX,XX @@ void gen_varid_allocate(Context *c,
     new_var.signedness = signedness;

     EMIT_HEAD(c, "TCGv_%s %s", bit_suffix, varid->var.name->str);
-    EMIT_HEAD(c, " = tcg_temp_local_new_%s();\n", bit_suffix);
+    EMIT_HEAD(c, " = tcg_temp_new_%s();\n", bit_suffix);
     g_array_append_val(c->inst.allocated, new_var);
 }

diff --git a/target/hexagon/translate.c b/target/hexagon/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/translate.c
+++ b/target/hexagon/translate.c
@@ -XXX,XX +XXX,XX @@ void process_store(DisasContext *ctx, int slot_num)
         tcg_temp_free(cancelled);
     }
     {
-        TCGv address = tcg_temp_local_new();
+        TCGv address = tcg_temp_new();
         tcg_gen_mov_tl(address, hex_store_addr[slot_num]);

         /*
diff --git a/target/hexagon/README b/target/hexagon/README
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/README
+++ b/target/hexagon/README
@@ -XXX,XX +XXX,XX @@ tcg_funcs_generated.c.inc
                                    Insn *insn,
                                    Packet *pkt)
     {
-        TCGv RdV = tcg_temp_local_new();
+        TCGv RdV = tcg_temp_new();
         const int RdN = insn->regno[0];
         TCGv RsV = hex_gpr[insn->regno[1]];
         TCGv RtV = hex_gpr[insn->regno[2]];
@@ -XXX,XX +XXX,XX @@ istruction.
         const int VdN = insn->regno[0];
         const intptr_t VdV_off =
             ctx_future_vreg_off(ctx, VdN, 1, true);
-        TCGv_ptr VdV = tcg_temp_local_new_ptr();
+        TCGv_ptr VdV = tcg_temp_new_ptr();
         tcg_gen_addi_ptr(VdV, cpu_env, VdV_off);
         const int VuN = insn->regno[1];
         const intptr_t VuV_off =
             vreg_src_off(ctx, VuN);
-        TCGv_ptr VuV = tcg_temp_local_new_ptr();
+        TCGv_ptr VuV = tcg_temp_new_ptr();
         const int VvN = insn->regno[2];
         const intptr_t VvV_off =
             vreg_src_off(ctx, VvN);
-        TCGv_ptr VvV = tcg_temp_local_new_ptr();
+        TCGv_ptr VvV = tcg_temp_new_ptr();
         tcg_gen_addi_ptr(VuV, cpu_env, VuV_off);
         tcg_gen_addi_ptr(VvV, cpu_env, VvV_off);
         TCGv slot = tcg_constant_tl(insn->slot);
diff --git a/target/hexagon/gen_tcg_funcs.py b/target/hexagon/gen_tcg_funcs.py
index XXXXXXX..XXXXXXX 100755
--- a/target/hexagon/gen_tcg_funcs.py
+++ b/target/hexagon/gen_tcg_funcs.py
@@ -XXX,XX +XXX,XX @@
 ## Helpers for gen_tcg_func
 ##
 def gen_decl_ea_tcg(f, tag):
-    if ('A_CONDEXEC' in hex_common.attribdict[tag] or
-        'A_LOAD' in hex_common.attribdict[tag]):
-        f.write("    TCGv EA = tcg_temp_local_new();\n")
-    else:
-        f.write("    TCGv EA = tcg_temp_new();\n")
+    f.write("    TCGv EA = tcg_temp_new();\n")

 def gen_free_ea_tcg(f):
     f.write("    tcg_temp_free(EA);\n")

 def genptr_decl_pair_writable(f, tag, regtype, regid, regno):
     regN="%s%sN" % (regtype,regid)
-    f.write("    TCGv_i64 %s%sV = tcg_temp_local_new_i64();\n" % \
+    f.write("    TCGv_i64 %s%sV = tcg_temp_new_i64();\n" % \
         (regtype, regid))
     if (regtype == "C"):
         f.write("    const int %s = insn->regno[%d] + HEX_REG_SA0;\n" % \
@@ -XXX,XX +XXX,XX @@ def genptr_decl_pair_writable(f, tag, regtype, regid, regno):

 def genptr_decl_writable(f, tag, regtype, regid, regno):
     regN="%s%sN" % (regtype,regid)
-    f.write("    TCGv %s%sV = tcg_temp_local_new();\n" % \
+    f.write("    TCGv %s%sV = tcg_temp_new();\n" % \
         (regtype, regid))
     if (regtype == "C"):
         f.write("    const int %s = insn->regno[%d] + HEX_REG_SA0;\n" % \
@@ -XXX,XX +XXX,XX @@ def genptr_decl(f, tag, regtype, regid, regno):
     regN="%s%sN" % (regtype,regid)
     if (regtype == "R"):
         if (regid in {"ss", "tt"}):
-            f.write("    TCGv_i64 %s%sV = tcg_temp_local_new_i64();\n" % \
+            f.write("    TCGv_i64 %s%sV = tcg_temp_new_i64();\n" % \
                 (regtype, regid))
             f.write("    const int %s = insn->regno[%d];\n" % \
                 (regN, regno))
@@ -XXX,XX +XXX,XX @@ def genptr_decl(f, tag, regtype, regid, regno):
             print("Bad register parse: ", regtype, regid)
     elif (regtype == "C"):
         if (regid == "ss"):
-            f.write("    TCGv_i64 %s%sV = tcg_temp_local_new_i64();\n" % \
+            f.write("    TCGv_i64 %s%sV = tcg_temp_new_i64();\n" % \
                 (regtype, regid))
             f.write("    const int %s = insn->regno[%d] + HEX_REG_SA0;\n" % \
                 (regN, regno))
         elif (regid == "dd"):
             genptr_decl_pair_writable(f, tag, regtype, regid, regno)
         elif (regid == "s"):
-            f.write("    TCGv %s%sV = tcg_temp_local_new();\n" % \
+            f.write("    TCGv %s%sV = tcg_temp_new();\n" % \
                 (regtype, regid))
             f.write("    const int %s%sN = insn->regno[%d] + HEX_REG_SA0;\n" % \
                 (regtype, regid, regno))
@@ -XXX,XX +XXX,XX @@ def genptr_dst_write_opn(f,regtype, regid, tag):
 ## We produce:
 ##    static void generate_A2_add(DisasContext *ctx)
 ##    {
-##        TCGv RdV = tcg_temp_local_new();
+##        TCGv RdV = tcg_temp_new();
 ##        const int RdN = insn->regno[0];
 ##        TCGv RsV = hex_gpr[insn->regno[1]];
 ##        TCGv RtV = hex_gpr[insn->regno[2]];
--
2.34.1

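In gen_add_sat_i64 above, only 'sum' and 'cond2' were local temps, because only they are consumed after a conditional branch. A compact way to see why the lifetime matters is an unsigned saturating add (a simplified analogue, not the Hexagon code; fADDSAT64 itself is signed and more involved):

    static void gen_addu_sat_i64_sketch(TCGv_i64 ret, TCGv_i64 a, TCGv_i64 b)
    {
        TCGv_i64 sum = tcg_temp_new_i64();   /* read after the branch */
        TCGLabel *done = gen_new_label();

        tcg_gen_add_i64(sum, a, b);
        /* unsigned overflow iff sum < a; clamp to all-ones */
        tcg_gen_brcond_i64(TCG_COND_GEU, sum, a, done);
        tcg_gen_movi_i64(sum, -1);
        gen_set_label(done);
        tcg_gen_mov_i64(ret, sum);
        tcg_temp_free_i64(sum);
    }

With tb-lifetime temporaries, 'sum' no longer needs any special allocator despite being read after gen_set_label().
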
This is now equivalent to gen_tmp.

Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/hexagon/idef-parser/parser-helpers.c | 24 ++-------------------
 1 file changed, 2 insertions(+), 22 deletions(-)

diff --git a/target/hexagon/idef-parser/parser-helpers.c b/target/hexagon/idef-parser/parser-helpers.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/idef-parser/parser-helpers.c
+++ b/target/hexagon/idef-parser/parser-helpers.c
@@ -XXX,XX +XXX,XX @@ HexValue gen_tmp(Context *c,
     return rvalue;
 }

-HexValue gen_tmp_local(Context *c,
-                       YYLTYPE *locp,
-                       unsigned bit_width,
-                       HexSignedness signedness)
-{
-    HexValue rvalue;
-    assert(bit_width == 32 || bit_width == 64);
-    memset(&rvalue, 0, sizeof(HexValue));
-    rvalue.type = TEMP;
-    rvalue.bit_width = bit_width;
-    rvalue.signedness = signedness;
-    rvalue.is_dotnew = false;
-    rvalue.is_manual = false;
-    rvalue.tmp.index = c->inst.tmp_count;
-    OUT(c, locp, "TCGv_i", &bit_width, " tmp_", &c->inst.tmp_count,
-        " = tcg_temp_new_i", &bit_width, "();\n");
-    c->inst.tmp_count++;
-    return rvalue;
-}
-
 HexValue gen_tmp_value(Context *c,
                        YYLTYPE *locp,
                        const char *value,
@@ -XXX,XX +XXX,XX @@ HexValue gen_rvalue_sat(Context *c, YYLTYPE *locp, HexSat *sat,
     assert_signedness(c, locp, sat->signedness);

     unsigned_str = (sat->signedness == UNSIGNED) ? "u" : "";
-    res = gen_tmp_local(c, locp, value->bit_width, sat->signedness);
-    ovfl = gen_tmp_local(c, locp, 32, sat->signedness);
+    res = gen_tmp(c, locp, value->bit_width, sat->signedness);
+    ovfl = gen_tmp(c, locp, 32, sat->signedness);
     OUT(c, locp, "gen_sat", unsigned_str, "_", bit_suffix, "_ovfl(");
     OUT(c, locp, &ovfl, ", ", &res, ", ", value, ", ", &width->imm.value,
         ");\n");
--
2.34.1

This wasn't actually used at all, just some unused
macro re-definitions.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/hppa/translate.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -XXX,XX +XXX,XX @@
 #undef TCGv
 #undef tcg_temp_new
 #undef tcg_global_mem_new
-#undef tcg_temp_local_new
 #undef tcg_temp_free

 #if TARGET_LONG_BITS == 64
@@ -XXX,XX +XXX,XX @@

 #define tcg_temp_new         tcg_temp_new_i64
 #define tcg_global_mem_new   tcg_global_mem_new_i64
-#define tcg_temp_local_new   tcg_temp_local_new_i64
 #define tcg_temp_free        tcg_temp_free_i64

 #define tcg_gen_movi_reg     tcg_gen_movi_i64
@@ -XXX,XX +XXX,XX @@
 #define TCGv_reg             TCGv_i32
 #define tcg_temp_new         tcg_temp_new_i32
 #define tcg_global_mem_new   tcg_global_mem_new_i32
-#define tcg_temp_local_new   tcg_temp_local_new_i32
 #define tcg_temp_free        tcg_temp_free_i32

 #define tcg_gen_movi_reg     tcg_gen_movi_i32
--
2.34.1

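For context on why hppa had these redefinitions at all: translate.c compiles one body of width-generic code against either the i64 or the i32 TCG API by aliasing the names, along the lines of this reduced sketch (macro names from the hunks above; the real file aliases many more operations):

    #if TARGET_LONG_BITS == 64
    #define TCGv_reg        TCGv_i64
    #define tcg_temp_new    tcg_temp_new_i64
    #define tcg_temp_free   tcg_temp_free_i64
    #else
    #define TCGv_reg        TCGv_i32
    #define tcg_temp_new    tcg_temp_new_i32
    #define tcg_temp_free   tcg_temp_free_i32
    #endif

Nothing in the file spells tcg_temp_local_new any more, so its alias and the matching #undef can simply go.
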
Since tcg_temp_new is now identical, use that.
In some cases we can avoid a copy from A0 or T0.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/tcg/translate.c | 27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static bool disas_insn(DisasContext *s, CPUState *cpu)
             if (mod == 3) {
                 goto illegal_op;
             }
-            a0 = tcg_temp_local_new();
-            t0 = tcg_temp_local_new();
+            a0 = s->A0;
+            t0 = s->T0;
             label1 = gen_new_label();

-            tcg_gen_mov_tl(a0, s->A0);
-            tcg_gen_mov_tl(t0, s->T0);
-
             gen_set_label(label1);
             t1 = tcg_temp_new();
             t2 = tcg_temp_new();
@@ -XXX,XX +XXX,XX @@ static bool disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_brcond_tl(TCG_COND_NE, t0, t2, label1);

             tcg_temp_free(t2);
-            tcg_temp_free(a0);
             tcg_gen_neg_tl(s->T0, t0);
-            tcg_temp_free(t0);
         } else {
             tcg_gen_neg_tl(s->T0, s->T0);
             if (mod != 3) {
@@ -XXX,XX +XXX,XX @@ static bool disas_insn(DisasContext *s, CPUState *cpu)
 #endif
         {
             TCGLabel *label1;
-            TCGv t0, t1, t2, a0;
+            TCGv t0, t1, t2;

             if (!PE(s) || VM86(s))
                 goto illegal_op;
-            t0 = tcg_temp_local_new();
-            t1 = tcg_temp_local_new();
-            t2 = tcg_temp_local_new();
+            t0 = tcg_temp_new();
+            t1 = tcg_temp_new();
+            t2 = tcg_temp_new();
             ot = MO_16;
             modrm = x86_ldub_code(env, s);
             reg = (modrm >> 3) & 7;
@@ -XXX,XX +XXX,XX @@ static bool disas_insn(DisasContext *s, CPUState *cpu)
             if (mod != 3) {
                 gen_lea_modrm(env, s, modrm);
                 gen_op_ld_v(s, ot, t0, s->A0);
-                a0 = tcg_temp_local_new();
-                tcg_gen_mov_tl(a0, s->A0);
             } else {
                 gen_op_mov_v_reg(s, ot, t0, rm);
-                a0 = NULL;
             }
             gen_op_mov_v_reg(s, ot, t1, reg);
             tcg_gen_andi_tl(s->tmp0, t0, 3);
@@ -XXX,XX +XXX,XX @@ static bool disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_movi_tl(t2, CC_Z);
             gen_set_label(label1);
             if (mod != 3) {
-                gen_op_st_v(s, ot, t0, a0);
-                tcg_temp_free(a0);
+                gen_op_st_v(s, ot, t0, s->A0);
             } else {
                 gen_op_mov_reg_v(s, ot, rm, t0);
             }
@@ -XXX,XX +XXX,XX @@ static bool disas_insn(DisasContext *s, CPUState *cpu)
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | REX_R(s);
         gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
-        t0 = tcg_temp_local_new();
+        t0 = tcg_temp_new();
         gen_update_cc_op(s);
         if (b == 0x102) {
             gen_helper_lar(t0, cpu_env, s->T0);
@@ -XXX,XX +XXX,XX @@ static void i386_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cpu)
     dc->tmp2_i32 = tcg_temp_new_i32();
     dc->tmp3_i32 = tcg_temp_new_i32();
     dc->tmp4 = tcg_temp_new();
-    dc->cc_srcT = tcg_temp_local_new();
+    dc->cc_srcT = tcg_temp_new();
 }

 static void i386_tr_tb_start(DisasContextBase *db, CPUState *cpu)
--
2.34.1

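The "avoid a copy" part deserves a note. s->A0 and s->T0 are themselves allocated once per translation block (see the cc_srcT hunk), so they now survive the backward branch of the retry loop in the first hunk, and the dedicated copies can be dropped. A sketch of the resulting shape (simplified from that hunk, reusing its names):

    TCGv a0 = s->A0;                /* no copy into a fresh temp */
    TCGv t0 = s->T0;
    TCGLabel *label1 = gen_new_label();

    gen_set_label(label1);
    /* ... load and compare through a0/t0 ... */
    tcg_gen_brcond_tl(TCG_COND_NE, t0, t2, label1);  /* a0/t0 still live */

Previously a0 and t0 had to be fresh local temps seeded with tcg_gen_mov_tl() from s->A0 and s->T0, then freed afterwards.
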
Since tcg_temp_new is now identical, use that.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/mips/tcg/translate.c              | 57 ++++++------------------
 target/mips/tcg/nanomips_translate.c.inc |  4 +-
 2 files changed, 16 insertions(+), 45 deletions(-)

diff --git a/target/mips/tcg/translate.c b/target/mips/tcg/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/translate.c
+++ b/target/mips/tcg/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_arith_imm(DisasContext *ctx, uint32_t opc,
     switch (opc) {
     case OPC_ADDI:
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv t1 = tcg_temp_new();
             TCGv t2 = tcg_temp_new();
             TCGLabel *l1 = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void gen_arith_imm(DisasContext *ctx, uint32_t opc,
 #if defined(TARGET_MIPS64)
     case OPC_DADDI:
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv t1 = tcg_temp_new();
             TCGv t2 = tcg_temp_new();
             TCGLabel *l1 = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void gen_arith(DisasContext *ctx, uint32_t opc,
     switch (opc) {
     case OPC_ADD:
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv t1 = tcg_temp_new();
             TCGv t2 = tcg_temp_new();
             TCGLabel *l1 = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void gen_arith(DisasContext *ctx, uint32_t opc,
         break;
     case OPC_SUB:
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv t1 = tcg_temp_new();
             TCGv t2 = tcg_temp_new();
             TCGLabel *l1 = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void gen_arith(DisasContext *ctx, uint32_t opc,
 #if defined(TARGET_MIPS64)
     case OPC_DADD:
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv t1 = tcg_temp_new();
             TCGv t2 = tcg_temp_new();
             TCGLabel *l1 = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void gen_arith(DisasContext *ctx, uint32_t opc,
         break;
     case OPC_DSUB:
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv t1 = tcg_temp_new();
             TCGv t2 = tcg_temp_new();
             TCGLabel *l1 = gen_new_label();
@@ -XXX,XX +XXX,XX @@ static void gen_loongson_integer(DisasContext *ctx, uint32_t opc,
         return;
     }

-    switch (opc) {
-    case OPC_MULT_G_2E:
-    case OPC_MULT_G_2F:
-    case OPC_MULTU_G_2E:
-    case OPC_MULTU_G_2F:
-#if defined(TARGET_MIPS64)
-    case OPC_DMULT_G_2E:
-    case OPC_DMULT_G_2F:
-    case OPC_DMULTU_G_2E:
-    case OPC_DMULTU_G_2F:
-#endif
-        t0 = tcg_temp_new();
-        t1 = tcg_temp_new();
-        break;
-    default:
-        t0 = tcg_temp_local_new();
-        t1 = tcg_temp_local_new();
-        break;
-    }
-
+    t0 = tcg_temp_new();
+    t1 = tcg_temp_new();
     gen_load_gpr(t0, rs);
     gen_load_gpr(t1, rt);

@@ -XXX,XX +XXX,XX @@ static void gen_loongson_multimedia(DisasContext *ctx, int rd, int rs, int rt)
     TCGCond cond;

     opc = MASK_LMMI(ctx->opcode);
-    switch (opc) {
-    case OPC_ADD_CP2:
-    case OPC_SUB_CP2:
-    case OPC_DADD_CP2:
-    case OPC_DSUB_CP2:
-        t0 = tcg_temp_local_new_i64();
-        t1 = tcg_temp_local_new_i64();
-        break;
-    default:
-        t0 = tcg_temp_new_i64();
-        t1 = tcg_temp_new_i64();
-        break;
-    }
-
     check_cp1_enabled(ctx);
+
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     gen_load_fpr64(ctx, t0, rs);
     gen_load_fpr64(ctx, t1, rt);

@@ -XXX,XX +XXX,XX @@ static void gen_mftr(CPUMIPSState *env, DisasContext *ctx, int rt, int rd,
                      int u, int sel, int h)
 {
     int other_tc = env->CP0_VPEControl & (0xff << CP0VPECo_TargTC);
-    TCGv t0 = tcg_temp_local_new();
+    TCGv t0 = tcg_temp_new();

     if ((env->CP0_VPEConf0 & (1 << CP0VPEC0_MVP)) == 0 &&
         ((env->tcs[other_tc].CP0_TCBind & (0xf << CP0TCBd_CurVPE)) !=
@@ -XXX,XX +XXX,XX @@ static void gen_mttr(CPUMIPSState *env, DisasContext *ctx, int rd, int rt,
                      int u, int sel, int h)
 {
     int other_tc = env->CP0_VPEControl & (0xff << CP0VPECo_TargTC);
-    TCGv t0 = tcg_temp_local_new();
+    TCGv t0 = tcg_temp_new();

     gen_load_gpr(t0, rt);
     if ((env->CP0_VPEConf0 & (1 << CP0VPEC0_MVP)) == 0 &&
@@ -XXX,XX +XXX,XX @@ static void gen_flt3_arith(DisasContext *ctx, uint32_t opc,
     case OPC_ALNV_PS:
         check_ps(ctx);
         {
-            TCGv t0 = tcg_temp_local_new();
+            TCGv t0 = tcg_temp_new();
             TCGv_i32 fp = tcg_temp_new_i32();
             TCGv_i32 fph = tcg_temp_new_i32();
             TCGLabel *l1 = gen_new_label();
diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/tcg/nanomips_translate.c.inc
+++ b/target/mips/tcg/nanomips_translate.c.inc
@@ -XXX,XX +XXX,XX @@ static void gen_llwp(DisasContext *ctx, uint32_t base, int16_t offset,
 static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset,
                      uint32_t reg1, uint32_t reg2, bool eva)
 {
-    TCGv taddr = tcg_temp_local_new();
-    TCGv lladdr = tcg_temp_local_new();
+    TCGv taddr = tcg_temp_new();
+    TCGv lladdr = tcg_temp_new();
     TCGv_i64 tval = tcg_temp_new_i64();
     TCGv_i64 llval = tcg_temp_new_i64();
     TCGv_i64 val = tcg_temp_new_i64();
--
2.34.1

Since tcg_temp_new is now identical, use that.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/ppc/translate.c              | 6 +++---
 target/ppc/translate/spe-impl.c.inc | 8 ++++----
 target/ppc/translate/vmx-impl.c.inc | 4 ++--
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_bcond(DisasContext *ctx, int type)
     TCGv target;

     if (type == BCOND_LR || type == BCOND_CTR || type == BCOND_TAR) {
-        target = tcg_temp_local_new();
+        target = tcg_temp_new();
         if (type == BCOND_CTR) {
             tcg_gen_mov_tl(target, cpu_ctr);
         } else if (type == BCOND_TAR) {
@@ -XXX,XX +XXX,XX @@ static inline void gen_405_mulladd_insn(DisasContext *ctx, int opc2, int opc3,
 {
     TCGv t0, t1;

-    t0 = tcg_temp_local_new();
-    t1 = tcg_temp_local_new();
+    t0 = tcg_temp_new();
+    t1 = tcg_temp_new();

     switch (opc3 & 0x0D) {
     case 0x05:
diff --git a/target/ppc/translate/spe-impl.c.inc b/target/ppc/translate/spe-impl.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate/spe-impl.c.inc
+++ b/target/ppc/translate/spe-impl.c.inc
@@ -XXX,XX +XXX,XX @@ static inline void gen_op_evsrwu(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
-    TCGv_i32 t0 = tcg_temp_local_new_i32();
+    TCGv_i32 t0 = tcg_temp_new_i32();

     /* No error here: 6 bits are used */
     tcg_gen_andi_i32(t0, arg2, 0x3F);
@@ -XXX,XX +XXX,XX @@ static inline void gen_op_evsrws(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
-    TCGv_i32 t0 = tcg_temp_local_new_i32();
+    TCGv_i32 t0 = tcg_temp_new_i32();

     /* No error here: 6 bits are used */
     tcg_gen_andi_i32(t0, arg2, 0x3F);
@@ -XXX,XX +XXX,XX @@ static inline void gen_op_evslw(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
-    TCGv_i32 t0 = tcg_temp_local_new_i32();
+    TCGv_i32 t0 = tcg_temp_new_i32();

     /* No error here: 6 bits are used */
     tcg_gen_andi_i32(t0, arg2, 0x3F);
@@ -XXX,XX +XXX,XX @@ static inline void gen_evsel(DisasContext *ctx)
     TCGLabel *l2 = gen_new_label();
     TCGLabel *l3 = gen_new_label();
     TCGLabel *l4 = gen_new_label();
-    TCGv_i32 t0 = tcg_temp_local_new_i32();
+    TCGv_i32 t0 = tcg_temp_new_i32();

     tcg_gen_andi_i32(t0, cpu_crf[ctx->opcode & 0x07], 1 << 3);
     tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0, l1);
diff --git a/target/ppc/translate/vmx-impl.c.inc b/target/ppc/translate/vmx-impl.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate/vmx-impl.c.inc
+++ b/target/ppc/translate/vmx-impl.c.inc
@@ -XXX,XX +XXX,XX @@ static bool do_vcmpq(DisasContext *ctx, arg_VX_bf *a, bool sign)
     REQUIRE_INSNS_FLAGS2(ctx, ISA310);
     REQUIRE_VECTOR(ctx);

-    vra = tcg_temp_local_new_i64();
-    vrb = tcg_temp_local_new_i64();
+    vra = tcg_temp_new_i64();
+    vrb = tcg_temp_new_i64();
     gt = gen_new_label();
     lt = gen_new_label();
     done = gen_new_label();
--
2.34.1

Since tcg_temp_new_* is now identical, use those.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/xtensa/translate.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/xtensa/translate.c b/target/xtensa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/translate.c
+++ b/target/xtensa/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_right_shift_sar(DisasContext *dc, TCGv_i32 sa)
 static void gen_left_shift_sar(DisasContext *dc, TCGv_i32 sa)
 {
     if (!dc->sar_m32_allocated) {
-        dc->sar_m32 = tcg_temp_local_new_i32();
+        dc->sar_m32 = tcg_temp_new_i32();
         dc->sar_m32_allocated = true;
     }
     tcg_gen_andi_i32(dc->sar_m32, sa, 0x1f);
@@ -XXX,XX +XXX,XX @@ static void disas_xtensa_insn(CPUXtensaState *env, DisasContext *dc)
         if (i == 0 || arg_copy[i].resource != resource) {
             resource = arg_copy[i].resource;
             if (arg_copy[i].arg->num_bits <= 32) {
-                temp = tcg_temp_local_new_i32();
+                temp = tcg_temp_new_i32();
                 tcg_gen_mov_i32(temp, arg_copy[i].arg->in);
             } else if (arg_copy[i].arg->num_bits <= 64) {
-                temp = tcg_temp_local_new_i64();
+                temp = tcg_temp_new_i64();
                 tcg_gen_mov_i64(temp, arg_copy[i].arg->in);
             } else {
                 g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void xtensa_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
     DisasContext *dc = container_of(dcbase, DisasContext, base);

     if (dc->icount) {
-        dc->next_icount = tcg_temp_local_new_i32();
+        dc->next_icount = tcg_temp_new_i32();
     }
 }

@@ -XXX,XX +XXX,XX @@ static void gen_check_atomctl(DisasContext *dc, TCGv_i32 addr)
 static void translate_s32c1i(DisasContext *dc, const OpcodeArg arg[],
                              const uint32_t par[])
 {
-    TCGv_i32 tmp = tcg_temp_local_new_i32();
-    TCGv_i32 addr = tcg_temp_local_new_i32();
+    TCGv_i32 tmp = tcg_temp_new_i32();
+    TCGv_i32 addr = tcg_temp_new_i32();
     MemOp mop;

     tcg_gen_mov_i32(tmp, arg[0].in);
@@ -XXX,XX +XXX,XX @@ static void translate_s32ex(DisasContext *dc, const OpcodeArg arg[],
                             const uint32_t par[])
 {
     TCGv_i32 prev = tcg_temp_new_i32();
-    TCGv_i32 addr = tcg_temp_local_new_i32();
-    TCGv_i32 res = tcg_temp_local_new_i32();
+    TCGv_i32 addr = tcg_temp_new_i32();
+    TCGv_i32 res = tcg_temp_new_i32();
    TCGLabel *label = gen_new_label();
     MemOp mop;
--
2.34.1

Since tcg_temp_new_i32 is now identical, use that.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/gen-icount.h | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/include/exec/gen-icount.h b/include/exec/gen-icount.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/gen-icount.h
+++ b/include/exec/gen-icount.h
@@ -XXX,XX +XXX,XX @@ static inline void gen_io_start(void)

 static inline void gen_tb_start(const TranslationBlock *tb)
 {
-    TCGv_i32 count;
-
-    if (tb_cflags(tb) & CF_USE_ICOUNT) {
-        count = tcg_temp_local_new_i32();
-    } else {
-        count = tcg_temp_new_i32();
-    }
+    TCGv_i32 count = tcg_temp_new_i32();

     tcg_gen_ld_i32(count, cpu_env,
                    offsetof(ArchCPU, neg.icount_decr.u32) -
--
2.34.1

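For readers who have not looked at gen_tb_start() before: 'count' holds the icount budget loaded at the head of every TB. A conceptual sketch of the surrounding code (the subtraction's second operand is truncated in the context above; in the upstream header it is offsetof(ArchCPU, env), and the decrement-and-branch that follows is elided here):

    TCGv_i32 count = tcg_temp_new_i32();

    tcg_gen_ld_i32(count, cpu_env,
                   offsetof(ArchCPU, neg.icount_decr.u32) -
                   offsetof(ArchCPU, env));   /* assumption: from upstream */
    /* ... subtract this TB's instruction count and branch to the
     * exit-request path if the result went negative ... */

The old local/plain split existed because in the CF_USE_ICOUNT case 'count' is still needed after that conditional branch; with tb-lifetime temporaries one allocator covers both paths.
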
These symbols are now unused.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg-op.h |  2 --
 include/tcg/tcg.h    | 28 ----------------------------
 tcg/tcg.c            | 16 ----------------
 3 files changed, 46 deletions(-)

diff --git a/include/tcg/tcg-op.h b/include/tcg/tcg-op.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg-op.h
+++ b/include/tcg/tcg-op.h
@@ -XXX,XX +XXX,XX @@ static inline void tcg_gen_plugin_cb_end(void)
 #if TARGET_LONG_BITS == 32
 #define tcg_temp_new() tcg_temp_new_i32()
 #define tcg_global_mem_new tcg_global_mem_new_i32
-#define tcg_temp_local_new() tcg_temp_local_new_i32()
 #define tcg_temp_free tcg_temp_free_i32
 #define tcg_gen_qemu_ld_tl tcg_gen_qemu_ld_i32
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i32
 #else
 #define tcg_temp_new() tcg_temp_new_i64()
 #define tcg_global_mem_new tcg_global_mem_new_i64
-#define tcg_temp_local_new() tcg_temp_local_new_i64()
 #define tcg_temp_free tcg_temp_free_i64
 #define tcg_gen_qemu_ld_tl tcg_gen_qemu_ld_i64
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 tcg_temp_new_i32(void)
     return temp_tcgv_i32(t);
 }

-static inline TCGv_i32 tcg_temp_local_new_i32(void)
-{
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I32, TEMP_TB);
-    return temp_tcgv_i32(t);
-}
-
 static inline TCGv_i64 tcg_global_mem_new_i64(TCGv_ptr reg, intptr_t offset,
                                               const char *name)
 {
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i64 tcg_temp_new_i64(void)
     return temp_tcgv_i64(t);
 }

-static inline TCGv_i64 tcg_temp_local_new_i64(void)
-{
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I64, TEMP_TB);
-    return temp_tcgv_i64(t);
-}
-
 /* Used only by tcg infrastructure: tcg-op.c or plugin-gen.c */
 static inline TCGv_i128 tcg_temp_ebb_new_i128(void)
 {
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i128 tcg_temp_new_i128(void)
     return temp_tcgv_i128(t);
 }

-static inline TCGv_i128 tcg_temp_local_new_i128(void)
-{
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_I128, TEMP_TB);
-    return temp_tcgv_i128(t);
-}
-
 static inline TCGv_ptr tcg_global_mem_new_ptr(TCGv_ptr reg, intptr_t offset,
                                               const char *name)
 {
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr tcg_temp_new_ptr(void)
     return temp_tcgv_ptr(t);
 }

-static inline TCGv_ptr tcg_temp_local_new_ptr(void)
-{
-    TCGTemp *t = tcg_temp_new_internal(TCG_TYPE_PTR, TEMP_TB);
-    return temp_tcgv_ptr(t);
-}
-
 #if defined(CONFIG_DEBUG_TCG)
 /* If you call tcg_clear_temp_count() at the start of a section of
  * code which is not supposed to leak any TCG temporaries, then
@@ -XXX,XX +XXX,XX @@ void tcg_optimize(TCGContext *s);
 /* Allocate a new temporary and initialize it with a constant. */
 TCGv_i32 tcg_const_i32(int32_t val);
 TCGv_i64 tcg_const_i64(int64_t val);
-TCGv_i32 tcg_const_local_i32(int32_t val);
-TCGv_i64 tcg_const_local_i64(int64_t val);
 TCGv_vec tcg_const_zeros_vec(TCGType);
 TCGv_vec tcg_const_ones_vec(TCGType);
 TCGv_vec tcg_const_zeros_vec_matching(TCGv_vec);
@@ -XXX,XX +XXX,XX @@ TCGv_vec tcg_constant_vec_matching(TCGv_vec match, unsigned vece, int64_t val);

 #if UINTPTR_MAX == UINT32_MAX
 # define tcg_const_ptr(x)        ((TCGv_ptr)tcg_const_i32((intptr_t)(x)))
-# define tcg_const_local_ptr(x)  ((TCGv_ptr)tcg_const_local_i32((intptr_t)(x)))
 # define tcg_constant_ptr(x)     ((TCGv_ptr)tcg_constant_i32((intptr_t)(x)))
 #else
 # define tcg_const_ptr(x)        ((TCGv_ptr)tcg_const_i64((intptr_t)(x)))
-# define tcg_const_local_ptr(x)  ((TCGv_ptr)tcg_const_local_i64((intptr_t)(x)))
 # define tcg_constant_ptr(x)     ((TCGv_ptr)tcg_constant_i64((intptr_t)(x)))
 #endif

diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ TCGv_i64 tcg_const_i64(int64_t val)
     return t0;
 }

-TCGv_i32 tcg_const_local_i32(int32_t val)
-{
-    TCGv_i32 t0;
-    t0 = tcg_temp_local_new_i32();
-    tcg_gen_movi_i32(t0, val);
-    return t0;
-}
-
-TCGv_i64 tcg_const_local_i64(int64_t val)
-{
-    TCGv_i64 t0;
-    t0 = tcg_temp_local_new_i64();
-    tcg_gen_movi_i64(t0, val);
-    return t0;
-}
-
 #if defined(CONFIG_DEBUG_TCG)
 void tcg_clear_temp_count(void)
 {
--
2.34.1

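The "now unused" in the message is doing the work: earlier patches in this series converted the in-tree callers. For anyone carrying out-of-tree translator code, the replacement is mechanical; a sketch with illustrative values, not quoted from any caller:

    /* A constant that is never written: use the hashed constant pool,
     * which must not be freed. */
    TCGv_i32 c = tcg_constant_i32(0x7f);

    /* A former tcg_const_local_i32() value that is later modified:
     * allocate a plain temp and initialize it explicitly. */
    TCGv_i32 v = tcg_temp_new_i32();
    tcg_gen_movi_i32(v, 0x7f);
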
Rewrite the sections which talked about 'local temporaries'.
Remove some assumptions which no longer hold.

Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 docs/devel/tcg-ops.rst | 230 +++++++++++++++++++++++------------------
 1 file changed, 129 insertions(+), 101 deletions(-)

diff --git a/docs/devel/tcg-ops.rst b/docs/devel/tcg-ops.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/tcg-ops.rst
+++ b/docs/devel/tcg-ops.rst
@@ -XXX,XX +XXX,XX @@ TCG Intermediate Representation
 Introduction
 ============

-TCG (Tiny Code Generator) began as a generic backend for a C
-compiler. It was simplified to be used in QEMU. It also has its roots
-in the QOP code generator written by Paul Brook.
+TCG (Tiny Code Generator) began as a generic backend for a C compiler.
+It was simplified to be used in QEMU. It also has its roots in the
+QOP code generator written by Paul Brook.

 Definitions
 ===========

-TCG receives RISC-like *TCG ops* and performs some optimizations on them,
-including liveness analysis and trivial constant expression
-evaluation. TCG ops are then implemented in the host CPU back end,
-also known as the TCG target.
-
-The TCG *target* is the architecture for which we generate the
-code. It is of course not the same as the "target" of QEMU which is
-the emulated architecture. As TCG started as a generic C backend used
-for cross compiling, it is assumed that the TCG target is different
-from the host, although it is never the case for QEMU.
+The TCG *target* is the architecture for which we generate the code.
+It is of course not the same as the "target" of QEMU which is the
+emulated architecture. As TCG started as a generic C backend used
+for cross compiling, the assumption was that TCG target might be
+different from the host, although this is never the case for QEMU.

 In this document, we use *guest* to specify what architecture we are
 emulating; *target* always means the TCG target, the machine on which
 we are running QEMU.

-A TCG *function* corresponds to a QEMU Translated Block (TB).
-
-A TCG *temporary* is a variable only live in a basic block. Temporaries are allocated explicitly in each function.
-
-A TCG *local temporary* is a variable only live in a function. Local temporaries are allocated explicitly in each function.
-
-A TCG *global* is a variable which is live in all the functions
-(equivalent of a C global variable). They are defined before the
-functions defined. A TCG global can be a memory location (e.g. a QEMU
-CPU register), a fixed host register (e.g. the QEMU CPU state pointer)
-or a memory location which is stored in a register outside QEMU TBs
-(not implemented yet).
-
-A TCG *basic block* corresponds to a list of instructions terminated
-by a branch instruction.
-
 An operation with *undefined behavior* may result in a crash.

 An operation with *unspecified behavior* shall not crash. However,
 the result may be one of several possibilities so may be considered
 an *undefined result*.

-Intermediate representation
-===========================
+Basic Blocks
+============

-Introduction
-------------
+A TCG *basic block* is a single entry, multiple exit region which
+corresponds to a list of instructions terminated by a label, or
+any branch instruction.

-TCG instructions operate on variables which are temporaries, local
-temporaries or globals. TCG instructions and variables are strongly
-typed. Two types are supported: 32 bit integers and 64 bit
-integers. Pointers are defined as an alias to 32 bit or 64 bit
-integers depending on the TCG target word size.
+A TCG *extended basic block* is a single entry, multiple exit region
+which corresponds to a list of instructions terminated by a label or
+an unconditional branch. Specifically, an extended basic block is
+a sequence of basic blocks connected by the fall-through paths of
+zero or more conditional branch instructions.

-Each instruction has a fixed number of output variable operands, input
-variable operands and always constant operands.
+Operations
+==========

-The notable exception is the call instruction which has a variable
-number of outputs and inputs.
+TCG instructions or *ops* operate on TCG *variables*, both of which
+are strongly typed. Each instruction has a fixed number of output
+variable operands, input variable operands and constant operands.
+Vector instructions have a field specifying the element size within
+the vector. The notable exception is the call instruction which has
+a variable number of outputs and inputs.

 In the textual form, output operands usually come first, followed by
 input operands, followed by constant operands. The output type is
@@ -XXX,XX +XXX,XX @@ included in the instruction name. Constants are prefixed with a '$'.

    add_i32 t0, t1, t2 /* (t0 <- t1 + t2) */

+Variables
+=========

-Assumptions
------------
+* ``TEMP_FIXED``

-Basic blocks
-^^^^^^^^^^^^
+  There is one TCG *fixed global* variable, ``cpu_env``, which is
+  live in all translation blocks, and holds a pointer to ``CPUArchState``.
+  This variable is held in a host cpu register at all times in all
+  translation blocks.

-* Basic blocks end after branches (e.g. brcond_i32 instruction),
-  goto_tb and exit_tb instructions.
+* ``TEMP_GLOBAL``

-* Basic blocks start after the end of a previous basic block, or at a
-  set_label instruction.
+  A TCG *global* is a variable which is live in all translation blocks,
+  and corresponds to memory location that is within ``CPUArchState``.
+  These may be specified as an offset from ``cpu_env``, in which case
+  they are called *direct globals*, or may be specified as an offset
+  from a direct global, in which case they are called *indirect globals*.
+  Even indirect globals should still reference memory within
+  ``CPUArchState``. All TCG globals are defined during
+  ``TCGCPUOps.initialize``, before any translation blocks are generated.

-After the end of a basic block, the content of temporaries is
-destroyed, but local temporaries and globals are preserved.
+* ``TEMP_CONST``

-Floating point types
-^^^^^^^^^^^^^^^^^^^^
+  A TCG *constant* is a variable which is live throughout the entire
+  translation block, and contains a constant value. These variables
+  are allocated on demand during translation and are hashed so that
+  there is exactly one variable holding a given value.

-* Floating point types are not supported yet
+* ``TEMP_TB``

-Pointers
-^^^^^^^^
+  A TCG *translation block temporary* is a variable which is live
+  throughout the entire translation block, but dies on any exit.
+  These temporaries are allocated explicitly during translation.

-* Depending on the TCG target, pointer size is 32 bit or 64
-  bit. The type ``TCG_TYPE_PTR`` is an alias to ``TCG_TYPE_I32`` or
-  ``TCG_TYPE_I64``.
+* ``TEMP_EBB``
+
+  A TCG *extended basic block temporary* is a variable which is live
+  throughout an extended basic block, but dies on any exit.
+  These temporaries are allocated explicitly during translation.
+
+Types
+=====
+
+* ``TCG_TYPE_I32``
+
+  A 32-bit integer.
+
+* ``TCG_TYPE_I64``
+
+  A 64-bit integer. For 32-bit hosts, such variables are split into a pair
+  of variables with ``type=TCG_TYPE_I32`` and ``base_type=TCG_TYPE_I64``.
+  The ``temp_subindex`` for each indicates where it falls within the
+  host-endian representation.
+
+* ``TCG_TYPE_PTR``
+
+  An alias for ``TCG_TYPE_I32`` or ``TCG_TYPE_I64``, depending on the size
+  of a pointer for the host.
+
+* ``TCG_TYPE_REG``
+
+  An alias for ``TCG_TYPE_I32`` or ``TCG_TYPE_I64``, depending on the size
+  of the integer registers for the host. This may be larger
+  than ``TCG_TYPE_PTR`` depending on the host ABI.
+
+* ``TCG_TYPE_I128``
+
+  A 128-bit integer. For all hosts, such variables are split into a number
+  of variables with ``type=TCG_TYPE_REG`` and ``base_type=TCG_TYPE_I128``.
+  The ``temp_subindex`` for each indicates where it falls within the
+  host-endian representation.
+
+* ``TCG_TYPE_V64``
+
+  A 64-bit vector. This type is valid only if the TCG target
+  sets ``TCG_TARGET_HAS_v64``.
+
+* ``TCG_TYPE_V128``
+
+  A 128-bit vector. This type is valid only if the TCG target
+  sets ``TCG_TARGET_HAS_v128``.
+
+* ``TCG_TYPE_V256``
+
+  A 256-bit vector. This type is valid only if the TCG target
+  sets ``TCG_TARGET_HAS_v256``.

 Helpers
-^^^^^^^
+=======

-* Using the tcg_gen_helper_x_y it is possible to call any function
-  taking i32, i64 or pointer types. By default, before calling a helper,
-  all globals are stored at their canonical location and it is assumed
-  that the function can modify them. By default, the helper is allowed to
-  modify the CPU state or raise an exception.
+Helpers are registered in a guest-specific ``helper.h``,
+which is processed to generate ``tcg_gen_helper_*`` functions.
+With these functions it is possible to call a function taking
+i32, i64, i128 or pointer types.

-  This can be overridden using the following function modifiers:
+By default, before calling a helper, all globals are stored at their
+canonical location. By default, the helper is allowed to modify the
+CPU state (including the state represented by tcg globals)
+or may raise an exception. This default can be overridden using the
+following function modifiers:

-  - ``TCG_CALL_NO_READ_GLOBALS`` means that the helper does not read globals,
-    either directly or via an exception. They will not be saved to their
-    canonical locations before calling the helper.
+* ``TCG_CALL_NO_WRITE_GLOBALS``

-  - ``TCG_CALL_NO_WRITE_GLOBALS`` means that the helper does not modify any globals.
-    They will only be saved to their canonical location before calling helpers,
-    but they won't be reloaded afterwards.
+  The helper does not modify any globals, but may read them.
+  Globals will be saved to their canonical location before calling helpers,
+  but need not be reloaded afterwards.

-  - ``TCG_CALL_NO_SIDE_EFFECTS`` means that the call to the function is removed if
-    the return value is not used.
+* ``TCG_CALL_NO_READ_GLOBALS``

-  Note that ``TCG_CALL_NO_READ_GLOBALS`` implies ``TCG_CALL_NO_WRITE_GLOBALS``.
+  The helper does not read globals, either directly or via an exception.
+  They will not be saved to their canonical locations before calling
+  the helper. This implies ``TCG_CALL_NO_WRITE_GLOBALS``.

-  On some TCG targets (e.g. x86), several calling conventions are
-  supported.
+* ``TCG_CALL_NO_SIDE_EFFECTS``

-Branches
-^^^^^^^^
-
-* Use the instruction 'br' to jump to a label.
+  The call to the helper function may be removed if the return value is
+  not used. This means that it may not modify any CPU state nor may it
+  raise an exception.

 Code Optimizations
-------------------
+==================

 When generating instructions, you can count on at least the following
 optimizations:

@@ -XXX,XX +XXX,XX @@ Recommended coding rules for best performance
   often modified, e.g. the integer registers and the condition
   codes. TCG will be able to use host registers to store them.

-- Avoid globals stored in fixed registers. They must be used only to
-  store the pointer to the CPU state and possibly to store a pointer
-  to a register window.
-
-- Use temporaries. Use local temporaries only when really needed,
-  e.g. when you need to use a value after a jump. Local temporaries
-  introduce a performance hit in the current TCG implementation: their
-  content is saved to memory at end of each basic block.
-
-- Free temporaries and local temporaries when they are no longer used
-  (tcg_temp_free). Since tcg_const_x() also creates a temporary, you
-  should free it after it is used. Freeing temporaries does not yield
-  a better generated code, but it reduces the memory usage of TCG and
-  the speed of the translation.
+- Free temporaries when they are no longer used (``tcg_temp_free``).
+  Since ``tcg_const_x`` also creates a temporary, you should free it
+  after it is used.

 - Don't hesitate to use helpers for complicated or seldom used guest
   instructions. There is little performance advantage in using TCG to
@@ -XXX,XX +XXX,XX @@ Recommended coding rules for best performance
   the instruction is mostly doing loads and stores, and in those cases
   inline TCG may still be faster for longer sequences.

-- The hard limit on the number of TCG instructions you can generate
-  per guest instruction is set by ``MAX_OP_PER_INSTR`` in ``exec-all.h`` --
-  you cannot exceed this without risking a buffer overrun.
-
 - Use the 'discard' instruction if you know that TCG won't be able to
   prove that a given global is "dead" at a given program point. The
   x86 guest uses it to improve the condition codes optimisation.
--
2.34.1

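To make the new lifetime classes above concrete, here is the distinction in two allocations (tcg_temp_ebb_new_i32() exists in tcg.h but, per its comment, is meant for tcg-internal infrastructure; translators should default to the plain form):

    TCGv_i32 t = tcg_temp_new_i32();      /* TEMP_TB: may be read after
                                           * any label in the TB */
    TCGv_i32 u = tcg_temp_ebb_new_i32();  /* TEMP_EBB: dies at the next
                                           * label or unconditional branch */
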
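As a companion to the helper-modifier list above: the flags are attached where the helper is declared, via the DEF_HELPER_FLAGS_* macros from exec/helper-head.h. A hypothetical declaration (my_clamp is an invented name):

    /* 32-bit result from two 32-bit inputs; reads/writes no TCG globals
     * and has no side effects, so an unused call can be deleted. */
    DEF_HELPER_FLAGS_2(my_clamp, TCG_CALL_NO_RWG_SE, i32, i32, i32)

TCG_CALL_NO_RWG_SE is the usual shorthand for TCG_CALL_NO_READ_GLOBALS | TCG_CALL_NO_SIDE_EFFECTS; the generated emitter is gen_helper_my_clamp(dst, src1, src2).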