The following changes since commit 3ccf6cd0e3e1dfd663814640b3b18b55715d7a75:

  Merge remote-tracking branch 'remotes/kraxel/tags/audio-20210617-pull-request' into staging (2021-06-18 09:54:42 +0100)

are available in the Git repository at:

  https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20210619

for you to fetch changes up to 8169ec35eb766a12ad0ae898119060fde148ab61:

  util/oslib-win32: Fix fatal assertion in qemu_try_memalign (2021-06-19 11:09:11 -0700)

----------------------------------------------------------------
TCI cleanup and re-encoding
Fixes for #367 and #390.
Move TCGCond to tcg/tcg-cond.h.
Fix for win32 qemu_try_memalign.

----------------------------------------------------------------
Alessandro Di Federico (1):
      tcg: expose TCGCond manipulation routines

Richard Henderson (31):
      tcg: Combine dh_is_64bit and dh_is_signed to dh_typecode
      tcg: Add tcg_call_flags
      accel/tcg/plugin-gen: Drop inline markers
      plugins: Drop tcg_flags from struct qemu_plugin_dyn_cb
      accel/tcg: Add tcg call flags to plugins helpers
      tcg: Store the TCGHelperInfo in the TCGOp for call
      tcg: Add tcg_call_func
      tcg: Build ffi data structures for helpers
      tcg/tci: Improve tcg_target_call_clobber_regs
      tcg/tci: Move call-return regs to end of tcg_target_reg_alloc_order
      tcg/tci: Use ffi for calls
      tcg/tci: Reserve r13 for a temporary
      tcg/tci: Emit setcond before brcond
      tcg/tci: Remove tci_write_reg
      tcg/tci: Change encoding to uint32_t units
      tcg/tci: Implement goto_ptr
      tcg/tci: Implement movcond
      tcg/tci: Implement andc, orc, eqv, nand, nor
      tcg/tci: Implement extract, sextract
      tcg/tci: Implement clz, ctz, ctpop
      tcg/tci: Implement mulu2, muls2
      tcg/tci: Implement add2, sub2
      tcg/tci: Split out tci_qemu_ld, tci_qemu_st
      Revert "tcg/tci: Use exec/cpu_ldst.h interfaces"
      tcg/tci: Remove the qemu_ld/st_type macros
      tcg/tci: Use {set,clear}_helper_retaddr
      tests/tcg: Increase timeout for TCI
      accel/tcg: Probe the proper permissions for atomic ops
      tcg/sparc: Fix temp_allocate_frame vs sparc stack bias
      tcg: Allocate sufficient storage in temp_allocate_frame
      tcg: Restart when exhausting the stack frame

Stefan Weil (1):
      util/oslib-win32: Fix fatal assertion in qemu_try_memalign

 configure | 3 +
 accel/tcg/atomic_template.h | 24 +-
 accel/tcg/plugin-helpers.h | 5 +-
 include/exec/helper-head.h | 37 +-
 include/exec/helper-tcg.h | 34 +-
 include/qemu/plugin.h | 1 -
 include/tcg/tcg-cond.h | 101 ++
 include/tcg/tcg-opc.h | 4 +-
 include/tcg/tcg.h | 71 +-
 target/hppa/helper.h | 3 -
 target/i386/ops_sse_header.h | 3 -
 target/m68k/helper.h | 1 -
 target/ppc/helper.h | 3 -
 tcg/tcg-internal.h | 22 +
 tcg/tci/tcg-target-con-set.h | 1 +
 tcg/tci/tcg-target.h | 68 +-
 accel/tcg/cputlb.c | 95 +-
 accel/tcg/plugin-gen.c | 20 +-
 accel/tcg/user-exec.c | 8 +-
 plugins/core.c | 30 +-
 tcg/optimize.c | 3 +-
 tcg/tcg.c | 300 +++--
 tcg/tci.c | 1203 ++++++++++----------
 util/oslib-win32.c | 6 +-
 tcg/sparc/tcg-target.c.inc | 16 +-
 tcg/tci/tcg-target.c.inc | 550 ++++-----
 tcg/meson.build | 8 +-
 tcg/tci/README | 20 +-
 tests/docker/dockerfiles/alpine.docker | 1 +
 tests/docker/dockerfiles/centos8.docker | 1 +
 tests/docker/dockerfiles/debian10.docker | 1 +
 tests/docker/dockerfiles/fedora-i386-cross.docker | 1 +
 tests/docker/dockerfiles/fedora-win32-cross.docker | 1 +
 tests/docker/dockerfiles/fedora-win64-cross.docker | 1 +
 tests/docker/dockerfiles/fedora.docker | 1 +
 tests/docker/dockerfiles/ubuntu.docker | 1 +
 tests/docker/dockerfiles/ubuntu1804.docker | 1 +
 tests/docker/dockerfiles/ubuntu2004.docker | 1 +
 tests/tcg/Makefile.target | 6 +-
 39 files changed, 1454 insertions(+), 1202 deletions(-)
 create mode 100644 include/tcg/tcg-cond.h

The following changes since commit 57b6f58c1d0df757c9311496c32d502925056894:

  Merge remote-tracking branch 'remotes/hreitz/tags/pull-block-2021-09-15' into staging (2021-09-15 18:55:59 +0100)

are available in the Git repository at:

  https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20210916

for you to fetch changes up to 50febfe212f24a9b91b4224d03f653415fddf8e1:

  tcg/mips: Drop special alignment for code_gen_buffer (2021-09-16 09:37:39 -0400)

----------------------------------------------------------------
Restrict cpu_has_work to sysemu, and move to AccelOpsClass.
Move cpu_signal_handler declaration out of target/.
Misc tcg/mips/ cleanups.

----------------------------------------------------------------
Philippe Mathieu-Daudé (30):
      accel/tcg: Restrict cpu_handle_halt() to sysemu
      hw/core: Restrict cpu_has_work() to sysemu
      hw/core: Un-inline cpu_has_work()
      sysemu: Introduce AccelOpsClass::has_work()
      accel/kvm: Implement AccelOpsClass::has_work()
      accel/whpx: Implement AccelOpsClass::has_work()
      accel/tcg: Implement AccelOpsClass::has_work() as stub
      target/alpha: Restrict has_work() handler to sysemu
      target/arm: Restrict has_work() handler to sysemu and TCG
      target/avr: Restrict has_work() handler to sysemu
      target/cris: Restrict has_work() handler to sysemu
      target/hexagon: Remove unused has_work() handler
      target/hppa: Restrict has_work() handler to sysemu
      target/i386: Restrict has_work() handler to sysemu and TCG
      target/m68k: Restrict has_work() handler to sysemu
      target/microblaze: Restrict has_work() handler to sysemu
      target/mips: Restrict has_work() handler to sysemu and TCG
      target/nios2: Restrict has_work() handler to sysemu
      target/openrisc: Restrict has_work() handler to sysemu
      target/ppc: Introduce PowerPCCPUClass::has_work()
      target/ppc: Restrict has_work() handlers to sysemu and TCG
      target/riscv: Restrict has_work() handler to sysemu and TCG
      target/rx: Restrict has_work() handler to sysemu
      target/s390x: Restrict has_work() handler to sysemu and TCG
      target/sh4: Restrict has_work() handler to sysemu
      target/sparc: Remove pointless use of CONFIG_TCG definition
      target/sparc: Restrict has_work() handler to sysemu
      target/tricore: Restrict has_work() handler to sysemu
      target/xtensa: Restrict has_work() handler to sysemu
      accel: Add missing AccelOpsClass::has_work() and drop SysemuCPUOps one

Richard Henderson (5):
      include/exec: Move cpu_signal_handler declaration
      tcg/mips: Drop inline markers
      tcg/mips: Allow JAL to be out of range in tcg_out_bswap_subr
      tcg/mips: Unset TCG_TARGET_HAS_direct_jump
      tcg/mips: Drop special alignment for code_gen_buffer

 include/exec/exec-all.h | 13 +++++
 include/hw/core/cpu.h | 28 ++++------
 include/hw/core/tcg-cpu-ops.h | 4 ++
 include/sysemu/accel-ops.h | 5 ++
 target/alpha/cpu.h | 6 ---
 target/arm/cpu.h | 7 ---
 target/avr/cpu.h | 2 -
 target/cris/cpu.h | 8 ---
 target/hexagon/cpu.h | 3 --
 target/hppa/cpu.h | 3 --
 target/i386/cpu.h | 7 ---
 target/m68k/cpu.h | 8 ---
 target/microblaze/cpu.h | 7 ---
 target/mips/cpu.h | 3 --
 target/mips/internal.h | 2 -
 target/nios2/cpu.h | 2 -
 target/openrisc/cpu.h | 2 -
 target/ppc/cpu-qom.h | 3 ++
 target/ppc/cpu.h | 7 ---
 target/riscv/cpu.h | 2 -
 target/rx/cpu.h | 4 --
 target/s390x/cpu.h | 7 ---
 target/sh4/cpu.h | 3 --
 target/sparc/cpu.h | 2 -
 target/tricore/cpu.h | 2 -
 target/xtensa/cpu.h | 2 -
 tcg/mips/tcg-target.h | 12 ++---
 accel/hvf/hvf-accel-ops.c | 6 +++
 accel/kvm/kvm-accel-ops.c | 6 +++
 accel/qtest/qtest.c | 6 +++
 accel/tcg/cpu-exec.c | 6 ++-
 accel/tcg/tcg-accel-ops.c | 12 +++++
 accel/xen/xen-all.c | 6 +++
 hw/core/cpu-common.c | 6 ---
 softmmu/cpus.c | 10 ++--
 target/alpha/cpu.c | 4 +-
 target/arm/cpu.c | 7 ++-
 target/avr/cpu.c | 2 +-
 target/cris/cpu.c | 4 +-
 target/hexagon/cpu.c | 6 ---
 target/hppa/cpu.c | 4 +-
 target/i386/cpu.c | 6 ---
 target/i386/hax/hax-accel-ops.c | 6 +++
 target/i386/nvmm/nvmm-accel-ops.c | 6 +++
 target/i386/tcg/tcg-cpu.c | 8 ++-
 target/i386/whpx/whpx-accel-ops.c | 6 +++
 target/m68k/cpu.c | 4 +-
 target/microblaze/cpu.c | 8 +--
 target/mips/cpu.c | 4 +-
 target/nios2/cpu.c | 4 +-
 target/openrisc/cpu.c | 4 +-
 target/ppc/cpu_init.c | 37 ++++++++++----
 target/riscv/cpu.c | 8 ++-
 target/rx/cpu.c | 4 +-
 target/s390x/cpu.c | 4 +-
 target/sh4/cpu.c | 5 +-
 target/sparc/cpu.c | 6 +--
 target/tricore/cpu.c | 6 ++-
 target/xtensa/cpu.c | 14 ++---
 tcg/region.c | 91 ---------------------------------
 tcg/mips/tcg-target.c.inc | 105 +++++++++++++-------------------------
 61 files changed, 233 insertions(+), 342 deletions(-)
When this opcode is not available in the backend, tcg middle-end
will expand this as a series of 5 opcodes. So implementing this
saves bytecode space.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tci/tcg-target.h | 4 ++--
 tcg/tci.c | 16 +++++++++++++++-
 tcg/tci/tcg-target.c.inc | 10 +++++++---
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci/tcg-target.h
+++ b/tcg/tci/tcg-target.h
@@ -XXX,XX +XXX,XX @@
 #define TCG_TARGET_HAS_not_i32          1
 #define TCG_TARGET_HAS_orc_i32          0
 #define TCG_TARGET_HAS_rot_i32          1
-#define TCG_TARGET_HAS_movcond_i32      0
+#define TCG_TARGET_HAS_movcond_i32      1
 #define TCG_TARGET_HAS_muls2_i32        0
 #define TCG_TARGET_HAS_muluh_i32        0
 #define TCG_TARGET_HAS_mulsh_i32        0
@@ -XXX,XX +XXX,XX @@
 #define TCG_TARGET_HAS_not_i64          1
 #define TCG_TARGET_HAS_orc_i64          0
 #define TCG_TARGET_HAS_rot_i64          1
-#define TCG_TARGET_HAS_movcond_i64      0
+#define TCG_TARGET_HAS_movcond_i64      1
 #define TCG_TARGET_HAS_muls2_i64        0
 #define TCG_TARGET_HAS_add2_i32         0
 #define TCG_TARGET_HAS_sub2_i32         0
diff --git a/tcg/tci.c b/tcg/tci.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrr(uint32_t insn,
     *r2 = extract32(insn, 16, 4);
     *r3 = extract32(insn, 20, 4);
 }
+#endif
 
 static void tci_args_rrrrrc(uint32_t insn, TCGReg *r0, TCGReg *r1,
                             TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGCond *c5)
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrrrc(uint32_t insn, TCGReg *r0, TCGReg *r1,
     *c5 = extract32(insn, 28, 4);
 }
 
+#if TCG_TARGET_REG_BITS == 32
 static void tci_args_rrrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
                             TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGReg *r5)
 {
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
             tci_args_rrrc(insn, &r0, &r1, &r2, &condition);
             regs[r0] = tci_compare32(regs[r1], regs[r2], condition);
             break;
+        case INDEX_op_movcond_i32:
+            tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition);
+            tmp32 = tci_compare32(regs[r1], regs[r2], condition);
+            regs[r0] = regs[tmp32 ? r3 : r4];
+            break;
 #if TCG_TARGET_REG_BITS == 32
         case INDEX_op_setcond2_i32:
             tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition);
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
             tci_args_rrrc(insn, &r0, &r1, &r2, &condition);
             regs[r0] = tci_compare64(regs[r1], regs[r2], condition);
             break;
+        case INDEX_op_movcond_i64:
+            tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition);
+            tmp32 = tci_compare64(regs[r1], regs[r2], condition);
+            regs[r0] = regs[tmp32 ? r3 : r4];
+            break;
 #endif
         CASE_32_64(mov)
             tci_args_rr(insn, &r0, &r1);
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
                            op_name, str_r(r0), str_r(r1), str_r(r2), pos, len);
         break;
 
-#if TCG_TARGET_REG_BITS == 32
+    case INDEX_op_movcond_i32:
+    case INDEX_op_movcond_i64:
     case INDEX_op_setcond2_i32:
         tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &c);
         info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s, %s",
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
                            str_r(r3), str_r(r4), str_c(c));
         break;
 
+#if TCG_TARGET_REG_BITS == 32
     case INDEX_op_mulu2_i32:
         tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
         info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s",
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
         return C_O0_I4(r, r, r, r);
     case INDEX_op_mulu2_i32:
         return C_O2_I2(r, r, r, r);
+#endif
+
+    case INDEX_op_movcond_i32:
+    case INDEX_op_movcond_i64:
     case INDEX_op_setcond2_i32:
         return C_O1_I4(r, r, r, r, r);
-#endif
 
     case INDEX_op_qemu_ld_i32:
         return (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op,
     insn = deposit32(insn, 20, 4, r3);
     tcg_out32(s, insn);
 }
+#endif
 
 static void tcg_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
                               TCGReg r0, TCGReg r1, TCGReg r2,
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
     tcg_out32(s, insn);
 }
 
+#if TCG_TARGET_REG_BITS == 32
 static void tcg_out_op_rrrrrr(TCGContext *s, TCGOpcode op,
                               TCGReg r0, TCGReg r1, TCGReg r2,
                               TCGReg r3, TCGReg r4, TCGReg r5)
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         tcg_out_op_rrrc(s, opc, args[0], args[1], args[2], args[3]);
         break;
 
-#if TCG_TARGET_REG_BITS == 32
+    CASE_32_64(movcond)
     case INDEX_op_setcond2_i32:
         tcg_out_op_rrrrrc(s, opc, args[0], args[1], args[2],
                           args[3], args[4], args[5]);
        break;
-#endif
 
     CASE_32_64(ld8u)
     CASE_32_64(ld8s)
-- 
2.25.1


There is nothing target specific about this. The implementation
is host specific, but the declaration is 100% common.

Reviewed-By: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h | 13 +++++++++++++
 target/alpha/cpu.h | 6 ------
 target/arm/cpu.h | 7 -------
 target/avr/cpu.h | 2 --
 target/cris/cpu.h | 8 --------
 target/hexagon/cpu.h | 3 ---
 target/hppa/cpu.h | 3 ---
 target/i386/cpu.h | 7 -------
 target/m68k/cpu.h | 8 --------
 target/microblaze/cpu.h | 7 -------
 target/mips/cpu.h | 3 ---
 target/mips/internal.h | 2 --
 target/nios2/cpu.h | 2 --
 target/openrisc/cpu.h | 2 --
 target/ppc/cpu.h | 7 -------
 target/riscv/cpu.h | 2 --
 target/rx/cpu.h | 4 ----
 target/s390x/cpu.h | 7 -------
 target/sh4/cpu.h | 3 ---
 target/sparc/cpu.h | 2 --
 target/tricore/cpu.h | 2 --
 target/xtensa/cpu.h | 2 --
 22 files changed, 13 insertions(+), 89 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ static inline tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env,
     }
     return addr;
 }
+
+/**
+ * cpu_signal_handler
+ * @signum: host signal number
+ * @pinfo: host siginfo_t
+ * @puc: host ucontext_t
+ *
+ * To be called from the SIGBUS and SIGSEGV signal handler to inform the
+ * virtual cpu of exceptions. Returns true if the signal was handled by
+ * the virtual CPU.
+ */
+int cpu_signal_handler(int signum, void *pinfo, void *puc);
+
 #else
 static inline void mmap_lock(void) {}
 static inline void mmap_unlock(void) {}
diff --git a/target/alpha/cpu.h b/target/alpha/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/cpu.h
+++ b/target/alpha/cpu.h
@@ -XXX,XX +XXX,XX @@ void alpha_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
                                    int mmu_idx, uintptr_t retaddr);
 
 #define cpu_list alpha_cpu_list
-#define cpu_signal_handler cpu_alpha_signal_handler
 
 typedef CPUAlphaState CPUArchState;
 typedef AlphaCPU ArchCPU;
@@ -XXX,XX +XXX,XX @@ void alpha_translate_init(void);
 #define CPU_RESOLVING_TYPE TYPE_ALPHA_CPU
 
 void alpha_cpu_list(void);
-/* you can call this signal handler from your SIGBUS and SIGSEGV
-   signal handlers to inform the virtual CPU of exceptions. non zero
-   is returned if the signal was handled by the virtual CPU. */
-int cpu_alpha_signal_handler(int host_signum, void *pinfo,
-                             void *puc);
 bool alpha_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                         MMUAccessType access_type, int mmu_idx,
                         bool probe, uintptr_t retaddr);
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool is_a64(CPUARMState *env)
     return env->aarch64;
 }
 
-/* you can call this signal handler from your SIGBUS and SIGSEGV
-   signal handlers to inform the virtual CPU of exceptions. non zero
-   is returned if the signal was handled by the virtual CPU. */
-int cpu_arm_signal_handler(int host_signum, void *pinfo,
-                           void *puc);
-
 /**
  * pmu_op_start/finish
  * @env: CPUARMState
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_ARM_CPU
 
-#define cpu_signal_handler cpu_arm_signal_handler
 #define cpu_list arm_cpu_list
 
 /* ARM has the following "translation regimes" (as the ARM ARM calls them):
diff --git a/target/avr/cpu.h b/target/avr/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/avr/cpu.h
+++ b/target/avr/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void set_avr_feature(CPUAVRState *env, int feature)
 }
 
 #define cpu_list avr_cpu_list
-#define cpu_signal_handler cpu_avr_signal_handler
 #define cpu_mmu_index avr_cpu_mmu_index
 
 static inline int avr_cpu_mmu_index(CPUAVRState *env, bool ifetch)
@@ -XXX,XX +XXX,XX @@ void avr_cpu_tcg_init(void);
 
 void avr_cpu_list(void);
 int cpu_avr_exec(CPUState *cpu);
-int cpu_avr_signal_handler(int host_signum, void *pinfo, void *puc);
 int avr_cpu_memory_rw_debug(CPUState *cs, vaddr address, uint8_t *buf,
                             int len, bool is_write);
 
diff --git a/target/cris/cpu.h b/target/cris/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/cris/cpu.h
+++ b/target/cris/cpu.h
@@ -XXX,XX +XXX,XX @@ int crisv10_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
 int cris_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
 int cris_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
 
-/* you can call this signal handler from your SIGBUS and SIGSEGV
-   signal handlers to inform the virtual CPU of exceptions. non zero
-   is returned if the signal was handled by the virtual CPU. */
-int cpu_cris_signal_handler(int host_signum, void *pinfo,
-                            void *puc);
-
 void cris_initialize_tcg(void);
 void cris_initialize_crisv10_tcg(void);
 
@@ -XXX,XX +XXX,XX @@ enum {
 #define CRIS_CPU_TYPE_NAME(name) (name CRIS_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_CRIS_CPU
 
-#define cpu_signal_handler cpu_cris_signal_handler
-
 /* MMU modes definitions */
 #define MMU_USER_IDX 1
 static inline int cpu_mmu_index (CPUCRISState *env, bool ifetch)
diff --git a/target/hexagon/cpu.h b/target/hexagon/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.h
+++ b/target/hexagon/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct HexagonCPU {
 
 #include "cpu_bits.h"
 
-#define cpu_signal_handler cpu_hexagon_signal_handler
-int cpu_hexagon_signal_handler(int host_signum, void *pinfo, void *puc);
-
 static inline void cpu_get_tb_cpu_state(CPUHexagonState *env, target_ulong *pc,
                                         target_ulong *cs_base, uint32_t *flags)
 {
diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void cpu_hppa_change_prot_id(CPUHPPAState *env) { }
 void cpu_hppa_change_prot_id(CPUHPPAState *env);
 #endif
 
-#define cpu_signal_handler cpu_hppa_signal_handler
-
-int cpu_hppa_signal_handler(int host_signum, void *pinfo, void *puc);
 hwaddr hppa_cpu_get_phys_page_debug(CPUState *cs, vaddr addr);
 int hppa_cpu_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg);
 int hppa_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -XXX,XX +XXX,XX @@ void cpu_x86_frstor(CPUX86State *s, target_ulong ptr, int data32);
 void cpu_x86_fxsave(CPUX86State *s, target_ulong ptr);
 void cpu_x86_fxrstor(CPUX86State *s, target_ulong ptr);
 
-/* you can call this signal handler from your SIGBUS and SIGSEGV
-   signal handlers to inform the virtual CPU of exceptions. non zero
-   is returned if the signal was handled by the virtual CPU. */
-int cpu_x86_signal_handler(int host_signum, void *pinfo,
-                           void *puc);
-
 /* cpu.c */
 void x86_cpu_vendor_words2str(char *dst, uint32_t vendor1,
                               uint32_t vendor2, uint32_t vendor3);
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_get_tsc(CPUX86State *env);
 #define TARGET_DEFAULT_CPU_TYPE X86_CPU_TYPE_NAME("qemu32")
 #endif
 
-#define cpu_signal_handler cpu_x86_signal_handler
 #define cpu_list x86_cpu_list
 
 /* MMU modes definitions */
diff --git a/target/m68k/cpu.h b/target/m68k/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.h
+++ b/target/m68k/cpu.h
@@ -XXX,XX +XXX,XX @@ int m68k_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
 
 void m68k_tcg_init(void);
 void m68k_cpu_init_gdb(M68kCPU *cpu);
-/*
- * you can call this signal handler from your SIGBUS and SIGSEGV
- * signal handlers to inform the virtual CPU of exceptions. non zero
- * is returned if the signal was handled by the virtual CPU.
- */
-int cpu_m68k_signal_handler(int host_signum, void *pinfo,
-                            void *puc);
 uint32_t cpu_m68k_get_ccr(CPUM68KState *env);
 void cpu_m68k_set_ccr(CPUM68KState *env, uint32_t);
 void cpu_m68k_set_sr(CPUM68KState *env, uint32_t);
@@ -XXX,XX +XXX,XX @@ enum {
 #define M68K_CPU_TYPE_NAME(model) model M68K_CPU_TYPE_SUFFIX
 #define CPU_RESOLVING_TYPE TYPE_M68K_CPU
 
-#define cpu_signal_handler cpu_m68k_signal_handler
 #define cpu_list m68k_cpu_list
 
 /* MMU modes definitions */
diff --git a/target/microblaze/cpu.h b/target/microblaze/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.h
+++ b/target/microblaze/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void mb_cpu_write_msr(CPUMBState *env, uint32_t val)
 }
 
 void mb_tcg_init(void);
-/* you can call this signal handler from your SIGBUS and SIGSEGV
-   signal handlers to inform the virtual CPU of exceptions. non zero
-   is returned if the signal was handled by the virtual CPU. */
-int cpu_mb_signal_handler(int host_signum, void *pinfo,
-                          void *puc);
 
 #define CPU_RESOLVING_TYPE TYPE_MICROBLAZE_CPU
 
-#define cpu_signal_handler cpu_mb_signal_handler
-
 /* MMU modes definitions */
 #define MMU_NOMMU_IDX 0
 #define MMU_KERNEL_IDX 1
diff --git a/target/mips/cpu.h b/target/mips/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/cpu.h
+++ b/target/mips/cpu.h
@@ -XXX,XX +XXX,XX @@ struct MIPSCPU {
 
 void mips_cpu_list(void);
 
-#define cpu_signal_handler cpu_mips_signal_handler
 #define cpu_list mips_cpu_list
 
 extern void cpu_wrdsp(uint32_t rs, uint32_t mask_num, CPUMIPSState *env);
@@ -XXX,XX +XXX,XX @@ enum {
  */
 #define CPU_INTERRUPT_WAKE CPU_INTERRUPT_TGT_INT_0
 
-int cpu_mips_signal_handler(int host_signum, void *pinfo, void *puc);
-
 #define MIPS_CPU_TYPE_SUFFIX "-" TYPE_MIPS_CPU
 #define MIPS_CPU_TYPE_NAME(model) model MIPS_CPU_TYPE_SUFFIX
 #define CPU_RESOLVING_TYPE TYPE_MIPS_CPU
diff --git a/target/mips/internal.h b/target/mips/internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/internal.h
+++ b/target/mips/internal.h
@@ -XXX,XX +XXX,XX @@ extern const VMStateDescription vmstate_mips_cpu;
 
 #endif /* !CONFIG_USER_ONLY */
 
-#define cpu_signal_handler cpu_mips_signal_handler
-
 static inline bool cpu_mips_hw_interrupts_enabled(CPUMIPSState *env)
 {
     return (env->CP0_Status & (1 << CP0St_IE)) &&
diff --git a/target/nios2/cpu.h b/target/nios2/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/nios2/cpu.h
+++ b/target/nios2/cpu.h
@@ -XXX,XX +XXX,XX @@ struct Nios2CPU {
 
 void nios2_tcg_init(void);
 void nios2_cpu_do_interrupt(CPUState *cs);
-int cpu_nios2_signal_handler(int host_signum, void *pinfo, void *puc);
 void dump_mmu(CPUNios2State *env);
 void nios2_cpu_dump_state(CPUState *cpu, FILE *f, int flags);
 hwaddr nios2_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
@@ -XXX,XX +XXX,XX @@ void do_nios2_semihosting(CPUNios2State *env);
 #define CPU_RESOLVING_TYPE TYPE_NIOS2_CPU
 
 #define cpu_gen_code cpu_nios2_gen_code
-#define cpu_signal_handler cpu_nios2_signal_handler
 
 #define CPU_SAVE_VERSION 1
 
diff --git a/target/openrisc/cpu.h b/target/openrisc/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.h
+++ b/target/openrisc/cpu.h
@@ -XXX,XX +XXX,XX @@ void openrisc_translate_init(void);
 bool openrisc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                            MMUAccessType access_type, int mmu_idx,
                            bool probe, uintptr_t retaddr);
-int cpu_openrisc_signal_handler(int host_signum, void *pinfo, void *puc);
 int print_insn_or1k(bfd_vma addr, disassemble_info *info);
 
 #define cpu_list cpu_openrisc_list
-#define cpu_signal_handler cpu_openrisc_signal_handler
 
 #ifndef CONFIG_USER_ONLY
 extern const VMStateDescription vmstate_openrisc_cpu;
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -XXX,XX +XXX,XX @@ extern const VMStateDescription vmstate_ppc_cpu;
 
 /*****************************************************************************/
 void ppc_translate_init(void);
-/*
- * you can call this signal handler from your SIGBUS and SIGSEGV
- * signal handlers to inform the virtual CPU of exceptions. non zero
- * is returned if the signal was handled by the virtual CPU.
- */
-int cpu_ppc_signal_handler(int host_signum, void *pinfo, void *puc);
 bool ppc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                       MMUAccessType access_type, int mmu_idx,
                       bool probe, uintptr_t retaddr);
@@ -XXX,XX +XXX,XX @@ int ppc_dcr_write(ppc_dcr_t *dcr_env, int dcrn, uint32_t val);
 #define POWERPC_CPU_TYPE_NAME(model) model POWERPC_CPU_TYPE_SUFFIX
 #define CPU_RESOLVING_TYPE TYPE_POWERPC_CPU
 
-#define cpu_signal_handler cpu_ppc_signal_handler
 #define cpu_list ppc_cpu_list
 
 /* MMU modes definitions */
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
 char *riscv_isa_string(RISCVCPU *cpu);
 void riscv_cpu_list(void);
 
-#define cpu_signal_handler riscv_cpu_signal_handler
 #define cpu_list riscv_cpu_list
 #define cpu_mmu_index riscv_cpu_mmu_index
 
@@ -XXX,XX +XXX,XX @@ void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(uint32_t),
 void riscv_cpu_set_mode(CPURISCVState *env, target_ulong newpriv);
 
 void riscv_translate_init(void);
-int riscv_cpu_signal_handler(int host_signum, void *pinfo, void *puc);
 void QEMU_NORETURN riscv_raise_exception(CPURISCVState *env,
                                          uint32_t exception, uintptr_t pc);
 
diff --git a/target/rx/cpu.h b/target/rx/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/cpu.h
+++ b/target/rx/cpu.h
@@ -XXX,XX +XXX,XX @@ int rx_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
 hwaddr rx_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr);
 
 void rx_translate_init(void);
-int cpu_rx_signal_handler(int host_signum, void *pinfo,
-                          void *puc);
-
 void rx_cpu_list(void);
 void rx_cpu_unpack_psw(CPURXState *env, uint32_t psw, int rte);
 
-#define cpu_signal_handler cpu_rx_signal_handler
 #define cpu_list rx_cpu_list
 
 #include "exec/cpu-all.h"
diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.h
+++ b/target/s390x/cpu.h
@@ -XXX,XX +XXX,XX @@ void s390_set_qemu_cpu_model(uint16_t type, uint8_t gen, uint8_t ec_ga,
 #define S390_CPU_TYPE_NAME(name) (name S390_CPU_TYPE_SUFFIX)
 #define CPU_RESOLVING_TYPE TYPE_S390_CPU
 
-/* you can call this signal handler from your SIGBUS and SIGSEGV
-   signal handlers to inform the virtual CPU of exceptions. non zero
-   is returned if the signal was handled by the virtual CPU. */
-int cpu_s390x_signal_handler(int host_signum, void *pinfo, void *puc);
-#define cpu_signal_handler cpu_s390x_signal_handler
-
-
 /* interrupt.c */
 #define RA_IGNORED 0
 void s390_program_interrupt(CPUS390XState *env, uint32_t code, uintptr_t ra);
diff --git a/target/sh4/cpu.h b/target/sh4/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/cpu.h
+++ b/target/sh4/cpu.h
@@ -XXX,XX +XXX,XX @@ void superh_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
                                    int mmu_idx, uintptr_t retaddr);
 
 void sh4_translate_init(void);
-int cpu_sh4_signal_handler(int host_signum, void *pinfo,
-                           void *puc);
 bool superh_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                          MMUAccessType access_type, int mmu_idx,
                          bool probe, uintptr_t retaddr);
@@ -XXX,XX +XXX,XX @@ void cpu_load_tlb(CPUSH4State * env);
 #define SUPERH_CPU_TYPE_NAME(model) model SUPERH_CPU_TYPE_SUFFIX
 #define CPU_RESOLVING_TYPE TYPE_SUPERH_CPU
 
-#define cpu_signal_handler cpu_sh4_signal_handler
 #define cpu_list sh4_cpu_list
 
 /* MMU modes definitions */
diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -XXX,XX +XXX,XX @@ hwaddr cpu_get_phys_page_nofault(CPUSPARCState *env, target_ulong addr,
                                  int mmu_idx);
 #endif
 #endif
-int cpu_sparc_signal_handler(int host_signum, void *pinfo, void *puc);
 
 #define SPARC_CPU_TYPE_SUFFIX "-" TYPE_SPARC_CPU
 #define SPARC_CPU_TYPE_NAME(model) model SPARC_CPU_TYPE_SUFFIX
 #define CPU_RESOLVING_TYPE TYPE_SPARC_CPU
 
-#define cpu_signal_handler cpu_sparc_signal_handler
 #define cpu_list sparc_cpu_list
 
 /* MMU modes definitions */
diff --git a/target/tricore/cpu.h b/target/tricore/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/cpu.h
+++ b/target/tricore/cpu.h
@@ -XXX,XX +XXX,XX @@ void fpu_set_state(CPUTriCoreState *env);
 
 void tricore_cpu_list(void);
 
-#define cpu_signal_handler cpu_tricore_signal_handler
 #define cpu_list tricore_cpu_list
 
 static inline int cpu_mmu_index(CPUTriCoreState *env, bool ifetch)
@@ -XXX,XX +XXX,XX @@ typedef TriCoreCPU ArchCPU;
 
 void cpu_state_reset(CPUTriCoreState *s);
 void tricore_tcg_init(void);
-int cpu_tricore_signal_handler(int host_signum, void *pinfo, void *puc);
 
 static inline void cpu_get_tb_cpu_state(CPUTriCoreState *env, target_ulong *pc,
                                         target_ulong *cs_base, uint32_t *flags)
diff --git a/target/xtensa/cpu.h b/target/xtensa/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.h
+++ b/target/xtensa/cpu.h
@@ -XXX,XX +XXX,XX @@ void xtensa_cpu_do_unaligned_access(CPUState *cpu, vaddr addr,
                                     MMUAccessType access_type,
                                     int mmu_idx, uintptr_t retaddr);
 
-#define cpu_signal_handler cpu_xtensa_signal_handler
 #define cpu_list xtensa_cpu_list
 
 #define XTENSA_CPU_TYPE_SUFFIX "-" TYPE_XTENSA_CPU
@@ -XXX,XX +XXX,XX @@ void check_interrupts(CPUXtensaState *s);
 void xtensa_irq_init(CPUXtensaState *env);
 qemu_irq *xtensa_get_extints(CPUXtensaState *env);
 qemu_irq xtensa_get_runstall(CPUXtensaState *env);
-int cpu_xtensa_signal_handler(int host_signum, void *pinfo, void *puc);
 void xtensa_cpu_list(void);
 void xtensa_sync_window_from_phys(CPUXtensaState *env);
 void xtensa_sync_phys_from_window(CPUXtensaState *env);
-- 
2.25.1
diff view generated by jsdifflib
New patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Commit 372579427a5 ("tcg: enable thread-per-vCPU") added the following
comment describing EXCP_HALTED in qemu_tcg_cpu_thread_fn():

    case EXCP_HALTED:
        /* during start-up the vCPU is reset and the thread is
         * kicked several times. If we don't ensure we go back
         * to sleep in the halted state we won't cleanly
         * start-up when the vCPU is enabled.
         *
         * cpu->halted should ensure we sleep in wait_io_event
         */
        g_assert(cpu->halted);
        break;

qemu_wait_io_event() is sysemu-specific, so we can restrict the
cpu_handle_halt() call in cpu_exec() to system emulation.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-2-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cpu-exec.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
 
 static inline bool cpu_handle_halt(CPUState *cpu)
 {
+#ifndef CONFIG_USER_ONLY
     if (cpu->halted) {
-#if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
+#if defined(TARGET_I386)
         if (cpu->interrupt_request & CPU_INTERRUPT_POLL) {
             X86CPU *x86_cpu = X86_CPU(cpu);
             qemu_mutex_lock_iothread();
@@ -XXX,XX +XXX,XX @@ static inline bool cpu_handle_halt(CPUState *cpu)
             cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
             qemu_mutex_unlock_iothread();
         }
-#endif
+#endif /* TARGET_I386 */
         if (!cpu_has_work(cpu)) {
             return true;
         }
 
         cpu->halted = 0;
     }
+#endif /* !CONFIG_USER_ONLY */
 
     return false;
 }
--
2.25.1

1
From: Alessandro Di Federico <ale@rev.ng>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
This commit moves into a separate file routines used to manipulate
3
cpu_has_work() is only called from system emulation code.
4
TCGCond. These will be employed by the idef-parser.
5
4
6
Signed-off-by: Alessandro Di Federico <ale@rev.ng>
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Paolo Montesel <babush@rev.ng>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-Id: <20210619093713.1845446-2-ale.qemu@rev.ng>
7
Message-Id: <20210912172731.789788-3-f4bug@amsat.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
---
9
---
11
include/tcg/tcg-cond.h | 101 +++++++++++++++++++++++++++++++++++++++++
10
include/hw/core/cpu.h | 32 ++++++++++++++++----------------
12
include/tcg/tcg.h | 70 +---------------------------
11
1 file changed, 16 insertions(+), 16 deletions(-)
13
2 files changed, 102 insertions(+), 69 deletions(-)
14
create mode 100644 include/tcg/tcg-cond.h
15
12
16
diff --git a/include/tcg/tcg-cond.h b/include/tcg/tcg-cond.h
13
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
17
new file mode 100644
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX
15
--- a/include/hw/core/cpu.h
19
--- /dev/null
16
+++ b/include/hw/core/cpu.h
20
+++ b/include/tcg/tcg-cond.h
17
@@ -XXX,XX +XXX,XX @@ enum CPUDumpFlags {
21
@@ -XXX,XX +XXX,XX @@
18
void cpu_dump_state(CPUState *cpu, FILE *f, int flags);
22
+/*
19
23
+ * Tiny Code Generator for QEMU
20
#ifndef CONFIG_USER_ONLY
21
+/**
22
+ * cpu_has_work:
23
+ * @cpu: The vCPU to check.
24
+ *
24
+ *
25
+ * Copyright (c) 2008 Fabrice Bellard
25
+ * Checks whether the CPU has work to do.
26
+ *
26
+ *
27
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
27
+ * Returns: %true if the CPU has work, %false otherwise.
28
+ * of this software and associated documentation files (the "Software"), to deal
29
+ * in the Software without restriction, including without limitation the rights
30
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
31
+ * copies of the Software, and to permit persons to whom the Software is
32
+ * furnished to do so, subject to the following conditions:
33
+ *
34
+ * The above copyright notice and this permission notice shall be included in
35
+ * all copies or substantial portions of the Software.
36
+ *
37
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
38
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
39
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
40
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
41
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
42
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
43
+ * THE SOFTWARE.
44
+ */
28
+ */
29
+static inline bool cpu_has_work(CPUState *cpu)
30
+{
31
+ CPUClass *cc = CPU_GET_CLASS(cpu);
45
+
32
+
46
+#ifndef TCG_COND_H
33
+ g_assert(cc->has_work);
47
+#define TCG_COND_H
34
+ return cc->has_work(cpu);
48
+
49
+/*
50
+ * Conditions. Note that these are laid out for easy manipulation by
51
+ * the functions below:
52
+ * bit 0 is used for inverting;
53
+ * bit 1 is signed,
54
+ * bit 2 is unsigned,
55
+ * bit 3 is used with bit 0 for swapping signed/unsigned.
56
+ */
57
+typedef enum {
58
+ /* non-signed */
59
+ TCG_COND_NEVER = 0 | 0 | 0 | 0,
60
+ TCG_COND_ALWAYS = 0 | 0 | 0 | 1,
61
+ TCG_COND_EQ = 8 | 0 | 0 | 0,
62
+ TCG_COND_NE = 8 | 0 | 0 | 1,
63
+ /* signed */
64
+ TCG_COND_LT = 0 | 0 | 2 | 0,
65
+ TCG_COND_GE = 0 | 0 | 2 | 1,
66
+ TCG_COND_LE = 8 | 0 | 2 | 0,
67
+ TCG_COND_GT = 8 | 0 | 2 | 1,
68
+ /* unsigned */
69
+ TCG_COND_LTU = 0 | 4 | 0 | 0,
70
+ TCG_COND_GEU = 0 | 4 | 0 | 1,
71
+ TCG_COND_LEU = 8 | 4 | 0 | 0,
72
+ TCG_COND_GTU = 8 | 4 | 0 | 1,
73
+} TCGCond;
74
+
75
+/* Invert the sense of the comparison. */
76
+static inline TCGCond tcg_invert_cond(TCGCond c)
77
+{
78
+ return (TCGCond)(c ^ 1);
79
+}
35
+}
80
+
36
+
81
+/* Swap the operands in a comparison. */
37
/**
82
+static inline TCGCond tcg_swap_cond(TCGCond c)
38
* cpu_get_phys_page_attrs_debug:
83
+{
39
* @cpu: The CPU to obtain the physical page address for.
84
+ return c & 6 ? (TCGCond)(c ^ 9) : c;
40
@@ -XXX,XX +XXX,XX @@ CPUState *cpu_create(const char *typename);
85
+}
41
*/
86
+
42
const char *parse_cpu_option(const char *cpu_option);
87
+/* Create an "unsigned" version of a "signed" comparison. */
43
88
+static inline TCGCond tcg_unsigned_cond(TCGCond c)
44
-/**
89
+{
45
- * cpu_has_work:
90
+ return c & 2 ? (TCGCond)(c ^ 6) : c;
46
- * @cpu: The vCPU to check.
91
+}
47
- *
92
+
48
- * Checks whether the CPU has work to do.
93
+/* Create a "signed" version of an "unsigned" comparison. */
49
- *
94
+static inline TCGCond tcg_signed_cond(TCGCond c)
50
- * Returns: %true if the CPU has work, %false otherwise.
95
+{
51
- */
96
+ return c & 4 ? (TCGCond)(c ^ 6) : c;
52
-static inline bool cpu_has_work(CPUState *cpu)
97
+}
53
-{
98
+
54
- CPUClass *cc = CPU_GET_CLASS(cpu);
99
+/* Must a comparison be considered unsigned? */
100
+static inline bool is_unsigned_cond(TCGCond c)
101
+{
102
+ return (c & 4) != 0;
103
+}
104
+
105
+/*
106
+ * Create a "high" version of a double-word comparison.
107
+ * This removes equality from a LTE or GTE comparison.
108
+ */
109
+static inline TCGCond tcg_high_cond(TCGCond c)
110
+{
111
+ switch (c) {
112
+ case TCG_COND_GE:
113
+ case TCG_COND_LE:
114
+ case TCG_COND_GEU:
115
+ case TCG_COND_LEU:
116
+ return (TCGCond)(c ^ 8);
117
+ default:
118
+ return c;
119
+ }
120
+}
121
+
122
+#endif /* TCG_COND_H */
123
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
124
index XXXXXXX..XXXXXXX 100644
125
--- a/include/tcg/tcg.h
126
+++ b/include/tcg/tcg.h
127
@@ -XXX,XX +XXX,XX @@
128
#include "tcg/tcg-mo.h"
129
#include "tcg-target.h"
130
#include "qemu/int128.h"
131
+#include "tcg/tcg-cond.h"
132
133
/* XXX: make safe guess about sizes */
134
#define MAX_OP_PER_INSTR 266
135
@@ -XXX,XX +XXX,XX @@ typedef TCGv_ptr TCGv_env;
136
/* Used to align parameters. See the comment before tcgv_i32_temp. */
137
#define TCG_CALL_DUMMY_ARG ((TCGArg)0)
138
139
-/* Conditions. Note that these are laid out for easy manipulation by
140
- the functions below:
141
- bit 0 is used for inverting;
142
- bit 1 is signed,
143
- bit 2 is unsigned,
144
- bit 3 is used with bit 0 for swapping signed/unsigned. */
145
-typedef enum {
146
- /* non-signed */
147
- TCG_COND_NEVER = 0 | 0 | 0 | 0,
148
- TCG_COND_ALWAYS = 0 | 0 | 0 | 1,
149
- TCG_COND_EQ = 8 | 0 | 0 | 0,
150
- TCG_COND_NE = 8 | 0 | 0 | 1,
151
- /* signed */
152
- TCG_COND_LT = 0 | 0 | 2 | 0,
153
- TCG_COND_GE = 0 | 0 | 2 | 1,
154
- TCG_COND_LE = 8 | 0 | 2 | 0,
155
- TCG_COND_GT = 8 | 0 | 2 | 1,
156
- /* unsigned */
157
- TCG_COND_LTU = 0 | 4 | 0 | 0,
158
- TCG_COND_GEU = 0 | 4 | 0 | 1,
159
- TCG_COND_LEU = 8 | 4 | 0 | 0,
160
- TCG_COND_GTU = 8 | 4 | 0 | 1,
161
-} TCGCond;
162
-
55
-
163
-/* Invert the sense of the comparison. */
56
- g_assert(cc->has_work);
164
-static inline TCGCond tcg_invert_cond(TCGCond c)
57
- return cc->has_work(cpu);
165
-{
166
- return (TCGCond)(c ^ 1);
167
-}
58
-}
168
-
59
-
169
-/* Swap the operands in a comparison. */
60
/**
170
-static inline TCGCond tcg_swap_cond(TCGCond c)
61
* qemu_cpu_is_self:
171
-{
62
* @cpu: The vCPU to check against.
172
- return c & 6 ? (TCGCond)(c ^ 9) : c;
173
-}
174
-
175
-/* Create an "unsigned" version of a "signed" comparison. */
176
-static inline TCGCond tcg_unsigned_cond(TCGCond c)
177
-{
178
- return c & 2 ? (TCGCond)(c ^ 6) : c;
179
-}
180
-
181
-/* Create a "signed" version of an "unsigned" comparison. */
182
-static inline TCGCond tcg_signed_cond(TCGCond c)
183
-{
184
- return c & 4 ? (TCGCond)(c ^ 6) : c;
185
-}
186
-
187
-/* Must a comparison be considered unsigned? */
188
-static inline bool is_unsigned_cond(TCGCond c)
189
-{
190
- return (c & 4) != 0;
191
-}
192
-
193
-/* Create a "high" version of a double-word comparison.
194
- This removes equality from a LTE or GTE comparison. */
195
-static inline TCGCond tcg_high_cond(TCGCond c)
196
-{
197
- switch (c) {
198
- case TCG_COND_GE:
199
- case TCG_COND_LE:
200
- case TCG_COND_GEU:
201
- case TCG_COND_LEU:
202
- return (TCGCond)(c ^ 8);
203
- default:
204
- return c;
205
- }
206
-}
207
-
208
typedef enum TCGTempVal {
209
TEMP_VAL_DEAD,
210
TEMP_VAL_REG,
211
--
63
--
212
2.25.1
64
2.25.1
213
65
214
66
1
We can share this code between 32-bit and 64-bit loads and stores.
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
We want to make cpu_has_work() per-accelerator. Only declare its
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
prototype and move its definition to softmmu/cpus.c.
5
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-Id: <20210912172731.789788-4-f4bug@amsat.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
---
10
---
7
tcg/tci.c | 183 +++++++++++++++++++++---------------------------------
11
include/hw/core/cpu.h | 8 +-------
8
1 file changed, 71 insertions(+), 112 deletions(-)
12
softmmu/cpus.c | 8 ++++++++
13
2 files changed, 9 insertions(+), 7 deletions(-)
9
14
10
diff --git a/tcg/tci.c b/tcg/tci.c
15
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
11
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
12
--- a/tcg/tci.c
17
--- a/include/hw/core/cpu.h
13
+++ b/tcg/tci.c
18
+++ b/include/hw/core/cpu.h
14
@@ -XXX,XX +XXX,XX @@ static bool tci_compare64(uint64_t u0, uint64_t u1, TCGCond condition)
19
@@ -XXX,XX +XXX,XX @@ void cpu_dump_state(CPUState *cpu, FILE *f, int flags);
15
#define qemu_st_beq(X) \
20
*
16
cpu_stq_be_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
21
* Returns: %true if the CPU has work, %false otherwise.
17
22
*/
18
+static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
23
-static inline bool cpu_has_work(CPUState *cpu)
19
+ TCGMemOpIdx oi, const void *tb_ptr)
24
-{
25
- CPUClass *cc = CPU_GET_CLASS(cpu);
26
-
27
- g_assert(cc->has_work);
28
- return cc->has_work(cpu);
29
-}
30
+bool cpu_has_work(CPUState *cpu);
31
32
/**
33
* cpu_get_phys_page_attrs_debug:
34
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/softmmu/cpus.c
37
+++ b/softmmu/cpus.c
38
@@ -XXX,XX +XXX,XX @@ void cpu_interrupt(CPUState *cpu, int mask)
39
}
40
}
41
42
+bool cpu_has_work(CPUState *cpu)
20
+{
43
+{
21
+ MemOp mop = get_memop(oi) & (MO_BSWAP | MO_SSIZE);
44
+ CPUClass *cc = CPU_GET_CLASS(cpu);
22
+
45
+
23
+ switch (mop) {
46
+ g_assert(cc->has_work);
24
+ case MO_UB:
47
+ return cc->has_work(cpu);
25
+ return qemu_ld_ub;
26
+ case MO_SB:
27
+ return (int8_t)qemu_ld_ub;
28
+ case MO_LEUW:
29
+ return qemu_ld_leuw;
30
+ case MO_LESW:
31
+ return (int16_t)qemu_ld_leuw;
32
+ case MO_LEUL:
33
+ return qemu_ld_leul;
34
+ case MO_LESL:
35
+ return (int32_t)qemu_ld_leul;
36
+ case MO_LEQ:
37
+ return qemu_ld_leq;
38
+ case MO_BEUW:
39
+ return qemu_ld_beuw;
40
+ case MO_BESW:
41
+ return (int16_t)qemu_ld_beuw;
42
+ case MO_BEUL:
43
+ return qemu_ld_beul;
44
+ case MO_BESL:
45
+ return (int32_t)qemu_ld_beul;
46
+ case MO_BEQ:
47
+ return qemu_ld_beq;
48
+ default:
49
+ g_assert_not_reached();
50
+ }
51
+}
48
+}
52
+
49
+
53
+static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
50
static int do_vm_stop(RunState state, bool send_stop)
54
+ TCGMemOpIdx oi, const void *tb_ptr)
51
{
55
+{
52
int ret = 0;
56
+ MemOp mop = get_memop(oi) & (MO_BSWAP | MO_SSIZE);
57
+
58
+ switch (mop) {
59
+ case MO_UB:
60
+ qemu_st_b(val);
61
+ break;
62
+ case MO_LEUW:
63
+ qemu_st_lew(val);
64
+ break;
65
+ case MO_LEUL:
66
+ qemu_st_lel(val);
67
+ break;
68
+ case MO_LEQ:
69
+ qemu_st_leq(val);
70
+ break;
71
+ case MO_BEUW:
72
+ qemu_st_bew(val);
73
+ break;
74
+ case MO_BEUL:
75
+ qemu_st_bel(val);
76
+ break;
77
+ case MO_BEQ:
78
+ qemu_st_beq(val);
79
+ break;
80
+ default:
81
+ g_assert_not_reached();
82
+ }
83
+}
84
+
85
#if TCG_TARGET_REG_BITS == 64
86
# define CASE_32_64(x) \
87
case glue(glue(INDEX_op_, x), _i64): \
88
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
89
tci_args_rrrm(insn, &r0, &r1, &r2, &oi);
90
taddr = tci_uint64(regs[r2], regs[r1]);
91
}
92
- switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) {
93
- case MO_UB:
94
- tmp32 = qemu_ld_ub;
95
- break;
96
- case MO_SB:
97
- tmp32 = (int8_t)qemu_ld_ub;
98
- break;
99
- case MO_LEUW:
100
- tmp32 = qemu_ld_leuw;
101
- break;
102
- case MO_LESW:
103
- tmp32 = (int16_t)qemu_ld_leuw;
104
- break;
105
- case MO_LEUL:
106
- tmp32 = qemu_ld_leul;
107
- break;
108
- case MO_BEUW:
109
- tmp32 = qemu_ld_beuw;
110
- break;
111
- case MO_BESW:
112
- tmp32 = (int16_t)qemu_ld_beuw;
113
- break;
114
- case MO_BEUL:
115
- tmp32 = qemu_ld_beul;
116
- break;
117
- default:
118
- g_assert_not_reached();
119
- }
120
+ tmp32 = tci_qemu_ld(env, taddr, oi, tb_ptr);
121
regs[r0] = tmp32;
122
break;
123
124
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
125
taddr = tci_uint64(regs[r3], regs[r2]);
126
oi = regs[r4];
127
}
128
- switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) {
129
- case MO_UB:
130
- tmp64 = qemu_ld_ub;
131
- break;
132
- case MO_SB:
133
- tmp64 = (int8_t)qemu_ld_ub;
134
- break;
135
- case MO_LEUW:
136
- tmp64 = qemu_ld_leuw;
137
- break;
138
- case MO_LESW:
139
- tmp64 = (int16_t)qemu_ld_leuw;
140
- break;
141
- case MO_LEUL:
142
- tmp64 = qemu_ld_leul;
143
- break;
144
- case MO_LESL:
145
- tmp64 = (int32_t)qemu_ld_leul;
146
- break;
147
- case MO_LEQ:
148
- tmp64 = qemu_ld_leq;
149
- break;
150
- case MO_BEUW:
151
- tmp64 = qemu_ld_beuw;
152
- break;
153
- case MO_BESW:
154
- tmp64 = (int16_t)qemu_ld_beuw;
155
- break;
156
- case MO_BEUL:
157
- tmp64 = qemu_ld_beul;
158
- break;
159
- case MO_BESL:
160
- tmp64 = (int32_t)qemu_ld_beul;
161
- break;
162
- case MO_BEQ:
163
- tmp64 = qemu_ld_beq;
164
- break;
165
- default:
166
- g_assert_not_reached();
167
- }
168
+ tmp64 = tci_qemu_ld(env, taddr, oi, tb_ptr);
169
if (TCG_TARGET_REG_BITS == 32) {
170
tci_write_reg64(regs, r1, r0, tmp64);
171
} else {
172
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
173
taddr = tci_uint64(regs[r2], regs[r1]);
174
}
175
tmp32 = regs[r0];
176
- switch (get_memop(oi) & (MO_BSWAP | MO_SIZE)) {
177
- case MO_UB:
178
- qemu_st_b(tmp32);
179
- break;
180
- case MO_LEUW:
181
- qemu_st_lew(tmp32);
182
- break;
183
- case MO_LEUL:
184
- qemu_st_lel(tmp32);
185
- break;
186
- case MO_BEUW:
187
- qemu_st_bew(tmp32);
188
- break;
189
- case MO_BEUL:
190
- qemu_st_bel(tmp32);
191
- break;
192
- default:
193
- g_assert_not_reached();
194
- }
195
+ tci_qemu_st(env, taddr, tmp32, oi, tb_ptr);
196
break;
197
198
case INDEX_op_qemu_st_i64:
199
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
200
}
201
tmp64 = tci_uint64(regs[r1], regs[r0]);
202
}
203
- switch (get_memop(oi) & (MO_BSWAP | MO_SIZE)) {
204
- case MO_UB:
205
- qemu_st_b(tmp64);
206
- break;
207
- case MO_LEUW:
208
- qemu_st_lew(tmp64);
209
- break;
210
- case MO_LEUL:
211
- qemu_st_lel(tmp64);
212
- break;
213
- case MO_LEQ:
214
- qemu_st_leq(tmp64);
215
- break;
216
- case MO_BEUW:
217
- qemu_st_bew(tmp64);
218
- break;
219
- case MO_BEUL:
220
- qemu_st_bel(tmp64);
221
- break;
222
- case MO_BEQ:
223
- qemu_st_beq(tmp64);
224
- break;
225
- default:
226
- g_assert_not_reached();
227
- }
228
+ tci_qemu_st(env, taddr, tmp64, oi, tb_ptr);
229
break;
230
231
case INDEX_op_mb:
232
--
53
--
233
2.25.1
54
2.25.1
234
55
235
56
New patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Introduce an accelerator-specific has_work() handler.
Eventually call it from cpu_has_work().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-5-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/sysemu/accel-ops.h | 5 +++++
 softmmu/cpus.c             | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/accel-ops.h b/include/sysemu/accel-ops.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/accel-ops.h
+++ b/include/sysemu/accel-ops.h
@@ -XXX,XX +XXX,XX @@ struct AccelOpsClass {
     void (*create_vcpu_thread)(CPUState *cpu); /* MANDATORY NON-NULL */
     void (*kick_vcpu_thread)(CPUState *cpu);
 
+    /**
+     * @has_work: Callback for checking if there is work to do.
+     */
+    bool (*has_work)(CPUState *cpu);
+
     void (*synchronize_post_reset)(CPUState *cpu);
     void (*synchronize_post_init)(CPUState *cpu);
     void (*synchronize_state)(CPUState *cpu);
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -XXX,XX +XXX,XX @@ bool cpu_has_work(CPUState *cpu)
 {
     CPUClass *cc = CPU_GET_CLASS(cpu);
 
+    if (cpus_accel->has_work) {
+        return cpus_accel->has_work(cpu);
+    }
     g_assert(cc->has_work);
     return cc->has_work(cpu);
 }
--
2.25.1

1
The encoding planned for tci does not have enough room for
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
brcond2, with 4 registers and a condition as input as well
3
as the label. Resolve the condition into TCG_REG_TMP, and
4
relax brcond to one register plus a label, considering the
5
condition to always be reg != 0.
6
2
7
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Implement KVM has_work() handler in AccelOpsClass and
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
remove it from cpu_thread_is_idle() since cpu_has_work()
5
is already called.
6
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-Id: <20210912172731.789788-6-f4bug@amsat.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
---
11
---
11
tcg/tci.c | 68 ++++++++++------------------------------
12
accel/kvm/kvm-accel-ops.c | 6 ++++++
12
tcg/tci/tcg-target.c.inc | 52 +++++++++++-------------------
13
softmmu/cpus.c | 2 +-
13
2 files changed, 35 insertions(+), 85 deletions(-)
14
2 files changed, 7 insertions(+), 1 deletion(-)
14
15
15
diff --git a/tcg/tci.c b/tcg/tci.c
16
diff --git a/accel/kvm/kvm-accel-ops.c b/accel/kvm/kvm-accel-ops.c
16
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
17
--- a/tcg/tci.c
18
--- a/accel/kvm/kvm-accel-ops.c
18
+++ b/tcg/tci.c
19
+++ b/accel/kvm/kvm-accel-ops.c
19
@@ -XXX,XX +XXX,XX @@ static void tci_args_nll(const uint8_t **tb_ptr, uint8_t *n0,
20
@@ -XXX,XX +XXX,XX @@ static void kvm_start_vcpu_thread(CPUState *cpu)
20
check_size(start, tb_ptr);
21
cpu, QEMU_THREAD_JOINABLE);
21
}
22
}
22
23
23
+static void tci_args_rl(const uint8_t **tb_ptr, TCGReg *r0, void **l1)
24
+static bool kvm_cpu_has_work(CPUState *cpu)
24
+{
25
+{
25
+ const uint8_t *start = *tb_ptr;
26
+ return kvm_halt_in_kernel();
26
+
27
+ *r0 = tci_read_r(tb_ptr);
28
+ *l1 = (void *)tci_read_label(tb_ptr);
29
+
30
+ check_size(start, tb_ptr);
31
+}
27
+}
32
+
28
+
33
static void tci_args_rr(const uint8_t **tb_ptr,
29
static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
34
TCGReg *r0, TCGReg *r1)
35
{
30
{
36
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrs(const uint8_t **tb_ptr,
31
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
37
check_size(start, tb_ptr);
32
@@ -XXX,XX +XXX,XX @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
33
ops->synchronize_post_init = kvm_cpu_synchronize_post_init;
34
ops->synchronize_state = kvm_cpu_synchronize_state;
35
ops->synchronize_pre_loadvm = kvm_cpu_synchronize_pre_loadvm;
36
+ ops->has_work = kvm_cpu_has_work;
38
}
37
}
39
38
40
-static void tci_args_rrcl(const uint8_t **tb_ptr,
39
static const TypeInfo kvm_accel_ops_type = {
41
- TCGReg *r0, TCGReg *r1, TCGCond *c2, void **l3)
40
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
42
-{
43
- const uint8_t *start = *tb_ptr;
44
-
45
- *r0 = tci_read_r(tb_ptr);
46
- *r1 = tci_read_r(tb_ptr);
47
- *c2 = tci_read_b(tb_ptr);
48
- *l3 = (void *)tci_read_label(tb_ptr);
49
-
50
- check_size(start, tb_ptr);
51
-}
52
-
53
static void tci_args_rrrc(const uint8_t **tb_ptr,
54
TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGCond *c3)
55
{
56
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrr(const uint8_t **tb_ptr,
57
check_size(start, tb_ptr);
58
}
59
60
-static void tci_args_rrrrcl(const uint8_t **tb_ptr, TCGReg *r0, TCGReg *r1,
61
- TCGReg *r2, TCGReg *r3, TCGCond *c4, void **l5)
62
-{
63
- const uint8_t *start = *tb_ptr;
64
-
65
- *r0 = tci_read_r(tb_ptr);
66
- *r1 = tci_read_r(tb_ptr);
67
- *r2 = tci_read_r(tb_ptr);
68
- *r3 = tci_read_r(tb_ptr);
69
- *c4 = tci_read_b(tb_ptr);
70
- *l5 = (void *)tci_read_label(tb_ptr);
71
-
72
- check_size(start, tb_ptr);
73
-}
74
-
75
static void tci_args_rrrrrc(const uint8_t **tb_ptr, TCGReg *r0, TCGReg *r1,
76
TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGCond *c5)
77
{
78
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
79
break;
80
#endif
81
case INDEX_op_brcond_i32:
82
- tci_args_rrcl(&tb_ptr, &r0, &r1, &condition, &ptr);
83
- if (tci_compare32(regs[r0], regs[r1], condition)) {
84
+ tci_args_rl(&tb_ptr, &r0, &ptr);
85
+ if ((uint32_t)regs[r0]) {
86
tb_ptr = ptr;
87
}
88
break;
89
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
90
T2 = tci_uint64(regs[r5], regs[r4]);
91
tci_write_reg64(regs, r1, r0, T1 - T2);
92
break;
93
- case INDEX_op_brcond2_i32:
94
- tci_args_rrrrcl(&tb_ptr, &r0, &r1, &r2, &r3, &condition, &ptr);
95
- T1 = tci_uint64(regs[r1], regs[r0]);
96
- T2 = tci_uint64(regs[r3], regs[r2]);
97
- if (tci_compare64(T1, T2, condition)) {
98
- tb_ptr = ptr;
99
- continue;
100
- }
101
- break;
102
case INDEX_op_mulu2_i32:
103
tci_args_rrrr(&tb_ptr, &r0, &r1, &r2, &r3);
104
tci_write_reg64(regs, r1, r0, (uint64_t)regs[r2] * regs[r3]);
105
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
106
break;
107
#endif
108
case INDEX_op_brcond_i64:
109
- tci_args_rrcl(&tb_ptr, &r0, &r1, &condition, &ptr);
110
- if (tci_compare64(regs[r0], regs[r1], condition)) {
111
+ tci_args_rl(&tb_ptr, &r0, &ptr);
112
+ if (regs[r0]) {
113
tb_ptr = ptr;
114
}
115
break;
116
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
117
118
case INDEX_op_brcond_i32:
119
case INDEX_op_brcond_i64:
120
- tci_args_rrcl(&tb_ptr, &r0, &r1, &c, &ptr);
121
- info->fprintf_func(info->stream, "%-12s %s, %s, %s, %p",
122
- op_name, str_r(r0), str_r(r1), str_c(c), ptr);
123
+ tci_args_rl(&tb_ptr, &r0, &ptr);
124
+ info->fprintf_func(info->stream, "%-12s %s, 0, ne, %p",
125
+ op_name, str_r(r0), ptr);
126
break;
127
128
case INDEX_op_setcond_i32:
129
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
130
str_r(r3), str_r(r4), str_c(c));
131
break;
132
133
- case INDEX_op_brcond2_i32:
134
- tci_args_rrrrcl(&tb_ptr, &r0, &r1, &r2, &r3, &c, &ptr);
135
- info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s, %p",
136
- op_name, str_r(r0), str_r(r1),
137
- str_r(r2), str_r(r3), str_c(c), ptr);
138
- break;
139
-
140
case INDEX_op_mulu2_i32:
141
tci_args_rrrr(&tb_ptr, &r0, &r1, &r2, &r3);
142
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s",
143
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
144
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
145
--- a/tcg/tci/tcg-target.c.inc
42
--- a/softmmu/cpus.c
146
+++ b/tcg/tci/tcg-target.c.inc
43
+++ b/softmmu/cpus.c
147
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rI(TCGContext *s, TCGOpcode op,
44
@@ -XXX,XX +XXX,XX @@ bool cpu_thread_is_idle(CPUState *cpu)
148
}
45
return true;
149
#endif
46
}
150
47
if (!cpu->halted || cpu_has_work(cpu) ||
151
+static void tcg_out_op_rl(TCGContext *s, TCGOpcode op, TCGReg r0, TCGLabel *l1)
48
- kvm_halt_in_kernel() || whpx_apic_in_platform()) {
152
+{
49
+ whpx_apic_in_platform()) {
153
+ uint8_t *old_code_ptr = s->code_ptr;
50
return false;
154
+
51
}
155
+ tcg_out_op_t(s, op);
52
return true;
156
+ tcg_out_r(s, r0);
157
+ tci_out_label(s, l1);
158
+
159
+ old_code_ptr[1] = s->code_ptr - old_code_ptr;
160
+}
161
+
162
static void tcg_out_op_rr(TCGContext *s, TCGOpcode op, TCGReg r0, TCGReg r1)
163
{
164
uint8_t *old_code_ptr = s->code_ptr;
165
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrs(TCGContext *s, TCGOpcode op,
166
old_code_ptr[1] = s->code_ptr - old_code_ptr;
167
}
168
169
-static void tcg_out_op_rrcl(TCGContext *s, TCGOpcode op,
170
- TCGReg r0, TCGReg r1, TCGCond c2, TCGLabel *l3)
171
-{
172
- uint8_t *old_code_ptr = s->code_ptr;
173
-
174
- tcg_out_op_t(s, op);
175
- tcg_out_r(s, r0);
176
- tcg_out_r(s, r1);
177
- tcg_out8(s, c2);
178
- tci_out_label(s, l3);
179
-
180
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
181
-}
182
-
183
static void tcg_out_op_rrrc(TCGContext *s, TCGOpcode op,
184
TCGReg r0, TCGReg r1, TCGReg r2, TCGCond c3)
185
{
186
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op,
187
old_code_ptr[1] = s->code_ptr - old_code_ptr;
188
}
189
190
-static void tcg_out_op_rrrrcl(TCGContext *s, TCGOpcode op,
191
- TCGReg r0, TCGReg r1, TCGReg r2, TCGReg r3,
192
- TCGCond c4, TCGLabel *l5)
193
-{
194
- uint8_t *old_code_ptr = s->code_ptr;
195
-
196
- tcg_out_op_t(s, op);
197
- tcg_out_r(s, r0);
198
- tcg_out_r(s, r1);
199
- tcg_out_r(s, r2);
200
- tcg_out_r(s, r3);
201
- tcg_out8(s, c4);
202
- tci_out_label(s, l5);
203
-
204
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
205
-}
206
-
207
static void tcg_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
208
TCGReg r0, TCGReg r1, TCGReg r2,
209
TCGReg r3, TCGReg r4, TCGCond c5)
210
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
211
break;
212
213
CASE_32_64(brcond)
214
- tcg_out_op_rrcl(s, opc, args[0], args[1], args[2], arg_label(args[3]));
215
+ tcg_out_op_rrrc(s, (opc == INDEX_op_brcond_i32
216
+ ? INDEX_op_setcond_i32 : INDEX_op_setcond_i64),
217
+ TCG_REG_TMP, args[0], args[1], args[2]);
218
+ tcg_out_op_rl(s, opc, TCG_REG_TMP, arg_label(args[3]));
219
break;
220
221
CASE_32_64(neg) /* Optional (TCG_TARGET_HAS_neg_*). */
222
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
223
args[3], args[4], args[5]);
224
break;
225
case INDEX_op_brcond2_i32:
226
- tcg_out_op_rrrrcl(s, opc, args[0], args[1], args[2],
227
- args[3], args[4], arg_label(args[5]));
228
+ tcg_out_op_rrrrrc(s, INDEX_op_setcond2_i32, TCG_REG_TMP,
229
+ args[0], args[1], args[2], args[3], args[4]);
230
+ tcg_out_op_rl(s, INDEX_op_brcond_i32, TCG_REG_TMP, arg_label(args[5]));
231
break;
232
case INDEX_op_mulu2_i32:
233
tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], args[3]);
234
--
53
--
235
2.25.1
54
2.25.1
236
55
237
56
1
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
3
Implement WHPX has_work() handler in AccelOpsClass and
4
remove it from cpu_thread_is_idle() since cpu_has_work()
5
is already called.
6
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-Id: <20210912172731.789788-7-f4bug@amsat.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
---
11
---
5
tcg/tcg-internal.h | 5 +++++
12
softmmu/cpus.c | 4 +---
6
tcg/tcg.c | 5 ++---
13
target/i386/whpx/whpx-accel-ops.c | 6 ++++++
7
2 files changed, 7 insertions(+), 3 deletions(-)
14
2 files changed, 7 insertions(+), 3 deletions(-)
8
15
9
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
16
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
10
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
11
--- a/tcg/tcg-internal.h
18
--- a/softmmu/cpus.c
12
+++ b/tcg/tcg-internal.h
19
+++ b/softmmu/cpus.c
13
@@ -XXX,XX +XXX,XX @@ bool tcg_region_alloc(TCGContext *s);
20
@@ -XXX,XX +XXX,XX @@
14
void tcg_region_initial_alloc(TCGContext *s);
21
#include "sysemu/replay.h"
15
void tcg_region_prologue_set(TCGContext *s);
22
#include "sysemu/runstate.h"
16
23
#include "sysemu/cpu-timers.h"
17
+static inline void *tcg_call_func(TCGOp *op)
24
-#include "sysemu/whpx.h"
25
#include "hw/boards.h"
26
#include "hw/hw.h"
27
#include "trace.h"
28
@@ -XXX,XX +XXX,XX @@ bool cpu_thread_is_idle(CPUState *cpu)
29
if (cpu_is_stopped(cpu)) {
30
return true;
31
}
32
- if (!cpu->halted || cpu_has_work(cpu) ||
33
- whpx_apic_in_platform()) {
34
+ if (!cpu->halted || cpu_has_work(cpu)) {
35
return false;
36
}
37
return true;
38
diff --git a/target/i386/whpx/whpx-accel-ops.c b/target/i386/whpx/whpx-accel-ops.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/i386/whpx/whpx-accel-ops.c
41
+++ b/target/i386/whpx/whpx-accel-ops.c
42
@@ -XXX,XX +XXX,XX @@ static void whpx_kick_vcpu_thread(CPUState *cpu)
43
}
44
}
45
46
+static bool whpx_cpu_has_work(CPUState *cpu)
18
+{
47
+{
19
+ return (void *)(uintptr_t)op->args[TCGOP_CALLO(op) + TCGOP_CALLI(op)];
48
+ return whpx_apic_in_platform();
20
+}
49
+}
21
+
50
+
22
static inline const TCGHelperInfo *tcg_call_info(TCGOp *op)
51
static void whpx_accel_ops_class_init(ObjectClass *oc, void *data)
23
{
52
{
24
return (void *)(uintptr_t)op->args[TCGOP_CALLO(op) + TCGOP_CALLI(op) + 1];
53
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
25
diff --git a/tcg/tcg.c b/tcg/tcg.c
54
@@ -XXX,XX +XXX,XX @@ static void whpx_accel_ops_class_init(ObjectClass *oc, void *data)
26
index XXXXXXX..XXXXXXX 100644
55
ops->synchronize_post_init = whpx_cpu_synchronize_post_init;
27
--- a/tcg/tcg.c
56
ops->synchronize_state = whpx_cpu_synchronize_state;
28
+++ b/tcg/tcg.c
57
ops->synchronize_pre_loadvm = whpx_cpu_synchronize_pre_loadvm;
29
@@ -XXX,XX +XXX,XX @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
58
+ ops->has_work = whpx_cpu_has_work;
30
}
59
}
31
} else if (c == INDEX_op_call) {
60
32
const TCGHelperInfo *info = tcg_call_info(op);
61
static const TypeInfo whpx_accel_ops_type = {
33
- void *func;
34
+ void *func = tcg_call_func(op);
35
36
/* variable number of arguments */
37
nb_oargs = TCGOP_CALLO(op);
38
@@ -XXX,XX +XXX,XX @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
39
* Note that plugins have a template function for the info,
40
* but the actual function pointer comes from the plugin.
41
*/
42
- func = (void *)(uintptr_t)op->args[nb_oargs + nb_iargs];
43
if (func == info->func) {
44
col += qemu_log("%s", info->name);
45
} else {
46
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
47
int allocate_args;
48
TCGRegSet allocated_regs;
49
50
- func_addr = (tcg_insn_unit *)(intptr_t)op->args[nb_oargs + nb_iargs];
51
+ func_addr = tcg_call_func(op);
52
flags = tcg_call_flags(op);
53
54
nb_regs = ARRAY_SIZE(tcg_target_call_iarg_regs);
55
--
62
--
56
2.25.1
63
2.25.1
57
64
58
65
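The dispatch introduced by this patch can be sketched as follows. This is a simplified stand-in, not QEMU's real QOM machinery: the struct layouts, the `whpx_like_has_work()` helper, and the plain `apic_in_platform` flag are all illustrative assumptions; only the shape of the hook (a nullable function pointer in the accelerator ops, consulted from the idle check) mirrors the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified sketch of AccelOpsClass::has_work() dispatch.
 * Struct and helper names are illustrative, not QEMU's. */
typedef struct CPUState {
    bool halted;
    unsigned interrupt_request;
} CPUState;

typedef struct AccelOpsClass {
    bool (*has_work)(CPUState *cpu);   /* may be left NULL */
} AccelOpsClass;

/* WHPX-style implementation: work is pending while the APIC is
 * emulated by the platform (modelled here as a plain flag). */
static bool apic_in_platform;
static bool whpx_like_has_work(CPUState *cpu)
{
    (void)cpu;
    return apic_in_platform;
}

/* After the patch, cpu_thread_is_idle() no longer open-codes the
 * whpx_apic_in_platform() check; it reaches it via the hook. */
static bool cpu_has_work(AccelOpsClass *ops, CPUState *cpu)
{
    return cpu->interrupt_request
        || (ops->has_work && ops->has_work(cpu));
}

static bool cpu_thread_is_idle(AccelOpsClass *ops, CPUState *cpu)
{
    return cpu->halted && !cpu_has_work(ops, cpu);
}
```

The NULL check on the hook is what lets accelerators that have no extra work source (e.g. the TCG stub added later in the series) simply leave the field unset.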
1
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
3
Add a TCG target-specific has_work() handler in TCGCPUOps,
4
and add tcg_cpu_has_work() as the AccelOpsClass has_work()
5
implementation.
6
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-Id: <20210912172731.789788-8-f4bug@amsat.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
---
11
---
5
tcg/tci/tcg-target.h | 8 ++++----
12
include/hw/core/tcg-cpu-ops.h | 4 ++++
6
tcg/tci.c | 42 ++++++++++++++++++++++++++++++++++++++++
13
accel/tcg/tcg-accel-ops.c | 12 ++++++++++++
7
tcg/tci/tcg-target.c.inc | 32 ++++++++++++++++++++++++++++++
14
2 files changed, 16 insertions(+)
8
3 files changed, 78 insertions(+), 4 deletions(-)
9
15
10
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
16
diff --git a/include/hw/core/tcg-cpu-ops.h b/include/hw/core/tcg-cpu-ops.h
11
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
12
--- a/tcg/tci/tcg-target.h
18
--- a/include/hw/core/tcg-cpu-ops.h
13
+++ b/tcg/tci/tcg-target.h
19
+++ b/include/hw/core/tcg-cpu-ops.h
20
@@ -XXX,XX +XXX,XX @@ struct TCGCPUOps {
21
void (*do_interrupt)(CPUState *cpu);
22
#endif /* !CONFIG_USER_ONLY || !TARGET_I386 */
23
#ifdef CONFIG_SOFTMMU
24
+ /**
25
+ * @has_work: Callback for checking if there is work to do.
26
+ */
27
+ bool (*has_work)(CPUState *cpu);
28
/** @cpu_exec_interrupt: Callback for processing interrupts in cpu_exec */
29
bool (*cpu_exec_interrupt)(CPUState *cpu, int interrupt_request);
30
/**
31
diff --git a/accel/tcg/tcg-accel-ops.c b/accel/tcg/tcg-accel-ops.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/accel/tcg/tcg-accel-ops.c
34
+++ b/accel/tcg/tcg-accel-ops.c
14
@@ -XXX,XX +XXX,XX @@
35
@@ -XXX,XX +XXX,XX @@
15
#define TCG_TARGET_HAS_ext16u_i32 1
36
#include "qemu/main-loop.h"
16
#define TCG_TARGET_HAS_andc_i32 1
37
#include "qemu/guest-random.h"
17
#define TCG_TARGET_HAS_deposit_i32 1
38
#include "exec/exec-all.h"
18
-#define TCG_TARGET_HAS_extract_i32 0
39
+#include "hw/core/tcg-cpu-ops.h"
19
-#define TCG_TARGET_HAS_sextract_i32 0
40
20
+#define TCG_TARGET_HAS_extract_i32 1
41
#include "tcg-accel-ops.h"
21
+#define TCG_TARGET_HAS_sextract_i32 1
42
#include "tcg-accel-ops-mttcg.h"
22
#define TCG_TARGET_HAS_extract2_i32 0
43
@@ -XXX,XX +XXX,XX @@ int tcg_cpus_exec(CPUState *cpu)
23
#define TCG_TARGET_HAS_eqv_i32 1
44
return ret;
24
#define TCG_TARGET_HAS_nand_i32 1
25
@@ -XXX,XX +XXX,XX @@
26
#define TCG_TARGET_HAS_bswap32_i64 1
27
#define TCG_TARGET_HAS_bswap64_i64 1
28
#define TCG_TARGET_HAS_deposit_i64 1
29
-#define TCG_TARGET_HAS_extract_i64 0
30
-#define TCG_TARGET_HAS_sextract_i64 0
31
+#define TCG_TARGET_HAS_extract_i64 1
32
+#define TCG_TARGET_HAS_sextract_i64 1
33
#define TCG_TARGET_HAS_extract2_i64 0
34
#define TCG_TARGET_HAS_div_i64 1
35
#define TCG_TARGET_HAS_rem_i64 1
36
diff --git a/tcg/tci.c b/tcg/tci.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/tcg/tci.c
39
+++ b/tcg/tci.c
40
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrs(uint32_t insn, TCGReg *r0, TCGReg *r1, int32_t *i2)
41
*i2 = sextract32(insn, 16, 16);
42
}
45
}
43
46
44
+static void tci_args_rrbb(uint32_t insn, TCGReg *r0, TCGReg *r1,
47
+static bool tcg_cpu_has_work(CPUState *cpu)
45
+ uint8_t *i2, uint8_t *i3)
46
+{
48
+{
47
+ *r0 = extract32(insn, 8, 4);
49
+ CPUClass *cc = CPU_GET_CLASS(cpu);
48
+ *r1 = extract32(insn, 12, 4);
50
+
49
+ *i2 = extract32(insn, 16, 6);
51
+ if (!cc->tcg_ops->has_work) {
50
+ *i3 = extract32(insn, 22, 6);
52
+ return false;
53
+ }
54
+ return cc->tcg_ops->has_work(cpu);
51
+}
55
+}
52
+
56
+
53
static void tci_args_rrrc(uint32_t insn,
57
/* mask must never be zero, except for A20 change call */
54
TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGCond *c3)
58
void tcg_handle_interrupt(CPUState *cpu, int mask)
55
{
59
{
56
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
60
@@ -XXX,XX +XXX,XX @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
57
tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len);
61
ops->kick_vcpu_thread = rr_kick_vcpu_thread;
58
regs[r0] = deposit32(regs[r1], pos, len, regs[r2]);
62
ops->handle_interrupt = tcg_handle_interrupt;
59
break;
63
}
60
+#endif
64
+ ops->has_work = tcg_cpu_has_work;
61
+#if TCG_TARGET_HAS_extract_i32
62
+ case INDEX_op_extract_i32:
63
+ tci_args_rrbb(insn, &r0, &r1, &pos, &len);
64
+ regs[r0] = extract32(regs[r1], pos, len);
65
+ break;
66
+#endif
67
+#if TCG_TARGET_HAS_sextract_i32
68
+ case INDEX_op_sextract_i32:
69
+ tci_args_rrbb(insn, &r0, &r1, &pos, &len);
70
+ regs[r0] = sextract32(regs[r1], pos, len);
71
+ break;
72
#endif
73
case INDEX_op_brcond_i32:
74
tci_args_rl(insn, tb_ptr, &r0, &ptr);
75
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
76
tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len);
77
regs[r0] = deposit64(regs[r1], pos, len, regs[r2]);
78
break;
79
+#endif
80
+#if TCG_TARGET_HAS_extract_i64
81
+ case INDEX_op_extract_i64:
82
+ tci_args_rrbb(insn, &r0, &r1, &pos, &len);
83
+ regs[r0] = extract64(regs[r1], pos, len);
84
+ break;
85
+#endif
86
+#if TCG_TARGET_HAS_sextract_i64
87
+ case INDEX_op_sextract_i64:
88
+ tci_args_rrbb(insn, &r0, &r1, &pos, &len);
89
+ regs[r0] = sextract64(regs[r1], pos, len);
90
+ break;
91
#endif
92
case INDEX_op_brcond_i64:
93
tci_args_rl(insn, tb_ptr, &r0, &ptr);
94
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
95
op_name, str_r(r0), str_r(r1), str_r(r2), pos, len);
96
break;
97
98
+ case INDEX_op_extract_i32:
99
+ case INDEX_op_extract_i64:
100
+ case INDEX_op_sextract_i32:
101
+ case INDEX_op_sextract_i64:
102
+ tci_args_rrbb(insn, &r0, &r1, &pos, &len);
103
+ info->fprintf_func(info->stream, "%-12s %s,%s,%d,%d",
104
+ op_name, str_r(r0), str_r(r1), pos, len);
105
+ break;
106
+
107
case INDEX_op_movcond_i32:
108
case INDEX_op_movcond_i64:
109
case INDEX_op_setcond2_i32:
110
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
111
index XXXXXXX..XXXXXXX 100644
112
--- a/tcg/tci/tcg-target.c.inc
113
+++ b/tcg/tci/tcg-target.c.inc
114
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
115
case INDEX_op_bswap32_i32:
116
case INDEX_op_bswap32_i64:
117
case INDEX_op_bswap64_i64:
118
+ case INDEX_op_extract_i32:
119
+ case INDEX_op_extract_i64:
120
+ case INDEX_op_sextract_i32:
121
+ case INDEX_op_sextract_i64:
122
return C_O1_I1(r, r);
123
124
case INDEX_op_st8_i32:
125
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrs(TCGContext *s, TCGOpcode op,
126
tcg_out32(s, insn);
127
}
65
}
128
66
129
+static void tcg_out_op_rrbb(TCGContext *s, TCGOpcode op, TCGReg r0,
67
static void tcg_accel_ops_class_init(ObjectClass *oc, void *data)
130
+ TCGReg r1, uint8_t b2, uint8_t b3)
131
+{
132
+ tcg_insn_unit insn = 0;
133
+
134
+ tcg_debug_assert(b2 == extract32(b2, 0, 6));
135
+ tcg_debug_assert(b3 == extract32(b3, 0, 6));
136
+ insn = deposit32(insn, 0, 8, op);
137
+ insn = deposit32(insn, 8, 4, r0);
138
+ insn = deposit32(insn, 12, 4, r1);
139
+ insn = deposit32(insn, 16, 6, b2);
140
+ insn = deposit32(insn, 22, 6, b3);
141
+ tcg_out32(s, insn);
142
+}
143
+
144
static void tcg_out_op_rrrc(TCGContext *s, TCGOpcode op,
145
TCGReg r0, TCGReg r1, TCGReg r2, TCGCond c3)
146
{
147
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
148
}
149
break;
150
151
+ CASE_32_64(extract) /* Optional (TCG_TARGET_HAS_extract_*). */
152
+ CASE_32_64(sextract) /* Optional (TCG_TARGET_HAS_sextract_*). */
153
+ {
154
+ TCGArg pos = args[2], len = args[3];
155
+ TCGArg max = tcg_op_defs[opc].flags & TCG_OPF_64BIT ? 64 : 32;
156
+
157
+ tcg_debug_assert(pos < max);
158
+ tcg_debug_assert(pos + len <= max);
159
+
160
+ tcg_out_op_rrbb(s, opc, args[0], args[1], pos, len);
161
+ }
162
+ break;
163
+
164
CASE_32_64(brcond)
165
tcg_out_op_rrrc(s, (opc == INDEX_op_brcond_i32
166
? INDEX_op_setcond_i32 : INDEX_op_setcond_i64),
167
--
68
--
168
2.25.1
69
2.25.1
169
70
170
71
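The semantics of the new TCI extract/sextract ops and the "rrbb" instruction format (two 4-bit register fields plus two 6-bit immediates, as in tcg_out_op_rrbb()/tci_args_rrbb() above) can be demonstrated with minimal stand-ins for QEMU's bitops helpers. The implementations below are sketches written for this example, not copies of QEMU's bitops.h:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for extract32/sextract32/deposit32,
 * valid for 1 <= length <= 32 and start + length <= 32. */
static uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & (~0u >> (32 - length));
}

static int32_t sextract32(uint32_t value, int start, int length)
{
    /* Shift the field to the top, then arithmetic-shift back down
     * to replicate its sign bit. */
    return (int32_t)(value << (32 - length - start)) >> (32 - length);
}

static uint32_t deposit32(uint32_t value, int start, int length,
                          uint32_t fieldval)
{
    uint32_t mask = (~0u >> (32 - length)) << start;
    return (value & ~mask) | ((fieldval << start) & mask);
}

/* Encode as in tcg_out_op_rrbb(): op:8, r0:4, r1:4, pos:6, len:6. */
static uint32_t encode_rrbb(uint8_t op, unsigned r0, unsigned r1,
                            unsigned pos, unsigned len)
{
    uint32_t insn = 0;
    insn = deposit32(insn, 0, 8, op);
    insn = deposit32(insn, 8, 4, r0);
    insn = deposit32(insn, 12, 4, r1);
    insn = deposit32(insn, 16, 6, pos);
    insn = deposit32(insn, 22, 6, len);
    return insn;
}
```

Two 6-bit fields suffice because pos and len for a 64-bit extract are each at most 63, which is why tcg_out_op_rrbb() asserts `b2 == extract32(b2, 0, 6)`.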
1
From: Stefan Weil <sw@weilnetz.de>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
The function is called with alignment == 0, which caused an assertion failure.
3
Restrict has_work() to sysemu.
4
Use the code from oslib-posix.c to fix that regression.
5
4
6
Fixes: ed6f53f9ca9
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Stefan Weil <sw@weilnetz.de>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Message-Id: <20210912172731.789788-9-f4bug@amsat.org>
9
Message-Id: <20210611105846.347954-1-sw@weilnetz.de>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
---
9
---
12
util/oslib-win32.c | 6 +++++-
10
target/alpha/cpu.c | 4 +++-
13
1 file changed, 5 insertions(+), 1 deletion(-)
11
1 file changed, 3 insertions(+), 1 deletion(-)
14
12
15
diff --git a/util/oslib-win32.c b/util/oslib-win32.c
13
diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/util/oslib-win32.c
15
--- a/target/alpha/cpu.c
18
+++ b/util/oslib-win32.c
16
+++ b/target/alpha/cpu.c
19
@@ -XXX,XX +XXX,XX @@ void *qemu_try_memalign(size_t alignment, size_t size)
17
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_set_pc(CPUState *cs, vaddr value)
20
void *ptr;
18
cpu->env.pc = value;
21
19
}
22
g_assert(size != 0);
20
23
- g_assert(is_power_of_2(alignment));
21
+#if !defined(CONFIG_USER_ONLY)
24
+ if (alignment < sizeof(void *)) {
22
static bool alpha_cpu_has_work(CPUState *cs)
25
+ alignment = sizeof(void *);
23
{
26
+ } else {
24
/* Here we are checking to see if the CPU should wake up from HALT.
27
+ g_assert(is_power_of_2(alignment));
25
@@ -XXX,XX +XXX,XX @@ static bool alpha_cpu_has_work(CPUState *cs)
28
+ }
26
| CPU_INTERRUPT_SMP
29
ptr = _aligned_malloc(size, alignment);
27
| CPU_INTERRUPT_MCHK);
30
trace_qemu_memalign(alignment, size, ptr);
28
}
31
return ptr;
29
+#endif /* !CONFIG_USER_ONLY */
30
31
static void alpha_cpu_disas_set_info(CPUState *cpu, disassemble_info *info)
32
{
33
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps alpha_tcg_ops = {
34
.tlb_fill = alpha_cpu_tlb_fill,
35
36
#ifndef CONFIG_USER_ONLY
37
+ .has_work = alpha_cpu_has_work,
38
.cpu_exec_interrupt = alpha_cpu_exec_interrupt,
39
.do_interrupt = alpha_cpu_do_interrupt,
40
.do_transaction_failed = alpha_cpu_do_transaction_failed,
41
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_class_init(ObjectClass *oc, void *data)
42
&acc->parent_realize);
43
44
cc->class_by_name = alpha_cpu_class_by_name;
45
- cc->has_work = alpha_cpu_has_work;
46
cc->dump_state = alpha_cpu_dump_state;
47
cc->set_pc = alpha_cpu_set_pc;
48
cc->gdb_read_register = alpha_cpu_gdb_read_register;
32
--
49
--
33
2.25.1
50
2.25.1
34
51
35
52
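The shape of the oslib-win32 fix above can be sketched in portable C: an alignment smaller than a pointer (including 0) is rounded up instead of tripping the power-of-two assertion. C11 `aligned_alloc()` stands in here for the Win32 `_aligned_malloc()` used in the real code, and the size rounding is an artifact of that substitution (aligned_alloc requires size to be a multiple of alignment):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

static bool is_power_of_2(size_t value)
{
    return value && !(value & (value - 1));
}

/* Sketch of the fixed qemu_try_memalign() logic. */
static void *try_memalign(size_t alignment, size_t size)
{
    assert(size != 0);
    if (alignment < sizeof(void *)) {
        /* alignment == 0 (and other tiny values) are legal now:
         * clamp to the minimum useful alignment. */
        alignment = sizeof(void *);
    } else {
        g_assert_style: assert(is_power_of_2(alignment));
    }
    /* C11 aligned_alloc needs size to be a multiple of alignment. */
    size_t rounded = (size + alignment - 1) & ~(alignment - 1);
    return aligned_alloc(alignment, rounded);
}
```

This mirrors the posix-side behaviour the commit message refers to: small alignments are silently promoted, large ones must still be powers of two.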
1
Assume that we'll have fewer temps allocated after
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
restarting with a smaller number of instructions.
3
2
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to TCG sysemu.
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-10-f4bug@amsat.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
---
9
---
7
tcg/tcg.c | 6 +++++-
10
target/arm/cpu.c | 7 +++++--
8
1 file changed, 5 insertions(+), 1 deletion(-)
11
1 file changed, 5 insertions(+), 2 deletions(-)
9
12
10
diff --git a/tcg/tcg.c b/tcg/tcg.c
13
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
11
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
12
--- a/tcg/tcg.c
15
--- a/target/arm/cpu.c
13
+++ b/tcg/tcg.c
16
+++ b/target/arm/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void temp_allocate_frame(TCGContext *s, TCGTemp *ts)
17
@@ -XXX,XX +XXX,XX @@ void arm_cpu_synchronize_from_tb(CPUState *cs,
15
18
env->regs[15] = tb->pc;
16
assert(align <= TCG_TARGET_STACK_ALIGN);
19
}
17
off = ROUND_UP(s->current_frame_offset, align);
20
}
18
- assert(off + size <= s->frame_end);
21
-#endif /* CONFIG_TCG */
22
23
+#ifndef CONFIG_USER_ONLY
24
static bool arm_cpu_has_work(CPUState *cs)
25
{
26
ARMCPU *cpu = ARM_CPU(cs);
27
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
28
| CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
29
| CPU_INTERRUPT_EXITTB);
30
}
31
+#endif /* !CONFIG_USER_ONLY */
19
+
32
+
20
+ /* If we've exhausted the stack frame, restart with a smaller TB. */
33
+#endif /* CONFIG_TCG */
21
+ if (off + size > s->frame_end) {
34
22
+ tcg_raise_tb_overflow(s);
35
void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
23
+ }
36
void *opaque)
24
s->current_frame_offset = off + size;
37
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps arm_tcg_ops = {
25
38
.debug_excp_handler = arm_debug_excp_handler,
26
ts->mem_offset = off;
39
40
#if !defined(CONFIG_USER_ONLY)
41
+ .has_work = arm_cpu_has_work,
42
.cpu_exec_interrupt = arm_cpu_exec_interrupt,
43
.do_interrupt = arm_cpu_do_interrupt,
44
.do_transaction_failed = arm_cpu_do_transaction_failed,
45
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
46
device_class_set_parent_reset(dc, arm_cpu_reset, &acc->parent_reset);
47
48
cc->class_by_name = arm_cpu_class_by_name;
49
- cc->has_work = arm_cpu_has_work;
50
cc->dump_state = arm_cpu_dump_state;
51
cc->set_pc = arm_cpu_set_pc;
52
cc->gdb_read_register = arm_cpu_gdb_read_register;
27
--
53
--
28
2.25.1
54
2.25.1
29
55
30
56
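The temp_allocate_frame() change above replaces a hard assertion with a graceful restart: on frame exhaustion, control returns to the translator, which retries with a smaller TB. A minimal sketch of that control flow, using plain `longjmp` where the real tcg_raise_tb_overflow() uses siglongjmp and TCG's own restart machinery (the names and fixed-size frame below are illustrative):

```c
#include <assert.h>
#include <setjmp.h>
#include <stddef.h>

static jmp_buf restart_env;
static size_t frame_offset, frame_end;

/* Stand-in for tcg_raise_tb_overflow(): unwind to the translator. */
static void raise_tb_overflow(void)
{
    longjmp(restart_env, 1);
}

/* Allocate an aligned slot from the per-TB stack frame; if the frame
 * is exhausted, restart instead of asserting. */
static size_t allocate_slot(size_t size, size_t align)
{
    size_t off = (frame_offset + align - 1) & ~(align - 1);
    if (off + size > frame_end) {
        raise_tb_overflow();
    }
    frame_offset = off + size;
    return off;
}
```

The underlying assumption named in the commit message still holds in the sketch: after a restart, fewer instructions mean fewer live temps, so the retry allocates less.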
1
We should not be aligning the offset in temp_allocate_frame,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
because the odd offset produces an aligned address in the end.
3
Instead, pass the logical offset into tcg_set_frame and add
4
the stack bias last.
5
2
6
Cc: qemu-stable@nongnu.org
3
Restrict has_work() to sysemu.
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-11-f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
9
---
10
tcg/tcg.c | 9 +++------
10
target/avr/cpu.c | 2 +-
11
tcg/sparc/tcg-target.c.inc | 16 ++++++++++------
11
1 file changed, 1 insertion(+), 1 deletion(-)
12
2 files changed, 13 insertions(+), 12 deletions(-)
13
12
14
diff --git a/tcg/tcg.c b/tcg/tcg.c
13
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/tcg/tcg.c
15
--- a/target/avr/cpu.c
17
+++ b/tcg/tcg.c
16
+++ b/target/avr/cpu.c
18
@@ -XXX,XX +XXX,XX @@ static void check_regs(TCGContext *s)
17
@@ -XXX,XX +XXX,XX @@ static const struct SysemuCPUOps avr_sysemu_ops = {
19
18
static const struct TCGCPUOps avr_tcg_ops = {
20
static void temp_allocate_frame(TCGContext *s, TCGTemp *ts)
19
.initialize = avr_cpu_tcg_init,
21
{
20
.synchronize_from_tb = avr_cpu_synchronize_from_tb,
22
-#if !(defined(__sparc__) && TCG_TARGET_REG_BITS == 64)
21
+ .has_work = avr_cpu_has_work,
23
- /* Sparc64 stack is accessed with offset of 2047 */
22
.cpu_exec_interrupt = avr_cpu_exec_interrupt,
24
- s->current_frame_offset = (s->current_frame_offset +
23
.tlb_fill = avr_cpu_tlb_fill,
25
- (tcg_target_long)sizeof(tcg_target_long) - 1) &
24
.do_interrupt = avr_cpu_do_interrupt,
26
- ~(sizeof(tcg_target_long) - 1);
25
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_class_init(ObjectClass *oc, void *data)
27
-#endif
26
28
if (s->current_frame_offset + (tcg_target_long)sizeof(tcg_target_long) >
27
cc->class_by_name = avr_cpu_class_by_name;
29
s->frame_end) {
28
30
tcg_abort();
29
- cc->has_work = avr_cpu_has_work;
31
}
30
cc->dump_state = avr_cpu_dump_state;
32
ts->mem_offset = s->current_frame_offset;
31
cc->set_pc = avr_cpu_set_pc;
33
+#if defined(__sparc__)
32
cc->memory_rw_debug = avr_cpu_memory_rw_debug;
34
+ ts->mem_offset += TCG_TARGET_STACK_BIAS;
35
+#endif
36
ts->mem_base = s->frame_temp;
37
ts->mem_allocated = 1;
38
s->current_frame_offset += sizeof(tcg_target_long);
39
diff --git a/tcg/sparc/tcg-target.c.inc b/tcg/sparc/tcg-target.c.inc
40
index XXXXXXX..XXXXXXX 100644
41
--- a/tcg/sparc/tcg-target.c.inc
42
+++ b/tcg/sparc/tcg-target.c.inc
43
@@ -XXX,XX +XXX,XX @@ static void tcg_target_qemu_prologue(TCGContext *s)
44
{
45
int tmp_buf_size, frame_size;
46
47
- /* The TCG temp buffer is at the top of the frame, immediately
48
- below the frame pointer. */
49
+ /*
50
+ * The TCG temp buffer is at the top of the frame, immediately
51
+ * below the frame pointer. Use the logical (aligned) offset here;
52
+ * the stack bias is applied in temp_allocate_frame().
53
+ */
54
tmp_buf_size = CPU_TEMP_BUF_NLONGS * (int)sizeof(long);
55
- tcg_set_frame(s, TCG_REG_I6, TCG_TARGET_STACK_BIAS - tmp_buf_size,
56
- tmp_buf_size);
57
+ tcg_set_frame(s, TCG_REG_I6, -tmp_buf_size, tmp_buf_size);
58
59
- /* TCG_TARGET_CALL_STACK_OFFSET includes the stack bias, but is
60
- otherwise the minimal frame usable by callees. */
61
+ /*
62
+ * TCG_TARGET_CALL_STACK_OFFSET includes the stack bias, but is
63
+ * otherwise the minimal frame usable by callees.
64
+ */
65
frame_size = TCG_TARGET_CALL_STACK_OFFSET - TCG_TARGET_STACK_BIAS;
66
frame_size += TCG_STATIC_CALL_ARGS_SIZE + tmp_buf_size;
67
frame_size += TCG_TARGET_STACK_ALIGN - 1;
68
--
33
--
69
2.25.1
34
2.25.1
70
35
71
36
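The sparc64 stack-bias reasoning behind the temp_allocate_frame()/tcg_set_frame() change can be shown with a little arithmetic. On sparc64 the register %sp holds the real frame base minus 2047, so a stored offset that looks odd produces an aligned effective address once added to %sp; that is exactly why the patch keeps offsets logical (aligned) and applies TCG_TARGET_STACK_BIAS last. The addresses below are made up for the example:

```c
#include <assert.h>
#include <stdint.h>

enum { STACK_BIAS = 2047 };   /* sparc64 V9 ABI stack bias */

/* The address actually referenced by a %sp-relative access. */
static uintptr_t effective_addr(uintptr_t sp, intptr_t mem_offset)
{
    return sp + mem_offset;
}

/* After the patch: keep the logical offset aligned, then add the
 * bias once when recording ts->mem_offset. */
static intptr_t biased_offset(intptr_t logical_offset)
{
    return logical_offset + STACK_BIAS;
}
```

Aligning the *biased* offset (as the removed code effectively did) would shift an aligned slot to an unaligned effective address, which is the bug the commit message describes as "the odd offset produces an aligned address in the end".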
1
We had a single ATOMIC_MMU_LOOKUP macro that probed for
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
read+write on all atomic ops. This is incorrect for
3
plain atomic load and atomic store.
4
2
5
For user-only, we rely on the host page permissions.
3
Restrict has_work() to sysemu.
6
4
7
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/390
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-12-f4bug@amsat.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
---
9
---
11
accel/tcg/atomic_template.h | 24 +++++-----
10
target/cris/cpu.c | 4 +++-
12
accel/tcg/cputlb.c | 95 ++++++++++++++++++++++++++-----------
11
1 file changed, 3 insertions(+), 1 deletion(-)
13
accel/tcg/user-exec.c | 8 ++--
14
3 files changed, 83 insertions(+), 44 deletions(-)
15
12
16
diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
13
diff --git a/target/cris/cpu.c b/target/cris/cpu.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/accel/tcg/atomic_template.h
15
--- a/target/cris/cpu.c
19
+++ b/accel/tcg/atomic_template.h
16
+++ b/target/cris/cpu.c
20
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
17
@@ -XXX,XX +XXX,XX @@ static void cris_cpu_set_pc(CPUState *cs, vaddr value)
21
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
18
cpu->env.pc = value;
19
}
20
21
+#if !defined(CONFIG_USER_ONLY)
22
static bool cris_cpu_has_work(CPUState *cs)
22
{
23
{
23
ATOMIC_MMU_DECLS;
24
return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
24
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
25
}
25
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW;
26
+#endif /* !CONFIG_USER_ONLY */
26
DATA_TYPE ret;
27
27
uint16_t info = trace_mem_build_info(SHIFT, false, 0, false,
28
static void cris_cpu_reset(DeviceState *dev)
28
ATOMIC_MMU_IDX);
29
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
30
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
31
{
29
{
32
ATOMIC_MMU_DECLS;
30
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps crisv10_tcg_ops = {
33
- DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
31
.tlb_fill = cris_cpu_tlb_fill,
34
+ DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP_R;
32
35
uint16_t info = trace_mem_build_info(SHIFT, false, 0, false,
33
#ifndef CONFIG_USER_ONLY
36
ATOMIC_MMU_IDX);
34
+ .has_work = cris_cpu_has_work,
37
35
.cpu_exec_interrupt = cris_cpu_exec_interrupt,
38
@@ -XXX,XX +XXX,XX @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
36
.do_interrupt = crisv10_cpu_do_interrupt,
39
ABI_TYPE val EXTRA_ARGS)
37
#endif /* !CONFIG_USER_ONLY */
40
{
38
@@ -XXX,XX +XXX,XX @@ static void cris_cpu_class_init(ObjectClass *oc, void *data)
41
ATOMIC_MMU_DECLS;
39
device_class_set_parent_reset(dc, cris_cpu_reset, &ccc->parent_reset);
42
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
40
43
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_W;
41
cc->class_by_name = cris_cpu_class_by_name;
44
uint16_t info = trace_mem_build_info(SHIFT, false, 0, true,
42
- cc->has_work = cris_cpu_has_work;
45
ATOMIC_MMU_IDX);
43
cc->dump_state = cris_cpu_dump_state;
46
44
cc->set_pc = cris_cpu_set_pc;
47
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
45
cc->gdb_read_register = cris_cpu_gdb_read_register;
48
ABI_TYPE val EXTRA_ARGS)
49
{
50
ATOMIC_MMU_DECLS;
51
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
52
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW;
53
DATA_TYPE ret;
54
uint16_t info = trace_mem_build_info(SHIFT, false, 0, false,
55
ATOMIC_MMU_IDX);
56
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
57
ABI_TYPE val EXTRA_ARGS) \
58
{ \
59
ATOMIC_MMU_DECLS; \
60
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
61
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW; \
62
DATA_TYPE ret; \
63
uint16_t info = trace_mem_build_info(SHIFT, false, 0, false, \
64
ATOMIC_MMU_IDX); \
65
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
66
ABI_TYPE xval EXTRA_ARGS) \
67
{ \
68
ATOMIC_MMU_DECLS; \
69
- XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
70
+ XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW; \
71
XDATA_TYPE cmp, old, new, val = xval; \
72
uint16_t info = trace_mem_build_info(SHIFT, false, 0, false, \
73
ATOMIC_MMU_IDX); \
74
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
75
ABI_TYPE cmpv, ABI_TYPE newv EXTRA_ARGS)
76
{
77
ATOMIC_MMU_DECLS;
78
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
79
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW;
80
DATA_TYPE ret;
81
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, false,
82
ATOMIC_MMU_IDX);
83
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
84
ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
85
{
86
ATOMIC_MMU_DECLS;
87
- DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
88
+ DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP_R;
89
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, false,
90
ATOMIC_MMU_IDX);
91
92
@@ -XXX,XX +XXX,XX @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
93
ABI_TYPE val EXTRA_ARGS)
94
{
95
ATOMIC_MMU_DECLS;
96
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
97
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_W;
98
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, true,
99
ATOMIC_MMU_IDX);
100
101
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
102
ABI_TYPE val EXTRA_ARGS)
103
{
104
ATOMIC_MMU_DECLS;
105
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
106
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW;
107
ABI_TYPE ret;
108
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, false,
109
ATOMIC_MMU_IDX);
110
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
111
ABI_TYPE val EXTRA_ARGS) \
112
{ \
113
ATOMIC_MMU_DECLS; \
114
- DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
115
+ DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW; \
116
DATA_TYPE ret; \
117
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, \
118
false, ATOMIC_MMU_IDX); \
119
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
120
ABI_TYPE xval EXTRA_ARGS) \
121
{ \
122
ATOMIC_MMU_DECLS; \
123
- XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
124
+ XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP_RW; \
125
XDATA_TYPE ldo, ldn, old, new, val = xval; \
126
uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, \
127
false, ATOMIC_MMU_IDX); \
128
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
129
index XXXXXXX..XXXXXXX 100644
130
--- a/accel/tcg/cputlb.c
131
+++ b/accel/tcg/cputlb.c
132
@@ -XXX,XX +XXX,XX @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
133
134
#endif
135
136
-/* Probe for a read-modify-write atomic operation. Do not allow unaligned
137
- * operations, or io operations to proceed. Return the host address. */
138
+/*
139
+ * Probe for an atomic operation. Do not allow unaligned operations,
140
+ * or io operations to proceed. Return the host address.
141
+ *
142
+ * @prot may be PAGE_READ, PAGE_WRITE, or PAGE_READ|PAGE_WRITE.
143
+ */
144
static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
145
- TCGMemOpIdx oi, uintptr_t retaddr)
146
+ TCGMemOpIdx oi, int size, int prot,
147
+ uintptr_t retaddr)
148
{
149
size_t mmu_idx = get_mmuidx(oi);
150
- uintptr_t index = tlb_index(env, mmu_idx, addr);
151
- CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
152
- target_ulong tlb_addr = tlb_addr_write(tlbe);
153
MemOp mop = get_memop(oi);
154
int a_bits = get_alignment_bits(mop);
155
- int s_bits = mop & MO_SIZE;
156
+ uintptr_t index;
157
+ CPUTLBEntry *tlbe;
158
+ target_ulong tlb_addr;
159
void *hostaddr;
160
161
/* Adjust the given return address. */
162
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
163
}
164
165
/* Enforce qemu required alignment. */
166
- if (unlikely(addr & ((1 << s_bits) - 1))) {
167
+ if (unlikely(addr & (size - 1))) {
168
/* We get here if guest alignment was not requested,
169
or was not enforced by cpu_unaligned_access above.
170
We might widen the access and emulate, but for now
171
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
172
goto stop_the_world;
173
}
174
175
+ index = tlb_index(env, mmu_idx, addr);
176
+ tlbe = tlb_entry(env, mmu_idx, addr);
177
+
178
/* Check TLB entry and enforce page permissions. */
179
- if (!tlb_hit(tlb_addr, addr)) {
180
- if (!VICTIM_TLB_HIT(addr_write, addr)) {
181
- tlb_fill(env_cpu(env), addr, 1 << s_bits, MMU_DATA_STORE,
182
- mmu_idx, retaddr);
183
- index = tlb_index(env, mmu_idx, addr);
184
- tlbe = tlb_entry(env, mmu_idx, addr);
185
+ if (prot & PAGE_WRITE) {
186
+ tlb_addr = tlb_addr_write(tlbe);
187
+ if (!tlb_hit(tlb_addr, addr)) {
188
+ if (!VICTIM_TLB_HIT(addr_write, addr)) {
189
+ tlb_fill(env_cpu(env), addr, size,
190
+ MMU_DATA_STORE, mmu_idx, retaddr);
191
+ index = tlb_index(env, mmu_idx, addr);
192
+ tlbe = tlb_entry(env, mmu_idx, addr);
193
+ }
194
+ tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
195
+ }
196
+
197
+ /* Let the guest notice RMW on a write-only page. */
198
+ if ((prot & PAGE_READ) &&
199
+ unlikely(tlbe->addr_read != (tlb_addr & ~TLB_NOTDIRTY))) {
200
+ tlb_fill(env_cpu(env), addr, size,
201
+ MMU_DATA_LOAD, mmu_idx, retaddr);
202
+ /*
203
+ * Since we don't support reads and writes to different addresses,
204
+ * and we do have the proper page loaded for write, this shouldn't
205
+ * ever return. But just in case, handle via stop-the-world.
206
+ */
207
+ goto stop_the_world;
208
+ }
209
+ } else /* if (prot & PAGE_READ) */ {
210
+ tlb_addr = tlbe->addr_read;
211
+ if (!tlb_hit(tlb_addr, addr)) {
212
+ if (!VICTIM_TLB_HIT(addr_write, addr)) {
213
+ tlb_fill(env_cpu(env), addr, size,
214
+ MMU_DATA_LOAD, mmu_idx, retaddr);
215
+ index = tlb_index(env, mmu_idx, addr);
216
+ tlbe = tlb_entry(env, mmu_idx, addr);
217
+ }
218
+ tlb_addr = tlbe->addr_read & ~TLB_INVALID_MASK;
219
}
220
- tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
221
}
222
223
/* Notice an IO access or a needs-MMU-lookup access */
224
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
225
goto stop_the_world;
226
}
227
228
- /* Let the guest notice RMW on a write-only page. */
229
- if (unlikely(tlbe->addr_read != (tlb_addr & ~TLB_NOTDIRTY))) {
230
- tlb_fill(env_cpu(env), addr, 1 << s_bits, MMU_DATA_LOAD,
231
- mmu_idx, retaddr);
232
- /* Since we don't support reads and writes to different addresses,
233
- and we do have the proper page loaded for write, this shouldn't
234
- ever return. But just in case, handle via stop-the-world. */
235
- goto stop_the_world;
236
- }
237
-
238
hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
239
240
if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
241
- notdirty_write(env_cpu(env), addr, 1 << s_bits,
242
+ notdirty_write(env_cpu(env), addr, size,
243
&env_tlb(env)->d[mmu_idx].iotlb[index], retaddr);
244
}
245
246
@@ -XXX,XX +XXX,XX @@ void cpu_stq_le_data(CPUArchState *env, target_ulong ptr, uint64_t val)
247
#define ATOMIC_NAME(X) \
248
HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
249
#define ATOMIC_MMU_DECLS
250
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, retaddr)
251
+#define ATOMIC_MMU_LOOKUP_RW \
252
+ atomic_mmu_lookup(env, addr, oi, DATA_SIZE, PAGE_READ | PAGE_WRITE, retaddr)
253
+#define ATOMIC_MMU_LOOKUP_R \
254
+ atomic_mmu_lookup(env, addr, oi, DATA_SIZE, PAGE_READ, retaddr)
255
+#define ATOMIC_MMU_LOOKUP_W \
256
+ atomic_mmu_lookup(env, addr, oi, DATA_SIZE, PAGE_WRITE, retaddr)
257
#define ATOMIC_MMU_CLEANUP
258
#define ATOMIC_MMU_IDX get_mmuidx(oi)
259
260
@@ -XXX,XX +XXX,XX @@ void cpu_stq_le_data(CPUArchState *env, target_ulong ptr, uint64_t val)
261
262
#undef EXTRA_ARGS
263
#undef ATOMIC_NAME
264
-#undef ATOMIC_MMU_LOOKUP
265
+#undef ATOMIC_MMU_LOOKUP_RW
266
+#undef ATOMIC_MMU_LOOKUP_R
267
+#undef ATOMIC_MMU_LOOKUP_W
268
+
269
#define EXTRA_ARGS , TCGMemOpIdx oi
270
#define ATOMIC_NAME(X) HELPER(glue(glue(atomic_ ## X, SUFFIX), END))
271
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, oi, GETPC())
272
+#define ATOMIC_MMU_LOOKUP_RW \
273
+ atomic_mmu_lookup(env, addr, oi, DATA_SIZE, PAGE_READ | PAGE_WRITE, GETPC())
274
+#define ATOMIC_MMU_LOOKUP_R \
275
+ atomic_mmu_lookup(env, addr, oi, DATA_SIZE, PAGE_READ, GETPC())
276
+#define ATOMIC_MMU_LOOKUP_W \
277
+ atomic_mmu_lookup(env, addr, oi, DATA_SIZE, PAGE_WRITE, GETPC())
278
279
#define DATA_SIZE 1
280
#include "atomic_template.h"
281
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
282
index XXXXXXX..XXXXXXX 100644
283
--- a/accel/tcg/user-exec.c
284
+++ b/accel/tcg/user-exec.c
285
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
286
287
/* Macro to call the above, with local variables from the use context. */
288
#define ATOMIC_MMU_DECLS do {} while (0)
289
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, GETPC())
290
+#define ATOMIC_MMU_LOOKUP_RW atomic_mmu_lookup(env, addr, DATA_SIZE, GETPC())
291
+#define ATOMIC_MMU_LOOKUP_R ATOMIC_MMU_LOOKUP_RW
292
+#define ATOMIC_MMU_LOOKUP_W ATOMIC_MMU_LOOKUP_RW
293
#define ATOMIC_MMU_CLEANUP do { clear_helper_retaddr(); } while (0)
294
#define ATOMIC_MMU_IDX MMU_USER_IDX
295
296
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
297
298
#undef EXTRA_ARGS
299
#undef ATOMIC_NAME
300
-#undef ATOMIC_MMU_LOOKUP
301
+#undef ATOMIC_MMU_LOOKUP_RW
302
303
#define EXTRA_ARGS , TCGMemOpIdx oi, uintptr_t retaddr
304
#define ATOMIC_NAME(X) \
305
HELPER(glue(glue(glue(atomic_ ## X, SUFFIX), END), _mmu))
306
-#define ATOMIC_MMU_LOOKUP atomic_mmu_lookup(env, addr, DATA_SIZE, retaddr)
307
+#define ATOMIC_MMU_LOOKUP_RW atomic_mmu_lookup(env, addr, DATA_SIZE, retaddr)
308
309
#define DATA_SIZE 16
310
#include "atomic_template.h"
311
--
2.25.1
1
As noted by qemu-plugins.h, enum qemu_plugin_cb_flags is
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
currently unused -- plugins can neither read nor write
3
guest registers.
4
2
5
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
has_work() is sysemu specific, and Hexagon target only provides
4
a linux-user implementation. Remove the unused hexagon_cpu_has_work().
5
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-Id: <20210912172731.789788-13-f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
10
---
8
accel/tcg/plugin-helpers.h | 1 -
11
target/hexagon/cpu.c | 6 ------
9
include/qemu/plugin.h | 1 -
12
1 file changed, 6 deletions(-)
10
accel/tcg/plugin-gen.c | 8 ++++----
11
plugins/core.c | 30 ++++++------------------------
12
4 files changed, 10 insertions(+), 30 deletions(-)
13
13
14
diff --git a/accel/tcg/plugin-helpers.h b/accel/tcg/plugin-helpers.h
14
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/accel/tcg/plugin-helpers.h
16
--- a/target/hexagon/cpu.c
17
+++ b/accel/tcg/plugin-helpers.h
17
+++ b/target/hexagon/cpu.c
18
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_synchronize_from_tb(CPUState *cs,
19
#ifdef CONFIG_PLUGIN
19
env->gpr[HEX_REG_PC] = tb->pc;
20
-/* Note: no TCG flags because those are overwritten later */
21
DEF_HELPER_2(plugin_vcpu_udata_cb, void, i32, ptr)
22
DEF_HELPER_4(plugin_vcpu_mem_cb, void, i32, i32, i64, ptr)
23
#endif
24
diff --git a/include/qemu/plugin.h b/include/qemu/plugin.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/qemu/plugin.h
27
+++ b/include/qemu/plugin.h
28
@@ -XXX,XX +XXX,XX @@ enum plugin_dyn_cb_subtype {
29
struct qemu_plugin_dyn_cb {
30
union qemu_plugin_cb_sig f;
31
void *userp;
32
- unsigned tcg_flags;
33
enum plugin_dyn_cb_subtype type;
34
/* @rw applies to mem callbacks only (both regular and inline) */
35
enum qemu_plugin_mem_rw rw;
36
diff --git a/accel/tcg/plugin-gen.c b/accel/tcg/plugin-gen.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/accel/tcg/plugin-gen.c
39
+++ b/accel/tcg/plugin-gen.c
40
@@ -XXX,XX +XXX,XX @@ static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
41
}
20
}
42
21
43
static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
22
-static bool hexagon_cpu_has_work(CPUState *cs)
44
- void *func, unsigned tcg_flags, int *cb_idx)
45
+ void *func, int *cb_idx)
46
{
47
/* copy all ops until the call */
48
do {
49
@@ -XXX,XX +XXX,XX @@ static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
50
tcg_debug_assert(i < MAX_OPC_PARAM_ARGS);
51
}
52
op->args[*cb_idx] = (uintptr_t)func;
53
- op->args[*cb_idx + 1] = tcg_flags;
54
+ op->args[*cb_idx + 1] = (*begin_op)->args[*cb_idx + 1];
55
56
return op;
57
}
58
@@ -XXX,XX +XXX,XX @@ static TCGOp *append_udata_cb(const struct qemu_plugin_dyn_cb *cb,
59
60
/* call */
61
op = copy_call(&begin_op, op, HELPER(plugin_vcpu_udata_cb),
62
- cb->f.vcpu_udata, cb->tcg_flags, cb_idx);
63
+ cb->f.vcpu_udata, cb_idx);
64
65
return op;
66
}
67
@@ -XXX,XX +XXX,XX @@ static TCGOp *append_mem_cb(const struct qemu_plugin_dyn_cb *cb,
68
if (type == PLUGIN_GEN_CB_MEM) {
69
/* call */
70
op = copy_call(&begin_op, op, HELPER(plugin_vcpu_mem_cb),
71
- cb->f.vcpu_udata, cb->tcg_flags, cb_idx);
72
+ cb->f.vcpu_udata, cb_idx);
73
}
74
75
return op;
76
diff --git a/plugins/core.c b/plugins/core.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/plugins/core.c
79
+++ b/plugins/core.c
80
@@ -XXX,XX +XXX,XX @@ void plugin_register_inline_op(GArray **arr,
81
dyn_cb->inline_insn.imm = imm;
82
}
83
84
-static inline uint32_t cb_to_tcg_flags(enum qemu_plugin_cb_flags flags)
85
-{
23
-{
86
- uint32_t ret;
24
- return true;
87
-
88
- switch (flags) {
89
- case QEMU_PLUGIN_CB_RW_REGS:
90
- ret = 0;
91
- break;
92
- case QEMU_PLUGIN_CB_R_REGS:
93
- ret = TCG_CALL_NO_WG;
94
- break;
95
- case QEMU_PLUGIN_CB_NO_REGS:
96
- default:
97
- ret = TCG_CALL_NO_RWG;
98
- }
99
- return ret;
100
-}
25
-}
101
-
26
-
102
-inline void
27
void restore_state_to_opc(CPUHexagonState *env, TranslationBlock *tb,
103
-plugin_register_dyn_cb__udata(GArray **arr,
28
target_ulong *data)
104
- qemu_plugin_vcpu_udata_cb_t cb,
105
- enum qemu_plugin_cb_flags flags, void *udata)
106
+void plugin_register_dyn_cb__udata(GArray **arr,
107
+ qemu_plugin_vcpu_udata_cb_t cb,
108
+ enum qemu_plugin_cb_flags flags,
109
+ void *udata)
110
{
29
{
111
struct qemu_plugin_dyn_cb *dyn_cb = plugin_get_dyn_cb(arr);
30
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_class_init(ObjectClass *c, void *data)
112
31
device_class_set_parent_reset(dc, hexagon_cpu_reset, &mcc->parent_reset);
113
dyn_cb->userp = udata;
32
114
- dyn_cb->tcg_flags = cb_to_tcg_flags(flags);
33
cc->class_by_name = hexagon_cpu_class_by_name;
115
+ /* Note flags are discarded as unused. */
34
- cc->has_work = hexagon_cpu_has_work;
116
dyn_cb->f.vcpu_udata = cb;
35
cc->dump_state = hexagon_dump_state;
117
dyn_cb->type = PLUGIN_CB_REGULAR;
36
cc->set_pc = hexagon_cpu_set_pc;
118
}
37
cc->gdb_read_register = hexagon_gdb_read_register;
119
@@ -XXX,XX +XXX,XX @@ void plugin_register_vcpu_mem_cb(GArray **arr,
120
121
dyn_cb = plugin_get_dyn_cb(arr);
122
dyn_cb->userp = udata;
123
- dyn_cb->tcg_flags = cb_to_tcg_flags(flags);
124
+ /* Note flags are discarded as unused. */
125
dyn_cb->type = PLUGIN_CB_REGULAR;
126
dyn_cb->rw = rw;
127
dyn_cb->f.generic = cb;
128
--
2.25.1
1
The longest test at the moment seems to be a (slower)
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
aarch64 host, for which test-mmap takes 64 seconds.
3
2
4
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to sysemu.
5
Acked-by: Alex Bennée <alex.bennee@linaro.org>
4
6
Reviewed-by: Thomas Huth <thuth@redhat.com>
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-14-f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
9
---
10
configure | 3 +++
10
target/hppa/cpu.c | 4 +++-
11
tests/tcg/Makefile.target | 6 ++++--
11
1 file changed, 3 insertions(+), 1 deletion(-)
12
2 files changed, 7 insertions(+), 2 deletions(-)
13
12
14
diff --git a/configure b/configure
13
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
15
index XXXXXXX..XXXXXXX 100755
16
--- a/configure
17
+++ b/configure
18
@@ -XXX,XX +XXX,XX @@ fi
19
if test "$optreset" = "yes" ; then
20
echo "HAVE_OPTRESET=y" >> $config_host_mak
21
fi
22
+if test "$tcg" = "enabled" -a "$tcg_interpreter" = "true" ; then
23
+ echo "CONFIG_TCG_INTERPRETER=y" >> $config_host_mak
24
+fi
25
if test "$fdatasync" = "yes" ; then
26
echo "CONFIG_FDATASYNC=y" >> $config_host_mak
27
fi
28
diff --git a/tests/tcg/Makefile.target b/tests/tcg/Makefile.target
29
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
30
--- a/tests/tcg/Makefile.target
15
--- a/target/hppa/cpu.c
31
+++ b/tests/tcg/Makefile.target
16
+++ b/target/hppa/cpu.c
32
@@ -XXX,XX +XXX,XX @@ LDFLAGS=
17
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
33
QEMU_OPTS=
18
cpu->env.psw_n = (tb->flags & PSW_N) != 0;
34
19
}
35
20
36
-# If TCG debugging is enabled things are a lot slower
21
+#if !defined(CONFIG_USER_ONLY)
37
-ifeq ($(CONFIG_DEBUG_TCG),y)
22
static bool hppa_cpu_has_work(CPUState *cs)
38
+# If TCG debugging, or TCI is enabled things are a lot slower
23
{
39
+ifneq ($(CONFIG_TCG_INTERPRETER),)
24
return cs->interrupt_request & CPU_INTERRUPT_HARD;
40
+TIMEOUT=90
25
}
41
+else ifneq ($(CONFIG_DEBUG_TCG),)
26
+#endif /* !CONFIG_USER_ONLY */
42
TIMEOUT=60
27
43
else
28
static void hppa_cpu_disas_set_info(CPUState *cs, disassemble_info *info)
44
TIMEOUT=15
29
{
30
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps hppa_tcg_ops = {
31
.tlb_fill = hppa_cpu_tlb_fill,
32
33
#ifndef CONFIG_USER_ONLY
34
+ .has_work = hppa_cpu_has_work,
35
.cpu_exec_interrupt = hppa_cpu_exec_interrupt,
36
.do_interrupt = hppa_cpu_do_interrupt,
37
.do_unaligned_access = hppa_cpu_do_unaligned_access,
38
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_class_init(ObjectClass *oc, void *data)
39
&acc->parent_realize);
40
41
cc->class_by_name = hppa_cpu_class_by_name;
42
- cc->has_work = hppa_cpu_has_work;
43
cc->dump_state = hppa_cpu_dump_state;
44
cc->set_pc = hppa_cpu_set_pc;
45
cc->gdb_read_register = hppa_cpu_gdb_read_register;
45
--
2.25.1
1
We're going to change how to look up the call flags from a TCGOp,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
so extract it as a helper.
3
2
4
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to TCG sysemu.
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-15-f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
9
---
8
tcg/tcg-internal.h | 5 +++++
10
target/i386/cpu.c | 6 ------
9
tcg/optimize.c | 3 ++-
11
target/i386/tcg/tcg-cpu.c | 8 +++++++-
10
tcg/tcg.c | 14 ++++++--------
12
2 files changed, 7 insertions(+), 7 deletions(-)
11
3 files changed, 13 insertions(+), 9 deletions(-)
12
13
13
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
14
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/tcg/tcg-internal.h
16
--- a/target/i386/cpu.c
16
+++ b/tcg/tcg-internal.h
17
+++ b/target/i386/cpu.c
17
@@ -XXX,XX +XXX,XX @@ bool tcg_region_alloc(TCGContext *s);
18
@@ -XXX,XX +XXX,XX @@ int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
18
void tcg_region_initial_alloc(TCGContext *s);
19
return 0;
19
void tcg_region_prologue_set(TCGContext *s);
20
}
20
21
21
+static inline unsigned tcg_call_flags(TCGOp *op)
22
-static bool x86_cpu_has_work(CPUState *cs)
23
-{
24
- return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
25
-}
26
-
27
static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
28
{
29
X86CPU *cpu = X86_CPU(cs);
30
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_common_class_init(ObjectClass *oc, void *data)
31
32
cc->class_by_name = x86_cpu_class_by_name;
33
cc->parse_features = x86_cpu_parse_featurestr;
34
- cc->has_work = x86_cpu_has_work;
35
cc->dump_state = x86_cpu_dump_state;
36
cc->set_pc = x86_cpu_set_pc;
37
cc->gdb_read_register = x86_cpu_gdb_read_register;
38
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/i386/tcg/tcg-cpu.c
41
+++ b/target/i386/tcg/tcg-cpu.c
42
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_synchronize_from_tb(CPUState *cs,
43
}
44
45
#ifndef CONFIG_USER_ONLY
46
+static bool x86_cpu_has_work(CPUState *cs)
22
+{
47
+{
23
+ return op->args[TCGOP_CALLO(op) + TCGOP_CALLI(op) + 1];
48
+ return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
24
+}
49
+}
25
+
50
+
26
#endif /* TCG_INTERNAL_H */
51
static bool x86_debug_check_breakpoint(CPUState *cs)
27
diff --git a/tcg/optimize.c b/tcg/optimize.c
52
{
28
index XXXXXXX..XXXXXXX 100644
53
X86CPU *cpu = X86_CPU(cs);
29
--- a/tcg/optimize.c
54
@@ -XXX,XX +XXX,XX @@ static bool x86_debug_check_breakpoint(CPUState *cs)
30
+++ b/tcg/optimize.c
55
/* RF disables all architectural breakpoints. */
31
@@ -XXX,XX +XXX,XX @@
56
return !(env->eflags & RF_MASK);
32
57
}
33
#include "qemu/osdep.h"
58
-#endif
34
#include "tcg/tcg-op.h"
59
+#endif /* CONFIG_USER_ONLY */
35
+#include "tcg-internal.h"
60
36
61
#include "hw/core/tcg-cpu-ops.h"
37
#define CASE_OP_32_64(x) \
62
38
glue(glue(case INDEX_op_, x), _i32): \
63
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps x86_tcg_ops = {
39
@@ -XXX,XX +XXX,XX @@ void tcg_optimize(TCGContext *s)
64
#ifdef CONFIG_USER_ONLY
40
break;
65
.fake_user_interrupt = x86_cpu_do_interrupt,
41
66
#else
42
case INDEX_op_call:
67
+ .has_work = x86_cpu_has_work,
43
- if (!(op->args[nb_oargs + nb_iargs + 1]
68
.do_interrupt = x86_cpu_do_interrupt,
44
+ if (!(tcg_call_flags(op)
69
.cpu_exec_interrupt = x86_cpu_exec_interrupt,
45
& (TCG_CALL_NO_READ_GLOBALS | TCG_CALL_NO_WRITE_GLOBALS))) {
70
.debug_excp_handler = breakpoint_handler,
46
for (i = 0; i < nb_globals; i++) {
47
if (test_bit(i, temps_used.l)) {
48
diff --git a/tcg/tcg.c b/tcg/tcg.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/tcg/tcg.c
51
+++ b/tcg/tcg.c
52
@@ -XXX,XX +XXX,XX @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
53
nb_cargs = def->nb_cargs;
54
55
/* function name, flags, out args */
56
- col += qemu_log(" %s %s,$0x%" TCG_PRIlx ",$%d", def->name,
57
+ col += qemu_log(" %s %s,$0x%x,$%d", def->name,
58
tcg_find_helper(s, op->args[nb_oargs + nb_iargs]),
59
- op->args[nb_oargs + nb_iargs + 1], nb_oargs);
60
+ tcg_call_flags(op), nb_oargs);
61
for (i = 0; i < nb_oargs; i++) {
62
col += qemu_log(",%s", tcg_get_arg_str(s, buf, sizeof(buf),
63
op->args[i]));
64
@@ -XXX,XX +XXX,XX @@ static void reachable_code_pass(TCGContext *s)
65
QTAILQ_FOREACH_SAFE(op, &s->ops, link, op_next) {
66
bool remove = dead;
67
TCGLabel *label;
68
- int call_flags;
69
70
switch (op->opc) {
71
case INDEX_op_set_label:
72
@@ -XXX,XX +XXX,XX @@ static void reachable_code_pass(TCGContext *s)
73
74
case INDEX_op_call:
75
/* Notice noreturn helper calls, raising exceptions. */
76
- call_flags = op->args[TCGOP_CALLO(op) + TCGOP_CALLI(op) + 1];
77
- if (call_flags & TCG_CALL_NO_RETURN) {
78
+ if (tcg_call_flags(op) & TCG_CALL_NO_RETURN) {
79
dead = true;
80
}
81
break;
82
@@ -XXX,XX +XXX,XX @@ static void liveness_pass_1(TCGContext *s)
83
84
nb_oargs = TCGOP_CALLO(op);
85
nb_iargs = TCGOP_CALLI(op);
86
- call_flags = op->args[nb_oargs + nb_iargs + 1];
87
+ call_flags = tcg_call_flags(op);
88
89
/* pure functions can be removed if their result is unused */
90
if (call_flags & TCG_CALL_NO_SIDE_EFFECTS) {
91
@@ -XXX,XX +XXX,XX @@ static bool liveness_pass_2(TCGContext *s)
92
if (opc == INDEX_op_call) {
93
nb_oargs = TCGOP_CALLO(op);
94
nb_iargs = TCGOP_CALLI(op);
95
- call_flags = op->args[nb_oargs + nb_iargs + 1];
96
+ call_flags = tcg_call_flags(op);
97
} else {
98
nb_iargs = def->nb_iargs;
99
nb_oargs = def->nb_oargs;
100
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
101
TCGRegSet allocated_regs;
102
103
func_addr = (tcg_insn_unit *)(intptr_t)op->args[nb_oargs + nb_iargs];
104
- flags = op->args[nb_oargs + nb_iargs + 1];
105
+ flags = tcg_call_flags(op);
106
107
nb_regs = ARRAY_SIZE(tcg_target_call_iarg_regs);
108
if (nb_regs > nb_iargs) {
109
--
2.25.1
1
These macros are only used in one place. By expanding,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
we get to apply some common-subexpression elimination
3
and create some local variables.
4
2
5
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to sysemu.
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-16-f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
---
9
tcg/tci.c | 165 +++++++++++++++++++++++++++++++++---------------------
10
target/m68k/cpu.c | 4 +++-
10
1 file changed, 100 insertions(+), 65 deletions(-)
11
1 file changed, 3 insertions(+), 1 deletion(-)
11
12
12
diff --git a/tcg/tci.c b/tcg/tci.c
13
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/tci.c
15
--- a/target/m68k/cpu.c
15
+++ b/tcg/tci.c
16
+++ b/target/m68k/cpu.c
16
@@ -XXX,XX +XXX,XX @@ static bool tci_compare64(uint64_t u0, uint64_t u1, TCGCond condition)
17
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_set_pc(CPUState *cs, vaddr value)
17
return result;
18
cpu->env.pc = value;
18
}
19
}
19
20
20
-#ifdef CONFIG_SOFTMMU
21
+#if !defined(CONFIG_USER_ONLY)
21
-# define qemu_ld_ub \
22
static bool m68k_cpu_has_work(CPUState *cs)
22
- helper_ret_ldub_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
23
-# define qemu_ld_leuw \
24
- helper_le_lduw_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
25
-# define qemu_ld_leul \
26
- helper_le_ldul_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
27
-# define qemu_ld_leq \
28
- helper_le_ldq_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
29
-# define qemu_ld_beuw \
30
- helper_be_lduw_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
31
-# define qemu_ld_beul \
32
- helper_be_ldul_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
33
-# define qemu_ld_beq \
34
- helper_be_ldq_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
35
-# define qemu_st_b(X) \
36
- helper_ret_stb_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
37
-# define qemu_st_lew(X) \
38
- helper_le_stw_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
39
-# define qemu_st_lel(X) \
40
- helper_le_stl_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
41
-# define qemu_st_leq(X) \
42
- helper_le_stq_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
43
-# define qemu_st_bew(X) \
44
- helper_be_stw_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
45
-# define qemu_st_bel(X) \
46
- helper_be_stl_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
47
-# define qemu_st_beq(X) \
48
- helper_be_stq_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
49
-#else
50
-# define qemu_ld_ub ldub_p(g2h(env_cpu(env), taddr))
51
-# define qemu_ld_leuw lduw_le_p(g2h(env_cpu(env), taddr))
52
-# define qemu_ld_leul (uint32_t)ldl_le_p(g2h(env_cpu(env), taddr))
53
-# define qemu_ld_leq ldq_le_p(g2h(env_cpu(env), taddr))
54
-# define qemu_ld_beuw lduw_be_p(g2h(env_cpu(env), taddr))
55
-# define qemu_ld_beul (uint32_t)ldl_be_p(g2h(env_cpu(env), taddr))
56
-# define qemu_ld_beq ldq_be_p(g2h(env_cpu(env), taddr))
57
-# define qemu_st_b(X) stb_p(g2h(env_cpu(env), taddr), X)
58
-# define qemu_st_lew(X) stw_le_p(g2h(env_cpu(env), taddr), X)
59
-# define qemu_st_lel(X) stl_le_p(g2h(env_cpu(env), taddr), X)
60
-# define qemu_st_leq(X) stq_le_p(g2h(env_cpu(env), taddr), X)
61
-# define qemu_st_bew(X) stw_be_p(g2h(env_cpu(env), taddr), X)
62
-# define qemu_st_bel(X) stl_be_p(g2h(env_cpu(env), taddr), X)
63
-# define qemu_st_beq(X) stq_be_p(g2h(env_cpu(env), taddr), X)
64
-#endif
65
-
66
static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
67
TCGMemOpIdx oi, const void *tb_ptr)
68
{
23
{
69
MemOp mop = get_memop(oi) & (MO_BSWAP | MO_SSIZE);
24
return cs->interrupt_request & CPU_INTERRUPT_HARD;
70
71
+#ifdef CONFIG_SOFTMMU
72
+ uintptr_t ra = (uintptr_t)tb_ptr;
73
+
74
switch (mop) {
75
case MO_UB:
76
- return qemu_ld_ub;
77
+ return helper_ret_ldub_mmu(env, taddr, oi, ra);
78
case MO_SB:
79
- return (int8_t)qemu_ld_ub;
80
+ return helper_ret_ldsb_mmu(env, taddr, oi, ra);
81
case MO_LEUW:
82
- return qemu_ld_leuw;
83
+ return helper_le_lduw_mmu(env, taddr, oi, ra);
84
case MO_LESW:
85
- return (int16_t)qemu_ld_leuw;
86
+ return helper_le_ldsw_mmu(env, taddr, oi, ra);
87
case MO_LEUL:
88
- return qemu_ld_leul;
89
+ return helper_le_ldul_mmu(env, taddr, oi, ra);
90
case MO_LESL:
91
- return (int32_t)qemu_ld_leul;
92
+ return helper_le_ldsl_mmu(env, taddr, oi, ra);
93
case MO_LEQ:
94
- return qemu_ld_leq;
95
+ return helper_le_ldq_mmu(env, taddr, oi, ra);
96
case MO_BEUW:
97
- return qemu_ld_beuw;
98
+ return helper_be_lduw_mmu(env, taddr, oi, ra);
99
case MO_BESW:
100
- return (int16_t)qemu_ld_beuw;
101
+ return helper_be_ldsw_mmu(env, taddr, oi, ra);
102
case MO_BEUL:
103
- return qemu_ld_beul;
104
+ return helper_be_ldul_mmu(env, taddr, oi, ra);
105
case MO_BESL:
106
- return (int32_t)qemu_ld_beul;
107
+ return helper_be_ldsl_mmu(env, taddr, oi, ra);
108
case MO_BEQ:
109
- return qemu_ld_beq;
110
+ return helper_be_ldq_mmu(env, taddr, oi, ra);
111
default:
112
g_assert_not_reached();
113
}
114
+#else
115
+ void *haddr = g2h(env_cpu(env), taddr);
116
+ uint64_t ret;
117
+
118
+ switch (mop) {
119
+ case MO_UB:
120
+ ret = ldub_p(haddr);
121
+ break;
122
+ case MO_SB:
123
+ ret = ldsb_p(haddr);
124
+ break;
125
+ case MO_LEUW:
126
+ ret = lduw_le_p(haddr);
127
+ break;
128
+ case MO_LESW:
129
+ ret = ldsw_le_p(haddr);
130
+ break;
131
+ case MO_LEUL:
132
+ ret = (uint32_t)ldl_le_p(haddr);
133
+ break;
134
+ case MO_LESL:
135
+ ret = (int32_t)ldl_le_p(haddr);
136
+ break;
137
+ case MO_LEQ:
138
+ ret = ldq_le_p(haddr);
139
+ break;
140
+ case MO_BEUW:
141
+ ret = lduw_be_p(haddr);
142
+ break;
143
+ case MO_BESW:
144
+ ret = ldsw_be_p(haddr);
145
+ break;
146
+ case MO_BEUL:
147
+ ret = (uint32_t)ldl_be_p(haddr);
148
+ break;
149
+ case MO_BESL:
150
+ ret = (int32_t)ldl_be_p(haddr);
151
+ break;
152
+ case MO_BEQ:
153
+ ret = ldq_be_p(haddr);
154
+ break;
155
+ default:
156
+ g_assert_not_reached();
157
+ }
158
+ return ret;
159
+#endif
160
}
25
}
161
26
+#endif /* !CONFIG_USER_ONLY */
162
static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
27
163
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
28
static void m68k_set_feature(CPUM68KState *env, int feature)
164
{
29
{
165
MemOp mop = get_memop(oi) & (MO_BSWAP | MO_SSIZE);
30
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps m68k_tcg_ops = {
166
31
.tlb_fill = m68k_cpu_tlb_fill,
167
+#ifdef CONFIG_SOFTMMU
32
168
+ uintptr_t ra = (uintptr_t)tb_ptr;
33
#ifndef CONFIG_USER_ONLY
169
+
34
+ .has_work = m68k_cpu_has_work,
170
switch (mop) {
35
.cpu_exec_interrupt = m68k_cpu_exec_interrupt,
171
case MO_UB:
36
.do_interrupt = m68k_cpu_do_interrupt,
172
- qemu_st_b(val);
37
.do_transaction_failed = m68k_cpu_transaction_failed,
173
+ helper_ret_stb_mmu(env, taddr, val, oi, ra);
38
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_class_init(ObjectClass *c, void *data)
174
break;
39
device_class_set_parent_reset(dc, m68k_cpu_reset, &mcc->parent_reset);
175
case MO_LEUW:
40
176
- qemu_st_lew(val);
41
cc->class_by_name = m68k_cpu_class_by_name;
177
+ helper_le_stw_mmu(env, taddr, val, oi, ra);
42
- cc->has_work = m68k_cpu_has_work;
178
break;
43
cc->dump_state = m68k_cpu_dump_state;
179
case MO_LEUL:
44
cc->set_pc = m68k_cpu_set_pc;
180
- qemu_st_lel(val);
45
cc->gdb_read_register = m68k_cpu_gdb_read_register;
181
+ helper_le_stl_mmu(env, taddr, val, oi, ra);
182
break;
183
case MO_LEQ:
184
- qemu_st_leq(val);
185
+ helper_le_stq_mmu(env, taddr, val, oi, ra);
186
break;
187
case MO_BEUW:
188
- qemu_st_bew(val);
189
+ helper_be_stw_mmu(env, taddr, val, oi, ra);
190
break;
191
case MO_BEUL:
192
- qemu_st_bel(val);
193
+ helper_be_stl_mmu(env, taddr, val, oi, ra);
194
break;
195
case MO_BEQ:
196
- qemu_st_beq(val);
197
+ helper_be_stq_mmu(env, taddr, val, oi, ra);
198
break;
199
default:
200
g_assert_not_reached();
201
}
202
+#else
203
+ void *haddr = g2h(env_cpu(env), taddr);
204
+
205
+ switch (mop) {
206
+ case MO_UB:
207
+ stb_p(haddr, val);
208
+ break;
209
+ case MO_LEUW:
210
+ stw_le_p(haddr, val);
211
+ break;
212
+ case MO_LEUL:
213
+ stl_le_p(haddr, val);
214
+ break;
215
+ case MO_LEQ:
216
+ stq_le_p(haddr, val);
217
+ break;
218
+ case MO_BEUW:
219
+ stw_be_p(haddr, val);
220
+ break;
221
+ case MO_BEUL:
222
+ stl_be_p(haddr, val);
223
+ break;
224
+ case MO_BEQ:
225
+ stq_be_p(haddr, val);
226
+ break;
227
+ default:
228
+ g_assert_not_reached();
229
+ }
230
+#endif
231
}
232
233
#if TCG_TARGET_REG_BITS == 64
234
--
2.25.1
1
We already had the 32-bit versions for a 32-bit host; expand this
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
to 64-bit hosts as well. The 64-bit opcodes are new.
3
2
4
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to sysemu.
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-17-f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
9
---
8
tcg/tci/tcg-target.h | 8 ++++----
10
target/microblaze/cpu.c | 8 ++++----
9
tcg/tci.c | 40 ++++++++++++++++++++++++++--------------
11
1 file changed, 4 insertions(+), 4 deletions(-)
10
tcg/tci/tcg-target.c.inc | 15 ++++++++-------
11
3 files changed, 38 insertions(+), 25 deletions(-)
12
12
13
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
13
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tcg/tci/tcg-target.h
15
--- a/target/microblaze/cpu.c
16
+++ b/tcg/tci/tcg-target.h
16
+++ b/target/microblaze/cpu.c
17
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_synchronize_from_tb(CPUState *cs,
18
#define TCG_TARGET_HAS_rot_i64 1
18
cpu->env.iflags = tb->flags & IFLAGS_TB_MASK;
19
#define TCG_TARGET_HAS_movcond_i64 1
20
#define TCG_TARGET_HAS_muls2_i64 1
21
-#define TCG_TARGET_HAS_add2_i32 0
22
-#define TCG_TARGET_HAS_sub2_i32 0
23
+#define TCG_TARGET_HAS_add2_i32 1
24
+#define TCG_TARGET_HAS_sub2_i32 1
25
#define TCG_TARGET_HAS_mulu2_i32 1
26
-#define TCG_TARGET_HAS_add2_i64 0
27
-#define TCG_TARGET_HAS_sub2_i64 0
28
+#define TCG_TARGET_HAS_add2_i64 1
29
+#define TCG_TARGET_HAS_sub2_i64 1
30
#define TCG_TARGET_HAS_mulu2_i64 1
31
#define TCG_TARGET_HAS_muluh_i64 0
32
#define TCG_TARGET_HAS_mulsh_i64 0
33
diff --git a/tcg/tci.c b/tcg/tci.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/tcg/tci.c
36
+++ b/tcg/tci.c
37
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrrrc(uint32_t insn, TCGReg *r0, TCGReg *r1,
38
*c5 = extract32(insn, 28, 4);
39
}
19
}
40
20
41
-#if TCG_TARGET_REG_BITS == 32
21
+#ifndef CONFIG_USER_ONLY
42
static void tci_args_rrrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
22
+
43
TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGReg *r5)
23
static bool mb_cpu_has_work(CPUState *cs)
44
{
24
{
45
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
25
return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
46
*r4 = extract32(insn, 24, 4);
26
}
47
*r5 = extract32(insn, 28, 4);
27
28
-#ifndef CONFIG_USER_ONLY
29
static void mb_cpu_ns_axi_dp(void *opaque, int irq, int level)
30
{
31
MicroBlazeCPU *cpu = opaque;
32
@@ -XXX,XX +XXX,XX @@ static void microblaze_cpu_set_irq(void *opaque, int irq, int level)
33
cpu_reset_interrupt(cs, type);
34
}
48
}
35
}
49
-#endif
36
-#endif
50
37
+#endif /* !CONFIG_USER_ONLY */
51
static bool tci_compare32(uint32_t u0, uint32_t u1, TCGCond condition)
38
39
static void mb_cpu_reset(DeviceState *dev)
52
{
40
{
53
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
41
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps mb_tcg_ops = {
54
for (;;) {
42
.tlb_fill = mb_cpu_tlb_fill,
55
uint32_t insn;
43
56
TCGOpcode opc;
44
#ifndef CONFIG_USER_ONLY
57
- TCGReg r0, r1, r2, r3, r4;
45
+ .has_work = mb_cpu_has_work,
58
+ TCGReg r0, r1, r2, r3, r4, r5;
46
.cpu_exec_interrupt = mb_cpu_exec_interrupt,
59
tcg_target_ulong t1;
47
.do_interrupt = mb_cpu_do_interrupt,
60
TCGCond condition;
48
.do_transaction_failed = mb_cpu_transaction_failed,
61
target_ulong taddr;
49
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_class_init(ObjectClass *oc, void *data)
62
uint8_t pos, len;
50
device_class_set_parent_reset(dc, mb_cpu_reset, &mcc->parent_reset);
63
uint32_t tmp32;
51
64
uint64_t tmp64;
52
cc->class_by_name = mb_cpu_class_by_name;
65
-#if TCG_TARGET_REG_BITS == 32
53
- cc->has_work = mb_cpu_has_work;
66
- TCGReg r5;
54
-
67
uint64_t T1, T2;
55
cc->dump_state = mb_cpu_dump_state;
68
-#endif
56
cc->set_pc = mb_cpu_set_pc;
69
TCGMemOpIdx oi;
57
cc->gdb_read_register = mb_cpu_gdb_read_register;
70
int32_t ofs;
71
void *ptr;
72
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
73
tb_ptr = ptr;
74
}
75
break;
76
-#if TCG_TARGET_REG_BITS == 32
77
+#if TCG_TARGET_REG_BITS == 32 || TCG_TARGET_HAS_add2_i32
78
case INDEX_op_add2_i32:
79
tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
80
T1 = tci_uint64(regs[r3], regs[r2]);
81
T2 = tci_uint64(regs[r5], regs[r4]);
82
tci_write_reg64(regs, r1, r0, T1 + T2);
83
break;
84
+#endif
85
+#if TCG_TARGET_REG_BITS == 32 || TCG_TARGET_HAS_sub2_i32
86
case INDEX_op_sub2_i32:
87
tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
88
T1 = tci_uint64(regs[r3], regs[r2]);
89
T2 = tci_uint64(regs[r5], regs[r4]);
90
tci_write_reg64(regs, r1, r0, T1 - T2);
91
break;
92
-#endif /* TCG_TARGET_REG_BITS == 32 */
93
+#endif
94
#if TCG_TARGET_HAS_mulu2_i32
95
case INDEX_op_mulu2_i32:
96
tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
97
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
98
muls64(&regs[r0], &regs[r1], regs[r2], regs[r3]);
99
break;
100
#endif
101
+#if TCG_TARGET_HAS_add2_i64
102
+ case INDEX_op_add2_i64:
103
+ tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
104
+ T1 = regs[r2] + regs[r4];
105
+ T2 = regs[r3] + regs[r5] + (T1 < regs[r2]);
106
+ regs[r0] = T1;
107
+ regs[r1] = T2;
108
+ break;
109
+#endif
110
+#if TCG_TARGET_HAS_sub2_i64
111
+ case INDEX_op_sub2_i64:
112
+ tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
113
+ T1 = regs[r2] - regs[r4];
114
+ T2 = regs[r3] - regs[r5] - (regs[r2] < regs[r4]);
115
+ regs[r0] = T1;
116
+ regs[r1] = T2;
117
+ break;
118
+#endif
119
120
/* Shift/rotate operations (64 bit). */
121
122
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
123
const char *op_name;
124
uint32_t insn;
125
TCGOpcode op;
126
- TCGReg r0, r1, r2, r3, r4;
127
-#if TCG_TARGET_REG_BITS == 32
128
- TCGReg r5;
129
-#endif
130
+ TCGReg r0, r1, r2, r3, r4, r5;
131
tcg_target_ulong i1;
132
int32_t s2;
133
TCGCond c;
134
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
135
str_r(r2), str_r(r3));
136
break;
137
138
-#if TCG_TARGET_REG_BITS == 32
139
case INDEX_op_add2_i32:
140
+ case INDEX_op_add2_i64:
141
case INDEX_op_sub2_i32:
142
+ case INDEX_op_sub2_i64:
143
tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
144
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s, %s",
145
op_name, str_r(r0), str_r(r1), str_r(r2),
146
str_r(r3), str_r(r4), str_r(r5));
147
break;
148
-#endif
149
150
case INDEX_op_qemu_ld_i64:
151
case INDEX_op_qemu_st_i64:
152
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
153
index XXXXXXX..XXXXXXX 100644
154
--- a/tcg/tci/tcg-target.c.inc
155
+++ b/tcg/tci/tcg-target.c.inc
156
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
157
case INDEX_op_brcond_i64:
158
return C_O0_I2(r, r);
159
160
-#if TCG_TARGET_REG_BITS == 32
161
- /* TODO: Support R, R, R, R, RI, RI? Will it be faster? */
162
case INDEX_op_add2_i32:
163
+ case INDEX_op_add2_i64:
164
case INDEX_op_sub2_i32:
165
+ case INDEX_op_sub2_i64:
166
return C_O2_I4(r, r, r, r, r, r);
167
+
168
+#if TCG_TARGET_REG_BITS == 32
169
case INDEX_op_brcond2_i32:
170
return C_O0_I4(r, r, r, r);
171
#endif
172
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
173
tcg_out32(s, insn);
174
}
175
176
-#if TCG_TARGET_REG_BITS == 32
177
static void tcg_out_op_rrrrrr(TCGContext *s, TCGOpcode op,
178
TCGReg r0, TCGReg r1, TCGReg r2,
179
TCGReg r3, TCGReg r4, TCGReg r5)
180
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrrrr(TCGContext *s, TCGOpcode op,
181
insn = deposit32(insn, 28, 4, r5);
182
tcg_out32(s, insn);
183
}
184
-#endif
185
186
static void tcg_out_ldst(TCGContext *s, TCGOpcode op, TCGReg val,
187
TCGReg base, intptr_t offset)
188
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
189
tcg_out_op_rr(s, opc, args[0], args[1]);
190
break;
191
192
-#if TCG_TARGET_REG_BITS == 32
193
- case INDEX_op_add2_i32:
194
- case INDEX_op_sub2_i32:
195
+ CASE_32_64(add2)
196
+ CASE_32_64(sub2)
197
tcg_out_op_rrrrrr(s, opc, args[0], args[1], args[2],
198
args[3], args[4], args[5]);
199
break;
200
+
201
+#if TCG_TARGET_REG_BITS == 32
202
case INDEX_op_brcond2_i32:
203
tcg_out_op_rrrrrc(s, INDEX_op_setcond2_i32, TCG_REG_TMP,
204
args[0], args[1], args[2], args[3], args[4]);
205
--
58
--
206
2.25.1
59
2.25.1
207
60
208
61
diff view generated by jsdifflib
1
This reverts commit dc09f047eddec8f4a1991c4f5f4a428d7aa3f2c0.
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
For tcg, tracepoints are expanded inline in tcg opcodes.
3
Restrict has_work() to TCG sysemu.
4
Using a helper which generates a second tracepoint is incorrect.
5
4
6
For system mode, the extraction and re-packing of MemOp and mmu_idx
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
lost the alignment information from MemOp. So we were no longer
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
raising alignment exceptions for !TARGET_ALIGNED_ONLY guests.
7
Message-Id: <20210912172731.789788-18-f4bug@amsat.org>
9
This can be seen in tests/tcg/xtensa/test_load_store.S.
10
11
For user mode, we must update to the new signature of g2h() so that
12
the revert compiles. We can leave set_helper_retaddr for later.
13
14
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
15
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
16
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
17
---
9
---
18
tcg/tci.c | 73 ++++++++++++++++++++++++++++++++++---------------------
10
target/mips/cpu.c | 4 +++-
19
1 file changed, 45 insertions(+), 28 deletions(-)
11
1 file changed, 3 insertions(+), 1 deletion(-)
20
12
21
diff --git a/tcg/tci.c b/tcg/tci.c
13
diff --git a/target/mips/cpu.c b/target/mips/cpu.c
22
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
23
--- a/tcg/tci.c
15
--- a/target/mips/cpu.c
24
+++ b/tcg/tci.c
16
+++ b/target/mips/cpu.c
25
@@ -XXX,XX +XXX,XX @@ static bool tci_compare64(uint64_t u0, uint64_t u1, TCGCond condition)
17
@@ -XXX,XX +XXX,XX @@ static void mips_cpu_set_pc(CPUState *cs, vaddr value)
26
return result;
18
mips_env_set_pc(&cpu->env, value);
27
}
19
}
28
20
29
-#define qemu_ld_ub \
21
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
30
- cpu_ldub_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
22
static bool mips_cpu_has_work(CPUState *cs)
31
-#define qemu_ld_leuw \
23
{
32
- cpu_lduw_le_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
24
MIPSCPU *cpu = MIPS_CPU(cs);
33
-#define qemu_ld_leul \
25
@@ -XXX,XX +XXX,XX @@ static bool mips_cpu_has_work(CPUState *cs)
34
- cpu_ldl_le_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
26
}
35
-#define qemu_ld_leq \
27
return has_work;
36
- cpu_ldq_le_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
28
}
37
-#define qemu_ld_beuw \
29
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
38
- cpu_lduw_be_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
30
39
-#define qemu_ld_beul \
31
#include "cpu-defs.c.inc"
40
- cpu_ldl_be_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
32
41
-#define qemu_ld_beq \
33
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps mips_tcg_ops = {
42
- cpu_ldq_be_mmuidx_ra(env, taddr, get_mmuidx(oi), (uintptr_t)tb_ptr)
34
.tlb_fill = mips_cpu_tlb_fill,
43
-#define qemu_st_b(X) \
35
44
- cpu_stb_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
36
#if !defined(CONFIG_USER_ONLY)
45
-#define qemu_st_lew(X) \
37
+ .has_work = mips_cpu_has_work,
46
- cpu_stw_le_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
38
.cpu_exec_interrupt = mips_cpu_exec_interrupt,
47
-#define qemu_st_lel(X) \
39
.do_interrupt = mips_cpu_do_interrupt,
48
- cpu_stl_le_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
40
.do_transaction_failed = mips_cpu_do_transaction_failed,
49
-#define qemu_st_leq(X) \
41
@@ -XXX,XX +XXX,XX @@ static void mips_cpu_class_init(ObjectClass *c, void *data)
50
- cpu_stq_le_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
42
device_class_set_props(dc, mips_cpu_properties);
51
-#define qemu_st_bew(X) \
43
52
- cpu_stw_be_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
44
cc->class_by_name = mips_cpu_class_by_name;
53
-#define qemu_st_bel(X) \
45
- cc->has_work = mips_cpu_has_work;
54
- cpu_stl_be_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
46
cc->dump_state = mips_cpu_dump_state;
55
-#define qemu_st_beq(X) \
47
cc->set_pc = mips_cpu_set_pc;
56
- cpu_stq_be_mmuidx_ra(env, taddr, X, get_mmuidx(oi), (uintptr_t)tb_ptr)
48
cc->gdb_read_register = mips_cpu_gdb_read_register;
57
+#ifdef CONFIG_SOFTMMU
58
+# define qemu_ld_ub \
59
+ helper_ret_ldub_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
60
+# define qemu_ld_leuw \
61
+ helper_le_lduw_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
62
+# define qemu_ld_leul \
63
+ helper_le_ldul_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
64
+# define qemu_ld_leq \
65
+ helper_le_ldq_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
66
+# define qemu_ld_beuw \
67
+ helper_be_lduw_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
68
+# define qemu_ld_beul \
69
+ helper_be_ldul_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
70
+# define qemu_ld_beq \
71
+ helper_be_ldq_mmu(env, taddr, oi, (uintptr_t)tb_ptr)
72
+# define qemu_st_b(X) \
73
+ helper_ret_stb_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
74
+# define qemu_st_lew(X) \
75
+ helper_le_stw_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
76
+# define qemu_st_lel(X) \
77
+ helper_le_stl_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
78
+# define qemu_st_leq(X) \
79
+ helper_le_stq_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
80
+# define qemu_st_bew(X) \
81
+ helper_be_stw_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
82
+# define qemu_st_bel(X) \
83
+ helper_be_stl_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
84
+# define qemu_st_beq(X) \
85
+ helper_be_stq_mmu(env, taddr, X, oi, (uintptr_t)tb_ptr)
86
+#else
87
+# define qemu_ld_ub ldub_p(g2h(env_cpu(env), taddr))
88
+# define qemu_ld_leuw lduw_le_p(g2h(env_cpu(env), taddr))
89
+# define qemu_ld_leul (uint32_t)ldl_le_p(g2h(env_cpu(env), taddr))
90
+# define qemu_ld_leq ldq_le_p(g2h(env_cpu(env), taddr))
91
+# define qemu_ld_beuw lduw_be_p(g2h(env_cpu(env), taddr))
92
+# define qemu_ld_beul (uint32_t)ldl_be_p(g2h(env_cpu(env), taddr))
93
+# define qemu_ld_beq ldq_be_p(g2h(env_cpu(env), taddr))
94
+# define qemu_st_b(X) stb_p(g2h(env_cpu(env), taddr), X)
95
+# define qemu_st_lew(X) stw_le_p(g2h(env_cpu(env), taddr), X)
96
+# define qemu_st_lel(X) stl_le_p(g2h(env_cpu(env), taddr), X)
97
+# define qemu_st_leq(X) stq_le_p(g2h(env_cpu(env), taddr), X)
98
+# define qemu_st_bew(X) stw_be_p(g2h(env_cpu(env), taddr), X)
99
+# define qemu_st_bel(X) stl_be_p(g2h(env_cpu(env), taddr), X)
100
+# define qemu_st_beq(X) stq_be_p(g2h(env_cpu(env), taddr), X)
101
+#endif
102
103
static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
104
TCGMemOpIdx oi, const void *tb_ptr)
105
--
49
--
106
2.25.1
50
2.25.1
107
51
108
52
diff view generated by jsdifflib
1
These were already present in tcg-target.c.inc,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
but not in the interpreter.
3
2
4
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to sysemu.
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-19-f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
9
---
8
tcg/tci/tcg-target.h | 20 ++++++++++----------
10
target/nios2/cpu.c | 4 +++-
9
tcg/tci.c | 40 ++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 3 insertions(+), 1 deletion(-)
10
2 files changed, 50 insertions(+), 10 deletions(-)
11
12
12
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
13
diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/tci/tcg-target.h
15
--- a/target/nios2/cpu.c
15
+++ b/tcg/tci/tcg-target.h
16
+++ b/target/nios2/cpu.c
16
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_set_pc(CPUState *cs, vaddr value)
17
#define TCG_TARGET_HAS_ext16s_i32 1
18
env->regs[R_PC] = value;
18
#define TCG_TARGET_HAS_ext8u_i32 1
19
}
19
#define TCG_TARGET_HAS_ext16u_i32 1
20
20
-#define TCG_TARGET_HAS_andc_i32 0
21
+#if !defined(CONFIG_USER_ONLY)
21
+#define TCG_TARGET_HAS_andc_i32 1
22
static bool nios2_cpu_has_work(CPUState *cs)
22
#define TCG_TARGET_HAS_deposit_i32 1
23
{
23
#define TCG_TARGET_HAS_extract_i32 0
24
return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
24
#define TCG_TARGET_HAS_sextract_i32 0
25
}
25
#define TCG_TARGET_HAS_extract2_i32 0
26
+#endif /* !CONFIG_USER_ONLY */
26
-#define TCG_TARGET_HAS_eqv_i32 0
27
27
-#define TCG_TARGET_HAS_nand_i32 0
28
static void nios2_cpu_reset(DeviceState *dev)
28
-#define TCG_TARGET_HAS_nor_i32 0
29
{
29
+#define TCG_TARGET_HAS_eqv_i32 1
30
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps nios2_tcg_ops = {
30
+#define TCG_TARGET_HAS_nand_i32 1
31
.tlb_fill = nios2_cpu_tlb_fill,
31
+#define TCG_TARGET_HAS_nor_i32 1
32
32
#define TCG_TARGET_HAS_clz_i32 0
33
#ifndef CONFIG_USER_ONLY
33
#define TCG_TARGET_HAS_ctz_i32 0
34
+ .has_work = nios2_cpu_has_work,
34
#define TCG_TARGET_HAS_ctpop_i32 0
35
.cpu_exec_interrupt = nios2_cpu_exec_interrupt,
35
#define TCG_TARGET_HAS_neg_i32 1
36
.do_interrupt = nios2_cpu_do_interrupt,
36
#define TCG_TARGET_HAS_not_i32 1
37
.do_unaligned_access = nios2_cpu_do_unaligned_access,
37
-#define TCG_TARGET_HAS_orc_i32 0
38
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_class_init(ObjectClass *oc, void *data)
38
+#define TCG_TARGET_HAS_orc_i32 1
39
device_class_set_parent_reset(dc, nios2_cpu_reset, &ncc->parent_reset);
39
#define TCG_TARGET_HAS_rot_i32 1
40
40
#define TCG_TARGET_HAS_movcond_i32 1
41
cc->class_by_name = nios2_cpu_class_by_name;
41
#define TCG_TARGET_HAS_muls2_i32 0
42
- cc->has_work = nios2_cpu_has_work;
42
@@ -XXX,XX +XXX,XX @@
43
cc->dump_state = nios2_cpu_dump_state;
43
#define TCG_TARGET_HAS_ext8u_i64 1
44
cc->set_pc = nios2_cpu_set_pc;
44
#define TCG_TARGET_HAS_ext16u_i64 1
45
cc->disas_set_info = nios2_cpu_disas_set_info;
45
#define TCG_TARGET_HAS_ext32u_i64 1
46
-#define TCG_TARGET_HAS_andc_i64 0
47
-#define TCG_TARGET_HAS_eqv_i64 0
48
-#define TCG_TARGET_HAS_nand_i64 0
49
-#define TCG_TARGET_HAS_nor_i64 0
50
+#define TCG_TARGET_HAS_andc_i64 1
51
+#define TCG_TARGET_HAS_eqv_i64 1
52
+#define TCG_TARGET_HAS_nand_i64 1
53
+#define TCG_TARGET_HAS_nor_i64 1
54
#define TCG_TARGET_HAS_clz_i64 0
55
#define TCG_TARGET_HAS_ctz_i64 0
56
#define TCG_TARGET_HAS_ctpop_i64 0
57
#define TCG_TARGET_HAS_neg_i64 1
58
#define TCG_TARGET_HAS_not_i64 1
59
-#define TCG_TARGET_HAS_orc_i64 0
60
+#define TCG_TARGET_HAS_orc_i64 1
61
#define TCG_TARGET_HAS_rot_i64 1
62
#define TCG_TARGET_HAS_movcond_i64 1
63
#define TCG_TARGET_HAS_muls2_i64 0
64
diff --git a/tcg/tci.c b/tcg/tci.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/tcg/tci.c
67
+++ b/tcg/tci.c
68
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
69
tci_args_rrr(insn, &r0, &r1, &r2);
70
regs[r0] = regs[r1] ^ regs[r2];
71
break;
72
+#if TCG_TARGET_HAS_andc_i32 || TCG_TARGET_HAS_andc_i64
73
+ CASE_32_64(andc)
74
+ tci_args_rrr(insn, &r0, &r1, &r2);
75
+ regs[r0] = regs[r1] & ~regs[r2];
76
+ break;
77
+#endif
78
+#if TCG_TARGET_HAS_orc_i32 || TCG_TARGET_HAS_orc_i64
79
+ CASE_32_64(orc)
80
+ tci_args_rrr(insn, &r0, &r1, &r2);
81
+ regs[r0] = regs[r1] | ~regs[r2];
82
+ break;
83
+#endif
84
+#if TCG_TARGET_HAS_eqv_i32 || TCG_TARGET_HAS_eqv_i64
85
+ CASE_32_64(eqv)
86
+ tci_args_rrr(insn, &r0, &r1, &r2);
87
+ regs[r0] = ~(regs[r1] ^ regs[r2]);
88
+ break;
89
+#endif
90
+#if TCG_TARGET_HAS_nand_i32 || TCG_TARGET_HAS_nand_i64
91
+ CASE_32_64(nand)
92
+ tci_args_rrr(insn, &r0, &r1, &r2);
93
+ regs[r0] = ~(regs[r1] & regs[r2]);
94
+ break;
95
+#endif
96
+#if TCG_TARGET_HAS_nor_i32 || TCG_TARGET_HAS_nor_i64
97
+ CASE_32_64(nor)
98
+ tci_args_rrr(insn, &r0, &r1, &r2);
99
+ regs[r0] = ~(regs[r1] | regs[r2]);
100
+ break;
101
+#endif
102
103
/* Arithmetic operations (32 bit). */
104
105
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
106
case INDEX_op_or_i64:
107
case INDEX_op_xor_i32:
108
case INDEX_op_xor_i64:
109
+ case INDEX_op_andc_i32:
110
+ case INDEX_op_andc_i64:
111
+ case INDEX_op_orc_i32:
112
+ case INDEX_op_orc_i64:
113
+ case INDEX_op_eqv_i32:
114
+ case INDEX_op_eqv_i64:
115
+ case INDEX_op_nand_i32:
116
+ case INDEX_op_nand_i64:
117
+ case INDEX_op_nor_i32:
118
+ case INDEX_op_nor_i64:
119
case INDEX_op_div_i32:
120
case INDEX_op_div_i64:
121
case INDEX_op_rem_i32:
122
--
46
--
123
2.25.1
47
2.25.1
124
48
125
49
diff view generated by jsdifflib
1
We're about to adjust the offset range on host memory ops,
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
and the format of branches. Both will require a temporary.
3
2
4
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
Restrict has_work() to sysemu.
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-Id: <20210912172731.789788-20-f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
9
---
8
tcg/tci/tcg-target.h | 1 +
10
target/openrisc/cpu.c | 4 +++-
9
tcg/tci/tcg-target.c.inc | 1 +
11
1 file changed, 3 insertions(+), 1 deletion(-)
10
2 files changed, 2 insertions(+)
11
12
12
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
13
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
13
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/tci/tcg-target.h
15
--- a/target/openrisc/cpu.c
15
+++ b/tcg/tci/tcg-target.h
16
+++ b/target/openrisc/cpu.c
16
@@ -XXX,XX +XXX,XX @@ typedef enum {
17
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_set_pc(CPUState *cs, vaddr value)
17
TCG_REG_R14,
18
cpu->env.dflag = 0;
18
TCG_REG_R15,
19
}
19
20
20
+ TCG_REG_TMP = TCG_REG_R13,
21
+#if !defined(CONFIG_USER_ONLY)
21
TCG_AREG0 = TCG_REG_R14,
22
static bool openrisc_cpu_has_work(CPUState *cs)
22
TCG_REG_CALL_STACK = TCG_REG_R15,
23
{
23
} TCGReg;
24
return cs->interrupt_request & (CPU_INTERRUPT_HARD |
24
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
25
CPU_INTERRUPT_TIMER);
25
index XXXXXXX..XXXXXXX 100644
26
}
26
--- a/tcg/tci/tcg-target.c.inc
27
+#endif /* !CONFIG_USER_ONLY */
27
+++ b/tcg/tci/tcg-target.c.inc
28
28
@@ -XXX,XX +XXX,XX @@ static void tcg_target_init(TCGContext *s)
29
static void openrisc_disas_set_info(CPUState *cpu, disassemble_info *info)
29
MAKE_64BIT_MASK(TCG_REG_R0, 64 / TCG_TARGET_REG_BITS);
30
{
30
31
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps openrisc_tcg_ops = {
31
s->reserved_regs = 0;
32
.tlb_fill = openrisc_cpu_tlb_fill,
32
+ tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP);
33
33
tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK);
34
#ifndef CONFIG_USER_ONLY
34
35
+ .has_work = openrisc_cpu_has_work,
35
/* The call arguments come first, followed by the temp storage. */
36
.cpu_exec_interrupt = openrisc_cpu_exec_interrupt,
37
.do_interrupt = openrisc_cpu_do_interrupt,
38
#endif /* !CONFIG_USER_ONLY */
39
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_class_init(ObjectClass *oc, void *data)
40
device_class_set_parent_reset(dc, openrisc_cpu_reset, &occ->parent_reset);
41
42
cc->class_by_name = openrisc_cpu_class_by_name;
43
- cc->has_work = openrisc_cpu_has_work;
44
cc->dump_state = openrisc_cpu_dump_state;
45
cc->set_pc = openrisc_cpu_set_pc;
46
cc->gdb_read_register = openrisc_cpu_gdb_read_register;
36
--
47
--
37
2.25.1
48
2.25.1
38
49
39
50
diff view generated by jsdifflib
1
This operation is critical to staying within the interpretation
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
loop longer, which avoids the overhead of setup and teardown for
3
many TBs.
4
2
5
The check in tcg_prologue_init is disabled because TCI does
3
We're moving the hook from CPUState to TCGCPUOps. TCGCPUOps is
6
want to use NULL to indicate exit, as opposed to branching to
4
a const structure, so to avoid creating multiple versions of
7
a real epilogue.
5
the same structure differing only in their has_work() handler,
6
introduce yet another indirection with a has_work() handler in
7
PowerPCCPUClass, and a ppc_cpu_has_work() method which dispatches
8
to it.
8
9
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-Id: <20210912172731.789788-21-f4bug@amsat.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
---
14
---
13
tcg/tci/tcg-target-con-set.h | 1 +
15
target/ppc/cpu-qom.h | 1 +
14
tcg/tci/tcg-target.h | 2 +-
16
target/ppc/cpu_init.c | 23 ++++++++++++++---------
15
tcg/tcg.c | 8 +++++++-
17
2 files changed, 15 insertions(+), 9 deletions(-)
16
tcg/tci.c | 19 +++++++++++++++++++
17
tcg/tci/tcg-target.c.inc | 16 ++++++++++++++++
18
5 files changed, 44 insertions(+), 2 deletions(-)
19
18
20
diff --git a/tcg/tci/tcg-target-con-set.h b/tcg/tci/tcg-target-con-set.h
19
diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
21
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
22
--- a/tcg/tci/tcg-target-con-set.h
21
--- a/target/ppc/cpu-qom.h
23
+++ b/tcg/tci/tcg-target-con-set.h
22
+++ b/target/ppc/cpu-qom.h
24
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ struct PowerPCCPUClass {
25
* Each operand should be a sequence of constraint letters as defined by
24
uint32_t flags;
26
* tcg-target-con-str.h; the constraint combination is inclusive or.
25
int bfd_mach;
27
*/
26
uint32_t l1_dcache_size, l1_icache_size;
28
+C_O0_I1(r)
27
+ bool (*has_work)(CPUState *cpu);
29
C_O0_I2(r, r)
28
#ifndef CONFIG_USER_ONLY
30
C_O0_I3(r, r, r)
29
unsigned int gdb_num_sprs;
31
C_O0_I4(r, r, r, r)
30
const char *gdb_spr_xml;
32
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
31
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
33
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
34
--- a/tcg/tci/tcg-target.h
33
--- a/target/ppc/cpu_init.c
35
+++ b/tcg/tci/tcg-target.h
34
+++ b/target/ppc/cpu_init.c
36
@@ -XXX,XX +XXX,XX @@
35
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
37
#define TCG_TARGET_HAS_muls2_i32 0
36
{
38
#define TCG_TARGET_HAS_muluh_i32 0
37
DeviceClass *dc = DEVICE_CLASS(oc);
39
#define TCG_TARGET_HAS_mulsh_i32 0
38
PowerPCCPUClass *pcc = POWERPC_CPU_CLASS(oc);
40
-#define TCG_TARGET_HAS_goto_ptr 0
39
- CPUClass *cc = CPU_CLASS(oc);
41
+#define TCG_TARGET_HAS_goto_ptr 1
40
42
#define TCG_TARGET_HAS_direct_jump 0
41
dc->fw_name = "PowerPC,POWER7";
43
#define TCG_TARGET_HAS_qemu_st8_i32 0
42
dc->desc = "POWER7";
44
43
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
45
diff --git a/tcg/tcg.c b/tcg/tcg.c
44
pcc->pcr_supported = PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
46
index XXXXXXX..XXXXXXX 100644
45
pcc->init_proc = init_proc_POWER7;
47
--- a/tcg/tcg.c
46
pcc->check_pow = check_pow_nocheck;
48
+++ b/tcg/tcg.c
47
- cc->has_work = cpu_has_work_POWER7;
49
@@ -XXX,XX +XXX,XX @@ void tcg_prologue_init(TCGContext *s)
48
+ pcc->has_work = cpu_has_work_POWER7;
50
}
49
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
51
#endif
50
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
52
51
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
53
- /* Assert that goto_ptr is implemented completely. */
52
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
54
+#ifndef CONFIG_TCG_INTERPRETER
53
{
55
+ /*
54
DeviceClass *dc = DEVICE_CLASS(oc);
56
+ * Assert that goto_ptr is implemented completely, setting an epilogue.
55
PowerPCCPUClass *pcc = POWERPC_CPU_CLASS(oc);
57
+ * For tci, we use NULL as the signal to return from the interpreter,
56
- CPUClass *cc = CPU_CLASS(oc);
58
+ * so skip this check.
57
59
+ */
58
dc->fw_name = "PowerPC,POWER8";
60
if (TCG_TARGET_HAS_goto_ptr) {
59
dc->desc = "POWER8";
61
tcg_debug_assert(tcg_code_gen_epilogue != NULL);
60
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
62
}
61
pcc->pcr_supported = PCR_COMPAT_2_07 | PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
63
+#endif
62
pcc->init_proc = init_proc_POWER8;
63
pcc->check_pow = check_pow_nocheck;
64
- cc->has_work = cpu_has_work_POWER8;
65
+ pcc->has_work = cpu_has_work_POWER8;
66
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
67
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
68
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
69
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
70
{
71
DeviceClass *dc = DEVICE_CLASS(oc);
72
PowerPCCPUClass *pcc = POWERPC_CPU_CLASS(oc);
73
- CPUClass *cc = CPU_CLASS(oc);
74
75
dc->fw_name = "PowerPC,POWER9";
76
dc->desc = "POWER9";
77
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
78
PCR_COMPAT_2_05;
79
pcc->init_proc = init_proc_POWER9;
80
pcc->check_pow = check_pow_nocheck;
81
- cc->has_work = cpu_has_work_POWER9;
82
+ pcc->has_work = cpu_has_work_POWER9;
83
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
84
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
85
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
86
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
87
{
88
DeviceClass *dc = DEVICE_CLASS(oc);
89
PowerPCCPUClass *pcc = POWERPC_CPU_CLASS(oc);
90
- CPUClass *cc = CPU_CLASS(oc);
91
92
dc->fw_name = "PowerPC,POWER10";
93
dc->desc = "POWER10";
94
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
95
PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
96
pcc->init_proc = init_proc_POWER10;
97
pcc->check_pow = check_pow_nocheck;
98
- cc->has_work = cpu_has_work_POWER10;
99
+ pcc->has_work = cpu_has_work_POWER10;
100
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
101
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
102
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
103
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_set_pc(CPUState *cs, vaddr value)
104
cpu->env.nip = value;
64
}
105
}
65
106
66
void tcg_func_start(TCGContext *s)
107
-static bool ppc_cpu_has_work(CPUState *cs)
67
diff --git a/tcg/tci.c b/tcg/tci.c
108
+static bool cpu_has_work_default(CPUState *cs)
68
index XXXXXXX..XXXXXXX 100644
109
{
69
--- a/tcg/tci.c
110
PowerPCCPU *cpu = POWERPC_CPU(cs);
70
+++ b/tcg/tci.c
111
CPUPPCState *env = &cpu->env;
71
@@ -XXX,XX +XXX,XX @@ static void tci_args_l(uint32_t insn, const void *tb_ptr, void **l0)
112
@@ -XXX,XX +XXX,XX @@ static bool ppc_cpu_has_work(CPUState *cs)
72
*l0 = diff ? (void *)tb_ptr + diff : NULL;
113
return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
73
}
114
}
74
115
75
+static void tci_args_r(uint32_t insn, TCGReg *r0)
116
+static bool ppc_cpu_has_work(CPUState *cs)
76
+{
117
+{
77
+ *r0 = extract32(insn, 8, 4);
118
+ PowerPCCPU *cpu = POWERPC_CPU(cs);
119
+ PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
120
+
121
+ return pcc->has_work(cs);
78
+}
122
+}
79
+
123
+
80
static void tci_args_nl(uint32_t insn, const void *tb_ptr,
124
static void ppc_cpu_reset(DeviceState *dev)
81
uint8_t *n0, void **l1)
82
{
125
{
83
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
126
CPUState *s = CPU(dev);
84
tb_ptr = *(void **)ptr;
127
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
85
break;
128
device_class_set_parent_unrealize(dc, ppc_cpu_unrealize,
86
129
&pcc->parent_unrealize);
87
+ case INDEX_op_goto_ptr:
130
pcc->pvr_match = ppc_pvr_match_default;
88
+ tci_args_r(insn, &r0);
131
+ pcc->has_work = cpu_has_work_default;
89
+ ptr = (void *)regs[r0];
132
device_class_set_props(dc, ppc_cpu_properties);
90
+ if (!ptr) {
133
91
+ return 0;
134
device_class_set_parent_reset(dc, ppc_cpu_reset, &pcc->parent_reset);
92
+ }
93
+ tb_ptr = ptr;
94
+ break;
95
+
96
case INDEX_op_qemu_ld_i32:
97
if (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS) {
98
tci_args_rrm(insn, &r0, &r1, &oi);
99
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
100
info->fprintf_func(info->stream, "%-12s %p", op_name, ptr);
101
break;
102
103
+ case INDEX_op_goto_ptr:
104
+ tci_args_r(insn, &r0);
105
+ info->fprintf_func(info->stream, "%-12s %s", op_name, str_r(r0));
106
+ break;
107
+
108
case INDEX_op_call:
109
tci_args_nl(insn, tb_ptr, &len, &ptr);
110
info->fprintf_func(info->stream, "%-12s %d, %p", op_name, len, ptr);
111
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
112
index XXXXXXX..XXXXXXX 100644
113
--- a/tcg/tci/tcg-target.c.inc
114
+++ b/tcg/tci/tcg-target.c.inc
115
@@ -XXX,XX +XXX,XX @@
116
static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
117
{
118
switch (op) {
119
+ case INDEX_op_goto_ptr:
120
+ return C_O0_I1(r);
121
+
122
case INDEX_op_ld8u_i32:
123
case INDEX_op_ld8s_i32:
124
case INDEX_op_ld16u_i32:
125
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_p(TCGContext *s, TCGOpcode op, void *p0)
126
tcg_out32(s, insn);
127
}
128
129
+static void tcg_out_op_r(TCGContext *s, TCGOpcode op, TCGReg r0)
130
+{
131
+ tcg_insn_unit insn = 0;
132
+
133
+ insn = deposit32(insn, 0, 8, op);
134
+ insn = deposit32(insn, 8, 4, r0);
135
+ tcg_out32(s, insn);
136
+}
137
+
138
static void tcg_out_op_v(TCGContext *s, TCGOpcode op)
139
{
140
tcg_out32(s, (uint8_t)op);
141
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
142
set_jmp_reset_offset(s, args[0]);
143
break;
144
145
+ case INDEX_op_goto_ptr:
146
+ tcg_out_op_r(s, opc, args[0]);
147
+ break;
148
+
149
case INDEX_op_br:
150
tcg_out_op_l(s, opc, arg_label(args[0]));
151
break;
152
--
135
--
153
2.25.1
136
2.25.1
154
137
155
138
diff view generated by jsdifflib
1
Wrap guest memory operations for tci like we do for cpu_ld*_data.
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
We cannot actually use the cpu_ldst.h interface without duplicating
3
Restrict PowerPCCPUClass::has_work() and ppc_cpu_has_work()
4
the memory trace operations performed within, which will already
4
- SysemuCPUOps::has_work() implementation - to TCG sysemu.
5
have been expanded into the tcg opcode stream.
5
6
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-Id: <20210912172731.789788-22-f4bug@amsat.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
---
10
---
11
tcg/tci.c | 10 ++++++----
11
target/ppc/cpu-qom.h | 4 +++-
12
1 file changed, 6 insertions(+), 4 deletions(-)
12
target/ppc/cpu_init.c | 24 ++++++++++++++++++------
13
13
2 files changed, 21 insertions(+), 7 deletions(-)
14
diff --git a/tcg/tci.c b/tcg/tci.c
14
15
diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/tcg/tci.c
17
--- a/target/ppc/cpu-qom.h
17
+++ b/tcg/tci.c
18
+++ b/target/ppc/cpu-qom.h
18
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
19
@@ -XXX,XX +XXX,XX @@ struct PowerPCCPUClass {
19
TCGMemOpIdx oi, const void *tb_ptr)
20
uint32_t flags;
20
{
21
int bfd_mach;
21
MemOp mop = get_memop(oi) & (MO_BSWAP | MO_SSIZE);
22
uint32_t l1_dcache_size, l1_icache_size;
22
-
23
- bool (*has_work)(CPUState *cpu);
23
-#ifdef CONFIG_SOFTMMU
24
#ifndef CONFIG_USER_ONLY
24
uintptr_t ra = (uintptr_t)tb_ptr;
25
+#ifdef CONFIG_TCG
25
26
+ bool (*has_work)(CPUState *cpu);
26
+#ifdef CONFIG_SOFTMMU
27
+#endif /* CONFIG_TCG */
27
switch (mop) {
28
unsigned int gdb_num_sprs;
28
case MO_UB:
29
const char *gdb_spr_xml;
29
return helper_ret_ldub_mmu(env, taddr, oi, ra);
30
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
31
void *haddr = g2h(env_cpu(env), taddr);
32
uint64_t ret;
33
34
+ set_helper_retaddr(ra);
35
switch (mop) {
36
case MO_UB:
37
ret = ldub_p(haddr);
38
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_qemu_ld(CPUArchState *env, target_ulong taddr,
39
default:
40
g_assert_not_reached();
41
}
42
+ clear_helper_retaddr();
43
return ret;
44
#endif
30
#endif
45
}
31
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
46
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
32
index XXXXXXX..XXXXXXX 100644
47
TCGMemOpIdx oi, const void *tb_ptr)
33
--- a/target/ppc/cpu_init.c
48
{
34
+++ b/target/ppc/cpu_init.c
49
MemOp mop = get_memop(oi) & (MO_BSWAP | MO_SSIZE);
35
@@ -XXX,XX +XXX,XX @@ static bool ppc_pvr_match_power7(PowerPCCPUClass *pcc, uint32_t pvr)
50
-
36
return false;
51
-#ifdef CONFIG_SOFTMMU
37
}
52
uintptr_t ra = (uintptr_t)tb_ptr;
38
53
39
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
54
+#ifdef CONFIG_SOFTMMU
40
static bool cpu_has_work_POWER7(CPUState *cs)
55
switch (mop) {
41
{
56
case MO_UB:
42
PowerPCCPU *cpu = POWERPC_CPU(cs);
57
helper_ret_stb_mmu(env, taddr, val, oi, ra);
43
@@ -XXX,XX +XXX,XX @@ static bool cpu_has_work_POWER7(CPUState *cs)
58
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
44
return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
59
#else
45
}
60
void *haddr = g2h(env_cpu(env), taddr);
46
}
61
47
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
62
+ set_helper_retaddr(ra);
48
63
switch (mop) {
49
POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
64
case MO_UB:
50
{
65
stb_p(haddr, val);
51
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
66
@@ -XXX,XX +XXX,XX @@ static void tci_qemu_st(CPUArchState *env, target_ulong taddr, uint64_t val,
52
pcc->pcr_supported = PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
67
default:
53
pcc->init_proc = init_proc_POWER7;
68
g_assert_not_reached();
54
pcc->check_pow = check_pow_nocheck;
69
}
55
- pcc->has_work = cpu_has_work_POWER7;
70
+ clear_helper_retaddr();
56
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
57
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
58
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
59
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER7)(ObjectClass *oc, void *data)
60
pcc->lpcr_pm = LPCR_P7_PECE0 | LPCR_P7_PECE1 | LPCR_P7_PECE2;
61
pcc->mmu_model = POWERPC_MMU_2_06;
62
#if defined(CONFIG_SOFTMMU)
63
+ pcc->has_work = cpu_has_work_POWER7;
64
pcc->hash64_opts = &ppc_hash64_opts_POWER7;
65
pcc->lrg_decr_bits = 32;
71
#endif
66
#endif
72
}
67
@@ -XXX,XX +XXX,XX @@ static bool ppc_pvr_match_power8(PowerPCCPUClass *pcc, uint32_t pvr)
68
return false;
69
}
70
71
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
72
static bool cpu_has_work_POWER8(CPUState *cs)
73
{
74
PowerPCCPU *cpu = POWERPC_CPU(cs);
75
@@ -XXX,XX +XXX,XX @@ static bool cpu_has_work_POWER8(CPUState *cs)
76
return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
77
}
78
}
79
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
80
81
POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
82
{
83
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
84
pcc->pcr_supported = PCR_COMPAT_2_07 | PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
85
pcc->init_proc = init_proc_POWER8;
86
pcc->check_pow = check_pow_nocheck;
87
- pcc->has_work = cpu_has_work_POWER8;
88
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
89
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
90
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
91
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
92
LPCR_P8_PECE3 | LPCR_P8_PECE4;
93
pcc->mmu_model = POWERPC_MMU_2_07;
94
#if defined(CONFIG_SOFTMMU)
95
+ pcc->has_work = cpu_has_work_POWER8;
96
pcc->hash64_opts = &ppc_hash64_opts_POWER7;
97
pcc->lrg_decr_bits = 32;
98
pcc->n_host_threads = 8;
99
@@ -XXX,XX +XXX,XX @@ static bool ppc_pvr_match_power9(PowerPCCPUClass *pcc, uint32_t pvr)
100
return false;
101
}
102
103
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
104
static bool cpu_has_work_POWER9(CPUState *cs)
105
{
106
PowerPCCPU *cpu = POWERPC_CPU(cs);
107
@@ -XXX,XX +XXX,XX @@ static bool cpu_has_work_POWER9(CPUState *cs)
108
return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
109
}
110
}
111
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
112
113
POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
114
{
115
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
116
PCR_COMPAT_2_05;
117
pcc->init_proc = init_proc_POWER9;
118
pcc->check_pow = check_pow_nocheck;
119
- pcc->has_work = cpu_has_work_POWER9;
120
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
121
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
122
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
123
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER9)(ObjectClass *oc, void *data)
124
pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
125
pcc->mmu_model = POWERPC_MMU_3_00;
126
#if defined(CONFIG_SOFTMMU)
127
+ pcc->has_work = cpu_has_work_POWER9;
128
/* segment page size remain the same */
129
pcc->hash64_opts = &ppc_hash64_opts_POWER7;
130
pcc->radix_page_info = &POWER9_radix_page_info;
131
@@ -XXX,XX +XXX,XX @@ static bool ppc_pvr_match_power10(PowerPCCPUClass *pcc, uint32_t pvr)
132
return false;
133
}
134
135
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
136
static bool cpu_has_work_POWER10(CPUState *cs)
137
{
138
PowerPCCPU *cpu = POWERPC_CPU(cs);
139
@@ -XXX,XX +XXX,XX @@ static bool cpu_has_work_POWER10(CPUState *cs)
140
return msr_ee && (cs->interrupt_request & CPU_INTERRUPT_HARD);
141
}
142
}
143
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
144
145
POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
146
{
147
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
148
PCR_COMPAT_2_06 | PCR_COMPAT_2_05;
149
pcc->init_proc = init_proc_POWER10;
150
pcc->check_pow = check_pow_nocheck;
151
- pcc->has_work = cpu_has_work_POWER10;
152
pcc->insns_flags = PPC_INSNS_BASE | PPC_ISEL | PPC_STRING | PPC_MFTB |
153
PPC_FLOAT | PPC_FLOAT_FSEL | PPC_FLOAT_FRES |
154
PPC_FLOAT_FSQRT | PPC_FLOAT_FRSQRTE |
155
@@ -XXX,XX +XXX,XX @@ POWERPC_FAMILY(POWER10)(ObjectClass *oc, void *data)
156
pcc->lpcr_pm = LPCR_PDEE | LPCR_HDEE | LPCR_EEE | LPCR_DEE | LPCR_OEE;
157
pcc->mmu_model = POWERPC_MMU_3_00;
158
#if defined(CONFIG_SOFTMMU)
159
+ pcc->has_work = cpu_has_work_POWER10;
160
/* segment page size remain the same */
161
pcc->hash64_opts = &ppc_hash64_opts_POWER7;
162
pcc->radix_page_info = &POWER10_radix_page_info;
163
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_set_pc(CPUState *cs, vaddr value)
164
cpu->env.nip = value;
165
}
166
167
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
168
static bool cpu_has_work_default(CPUState *cs)
169
{
170
PowerPCCPU *cpu = POWERPC_CPU(cs);
171
@@ -XXX,XX +XXX,XX @@ static bool ppc_cpu_has_work(CPUState *cs)
172
173
return pcc->has_work(cs);
174
}
175
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
176
177
static void ppc_cpu_reset(DeviceState *dev)
178
{
179
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps ppc_tcg_ops = {
180
.tlb_fill = ppc_cpu_tlb_fill,
181
182
#ifndef CONFIG_USER_ONLY
183
+ .has_work = ppc_cpu_has_work,
184
.cpu_exec_interrupt = ppc_cpu_exec_interrupt,
185
.do_interrupt = ppc_cpu_do_interrupt,
186
.cpu_exec_enter = ppc_cpu_exec_enter,
187
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
188
device_class_set_parent_unrealize(dc, ppc_cpu_unrealize,
189
&pcc->parent_unrealize);
190
pcc->pvr_match = ppc_pvr_match_default;
191
- pcc->has_work = cpu_has_work_default;
192
device_class_set_props(dc, ppc_cpu_properties);
193
194
device_class_set_parent_reset(dc, ppc_cpu_reset, &pcc->parent_reset);
195
196
cc->class_by_name = ppc_cpu_class_by_name;
197
- cc->has_work = ppc_cpu_has_work;
198
cc->dump_state = ppc_cpu_dump_state;
199
cc->set_pc = ppc_cpu_set_pc;
200
cc->gdb_read_register = ppc_cpu_gdb_read_register;
201
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
202
203
#ifdef CONFIG_TCG
204
cc->tcg_ops = &ppc_tcg_ops;
205
+#ifndef CONFIG_USER_ONLY
206
+ pcc->has_work = cpu_has_work_default;
207
+#endif
208
#endif /* CONFIG_TCG */
209
}
--
2.25.1

1
The current setting is much too pessimistic. Indicating only
the one or two registers that are actually assigned after a
call should avoid unnecessary movement between the register
array and the stack array.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tci/tcg-target.c.inc | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to TCG sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-23-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/riscv/cpu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
12
12
13
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
13
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tcg/tci/tcg-target.c.inc
15
--- a/target/riscv/cpu.c
16
+++ b/tcg/tci/tcg-target.c.inc
16
+++ b/target/riscv/cpu.c
17
@@ -XXX,XX +XXX,XX @@ static void tcg_target_init(TCGContext *s)
17
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_synchronize_from_tb(CPUState *cs,
18
tcg_target_available_regs[TCG_TYPE_I32] = BIT(TCG_TARGET_NB_REGS) - 1;
18
env->pc = tb->pc;
19
/* Registers available for 64 bit operations. */
19
}
20
tcg_target_available_regs[TCG_TYPE_I64] = BIT(TCG_TARGET_NB_REGS) - 1;
20
21
- /* TODO: Which registers should be set here? */
21
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
22
- tcg_target_call_clobber_regs = BIT(TCG_TARGET_NB_REGS) - 1;
22
static bool riscv_cpu_has_work(CPUState *cs)
23
+ /*
23
{
24
+ * The interpreter "registers" are in the local stack frame and
24
-#ifndef CONFIG_USER_ONLY
25
+ * cannot be clobbered by the called helper functions. However,
25
RISCVCPU *cpu = RISCV_CPU(cs);
26
+ * the interpreter assumes a 64-bit return value and assigns to
26
CPURISCVState *env = &cpu->env;
27
+ * the return value registers.
27
/*
28
+ */
28
@@ -XXX,XX +XXX,XX @@ static bool riscv_cpu_has_work(CPUState *cs)
29
+ tcg_target_call_clobber_regs =
29
* mode and delegation registers, but respect individual enables
30
+ MAKE_64BIT_MASK(TCG_REG_R0, 64 / TCG_TARGET_REG_BITS);
30
*/
31
31
return (env->mip & env->mie) != 0;
32
s->reserved_regs = 0;
32
-#else
33
tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK);
33
- return true;
34
-#endif
35
}
36
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
37
38
void restore_state_to_opc(CPURISCVState *env, TranslationBlock *tb,
39
target_ulong *data)
40
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps riscv_tcg_ops = {
41
.tlb_fill = riscv_cpu_tlb_fill,
42
43
#ifndef CONFIG_USER_ONLY
44
+ .has_work = riscv_cpu_has_work,
45
.cpu_exec_interrupt = riscv_cpu_exec_interrupt,
46
.do_interrupt = riscv_cpu_do_interrupt,
47
.do_transaction_failed = riscv_cpu_do_transaction_failed,
48
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
49
device_class_set_parent_reset(dc, riscv_cpu_reset, &mcc->parent_reset);
50
51
cc->class_by_name = riscv_cpu_class_by_name;
52
- cc->has_work = riscv_cpu_has_work;
53
cc->dump_state = riscv_cpu_dump_state;
54
cc->set_pc = riscv_cpu_set_pc;
55
cc->gdb_read_register = riscv_cpu_gdb_read_register;
--
2.25.1

1
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tci/tcg-target.h     | 12 +++++------
 tcg/tci.c                | 44 ++++++++++++++++++++++++++++++++++++++++
 tcg/tci/tcg-target.c.inc |  9 ++++++++
 3 files changed, 59 insertions(+), 6 deletions(-)

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-24-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/rx/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
10
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
13
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
11
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
12
--- a/tcg/tci/tcg-target.h
15
--- a/target/rx/cpu.c
13
+++ b/tcg/tci/tcg-target.h
16
+++ b/target/rx/cpu.c
14
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_synchronize_from_tb(CPUState *cs,
15
#define TCG_TARGET_HAS_eqv_i32 1
18
cpu->env.pc = tb->pc;
16
#define TCG_TARGET_HAS_nand_i32 1
19
}
17
#define TCG_TARGET_HAS_nor_i32 1
20
18
-#define TCG_TARGET_HAS_clz_i32 0
21
+#if !defined(CONFIG_USER_ONLY)
19
-#define TCG_TARGET_HAS_ctz_i32 0
22
static bool rx_cpu_has_work(CPUState *cs)
20
-#define TCG_TARGET_HAS_ctpop_i32 0
23
{
21
+#define TCG_TARGET_HAS_clz_i32 1
24
return cs->interrupt_request &
22
+#define TCG_TARGET_HAS_ctz_i32 1
25
(CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIR);
23
+#define TCG_TARGET_HAS_ctpop_i32 1
26
}
24
#define TCG_TARGET_HAS_neg_i32 1
27
+#endif /* !CONFIG_USER_ONLY */
25
#define TCG_TARGET_HAS_not_i32 1
28
26
#define TCG_TARGET_HAS_orc_i32 1
29
static void rx_cpu_reset(DeviceState *dev)
27
@@ -XXX,XX +XXX,XX @@
30
{
28
#define TCG_TARGET_HAS_eqv_i64 1
31
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps rx_tcg_ops = {
29
#define TCG_TARGET_HAS_nand_i64 1
32
.tlb_fill = rx_cpu_tlb_fill,
30
#define TCG_TARGET_HAS_nor_i64 1
33
31
-#define TCG_TARGET_HAS_clz_i64 0
34
#ifndef CONFIG_USER_ONLY
32
-#define TCG_TARGET_HAS_ctz_i64 0
35
+ .has_work = rx_cpu_has_work,
33
-#define TCG_TARGET_HAS_ctpop_i64 0
36
.cpu_exec_interrupt = rx_cpu_exec_interrupt,
34
+#define TCG_TARGET_HAS_clz_i64 1
37
.do_interrupt = rx_cpu_do_interrupt,
35
+#define TCG_TARGET_HAS_ctz_i64 1
38
#endif /* !CONFIG_USER_ONLY */
36
+#define TCG_TARGET_HAS_ctpop_i64 1
39
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_class_init(ObjectClass *klass, void *data)
37
#define TCG_TARGET_HAS_neg_i64 1
40
&rcc->parent_reset);
38
#define TCG_TARGET_HAS_not_i64 1
41
39
#define TCG_TARGET_HAS_orc_i64 1
42
cc->class_by_name = rx_cpu_class_by_name;
40
diff --git a/tcg/tci.c b/tcg/tci.c
43
- cc->has_work = rx_cpu_has_work;
41
index XXXXXXX..XXXXXXX 100644
44
cc->dump_state = rx_cpu_dump_state;
42
--- a/tcg/tci.c
45
cc->set_pc = rx_cpu_set_pc;
43
+++ b/tcg/tci.c
44
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
45
tci_args_rrr(insn, &r0, &r1, &r2);
46
regs[r0] = (uint32_t)regs[r1] % (uint32_t)regs[r2];
47
break;
48
+#if TCG_TARGET_HAS_clz_i32
49
+ case INDEX_op_clz_i32:
50
+ tci_args_rrr(insn, &r0, &r1, &r2);
51
+ tmp32 = regs[r1];
52
+ regs[r0] = tmp32 ? clz32(tmp32) : regs[r2];
53
+ break;
54
+#endif
55
+#if TCG_TARGET_HAS_ctz_i32
56
+ case INDEX_op_ctz_i32:
57
+ tci_args_rrr(insn, &r0, &r1, &r2);
58
+ tmp32 = regs[r1];
59
+ regs[r0] = tmp32 ? ctz32(tmp32) : regs[r2];
60
+ break;
61
+#endif
62
+#if TCG_TARGET_HAS_ctpop_i32
63
+ case INDEX_op_ctpop_i32:
64
+ tci_args_rr(insn, &r0, &r1);
65
+ regs[r0] = ctpop32(regs[r1]);
66
+ break;
67
+#endif
68
69
/* Shift/rotate operations (32 bit). */
70
71
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
72
tci_args_rrr(insn, &r0, &r1, &r2);
73
regs[r0] = (uint64_t)regs[r1] % (uint64_t)regs[r2];
74
break;
75
+#if TCG_TARGET_HAS_clz_i64
76
+ case INDEX_op_clz_i64:
77
+ tci_args_rrr(insn, &r0, &r1, &r2);
78
+ regs[r0] = regs[r1] ? clz64(regs[r1]) : regs[r2];
79
+ break;
80
+#endif
81
+#if TCG_TARGET_HAS_ctz_i64
82
+ case INDEX_op_ctz_i64:
83
+ tci_args_rrr(insn, &r0, &r1, &r2);
84
+ regs[r0] = regs[r1] ? ctz64(regs[r1]) : regs[r2];
85
+ break;
86
+#endif
87
+#if TCG_TARGET_HAS_ctpop_i64
88
+ case INDEX_op_ctpop_i64:
89
+ tci_args_rr(insn, &r0, &r1);
90
+ regs[r0] = ctpop64(regs[r1]);
91
+ break;
92
+#endif
93
94
/* Shift/rotate operations (64 bit). */
95
96
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
97
case INDEX_op_not_i64:
98
case INDEX_op_neg_i32:
99
case INDEX_op_neg_i64:
100
+ case INDEX_op_ctpop_i32:
101
+ case INDEX_op_ctpop_i64:
102
tci_args_rr(insn, &r0, &r1);
103
info->fprintf_func(info->stream, "%-12s %s, %s",
104
op_name, str_r(r0), str_r(r1));
105
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
106
case INDEX_op_rotl_i64:
107
case INDEX_op_rotr_i32:
108
case INDEX_op_rotr_i64:
109
+ case INDEX_op_clz_i32:
110
+ case INDEX_op_clz_i64:
111
+ case INDEX_op_ctz_i32:
112
+ case INDEX_op_ctz_i64:
113
tci_args_rrr(insn, &r0, &r1, &r2);
114
info->fprintf_func(info->stream, "%-12s %s, %s, %s",
115
op_name, str_r(r0), str_r(r1), str_r(r2));
116
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
117
index XXXXXXX..XXXXXXX 100644
118
--- a/tcg/tci/tcg-target.c.inc
119
+++ b/tcg/tci/tcg-target.c.inc
120
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
121
case INDEX_op_extract_i64:
122
case INDEX_op_sextract_i32:
123
case INDEX_op_sextract_i64:
124
+ case INDEX_op_ctpop_i32:
125
+ case INDEX_op_ctpop_i64:
126
return C_O1_I1(r, r);
127
128
case INDEX_op_st8_i32:
129
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
130
case INDEX_op_setcond_i64:
131
case INDEX_op_deposit_i32:
132
case INDEX_op_deposit_i64:
133
+ case INDEX_op_clz_i32:
134
+ case INDEX_op_clz_i64:
135
+ case INDEX_op_ctz_i32:
136
+ case INDEX_op_ctz_i64:
137
return C_O1_I2(r, r, r);
138
139
case INDEX_op_brcond_i32:
140
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
141
CASE_32_64(divu) /* Optional (TCG_TARGET_HAS_div_*). */
142
CASE_32_64(rem) /* Optional (TCG_TARGET_HAS_div_*). */
143
CASE_32_64(remu) /* Optional (TCG_TARGET_HAS_div_*). */
144
+ CASE_32_64(clz) /* Optional (TCG_TARGET_HAS_clz_*). */
145
+ CASE_32_64(ctz) /* Optional (TCG_TARGET_HAS_ctz_*). */
146
tcg_out_op_rrr(s, opc, args[0], args[1], args[2]);
147
break;
148
149
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
150
CASE_32_64(bswap16) /* Optional (TCG_TARGET_HAS_bswap16_*). */
151
CASE_32_64(bswap32) /* Optional (TCG_TARGET_HAS_bswap32_*). */
152
CASE_64(bswap64) /* Optional (TCG_TARGET_HAS_bswap64_i64). */
153
+ CASE_32_64(ctpop) /* Optional (TCG_TARGET_HAS_ctpop_*). */
154
tcg_out_op_rr(s, opc, args[0], args[1]);
155
break;
156
--
2.25.1

1
Add libffi as a build requirement for TCI.
Add libffi to the dockerfiles to satisfy that requirement.

Construct an ffi_cif structure for each unique typemask.
Record the result in a separate hash table for later lookup;
this allows helper_table to stay const.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tcg.c                                  | 58 +++++++++++++++++++
 tcg/meson.build                            |  8 ++-
 tests/docker/dockerfiles/alpine.docker     |  1 +
 tests/docker/dockerfiles/centos8.docker    |  1 +
 tests/docker/dockerfiles/debian10.docker   |  1 +
 .../dockerfiles/fedora-i386-cross.docker   |  1 +
 .../dockerfiles/fedora-win32-cross.docker  |  1 +
 .../dockerfiles/fedora-win64-cross.docker  |  1 +
 tests/docker/dockerfiles/fedora.docker     |  1 +
 tests/docker/dockerfiles/ubuntu.docker     |  1 +
 tests/docker/dockerfiles/ubuntu1804.docker |  1 +
 tests/docker/dockerfiles/ubuntu2004.docker |  1 +
 12 files changed, 75 insertions(+), 1 deletion(-)

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to TCG sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-25-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/s390x/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
25
12
26
diff --git a/tcg/tcg.c b/tcg/tcg.c
13
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
27
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
28
--- a/tcg/tcg.c
15
--- a/target/s390x/cpu.c
29
+++ b/tcg/tcg.c
16
+++ b/target/s390x/cpu.c
30
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_set_pc(CPUState *cs, vaddr value)
31
#include "exec/log.h"
18
cpu->env.psw.addr = value;
32
#include "tcg-internal.h"
19
}
33
20
34
+#ifdef CONFIG_TCG_INTERPRETER
21
+#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
35
+#include <ffi.h>
22
static bool s390_cpu_has_work(CPUState *cs)
36
+#endif
23
{
37
+
24
S390CPU *cpu = S390_CPU(cs);
38
/* Forward declarations for functions declared in tcg-target.c.inc and
25
@@ -XXX,XX +XXX,XX @@ static bool s390_cpu_has_work(CPUState *cs)
39
used here. */
26
40
static void tcg_target_init(TCGContext *s);
27
return s390_cpu_has_int(cpu);
41
@@ -XXX,XX +XXX,XX @@ static const TCGHelperInfo all_helpers[] = {
28
}
42
};
29
+#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
43
static GHashTable *helper_table;
30
44
31
/* S390CPUClass::reset() */
45
+#ifdef CONFIG_TCG_INTERPRETER
32
static void s390_cpu_reset(CPUState *s, cpu_reset_type type)
46
+static GHashTable *ffi_table;
33
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps s390_tcg_ops = {
47
+
34
.tlb_fill = s390_cpu_tlb_fill,
48
+static ffi_type * const typecode_to_ffi[8] = {
35
49
+ [dh_typecode_void] = &ffi_type_void,
36
#if !defined(CONFIG_USER_ONLY)
50
+ [dh_typecode_i32] = &ffi_type_uint32,
37
+ .has_work = s390_cpu_has_work,
51
+ [dh_typecode_s32] = &ffi_type_sint32,
38
.cpu_exec_interrupt = s390_cpu_exec_interrupt,
52
+ [dh_typecode_i64] = &ffi_type_uint64,
39
.do_interrupt = s390_cpu_do_interrupt,
53
+ [dh_typecode_s64] = &ffi_type_sint64,
40
.debug_excp_handler = s390x_cpu_debug_excp_handler,
54
+ [dh_typecode_ptr] = &ffi_type_pointer,
41
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_class_init(ObjectClass *oc, void *data)
55
+};
42
56
+#endif
43
scc->reset = s390_cpu_reset;
57
+
44
cc->class_by_name = s390_cpu_class_by_name,
58
static int indirect_reg_alloc_order[ARRAY_SIZE(tcg_target_reg_alloc_order)];
45
- cc->has_work = s390_cpu_has_work;
59
static void process_op_defs(TCGContext *s);
46
cc->dump_state = s390_cpu_dump_state;
60
static TCGTemp *tcg_global_reg_new_internal(TCGContext *s, TCGType type,
47
cc->set_pc = s390_cpu_set_pc;
61
@@ -XXX,XX +XXX,XX @@ static void tcg_context_init(unsigned max_cpus)
48
cc->gdb_read_register = s390_cpu_gdb_read_register;
62
(gpointer)&all_helpers[i]);
63
}
64
65
+#ifdef CONFIG_TCG_INTERPRETER
66
+ /* g_direct_hash/equal for direct comparisons on uint32_t. */
67
+ ffi_table = g_hash_table_new(NULL, NULL);
68
+ for (i = 0; i < ARRAY_SIZE(all_helpers); ++i) {
69
+ struct {
70
+ ffi_cif cif;
71
+ ffi_type *args[];
72
+ } *ca;
73
+ uint32_t typemask = all_helpers[i].typemask;
74
+ gpointer hash = (gpointer)(uintptr_t)typemask;
75
+ ffi_status status;
76
+ int nargs;
77
+
78
+ if (g_hash_table_lookup(ffi_table, hash)) {
79
+ continue;
80
+ }
81
+
82
+ /* Ignoring the return type, find the last non-zero field. */
83
+ nargs = 32 - clz32(typemask >> 3);
84
+ nargs = DIV_ROUND_UP(nargs, 3);
85
+
86
+ ca = g_malloc0(sizeof(*ca) + nargs * sizeof(ffi_type *));
87
+ ca->cif.rtype = typecode_to_ffi[typemask & 7];
88
+ ca->cif.nargs = nargs;
89
+
90
+ if (nargs != 0) {
91
+ ca->cif.arg_types = ca->args;
92
+ for (i = 0; i < nargs; ++i) {
93
+ int typecode = extract32(typemask, (i + 1) * 3, 3);
94
+ ca->args[i] = typecode_to_ffi[typecode];
95
+ }
96
+ }
97
+
98
+ status = ffi_prep_cif(&ca->cif, FFI_DEFAULT_ABI, nargs,
99
+ ca->cif.rtype, ca->cif.arg_types);
100
+ assert(status == FFI_OK);
101
+
102
+ g_hash_table_insert(ffi_table, hash, (gpointer)&ca->cif);
103
+ }
104
+#endif
105
+
106
tcg_target_init(s);
107
process_op_defs(s);
108
109
diff --git a/tcg/meson.build b/tcg/meson.build
110
index XXXXXXX..XXXXXXX 100644
111
--- a/tcg/meson.build
112
+++ b/tcg/meson.build
113
@@ -XXX,XX +XXX,XX @@ tcg_ss.add(files(
114
'tcg-op-gvec.c',
115
'tcg-op-vec.c',
116
))
117
-tcg_ss.add(when: 'CONFIG_TCG_INTERPRETER', if_true: files('tci.c'))
118
+
119
+if get_option('tcg_interpreter')
120
+ libffi = dependency('libffi', version: '>=3.0', required: true,
121
+ method: 'pkg-config', kwargs: static_kwargs)
122
+ specific_ss.add(libffi)
123
+ specific_ss.add(files('tci.c'))
124
+endif
125
126
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
127
diff --git a/tests/docker/dockerfiles/alpine.docker b/tests/docker/dockerfiles/alpine.docker
128
index XXXXXXX..XXXXXXX 100644
129
--- a/tests/docker/dockerfiles/alpine.docker
130
+++ b/tests/docker/dockerfiles/alpine.docker
131
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
132
    libaio-dev \
133
    libbpf-dev \
134
    libcap-ng-dev \
135
+    libffi-dev \
136
    libjpeg-turbo-dev \
137
    libnfs-dev \
138
    libpng-dev \
139
diff --git a/tests/docker/dockerfiles/centos8.docker b/tests/docker/dockerfiles/centos8.docker
140
index XXXXXXX..XXXXXXX 100644
141
--- a/tests/docker/dockerfiles/centos8.docker
142
+++ b/tests/docker/dockerfiles/centos8.docker
143
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
144
libbpf-devel \
145
libepoxy-devel \
146
libfdt-devel \
147
+ libffi-devel \
148
libgcrypt-devel \
149
lzo-devel \
150
make \
151
diff --git a/tests/docker/dockerfiles/debian10.docker b/tests/docker/dockerfiles/debian10.docker
152
index XXXXXXX..XXXXXXX 100644
153
--- a/tests/docker/dockerfiles/debian10.docker
154
+++ b/tests/docker/dockerfiles/debian10.docker
155
@@ -XXX,XX +XXX,XX @@ RUN apt update && \
156
gdb-multiarch \
157
gettext \
158
git \
159
+ libffi-dev \
160
libncurses5-dev \
161
ninja-build \
162
pkg-config \
163
diff --git a/tests/docker/dockerfiles/fedora-i386-cross.docker b/tests/docker/dockerfiles/fedora-i386-cross.docker
164
index XXXXXXX..XXXXXXX 100644
165
--- a/tests/docker/dockerfiles/fedora-i386-cross.docker
166
+++ b/tests/docker/dockerfiles/fedora-i386-cross.docker
167
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
168
findutils \
169
gcc \
170
git \
171
+ libffi-devel.i686 \
172
libtasn1-devel.i686 \
173
libzstd-devel.i686 \
174
make \
175
diff --git a/tests/docker/dockerfiles/fedora-win32-cross.docker b/tests/docker/dockerfiles/fedora-win32-cross.docker
176
index XXXXXXX..XXXXXXX 100644
177
--- a/tests/docker/dockerfiles/fedora-win32-cross.docker
178
+++ b/tests/docker/dockerfiles/fedora-win32-cross.docker
179
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
180
mingw32-gmp \
181
mingw32-gnutls \
182
mingw32-gtk3 \
183
+ mingw32-libffi \
184
mingw32-libjpeg-turbo \
185
mingw32-libpng \
186
mingw32-libtasn1 \
187
diff --git a/tests/docker/dockerfiles/fedora-win64-cross.docker b/tests/docker/dockerfiles/fedora-win64-cross.docker
188
index XXXXXXX..XXXXXXX 100644
189
--- a/tests/docker/dockerfiles/fedora-win64-cross.docker
190
+++ b/tests/docker/dockerfiles/fedora-win64-cross.docker
191
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
192
mingw64-glib2 \
193
mingw64-gmp \
194
mingw64-gtk3 \
195
+ mingw64-libffi \
196
mingw64-libjpeg-turbo \
197
mingw64-libpng \
198
mingw64-libtasn1 \
199
diff --git a/tests/docker/dockerfiles/fedora.docker b/tests/docker/dockerfiles/fedora.docker
200
index XXXXXXX..XXXXXXX 100644
201
--- a/tests/docker/dockerfiles/fedora.docker
202
+++ b/tests/docker/dockerfiles/fedora.docker
203
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
204
libepoxy-devel \
205
libfdt-devel \
206
libbpf-devel \
207
+ libffi-devel \
208
libiscsi-devel \
209
libjpeg-devel \
210
libpmem-devel \
211
diff --git a/tests/docker/dockerfiles/ubuntu.docker b/tests/docker/dockerfiles/ubuntu.docker
212
index XXXXXXX..XXXXXXX 100644
213
--- a/tests/docker/dockerfiles/ubuntu.docker
214
+++ b/tests/docker/dockerfiles/ubuntu.docker
215
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
216
libdrm-dev \
217
libepoxy-dev \
218
libfdt-dev \
219
+ libffi-dev \
220
libgbm-dev \
221
libgnutls28-dev \
222
libgtk-3-dev \
223
diff --git a/tests/docker/dockerfiles/ubuntu1804.docker b/tests/docker/dockerfiles/ubuntu1804.docker
224
index XXXXXXX..XXXXXXX 100644
225
--- a/tests/docker/dockerfiles/ubuntu1804.docker
226
+++ b/tests/docker/dockerfiles/ubuntu1804.docker
227
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES \
228
libdrm-dev \
229
libepoxy-dev \
230
libfdt-dev \
231
+ libffi-dev \
232
libgbm-dev \
233
libgtk-3-dev \
234
libibverbs-dev \
235
diff --git a/tests/docker/dockerfiles/ubuntu2004.docker b/tests/docker/dockerfiles/ubuntu2004.docker
236
index XXXXXXX..XXXXXXX 100644
237
--- a/tests/docker/dockerfiles/ubuntu2004.docker
238
+++ b/tests/docker/dockerfiles/ubuntu2004.docker
239
@@ -XXX,XX +XXX,XX @@ ENV PACKAGES flex bison \
240
libdrm-dev \
241
libepoxy-dev \
242
libfdt-dev \
243
+ libffi-dev \
244
libgbm-dev \
245
libgtk-3-dev \
246
libibverbs-dev \
247
--
49
--
248
2.25.1
50
2.25.1
249
51
250
52
diff view generated by jsdifflib
1
We already had mulu2_i32 for a 32-bit host; expand this to 64-bit
hosts as well. The muls2_i32 and the 64-bit opcodes are new.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tci/tcg-target.h     |  8 ++++----
 tcg/tci.c                | 35 +++++++++++++++++++++++++++------
 tcg/tci/tcg-target.c.inc | 16 ++++++++++------
 3 files changed, 43 insertions(+), 16 deletions(-)

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-26-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sh4/cpu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
12
12
13
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
13
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
14
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tcg/tci/tcg-target.h
15
--- a/target/sh4/cpu.c
16
+++ b/tcg/tci/tcg-target.h
16
+++ b/target/sh4/cpu.c
17
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static bool superh_io_recompile_replay_branch(CPUState *cs,
18
#define TCG_TARGET_HAS_orc_i32 1
18
}
19
#define TCG_TARGET_HAS_rot_i32 1
19
return false;
20
#define TCG_TARGET_HAS_movcond_i32 1
21
-#define TCG_TARGET_HAS_muls2_i32 0
+#define TCG_TARGET_HAS_muls2_i32 1
 #define TCG_TARGET_HAS_muluh_i32 0
 #define TCG_TARGET_HAS_mulsh_i32 0
 #define TCG_TARGET_HAS_goto_ptr 1
@@ -XXX,XX +XXX,XX @@
 #define TCG_TARGET_HAS_orc_i64 1
 #define TCG_TARGET_HAS_rot_i64 1
 #define TCG_TARGET_HAS_movcond_i64 1
-#define TCG_TARGET_HAS_muls2_i64 0
+#define TCG_TARGET_HAS_muls2_i64 1
 #define TCG_TARGET_HAS_add2_i32 0
 #define TCG_TARGET_HAS_sub2_i32 0
-#define TCG_TARGET_HAS_mulu2_i32 0
+#define TCG_TARGET_HAS_mulu2_i32 1
 #define TCG_TARGET_HAS_add2_i64 0
 #define TCG_TARGET_HAS_sub2_i64 0
-#define TCG_TARGET_HAS_mulu2_i64 0
+#define TCG_TARGET_HAS_mulu2_i64 1
 #define TCG_TARGET_HAS_muluh_i64 0
 #define TCG_TARGET_HAS_mulsh_i64 0
 #else
diff --git a/tcg/tci.c b/tcg/tci.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -XXX,XX +XXX,XX @@ __thread uintptr_t tci_tb_ptr;
 static void tci_write_reg64(tcg_target_ulong *regs, uint32_t high_index,
                             uint32_t low_index, uint64_t value)
 {
-    regs[low_index] = value;
+    regs[low_index] = (uint32_t)value;
     regs[high_index] = value >> 32;
 }
 
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
     *r4 = extract32(insn, 24, 4);
 }
 
-#if TCG_TARGET_REG_BITS == 32
 static void tci_args_rrrr(uint32_t insn,
                           TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGReg *r3)
 {
@@ -XXX,XX +XXX,XX @@ static void tci_args_rrrr(uint32_t insn,
     *r2 = extract32(insn, 16, 4);
     *r3 = extract32(insn, 20, 4);
 }
-#endif
 
 static void tci_args_rrrrrc(uint32_t insn, TCGReg *r0, TCGReg *r1,
                             TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGCond *c5)
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
             T2 = tci_uint64(regs[r5], regs[r4]);
             tci_write_reg64(regs, r1, r0, T1 - T2);
             break;
+#endif /* TCG_TARGET_REG_BITS == 32 */
+#if TCG_TARGET_HAS_mulu2_i32
         case INDEX_op_mulu2_i32:
             tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
-            tci_write_reg64(regs, r1, r0, (uint64_t)regs[r2] * regs[r3]);
+            tmp64 = (uint64_t)(uint32_t)regs[r2] * (uint32_t)regs[r3];
+            tci_write_reg64(regs, r1, r0, tmp64);
             break;
-#endif /* TCG_TARGET_REG_BITS == 32 */
+#endif
+#if TCG_TARGET_HAS_muls2_i32
+        case INDEX_op_muls2_i32:
+            tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
+            tmp64 = (int64_t)(int32_t)regs[r2] * (int32_t)regs[r3];
+            tci_write_reg64(regs, r1, r0, tmp64);
+            break;
+#endif
 #if TCG_TARGET_HAS_ext8s_i32 || TCG_TARGET_HAS_ext8s_i64
         CASE_32_64(ext8s)
             tci_args_rr(insn, &r0, &r1);
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
             regs[r0] = ctpop64(regs[r1]);
             break;
 #endif
+#if TCG_TARGET_HAS_mulu2_i64
+        case INDEX_op_mulu2_i64:
+            tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
+            mulu64(&regs[r0], &regs[r1], regs[r2], regs[r3]);
+            break;
+#endif
+#if TCG_TARGET_HAS_muls2_i64
+        case INDEX_op_muls2_i64:
+            tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
+            muls64(&regs[r0], &regs[r1], regs[r2], regs[r3]);
+            break;
+#endif
 
         /* Shift/rotate operations (64 bit). */
 
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
                            str_r(r3), str_r(r4), str_c(c));
        break;
 
-#if TCG_TARGET_REG_BITS == 32
    case INDEX_op_mulu2_i32:
+   case INDEX_op_mulu2_i64:
+   case INDEX_op_muls2_i32:
+   case INDEX_op_muls2_i64:
        tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
        info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s",
                           op_name, str_r(r0), str_r(r1),
                           str_r(r2), str_r(r3));
        break;
 
+#if TCG_TARGET_REG_BITS == 32
    case INDEX_op_add2_i32:
    case INDEX_op_sub2_i32:
        tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
        return C_O2_I4(r, r, r, r, r, r);
    case INDEX_op_brcond2_i32:
        return C_O0_I4(r, r, r, r);
-   case INDEX_op_mulu2_i32:
-       return C_O2_I2(r, r, r, r);
 #endif
 
+   case INDEX_op_mulu2_i32:
+   case INDEX_op_mulu2_i64:
+   case INDEX_op_muls2_i32:
+   case INDEX_op_muls2_i64:
+       return C_O2_I2(r, r, r, r);
+
    case INDEX_op_movcond_i32:
    case INDEX_op_movcond_i64:
    case INDEX_op_setcond2_i32:
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrrr(TCGContext *s, TCGOpcode op, TCGReg r0,
    tcg_out32(s, insn);
 }
 
-#if TCG_TARGET_REG_BITS == 32
 static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op,
                             TCGReg r0, TCGReg r1, TCGReg r2, TCGReg r3)
 {
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op,
    insn = deposit32(insn, 20, 4, r3);
    tcg_out32(s, insn);
 }
-#endif
 
 static void tcg_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
                               TCGReg r0, TCGReg r1, TCGReg r2,
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
                        args[0], args[1], args[2], args[3], args[4]);
        tcg_out_op_rl(s, INDEX_op_brcond_i32, TCG_REG_TMP, arg_label(args[5]));
        break;
 
-   case INDEX_op_mulu2_i32:
+#endif
+
+   CASE_32_64(mulu2)
+   CASE_32_64(muls2)
        tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], args[3]);
        break;
-#endif
 
    case INDEX_op_qemu_ld_i32:
    case INDEX_op_qemu_st_i32:
--
2.25.1

 }
-#endif
 
 static bool superh_cpu_has_work(CPUState *cs)
 {
     return cs->interrupt_request & CPU_INTERRUPT_HARD;
 }
+#endif /* !CONFIG_USER_ONLY */
+
 static void superh_cpu_reset(DeviceState *dev)
 {
     CPUState *s = CPU(dev);
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps superh_tcg_ops = {
     .tlb_fill = superh_cpu_tlb_fill,
 
 #ifndef CONFIG_USER_ONLY
+    .has_work = superh_cpu_has_work,
     .cpu_exec_interrupt = superh_cpu_exec_interrupt,
     .do_interrupt = superh_cpu_do_interrupt,
     .do_unaligned_access = superh_cpu_do_unaligned_access,
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_class_init(ObjectClass *oc, void *data)
     device_class_set_parent_reset(dc, superh_cpu_reset, &scc->parent_reset);
 
     cc->class_by_name = superh_cpu_class_by_name;
-    cc->has_work = superh_cpu_has_work;
     cc->dump_state = superh_cpu_dump_state;
     cc->set_pc = superh_cpu_set_pc;
     cc->gdb_read_register = superh_cpu_gdb_read_register;
--
2.25.1
diff view generated by jsdifflib
As noted by qemu-plugins.h, plugins can neither read nor write
guest registers.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/plugin-helpers.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/tcg/plugin-helpers.h b/accel/tcg/plugin-helpers.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/plugin-helpers.h
+++ b/accel/tcg/plugin-helpers.h
@@ -XXX,XX +XXX,XX @@
 #ifdef CONFIG_PLUGIN
-DEF_HELPER_2(plugin_vcpu_udata_cb, void, i32, ptr)
-DEF_HELPER_4(plugin_vcpu_mem_cb, void, i32, i32, i64, ptr)
+DEF_HELPER_FLAGS_2(plugin_vcpu_udata_cb, TCG_CALL_NO_RWG, void, i32, ptr)
+DEF_HELPER_FLAGS_4(plugin_vcpu_mem_cb, TCG_CALL_NO_RWG, void, i32, i32, i64, ptr)
 #endif
--
2.25.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The SPARC target only supports TCG acceleration. Remove the CONFIG_TCG
definition introduced by mistake in commit 78271684719 ("cpu: tcg_ops:
move to tcg-cpu-ops.h, keep a pointer in CPUClass").

Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-27-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sparc/cpu.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static const struct SysemuCPUOps sparc_sysemu_ops = {
 };
 #endif
 
-#ifdef CONFIG_TCG
 #include "hw/core/tcg-cpu-ops.h"
 
 static const struct TCGCPUOps sparc_tcg_ops = {
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps sparc_tcg_ops = {
     .do_unaligned_access = sparc_cpu_do_unaligned_access,
 #endif /* !CONFIG_USER_ONLY */
 };
-#endif /* CONFIG_TCG */
 
 static void sparc_cpu_class_init(ObjectClass *oc, void *data)
 {
--
2.25.1
We will shortly be interested in distinguishing pointers
from integers in the helper's declaration, as well as a
true void return. We currently have two parallel 1-bit
fields; merge them and expand to a 3-bit field.

Our current maximum is 7 helper arguments; with the return
value, that makes 8 * 3 = 24 bits used within the uint32_t typemask.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/helper-head.h | 37 +++++--------------
 include/exec/helper-tcg.h | 34 ++++++++---------
 target/hppa/helper.h | 3 --
 target/i386/ops_sse_header.h | 3 --
 target/m68k/helper.h | 1 -
 target/ppc/helper.h | 3 --
 tcg/tcg.c | 71 +++++++++++++++++++++---------------
 7 files changed, 67 insertions(+), 85 deletions(-)

diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/helper-head.h
+++ b/include/exec/helper-head.h
@@ -XXX,XX +XXX,XX @@
 #define dh_retvar_ptr tcgv_ptr_temp(retval)
 #define dh_retvar(t) glue(dh_retvar_, dh_alias(t))
 
-#define dh_is_64bit_void 0
-#define dh_is_64bit_noreturn 0
-#define dh_is_64bit_i32 0
-#define dh_is_64bit_i64 1
-#define dh_is_64bit_ptr (sizeof(void *) == 8)
-#define dh_is_64bit_cptr dh_is_64bit_ptr
-#define dh_is_64bit(t) glue(dh_is_64bit_, dh_alias(t))
-
-#define dh_is_signed_void 0
-#define dh_is_signed_noreturn 0
-#define dh_is_signed_i32 0
-#define dh_is_signed_s32 1
-#define dh_is_signed_i64 0
-#define dh_is_signed_s64 1
-#define dh_is_signed_f16 0
-#define dh_is_signed_f32 0
-#define dh_is_signed_f64 0
-#define dh_is_signed_tl 0
-#define dh_is_signed_int 1
-/* ??? This is highly specific to the host cpu. There are even special
-   extension instructions that may be required, e.g. ia64's addp4. But
-   for now we don't support any 64-bit targets with 32-bit pointers. */
-#define dh_is_signed_ptr 0
-#define dh_is_signed_cptr dh_is_signed_ptr
-#define dh_is_signed_env dh_is_signed_ptr
-#define dh_is_signed(t) dh_is_signed_##t
+#define dh_typecode_void 0
+#define dh_typecode_noreturn 0
+#define dh_typecode_i32 2
+#define dh_typecode_s32 3
+#define dh_typecode_i64 4
+#define dh_typecode_s64 5
+#define dh_typecode_ptr 6
+#define dh_typecode(t) glue(dh_typecode_, dh_alias(t))
 
 #define dh_callflag_i32 0
 #define dh_callflag_s32 0
@@ -XXX,XX +XXX,XX @@
 #define dh_callflag_noreturn TCG_CALL_NO_RETURN
 #define dh_callflag(t) glue(dh_callflag_, dh_alias(t))
 
-#define dh_sizemask(t, n) \
-    ((dh_is_64bit(t) << (n*2)) | (dh_is_signed(t) << (n*2+1)))
+#define dh_typemask(t, n) (dh_typecode(t) << (n * 3))
 
 #define dh_arg(t, n) \
   glue(glue(tcgv_, dh_alias(t)), _temp)(glue(arg, n))
diff --git a/include/exec/helper-tcg.h b/include/exec/helper-tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/helper-tcg.h
+++ b/include/exec/helper-tcg.h
@@ -XXX,XX +XXX,XX @@
 #define DEF_HELPER_FLAGS_0(NAME, FLAGS, ret) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) },
+    .typemask = dh_typemask(ret, 0) },
 
 #define DEF_HELPER_FLAGS_1(NAME, FLAGS, ret, t1) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) },
 
 #define DEF_HELPER_FLAGS_2(NAME, FLAGS, ret, t1, t2) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
-    | dh_sizemask(t2, 2) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) \
+    | dh_typemask(t2, 2) },
 
 #define DEF_HELPER_FLAGS_3(NAME, FLAGS, ret, t1, t2, t3) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
-    | dh_sizemask(t2, 2) | dh_sizemask(t3, 3) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) \
+    | dh_typemask(t2, 2) | dh_typemask(t3, 3) },
 
 #define DEF_HELPER_FLAGS_4(NAME, FLAGS, ret, t1, t2, t3, t4) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
-    | dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) \
+    | dh_typemask(t2, 2) | dh_typemask(t3, 3) | dh_typemask(t4, 4) },
 
 #define DEF_HELPER_FLAGS_5(NAME, FLAGS, ret, t1, t2, t3, t4, t5) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
-    | dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) \
-    | dh_sizemask(t5, 5) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) \
+    | dh_typemask(t2, 2) | dh_typemask(t3, 3) | dh_typemask(t4, 4) \
+    | dh_typemask(t5, 5) },
 
 #define DEF_HELPER_FLAGS_6(NAME, FLAGS, ret, t1, t2, t3, t4, t5, t6) \
   { .func = HELPER(NAME), .name = str(NAME), \
     .flags = FLAGS | dh_callflag(ret), \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
-    | dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) \
-    | dh_sizemask(t5, 5) | dh_sizemask(t6, 6) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) \
+    | dh_typemask(t2, 2) | dh_typemask(t3, 3) | dh_typemask(t4, 4) \
+    | dh_typemask(t5, 5) | dh_typemask(t6, 6) },
 
 #define DEF_HELPER_FLAGS_7(NAME, FLAGS, ret, t1, t2, t3, t4, t5, t6, t7) \
   { .func = HELPER(NAME), .name = str(NAME), .flags = FLAGS, \
-    .sizemask = dh_sizemask(ret, 0) | dh_sizemask(t1, 1) \
-    | dh_sizemask(t2, 2) | dh_sizemask(t3, 3) | dh_sizemask(t4, 4) \
-    | dh_sizemask(t5, 5) | dh_sizemask(t6, 6) | dh_sizemask(t7, 7) },
+    .typemask = dh_typemask(ret, 0) | dh_typemask(t1, 1) \
+    | dh_typemask(t2, 2) | dh_typemask(t3, 3) | dh_typemask(t4, 4) \
+    | dh_typemask(t5, 5) | dh_typemask(t6, 6) | dh_typemask(t7, 7) },
 
 #include "helper.h"
 #include "trace/generated-helpers.h"
diff --git a/target/hppa/helper.h b/target/hppa/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/helper.h
+++ b/target/hppa/helper.h
@@ -XXX,XX +XXX,XX @@
 #if TARGET_REGISTER_BITS == 64
 # define dh_alias_tr i64
-# define dh_is_64bit_tr 1
 #else
 # define dh_alias_tr i32
-# define dh_is_64bit_tr 0
 #endif
 #define dh_ctype_tr target_ureg
-#define dh_is_signed_tr 0
 
 DEF_HELPER_2(excp, noreturn, env, int)
 DEF_HELPER_FLAGS_2(tsv, TCG_CALL_NO_WG, void, env, tr)
diff --git a/target/i386/ops_sse_header.h b/target/i386/ops_sse_header.h
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/ops_sse_header.h
+++ b/target/i386/ops_sse_header.h
@@ -XXX,XX +XXX,XX @@
 #define dh_ctype_Reg Reg *
 #define dh_ctype_ZMMReg ZMMReg *
 #define dh_ctype_MMXReg MMXReg *
-#define dh_is_signed_Reg dh_is_signed_ptr
-#define dh_is_signed_ZMMReg dh_is_signed_ptr
-#define dh_is_signed_MMXReg dh_is_signed_ptr
 
 DEF_HELPER_3(glue(psrlw, SUFFIX), void, env, Reg, Reg)
 DEF_HELPER_3(glue(psraw, SUFFIX), void, env, Reg, Reg)
diff --git a/target/m68k/helper.h b/target/m68k/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/helper.h
+++ b/target/m68k/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_4(cas2l_parallel, void, env, i32, i32, i32)
 
 #define dh_alias_fp ptr
 #define dh_ctype_fp FPReg *
-#define dh_is_signed_fp dh_is_signed_ptr
 
 DEF_HELPER_3(exts32, void, env, fp, s32)
 DEF_HELPER_3(extf32, void, env, fp, f32)
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_1(ftsqrt, TCG_CALL_NO_RWG_SE, i32, i64)
 
 #define dh_alias_avr ptr
 #define dh_ctype_avr ppc_avr_t *
-#define dh_is_signed_avr dh_is_signed_ptr
 
 #define dh_alias_vsr ptr
 #define dh_ctype_vsr ppc_vsr_t *
-#define dh_is_signed_vsr dh_is_signed_ptr
 
 DEF_HELPER_3(vavgub, void, avr, avr, avr)
 DEF_HELPER_3(vavguh, void, avr, avr, avr)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(store_601_batu, void, env, i32, tl)
 
 #define dh_alias_fprp ptr
 #define dh_ctype_fprp ppc_fprp_t *
-#define dh_is_signed_fprp dh_is_signed_ptr
 
 DEF_HELPER_4(dadd, void, env, fprp, fprp, fprp)
 DEF_HELPER_4(daddq, void, env, fprp, fprp, fprp)
diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@ typedef struct TCGHelperInfo {
     void *func;
     const char *name;
     unsigned flags;
-    unsigned sizemask;
+    unsigned typemask;
 } TCGHelperInfo;
 
 #include "exec/helper-proto.h"
@@ -XXX,XX +XXX,XX @@ bool tcg_op_supported(TCGOpcode op)
 void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
 {
     int i, real_args, nb_rets, pi;
-    unsigned sizemask, flags;
+    unsigned typemask, flags;
     TCGHelperInfo *info;
     TCGOp *op;
 
     info = g_hash_table_lookup(helper_table, (gpointer)func);
     flags = info->flags;
-    sizemask = info->sizemask;
+    typemask = info->typemask;
 
 #ifdef CONFIG_PLUGIN
     /* detect non-plugin helpers */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
     && !defined(CONFIG_TCG_INTERPRETER)
     /* We have 64-bit values in one register, but need to pass as two
        separate parameters.  Split them.  */
-    int orig_sizemask = sizemask;
+    int orig_typemask = typemask;
     int orig_nargs = nargs;
     TCGv_i64 retl, reth;
     TCGTemp *split_args[MAX_OPC_PARAM];
 
     retl = NULL;
     reth = NULL;
-    if (sizemask != 0) {
-        for (i = real_args = 0; i < nargs; ++i) {
-            int is_64bit = sizemask & (1 << (i+1)*2);
-            if (is_64bit) {
-                TCGv_i64 orig = temp_tcgv_i64(args[i]);
-                TCGv_i32 h = tcg_temp_new_i32();
-                TCGv_i32 l = tcg_temp_new_i32();
-                tcg_gen_extr_i64_i32(l, h, orig);
-                split_args[real_args++] = tcgv_i32_temp(h);
-                split_args[real_args++] = tcgv_i32_temp(l);
-            } else {
-                split_args[real_args++] = args[i];
-            }
+    typemask = 0;
+    for (i = real_args = 0; i < nargs; ++i) {
+        int argtype = extract32(orig_typemask, (i + 1) * 3, 3);
+        bool is_64bit = (argtype & ~1) == dh_typecode_i64;
+
+        if (is_64bit) {
+            TCGv_i64 orig = temp_tcgv_i64(args[i]);
+            TCGv_i32 h = tcg_temp_new_i32();
+            TCGv_i32 l = tcg_temp_new_i32();
+            tcg_gen_extr_i64_i32(l, h, orig);
+            split_args[real_args++] = tcgv_i32_temp(h);
+            typemask |= dh_typecode_i32 << (real_args * 3);
+            split_args[real_args++] = tcgv_i32_temp(l);
+            typemask |= dh_typecode_i32 << (real_args * 3);
+        } else {
+            split_args[real_args++] = args[i];
+            typemask |= argtype << (real_args * 3);
         }
-        nargs = real_args;
-        args = split_args;
-        sizemask = 0;
     }
+    nargs = real_args;
+    args = split_args;
 #elif defined(TCG_TARGET_EXTEND_ARGS) && TCG_TARGET_REG_BITS == 64
     for (i = 0; i < nargs; ++i) {
-        int is_64bit = sizemask & (1 << (i+1)*2);
-        int is_signed = sizemask & (2 << (i+1)*2);
-        if (!is_64bit) {
+        int argtype = extract32(typemask, (i + 1) * 3, 3);
+        bool is_32bit = (argtype & ~1) == dh_typecode_i32;
+        bool is_signed = argtype & 1;
+
+        if (is_32bit) {
             TCGv_i64 temp = tcg_temp_new_i64();
             TCGv_i64 orig = temp_tcgv_i64(args[i]);
             if (is_signed) {
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
     if (ret != NULL) {
 #if defined(__sparc__) && !defined(__arch64__) \
     && !defined(CONFIG_TCG_INTERPRETER)
-        if (orig_sizemask & 1) {
+        if ((typemask & 6) == dh_typecode_i64) {
            /* The 32-bit ABI is going to return the 64-bit value in
               the %o0/%o1 register pair.  Prepare for this by using
               two return temporaries, and reassemble below.  */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
            nb_rets = 1;
         }
 #else
-        if (TCG_TARGET_REG_BITS < 64 && (sizemask & 1)) {
+        if (TCG_TARGET_REG_BITS < 64 && (typemask & 6) == dh_typecode_i64) {
 #ifdef HOST_WORDS_BIGENDIAN
             op->args[pi++] = temp_arg(ret + 1);
             op->args[pi++] = temp_arg(ret);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
 
     real_args = 0;
     for (i = 0; i < nargs; i++) {
-        int is_64bit = sizemask & (1 << (i+1)*2);
+        int argtype = extract32(typemask, (i + 1) * 3, 3);
+        bool is_64bit = (argtype & ~1) == dh_typecode_i64;
+
         if (TCG_TARGET_REG_BITS < 64 && is_64bit) {
 #ifdef TCG_TARGET_CALL_ALIGN_ARGS
             /* some targets want aligned 64 bit args */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
     && !defined(CONFIG_TCG_INTERPRETER)
     /* Free all of the parts we allocated above.  */
     for (i = real_args = 0; i < orig_nargs; ++i) {
-        int is_64bit = orig_sizemask & (1 << (i+1)*2);
+        int argtype = extract32(orig_typemask, (i + 1) * 3, 3);
+        bool is_64bit = (argtype & ~1) == dh_typecode_i64;
+
         if (is_64bit) {
             tcg_temp_free_internal(args[real_args++]);
             tcg_temp_free_internal(args[real_args++]);
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
             real_args++;
         }
     }
-    if (orig_sizemask & 1) {
+    if ((orig_typemask & 6) == dh_typecode_i64) {
        /* The 32-bit ABI returned two 32-bit pieces.  Re-assemble them.
           Note that describing these as TCGv_i64 eliminates an unnecessary
           zero-extension that tcg_gen_concat_i32_i64 would create.  */
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
     }
 #elif defined(TCG_TARGET_EXTEND_ARGS) && TCG_TARGET_REG_BITS == 64
     for (i = 0; i < nargs; ++i) {
-        int is_64bit = sizemask & (1 << (i+1)*2);
-        if (!is_64bit) {
+        int argtype = extract32(typemask, (i + 1) * 3, 3);
+        bool is_32bit = (argtype & ~1) == dh_typecode_i32;
+
+        if (is_32bit) {
             tcg_temp_free_internal(args[i]);
         }
     }
--
2.25.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-28-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sparc/cpu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_synchronize_from_tb(CPUState *cs,
     cpu->env.npc = tb->cs_base;
 }
 
+#if !defined(CONFIG_USER_ONLY)
 static bool sparc_cpu_has_work(CPUState *cs)
 {
     SPARCCPU *cpu = SPARC_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static bool sparc_cpu_has_work(CPUState *cs)
     return (cs->interrupt_request & CPU_INTERRUPT_HARD) &&
            cpu_interrupts_enabled(env);
 }
+#endif /* !CONFIG_USER_ONLY */
 
 static char *sparc_cpu_type_name(const char *cpu_model)
 {
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps sparc_tcg_ops = {
     .tlb_fill = sparc_cpu_tlb_fill,
 
 #ifndef CONFIG_USER_ONLY
+    .has_work = sparc_cpu_has_work,
     .cpu_exec_interrupt = sparc_cpu_exec_interrupt,
     .do_interrupt = sparc_cpu_do_interrupt,
     .do_transaction_failed = sparc_cpu_do_transaction_failed,
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_class_init(ObjectClass *oc, void *data)
 
     cc->class_by_name = sparc_cpu_class_by_name;
     cc->parse_features = sparc_cpu_parse_features;
-    cc->has_work = sparc_cpu_has_work;
     cc->dump_state = sparc_cpu_dump_state;
 #if !defined(TARGET_SPARC64) && !defined(CONFIG_USER_ONLY)
     cc->memory_rw_debug = sparc_cpu_memory_rw_debug;
--
2.25.1
As the only call-clobbered regs for TCI, these should
receive the least priority.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tci/tcg-target.c.inc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
 }
 
 static const int tcg_target_reg_alloc_order[] = {
-    TCG_REG_R0,
-    TCG_REG_R1,
     TCG_REG_R2,
     TCG_REG_R3,
     TCG_REG_R4,
@@ -XXX,XX +XXX,XX @@ static const int tcg_target_reg_alloc_order[] = {
     TCG_REG_R13,
     TCG_REG_R14,
     TCG_REG_R15,
+    TCG_REG_R1,
+    TCG_REG_R0,
 };
 
 #if MAX_OPC_PARAM_IARGS != 6
--
2.25.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-29-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/tricore/cpu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/cpu.c
+++ b/target/tricore/cpu.c
@@ -XXX,XX +XXX,XX @@ static void tricore_cpu_reset(DeviceState *dev)
     cpu_state_reset(env);
 }
 
+#if !defined(CONFIG_USER_ONLY)
 static bool tricore_cpu_has_work(CPUState *cs)
 {
     return true;
 }
+#endif /* !CONFIG_USER_ONLY */
 
 static void tricore_cpu_realizefn(DeviceState *dev, Error **errp)
 {
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps tricore_tcg_ops = {
     .initialize = tricore_tcg_init,
     .synchronize_from_tb = tricore_cpu_synchronize_from_tb,
     .tlb_fill = tricore_cpu_tlb_fill,
+#if !defined(CONFIG_USER_ONLY)
+    .has_work = tricore_cpu_has_work,
+#endif
 };
 
 static void tricore_cpu_class_init(ObjectClass *c, void *data)
@@ -XXX,XX +XXX,XX @@ static void tricore_cpu_class_init(ObjectClass *c, void *data)
 
     device_class_set_parent_reset(dc, tricore_cpu_reset, &mcc->parent_reset);
     cc->class_by_name = tricore_cpu_class_by_name;
-    cc->has_work = tricore_cpu_has_work;
 
     cc->gdb_read_register = tricore_cpu_gdb_read_register;
     cc->gdb_write_register = tricore_cpu_gdb_write_register;
--
2.25.1
Let the compiler decide on inlining.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/plugin-gen.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/plugin-gen.c b/accel/tcg/plugin-gen.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/plugin-gen.c
+++ b/accel/tcg/plugin-gen.c
@@ -XXX,XX +XXX,XX @@ static void gen_empty_mem_helper(void)
     tcg_temp_free_ptr(ptr);
 }
 
-static inline
-void gen_plugin_cb_start(enum plugin_gen_from from,
-                         enum plugin_gen_cb type, unsigned wr)
+static void gen_plugin_cb_start(enum plugin_gen_from from,
+                                enum plugin_gen_cb type, unsigned wr)
 {
     TCGOp *op;
 
@@ -XXX,XX +XXX,XX @@ static void gen_wrapped(enum plugin_gen_from from,
     tcg_gen_plugin_cb_end();
 }
 
-static inline void plugin_gen_empty_callback(enum plugin_gen_from from)
+static void plugin_gen_empty_callback(enum plugin_gen_from from)
 {
     switch (from) {
     case PLUGIN_GEN_AFTER_INSN:
@@ -XXX,XX +XXX,XX @@ static bool op_rw(const TCGOp *op, const struct qemu_plugin_dyn_cb *cb)
     return !!(cb->rw & (w + 1));
 }
 
-static inline
-void inject_cb_type(const GArray *cbs, TCGOp *begin_op, inject_fn inject,
-                    op_ok_fn ok)
+static void inject_cb_type(const GArray *cbs, TCGOp *begin_op,
+                           inject_fn inject, op_ok_fn ok)
 {
     TCGOp *end_op;
     TCGOp *op;
--
2.25.1

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Restrict has_work() to sysemu.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210912172731.789788-30-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/xtensa/cpu.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_set_pc(CPUState *cs, vaddr value)
     cpu->env.pc = value;
 }
 
+#ifndef CONFIG_USER_ONLY
+
 static bool xtensa_cpu_has_work(CPUState *cs)
 {
-#ifndef CONFIG_USER_ONLY
     XtensaCPU *cpu = XTENSA_CPU(cs);
 
     return !cpu->env.runstall && cpu->env.pending_irq_level;
-#else
-    return true;
-#endif
 }
 
-#ifdef CONFIG_USER_ONLY
+#else /* CONFIG_USER_ONLY*/
+
 static bool abi_call0;
 
 void xtensa_set_abi_call0(void)
@@ -XXX,XX +XXX,XX @@ bool xtensa_abi_call0(void)
 {
     return abi_call0;
 }
 
-#endif
+
+#endif /* CONFIG_USER_ONLY */
 
 static void xtensa_cpu_reset(DeviceState *dev)
 {
@@ -XXX,XX +XXX,XX @@ static const struct TCGCPUOps xtensa_tcg_ops = {
     .debug_excp_handler = xtensa_breakpoint_handler,
 
 #ifndef CONFIG_USER_ONLY
+    .has_work = xtensa_cpu_has_work,
     .cpu_exec_interrupt = xtensa_cpu_exec_interrupt,
     .do_interrupt = xtensa_cpu_do_interrupt,
     .do_transaction_failed = xtensa_cpu_do_transaction_failed,
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_class_init(ObjectClass *oc, void *data)
     device_class_set_parent_reset(dc, xtensa_cpu_reset, &xcc->parent_reset);
 
     cc->class_by_name = xtensa_cpu_class_by_name;
-    cc->has_work = xtensa_cpu_has_work;
     cc->dump_state = xtensa_cpu_dump_state;
     cc->set_pc = xtensa_cpu_set_pc;
     cc->gdb_read_register = xtensa_cpu_gdb_read_register;
--
2.25.1
Inline it into its one caller, tci_write_reg64.
Drop the asserts that are redundant with tcg_read_r.

Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/tci.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/tcg/tci.c b/tcg/tci.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -XXX,XX +XXX,XX @@
 
 __thread uintptr_t tci_tb_ptr;
 
-static void
-tci_write_reg(tcg_target_ulong *regs, TCGReg index, tcg_target_ulong value)

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

cpu_common_has_work() is the default has_work() implementation
and returns 'false'.

Make it explicit for the QTest / HAX / HVF / NVMM / Xen accelerators
and remove cpu_common_has_work().

Since there are no more implementations of SysemuCPUOps::has_work,
remove it along with the assertion in cpu_has_work().

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Paul Durrant <paul@xen.org>
Message-Id: <20210912172731.789788-31-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/hw/core/cpu.h | 2 --
 accel/hvf/hvf-accel-ops.c | 6 ++++++
 accel/qtest/qtest.c | 6 ++++++
 accel/xen/xen-all.c | 6 ++++++
 hw/core/cpu-common.c | 6 ------
 softmmu/cpus.c | 9 ++-------
 target/i386/hax/hax-accel-ops.c | 6 ++++++
 target/i386/nvmm/nvmm-accel-ops.c | 6 ++++++
 8 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -XXX,XX +XXX,XX @@ struct SysemuCPUOps;
 * instantiatable CPU type.
 * @parse_features: Callback to parse command line arguments.
 * @reset_dump_flags: #CPUDumpFlags to use for reset logging.
- * @has_work: Callback for checking if there is work to do.
 * @memory_rw_debug: Callback for GDB memory access.
 * @dump_state: Callback for dumping state.
 * @get_arch_id: Callback for getting architecture-dependent CPU ID.
@@ -XXX,XX +XXX,XX @@ struct CPUClass {
     void (*parse_features)(const char *typename, char *str, Error **errp);
 
     int reset_dump_flags;
-    bool (*has_work)(CPUState *cpu);
     int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
                            uint8_t *buf, int len, bool is_write);
     void (*dump_state)(CPUState *cpu, FILE *, int flags);
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -XXX,XX +XXX,XX @@ static void hvf_start_vcpu_thread(CPUState *cpu)
                        cpu, QEMU_THREAD_JOINABLE);
 }
 
+static bool hvf_cpu_has_work(CPUState *cpu)
+{
+    return false;
+}
+
 static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
 {
     AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
@@ -XXX,XX +XXX,XX @@ static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
     ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
     ops->synchronize_state = hvf_cpu_synchronize_state;
     ops->synchronize_pre_loadvm = hvf_cpu_synchronize_pre_loadvm;
+    ops->has_work = hvf_cpu_has_work;
 };
 static const TypeInfo hvf_accel_ops_type = {
     .name = ACCEL_OPS_NAME("hvf"),
diff --git a/accel/qtest/qtest.c b/accel/qtest/qtest.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/qtest/qtest.c
+++ b/accel/qtest/qtest.c
@@ -XXX,XX +XXX,XX @@ static const TypeInfo qtest_accel_type = {
 };
 module_obj(TYPE_QTEST_ACCEL);
 
+static bool qtest_cpu_has_work(CPUState *cpu)
+{
+    return false;
+}
+
 static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
 {
     AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
 
     ops->create_vcpu_thread = dummy_start_vcpu_thread;
     ops->get_virtual_clock = qtest_get_virtual_clock;
+    ops->has_work = qtest_cpu_has_work;
 };
 
 static const TypeInfo qtest_accel_ops_type = {
diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/xen/xen-all.c
+++ b/accel/xen/xen-all.c
@@ -XXX,XX +XXX,XX @@ static const TypeInfo xen_accel_type = {
.class_init = xen_accel_class_init,
101
};
102
103
+static bool xen_cpu_has_work(CPUState *cpu)
104
+{
105
+ return false;
106
+}
107
+
108
static void xen_accel_ops_class_init(ObjectClass *oc, void *data)
109
{
110
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
111
112
ops->create_vcpu_thread = dummy_start_vcpu_thread;
113
+ ops->has_work = xen_cpu_has_work;
114
}
115
116
static const TypeInfo xen_accel_ops_type = {
117
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
118
index XXXXXXX..XXXXXXX 100644
119
--- a/hw/core/cpu-common.c
120
+++ b/hw/core/cpu-common.c
121
@@ -XXX,XX +XXX,XX @@ static void cpu_common_reset(DeviceState *dev)
122
}
123
}
124
125
-static bool cpu_common_has_work(CPUState *cs)
21
-{
126
-{
22
- tci_assert(index < TCG_TARGET_NB_REGS);
127
- return false;
23
- tci_assert(index != TCG_AREG0);
24
- tci_assert(index != TCG_REG_CALL_STACK);
25
- regs[index] = value;
26
-}
128
-}
27
-
129
-
28
static void tci_write_reg64(tcg_target_ulong *regs, uint32_t high_index,
130
ObjectClass *cpu_class_by_name(const char *typename, const char *cpu_model)
29
uint32_t low_index, uint64_t value)
131
{
30
{
132
CPUClass *cc = CPU_CLASS(object_class_by_name(typename));
31
- tci_write_reg(regs, low_index, value);
133
@@ -XXX,XX +XXX,XX @@ static void cpu_class_init(ObjectClass *klass, void *data)
32
- tci_write_reg(regs, high_index, value >> 32);
134
33
+ regs[low_index] = value;
135
k->parse_features = cpu_common_parse_features;
34
+ regs[high_index] = value >> 32;
136
k->get_arch_id = cpu_common_get_arch_id;
35
}
137
- k->has_work = cpu_common_has_work;
36
138
k->gdb_read_register = cpu_common_gdb_read_register;
37
/* Create a 64 bit value from two 32 bit values. */
139
k->gdb_write_register = cpu_common_gdb_write_register;
140
set_bit(DEVICE_CATEGORY_CPU, dc->categories);
141
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
142
index XXXXXXX..XXXXXXX 100644
143
--- a/softmmu/cpus.c
144
+++ b/softmmu/cpus.c
145
@@ -XXX,XX +XXX,XX @@ void cpu_interrupt(CPUState *cpu, int mask)
146
147
bool cpu_has_work(CPUState *cpu)
148
{
149
- CPUClass *cc = CPU_GET_CLASS(cpu);
150
-
151
- if (cpus_accel->has_work) {
152
- return cpus_accel->has_work(cpu);
153
- }
154
- g_assert(cc->has_work);
155
- return cc->has_work(cpu);
156
+ g_assert(cpus_accel->has_work);
157
+ return cpus_accel->has_work(cpu);
158
}
159
160
static int do_vm_stop(RunState state, bool send_stop)
161
diff --git a/target/i386/hax/hax-accel-ops.c b/target/i386/hax/hax-accel-ops.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/target/i386/hax/hax-accel-ops.c
164
+++ b/target/i386/hax/hax-accel-ops.c
165
@@ -XXX,XX +XXX,XX @@ static void hax_start_vcpu_thread(CPUState *cpu)
166
#endif
167
}
168
169
+static bool hax_cpu_has_work(CPUState *cpu)
170
+{
171
+ return false;
172
+}
173
+
174
static void hax_accel_ops_class_init(ObjectClass *oc, void *data)
175
{
176
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
177
@@ -XXX,XX +XXX,XX @@ static void hax_accel_ops_class_init(ObjectClass *oc, void *data)
178
ops->synchronize_post_init = hax_cpu_synchronize_post_init;
179
ops->synchronize_state = hax_cpu_synchronize_state;
180
ops->synchronize_pre_loadvm = hax_cpu_synchronize_pre_loadvm;
181
+ ops->has_work = hax_cpu_has_work;
182
}
183
184
static const TypeInfo hax_accel_ops_type = {
185
diff --git a/target/i386/nvmm/nvmm-accel-ops.c b/target/i386/nvmm/nvmm-accel-ops.c
186
index XXXXXXX..XXXXXXX 100644
187
--- a/target/i386/nvmm/nvmm-accel-ops.c
188
+++ b/target/i386/nvmm/nvmm-accel-ops.c
189
@@ -XXX,XX +XXX,XX @@ static void nvmm_kick_vcpu_thread(CPUState *cpu)
190
cpus_kick_thread(cpu);
191
}
192
193
+static bool nvmm_cpu_has_work(CPUState *cpu)
194
+{
195
+ return false;
196
+}
197
+
198
static void nvmm_accel_ops_class_init(ObjectClass *oc, void *data)
199
{
200
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
201
@@ -XXX,XX +XXX,XX @@ static void nvmm_accel_ops_class_init(ObjectClass *oc, void *data)
202
ops->synchronize_post_init = nvmm_cpu_synchronize_post_init;
203
ops->synchronize_state = nvmm_cpu_synchronize_state;
204
ops->synchronize_pre_loadvm = nvmm_cpu_synchronize_pre_loadvm;
205
+ ops->has_work = nvmm_cpu_has_work;
206
}
207
208
static const TypeInfo nvmm_accel_ops_type = {
38
--
209
--
39
2.25.1
210
2.25.1
40
211
41
212
diff view generated by jsdifflib
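The dispatch this patch converges on can be sketched outside the QEMU tree. The structures below are simplified stand-ins for QEMU's CPUState and AccelOpsClass (the real definitions live in include/hw/core/cpu.h and the accel headers), and irq_cpu_has_work is a hypothetical non-trivial implementation for contrast with the stub accelerators:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for QEMU's CPUState and AccelOpsClass. */
typedef struct CPUState {
    int interrupt_request;
} CPUState;

typedef struct AccelOpsClass {
    bool (*has_work)(CPUState *cpu);   /* now mandatory per accelerator */
} AccelOpsClass;

/* qtest/xen/hax/nvmm-style stub: the vCPU never has work to report. */
bool dummy_cpu_has_work(CPUState *cpu)
{
    (void)cpu;
    return false;
}

/* Hypothetical accelerator that reports work when an interrupt is pending. */
bool irq_cpu_has_work(CPUState *cpu)
{
    return cpu->interrupt_request != 0;
}

/* Mirrors the reworked softmmu/cpus.c:cpu_has_work(): with the CPUClass
 * fallback gone, the accelerator hook must always be present. */
bool cpu_has_work(const AccelOpsClass *ops, CPUState *cpu)
{
    assert(ops->has_work != NULL);
    return ops->has_work(cpu);
}
```

A caller only ever consults the accelerator's ops, never a per-CPU-class callback, which is what lets the patch delete CPUClass::has_work entirely.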
Let the compiler decide about inlining.
Remove tcg_out_ext8s and tcg_out_ext16s as unused.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/mips/tcg-target.c.inc | 76 ++++++++++++++-------------------------
 1 file changed, 27 insertions(+), 49 deletions(-)

diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static bool patch_reloc(tcg_insn_unit *code_ptr, int type,
 #endif
 
-static inline bool is_p2m1(tcg_target_long val)
+static bool is_p2m1(tcg_target_long val)
 {
     return val && ((val + 1) & val) == 0;
 }
@@ -XXX,XX +XXX,XX @@ typedef enum {
 /*
  * Type reg
  */
-static inline void tcg_out_opc_reg(TCGContext *s, MIPSInsn opc,
-                                   TCGReg rd, TCGReg rs, TCGReg rt)
+static void tcg_out_opc_reg(TCGContext *s, MIPSInsn opc,
+                            TCGReg rd, TCGReg rs, TCGReg rt)
 {
     int32_t inst;
 
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_opc_reg(TCGContext *s, MIPSInsn opc,
 /*
  * Type immediate
  */
-static inline void tcg_out_opc_imm(TCGContext *s, MIPSInsn opc,
-                                   TCGReg rt, TCGReg rs, TCGArg imm)
+static void tcg_out_opc_imm(TCGContext *s, MIPSInsn opc,
+                            TCGReg rt, TCGReg rs, TCGArg imm)
 {
     int32_t inst;
 
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_opc_imm(TCGContext *s, MIPSInsn opc,
 /*
  * Type bitfield
  */
-static inline void tcg_out_opc_bf(TCGContext *s, MIPSInsn opc, TCGReg rt,
-                                  TCGReg rs, int msb, int lsb)
+static void tcg_out_opc_bf(TCGContext *s, MIPSInsn opc, TCGReg rt,
+                           TCGReg rs, int msb, int lsb)
 {
     int32_t inst;
 
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_opc_bf(TCGContext *s, MIPSInsn opc, TCGReg rt,
     tcg_out32(s, inst);
 }
 
-static inline void tcg_out_opc_bf64(TCGContext *s, MIPSInsn opc, MIPSInsn opm,
-                                    MIPSInsn oph, TCGReg rt, TCGReg rs,
+static void tcg_out_opc_bf64(TCGContext *s, MIPSInsn opc, MIPSInsn opm,
+                             MIPSInsn oph, TCGReg rt, TCGReg rs,
                              int msb, int lsb)
 {
     if (lsb >= 32) {
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_opc_bf64(TCGContext *s, MIPSInsn opc, MIPSInsn opm,
 /*
  * Type branch
  */
-static inline void tcg_out_opc_br(TCGContext *s, MIPSInsn opc,
-                                  TCGReg rt, TCGReg rs)
+static void tcg_out_opc_br(TCGContext *s, MIPSInsn opc, TCGReg rt, TCGReg rs)
 {
     tcg_out_opc_imm(s, opc, rt, rs, 0);
 }
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_opc_br(TCGContext *s, MIPSInsn opc,
 /*
  * Type sa
  */
-static inline void tcg_out_opc_sa(TCGContext *s, MIPSInsn opc,
-                                  TCGReg rd, TCGReg rt, TCGArg sa)
+static void tcg_out_opc_sa(TCGContext *s, MIPSInsn opc,
+                           TCGReg rd, TCGReg rt, TCGArg sa)
 {
     int32_t inst;
 
@@ -XXX,XX +XXX,XX @@ static bool tcg_out_opc_jmp(TCGContext *s, MIPSInsn opc, const void *target)
     return true;
 }
 
-static inline void tcg_out_nop(TCGContext *s)
+static void tcg_out_nop(TCGContext *s)
 {
     tcg_out32(s, 0);
 }
 
-static inline void tcg_out_dsll(TCGContext *s, TCGReg rd, TCGReg rt, TCGArg sa)
+static void tcg_out_dsll(TCGContext *s, TCGReg rd, TCGReg rt, TCGArg sa)
 {
     tcg_out_opc_sa64(s, OPC_DSLL, OPC_DSLL32, rd, rt, sa);
 }
 
-static inline void tcg_out_dsrl(TCGContext *s, TCGReg rd, TCGReg rt, TCGArg sa)
+static void tcg_out_dsrl(TCGContext *s, TCGReg rd, TCGReg rt, TCGArg sa)
 {
     tcg_out_opc_sa64(s, OPC_DSRL, OPC_DSRL32, rd, rt, sa);
 }
 
-static inline void tcg_out_dsra(TCGContext *s, TCGReg rd, TCGReg rt, TCGArg sa)
+static void tcg_out_dsra(TCGContext *s, TCGReg rd, TCGReg rt, TCGArg sa)
 {
     tcg_out_opc_sa64(s, OPC_DSRA, OPC_DSRA32, rd, rt, sa);
 }
 
-static inline bool tcg_out_mov(TCGContext *s, TCGType type,
-                               TCGReg ret, TCGReg arg)
+static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg)
 {
     /* Simple reg-reg move, optimising out the 'do nothing' case */
     if (ret != arg) {
@@ -XXX,XX +XXX,XX @@ static void tcg_out_bswap64(TCGContext *s, TCGReg ret, TCGReg arg)
     }
 }
 
-static inline void tcg_out_ext8s(TCGContext *s, TCGReg ret, TCGReg arg)
-{
-    if (use_mips32r2_instructions) {
-        tcg_out_opc_reg(s, OPC_SEB, ret, 0, arg);
-    } else {
-        tcg_out_opc_sa(s, OPC_SLL, ret, arg, 24);
-        tcg_out_opc_sa(s, OPC_SRA, ret, ret, 24);
-    }
-}
-
-static inline void tcg_out_ext16s(TCGContext *s, TCGReg ret, TCGReg arg)
-{
-    if (use_mips32r2_instructions) {
-        tcg_out_opc_reg(s, OPC_SEH, ret, 0, arg);
-    } else {
-        tcg_out_opc_sa(s, OPC_SLL, ret, arg, 16);
-        tcg_out_opc_sa(s, OPC_SRA, ret, ret, 16);
-    }
-}
-
-static inline void tcg_out_ext32u(TCGContext *s, TCGReg ret, TCGReg arg)
+static void tcg_out_ext32u(TCGContext *s, TCGReg ret, TCGReg arg)
 {
     if (use_mips32r2_instructions) {
         tcg_out_opc_bf(s, OPC_DEXT, ret, arg, 31, 0);
@@ -XXX,XX +XXX,XX @@ static void tcg_out_ldst(TCGContext *s, MIPSInsn opc, TCGReg data,
     tcg_out_opc_imm(s, opc, data, addr, lo);
 }
 
-static inline void tcg_out_ld(TCGContext *s, TCGType type, TCGReg arg,
-                              TCGReg arg1, intptr_t arg2)
+static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg arg,
+                       TCGReg arg1, intptr_t arg2)
 {
     MIPSInsn opc = OPC_LD;
     if (TCG_TARGET_REG_BITS == 32 || type == TCG_TYPE_I32) {
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_ld(TCGContext *s, TCGType type, TCGReg arg,
     tcg_out_ldst(s, opc, arg, arg1, arg2);
 }
 
-static inline void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
-                              TCGReg arg1, intptr_t arg2)
+static void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
+                       TCGReg arg1, intptr_t arg2)
 {
     MIPSInsn opc = OPC_SD;
     if (TCG_TARGET_REG_BITS == 32 || type == TCG_TYPE_I32) {
@@ -XXX,XX +XXX,XX @@ static inline void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
     tcg_out_ldst(s, opc, arg, arg1, arg2);
 }
 
-static inline bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
-                               TCGReg base, intptr_t ofs)
+static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
+                        TCGReg base, intptr_t ofs)
 {
     if (val == 0) {
         tcg_out_st(s, type, TCG_REG_ZERO, base, ofs);
@@ -XXX,XX +XXX,XX @@ static void tcg_out_clz(TCGContext *s, MIPSInsn opcv2, MIPSInsn opcv6,
     }
 }
 
-static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
-                              const TCGArg args[TCG_MAX_OP_ARGS],
-                              const int const_args[TCG_MAX_OP_ARGS])
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+                       const TCGArg args[TCG_MAX_OP_ARGS],
+                       const int const_args[TCG_MAX_OP_ARGS])
 {
     MIPSInsn i1, i2;
     TCGArg a0, a1, a2;
--
2.25.1
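The one predicate the patch keeps (minus its `inline` marker) is worth unpacking: is_p2m1() tests whether a value is a power of two minus one, i.e. a contiguous mask of low bits, which MIPS can handle with extract instructions. A standalone sketch of the same bit trick (plain `long` here instead of tcg_target_long):

```c
#include <assert.h>
#include <stdbool.h>

/* True when val is 2**n - 1 for some n, i.e. a contiguous low-bit mask
 * such as 1, 3, 7 or 0xffff (and also -1, all bits set). Adding 1 to
 * such a value carries through every set bit, so (val + 1) & val == 0;
 * the val && ... guard rejects 0. */
bool is_p2m1(long val)
{
    return val && ((val + 1) & val) == 0;
}
```

For example, 7 + 1 = 8 shares no bits with 7, while 6 + 1 = 7 still overlaps 6, so 7 qualifies and 6 does not.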
Weaning off of unique alignment requirements, so allow JAL
to not reach the target. TCG_TMP1 is always available for
use as a scratch because it is clobbered by the subroutine
being called.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/mips/tcg-target.c.inc | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -XXX,XX +XXX,XX @@ static void tcg_out_bswap16(TCGContext *s, TCGReg ret, TCGReg arg, int flags)
 
 static void tcg_out_bswap_subr(TCGContext *s, const tcg_insn_unit *sub)
 {
-    bool ok = tcg_out_opc_jmp(s, OPC_JAL, sub);
-    tcg_debug_assert(ok);
+    if (!tcg_out_opc_jmp(s, OPC_JAL, sub)) {
+        tcg_out_movi(s, TCG_TYPE_PTR, TCG_TMP1, (uintptr_t)sub);
+        tcg_out_opc_reg(s, OPC_JALR, TCG_REG_RA, TCG_TMP1, 0);
+    }
 }
 
 static void tcg_out_bswap32(TCGContext *s, TCGReg ret, TCGReg arg, int flags)
--
2.25.1
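The reason JAL can fail here: MIPS J/JAL encode a 26-bit instruction index, so they can only reach targets in the same 256 MiB-aligned region as the instruction in the delay slot. The region test can be sketched as below; jal_can_reach is a hypothetical helper mirroring the check performed by tcg_out_opc_jmp, not QEMU's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* J/JAL supply the low 28 bits (26-bit index << 2) of the address of
 * the instruction after the branch (the delay slot); all higher bits
 * must already match between source and destination. */
bool jal_can_reach(uint64_t pc, uint64_t target)
{
    uint64_t from = pc + 4;   /* address of the delay slot */
    return ((from ^ target) & ~(uint64_t)0x0fffffff) == 0;
}
```

When the test fails, the fallback added by this patch materializes the address in TCG_TMP1 and emits JALR instead.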
Only use indirect jumps. Finish weaning away from the
unique alignment requirements for code_gen_buffer.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/mips/tcg-target.h     | 12 +++++-------
 tcg/mips/tcg-target.c.inc | 23 +++++------------------
 2 files changed, 10 insertions(+), 25 deletions(-)

diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/mips/tcg-target.h
+++ b/tcg/mips/tcg-target.h
@@ -XXX,XX +XXX,XX @@
 #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16
 #define TCG_TARGET_NB_REGS 32
 
-/*
- * We have a 256MB branch region, but leave room to make sure the
- * main executable is also within that region.
- */
-#define MAX_CODE_GEN_BUFFER_SIZE (128 * MiB)
+#define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
29
+
25
30
extern TCGContext tcg_init_ctx;
26
typedef enum {
31
extern TCGContext **tcg_ctxs;
27
TCG_REG_ZERO = 0,
32
extern unsigned int tcg_cur_ctxs;
28
@@ -XXX,XX +XXX,XX @@ extern bool use_mips32r2_instructions;
33
@@ -XXX,XX +XXX,XX @@ bool tcg_region_alloc(TCGContext *s);
29
#define TCG_TARGET_HAS_muluh_i32 1
34
void tcg_region_initial_alloc(TCGContext *s);
30
#define TCG_TARGET_HAS_mulsh_i32 1
35
void tcg_region_prologue_set(TCGContext *s);
31
#define TCG_TARGET_HAS_bswap32_i32 1
36
32
-#define TCG_TARGET_HAS_direct_jump 1
37
+static inline const TCGHelperInfo *tcg_call_info(TCGOp *op)
33
+#define TCG_TARGET_HAS_direct_jump 0
38
+{
34
39
+ return (void *)(uintptr_t)op->args[TCGOP_CALLO(op) + TCGOP_CALLI(op) + 1];
35
#if TCG_TARGET_REG_BITS == 64
40
+}
36
#define TCG_TARGET_HAS_add2_i32 0
41
+
37
@@ -XXX,XX +XXX,XX @@ extern bool use_mips32r2_instructions;
42
static inline unsigned tcg_call_flags(TCGOp *op)
38
#define TCG_TARGET_DEFAULT_MO (0)
43
{
39
#define TCG_TARGET_HAS_MEMORY_BSWAP 1
44
- return op->args[TCGOP_CALLO(op) + TCGOP_CALLI(op) + 1];
40
45
+ return tcg_call_info(op)->flags;
41
-void tb_target_set_jmp_target(uintptr_t, uintptr_t, uintptr_t, uintptr_t);
42
+/* not defined -- call should be eliminated at compile time */
43
+void tb_target_set_jmp_target(uintptr_t, uintptr_t, uintptr_t, uintptr_t)
44
+ QEMU_ERROR("code path is reachable");
45
46
#ifdef CONFIG_SOFTMMU
47
#define TCG_TARGET_NEED_LDST_LABELS
48
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
49
index XXXXXXX..XXXXXXX 100644
50
--- a/tcg/mips/tcg-target.c.inc
51
+++ b/tcg/mips/tcg-target.c.inc
52
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
53
}
54
break;
55
case INDEX_op_goto_tb:
56
- if (s->tb_jmp_insn_offset) {
57
- /* direct jump method */
58
- s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
59
- /* Avoid clobbering the address during retranslation. */
60
- tcg_out32(s, OPC_J | (*(uint32_t *)s->code_ptr & 0x3ffffff));
61
- } else {
62
- /* indirect jump method */
63
- tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_REG_ZERO,
64
- (uintptr_t)(s->tb_jmp_target_addr + a0));
65
- tcg_out_opc_reg(s, OPC_JR, 0, TCG_TMP0, 0);
66
- }
67
+ /* indirect jump method */
68
+ tcg_debug_assert(s->tb_jmp_insn_offset == 0);
69
+ tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_REG_ZERO,
70
+ (uintptr_t)(s->tb_jmp_target_addr + a0));
71
+ tcg_out_opc_reg(s, OPC_JR, 0, TCG_TMP0, 0);
72
tcg_out_nop(s);
73
set_jmp_reset_offset(s, a0);
74
break;
75
@@ -XXX,XX +XXX,XX @@ static void tcg_target_init(TCGContext *s)
76
tcg_regset_set_reg(s->reserved_regs, TCG_REG_GP); /* global pointer */
46
}
77
}
47
78
48
#endif /* TCG_INTERNAL_H */
79
-void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_rx,
49
diff --git a/tcg/tcg.c b/tcg/tcg.c
80
- uintptr_t jmp_rw, uintptr_t addr)
50
index XXXXXXX..XXXXXXX 100644
51
--- a/tcg/tcg.c
52
+++ b/tcg/tcg.c
53
@@ -XXX,XX +XXX,XX @@ void tcg_pool_reset(TCGContext *s)
54
s->pool_current = NULL;
55
}
56
57
-typedef struct TCGHelperInfo {
58
- void *func;
59
- const char *name;
60
- unsigned flags;
61
- unsigned typemask;
62
-} TCGHelperInfo;
63
-
64
#include "exec/helper-proto.h"
65
66
static const TCGHelperInfo all_helpers[] = {
67
@@ -XXX,XX +XXX,XX @@ bool tcg_op_supported(TCGOpcode op)
68
void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
69
{
70
int i, real_args, nb_rets, pi;
71
- unsigned typemask, flags;
72
- TCGHelperInfo *info;
73
+ unsigned typemask;
74
+ const TCGHelperInfo *info;
75
TCGOp *op;
76
77
info = g_hash_table_lookup(helper_table, (gpointer)func);
78
- flags = info->flags;
79
typemask = info->typemask;
80
81
#ifdef CONFIG_PLUGIN
82
@@ -XXX,XX +XXX,XX @@ void tcg_gen_callN(void *func, TCGTemp *ret, int nargs, TCGTemp **args)
83
real_args++;
84
}
85
op->args[pi++] = (uintptr_t)func;
86
- op->args[pi++] = flags;
87
+ op->args[pi++] = (uintptr_t)info;
88
TCGOP_CALLI(op) = real_args;
89
90
/* Make sure the fields didn't overflow. */
91
@@ -XXX,XX +XXX,XX @@ static char *tcg_get_arg_str(TCGContext *s, char *buf,
92
return tcg_get_arg_str_ptr(s, buf, buf_size, arg_temp(arg));
93
}
94
95
-/* Find helper name. */
96
-static inline const char *tcg_find_helper(TCGContext *s, uintptr_t val)
97
-{
81
-{
98
- const char *ret = NULL;
82
- qatomic_set((uint32_t *)jmp_rw, deposit32(OPC_J, 0, 26, addr >> 2));
99
- if (helper_table) {
83
- flush_idcache_range(jmp_rx, jmp_rw, 4);
100
- TCGHelperInfo *info = g_hash_table_lookup(helper_table, (gpointer)val);
101
- if (info) {
102
- ret = info->name;
103
- }
104
- }
105
- return ret;
106
-}
84
-}
107
-
85
-
108
static const char * const cond_name[] =
86
typedef struct {
109
{
87
DebugFrameHeader h;
110
[TCG_COND_NEVER] = "never",
88
uint8_t fde_def_cfa[4];
111
@@ -XXX,XX +XXX,XX @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
112
col += qemu_log(" " TARGET_FMT_lx, a);
113
}
114
} else if (c == INDEX_op_call) {
115
+ const TCGHelperInfo *info = tcg_call_info(op);
116
+ void *func;
117
+
118
/* variable number of arguments */
119
nb_oargs = TCGOP_CALLO(op);
120
nb_iargs = TCGOP_CALLI(op);
121
nb_cargs = def->nb_cargs;
122
123
- /* function name, flags, out args */
124
- col += qemu_log(" %s %s,$0x%x,$%d", def->name,
125
- tcg_find_helper(s, op->args[nb_oargs + nb_iargs]),
126
- tcg_call_flags(op), nb_oargs);
127
+ col += qemu_log(" %s ", def->name);
128
+
129
+ /*
130
+ * Print the function name from TCGHelperInfo, if available.
131
+ * Note that plugins have a template function for the info,
132
+ * but the actual function pointer comes from the plugin.
133
+ */
134
+ func = (void *)(uintptr_t)op->args[nb_oargs + nb_iargs];
135
+ if (func == info->func) {
136
+ col += qemu_log("%s", info->name);
137
+ } else {
138
+ col += qemu_log("plugin(%p)", func);
139
+ }
140
+
141
+ col += qemu_log("$0x%x,$%d", info->flags, nb_oargs);
142
for (i = 0; i < nb_oargs; i++) {
143
col += qemu_log(",%s", tcg_get_arg_str(s, buf, sizeof(buf),
144
op->args[i]));
145
--
89
--
146
2.25.1
90
2.25.1
147
91
148
92
diff view generated by jsdifflib
1
This removes all of the problems with unaligned accesses
2
to the bytecode stream.
3
4
With an 8-bit opcode at the bottom, we have 24 bits remaining,
5
which are generally split into 6 4-bit slots. This fits well
6
with the maximum length opcodes, e.g. INDEX_op_add2_i32, which
7
have 6 register operands.
8
9
We have, in previous patches, rearranged things such that there
10
are no operations with a label which have more than one other
11
operand. Which leaves us with a 20-bit field in which to encode
12
a label, giving us a maximum TB size of 512k -- easily large.
13
14
Change the INDEX_op_tci_movi_{i32,i64} opcodes to tci_mov[il].
15
The former puts the immediate in the upper 20 bits of the insn,
16
like we do for the label displacement. The later uses a label
17
to reference an entry in the constant pool. Thus, in the worst
18
case we still have a single memory reference for any constant,
19
but now the constants are out-of-line of the bytecode and can
20
be shared between different moves saving space.
21
22
Change INDEX_op_call to use a label to reference a pair of
23
pointers in the constant pool. This removes the only slightly
24
dodgy link with the layout of struct TCGHelperInfo.
25
26
The re-encode cannot be done in pieces.
27
28
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
29
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
30
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
31
---
3
---
32
include/tcg/tcg-opc.h | 4 +-
4
tcg/region.c | 91 ----------------------------------------------------
33
tcg/tci/tcg-target.h | 3 +-
5
1 file changed, 91 deletions(-)
34
tcg/tci.c | 539 +++++++++++++++------------------------
35
tcg/tci/tcg-target.c.inc | 379 ++++++++++++---------------
36
tcg/tci/README | 20 +-
37
5 files changed, 383 insertions(+), 562 deletions(-)
38
6
39
diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
7
diff --git a/tcg/region.c b/tcg/region.c
40
index XXXXXXX..XXXXXXX 100644
8
index XXXXXXX..XXXXXXX 100644
41
--- a/include/tcg/tcg-opc.h
9
--- a/tcg/region.c
42
+++ b/include/tcg/tcg-opc.h
10
+++ b/tcg/region.c
43
@@ -XXX,XX +XXX,XX @@ DEF(last_generic, 0, 0, 0, TCG_OPF_NOT_PRESENT)
11
@@ -XXX,XX +XXX,XX @@ static size_t tcg_n_regions(size_t tb_size, unsigned max_cpus)
44
12
(DEFAULT_CODE_GEN_BUFFER_SIZE_1 < MAX_CODE_GEN_BUFFER_SIZE \
45
#ifdef TCG_TARGET_INTERPRETER
13
? DEFAULT_CODE_GEN_BUFFER_SIZE_1 : MAX_CODE_GEN_BUFFER_SIZE)
46
/* These opcodes are only for use between the tci generator and interpreter. */
14
47
-DEF(tci_movi_i32, 1, 0, 1, TCG_OPF_NOT_PRESENT)
15
-#ifdef __mips__
48
-DEF(tci_movi_i64, 1, 0, 1, TCG_OPF_64BIT | TCG_OPF_NOT_PRESENT)
16
-/*
49
+DEF(tci_movi, 1, 0, 1, TCG_OPF_NOT_PRESENT)
17
- * In order to use J and JAL within the code_gen_buffer, we require
50
+DEF(tci_movl, 1, 0, 1, TCG_OPF_NOT_PRESENT)
18
- * that the buffer not cross a 256MB boundary.
51
#endif
19
- */
52
20
-static inline bool cross_256mb(void *addr, size_t size)
53
#undef TLADDR_ARGS
54
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
55
index XXXXXXX..XXXXXXX 100644
56
--- a/tcg/tci/tcg-target.h
57
+++ b/tcg/tci/tcg-target.h
58
@@ -XXX,XX +XXX,XX @@
59
#define TCG_TARGET_H
60
61
#define TCG_TARGET_INTERPRETER 1
62
-#define TCG_TARGET_INSN_UNIT_SIZE 1
63
+#define TCG_TARGET_INSN_UNIT_SIZE 4
64
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 32
65
#define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
66
67
@@ -XXX,XX +XXX,XX @@ typedef enum {
68
#define TCG_TARGET_STACK_ALIGN 8
69
70
#define HAVE_TCG_QEMU_TB_EXEC
71
+#define TCG_TARGET_NEED_POOL_LABELS
72
73
/* We could notice __i386__ or __s390x__ and reduce the barriers depending
74
on the host. But if you want performance, you use the normal backend.
75
diff --git a/tcg/tci.c b/tcg/tci.c
76
index XXXXXXX..XXXXXXX 100644
77
--- a/tcg/tci.c
78
+++ b/tcg/tci.c
79
@@ -XXX,XX +XXX,XX @@ static uint64_t tci_uint64(uint32_t high, uint32_t low)
80
return ((uint64_t)high << 32) + low;
81
}
82
83
-/* Read constant byte from bytecode. */
84
-static uint8_t tci_read_b(const uint8_t **tb_ptr)
85
-{
21
-{
86
- return *(tb_ptr[0]++);
22
- return ((uintptr_t)addr ^ ((uintptr_t)addr + size)) & ~0x0ffffffful;
87
-}
23
-}
88
-
24
-
89
-/* Read register number from bytecode. */
25
-/*
90
-static TCGReg tci_read_r(const uint8_t **tb_ptr)
26
- * We weren't able to allocate a buffer without crossing that boundary,
27
- * so make do with the larger portion of the buffer that doesn't cross.
28
- * Returns the new base and size of the buffer in *obuf and *osize.
29
- */
30
-static inline void split_cross_256mb(void **obuf, size_t *osize,
31
- void *buf1, size_t size1)
91
-{
32
-{
92
- uint8_t regno = tci_read_b(tb_ptr);
33
- void *buf2 = (void *)(((uintptr_t)buf1 + size1) & ~0x0ffffffful);
93
- tci_assert(regno < TCG_TARGET_NB_REGS);
34
- size_t size2 = buf1 + size1 - buf2;
94
- return regno;
95
-}
96
-
35
-
97
-/* Read constant (native size) from bytecode. */
36
- size1 = buf2 - buf1;
98
-static tcg_target_ulong tci_read_i(const uint8_t **tb_ptr)
37
- if (size1 < size2) {
99
-{
38
- size1 = size2;
100
- tcg_target_ulong value = *(const tcg_target_ulong *)(*tb_ptr);
39
- buf1 = buf2;
101
- *tb_ptr += sizeof(value);
40
- }
102
- return value;
103
-}
104
-
41
-
105
-/* Read unsigned constant (32 bit) from bytecode. */
42
- *obuf = buf1;
106
-static uint32_t tci_read_i32(const uint8_t **tb_ptr)
43
- *osize = size1;
107
-{
108
- uint32_t value = *(const uint32_t *)(*tb_ptr);
109
- *tb_ptr += sizeof(value);
110
- return value;
111
-}
112
-
113
-/* Read signed constant (32 bit) from bytecode. */
114
-static int32_t tci_read_s32(const uint8_t **tb_ptr)
115
-{
116
- int32_t value = *(const int32_t *)(*tb_ptr);
117
- *tb_ptr += sizeof(value);
118
- return value;
119
-}
120
-
121
-static tcg_target_ulong tci_read_label(const uint8_t **tb_ptr)
122
-{
123
- return tci_read_i(tb_ptr);
124
-}
125
-
126
/*
127
* Load sets of arguments all at once. The naming convention is:
128
* tci_args_<arguments>
129
@@ -XXX,XX +XXX,XX @@ static tcg_target_ulong tci_read_label(const uint8_t **tb_ptr)
130
* s = signed ldst offset
131
*/
132
133
-static void check_size(const uint8_t *start, const uint8_t **tb_ptr)
134
+static void tci_args_l(uint32_t insn, const void *tb_ptr, void **l0)
135
{
136
- const uint8_t *old_code_ptr = start - 2;
137
- uint8_t op_size = old_code_ptr[1];
138
- tci_assert(*tb_ptr == old_code_ptr + op_size);
139
+ int diff = sextract32(insn, 12, 20);
140
+ *l0 = diff ? (void *)tb_ptr + diff : NULL;
141
}
142
143
-static void tci_args_l(const uint8_t **tb_ptr, void **l0)
144
+static void tci_args_nl(uint32_t insn, const void *tb_ptr,
145
+ uint8_t *n0, void **l1)
146
{
147
- const uint8_t *start = *tb_ptr;
148
-
149
- *l0 = (void *)tci_read_label(tb_ptr);
150
-
151
- check_size(start, tb_ptr);
152
+ *n0 = extract32(insn, 8, 4);
153
+ *l1 = sextract32(insn, 12, 20) + (void *)tb_ptr;
154
}
155
156
-static void tci_args_nll(const uint8_t **tb_ptr, uint8_t *n0,
157
- void **l1, void **l2)
158
+static void tci_args_rl(uint32_t insn, const void *tb_ptr,
159
+ TCGReg *r0, void **l1)
160
{
161
- const uint8_t *start = *tb_ptr;
162
-
163
- *n0 = tci_read_b(tb_ptr);
164
- *l1 = (void *)tci_read_label(tb_ptr);
165
- *l2 = (void *)tci_read_label(tb_ptr);
166
-
167
- check_size(start, tb_ptr);
168
+ *r0 = extract32(insn, 8, 4);
169
+ *l1 = sextract32(insn, 12, 20) + (void *)tb_ptr;
170
}
171
172
-static void tci_args_rl(const uint8_t **tb_ptr, TCGReg *r0, void **l1)
173
+static void tci_args_rr(uint32_t insn, TCGReg *r0, TCGReg *r1)
174
{
175
- const uint8_t *start = *tb_ptr;
176
-
177
- *r0 = tci_read_r(tb_ptr);
178
- *l1 = (void *)tci_read_label(tb_ptr);
179
-
180
- check_size(start, tb_ptr);
181
+ *r0 = extract32(insn, 8, 4);
182
+ *r1 = extract32(insn, 12, 4);
183
}
184
185
-static void tci_args_rr(const uint8_t **tb_ptr,
186
- TCGReg *r0, TCGReg *r1)
187
+static void tci_args_ri(uint32_t insn, TCGReg *r0, tcg_target_ulong *i1)
188
{
189
- const uint8_t *start = *tb_ptr;
190
-
191
- *r0 = tci_read_r(tb_ptr);
192
- *r1 = tci_read_r(tb_ptr);
193
-
194
- check_size(start, tb_ptr);
195
+ *r0 = extract32(insn, 8, 4);
196
+ *i1 = sextract32(insn, 12, 20);
197
}
198
199
-static void tci_args_ri(const uint8_t **tb_ptr,
200
- TCGReg *r0, tcg_target_ulong *i1)
201
+static void tci_args_rrm(uint32_t insn, TCGReg *r0,
202
+ TCGReg *r1, TCGMemOpIdx *m2)
203
{
204
- const uint8_t *start = *tb_ptr;
205
-
206
- *r0 = tci_read_r(tb_ptr);
207
- *i1 = tci_read_i32(tb_ptr);
208
-
209
- check_size(start, tb_ptr);
210
+ *r0 = extract32(insn, 8, 4);
211
+ *r1 = extract32(insn, 12, 4);
212
+ *m2 = extract32(insn, 20, 12);
213
}
214
215
-#if TCG_TARGET_REG_BITS == 64
216
-static void tci_args_rI(const uint8_t **tb_ptr,
217
- TCGReg *r0, tcg_target_ulong *i1)
218
+static void tci_args_rrr(uint32_t insn, TCGReg *r0, TCGReg *r1, TCGReg *r2)
219
{
220
- const uint8_t *start = *tb_ptr;
221
-
222
- *r0 = tci_read_r(tb_ptr);
223
- *i1 = tci_read_i(tb_ptr);
224
-
225
- check_size(start, tb_ptr);
226
-}
44
-}
227
-#endif
45
-#endif
228
-
46
-
229
-static void tci_args_rrm(const uint8_t **tb_ptr,
47
#ifdef USE_STATIC_CODE_GEN_BUFFER
230
- TCGReg *r0, TCGReg *r1, TCGMemOpIdx *m2)
48
static uint8_t static_code_gen_buffer[DEFAULT_CODE_GEN_BUFFER_SIZE]
231
-{
49
__attribute__((aligned(CODE_GEN_ALIGN)));
232
- const uint8_t *start = *tb_ptr;
50
@@ -XXX,XX +XXX,XX @@ static int alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
233
-
51
size = QEMU_ALIGN_DOWN(tb_size, qemu_real_host_page_size);
234
- *r0 = tci_read_r(tb_ptr);
52
}
235
- *r1 = tci_read_r(tb_ptr);
53
236
- *m2 = tci_read_i32(tb_ptr);
54
-#ifdef __mips__
237
-
55
- if (cross_256mb(buf, size)) {
238
- check_size(start, tb_ptr);
56
- split_cross_256mb(&buf, &size, buf, size);
239
+ *r0 = extract32(insn, 8, 4);
240
+ *r1 = extract32(insn, 12, 4);
241
+ *r2 = extract32(insn, 16, 4);
242
}
243
244
-static void tci_args_rrr(const uint8_t **tb_ptr,
245
- TCGReg *r0, TCGReg *r1, TCGReg *r2)
246
+static void tci_args_rrs(uint32_t insn, TCGReg *r0, TCGReg *r1, int32_t *i2)
247
{
248
- const uint8_t *start = *tb_ptr;
249
-
250
- *r0 = tci_read_r(tb_ptr);
251
- *r1 = tci_read_r(tb_ptr);
252
- *r2 = tci_read_r(tb_ptr);
253
-
254
- check_size(start, tb_ptr);
255
+ *r0 = extract32(insn, 8, 4);
256
+ *r1 = extract32(insn, 12, 4);
257
+ *i2 = sextract32(insn, 16, 16);
258
}
259
260
-static void tci_args_rrs(const uint8_t **tb_ptr,
261
- TCGReg *r0, TCGReg *r1, int32_t *i2)
262
-{
263
- const uint8_t *start = *tb_ptr;
264
-
265
- *r0 = tci_read_r(tb_ptr);
266
- *r1 = tci_read_r(tb_ptr);
267
- *i2 = tci_read_s32(tb_ptr);
268
-
269
- check_size(start, tb_ptr);
270
-}
271
-
272
-static void tci_args_rrrc(const uint8_t **tb_ptr,
273
+static void tci_args_rrrc(uint32_t insn,
274
TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGCond *c3)
275
{
276
- const uint8_t *start = *tb_ptr;
277
-
278
- *r0 = tci_read_r(tb_ptr);
279
- *r1 = tci_read_r(tb_ptr);
280
- *r2 = tci_read_r(tb_ptr);
281
- *c3 = tci_read_b(tb_ptr);
282
-
283
- check_size(start, tb_ptr);
284
+ *r0 = extract32(insn, 8, 4);
285
+ *r1 = extract32(insn, 12, 4);
286
+ *r2 = extract32(insn, 16, 4);
287
+ *c3 = extract32(insn, 20, 4);
288
}
289
290
-static void tci_args_rrrm(const uint8_t **tb_ptr,
291
+static void tci_args_rrrm(uint32_t insn,
292
TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGMemOpIdx *m3)
293
{
294
- const uint8_t *start = *tb_ptr;
295
-
296
- *r0 = tci_read_r(tb_ptr);
297
- *r1 = tci_read_r(tb_ptr);
298
- *r2 = tci_read_r(tb_ptr);
299
- *m3 = tci_read_i32(tb_ptr);
300
-
301
- check_size(start, tb_ptr);
302
+ *r0 = extract32(insn, 8, 4);
303
+ *r1 = extract32(insn, 12, 4);
304
+ *r2 = extract32(insn, 16, 4);
305
+ *m3 = extract32(insn, 20, 12);
306
}
307
308
-static void tci_args_rrrbb(const uint8_t **tb_ptr, TCGReg *r0, TCGReg *r1,
309
+static void tci_args_rrrbb(uint32_t insn, TCGReg *r0, TCGReg *r1,
310
TCGReg *r2, uint8_t *i3, uint8_t *i4)
311
{
312
- const uint8_t *start = *tb_ptr;
313
-
314
- *r0 = tci_read_r(tb_ptr);
315
- *r1 = tci_read_r(tb_ptr);
316
- *r2 = tci_read_r(tb_ptr);
317
- *i3 = tci_read_b(tb_ptr);
318
- *i4 = tci_read_b(tb_ptr);
319
-
320
- check_size(start, tb_ptr);
321
+ *r0 = extract32(insn, 8, 4);
322
+ *r1 = extract32(insn, 12, 4);
323
+ *r2 = extract32(insn, 16, 4);
324
+ *i3 = extract32(insn, 20, 6);
325
+ *i4 = extract32(insn, 26, 6);
326
}
327
328
-static void tci_args_rrrrm(const uint8_t **tb_ptr, TCGReg *r0, TCGReg *r1,
329
- TCGReg *r2, TCGReg *r3, TCGMemOpIdx *m4)
330
+static void tci_args_rrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
331
+ TCGReg *r2, TCGReg *r3, TCGReg *r4)
332
{
333
- const uint8_t *start = *tb_ptr;
334
-
335
- *r0 = tci_read_r(tb_ptr);
336
- *r1 = tci_read_r(tb_ptr);
337
- *r2 = tci_read_r(tb_ptr);
338
- *r3 = tci_read_r(tb_ptr);
339
- *m4 = tci_read_i32(tb_ptr);
340
-
341
- check_size(start, tb_ptr);
342
+ *r0 = extract32(insn, 8, 4);
343
+ *r1 = extract32(insn, 12, 4);
344
+ *r2 = extract32(insn, 16, 4);
345
+ *r3 = extract32(insn, 20, 4);
346
+ *r4 = extract32(insn, 24, 4);
347
}
348
349
#if TCG_TARGET_REG_BITS == 32
350
-static void tci_args_rrrr(const uint8_t **tb_ptr,
351
+static void tci_args_rrrr(uint32_t insn,
352
TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGReg *r3)
353
{
354
- const uint8_t *start = *tb_ptr;
355
-
356
- *r0 = tci_read_r(tb_ptr);
357
- *r1 = tci_read_r(tb_ptr);
358
- *r2 = tci_read_r(tb_ptr);
359
- *r3 = tci_read_r(tb_ptr);
360
-
361
- check_size(start, tb_ptr);
362
+ *r0 = extract32(insn, 8, 4);
363
+ *r1 = extract32(insn, 12, 4);
364
+ *r2 = extract32(insn, 16, 4);
365
+ *r3 = extract32(insn, 20, 4);
366
}
367
368
-static void tci_args_rrrrrc(const uint8_t **tb_ptr, TCGReg *r0, TCGReg *r1,
369
+static void tci_args_rrrrrc(uint32_t insn, TCGReg *r0, TCGReg *r1,
370
TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGCond *c5)
371
{
372
- const uint8_t *start = *tb_ptr;
373
-
374
- *r0 = tci_read_r(tb_ptr);
375
- *r1 = tci_read_r(tb_ptr);
376
- *r2 = tci_read_r(tb_ptr);
377
- *r3 = tci_read_r(tb_ptr);
378
- *r4 = tci_read_r(tb_ptr);
379
- *c5 = tci_read_b(tb_ptr);
380
-
381
- check_size(start, tb_ptr);
382
+ *r0 = extract32(insn, 8, 4);
383
+ *r1 = extract32(insn, 12, 4);
384
+ *r2 = extract32(insn, 16, 4);
385
+ *r3 = extract32(insn, 20, 4);
386
+ *r4 = extract32(insn, 24, 4);
387
+ *c5 = extract32(insn, 28, 4);
388
}
389
390
-static void tci_args_rrrrrr(const uint8_t **tb_ptr, TCGReg *r0, TCGReg *r1,
391
+static void tci_args_rrrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
392
TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGReg *r5)
393
{
394
- const uint8_t *start = *tb_ptr;
395
-
396
- *r0 = tci_read_r(tb_ptr);
397
- *r1 = tci_read_r(tb_ptr);
398
- *r2 = tci_read_r(tb_ptr);
399
- *r3 = tci_read_r(tb_ptr);
400
- *r4 = tci_read_r(tb_ptr);
401
- *r5 = tci_read_r(tb_ptr);
402
-
403
- check_size(start, tb_ptr);
404
+ *r0 = extract32(insn, 8, 4);
405
+ *r1 = extract32(insn, 12, 4);
406
+ *r2 = extract32(insn, 16, 4);
407
+ *r3 = extract32(insn, 20, 4);
408
+ *r4 = extract32(insn, 24, 4);
409
+ *r5 = extract32(insn, 28, 4);
410
}
411
#endif
412
413
@@ -XXX,XX +XXX,XX @@ static bool tci_compare64(uint64_t u0, uint64_t u1, TCGCond condition)
414
uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
415
const void *v_tb_ptr)
416
{
417
- const uint8_t *tb_ptr = v_tb_ptr;
418
+ const uint32_t *tb_ptr = v_tb_ptr;
419
tcg_target_ulong regs[TCG_TARGET_NB_REGS];
420
uint64_t stack[(TCG_STATIC_CALL_ARGS_SIZE + TCG_STATIC_FRAME_SIZE)
421
/ sizeof(uint64_t)];
422
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
423
tci_assert(tb_ptr);
424
425
for (;;) {
426
- TCGOpcode opc = tb_ptr[0];
427
- TCGReg r0, r1, r2, r3;
428
+ uint32_t insn;
429
+ TCGOpcode opc;
430
+ TCGReg r0, r1, r2, r3, r4;
431
tcg_target_ulong t1;
432
TCGCond condition;
433
target_ulong taddr;
434
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
435
uint32_t tmp32;
436
uint64_t tmp64;
437
#if TCG_TARGET_REG_BITS == 32
438
- TCGReg r4, r5;
439
+ TCGReg r5;
440
uint64_t T1, T2;
441
#endif
442
TCGMemOpIdx oi;
443
int32_t ofs;
444
- void *ptr, *cif;
445
+ void *ptr;
446
447
- /* Skip opcode and size entry. */
448
- tb_ptr += 2;
449
+ insn = *tb_ptr++;
450
+ opc = extract32(insn, 0, 8);
451
452
switch (opc) {
453
case INDEX_op_call:
454
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
455
}
456
}
457
458
- tci_args_nll(&tb_ptr, &len, &ptr, &cif);
459
+ tci_args_nl(insn, tb_ptr, &len, &ptr);
460
461
/* Helper functions may need to access the "return address" */
462
tci_tb_ptr = (uintptr_t)tb_ptr;
463
464
- ffi_call(cif, ptr, stack, call_slots);
465
+ {
466
+ void **pptr = ptr;
467
+ ffi_call(pptr[1], pptr[0], stack, call_slots);
468
+ }
469
470
/* Any result winds up "left-aligned" in the stack[0] slot. */
471
switch (len) {
472
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
473
break;
474
475
case INDEX_op_br:
476
- tci_args_l(&tb_ptr, &ptr);
477
+ tci_args_l(insn, tb_ptr, &ptr);
478
tb_ptr = ptr;
479
continue;
480
case INDEX_op_setcond_i32:
481
- tci_args_rrrc(&tb_ptr, &r0, &r1, &r2, &condition);
482
+ tci_args_rrrc(insn, &r0, &r1, &r2, &condition);
483
regs[r0] = tci_compare32(regs[r1], regs[r2], condition);
484
break;
485
#if TCG_TARGET_REG_BITS == 32
486
case INDEX_op_setcond2_i32:
487
- tci_args_rrrrrc(&tb_ptr, &r0, &r1, &r2, &r3, &r4, &condition);
488
+ tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition);
489
T1 = tci_uint64(regs[r2], regs[r1]);
490
T2 = tci_uint64(regs[r4], regs[r3]);
491
regs[r0] = tci_compare64(T1, T2, condition);
492
break;
493
#elif TCG_TARGET_REG_BITS == 64
494
case INDEX_op_setcond_i64:
495
- tci_args_rrrc(&tb_ptr, &r0, &r1, &r2, &condition);
496
+ tci_args_rrrc(insn, &r0, &r1, &r2, &condition);
497
regs[r0] = tci_compare64(regs[r1], regs[r2], condition);
498
break;
499
#endif
500
CASE_32_64(mov)
501
- tci_args_rr(&tb_ptr, &r0, &r1);
502
+ tci_args_rr(insn, &r0, &r1);
503
regs[r0] = regs[r1];
504
break;
505
- case INDEX_op_tci_movi_i32:
506
- tci_args_ri(&tb_ptr, &r0, &t1);
507
+ case INDEX_op_tci_movi:
508
+ tci_args_ri(insn, &r0, &t1);
509
regs[r0] = t1;
510
break;
511
+ case INDEX_op_tci_movl:
512
+ tci_args_rl(insn, tb_ptr, &r0, &ptr);
513
+ regs[r0] = *(tcg_target_ulong *)ptr;
514
+ break;
515
516
/* Load/store operations (32 bit). */
517
518
CASE_32_64(ld8u)
519
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
520
+ tci_args_rrs(insn, &r0, &r1, &ofs);
521
ptr = (void *)(regs[r1] + ofs);
522
regs[r0] = *(uint8_t *)ptr;
523
break;
524
CASE_32_64(ld8s)
525
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
526
+ tci_args_rrs(insn, &r0, &r1, &ofs);
527
ptr = (void *)(regs[r1] + ofs);
528
regs[r0] = *(int8_t *)ptr;
529
break;
530
CASE_32_64(ld16u)
531
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
532
+ tci_args_rrs(insn, &r0, &r1, &ofs);
533
ptr = (void *)(regs[r1] + ofs);
534
regs[r0] = *(uint16_t *)ptr;
535
break;
536
CASE_32_64(ld16s)
537
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
538
+ tci_args_rrs(insn, &r0, &r1, &ofs);
539
ptr = (void *)(regs[r1] + ofs);
540
regs[r0] = *(int16_t *)ptr;
541
break;
542
case INDEX_op_ld_i32:
543
CASE_64(ld32u)
544
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
545
+ tci_args_rrs(insn, &r0, &r1, &ofs);
546
ptr = (void *)(regs[r1] + ofs);
547
regs[r0] = *(uint32_t *)ptr;
548
break;
549
CASE_32_64(st8)
550
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
551
+ tci_args_rrs(insn, &r0, &r1, &ofs);
552
ptr = (void *)(regs[r1] + ofs);
553
*(uint8_t *)ptr = regs[r0];
554
break;
555
CASE_32_64(st16)
556
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
557
+ tci_args_rrs(insn, &r0, &r1, &ofs);
558
ptr = (void *)(regs[r1] + ofs);
559
*(uint16_t *)ptr = regs[r0];
560
break;
561
case INDEX_op_st_i32:
562
CASE_64(st32)
563
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
564
+ tci_args_rrs(insn, &r0, &r1, &ofs);
565
ptr = (void *)(regs[r1] + ofs);
566
*(uint32_t *)ptr = regs[r0];
567
break;
568
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
569
/* Arithmetic operations (mixed 32/64 bit). */
570
571
CASE_32_64(add)
572
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
573
+ tci_args_rrr(insn, &r0, &r1, &r2);
574
regs[r0] = regs[r1] + regs[r2];
575
break;
576
CASE_32_64(sub)
577
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
578
+ tci_args_rrr(insn, &r0, &r1, &r2);
579
regs[r0] = regs[r1] - regs[r2];
580
break;
581
CASE_32_64(mul)
582
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
583
+ tci_args_rrr(insn, &r0, &r1, &r2);
584
regs[r0] = regs[r1] * regs[r2];
585
break;
586
CASE_32_64(and)
587
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
588
+ tci_args_rrr(insn, &r0, &r1, &r2);
589
regs[r0] = regs[r1] & regs[r2];
590
break;
591
CASE_32_64(or)
592
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
593
+ tci_args_rrr(insn, &r0, &r1, &r2);
594
regs[r0] = regs[r1] | regs[r2];
595
break;
596
CASE_32_64(xor)
597
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
598
+ tci_args_rrr(insn, &r0, &r1, &r2);
599
regs[r0] = regs[r1] ^ regs[r2];
600
break;
601
602
/* Arithmetic operations (32 bit). */
603
604
case INDEX_op_div_i32:
605
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
606
+ tci_args_rrr(insn, &r0, &r1, &r2);
607
regs[r0] = (int32_t)regs[r1] / (int32_t)regs[r2];
608
break;
609
case INDEX_op_divu_i32:
610
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
611
+ tci_args_rrr(insn, &r0, &r1, &r2);
612
regs[r0] = (uint32_t)regs[r1] / (uint32_t)regs[r2];
613
break;
614
case INDEX_op_rem_i32:
615
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
616
+ tci_args_rrr(insn, &r0, &r1, &r2);
617
regs[r0] = (int32_t)regs[r1] % (int32_t)regs[r2];
618
break;
619
case INDEX_op_remu_i32:
620
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
621
+ tci_args_rrr(insn, &r0, &r1, &r2);
622
regs[r0] = (uint32_t)regs[r1] % (uint32_t)regs[r2];
623
break;
624
625
/* Shift/rotate operations (32 bit). */
626
627
case INDEX_op_shl_i32:
628
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
629
+ tci_args_rrr(insn, &r0, &r1, &r2);
630
regs[r0] = (uint32_t)regs[r1] << (regs[r2] & 31);
631
break;
632
case INDEX_op_shr_i32:
633
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
634
+ tci_args_rrr(insn, &r0, &r1, &r2);
635
regs[r0] = (uint32_t)regs[r1] >> (regs[r2] & 31);
636
break;
637
case INDEX_op_sar_i32:
638
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
639
+ tci_args_rrr(insn, &r0, &r1, &r2);
640
regs[r0] = (int32_t)regs[r1] >> (regs[r2] & 31);
641
break;
642
#if TCG_TARGET_HAS_rot_i32
643
case INDEX_op_rotl_i32:
644
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
645
+ tci_args_rrr(insn, &r0, &r1, &r2);
646
regs[r0] = rol32(regs[r1], regs[r2] & 31);
647
break;
648
case INDEX_op_rotr_i32:
649
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
650
+ tci_args_rrr(insn, &r0, &r1, &r2);
651
regs[r0] = ror32(regs[r1], regs[r2] & 31);
652
break;
653
#endif
654
#if TCG_TARGET_HAS_deposit_i32
655
case INDEX_op_deposit_i32:
656
- tci_args_rrrbb(&tb_ptr, &r0, &r1, &r2, &pos, &len);
657
+ tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len);
658
regs[r0] = deposit32(regs[r1], pos, len, regs[r2]);
659
break;
660
#endif
661
case INDEX_op_brcond_i32:
662
- tci_args_rl(&tb_ptr, &r0, &ptr);
663
+ tci_args_rl(insn, tb_ptr, &r0, &ptr);
664
if ((uint32_t)regs[r0]) {
665
tb_ptr = ptr;
666
}
667
break;
668
#if TCG_TARGET_REG_BITS == 32
669
case INDEX_op_add2_i32:
670
- tci_args_rrrrrr(&tb_ptr, &r0, &r1, &r2, &r3, &r4, &r5);
671
+ tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
672
T1 = tci_uint64(regs[r3], regs[r2]);
673
T2 = tci_uint64(regs[r5], regs[r4]);
674
tci_write_reg64(regs, r1, r0, T1 + T2);
675
break;
676
case INDEX_op_sub2_i32:
677
- tci_args_rrrrrr(&tb_ptr, &r0, &r1, &r2, &r3, &r4, &r5);
678
+ tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
679
T1 = tci_uint64(regs[r3], regs[r2]);
680
T2 = tci_uint64(regs[r5], regs[r4]);
681
tci_write_reg64(regs, r1, r0, T1 - T2);
682
break;
683
case INDEX_op_mulu2_i32:
684
- tci_args_rrrr(&tb_ptr, &r0, &r1, &r2, &r3);
685
+ tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
686
tci_write_reg64(regs, r1, r0, (uint64_t)regs[r2] * regs[r3]);
687
break;
688
#endif /* TCG_TARGET_REG_BITS == 32 */
689
#if TCG_TARGET_HAS_ext8s_i32 || TCG_TARGET_HAS_ext8s_i64
690
CASE_32_64(ext8s)
691
- tci_args_rr(&tb_ptr, &r0, &r1);
692
+ tci_args_rr(insn, &r0, &r1);
693
regs[r0] = (int8_t)regs[r1];
694
break;
695
#endif
696
#if TCG_TARGET_HAS_ext16s_i32 || TCG_TARGET_HAS_ext16s_i64
697
CASE_32_64(ext16s)
698
- tci_args_rr(&tb_ptr, &r0, &r1);
699
+ tci_args_rr(insn, &r0, &r1);
700
regs[r0] = (int16_t)regs[r1];
701
break;
702
#endif
703
#if TCG_TARGET_HAS_ext8u_i32 || TCG_TARGET_HAS_ext8u_i64
704
CASE_32_64(ext8u)
705
- tci_args_rr(&tb_ptr, &r0, &r1);
706
+ tci_args_rr(insn, &r0, &r1);
707
regs[r0] = (uint8_t)regs[r1];
708
break;
709
#endif
710
#if TCG_TARGET_HAS_ext16u_i32 || TCG_TARGET_HAS_ext16u_i64
711
CASE_32_64(ext16u)
712
- tci_args_rr(&tb_ptr, &r0, &r1);
713
+ tci_args_rr(insn, &r0, &r1);
714
regs[r0] = (uint16_t)regs[r1];
715
break;
716
#endif
717
#if TCG_TARGET_HAS_bswap16_i32 || TCG_TARGET_HAS_bswap16_i64
718
CASE_32_64(bswap16)
719
- tci_args_rr(&tb_ptr, &r0, &r1);
720
+ tci_args_rr(insn, &r0, &r1);
721
regs[r0] = bswap16(regs[r1]);
722
break;
723
#endif
724
#if TCG_TARGET_HAS_bswap32_i32 || TCG_TARGET_HAS_bswap32_i64
725
CASE_32_64(bswap32)
726
- tci_args_rr(&tb_ptr, &r0, &r1);
727
+ tci_args_rr(insn, &r0, &r1);
728
regs[r0] = bswap32(regs[r1]);
729
break;
730
#endif
731
#if TCG_TARGET_HAS_not_i32 || TCG_TARGET_HAS_not_i64
732
CASE_32_64(not)
733
- tci_args_rr(&tb_ptr, &r0, &r1);
734
+ tci_args_rr(insn, &r0, &r1);
735
regs[r0] = ~regs[r1];
736
break;
737
#endif
738
#if TCG_TARGET_HAS_neg_i32 || TCG_TARGET_HAS_neg_i64
739
CASE_32_64(neg)
740
- tci_args_rr(&tb_ptr, &r0, &r1);
741
+ tci_args_rr(insn, &r0, &r1);
742
regs[r0] = -regs[r1];
743
break;
744
#endif
745
#if TCG_TARGET_REG_BITS == 64
746
- case INDEX_op_tci_movi_i64:
747
- tci_args_rI(&tb_ptr, &r0, &t1);
748
- regs[r0] = t1;
749
- break;
750
-
751
/* Load/store operations (64 bit). */
752
753
case INDEX_op_ld32s_i64:
754
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
755
+ tci_args_rrs(insn, &r0, &r1, &ofs);
756
ptr = (void *)(regs[r1] + ofs);
757
regs[r0] = *(int32_t *)ptr;
758
break;
759
case INDEX_op_ld_i64:
760
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
761
+ tci_args_rrs(insn, &r0, &r1, &ofs);
762
ptr = (void *)(regs[r1] + ofs);
763
regs[r0] = *(uint64_t *)ptr;
764
break;
765
case INDEX_op_st_i64:
766
- tci_args_rrs(&tb_ptr, &r0, &r1, &ofs);
767
+ tci_args_rrs(insn, &r0, &r1, &ofs);
768
ptr = (void *)(regs[r1] + ofs);
769
*(uint64_t *)ptr = regs[r0];
770
break;
771
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
772
/* Arithmetic operations (64 bit). */
773
774
case INDEX_op_div_i64:
775
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
776
+ tci_args_rrr(insn, &r0, &r1, &r2);
777
regs[r0] = (int64_t)regs[r1] / (int64_t)regs[r2];
778
break;
779
case INDEX_op_divu_i64:
780
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
781
+ tci_args_rrr(insn, &r0, &r1, &r2);
782
regs[r0] = (uint64_t)regs[r1] / (uint64_t)regs[r2];
783
break;
784
case INDEX_op_rem_i64:
785
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
786
+ tci_args_rrr(insn, &r0, &r1, &r2);
787
regs[r0] = (int64_t)regs[r1] % (int64_t)regs[r2];
788
break;
789
case INDEX_op_remu_i64:
790
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
791
+ tci_args_rrr(insn, &r0, &r1, &r2);
792
regs[r0] = (uint64_t)regs[r1] % (uint64_t)regs[r2];
793
break;
794
795
/* Shift/rotate operations (64 bit). */
796
797
case INDEX_op_shl_i64:
798
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
799
+ tci_args_rrr(insn, &r0, &r1, &r2);
800
regs[r0] = regs[r1] << (regs[r2] & 63);
801
break;
802
case INDEX_op_shr_i64:
803
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
804
+ tci_args_rrr(insn, &r0, &r1, &r2);
805
regs[r0] = regs[r1] >> (regs[r2] & 63);
806
break;
807
case INDEX_op_sar_i64:
808
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
809
+ tci_args_rrr(insn, &r0, &r1, &r2);
810
regs[r0] = (int64_t)regs[r1] >> (regs[r2] & 63);
811
break;
812
#if TCG_TARGET_HAS_rot_i64
813
case INDEX_op_rotl_i64:
814
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
815
+ tci_args_rrr(insn, &r0, &r1, &r2);
816
regs[r0] = rol64(regs[r1], regs[r2] & 63);
817
break;
818
case INDEX_op_rotr_i64:
819
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
820
+ tci_args_rrr(insn, &r0, &r1, &r2);
821
regs[r0] = ror64(regs[r1], regs[r2] & 63);
822
break;
823
#endif
824
#if TCG_TARGET_HAS_deposit_i64
825
case INDEX_op_deposit_i64:
826
- tci_args_rrrbb(&tb_ptr, &r0, &r1, &r2, &pos, &len);
827
+ tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len);
828
regs[r0] = deposit64(regs[r1], pos, len, regs[r2]);
829
break;
830
#endif
831
case INDEX_op_brcond_i64:
832
- tci_args_rl(&tb_ptr, &r0, &ptr);
833
+ tci_args_rl(insn, tb_ptr, &r0, &ptr);
834
if (regs[r0]) {
835
tb_ptr = ptr;
836
}
837
break;
838
case INDEX_op_ext32s_i64:
839
case INDEX_op_ext_i32_i64:
840
- tci_args_rr(&tb_ptr, &r0, &r1);
841
+ tci_args_rr(insn, &r0, &r1);
842
regs[r0] = (int32_t)regs[r1];
843
break;
844
case INDEX_op_ext32u_i64:
845
case INDEX_op_extu_i32_i64:
846
- tci_args_rr(&tb_ptr, &r0, &r1);
847
+ tci_args_rr(insn, &r0, &r1);
848
regs[r0] = (uint32_t)regs[r1];
849
break;
850
#if TCG_TARGET_HAS_bswap64_i64
851
case INDEX_op_bswap64_i64:
852
- tci_args_rr(&tb_ptr, &r0, &r1);
853
+ tci_args_rr(insn, &r0, &r1);
854
regs[r0] = bswap64(regs[r1]);
855
break;
856
#endif
857
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
858
/* QEMU specific operations. */
859
860
case INDEX_op_exit_tb:
861
- tci_args_l(&tb_ptr, &ptr);
862
+ tci_args_l(insn, tb_ptr, &ptr);
863
return (uintptr_t)ptr;
864
865
case INDEX_op_goto_tb:
866
- tci_args_l(&tb_ptr, &ptr);
867
+ tci_args_l(insn, tb_ptr, &ptr);
868
tb_ptr = *(void **)ptr;
869
break;
870
871
case INDEX_op_qemu_ld_i32:
872
if (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS) {
873
- tci_args_rrm(&tb_ptr, &r0, &r1, &oi);
874
+ tci_args_rrm(insn, &r0, &r1, &oi);
875
taddr = regs[r1];
876
} else {
877
- tci_args_rrrm(&tb_ptr, &r0, &r1, &r2, &oi);
878
+ tci_args_rrrm(insn, &r0, &r1, &r2, &oi);
879
taddr = tci_uint64(regs[r2], regs[r1]);
880
}
881
switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) {
882
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
883
884
case INDEX_op_qemu_ld_i64:
885
if (TCG_TARGET_REG_BITS == 64) {
886
- tci_args_rrm(&tb_ptr, &r0, &r1, &oi);
887
+ tci_args_rrm(insn, &r0, &r1, &oi);
888
taddr = regs[r1];
889
} else if (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS) {
890
- tci_args_rrrm(&tb_ptr, &r0, &r1, &r2, &oi);
891
+ tci_args_rrrm(insn, &r0, &r1, &r2, &oi);
892
taddr = regs[r2];
893
} else {
894
- tci_args_rrrrm(&tb_ptr, &r0, &r1, &r2, &r3, &oi);
895
+ tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
896
taddr = tci_uint64(regs[r3], regs[r2]);
897
+ oi = regs[r4];
898
}
899
switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) {
900
case MO_UB:
901
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
902
903
case INDEX_op_qemu_st_i32:
904
if (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS) {
905
- tci_args_rrm(&tb_ptr, &r0, &r1, &oi);
906
+ tci_args_rrm(insn, &r0, &r1, &oi);
907
taddr = regs[r1];
908
} else {
909
- tci_args_rrrm(&tb_ptr, &r0, &r1, &r2, &oi);
910
+ tci_args_rrrm(insn, &r0, &r1, &r2, &oi);
911
taddr = tci_uint64(regs[r2], regs[r1]);
912
}
913
tmp32 = regs[r0];
914
@@ -XXX,XX +XXX,XX @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
915
916
case INDEX_op_qemu_st_i64:
917
if (TCG_TARGET_REG_BITS == 64) {
918
- tci_args_rrm(&tb_ptr, &r0, &r1, &oi);
919
+ tci_args_rrm(insn, &r0, &r1, &oi);
920
taddr = regs[r1];
921
tmp64 = regs[r0];
922
} else {
923
if (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS) {
924
- tci_args_rrrm(&tb_ptr, &r0, &r1, &r2, &oi);
925
+ tci_args_rrrm(insn, &r0, &r1, &r2, &oi);
926
taddr = regs[r2];
927
} else {
928
- tci_args_rrrrm(&tb_ptr, &r0, &r1, &r2, &r3, &oi);
929
+ tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
930
taddr = tci_uint64(regs[r3], regs[r2]);
931
+ oi = regs[r4];
932
}
933
tmp64 = tci_uint64(regs[r1], regs[r0]);
934
}
935
@@ -XXX,XX +XXX,XX @@ static const char *str_c(TCGCond c)
936
/* Disassemble TCI bytecode. */
937
int print_insn_tci(bfd_vma addr, disassemble_info *info)
938
{
939
- uint8_t buf[256];
940
- int length, status;
941
+ const uint32_t *tb_ptr = (const void *)(uintptr_t)addr;
942
const TCGOpDef *def;
943
const char *op_name;
944
+ uint32_t insn;
945
TCGOpcode op;
946
- TCGReg r0, r1, r2, r3;
947
+ TCGReg r0, r1, r2, r3, r4;
948
#if TCG_TARGET_REG_BITS == 32
949
- TCGReg r4, r5;
950
+ TCGReg r5;
951
#endif
952
tcg_target_ulong i1;
953
int32_t s2;
954
TCGCond c;
955
TCGMemOpIdx oi;
956
uint8_t pos, len;
957
- void *ptr, *cif;
958
- const uint8_t *tb_ptr;
959
+ void *ptr;
960
961
- status = info->read_memory_func(addr, buf, 2, info);
962
- if (status != 0) {
963
- info->memory_error_func(status, addr, info);
964
- return -1;
965
- }
966
- op = buf[0];
967
- length = buf[1];
968
+ /* TCI is always the host, so we don't need to load indirect. */
969
+ insn = *tb_ptr++;
970
971
- if (length < 2) {
972
- info->fprintf_func(info->stream, "invalid length %d", length);
973
- return 1;
974
- }
975
-
976
- status = info->read_memory_func(addr + 2, buf + 2, length - 2, info);
977
- if (status != 0) {
978
- info->memory_error_func(status, addr + 2, info);
979
- return -1;
980
- }
981
+ info->fprintf_func(info->stream, "%08x ", insn);
982
983
+ op = extract32(insn, 0, 8);
984
def = &tcg_op_defs[op];
985
op_name = def->name;
986
- tb_ptr = buf + 2;
987
988
switch (op) {
989
case INDEX_op_br:
990
case INDEX_op_exit_tb:
991
case INDEX_op_goto_tb:
992
- tci_args_l(&tb_ptr, &ptr);
993
+ tci_args_l(insn, tb_ptr, &ptr);
994
info->fprintf_func(info->stream, "%-12s %p", op_name, ptr);
995
break;
996
997
case INDEX_op_call:
998
- tci_args_nll(&tb_ptr, &len, &ptr, &cif);
999
- info->fprintf_func(info->stream, "%-12s %d, %p, %p",
1000
- op_name, len, ptr, cif);
1001
+ tci_args_nl(insn, tb_ptr, &len, &ptr);
1002
+ info->fprintf_func(info->stream, "%-12s %d, %p", op_name, len, ptr);
1003
break;
1004
1005
case INDEX_op_brcond_i32:
1006
case INDEX_op_brcond_i64:
1007
- tci_args_rl(&tb_ptr, &r0, &ptr);
1008
+ tci_args_rl(insn, tb_ptr, &r0, &ptr);
1009
info->fprintf_func(info->stream, "%-12s %s, 0, ne, %p",
1010
op_name, str_r(r0), ptr);
1011
break;
1012
1013
case INDEX_op_setcond_i32:
1014
case INDEX_op_setcond_i64:
1015
- tci_args_rrrc(&tb_ptr, &r0, &r1, &r2, &c);
1016
+ tci_args_rrrc(insn, &r0, &r1, &r2, &c);
1017
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s",
1018
op_name, str_r(r0), str_r(r1), str_r(r2), str_c(c));
1019
break;
1020
1021
- case INDEX_op_tci_movi_i32:
1022
- tci_args_ri(&tb_ptr, &r0, &i1);
1023
+ case INDEX_op_tci_movi:
1024
+ tci_args_ri(insn, &r0, &i1);
1025
info->fprintf_func(info->stream, "%-12s %s, 0x%" TCG_PRIlx,
1026
op_name, str_r(r0), i1);
1027
break;
1028
1029
-#if TCG_TARGET_REG_BITS == 64
1030
- case INDEX_op_tci_movi_i64:
1031
- tci_args_rI(&tb_ptr, &r0, &i1);
1032
- info->fprintf_func(info->stream, "%-12s %s, 0x%" TCG_PRIlx,
1033
- op_name, str_r(r0), i1);
1034
+ case INDEX_op_tci_movl:
1035
+ tci_args_rl(insn, tb_ptr, &r0, &ptr);
1036
+ info->fprintf_func(info->stream, "%-12s %s, %p",
1037
+ op_name, str_r(r0), ptr);
1038
break;
1039
-#endif
1040
1041
case INDEX_op_ld8u_i32:
1042
case INDEX_op_ld8u_i64:
1043
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
1044
case INDEX_op_st32_i64:
1045
case INDEX_op_st_i32:
1046
case INDEX_op_st_i64:
1047
- tci_args_rrs(&tb_ptr, &r0, &r1, &s2);
1048
+ tci_args_rrs(insn, &r0, &r1, &s2);
1049
info->fprintf_func(info->stream, "%-12s %s, %s, %d",
1050
op_name, str_r(r0), str_r(r1), s2);
1051
break;
1052
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
1053
case INDEX_op_not_i64:
1054
case INDEX_op_neg_i32:
1055
case INDEX_op_neg_i64:
1056
- tci_args_rr(&tb_ptr, &r0, &r1);
1057
+ tci_args_rr(insn, &r0, &r1);
1058
info->fprintf_func(info->stream, "%-12s %s, %s",
1059
op_name, str_r(r0), str_r(r1));
1060
break;
1061
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
1062
case INDEX_op_rotl_i64:
1063
case INDEX_op_rotr_i32:
1064
case INDEX_op_rotr_i64:
1065
- tci_args_rrr(&tb_ptr, &r0, &r1, &r2);
1066
+ tci_args_rrr(insn, &r0, &r1, &r2);
1067
info->fprintf_func(info->stream, "%-12s %s, %s, %s",
1068
op_name, str_r(r0), str_r(r1), str_r(r2));
1069
break;
1070
1071
case INDEX_op_deposit_i32:
1072
case INDEX_op_deposit_i64:
1073
- tci_args_rrrbb(&tb_ptr, &r0, &r1, &r2, &pos, &len);
1074
+ tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len);
1075
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %d, %d",
1076
op_name, str_r(r0), str_r(r1), str_r(r2), pos, len);
1077
break;
1078
1079
#if TCG_TARGET_REG_BITS == 32
1080
case INDEX_op_setcond2_i32:
1081
- tci_args_rrrrrc(&tb_ptr, &r0, &r1, &r2, &r3, &r4, &c);
1082
+ tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &c);
1083
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s, %s",
1084
op_name, str_r(r0), str_r(r1), str_r(r2),
1085
str_r(r3), str_r(r4), str_c(c));
1086
break;
1087
1088
case INDEX_op_mulu2_i32:
1089
- tci_args_rrrr(&tb_ptr, &r0, &r1, &r2, &r3);
1090
+ tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
1091
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s",
1092
op_name, str_r(r0), str_r(r1),
1093
str_r(r2), str_r(r3));
1094
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
1095
1096
case INDEX_op_add2_i32:
1097
case INDEX_op_sub2_i32:
1098
- tci_args_rrrrrr(&tb_ptr, &r0, &r1, &r2, &r3, &r4, &r5);
1099
+ tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5);
1100
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s, %s",
1101
op_name, str_r(r0), str_r(r1), str_r(r2),
1102
str_r(r3), str_r(r4), str_r(r5));
1103
@@ -XXX,XX +XXX,XX @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
1104
len += DIV_ROUND_UP(TARGET_LONG_BITS, TCG_TARGET_REG_BITS);
1105
switch (len) {
1106
case 2:
1107
- tci_args_rrm(&tb_ptr, &r0, &r1, &oi);
1108
+ tci_args_rrm(insn, &r0, &r1, &oi);
1109
info->fprintf_func(info->stream, "%-12s %s, %s, %x",
1110
op_name, str_r(r0), str_r(r1), oi);
1111
break;
1112
case 3:
1113
- tci_args_rrrm(&tb_ptr, &r0, &r1, &r2, &oi);
1114
+ tci_args_rrrm(insn, &r0, &r1, &r2, &oi);
1115
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %x",
1116
op_name, str_r(r0), str_r(r1), str_r(r2), oi);
1117
break;
1118
case 4:
1119
- tci_args_rrrrm(&tb_ptr, &r0, &r1, &r2, &r3, &oi);
1120
- info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %x",
1121
+ tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
1122
+ info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s",
1123
op_name, str_r(r0), str_r(r1),
1124
- str_r(r2), str_r(r3), oi);
1125
+ str_r(r2), str_r(r3), str_r(r4));
1126
break;
1127
default:
1128
g_assert_not_reached();
1129
}
1130
break;
1131
1132
+ case 0:
1133
+ /* tcg_out_nop_fill uses zeros */
1134
+ if (insn == 0) {
1135
+ info->fprintf_func(info->stream, "align");
1136
+ break;
1137
+ }
1138
+ /* fall through */
1139
+
1140
default:
1141
info->fprintf_func(info->stream, "illegal opcode %d", op);
1142
break;
1143
}
1144
1145
- return length;
1146
+ return sizeof(insn);
1147
}
1148
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
1149
index XXXXXXX..XXXXXXX 100644
1150
--- a/tcg/tci/tcg-target.c.inc
1151
+++ b/tcg/tci/tcg-target.c.inc
1152
@@ -XXX,XX +XXX,XX @@
1153
* THE SOFTWARE.
1154
*/
1155
1156
-/* TODO list:
1157
- * - See TODO comments in code.
1158
- */
1159
-
1160
-/* Marker for missing code. */
1161
-#define TODO() \
1162
- do { \
1163
- fprintf(stderr, "TODO %s:%u: %s()\n", \
1164
- __FILE__, __LINE__, __func__); \
1165
- tcg_abort(); \
1166
- } while (0)
1167
-
1168
-/* Bitfield n...m (in 32 bit value). */
1169
-#define BITS(n, m) (((0xffffffffU << (31 - n)) >> (31 - n + m)) << m)
1170
+#include "../tcg-pool.c.inc"
1171
1172
static TCGConstraintSetIndex tcg_target_op_def(TCGOpcode op)
1173
{
1174
@@ -XXX,XX +XXX,XX @@ static const char *const tcg_target_reg_names[TCG_TARGET_NB_REGS] = {
1175
static bool patch_reloc(tcg_insn_unit *code_ptr, int type,
1176
intptr_t value, intptr_t addend)
1177
{
1178
- /* tcg_out_reloc always uses the same type, addend. */
1179
- tcg_debug_assert(type == sizeof(tcg_target_long));
1180
+ intptr_t diff = value - (intptr_t)(code_ptr + 1);
1181
+
1182
tcg_debug_assert(addend == 0);
1183
- tcg_debug_assert(value != 0);
1184
- if (TCG_TARGET_REG_BITS == 32) {
1185
- tcg_patch32(code_ptr, value);
1186
- } else {
1187
- tcg_patch64(code_ptr, value);
1188
- }
1189
- return true;
1190
-}
1191
-
1192
-/* Write value (native size). */
1193
-static void tcg_out_i(TCGContext *s, tcg_target_ulong v)
1194
-{
1195
- if (TCG_TARGET_REG_BITS == 32) {
1196
- tcg_out32(s, v);
1197
- } else {
1198
- tcg_out64(s, v);
1199
- }
1200
-}
1201
-
1202
-/* Write opcode. */
1203
-static void tcg_out_op_t(TCGContext *s, TCGOpcode op)
1204
-{
1205
- tcg_out8(s, op);
1206
- tcg_out8(s, 0);
1207
-}
1208
-
1209
-/* Write register. */
1210
-static void tcg_out_r(TCGContext *s, TCGArg t0)
1211
-{
1212
- tcg_debug_assert(t0 < TCG_TARGET_NB_REGS);
1213
- tcg_out8(s, t0);
1214
-}
1215
-
1216
-/* Write label. */
1217
-static void tci_out_label(TCGContext *s, TCGLabel *label)
1218
-{
1219
- if (label->has_value) {
1220
- tcg_out_i(s, label->u.value);
1221
- tcg_debug_assert(label->u.value);
1222
- } else {
1223
- tcg_out_reloc(s, s->code_ptr, sizeof(tcg_target_ulong), label, 0);
1224
- s->code_ptr += sizeof(tcg_target_ulong);
1225
+ tcg_debug_assert(type == 20);
1226
+
1227
+ if (diff == sextract32(diff, 0, type)) {
1228
+ tcg_patch32(code_ptr, deposit32(*code_ptr, 32 - type, type, diff));
1229
+ return true;
1230
}
1231
+ return false;
1232
}
1233
1234
static void stack_bounds_check(TCGReg base, target_long offset)
1235
@@ -XXX,XX +XXX,XX @@ static void stack_bounds_check(TCGReg base, target_long offset)
1236
1237
static void tcg_out_op_l(TCGContext *s, TCGOpcode op, TCGLabel *l0)
1238
{
1239
- uint8_t *old_code_ptr = s->code_ptr;
1240
+ tcg_insn_unit insn = 0;
1241
1242
- tcg_out_op_t(s, op);
1243
- tci_out_label(s, l0);
1244
-
1245
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1246
+ tcg_out_reloc(s, s->code_ptr, 20, l0, 0);
1247
+ insn = deposit32(insn, 0, 8, op);
1248
+ tcg_out32(s, insn);
1249
}
1250
1251
static void tcg_out_op_p(TCGContext *s, TCGOpcode op, void *p0)
1252
{
1253
- uint8_t *old_code_ptr = s->code_ptr;
1254
+ tcg_insn_unit insn = 0;
1255
+ intptr_t diff;
1256
1257
- tcg_out_op_t(s, op);
1258
- tcg_out_i(s, (uintptr_t)p0);
1259
-
1260
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1261
+ /* Special case for exit_tb: map null -> 0. */
1262
+ if (p0 == NULL) {
1263
+ diff = 0;
1264
+ } else {
1265
+ diff = p0 - (void *)(s->code_ptr + 1);
1266
+ tcg_debug_assert(diff != 0);
1267
+ if (diff != sextract32(diff, 0, 20)) {
1268
+ tcg_raise_tb_overflow(s);
1269
+ }
1270
+ }
1271
+ insn = deposit32(insn, 0, 8, op);
1272
+ insn = deposit32(insn, 12, 20, diff);
1273
+ tcg_out32(s, insn);
1274
}
1275
1276
static void tcg_out_op_v(TCGContext *s, TCGOpcode op)
1277
{
1278
- uint8_t *old_code_ptr = s->code_ptr;
1279
-
1280
- tcg_out_op_t(s, op);
1281
-
1282
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1283
+ tcg_out32(s, (uint8_t)op);
1284
}
1285
1286
static void tcg_out_op_ri(TCGContext *s, TCGOpcode op, TCGReg r0, int32_t i1)
1287
{
1288
- uint8_t *old_code_ptr = s->code_ptr;
1289
+ tcg_insn_unit insn = 0;
1290
1291
- tcg_out_op_t(s, op);
1292
- tcg_out_r(s, r0);
1293
- tcg_out32(s, i1);
1294
-
1295
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1296
+ tcg_debug_assert(i1 == sextract32(i1, 0, 20));
1297
+ insn = deposit32(insn, 0, 8, op);
1298
+ insn = deposit32(insn, 8, 4, r0);
1299
+ insn = deposit32(insn, 12, 20, i1);
1300
+ tcg_out32(s, insn);
1301
}
1302
1303
-#if TCG_TARGET_REG_BITS == 64
1304
-static void tcg_out_op_rI(TCGContext *s, TCGOpcode op,
1305
- TCGReg r0, uint64_t i1)
1306
-{
1307
- uint8_t *old_code_ptr = s->code_ptr;
1308
-
1309
- tcg_out_op_t(s, op);
1310
- tcg_out_r(s, r0);
1311
- tcg_out64(s, i1);
1312
-
1313
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1314
-}
1315
-#endif
1316
-
1317
static void tcg_out_op_rl(TCGContext *s, TCGOpcode op, TCGReg r0, TCGLabel *l1)
1318
{
1319
-    uint8_t *old_code_ptr = s->code_ptr;
1320
+    tcg_insn_unit insn = 0;
1321
1322
-    tcg_out_op_t(s, op);
1323
-    tcg_out_r(s, r0);
1324
-    tci_out_label(s, l1);
1325
-
1326
-    old_code_ptr[1] = s->code_ptr - old_code_ptr;
1327
+    tcg_out_reloc(s, s->code_ptr, 20, l1, 0);
1328
+    insn = deposit32(insn, 0, 8, op);
1329
+    insn = deposit32(insn, 8, 4, r0);
1330
+    tcg_out32(s, insn);
1331
}
1332
1333
static void tcg_out_op_rr(TCGContext *s, TCGOpcode op, TCGReg r0, TCGReg r1)
1334
{
1335
-    uint8_t *old_code_ptr = s->code_ptr;
1336
+    tcg_insn_unit insn = 0;
1337
1338
-    tcg_out_op_t(s, op);
1339
-    tcg_out_r(s, r0);
1340
-    tcg_out_r(s, r1);
1341
-
1342
-    old_code_ptr[1] = s->code_ptr - old_code_ptr;
1343
+    insn = deposit32(insn, 0, 8, op);
1344
+    insn = deposit32(insn, 8, 4, r0);
1345
+    insn = deposit32(insn, 12, 4, r1);
1346
+    tcg_out32(s, insn);
1347
}
1348
1349
static void tcg_out_op_rrm(TCGContext *s, TCGOpcode op,
1350
TCGReg r0, TCGReg r1, TCGArg m2)
1351
{
1352
- uint8_t *old_code_ptr = s->code_ptr;
1353
+ tcg_insn_unit insn = 0;
1354
1355
- tcg_out_op_t(s, op);
1356
- tcg_out_r(s, r0);
1357
- tcg_out_r(s, r1);
1358
- tcg_out32(s, m2);
1359
-
1360
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1361
+ tcg_debug_assert(m2 == extract32(m2, 0, 12));
1362
+ insn = deposit32(insn, 0, 8, op);
1363
+ insn = deposit32(insn, 8, 4, r0);
1364
+ insn = deposit32(insn, 12, 4, r1);
1365
+ insn = deposit32(insn, 20, 12, m2);
1366
+ tcg_out32(s, insn);
1367
}
1368
1369
static void tcg_out_op_rrr(TCGContext *s, TCGOpcode op,
1370
TCGReg r0, TCGReg r1, TCGReg r2)
1371
{
1372
- uint8_t *old_code_ptr = s->code_ptr;
1373
+ tcg_insn_unit insn = 0;
1374
1375
- tcg_out_op_t(s, op);
1376
- tcg_out_r(s, r0);
1377
- tcg_out_r(s, r1);
1378
- tcg_out_r(s, r2);
1379
-
1380
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1381
+ insn = deposit32(insn, 0, 8, op);
1382
+ insn = deposit32(insn, 8, 4, r0);
1383
+ insn = deposit32(insn, 12, 4, r1);
1384
+ insn = deposit32(insn, 16, 4, r2);
1385
+ tcg_out32(s, insn);
1386
}
1387
1388
static void tcg_out_op_rrs(TCGContext *s, TCGOpcode op,
1389
TCGReg r0, TCGReg r1, intptr_t i2)
1390
{
1391
- uint8_t *old_code_ptr = s->code_ptr;
1392
+ tcg_insn_unit insn = 0;
1393
1394
- tcg_out_op_t(s, op);
1395
- tcg_out_r(s, r0);
1396
- tcg_out_r(s, r1);
1397
- tcg_debug_assert(i2 == (int32_t)i2);
1398
- tcg_out32(s, i2);
1399
-
1400
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1401
+ tcg_debug_assert(i2 == sextract32(i2, 0, 16));
1402
+ insn = deposit32(insn, 0, 8, op);
1403
+ insn = deposit32(insn, 8, 4, r0);
1404
+ insn = deposit32(insn, 12, 4, r1);
1405
+ insn = deposit32(insn, 16, 16, i2);
1406
+ tcg_out32(s, insn);
1407
}
1408
1409
static void tcg_out_op_rrrc(TCGContext *s, TCGOpcode op,
1410
TCGReg r0, TCGReg r1, TCGReg r2, TCGCond c3)
1411
{
1412
- uint8_t *old_code_ptr = s->code_ptr;
1413
+ tcg_insn_unit insn = 0;
1414
1415
- tcg_out_op_t(s, op);
1416
- tcg_out_r(s, r0);
1417
- tcg_out_r(s, r1);
1418
- tcg_out_r(s, r2);
1419
- tcg_out8(s, c3);
1420
-
1421
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1422
+ insn = deposit32(insn, 0, 8, op);
1423
+ insn = deposit32(insn, 8, 4, r0);
1424
+ insn = deposit32(insn, 12, 4, r1);
1425
+ insn = deposit32(insn, 16, 4, r2);
1426
+ insn = deposit32(insn, 20, 4, c3);
1427
+ tcg_out32(s, insn);
1428
}
1429
1430
static void tcg_out_op_rrrm(TCGContext *s, TCGOpcode op,
1431
TCGReg r0, TCGReg r1, TCGReg r2, TCGArg m3)
1432
{
1433
- uint8_t *old_code_ptr = s->code_ptr;
1434
+ tcg_insn_unit insn = 0;
1435
1436
- tcg_out_op_t(s, op);
1437
- tcg_out_r(s, r0);
1438
- tcg_out_r(s, r1);
1439
- tcg_out_r(s, r2);
1440
- tcg_out32(s, m3);
1441
-
1442
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1443
+ tcg_debug_assert(m3 == extract32(m3, 0, 12));
1444
+ insn = deposit32(insn, 0, 8, op);
1445
+ insn = deposit32(insn, 8, 4, r0);
1446
+ insn = deposit32(insn, 12, 4, r1);
1447
+ insn = deposit32(insn, 16, 4, r2);
1448
+ insn = deposit32(insn, 20, 12, m3);
1449
+ tcg_out32(s, insn);
1450
}
1451
1452
static void tcg_out_op_rrrbb(TCGContext *s, TCGOpcode op, TCGReg r0,
1453
TCGReg r1, TCGReg r2, uint8_t b3, uint8_t b4)
1454
{
1455
- uint8_t *old_code_ptr = s->code_ptr;
1456
+ tcg_insn_unit insn = 0;
1457
1458
- tcg_out_op_t(s, op);
1459
- tcg_out_r(s, r0);
1460
- tcg_out_r(s, r1);
1461
- tcg_out_r(s, r2);
1462
- tcg_out8(s, b3);
1463
- tcg_out8(s, b4);
1464
-
1465
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1466
+ tcg_debug_assert(b3 == extract32(b3, 0, 6));
1467
+ tcg_debug_assert(b4 == extract32(b4, 0, 6));
1468
+ insn = deposit32(insn, 0, 8, op);
1469
+ insn = deposit32(insn, 8, 4, r0);
1470
+ insn = deposit32(insn, 12, 4, r1);
1471
+ insn = deposit32(insn, 16, 4, r2);
1472
+ insn = deposit32(insn, 20, 6, b3);
1473
+ insn = deposit32(insn, 26, 6, b4);
1474
+ tcg_out32(s, insn);
1475
}
1476
1477
-static void tcg_out_op_rrrrm(TCGContext *s, TCGOpcode op, TCGReg r0,
1478
- TCGReg r1, TCGReg r2, TCGReg r3, TCGArg m4)
1479
+static void tcg_out_op_rrrrr(TCGContext *s, TCGOpcode op, TCGReg r0,
1480
+ TCGReg r1, TCGReg r2, TCGReg r3, TCGReg r4)
1481
{
1482
- uint8_t *old_code_ptr = s->code_ptr;
1483
+ tcg_insn_unit insn = 0;
1484
1485
- tcg_out_op_t(s, op);
1486
- tcg_out_r(s, r0);
1487
- tcg_out_r(s, r1);
1488
- tcg_out_r(s, r2);
1489
- tcg_out_r(s, r3);
1490
- tcg_out32(s, m4);
1491
-
1492
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1493
+ insn = deposit32(insn, 0, 8, op);
1494
+ insn = deposit32(insn, 8, 4, r0);
1495
+ insn = deposit32(insn, 12, 4, r1);
1496
+ insn = deposit32(insn, 16, 4, r2);
1497
+ insn = deposit32(insn, 20, 4, r3);
1498
+ insn = deposit32(insn, 24, 4, r4);
1499
+ tcg_out32(s, insn);
1500
}
1501
1502
#if TCG_TARGET_REG_BITS == 32
1503
static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op,
1504
TCGReg r0, TCGReg r1, TCGReg r2, TCGReg r3)
1505
{
1506
- uint8_t *old_code_ptr = s->code_ptr;
1507
+ tcg_insn_unit insn = 0;
1508
1509
- tcg_out_op_t(s, op);
1510
- tcg_out_r(s, r0);
1511
- tcg_out_r(s, r1);
1512
- tcg_out_r(s, r2);
1513
- tcg_out_r(s, r3);
1514
-
1515
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1516
+ insn = deposit32(insn, 0, 8, op);
1517
+ insn = deposit32(insn, 8, 4, r0);
1518
+ insn = deposit32(insn, 12, 4, r1);
1519
+ insn = deposit32(insn, 16, 4, r2);
1520
+ insn = deposit32(insn, 20, 4, r3);
1521
+ tcg_out32(s, insn);
1522
}
1523
1524
static void tcg_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
1525
TCGReg r0, TCGReg r1, TCGReg r2,
1526
TCGReg r3, TCGReg r4, TCGCond c5)
1527
{
1528
- uint8_t *old_code_ptr = s->code_ptr;
1529
+ tcg_insn_unit insn = 0;
1530
1531
- tcg_out_op_t(s, op);
1532
- tcg_out_r(s, r0);
1533
- tcg_out_r(s, r1);
1534
- tcg_out_r(s, r2);
1535
- tcg_out_r(s, r3);
1536
- tcg_out_r(s, r4);
1537
- tcg_out8(s, c5);
1538
-
1539
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1540
+ insn = deposit32(insn, 0, 8, op);
1541
+ insn = deposit32(insn, 8, 4, r0);
1542
+ insn = deposit32(insn, 12, 4, r1);
1543
+ insn = deposit32(insn, 16, 4, r2);
1544
+ insn = deposit32(insn, 20, 4, r3);
1545
+ insn = deposit32(insn, 24, 4, r4);
1546
+ insn = deposit32(insn, 28, 4, c5);
1547
+ tcg_out32(s, insn);
1548
}
1549
1550
static void tcg_out_op_rrrrrr(TCGContext *s, TCGOpcode op,
1551
TCGReg r0, TCGReg r1, TCGReg r2,
1552
TCGReg r3, TCGReg r4, TCGReg r5)
1553
{
1554
- uint8_t *old_code_ptr = s->code_ptr;
1555
+ tcg_insn_unit insn = 0;
1556
1557
- tcg_out_op_t(s, op);
1558
- tcg_out_r(s, r0);
1559
- tcg_out_r(s, r1);
1560
- tcg_out_r(s, r2);
1561
- tcg_out_r(s, r3);
1562
- tcg_out_r(s, r4);
1563
- tcg_out_r(s, r5);
1564
-
1565
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1566
+ insn = deposit32(insn, 0, 8, op);
1567
+ insn = deposit32(insn, 8, 4, r0);
1568
+ insn = deposit32(insn, 12, 4, r1);
1569
+ insn = deposit32(insn, 16, 4, r2);
1570
+ insn = deposit32(insn, 20, 4, r3);
1571
+ insn = deposit32(insn, 24, 4, r4);
1572
+ insn = deposit32(insn, 28, 4, r5);
1573
+ tcg_out32(s, insn);
1574
}
1575
#endif
1576
1577
+static void tcg_out_ldst(TCGContext *s, TCGOpcode op, TCGReg val,
1578
+ TCGReg base, intptr_t offset)
1579
+{
1580
+ stack_bounds_check(base, offset);
1581
+ if (offset != sextract32(offset, 0, 16)) {
1582
+ tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, offset);
1583
+ tcg_out_op_rrr(s, (TCG_TARGET_REG_BITS == 32
1584
+ ? INDEX_op_add_i32 : INDEX_op_add_i64),
1585
+ TCG_REG_TMP, TCG_REG_TMP, base);
1586
+ base = TCG_REG_TMP;
1587
+ offset = 0;
1588
+ }
1589
+ tcg_out_op_rrs(s, op, val, base, offset);
1590
+}
1591
+
1592
static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg val, TCGReg base,
1593
intptr_t offset)
1594
{
1595
- stack_bounds_check(base, offset);
1596
switch (type) {
1597
case TCG_TYPE_I32:
1598
- tcg_out_op_rrs(s, INDEX_op_ld_i32, val, base, offset);
1599
+ tcg_out_ldst(s, INDEX_op_ld_i32, val, base, offset);
1600
break;
1601
#if TCG_TARGET_REG_BITS == 64
1602
case TCG_TYPE_I64:
1603
- tcg_out_op_rrs(s, INDEX_op_ld_i64, val, base, offset);
1604
+ tcg_out_ldst(s, INDEX_op_ld_i64, val, base, offset);
1605
break;
1606
#endif
1607
default:
1608
@@ -XXX,XX +XXX,XX @@ static void tcg_out_movi(TCGContext *s, TCGType type,
1609
{
1610
switch (type) {
1611
case TCG_TYPE_I32:
1612
- tcg_out_op_ri(s, INDEX_op_tci_movi_i32, ret, arg);
1613
- break;
1614
#if TCG_TARGET_REG_BITS == 64
1615
+ arg = (int32_t)arg;
1616
+ /* fall through */
1617
case TCG_TYPE_I64:
1618
- tcg_out_op_rI(s, INDEX_op_tci_movi_i64, ret, arg);
1619
- break;
1620
#endif
1621
+ break;
1622
default:
1623
g_assert_not_reached();
1624
}
1625
+
1626
+    if (arg == sextract32(arg, 0, 20)) {
1627
+        tcg_out_op_ri(s, INDEX_op_tci_movi, ret, arg);
1628
+    } else {
1629
+        tcg_insn_unit insn = 0;
1630
+
1631
+        new_pool_label(s, arg, 20, s->code_ptr, 0);
1632
+        insn = deposit32(insn, 0, 8, INDEX_op_tci_movl);
1633
+        insn = deposit32(insn, 8, 4, ret);
1634
+        tcg_out32(s, insn);
1635
+    }
1636
}
1637
1638
static void tcg_out_call(TCGContext *s, const tcg_insn_unit *func,
1639
ffi_cif *cif)
1640
{
1641
- uint8_t *old_code_ptr = s->code_ptr;
1642
+ tcg_insn_unit insn = 0;
1643
uint8_t which;
1644
1645
if (cif->rtype == &ffi_type_void) {
1646
@@ -XXX,XX +XXX,XX @@ static void tcg_out_call(TCGContext *s, const tcg_insn_unit *func,
1647
tcg_debug_assert(cif->rtype->size == 8);
1648
which = 2;
1649
}
134
}
1650
- tcg_out_op_t(s, INDEX_op_call);
135
-#endif
1651
- tcg_out8(s, which);
136
1652
- tcg_out_i(s, (uintptr_t)func);
137
close(fd);
1653
- tcg_out_i(s, (uintptr_t)cif);
138
region.start_aligned = buf_rw;
1654
-
1655
- old_code_ptr[1] = s->code_ptr - old_code_ptr;
1656
+ new_pool_l2(s, 20, s->code_ptr, 0, (uintptr_t)func, (uintptr_t)cif);
1657
+ insn = deposit32(insn, 0, 8, INDEX_op_call);
1658
+ insn = deposit32(insn, 8, 4, which);
1659
+ tcg_out32(s, insn);
1660
}
1661
1662
#if TCG_TARGET_REG_BITS == 64
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_st_i32:
     CASE_64(st32)
     CASE_64(st)
-        stack_bounds_check(args[1], args[2]);
-        tcg_out_op_rrs(s, opc, args[0], args[1], args[2]);
+        tcg_out_ldst(s, opc, args[0], args[1], args[2]);
         break;
 
     CASE_32_64(add)
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     } else if (TARGET_LONG_BITS <= TCG_TARGET_REG_BITS) {
         tcg_out_op_rrrm(s, opc, args[0], args[1], args[2], args[3]);
     } else {
-        tcg_out_op_rrrrm(s, opc, args[0], args[1],
-                         args[2], args[3], args[4]);
+        tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[4]);
+        tcg_out_op_rrrrr(s, opc, args[0], args[1],
+                         args[2], args[3], TCG_REG_TMP);
     }
     break;
 
@@ -XXX,XX +XXX,XX @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct)
     return ct & TCG_CT_CONST;
 }
 
+static void tcg_out_nop_fill(tcg_insn_unit *p, int count)
+{
+    memset(p, 0, sizeof(*p) * count);
+}
+
 static void tcg_target_init(TCGContext *s)
 {
 #if defined(CONFIG_DEBUG_TCG_INTERPRETER)
diff --git a/tcg/tci/README b/tcg/tci/README
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci/README
+++ b/tcg/tci/README
@@ -XXX,XX +XXX,XX @@ This is what TCI (Tiny Code Interpreter) does.
 Like each TCG host frontend, TCI implements the code generator in
 tcg-target.c.inc, tcg-target.h. Both files are in directory tcg/tci.
 
-The additional file tcg/tci.c adds the interpreter.
+The additional file tcg/tci.c adds the interpreter and disassembler.
 
-The bytecode consists of opcodes (same numeric values as those used by
-TCG), command length and arguments of variable size and number.
+The bytecode consists of opcodes (with only a few exceptions, with
+the same numeric values and semantics as used by TCG), and up
+to six arguments packed into a 32-bit integer. See comments in tci.c
+for details on the encoding.
 
 3) Usage
 
@@ -XXX,XX +XXX,XX @@ suggest using this option. Setting it automatically would need
 additional code in configure which must be fixed when new native TCG
 implementations are added.
 
-System emulation should work on any 32 or 64 bit host.
-User mode emulation might work. Maybe a new linker script (*.ld)
-is needed. Byte order might be wrong (on big endian hosts)
-and need fixes in configure.
-
 For hosts with native TCG, the interpreter TCI can be enabled by
 
         configure --enable-tcg-interpreter
 
@@ -XXX,XX +XXX,XX @@ u1 = linux-user-test works
 in the interpreter. These opcodes raise a runtime exception, so it is
 possible to see where code must be added.
 
-* The pseudo code is not optimized and still ugly. For hosts with special
-  alignment requirements, it needs some fixes (maybe aligned bytecode
-  would also improve speed for hosts which support byte alignment).
-
-* A better disassembler for the pseudo code would be nice (a very primitive
-  disassembler is included in tcg-target.c.inc).
-
 * It might be useful to have a runtime option which selects the native TCG
   or TCI, so QEMU would have to include two TCGs. Today, selecting TCI
   is a configure option, so you need two compilations of QEMU.
--
2.25.1