TCG patch queue, plus one target/sh4 patch that
Yoshinori Sato asked me to process.

r~

The following changes since commit efbf38d73e5dcc4d5f8b98c6e7a12be1f3b91745:

  Merge tag 'for-upstream' of git://repo.or.cz/qemu/kevin into staging (2022-10-03 15:06:07 -0400)

are available in the Git repository at:

  https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20221004

for you to fetch changes up to ab419fd8a035a65942de4e63effcd55ccbf1a9fe:

  target/sh4: Fix TB_FLAG_UNALIGN (2022-10-04 12:33:05 -0700)

----------------------------------------------------------------
Cache CPUClass for use in hot code paths.
Add CPUTLBEntryFull, probe_access_full, tlb_set_page_full.
Add generic support for TARGET_TB_PCREL.
tcg/ppc: Optimize 26-bit jumps using STQ for POWER 2.07
target/sh4: Fix TB_FLAG_UNALIGN

----------------------------------------------------------------
Alex Bennée (3):
      cpu: cache CPUClass in CPUState for hot code paths
      hw/core/cpu-sysemu: used cached class in cpu_asidx_from_attrs
      cputlb: used cached CPUClass in our hot-paths

Leandro Lupori (1):
      tcg/ppc: Optimize 26-bit jumps

Richard Henderson (16):
      accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull
      accel/tcg: Drop addr member from SavedIOTLB
      accel/tcg: Suppress auto-invalidate in probe_access_internal
      accel/tcg: Introduce probe_access_full
      accel/tcg: Introduce tlb_set_page_full
      include/exec: Introduce TARGET_PAGE_ENTRY_EXTRA
      accel/tcg: Remove PageDesc code_bitmap
      accel/tcg: Use bool for page_find_alloc
      accel/tcg: Use DisasContextBase in plugin_gen_tb_start
      accel/tcg: Do not align tb->page_addr[0]
      accel/tcg: Inline tb_flush_jmp_cache
      include/hw/core: Create struct CPUJumpCache
      hw/core: Add CPUClass.get_pc
      accel/tcg: Introduce tb_pc and log_pc
      accel/tcg: Introduce TARGET_TB_PCREL
      target/sh4: Fix TB_FLAG_UNALIGN

 accel/tcg/internal.h                    |  10 ++
 accel/tcg/tb-hash.h                     |   1 +
 accel/tcg/tb-jmp-cache.h                |  65 ++++++++
 include/exec/cpu-common.h               |   1 +
 include/exec/cpu-defs.h                 |  48 ++++--
 include/exec/exec-all.h                 |  75 ++++++++-
 include/exec/plugin-gen.h               |   7 +-
 include/hw/core/cpu.h                   |  28 ++--
 include/qemu/typedefs.h                 |   2 +
 include/tcg/tcg.h                       |   2 +-
 target/sh4/cpu.h                        |  56 ++++---
 accel/stubs/tcg-stub.c                  |   4 +
 accel/tcg/cpu-exec.c                    |  80 +++++-----
 accel/tcg/cputlb.c                      | 259 ++++++++++++++++++--------------
 accel/tcg/plugin-gen.c                  |  22 +--
 accel/tcg/translate-all.c               | 214 ++++++++++++--------------
 accel/tcg/translator.c                  |   2 +-
 cpu.c                                   |   9 +-
 hw/core/cpu-common.c                    |   3 +-
 hw/core/cpu-sysemu.c                    |   5 +-
 linux-user/sh4/signal.c                 |   6 +-
 plugins/core.c                          |   2 +-
 target/alpha/cpu.c                      |   9 ++
 target/arm/cpu.c                        |  17 ++-
 target/arm/mte_helper.c                 |  14 +-
 target/arm/sve_helper.c                 |   4 +-
 target/arm/translate-a64.c              |   2 +-
 target/avr/cpu.c                        |  10 +-
 target/cris/cpu.c                       |   8 +
 target/hexagon/cpu.c                    |  10 +-
 target/hppa/cpu.c                       |  12 +-
 target/i386/cpu.c                       |   9 ++
 target/i386/tcg/tcg-cpu.c               |   2 +-
 target/loongarch/cpu.c                  |  11 +-
 target/m68k/cpu.c                       |   8 +
 target/microblaze/cpu.c                 |  10 +-
 target/mips/cpu.c                       |   8 +
 target/mips/tcg/exception.c             |   2 +-
 target/mips/tcg/sysemu/special_helper.c |   2 +-
 target/nios2/cpu.c                      |   9 ++
 target/openrisc/cpu.c                   |  10 +-
 target/ppc/cpu_init.c                   |   8 +
 target/riscv/cpu.c                      |  17 ++-
 target/rx/cpu.c                         |  10 +-
 target/s390x/cpu.c                      |   8 +
 target/s390x/tcg/mem_helper.c           |   4 -
 target/sh4/cpu.c                        |  18 ++-
 target/sh4/helper.c                     |   6 +-
 target/sh4/translate.c                  |  90 +++++------
 target/sparc/cpu.c                      |  10 +-
 target/tricore/cpu.c                    |  11 +-
 target/xtensa/cpu.c                     |   8 +
 tcg/tcg.c                               |   8 +-
 trace/control-target.c                  |   2 +-
 tcg/ppc/tcg-target.c.inc                | 119 +++++++++++----
 55 files changed, 915 insertions(+), 462 deletions(-)
 create mode 100644 accel/tcg/tb-jmp-cache.h
From: Alex Bennée <alex.bennee@linaro.org>

The class cast checkers are quite expensive and always on (unlike the
dynamic case whose checks are gated by CONFIG_QOM_CAST_DEBUG). To
avoid the overhead of repeatedly checking something which should never
change we cache the CPUClass reference for use in the hot code paths.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220811151413.3350684-3-alex.bennee@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220923084803.498337-3-clg@kaod.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/hw/core/cpu.h | 9 +++++++++
 cpu.c                 | 9 ++++-----
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef int (*WriteCoreDumpFunction)(const void *buf, size_t size,
  */
 #define CPU(obj) ((CPUState *)(obj))
 
+/*
+ * The class checkers bring in CPU_GET_CLASS() which is potentially
+ * expensive given the eventual call to
+ * object_class_dynamic_cast_assert(). Because of this the CPUState
+ * has a cached value for the class in cs->cc which is set up in
+ * cpu_exec_realizefn() for use in hot code paths.
+ */
 typedef struct CPUClass CPUClass;
 DECLARE_CLASS_CHECKERS(CPUClass, CPU,
                        TYPE_CPU)
@@ -XXX,XX +XXX,XX @@ struct qemu_work_item;
 struct CPUState {
     /*< private >*/
     DeviceState parent_obj;
+    /* cache to avoid expensive CPU_GET_CLASS */
+    CPUClass *cc;
     /*< public >*/
 
     int nr_cores;
diff --git a/cpu.c b/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/cpu.c
+++ b/cpu.c
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_cpu_common = {
 
 void cpu_exec_realizefn(CPUState *cpu, Error **errp)
 {
-#ifndef CONFIG_USER_ONLY
-    CPUClass *cc = CPU_GET_CLASS(cpu);
-#endif
+    /* cache the cpu class for the hotpath */
+    cpu->cc = CPU_GET_CLASS(cpu);
 
     cpu_list_add(cpu);
     if (!accel_cpu_realizefn(cpu, errp)) {
@@ -XXX,XX +XXX,XX @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
     if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
         vmstate_register(NULL, cpu->cpu_index, &vmstate_cpu_common, cpu);
     }
-    if (cc->sysemu_ops->legacy_vmsd != NULL) {
-        vmstate_register(NULL, cpu->cpu_index, cc->sysemu_ops->legacy_vmsd, cpu);
+    if (cpu->cc->sysemu_ops->legacy_vmsd != NULL) {
+        vmstate_register(NULL, cpu->cpu_index, cpu->cc->sysemu_ops->legacy_vmsd, cpu);
     }
 #endif /* CONFIG_USER_ONLY */
 }
-- 
2.34.1
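The core idea of the patch above — pay for the checked cast once at
realize time, then reuse the cached pointer on every hot-path call —
can be shown as a minimal standalone C sketch. Everything here is a
stand-in: checked_cast() models object_class_dynamic_cast_assert(),
and only the cpu->cc / CPU_GET_CLASS idea corresponds to the QEMU
names in the patch.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-ins for QOM types; not the real QEMU definitions. */
    typedef struct CPUClass {
        const char *typename;
    } CPUClass;

    typedef struct CPUState {
        CPUClass *klass;   /* what CPU_GET_CLASS() would look up */
        CPUClass *cc;      /* cached at realize time, used on hot paths */
    } CPUState;

    /* Models object_class_dynamic_cast_assert(): validate every call. */
    static CPUClass *checked_cast(CPUState *cpu)
    {
        assert(cpu->klass != NULL);
        assert(strcmp(cpu->klass->typename, "cpu") == 0);
        return cpu->klass;
    }

    /* Models cpu_exec_realizefn(): pay the check once, cache the result. */
    static void realize(CPUState *cpu)
    {
        cpu->cc = checked_cast(cpu);
    }

    /* Hot path: no repeated validation, just a pointer load. */
    static const char *hot_path(CPUState *cpu)
    {
        return cpu->cc->typename;
    }

    int main(void)
    {
        CPUClass cls = { .typename = "cpu" };
        CPUState cpu = { .klass = &cls };

        realize(&cpu);
        printf("%s\n", hot_path(&cpu));
        return 0;
    }

The trade-off is the usual one for caching: the pointer must be set
before any hot-path user runs (here, in realize), and nothing may
change the class afterwards — which, as the commit message notes,
should never happen for a realized CPU.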
From: Alex Bennée <alex.bennee@linaro.org>

This is a heavily used function so let's avoid the cost of
CPU_GET_CLASS. On the romulus-bmc run it has a modest effect:

  Before: 36.812 s ± 0.506 s
  After:  35.912 s ± 0.168 s

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220811151413.3350684-4-alex.bennee@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220923084803.498337-4-clg@kaod.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 hw/core/cpu-sysemu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/hw/core/cpu-sysemu.c b/hw/core/cpu-sysemu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/cpu-sysemu.c
+++ b/hw/core/cpu-sysemu.c
@@ -XXX,XX +XXX,XX @@ hwaddr cpu_get_phys_page_debug(CPUState *cpu, vaddr addr)
 
 int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
 {
-    CPUClass *cc = CPU_GET_CLASS(cpu);
     int ret = 0;
 
-    if (cc->sysemu_ops->asidx_from_attrs) {
-        ret = cc->sysemu_ops->asidx_from_attrs(cpu, attrs);
+    if (cpu->cc->sysemu_ops->asidx_from_attrs) {
+        ret = cpu->cc->sysemu_ops->asidx_from_attrs(cpu, attrs);
         assert(ret < cpu->num_ases && ret >= 0);
     }
     return ret;
-- 
2.34.1
From: Alex Bennée <alex.bennee@linaro.org>

  Before: 35.912 s ± 0.168 s
  After:  35.565 s ± 0.087 s

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220811151413.3350684-5-alex.bennee@linaro.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220923084803.498337-5-clg@kaod.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
 static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
                      MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
 {
-    CPUClass *cc = CPU_GET_CLASS(cpu);
     bool ok;
 
     /*
      * This is not a probe, so only valid return is success; failure
      * should result in exception + longjmp to the cpu loop.
      */
-    ok = cc->tcg_ops->tlb_fill(cpu, addr, size,
-                               access_type, mmu_idx, false, retaddr);
+    ok = cpu->cc->tcg_ops->tlb_fill(cpu, addr, size,
+                                    access_type, mmu_idx, false, retaddr);
     assert(ok);
 }
 
@@ -XXX,XX +XXX,XX @@ static inline void cpu_unaligned_access(CPUState *cpu, vaddr addr,
                                         MMUAccessType access_type,
                                         int mmu_idx, uintptr_t retaddr)
 {
-    CPUClass *cc = CPU_GET_CLASS(cpu);
-
-    cc->tcg_ops->do_unaligned_access(cpu, addr, access_type, mmu_idx, retaddr);
+    cpu->cc->tcg_ops->do_unaligned_access(cpu, addr, access_type,
+                                          mmu_idx, retaddr);
 }
 
 static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     if (!tlb_hit_page(tlb_addr, page_addr)) {
         if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page_addr)) {
             CPUState *cs = env_cpu(env);
-            CPUClass *cc = CPU_GET_CLASS(cs);
 
-            if (!cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type,
-                                       mmu_idx, nonfault, retaddr)) {
+            if (!cs->cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type,
+                                           mmu_idx, nonfault, retaddr)) {
                 /* Non-faulting page table read failed. */
                 *phost = NULL;
                 return TLB_INVALID_MASK;
-- 
2.34.1
This structure will shortly contain more than just
data for accessing MMIO. Rename the 'addr' member
to 'xlat_section' to more clearly indicate its purpose.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-defs.h    |  22 ++++----
 accel/tcg/cputlb.c         | 102 +++++++++++++++++++------------------
 target/arm/mte_helper.c    |  14 ++---
 target/arm/sve_helper.c    |   4 +-
 target/arm/translate-a64.c |   2 +-
 5 files changed, 73 insertions(+), 71 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@ typedef uint64_t target_ulong;
 # endif
 # endif
 
+/* Minimalized TLB entry for use by TCG fast path. */
 typedef struct CPUTLBEntry {
     /* bit TARGET_LONG_BITS to TARGET_PAGE_BITS : virtual address
        bit TARGET_PAGE_BITS-1..4  : Nonzero for accesses that should not
@@ -XXX,XX +XXX,XX @@ typedef struct CPUTLBEntry {
 
 QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
 
-/* The IOTLB is not accessed directly inline by generated TCG code,
- * so the CPUIOTLBEntry layout is not as critical as that of the
- * CPUTLBEntry. (This is also why we don't want to combine the two
- * structs into one.)
+/*
+ * The full TLB entry, which is not accessed by generated TCG code,
+ * so the layout is not as critical as that of CPUTLBEntry. This is
+ * also why we don't want to combine the two structs.
  */
-typedef struct CPUIOTLBEntry {
+typedef struct CPUTLBEntryFull {
     /*
-     * @addr contains:
+     * @xlat_section contains:
      *  - in the lower TARGET_PAGE_BITS, a physical section number
      *  - with the lower TARGET_PAGE_BITS masked off, an offset which
      *    must be added to the virtual address to obtain:
@@ -XXX,XX +XXX,XX @@ typedef struct CPUIOTLBEntry {
      *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
      *     + the offset within the target MemoryRegion (otherwise)
      */
-    hwaddr addr;
+    hwaddr xlat_section;
     MemTxAttrs attrs;
-} CPUIOTLBEntry;
+} CPUTLBEntryFull;
 
 /*
  * Data elements that are per MMU mode, minus the bits accessed by
@@ -XXX,XX +XXX,XX @@ typedef struct CPUTLBDesc {
     size_t vindex;
     /* The tlb victim table, in two parts. */
     CPUTLBEntry vtable[CPU_VTLB_SIZE];
-    CPUIOTLBEntry viotlb[CPU_VTLB_SIZE];
-    /* The iotlb. */
-    CPUIOTLBEntry *iotlb;
+    CPUTLBEntryFull vfulltlb[CPU_VTLB_SIZE];
+    CPUTLBEntryFull *fulltlb;
 } CPUTLBDesc;
 
 /*
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
     }
 
     g_free(fast->table);
-    g_free(desc->iotlb);
+    g_free(desc->fulltlb);
 
     tlb_window_reset(desc, now, 0);
     /* desc->n_used_entries is cleared by the caller */
     fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS;
     fast->table = g_try_new(CPUTLBEntry, new_size);
-    desc->iotlb = g_try_new(CPUIOTLBEntry, new_size);
+    desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size);
 
     /*
      * If the allocations fail, try smaller sizes. We just freed some
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
      * allocations to fail though, so we progressively reduce the allocation
      * size, aborting if we cannot even allocate the smallest TLB we support.
      */
-    while (fast->table == NULL || desc->iotlb == NULL) {
+    while (fast->table == NULL || desc->fulltlb == NULL) {
         if (new_size == (1 << CPU_TLB_DYN_MIN_BITS)) {
             error_report("%s: %s", __func__, strerror(errno));
             abort();
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
         fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS;
 
         g_free(fast->table);
-        g_free(desc->iotlb);
+        g_free(desc->fulltlb);
         fast->table = g_try_new(CPUTLBEntry, new_size);
-        desc->iotlb = g_try_new(CPUIOTLBEntry, new_size);
+        desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size);
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now)
     desc->n_used_entries = 0;
     fast->mask = (n_entries - 1) << CPU_TLB_ENTRY_BITS;
     fast->table = g_new(CPUTLBEntry, n_entries);
-    desc->iotlb = g_new(CPUIOTLBEntry, n_entries);
+    desc->fulltlb = g_new(CPUTLBEntryFull, n_entries);
     tlb_mmu_flush_locked(desc, fast);
 }
 
@@ -XXX,XX +XXX,XX @@ void tlb_destroy(CPUState *cpu)
         CPUTLBDescFast *fast = &env_tlb(env)->f[i];
 
         g_free(fast->table);
-        g_free(desc->iotlb);
+        g_free(desc->fulltlb);
     }
 }
 
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 
         /* Evict the old entry into the victim tlb. */
         copy_tlb_helper_locked(tv, te);
-        desc->viotlb[vidx] = desc->iotlb[index];
+        desc->vfulltlb[vidx] = desc->fulltlb[index];
         tlb_n_used_entries_dec(env, mmu_idx);
     }
 
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
      * subtract here is that of the page base, and not the same as the
      * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
      */
-    desc->iotlb[index].addr = iotlb - vaddr_page;
-    desc->iotlb[index].attrs = attrs;
+    desc->fulltlb[index].xlat_section = iotlb - vaddr_page;
+    desc->fulltlb[index].attrs = attrs;
 
     /* Now calculate the new entry */
     tn.addend = addend - vaddr_page;
@@ -XXX,XX +XXX,XX @@ static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
     }
 }
 
-static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
+static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
                          MMUAccessType access_type, MemOp op)
 {
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, full->xlat_section, full->attrs);
     mr = section->mr;
-    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (full->xlat_section & TARGET_PAGE_MASK) + addr;
     cpu->mem_io_pc = retaddr;
     if (!cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, full->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;
 
         cpu_transaction_failed(cpu, physaddr, addr, memop_size(op), access_type,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+                               mmu_idx, full->attrs, r, retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
 }
 
 /*
- * Save a potentially trashed IOTLB entry for later lookup by plugin.
- * This is read by tlb_plugin_lookup if the iotlb entry doesn't match
+ * Save a potentially trashed CPUTLBEntryFull for later lookup by plugin.
+ * This is read by tlb_plugin_lookup if the fulltlb entry doesn't match
  * because of the side effect of io_writex changing memory layout.
  */
 static void save_iotlb_data(CPUState *cs, hwaddr addr,
@@ -XXX,XX +XXX,XX @@ static void save_iotlb_data(CPUState *cs, hwaddr addr,
 #endif
 }
 
-static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
+static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
                       int mmu_idx, uint64_t val, target_ulong addr,
                       uintptr_t retaddr, MemOp op)
 {
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, full->xlat_section, full->attrs);
     mr = section->mr;
-    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (full->xlat_section & TARGET_PAGE_MASK) + addr;
     if (!cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
      * The memory_region_dispatch may trigger a flush/resize
      * so for plugins we save the iotlb_data just in case.
      */
-    save_iotlb_data(cpu, iotlbentry->addr, section, mr_offset);
+    save_iotlb_data(cpu, full->xlat_section, section, mr_offset);
 
     if (!qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, full->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;
 
         cpu_transaction_failed(cpu, physaddr, addr, memop_size(op),
-                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               MMU_DATA_STORE, mmu_idx, full->attrs, r,
                                retaddr);
     }
     if (locked) {
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
             copy_tlb_helper_locked(vtlb, &tmptlb);
             qemu_spin_unlock(&env_tlb(env)->c.lock);
 
-            CPUIOTLBEntry tmpio, *io = &env_tlb(env)->d[mmu_idx].iotlb[index];
-            CPUIOTLBEntry *vio = &env_tlb(env)->d[mmu_idx].viotlb[vidx];
-            tmpio = *io; *io = *vio; *vio = tmpio;
+            CPUTLBEntryFull *f1 = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+            CPUTLBEntryFull *f2 = &env_tlb(env)->d[mmu_idx].vfulltlb[vidx];
+            CPUTLBEntryFull tmpf;
+            tmpf = *f1; *f1 = *f2; *f2 = tmpf;
             return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
                  (ADDR) & TARGET_PAGE_MASK)
 
 static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
-                           CPUIOTLBEntry *iotlbentry, uintptr_t retaddr)
+                           CPUTLBEntryFull *full, uintptr_t retaddr)
 {
-    ram_addr_t ram_addr = mem_vaddr + iotlbentry->addr;
+    ram_addr_t ram_addr = mem_vaddr + full->xlat_section;
 
     trace_memory_notdirty_write_access(mem_vaddr, ram_addr, size);
 
@@ -XXX,XX +XXX,XX @@ int probe_access_flags(CPUArchState *env, target_ulong addr,
     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
         uintptr_t index = tlb_index(env, mmu_idx, addr);
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
-        notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr);
+        notdirty_write(env_cpu(env), addr, 1, full, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }
 
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
 
     if (unlikely(flags & (TLB_NOTDIRTY | TLB_WATCHPOINT))) {
         uintptr_t index = tlb_index(env, mmu_idx, addr);
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
         /* Handle watchpoints. */
         if (flags & TLB_WATCHPOINT) {
             int wp_access = (access_type == MMU_DATA_STORE
                              ? BP_MEM_WRITE : BP_MEM_READ);
             cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, wp_access, retaddr);
+                                 full->attrs, wp_access, retaddr);
         }
 
         /* Handle clean RAM pages. */
         if (flags & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr);
+            notdirty_write(env_cpu(env), addr, 1, full, retaddr);
         }
     }
 
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
  * should have just filled the TLB. The one corner case is io_writex
  * which can cause TLB flushes and potential resizing of the TLBs
  * losing the information we need. In those cases we need to recover
- * data from a copy of the iotlbentry. As long as this always occurs
+ * data from a copy of the CPUTLBEntryFull. As long as this always occurs
  * from the same thread (which a mem callback will be) this is safe.
  */
 
@@ -XXX,XX +XXX,XX @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
     if (likely(tlb_hit(tlb_addr, addr))) {
         /* We must have an iotlb entry for MMIO */
         if (tlb_addr & TLB_MMIO) {
-            CPUIOTLBEntry *iotlbentry;
-            iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+            CPUTLBEntryFull *full;
+            full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
             data->is_io = true;
-            data->v.io.section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
-            data->v.io.offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+            data->v.io.section =
+                iotlb_to_section(cpu, full->xlat_section, full->attrs);
+            data->v.io.offset = (full->xlat_section & TARGET_PAGE_MASK) + addr;
         } else {
             data->is_io = false;
             data->v.ram.hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 
     if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
         notdirty_write(env_cpu(env), addr, size,
-                       &env_tlb(env)->d[mmu_idx].iotlb[index], retaddr);
+                       &env_tlb(env)->d[mmu_idx].fulltlb[index], retaddr);
     }
 
     return hostaddr;
@@ -XXX,XX +XXX,XX @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi,
 
     /* Handle anything that isn't just a straight memory access. */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        CPUIOTLBEntry *iotlbentry;
+        CPUTLBEntryFull *full;
         bool need_swap;
 
         /* For anything that is unaligned, recurse through full_load. */
@@ -XXX,XX +XXX,XX @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi,
             goto do_unaligned_access;
         }
 
-        iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
         /* Handle watchpoints. */
         if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
             /* On watchpoint hit, this will longjmp out. */
             cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, BP_MEM_READ, retaddr);
+                                 full->attrs, BP_MEM_READ, retaddr);
         }
 
         need_swap = size > 1 && (tlb_addr & TLB_BSWAP);
 
         /* Handle I/O access. */
         if (likely(tlb_addr & TLB_MMIO)) {
-            return io_readx(env, iotlbentry, mmu_idx, addr, retaddr,
+            return io_readx(env, full, mmu_idx, addr, retaddr,
                             access_type, op ^ (need_swap * MO_BSWAP));
         }
 
@@ -XXX,XX +XXX,XX @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val,
      */
     if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
         cpu_check_watchpoint(env_cpu(env), addr, size - size2,
-                             env_tlb(env)->d[mmu_idx].iotlb[index].attrs,
+                             env_tlb(env)->d[mmu_idx].fulltlb[index].attrs,
                              BP_MEM_WRITE, retaddr);
     }
     if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) {
         cpu_check_watchpoint(env_cpu(env), page2, size2,
-                             env_tlb(env)->d[mmu_idx].iotlb[index2].attrs,
+                             env_tlb(env)->d[mmu_idx].fulltlb[index2].attrs,
                              BP_MEM_WRITE, retaddr);
     }
 
@@ -XXX,XX +XXX,XX @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 
     /* Handle anything that isn't just a straight memory access. */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        CPUIOTLBEntry *iotlbentry;
+        CPUTLBEntryFull *full;
         bool need_swap;
 
         /* For anything that is unaligned, recurse through byte stores. */
@@ -XXX,XX +XXX,XX @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             goto do_unaligned_access;
         }
 
-        iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
         /* Handle watchpoints. */
         if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
             /* On watchpoint hit, this will longjmp out. */
             cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, BP_MEM_WRITE, retaddr);
+                                 full->attrs, BP_MEM_WRITE, retaddr);
         }
 
         need_swap = size > 1 && (tlb_addr & TLB_BSWAP);
 
         /* Handle I/O access. */
         if (tlb_addr & TLB_MMIO) {
-            io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
+            io_writex(env, full, mmu_idx, val, addr, retaddr,
                       op ^ (need_swap * MO_BSWAP));
             return;
         }
@@ -XXX,XX +XXX,XX @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 
         /* Handle clean RAM pages. */
         if (tlb_addr & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, size, iotlbentry, retaddr);
+            notdirty_write(env_cpu(env), addr, size, full, retaddr);
         }
 
         haddr = (void *)((uintptr_t)addr + entry->addend);
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     return tags + index;
 #else
     uintptr_t index;
-    CPUIOTLBEntry *iotlbentry;
+    CPUTLBEntryFull *full;
     int in_page, flags;
     ram_addr_t ptr_ra;
     hwaddr ptr_paddr, tag_paddr, xlat;
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     assert(!(flags & TLB_INVALID_MASK));
 
     /*
-     * Find the iotlbentry for ptr. This *must* be present in the TLB
+     * Find the CPUTLBEntryFull for ptr. This *must* be present in the TLB
      * because we just found the mapping.
      * TODO: Perhaps there should be a cputlb helper that returns a
      * matching tlb entry + iotlb entry.
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
         g_assert(tlb_hit(comparator, ptr));
     }
 # endif
-    iotlbentry = &env_tlb(env)->d[ptr_mmu_idx].iotlb[index];
+    full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index];
 
     /* If the virtual page MemAttr != Tagged, access unchecked. */
-    if (!arm_tlb_mte_tagged(&iotlbentry->attrs)) {
+    if (!arm_tlb_mte_tagged(&full->attrs)) {
         return NULL;
     }
 
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
         int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
         assert(ra != 0);
         cpu_check_watchpoint(env_cpu(env), ptr, ptr_size,
-                             iotlbentry->attrs, wp, ra);
+                             full->attrs, wp, ra);
     }
 
     /*
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1);
 
     /* Look up the address in tag space. */
-    tag_asi = iotlbentry->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
+    tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
     tag_as = cpu_get_address_space(env_cpu(env), tag_asi);
     mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL,
                                  tag_access == MMU_DATA_STORE,
-                                 iotlbentry->attrs);
+                                 full->attrs);
 
     /*
      * Note that @mr will never be NULL. If there is nothing in the address
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
         g_assert(tlb_hit(comparator, addr));
 # endif
 
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
-        info->attrs = iotlbentry->attrs;
+        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+        info->attrs = full->attrs;
     }
 #endif
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
      * table entry even for that case.
      */
     return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].iotlb[index].attrs));
+            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
 #endif
 }
-- 
2.34.1
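The rename above leans on cputlb's two-struct split: a minimal entry
that generated code indexes on the fast path, and a parallel array of
larger "full" entries consulted only by out-of-line helpers. A minimal
standalone C sketch of that layout follows; the types and fields here
are simplified stand-ins, not the real CPUTLBEntry/CPUTLBEntryFull
layouts.

    #include <stdint.h>
    #include <stdio.h>

    #define TLB_SIZE 256

    /* Fast-path entry: kept small so generated code can index it cheaply. */
    typedef struct Entry {
        uint64_t addr_read;
        uint64_t addend;
    } Entry;

    /* Full entry: larger, only touched by out-of-line slow-path helpers. */
    typedef struct EntryFull {
        uint64_t xlat_section;  /* was 'addr' in the old CPUIOTLBEntry */
        uint32_t attrs;
    } EntryFull;

    /* Parallel arrays, indexed identically (as fulltlb mirrors the fast table). */
    static Entry fast[TLB_SIZE];
    static EntryFull full[TLB_SIZE];

    static void slow_path(size_t index)
    {
        /* The slow path consults the full entry without bloating the fast one. */
        printf("xlat_section=%llu attrs=%u\n",
               (unsigned long long)full[index].xlat_section,
               full[index].attrs);
    }

    int main(void)
    {
        fast[3].addr_read = 0x1000;
        full[3].xlat_section = 42;
        full[3].attrs = 1;
        slow_path(3);
        return 0;
    }

Keeping the fast entry minimal preserves the cache footprint of the
hot lookup, which is exactly why the commit message says the two
structs should not be combined.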
This field is only written, not read; remove it.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/hw/core/cpu.h | 1 -
 accel/tcg/cputlb.c    | 7 +++----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPUWatchpoint {
  * the memory regions get moved around by io_writex.
  */
 typedef struct SavedIOTLB {
-    hwaddr addr;
     MemoryRegionSection *section;
     hwaddr mr_offset;
 } SavedIOTLB;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
  * This is read by tlb_plugin_lookup if the fulltlb entry doesn't match
  * because of the side effect of io_writex changing memory layout.
  */
-static void save_iotlb_data(CPUState *cs, hwaddr addr,
-                            MemoryRegionSection *section, hwaddr mr_offset)
+static void save_iotlb_data(CPUState *cs, MemoryRegionSection *section,
+                            hwaddr mr_offset)
 {
 #ifdef CONFIG_PLUGIN
     SavedIOTLB *saved = &cs->saved_iotlb;
-    saved->addr = addr;
     saved->section = section;
     saved->mr_offset = mr_offset;
 #endif
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
      * The memory_region_dispatch may trigger a flush/resize
      * so for plugins we save the iotlb_data just in case.
      */
-    save_iotlb_data(cpu, full->xlat_section, section, mr_offset);
+    save_iotlb_data(cpu, section, mr_offset);
 
     if (!qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
-- 
2.34.1
1
There is nothing about these options that is related to PIE.
1
Use them unconditionally.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fangrui Song <i@maskray.me>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v2: Do not split into two tests.
---
 configure | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ if test "$pie" != "no" ; then
     QEMU_CFLAGS="-fPIE -DPIE $QEMU_CFLAGS"
     LDFLAGS="-pie $LDFLAGS"
     pie="yes"
-    if compile_prog "" "-Wl,-z,relro -Wl,-z,now" ; then
-      LDFLAGS="-Wl,-z,relro -Wl,-z,now $LDFLAGS"
-    fi
   else
     if test "$pie" = "yes"; then
       error_exit "PIE not available due to missing toolchain support"
@@ -XXX,XX +XXX,XX @@ if test "$pie" != "no" ; then
   fi
 fi

+# Detect support for PT_GNU_RELRO + DT_BIND_NOW.
+# The combination is known as "full relro", because .got.plt is read-only too.
+if compile_prog "" "-Wl,-z,relro -Wl,-z,now" ; then
+  LDFLAGS="-Wl,-z,relro -Wl,-z,now $LDFLAGS"
+fi
+
 ##########################################
 # __sync_fetch_and_and requires at least -march=i486. Many toolchains
 # use i686 as default anyway, but for those that don't, an explicit
--
2.20.1


When PAGE_WRITE_INV is set when calling tlb_set_page,
we immediately set TLB_INVALID_MASK in order to force
tlb_fill to be called on the next lookup.  Here in
probe_access_internal, we have just called tlb_fill
and eliminated true misses, thus the lookup must be valid.

This allows us to remove a warning comment from s390x.
There doesn't seem to be a reason to change the code though.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c            | 10 +++++++++-
 target/s390x/tcg/mem_helper.c |  4 ----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     }
     tlb_addr = tlb_read_ofs(entry, elt_ofs);

+    flags = TLB_FLAGS_MASK;
     page_addr = addr & TARGET_PAGE_MASK;
     if (!tlb_hit_page(tlb_addr, page_addr)) {
         if (!victim_tlb_hit(env, mmu_idx, index, elt_ofs, page_addr)) {
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,

             /* TLB resize via tlb_fill may have moved the entry. */
             entry = tlb_entry(env, mmu_idx, addr);
+
+            /*
+             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
+             * to force the next access through tlb_fill.  We've just
+             * called tlb_fill, so we know that this entry *is* valid.
+             */
+            flags &= ~TLB_INVALID_MASK;
         }
         tlb_addr = tlb_read_ofs(entry, elt_ofs);
     }
-    flags = tlb_addr & TLB_FLAGS_MASK;
+    flags &= tlb_addr;

     /* Fold all "mmio-like" bits into TLB_MMIO.  This is not RAM. */
     if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) {
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -XXX,XX +XXX,XX @@ static int s390_probe_access(CPUArchState *env, target_ulong addr, int size,
 #else
     int flags;

-    /*
-     * For !CONFIG_USER_ONLY, we cannot rely on TLB_INVALID_MASK or haddr==NULL
-     * to detect if there was an exception during tlb_fill().
-     */
     env->tlb_fill_exc = 0;
     flags = probe_access_flags(env, addr, access_type, mmu_idx, nonfault, phost,
                                ra);
--
2.34.1
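To illustrate the contract established above (a sketch, not part of the series; the helper itself is hypothetical): after a successful tlb_fill, probe_access_flags() no longer leaks TLB_INVALID_MASK for a PAGE_WRITE_INV page, so a nonfault caller can test the returned flag directly.

    /* Hypothetical caller, sketching the guarantee added by this patch. */
    static bool probe_is_writable(CPUArchState *env, target_ulong addr,
                                  int mmu_idx, uintptr_t ra)
    {
        void *host;
        int flags = probe_access_flags(env, addr, MMU_DATA_STORE, mmu_idx,
                                       true, &host, ra);

        /*
         * TLB_INVALID_MASK in the result now means the nonfault
         * translation itself failed, not merely that the entry was
         * installed with PAGE_WRITE_INV.
         */
        return !(flags & TLB_INVALID_MASK);
    }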
There are no uses of the *_cmmu names other than the bare wrapping
within the *_code inlines.  Therefore rename the functions so we
can drop the inlines.

Use abi_ptr instead of target_ulong in preparation for user-only;
the two types are identical for softmmu.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu_ldst.h          | 29 ++++------
 include/exec/cpu_ldst_template.h | 21 -------
 tcg/tcg.h                        | 29 ----------
 accel/tcg/cputlb.c               | 94 ++++++++------------------
 docs/devel/loads-stores.rst      |  4 +-
 5 files changed, 36 insertions(+), 141 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
 #undef CPU_MMU_INDEX
 #undef MEMSUFFIX

-#define CPU_MMU_INDEX (cpu_mmu_index(env, true))
-#define MEMSUFFIX _code
-#define SOFTMMU_CODE_ACCESS
+uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
+uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr);
+uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr);
+uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr);

-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
+static inline int cpu_ldsb_code(CPUArchState *env, abi_ptr addr)
+{
+    return (int8_t)cpu_ldub_code(env, addr);
+}

-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#undef SOFTMMU_CODE_ACCESS
+static inline int cpu_ldsw_code(CPUArchState *env, abi_ptr addr)
+{
+    return (int16_t)cpu_lduw_code(env, addr);
+}

 #endif /* defined(CONFIG_USER_ONLY) */

diff --git a/include/exec/cpu_ldst_template.h b/include/exec/cpu_ldst_template.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst_template.h
+++ b/include/exec/cpu_ldst_template.h
@@ -XXX,XX +XXX,XX @@

 /* generic load/store macros */

-#ifdef SOFTMMU_CODE_ACCESS
-
-static inline RES_TYPE
-glue(glue(cpu_ld, USUFFIX), _code)(CPUArchState *env, target_ulong ptr)
-{
-    TCGMemOpIdx oi = make_memop_idx(MO_TE | SHIFT, CPU_MMU_INDEX);
-    return glue(glue(helper_ret_ld, USUFFIX), _cmmu)(env, ptr, oi, 0);
-}
-
-#if DATA_SIZE <= 2
-static inline int
-glue(glue(cpu_lds, SUFFIX), _code)(CPUArchState *env, target_ulong ptr)
-{
-    return (DATA_STYPE)glue(glue(cpu_ld, USUFFIX), _code)(env, ptr);
-}
-#endif
-
-#else
-
 static inline RES_TYPE
 glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
                                                   target_ulong ptr,
@@ -XXX,XX +XXX,XX @@ glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
     glue(glue(cpu_st, SUFFIX), _mmuidx_ra)(env, ptr, v, CPU_MMU_INDEX, 0);
 }

-#endif /* !SOFTMMU_CODE_ACCESS */
-
 #undef RES_TYPE
 #undef DATA_TYPE
 #undef DATA_STYPE
diff --git a/tcg/tcg.h b/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
 void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr);

-uint8_t helper_ret_ldub_cmmu(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr);
-int8_t helper_ret_ldsb_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-uint16_t helper_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr);
-int16_t helper_le_ldsw_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-uint16_t helper_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr);
-int16_t helper_be_ldsw_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr);
-
 /* Temporary aliases until backends are converted.  */
 #ifdef TARGET_WORDS_BIGENDIAN
 # define helper_ret_ldsw_mmu  helper_be_ldsw_mmu
@@ -XXX,XX +XXX,XX @@ uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
 # define helper_ret_stw_mmu   helper_be_stw_mmu
 # define helper_ret_stl_mmu   helper_be_stl_mmu
 # define helper_ret_stq_mmu   helper_be_stq_mmu
-# define helper_ret_lduw_cmmu helper_be_lduw_cmmu
-# define helper_ret_ldsw_cmmu helper_be_ldsw_cmmu
-# define helper_ret_ldl_cmmu  helper_be_ldl_cmmu
-# define helper_ret_ldq_cmmu  helper_be_ldq_cmmu
 #else
 # define helper_ret_ldsw_mmu  helper_le_ldsw_mmu
 # define helper_ret_lduw_mmu  helper_le_lduw_mmu
@@ -XXX,XX +XXX,XX @@ uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
 # define helper_ret_stw_mmu   helper_le_stw_mmu
 # define helper_ret_stl_mmu   helper_le_stl_mmu
 # define helper_ret_stq_mmu   helper_le_stq_mmu
-# define helper_ret_lduw_cmmu helper_le_lduw_cmmu
-# define helper_ret_ldsw_cmmu helper_le_ldsw_cmmu
-# define helper_ret_ldl_cmmu  helper_le_ldl_cmmu
-# define helper_ret_ldq_cmmu  helper_le_ldq_cmmu
 #endif

 uint32_t helper_atomic_cmpxchgb_mmu(CPUArchState *env, target_ulong addr,
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void cpu_stq_mmuidx_ra(CPUArchState *env, target_ulong addr, uint64_t val,

 /* Code access functions. */

-static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
+static uint64_t full_ldub_code(CPUArchState *env, target_ulong addr,
                                TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_cmmu);
+    return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_code);
 }

-uint8_t helper_ret_ldub_cmmu(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr)
+uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr)
 {
-    return full_ldub_cmmu(env, addr, oi, retaddr);
+    TCGMemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(env, true));
+    return full_ldub_code(env, addr, oi, 0);
 }

-int8_t helper_ret_ldsb_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
+static uint64_t full_lduw_code(CPUArchState *env, target_ulong addr,
+                               TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return (int8_t) full_ldub_cmmu(env, addr, oi, retaddr);
+    return load_helper(env, addr, oi, retaddr, MO_TEUW, true, full_lduw_code);
 }

-static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
-                                  TCGMemOpIdx oi, uintptr_t retaddr)
+uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_LEUW, true,
-                       full_le_lduw_cmmu);
+    TCGMemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(env, true));
+    return full_lduw_code(env, addr, oi, 0);
 }

-uint16_t helper_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr)
+static uint64_t full_ldl_code(CPUArchState *env, target_ulong addr,
+                              TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return full_le_lduw_cmmu(env, addr, oi, retaddr);
+    return load_helper(env, addr, oi, retaddr, MO_TEUL, true, full_ldl_code);
 }

-int16_t helper_le_ldsw_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
+uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr)
 {
-    return (int16_t) full_le_lduw_cmmu(env, addr, oi, retaddr);
+    TCGMemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(env, true));
+    return full_ldl_code(env, addr, oi, 0);
 }

-static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
-                                  TCGMemOpIdx oi, uintptr_t retaddr)
+static uint64_t full_ldq_code(CPUArchState *env, target_ulong addr,
+                              TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_BEUW, true,
-                       full_be_lduw_cmmu);
+    return load_helper(env, addr, oi, retaddr, MO_TEQ, true, full_ldq_code);
 }

-uint16_t helper_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
-                             TCGMemOpIdx oi, uintptr_t retaddr)
+uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr)
 {
-    return full_be_lduw_cmmu(env, addr, oi, retaddr);
-}
-
-int16_t helper_be_ldsw_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return (int16_t) full_be_lduw_cmmu(env, addr, oi, retaddr);
-}
-
-static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
-                                  TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return load_helper(env, addr, oi, retaddr, MO_LEUL, true,
-                       full_le_ldul_cmmu);
-}
-
-uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return full_le_ldul_cmmu(env, addr, oi, retaddr);
-}
-
-static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
-                                  TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return load_helper(env, addr, oi, retaddr, MO_BEUL, true,
-                       full_be_ldul_cmmu);
-}
-
-uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return full_be_ldul_cmmu(env, addr, oi, retaddr);
-}
-
-uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return load_helper(env, addr, oi, retaddr, MO_LEQ, true,
-                       helper_le_ldq_cmmu);
-}
-
-uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
-                            TCGMemOpIdx oi, uintptr_t retaddr)
-{
-    return load_helper(env, addr, oi, retaddr, MO_BEQ, true,
-                       helper_be_ldq_cmmu);
+    TCGMemOpIdx oi = make_memop_idx(MO_TEQ, cpu_mmu_index(env, true));
+    return full_ldq_code(env, addr, oi, 0);
 }
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/loads-stores.rst
+++ b/docs/devel/loads-stores.rst
@@ -XXX,XX +XXX,XX @@ more in line with the other memory access functions.

 load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``

-load (code): ``helper_{endian}_ld{sign}{size}_cmmu(env, addr, opindex, retaddr)``
-
 store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)``

 ``sign``
@@ -XXX,XX +XXX,XX @@ store: ``helper_{endian}_st{size}_mmu(env, addr, val, opindex, retaddr)``
 - ``ret`` : target endianness

 Regexes for git grep
- - ``\<helper_\(le\|be\|ret\)_ld[us]\?[bwlq]_c\?mmu\>``
+ - ``\<helper_\(le\|be\|ret\)_ld[us]\?[bwlq]_mmu\>``
 - ``\<helper_\(le\|be\|ret\)_st[bwlq]_mmu\>``

 ``address_space_*``
--
2.20.1


Add an interface to return the CPUTLBEntryFull struct
that goes with the lookup.  The result is not intended
to be valid across multiple lookups, so the user must
use the results immediately.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/exec-all.h | 15 +++++++++++++
 include/qemu/typedefs.h |  1 +
 accel/tcg/cputlb.c      | 47 +++++++++++++++++++++++++----------------
 3 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ int probe_access_flags(CPUArchState *env, target_ulong addr,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr);

+#ifndef CONFIG_USER_ONLY
+/**
+ * probe_access_full:
+ * Like probe_access_flags, except also return into @pfull.
+ *
+ * The CPUTLBEntryFull structure returned via @pfull is transient
+ * and must be consumed or copied immediately, before any further
+ * access or changes to TLB @mmu_idx.
+ */
+int probe_access_full(CPUArchState *env, target_ulong addr,
+                      MMUAccessType access_type, int mmu_idx,
+                      bool nonfault, void **phost,
+                      CPUTLBEntryFull **pfull, uintptr_t retaddr);
+#endif
+
 #define CODE_GEN_ALIGN 16 /* must be >= of the size of a icache line */

 /* Estimated block size for TB allocation. */
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -XXX,XX +XXX,XX @@ typedef struct ConfidentialGuestSupport ConfidentialGuestSupport;
 typedef struct CPUAddressSpace CPUAddressSpace;
 typedef struct CPUArchState CPUArchState;
 typedef struct CPUState CPUState;
+typedef struct CPUTLBEntryFull CPUTLBEntryFull;
 typedef struct DeviceListener DeviceListener;
 typedef struct DeviceState DeviceState;
 typedef struct DirtyBitmapSnapshot DirtyBitmapSnapshot;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
 static int probe_access_internal(CPUArchState *env, target_ulong addr,
                                  int fault_size, MMUAccessType access_type,
                                  int mmu_idx, bool nonfault,
-                                 void **phost, uintptr_t retaddr)
+                                 void **phost, CPUTLBEntryFull **pfull,
+                                 uintptr_t retaddr)
 {
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
                                             mmu_idx, nonfault, retaddr)) {
                 /* Non-faulting page table read failed. */
                 *phost = NULL;
+                *pfull = NULL;
                 return TLB_INVALID_MASK;
             }

             /* TLB resize via tlb_fill may have moved the entry. */
+            index = tlb_index(env, mmu_idx, addr);
             entry = tlb_entry(env, mmu_idx, addr);

             /*
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     }
     flags &= tlb_addr;

+    *pfull = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+
     /* Fold all "mmio-like" bits into TLB_MMIO.  This is not RAM. */
     if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) {
         *phost = NULL;
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     return flags;
 }

-int probe_access_flags(CPUArchState *env, target_ulong addr,
-                       MMUAccessType access_type, int mmu_idx,
-                       bool nonfault, void **phost, uintptr_t retaddr)
+int probe_access_full(CPUArchState *env, target_ulong addr,
+                      MMUAccessType access_type, int mmu_idx,
+                      bool nonfault, void **phost, CPUTLBEntryFull **pfull,
+                      uintptr_t retaddr)
 {
-    int flags;
-
-    flags = probe_access_internal(env, addr, 0, access_type, mmu_idx,
-                                  nonfault, phost, retaddr);
+    int flags = probe_access_internal(env, addr, 0, access_type, mmu_idx,
+                                      nonfault, phost, pfull, retaddr);

     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        uintptr_t index = tlb_index(env, mmu_idx, addr);
-        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
-
-        notdirty_write(env_cpu(env), addr, 1, full, retaddr);
+        notdirty_write(env_cpu(env), addr, 1, *pfull, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }

     return flags;
 }

+int probe_access_flags(CPUArchState *env, target_ulong addr,
+                       MMUAccessType access_type, int mmu_idx,
+                       bool nonfault, void **phost, uintptr_t retaddr)
+{
+    CPUTLBEntryFull *full;
+
+    return probe_access_full(env, addr, access_type, mmu_idx,
+                             nonfault, phost, &full, retaddr);
+}
+
 void *probe_access(CPUArchState *env, target_ulong addr, int size,
                    MMUAccessType access_type, int mmu_idx, uintptr_t retaddr)
 {
+    CPUTLBEntryFull *full;
     void *host;
     int flags;

     g_assert(-(addr | TARGET_PAGE_MASK) >= size);

     flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
-                                  false, &host, retaddr);
+                                  false, &host, &full, retaddr);

     /* Per the interface, size == 0 merely faults the access. */
     if (size == 0) {
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
     }

     if (unlikely(flags & (TLB_NOTDIRTY | TLB_WATCHPOINT))) {
-        uintptr_t index = tlb_index(env, mmu_idx, addr);
-        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
-
         /* Handle watchpoints. */
         if (flags & TLB_WATCHPOINT) {
             int wp_access = (access_type == MMU_DATA_STORE
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
 void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
                         MMUAccessType access_type, int mmu_idx)
 {
+    CPUTLBEntryFull *full;
     void *host;
     int flags;

     flags = probe_access_internal(env, addr, 0, access_type,
-                                  mmu_idx, true, &host, 0);
+                                  mmu_idx, true, &host, &full, 0);

     /* No combination of flags are expected by the caller. */
     return flags ? NULL : host;
@@ -XXX,XX +XXX,XX @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
 tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
                                         void **hostp)
 {
+    CPUTLBEntryFull *full;
     void *p;

     (void)probe_access_internal(env, addr, 1, MMU_INST_FETCH,
-                                cpu_mmu_index(env, true), false, &p, 0);
+                                cpu_mmu_index(env, true), false, &p, &full, 0);
     if (p == NULL) {
         return -1;
     }
--
2.34.1
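As a usage sketch for the new interface (assumed, not from the series; the helper is hypothetical): a caller that needs the per-page transaction attributes fetches the CPUTLBEntryFull pointer and copies what it needs before the next lookup, since the pointer aliases live TLB storage that a later tlb_fill may move.

    /* Hypothetical example of consuming the result immediately. */
    static MemTxAttrs get_page_attrs(CPUArchState *env, target_ulong addr,
                                     int mmu_idx, uintptr_t ra)
    {
        CPUTLBEntryFull *full;
        void *host;

        (void)probe_access_full(env, addr, MMU_DATA_LOAD, mmu_idx,
                                false, &host, &full, ra);

        /* Copy out of the TLB before any further lookup can resize it. */
        return full->attrs;
    }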
With the tracing hooks, the inline functions are no longer
so simple.  Once out-of-line, the current tlb_entry lookup
is redundant with the one in the main load/store_helper.

This also begins the introduction of a new target facing
interface, with suffix *_mmuidx_ra.  This is not yet
official because the interface is not done for user-only.

Use abi_ptr instead of target_ulong in preparation for
user-only; the two types are identical for softmmu.

What remains in cpu_ldst_template.h are the expansions
for _code, _data, and MMU_MODE<N>_SUFFIX.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu_ldst.h          |  25 ++++++-
 include/exec/cpu_ldst_template.h | 125 +++++++------------------
 accel/tcg/cputlb.c               | 116 ++++++++++++++++++++++++++++
 3 files changed, 166 insertions(+), 100 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ static inline void clear_helper_retaddr(void)

 #else

-/* The memory helpers for tcg-generated code need tcg_target_long etc. */
+/* Needed for TCG_OVERSIZED_GUEST */
 #include "tcg.h"

 static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
@@ -XXX,XX +XXX,XX @@ static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
     return &env_tlb(env)->f[mmu_idx].table[tlb_index(env, mmu_idx, addr)];
 }

+uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                            int mmu_idx, uintptr_t ra);
+uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                            int mmu_idx, uintptr_t ra);
+uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                           int mmu_idx, uintptr_t ra);
+uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                           int mmu_idx, uintptr_t ra);
+
+int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                       int mmu_idx, uintptr_t ra);
+int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                       int mmu_idx, uintptr_t ra);
+
+void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                       int mmu_idx, uintptr_t retaddr);
+void cpu_stw_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                       int mmu_idx, uintptr_t retaddr);
+void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
+                       int mmu_idx, uintptr_t retaddr);
+void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
+                       int mmu_idx, uintptr_t retaddr);
+
 #ifdef MMU_MODE0_SUFFIX
 #define CPU_MMU_INDEX 0
 #define MEMSUFFIX MMU_MODE0_SUFFIX
diff --git a/include/exec/cpu_ldst_template.h b/include/exec/cpu_ldst_template.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst_template.h
+++ b/include/exec/cpu_ldst_template.h
@@ -XXX,XX +XXX,XX @@
 * License along with this library; if not, see <http://www.gnu.org/licenses/>.
 */

-#if !defined(SOFTMMU_CODE_ACCESS)
-#include "trace-root.h"
-#endif
-
-#include "qemu/plugin.h"
-#include "trace/mem.h"
-
 #if DATA_SIZE == 8
 #define SUFFIX q
 #define USUFFIX q
@@ -XXX,XX +XXX,XX @@
 #define RES_TYPE uint32_t
 #endif

+/* generic load/store macros */
+
 #ifdef SOFTMMU_CODE_ACCESS
-#define ADDR_READ addr_code
-#define MMUSUFFIX _cmmu
-#define URETSUFFIX USUFFIX
-#define SRETSUFFIX glue(s, SUFFIX)
-#else
-#define ADDR_READ addr_read
-#define MMUSUFFIX _mmu
-#define URETSUFFIX USUFFIX
-#define SRETSUFFIX glue(s, SUFFIX)
+
+static inline RES_TYPE
+glue(glue(cpu_ld, USUFFIX), _code)(CPUArchState *env, target_ulong ptr)
+{
+    TCGMemOpIdx oi = make_memop_idx(MO_TE | SHIFT, CPU_MMU_INDEX);
+    return glue(glue(helper_ret_ld, USUFFIX), _cmmu)(env, ptr, oi, 0);
+}
+
+#if DATA_SIZE <= 2
+static inline int
+glue(glue(cpu_lds, SUFFIX), _code)(CPUArchState *env, target_ulong ptr)
+{
+    return (DATA_STYPE)glue(glue(cpu_ld, USUFFIX), _code)(env, ptr);
+}
 #endif

-/* generic load/store macros */
+#else

 static inline RES_TYPE
 glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
                                                   target_ulong ptr,
                                                   uintptr_t retaddr)
 {
-    CPUTLBEntry *entry;
-    RES_TYPE res;
-    target_ulong addr;
-    int mmu_idx = CPU_MMU_INDEX;
-    MemOp op = MO_TE | SHIFT;
-#if !defined(SOFTMMU_CODE_ACCESS)
-    uint16_t meminfo = trace_mem_get_info(op, mmu_idx, false);
-    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
-#endif
-
-    addr = ptr;
-    entry = tlb_entry(env, mmu_idx, addr);
-    if (unlikely(entry->ADDR_READ !=
-                 (addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
-        TCGMemOpIdx oi = make_memop_idx(op, mmu_idx);
-        res = glue(glue(helper_ret_ld, URETSUFFIX), MMUSUFFIX)(env, addr,
-                                                               oi, retaddr);
-    } else {
-        uintptr_t hostaddr = addr + entry->addend;
-        res = glue(glue(ld, USUFFIX), _p)((uint8_t *)hostaddr);
-    }
-#ifndef SOFTMMU_CODE_ACCESS
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
-#endif
-    return res;
+    return glue(glue(cpu_ld, USUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX,
+                                                   retaddr);
 }

 static inline RES_TYPE
 glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
 {
-    return glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(env, ptr, 0);
+    return glue(glue(cpu_ld, USUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX, 0);
 }

 #if DATA_SIZE <= 2
@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
                                                   target_ulong ptr,
                                                   uintptr_t retaddr)
 {
-    CPUTLBEntry *entry;
-    int res;
-    target_ulong addr;
-    int mmu_idx = CPU_MMU_INDEX;
-    MemOp op = MO_TE | MO_SIGN | SHIFT;
-#ifndef SOFTMMU_CODE_ACCESS
-    uint16_t meminfo = trace_mem_get_info(op, mmu_idx, false);
-    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
-#endif
-
-    addr = ptr;
-    entry = tlb_entry(env, mmu_idx, addr);
-    if (unlikely(entry->ADDR_READ !=
-                 (addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
-        TCGMemOpIdx oi = make_memop_idx(op & ~MO_SIGN, mmu_idx);
-        res = (DATA_STYPE)glue(glue(helper_ret_ld, SRETSUFFIX),
-                               MMUSUFFIX)(env, addr, oi, retaddr);
-    } else {
-        uintptr_t hostaddr = addr + entry->addend;
-        res = glue(glue(lds, SUFFIX), _p)((uint8_t *)hostaddr);
-    }
-#ifndef SOFTMMU_CODE_ACCESS
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
-#endif
-    return res;
+    return glue(glue(cpu_lds, SUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX,
+                                                   retaddr);
 }

 static inline int
 glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
 {
-    return glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(env, ptr, 0);
+    return glue(glue(cpu_lds, SUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX, 0);
 }
 #endif

-#ifndef SOFTMMU_CODE_ACCESS
-
 /* generic store macro */

 static inline void
@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_st, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
                                                  target_ulong ptr,
                                                  RES_TYPE v, uintptr_t retaddr)
 {
-    CPUTLBEntry *entry;
-    target_ulong addr;
-    int mmu_idx = CPU_MMU_INDEX;
-    MemOp op = MO_TE | SHIFT;
-#if !defined(SOFTMMU_CODE_ACCESS)
-    uint16_t meminfo = trace_mem_get_info(op, mmu_idx, true);
-    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
-#endif
-
-    addr = ptr;
-    entry = tlb_entry(env, mmu_idx, addr);
-    if (unlikely(tlb_addr_write(entry) !=
-                 (addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
-        TCGMemOpIdx oi = make_memop_idx(op, mmu_idx);
-        glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)(env, addr, v, oi,
-                                                     retaddr);
-    } else {
-        uintptr_t hostaddr = addr + entry->addend;
-        glue(glue(st, SUFFIX), _p)((uint8_t *)hostaddr, v);
-    }
-#ifndef SOFTMMU_CODE_ACCESS
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
-#endif
+    glue(glue(cpu_st, SUFFIX), _mmuidx_ra)(env, ptr, v, CPU_MMU_INDEX,
+                                           retaddr);
 }

 static inline void
 glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
                                       RES_TYPE v)
 {
-    glue(glue(glue(cpu_st, SUFFIX), MEMSUFFIX), _ra)(env, ptr, v, 0);
+    glue(glue(cpu_st, SUFFIX), _mmuidx_ra)(env, ptr, v, CPU_MMU_INDEX, 0);
 }

 #endif /* !SOFTMMU_CODE_ACCESS */
@@ -XXX,XX +XXX,XX @@ glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
 #undef SUFFIX
 #undef USUFFIX
 #undef DATA_SIZE
-#undef MMUSUFFIX
-#undef ADDR_READ
-#undef URETSUFFIX
-#undef SRETSUFFIX
 #undef SHIFT
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/atomic.h"
 #include "qemu/atomic128.h"
 #include "translate-all.h"
+#include "trace-root.h"
+#include "qemu/plugin.h"
+#include "trace/mem.h"
 #ifdef CONFIG_PLUGIN
 #include "qemu/plugin-memory.h"
 #endif
@@ -XXX,XX +XXX,XX @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
     return (int32_t)helper_be_ldul_mmu(env, addr, oi, retaddr);
 }

+/*
+ * Load helpers for cpu_ldst.h.
+ */
+
+static inline uint64_t cpu_load_helper(CPUArchState *env, abi_ptr addr,
+                                       int mmu_idx, uintptr_t retaddr,
+                                       MemOp op, FullLoadHelper *full_load)
+{
+    uint16_t meminfo;
+    TCGMemOpIdx oi;
+    uint64_t ret;
+
+    meminfo = trace_mem_get_info(op, mmu_idx, false);
+    trace_guest_mem_before_exec(env_cpu(env), addr, meminfo);
+
+    op &= ~MO_SIGN;
+    oi = make_memop_idx(op, mmu_idx);
+    ret = full_load(env, addr, oi, retaddr);
+
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, meminfo);
+
+    return ret;
+}
+
+uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                            int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_UB, full_ldub_mmu);
+}
+
+int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                       int mmu_idx, uintptr_t ra)
+{
+    return (int8_t)cpu_load_helper(env, addr, mmu_idx, ra, MO_SB,
+                                   full_ldub_mmu);
+}
+
+uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                            int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_TEUW,
+                           MO_TE == MO_LE
+                           ? full_le_lduw_mmu : full_be_lduw_mmu);
+}
+
+int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                       int mmu_idx, uintptr_t ra)
+{
+    return (int16_t)cpu_load_helper(env, addr, mmu_idx, ra, MO_TESW,
+                                    MO_TE == MO_LE
+                                    ? full_le_lduw_mmu : full_be_lduw_mmu);
+}
+
+uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                           int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_TEUL,
+                           MO_TE == MO_LE
+                           ? full_le_ldul_mmu : full_be_ldul_mmu);
+}
+
+uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
+                           int mmu_idx, uintptr_t ra)
+{
+    return cpu_load_helper(env, addr, mmu_idx, ra, MO_TEQ,
+                           MO_TE == MO_LE
+                           ? helper_le_ldq_mmu : helper_be_ldq_mmu);
+}
+
 /*
  * Store Helpers
  */
@@ -XXX,XX +XXX,XX @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
     store_helper(env, addr, val, oi, retaddr, MO_BEQ);
 }

+/*
+ * Store Helpers for cpu_ldst.h
+ */
+
+static inline void QEMU_ALWAYS_INLINE
+cpu_store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
+                 int mmu_idx, uintptr_t retaddr, MemOp op)
+{
+    TCGMemOpIdx oi;
+    uint16_t meminfo;
+
+    meminfo = trace_mem_get_info(op, mmu_idx, true);
+    trace_guest_mem_before_exec(env_cpu(env), addr, meminfo);
+
+    oi = make_memop_idx(op, mmu_idx);
+    store_helper(env, addr, val, oi, retaddr, op);
+
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, meminfo);
+}
+
+void cpu_stb_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
+                       int mmu_idx, uintptr_t retaddr)
+{
+    cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_UB);
+}
+
+void cpu_stw_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
+                       int mmu_idx, uintptr_t retaddr)
+{
+    cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEUW);
+}
+
+void cpu_stl_mmuidx_ra(CPUArchState *env, target_ulong addr, uint32_t val,
+                       int mmu_idx, uintptr_t retaddr)
+{
+    cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEUL);
+}
+
+void cpu_stq_mmuidx_ra(CPUArchState *env, target_ulong addr, uint64_t val,
+                       int mmu_idx, uintptr_t retaddr)
+{
+    cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEQ);
+}
+
 /* First set of helpers allows passing in of OI and RETADDR. This makes
    them callable from other helpers. */

--
2.20.1


Now that we have collected all of the page data into
CPUTLBEntryFull, provide an interface to record that
all in one go, instead of using 4 arguments.  This interface
allows CPUTLBEntryFull to be extended without having to
change the number of arguments.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-defs.h | 14 +++++++++++
 include/exec/exec-all.h | 22 ++++++++++++++++++
 accel/tcg/cputlb.c      | 51 ++++++++++++++++++++++++++---------------
 3 files changed, 69 insertions(+), 18 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUTLBEntryFull {
      * + the offset within the target MemoryRegion (otherwise)
      */
     hwaddr xlat_section;
+
+    /*
+     * @phys_addr contains the physical address in the address space
+     * given by cpu_asidx_from_attrs(cpu, @attrs).
+     */
+    hwaddr phys_addr;
+
+    /* @attrs contains the memory transaction attributes for the page. */
     MemTxAttrs attrs;
+
+    /* @prot contains the complete protections for the page. */
+    uint8_t prot;
+
+    /* @lg_page_size contains the log2 of the page size. */
+    uint8_t lg_page_size;
 } CPUTLBEntryFull;

 /*
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                uint16_t idxmap,
                                                unsigned bits);

+/**
+ * tlb_set_page_full:
+ * @cpu: CPU context
+ * @mmu_idx: mmu index of the tlb to modify
+ * @vaddr: virtual address of the entry to add
+ * @full: the details of the tlb entry
+ *
+ * Add an entry to @cpu tlb index @mmu_idx.  All of the fields of
+ * @full must be filled, except for xlat_section, and constitute
+ * the complete description of the translated page.
+ *
+ * This is generally called by the target tlb_fill function after
+ * having performed a successful page table walk to find the physical
+ * address and attributes for the translation.
+ *
+ * At most one entry for a given virtual address is permitted. Only a
+ * single TARGET_PAGE_SIZE region is mapped; @full->lg_page_size is only
+ * used by tlb_flush_page.
+ */
+void tlb_set_page_full(CPUState *cpu, int mmu_idx, target_ulong vaddr,
+                       CPUTLBEntryFull *full);
+
 /**
  * tlb_set_page_with_attrs:
  * @cpu: CPU to add this TLB entry for
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
     env_tlb(env)->d[mmu_idx].large_page_mask = lp_mask;
 }

-/* Add a new TLB entry. At most one entry for a given virtual address
+/*
+ * Add a new TLB entry. At most one entry for a given virtual address
  * is permitted. Only a single TARGET_PAGE_SIZE region is mapped, the
  * supplied size is only used by tlb_flush_page.
  *
  * Called from TCG-generated code, which is under an RCU read-side
  * critical section.
  */
-void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
-                             hwaddr paddr, MemTxAttrs attrs, int prot,
-                             int mmu_idx, target_ulong size)
+void tlb_set_page_full(CPUState *cpu, int mmu_idx,
+                       target_ulong vaddr, CPUTLBEntryFull *full)
 {
     CPUArchState *env = cpu->env_ptr;
     CPUTLB *tlb = env_tlb(env);
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     CPUTLBEntry *te, tn;
     hwaddr iotlb, xlat, sz, paddr_page;
     target_ulong vaddr_page;
-    int asidx = cpu_asidx_from_attrs(cpu, attrs);
-    int wp_flags;
+    int asidx, wp_flags, prot;
     bool is_ram, is_romd;

     assert_cpu_is_self(cpu);

-    if (size <= TARGET_PAGE_SIZE) {
+    if (full->lg_page_size <= TARGET_PAGE_BITS) {
         sz = TARGET_PAGE_SIZE;
     } else {
-        tlb_add_large_page(env, mmu_idx, vaddr, size);
-        sz = size;
+        sz = (hwaddr)1 << full->lg_page_size;
+        tlb_add_large_page(env, mmu_idx, vaddr, sz);
     }
     vaddr_page = vaddr & TARGET_PAGE_MASK;
-    paddr_page = paddr & TARGET_PAGE_MASK;
+    paddr_page = full->phys_addr & TARGET_PAGE_MASK;

+    prot = full->prot;
+    asidx = cpu_asidx_from_attrs(cpu, full->attrs);
     section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
-                                                &xlat, &sz, attrs, &prot);
+                                                &xlat, &sz, full->attrs, &prot);
     assert(sz >= TARGET_PAGE_SIZE);

     tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
               " prot=%x idx=%d\n",
-              vaddr, paddr, prot, mmu_idx);
+              vaddr, full->phys_addr, prot, mmu_idx);

     address = vaddr_page;
-    if (size < TARGET_PAGE_SIZE) {
+    if (full->lg_page_size < TARGET_PAGE_BITS) {
         /* Repeat the MMU check and TLB fill on every access. */
         address |= TLB_INVALID_MASK;
     }
-    if (attrs.byte_swap) {
+    if (full->attrs.byte_swap) {
         address |= TLB_BSWAP;
     }

@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
      * subtract here is that of the page base, and not the same as the
      * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
      */
+    desc->fulltlb[index] = *full;
     desc->fulltlb[index].xlat_section = iotlb - vaddr_page;
-    desc->fulltlb[index].attrs = attrs;
+    desc->fulltlb[index].phys_addr = paddr_page;
+    desc->fulltlb[index].prot = prot;

     /* Now calculate the new entry */
     tn.addend = addend - vaddr_page;
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     qemu_spin_unlock(&tlb->c.lock);
 }

-/* Add a new TLB entry, but without specifying the memory
- * transaction attributes to be used.
- */
+void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
+                             hwaddr paddr, MemTxAttrs attrs, int prot,
+                             int mmu_idx, target_ulong size)
+{
+    CPUTLBEntryFull full = {
+        .phys_addr = paddr,
+        .attrs = attrs,
+        .prot = prot,
+        .lg_page_size = ctz64(size)
+    };
+
+    assert(is_power_of_2(size));
+    tlb_set_page_full(cpu, mmu_idx, vaddr, &full);
+}
+
 void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                   hwaddr paddr, int prot,
                   int mmu_idx, target_ulong size)
--
2.34.1
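For reference, a target tlb_fill converted to the new interface might look like the sketch below (hypothetical target code, not from this series; the fields shown are exactly those this patch requires to be filled, and xlat_section is computed by the core).

    static void mytarget_finish_fill(CPUState *cs, target_ulong vaddr,
                                     hwaddr paddr, int prot, int mmu_idx,
                                     MemTxAttrs attrs)
    {
        CPUTLBEntryFull full = {
            .phys_addr    = paddr,
            .attrs        = attrs,
            .prot         = prot,
            .lg_page_size = TARGET_PAGE_BITS,   /* a normal-sized page */
        };

        /* One call records the complete description of the page. */
        tlb_set_page_full(cs, mmu_idx, vaddr, &full);
    }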
Some distributions, e.g. Ubuntu 19.10, enable PIE by default.
If for some reason one wishes to build a non-pie binary, we
must provide additional options to override.

At the same time, reorg the code to an elif chain.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configure | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ if compile_prog "-Werror -fno-pie" "-no-pie"; then
   LDFLAGS_NOPIE="-no-pie"
 fi

-if test "$pie" != "no" ; then
-  if compile_prog "-fPIE -DPIE" "-pie"; then
-    QEMU_CFLAGS="-fPIE -DPIE $QEMU_CFLAGS"
-    LDFLAGS="-pie $LDFLAGS"
-    pie="yes"
-  else
-    if test "$pie" = "yes"; then
-      error_exit "PIE not available due to missing toolchain support"
-    else
-      echo "Disabling PIE due to missing toolchain support"
-      pie="no"
-    fi
-  fi
+if test "$pie" = "no"; then
+  QEMU_CFLAGS="$CFLAGS_NOPIE $QEMU_CFLAGS"
+  LDFLAGS="$LDFLAGS_NOPIE $LDFLAGS"
+elif compile_prog "-fPIE -DPIE" "-pie"; then
+  QEMU_CFLAGS="-fPIE -DPIE $QEMU_CFLAGS"
+  LDFLAGS="-pie $LDFLAGS"
+  pie="yes"
+elif test "$pie" = "yes"; then
+  error_exit "PIE not available due to missing toolchain support"
+else
+  echo "Disabling PIE due to missing toolchain support"
+  pie="no"
 fi

 # Detect support for PT_GNU_RELRO + DT_BIND_NOW.
--
2.20.1


Allow the target to cache items from the guest page tables.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-defs.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUTLBEntryFull {

     /* @lg_page_size contains the log2 of the page size. */
     uint8_t lg_page_size;
+
+    /*
+     * Allow target-specific additions to this structure.
+     * This may be used to cache items from the guest cpu
+     * page tables for later use by the implementation.
+     */
+#ifdef TARGET_PAGE_ENTRY_EXTRA
+    TARGET_PAGE_ENTRY_EXTRA
+#endif
 } CPUTLBEntryFull;

 /*
--
2.34.1
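As an illustration of the hook (hypothetical; no target in this series defines it, and the field names below are invented for the example), a target's cpu-param.h could append cached page-table attributes like so:

    /*
     * Hypothetical target definition: cache two page-table attributes
     * alongside each CPUTLBEntryFull.
     */
    #define TARGET_PAGE_ENTRY_EXTRA \
        uint8_t pte_attrs;          \
        uint8_t shareability;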
This bitmap is created and discarded immediately.
We gain nothing by its existence.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220822232338.1727934-2-richard.henderson@linaro.org>
---
 accel/tcg/translate-all.c | 78 ++-------------------------------------
 1 file changed, 4 insertions(+), 74 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@
 #define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
 #endif

-#define SMC_BITMAP_USE_THRESHOLD 10
-
 typedef struct PageDesc {
     /* list of TBs intersecting this ram page */
     uintptr_t first_tb;
-#ifdef CONFIG_SOFTMMU
-    /* in order to optimize self modifying code, we count the number
-       of lookups we do to a given page to use a bitmap */
-    unsigned long *code_bitmap;
-    unsigned int code_write_count;
-#else
+#ifdef CONFIG_USER_ONLY
     unsigned long flags;
     void *target_data;
 #endif
-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_SOFTMMU
     QemuSpin lock;
 #endif
 } PageDesc;
@@ -XXX,XX +XXX,XX @@ void tb_htable_init(void)
     qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }

-/* call with @p->lock held */
-static inline void invalidate_page_bitmap(PageDesc *p)
-{
-    assert_page_locked(p);
-#ifdef CONFIG_SOFTMMU
-    g_free(p->code_bitmap);
-    p->code_bitmap = NULL;
-    p->code_write_count = 0;
-#endif
-}
-
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void page_flush_tb_1(int level, void **lp)
 {
@@ -XXX,XX +XXX,XX @@ static void page_flush_tb_1(int level, void **lp)
         for (i = 0; i < V_L2_SIZE; ++i) {
             page_lock(&pd[i]);
             pd[i].first_tb = (uintptr_t)NULL;
-            invalidate_page_bitmap(pd + i);
             page_unlock(&pd[i]);
         }
     } else {
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
     if (rm_from_page_list) {
         p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
         tb_page_remove(p, tb);
-        invalidate_page_bitmap(p);
         if (tb->page_addr[1] != -1) {
             p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
             tb_page_remove(p, tb);
-            invalidate_page_bitmap(p);
         }
     }

@@ -XXX,XX +XXX,XX @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     }
 }

-#ifdef CONFIG_SOFTMMU
-/* call with @p->lock held */
-static void build_page_bitmap(PageDesc *p)
-{
-    int n, tb_start, tb_end;
-    TranslationBlock *tb;
-
-    assert_page_locked(p);
-    p->code_bitmap = bitmap_new(TARGET_PAGE_SIZE);
-
-    PAGE_FOR_EACH_TB(p, tb, n) {
-        /* NOTE: this is subtle as a TB may span two physical pages */
-        if (n == 0) {
-            /* NOTE: tb_end may be after the end of the page, but
-               it is not a problem */
-            tb_start = tb->pc & ~TARGET_PAGE_MASK;
-            tb_end = tb_start + tb->size;
-            if (tb_end > TARGET_PAGE_SIZE) {
-                tb_end = TARGET_PAGE_SIZE;
-            }
-        } else {
-            tb_start = 0;
-            tb_end = ((tb->pc + tb->size) & ~TARGET_PAGE_MASK);
-        }
-        bitmap_set(p->code_bitmap, tb_start, tb_end - tb_start);
-    }
-}
-#endif
-
 /* add the tb in the target page and protect it if necessary
  *
  * Called with mmap_lock held for user-mode emulation.
@@ -XXX,XX +XXX,XX @@ static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
     page_already_protected = p->first_tb != (uintptr_t)NULL;
 #endif
     p->first_tb = (uintptr_t)tb | n;
-    invalidate_page_bitmap(p);

 #if defined(CONFIG_USER_ONLY)
     /* translator_loop() must have made all TB pages non-writable */
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     /* remove TB from the page(s) if we couldn't insert it */
     if (unlikely(existing_tb)) {
         tb_page_remove(p, tb);
-        invalidate_page_bitmap(p);
         if (p2) {
             tb_page_remove(p2, tb);
-            invalidate_page_bitmap(p2);
         }
         tb = existing_tb;
     }
@@ -XXX,XX +XXX,XX @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 #if !defined(CONFIG_USER_ONLY)
     /* if no code remaining, no need to continue to use slow writes */
     if (!p->first_tb) {
-        invalidate_page_bitmap(p);
         tlb_unprotect_code(start);
     }
 #endif
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
     }

     assert_page_locked(p);
-    if (!p->code_bitmap &&
-        ++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
-        build_page_bitmap(p);
-    }
-    if (p->code_bitmap) {
-        unsigned int nr;
-        unsigned long b;
-
-        nr = start & ~TARGET_PAGE_MASK;
-        b = p->code_bitmap[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1));
-        if (b & ((1 << len) - 1)) {
-            goto do_invalidate;
-        }
-    } else {
-    do_invalidate:
-        tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
-                                              retaddr);
-    }
+    tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
+                                          retaddr);
 }
 #else
 /* Called with mmap_lock held. If pc is not 0 then it indicates the
--
2.34.1


With the tracing hooks, the inline functions are no longer
so simple.  Reduce the amount of preprocessor obfuscation
by expanding the text of each of the functions generated.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu_ldst.h                   |  54 +++--
 include/exec/cpu_ldst_useronly_template.h | 159 ---------------
 accel/tcg/user-exec.c                     | 236 ++++++++++++++++++++++
 3 files changed, 262 insertions(+), 187 deletions(-)
 delete mode 100644 include/exec/cpu_ldst_useronly_template.h

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ static inline void clear_helper_retaddr(void)

 /* In user-only mode we provide only the _code and _data accessors. */

-#define MEMSUFFIX _data
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_useronly_template.h"
+uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr);
+uint32_t cpu_lduw_data(CPUArchState *env, abi_ptr ptr);
+uint32_t cpu_ldl_data(CPUArchState *env, abi_ptr ptr);
+uint64_t cpu_ldq_data(CPUArchState *env, abi_ptr ptr);
+int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr);
+int cpu_ldsw_data(CPUArchState *env, abi_ptr ptr);

-#define DATA_SIZE 2
-#include "exec/cpu_ldst_useronly_template.h"
+uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
+uint32_t cpu_lduw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
+uint32_t cpu_ldl_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
+uint64_t cpu_ldq_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
+int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);
+int cpu_ldsw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr);

-#define DATA_SIZE 4
-#include "exec/cpu_ldst_useronly_template.h"
+void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stw_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stl_data(CPUArchState *env, abi_ptr ptr, uint32_t val);
+void cpu_stq_data(CPUArchState *env, abi_ptr ptr, uint64_t val);

-#define DATA_SIZE 8
-#include "exec/cpu_ldst_useronly_template.h"
-#undef MEMSUFFIX
-
-#define MEMSUFFIX _code
-#define CODE_ACCESS
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_useronly_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_useronly_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_useronly_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_useronly_template.h"
-#undef MEMSUFFIX
-#undef CODE_ACCESS
+void cpu_stb_data_ra(CPUArchState *env, abi_ptr ptr,
+                     uint32_t val, uintptr_t retaddr);
+void cpu_stw_data_ra(CPUArchState *env, abi_ptr ptr,
+                     uint32_t val, uintptr_t retaddr);
+void cpu_stl_data_ra(CPUArchState *env, abi_ptr ptr,
+                     uint32_t val, uintptr_t retaddr);
+void cpu_stq_data_ra(CPUArchState *env, abi_ptr ptr,
+                     uint64_t val, uintptr_t retaddr);

 /*
  * Provide the same *_mmuidx_ra interface as for softmmu.
@@ -XXX,XX +XXX,XX @@ void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
 #undef CPU_MMU_INDEX
 #undef MEMSUFFIX

+#endif /* defined(CONFIG_USER_ONLY) */
+
 uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
 uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr);
 uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr);
@@ -XXX,XX +XXX,XX @@ static inline int cpu_ldsw_code(CPUArchState *env, abi_ptr addr)
     return (int16_t)cpu_lduw_code(env, addr);
 }

-#endif /* defined(CONFIG_USER_ONLY) */
-
 /**
  * tlb_vaddr_to_host:
  * @env: CPUArchState
diff --git a/include/exec/cpu_ldst_useronly_template.h b/include/exec/cpu_ldst_useronly_template.h
deleted file mode 100644
index XXXXXXX..XXXXXXX
--- a/include/exec/cpu_ldst_useronly_template.h
+++ /dev/null
@@ -XXX,XX +XXX,XX @@
-/*
- * User-only accessor function support
- *
- * Generate inline load/store functions for one data size.
- *
- * Generate a store function as well as signed and unsigned loads.
- *
- * Not used directly but included from cpu_ldst.h.
- *
- * Copyright (c) 2015 Linaro Limited
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with this library; if not, see <http://www.gnu.org/licenses/>.
- */
-
-#if !defined(CODE_ACCESS)
-#include "trace-root.h"
-#endif
-
-#include "trace/mem.h"
-
-#if DATA_SIZE == 8
-#define SUFFIX q
-#define USUFFIX q
-#define DATA_TYPE uint64_t
-#define SHIFT 3
-#elif DATA_SIZE == 4
-#define SUFFIX l
-#define USUFFIX l
-#define DATA_TYPE uint32_t
-#define SHIFT 2
-#elif DATA_SIZE == 2
-#define SUFFIX w
-#define USUFFIX uw
-#define DATA_TYPE uint16_t
-#define DATA_STYPE int16_t
-#define SHIFT 1
-#elif DATA_SIZE == 1
-#define SUFFIX b
-#define USUFFIX ub
-#define DATA_TYPE uint8_t
-#define DATA_STYPE int8_t
-#define SHIFT 0
-#else
-#error unsupported data size
-#endif
-
-#if DATA_SIZE == 8
-#define RES_TYPE uint64_t
-#else
-#define RES_TYPE uint32_t
-#endif
-
-static inline RES_TYPE
-glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr)
-{
-    RES_TYPE ret;
-#ifdef CODE_ACCESS
-    set_helper_retaddr(1);
-    ret = glue(glue(ld, USUFFIX), _p)(g2h(ptr));
-    clear_helper_retaddr();
-#else
-    MemOp op = MO_TE | SHIFT;
-    uint16_t meminfo = trace_mem_get_info(op, MMU_USER_IDX, false);
-    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
-    ret = glue(glue(ld, USUFFIX), _p)(g2h(ptr));
-#endif
-    return ret;
-}
-
-#ifndef CODE_ACCESS
-static inline RES_TYPE
-glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
-                                                  abi_ptr ptr,
-                                                  uintptr_t retaddr)
-{
-    RES_TYPE ret;
-    set_helper_retaddr(retaddr);
-    ret = glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(env, ptr);
-    clear_helper_retaddr();
-    return ret;
-}
-#endif
-
-#if DATA_SIZE <= 2
-static inline int
-glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr)
-{
-    int ret;
-#ifdef CODE_ACCESS
-    set_helper_retaddr(1);
-    ret = glue(glue(lds, SUFFIX), _p)(g2h(ptr));
-    clear_helper_retaddr();
-#else
-    MemOp op = MO_TE | MO_SIGN | SHIFT;
-    uint16_t meminfo = trace_mem_get_info(op, MMU_USER_IDX, false);
-    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
-    ret = glue(glue(lds, SUFFIX), _p)(g2h(ptr));
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
-#endif
-    return ret;
-}
-
-#ifndef CODE_ACCESS
-static inline int
-glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
-                                                  abi_ptr ptr,
-                                                  uintptr_t retaddr)
-{
-    int ret;
-    set_helper_retaddr(retaddr);
-    ret = glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(env, ptr);
-    clear_helper_retaddr();
-    return ret;
-}
-#endif /* CODE_ACCESS */
-#endif /* DATA_SIZE <= 2 */
-
-#ifndef CODE_ACCESS
-static inline void
-glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr,
-                                      RES_TYPE v)
-{
-    MemOp op = MO_TE | SHIFT;
-    uint16_t meminfo = trace_mem_get_info(op, MMU_USER_IDX, true);
-    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
-    glue(glue(st, SUFFIX), _p)(g2h(ptr), v);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
-}
-
-static inline void
-glue(glue(glue(cpu_st, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
-                                                 abi_ptr ptr,
-                                                 RES_TYPE v,
-                                                 uintptr_t retaddr)
-{
-    set_helper_retaddr(retaddr);
-    glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(env, ptr, v);
-    clear_helper_retaddr();
-}
-#endif
-
-#undef RES_TYPE
-#undef DATA_TYPE
-#undef DATA_STYPE
-#undef SUFFIX
-#undef USUFFIX
-#undef DATA_SIZE
-#undef SHIFT
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -XXX,XX +XXX,XX @@
 #include "translate-all.h"
 #include "exec/helper-proto.h"
 #include "qemu/atomic128.h"
+#include "trace-root.h"
+#include "trace/mem.h"

 #undef EAX
 #undef ECX
@@ -XXX,XX +XXX,XX @@ int cpu_signal_handler(int host_signum, void *pinfo,

 /* The softmmu versions of these helpers are in cputlb.c. */

+uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr)
+{
+    uint32_t ret;
+    uint16_t meminfo = trace_mem_get_info(MO_UB, MMU_USER_IDX, false);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    ret = ldub_p(g2h(ptr));
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+    return ret;
+}
+
+int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr)
+{
+    int ret;
+    uint16_t meminfo = trace_mem_get_info(MO_SB, MMU_USER_IDX, false);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    ret = ldsb_p(g2h(ptr));
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+    return ret;
+}
+
+uint32_t cpu_lduw_data(CPUArchState *env, abi_ptr ptr)
+{
+    uint32_t ret;
+    uint16_t meminfo = trace_mem_get_info(MO_TEUW, MMU_USER_IDX, false);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    ret = lduw_p(g2h(ptr));
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+    return ret;
+}
+
+int cpu_ldsw_data(CPUArchState *env, abi_ptr ptr)
+{
+    int ret;
+    uint16_t meminfo = trace_mem_get_info(MO_TESW, MMU_USER_IDX, false);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    ret = ldsw_p(g2h(ptr));
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+    return ret;
+}
+
+uint32_t cpu_ldl_data(CPUArchState *env, abi_ptr ptr)
+{
+    uint32_t ret;
+    uint16_t meminfo = trace_mem_get_info(MO_TEUL, MMU_USER_IDX, false);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    ret = ldl_p(g2h(ptr));
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+    return ret;
+}
+
+uint64_t cpu_ldq_data(CPUArchState *env, abi_ptr ptr)
+{
+    uint64_t ret;
+    uint16_t meminfo = trace_mem_get_info(MO_TEQ, MMU_USER_IDX, false);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    ret = ldq_p(g2h(ptr));
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+    return ret;
+}
+
+uint32_t cpu_ldub_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
+{
+    uint32_t ret;
+
+    set_helper_retaddr(retaddr);
+    ret = cpu_ldub_data(env, ptr);
+    clear_helper_retaddr();
+    return ret;
+}
+
+int cpu_ldsb_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
+{
+    int ret;
+
+    set_helper_retaddr(retaddr);
+    ret = cpu_ldsb_data(env, ptr);
+    clear_helper_retaddr();
+    return ret;
+}
+
+uint32_t cpu_lduw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
+{
+    uint32_t ret;
+
+    set_helper_retaddr(retaddr);
+    ret = cpu_lduw_data(env, ptr);
+    clear_helper_retaddr();
+    return ret;
+}
+
+int cpu_ldsw_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
+{
+    int ret;
+
+    set_helper_retaddr(retaddr);
+    ret = cpu_ldsw_data(env, ptr);
+    clear_helper_retaddr();
+    return ret;
+}
+
+uint32_t cpu_ldl_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
+{
+    uint32_t ret;
+
+    set_helper_retaddr(retaddr);
+    ret = cpu_ldl_data(env, ptr);
+    clear_helper_retaddr();
+    return ret;
+}
+
+uint64_t cpu_ldq_data_ra(CPUArchState *env, abi_ptr ptr, uintptr_t retaddr)
+{
+    uint64_t ret;
+
+    set_helper_retaddr(retaddr);
+    ret = cpu_ldq_data(env, ptr);
+    clear_helper_retaddr();
+    return ret;
+}
+
+void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
+{
+    uint16_t meminfo = trace_mem_get_info(MO_UB, MMU_USER_IDX, true);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
+    stb_p(g2h(ptr), val);
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
+}
+
+void cpu_stw_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
+{
+    uint16_t meminfo = trace_mem_get_info(MO_TEUW, MMU_USER_IDX, true);
+
+    trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
418
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
419
+ stw_p(g2h(ptr), val);
420
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
421
+}
422
+
423
+void cpu_stl_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
424
+{
425
+ uint16_t meminfo = trace_mem_get_info(MO_TEUL, MMU_USER_IDX, true);
426
+
427
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
428
+ stl_p(g2h(ptr), val);
429
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
430
+}
431
+
432
+void cpu_stq_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
433
+{
434
+ uint16_t meminfo = trace_mem_get_info(MO_TEQ, MMU_USER_IDX, true);
435
+
436
+ trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
437
+ stq_p(g2h(ptr), val);
438
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
439
+}
440
+
441
+void cpu_stb_data_ra(CPUArchState *env, abi_ptr ptr,
442
+ uint32_t val, uintptr_t retaddr)
443
+{
444
+ set_helper_retaddr(retaddr);
445
+ cpu_stb_data(env, ptr, val);
446
+ clear_helper_retaddr();
447
+}
448
+
449
+void cpu_stw_data_ra(CPUArchState *env, abi_ptr ptr,
450
+ uint32_t val, uintptr_t retaddr)
451
+{
452
+ set_helper_retaddr(retaddr);
453
+ cpu_stw_data(env, ptr, val);
454
+ clear_helper_retaddr();
455
+}
456
+
457
+void cpu_stl_data_ra(CPUArchState *env, abi_ptr ptr,
458
+ uint32_t val, uintptr_t retaddr)
459
+{
460
+ set_helper_retaddr(retaddr);
461
+ cpu_stl_data(env, ptr, val);
462
+ clear_helper_retaddr();
463
+}
464
+
465
+void cpu_stq_data_ra(CPUArchState *env, abi_ptr ptr,
466
+ uint64_t val, uintptr_t retaddr)
467
+{
468
+ set_helper_retaddr(retaddr);
469
+ cpu_stq_data(env, ptr, val);
470
+ clear_helper_retaddr();
471
+}
472
+
473
+uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
474
+{
475
+ uint32_t ret;
476
+
477
+ set_helper_retaddr(1);
478
+ ret = ldub_p(g2h(ptr));
479
+ clear_helper_retaddr();
480
+ return ret;
481
+}
482
+
483
+uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr ptr)
484
+{
485
+ uint32_t ret;
486
+
487
+ set_helper_retaddr(1);
488
+ ret = lduw_p(g2h(ptr));
489
+ clear_helper_retaddr();
490
+ return ret;
491
+}
492
+
493
+uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr ptr)
494
+{
495
+ uint32_t ret;
496
+
497
+ set_helper_retaddr(1);
498
+ ret = ldl_p(g2h(ptr));
499
+ clear_helper_retaddr();
500
+ return ret;
501
+}
502
+
503
+uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr ptr)
504
+{
505
+ uint64_t ret;
506
+
507
+ set_helper_retaddr(1);
508
+ ret = ldq_p(g2h(ptr));
509
+ clear_helper_retaddr();
510
+ return ret;
511
+}
512
+
513
/* Do not allow unaligned operations to proceed. Return the host address. */
514
static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
515
int size, uintptr_t retaddr)
516
--
167
--
517
2.20.1
168
2.34.1
518
169
519
170
diff view generated by jsdifflib
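As an aside, the user-only accessors added above all follow one shape: trace hook, direct host access (ldub_p(g2h(ptr)) and friends), plugin callback, with the _ra variants only adding the helper_retaddr bracketing. A minimal standalone sketch of that shape follows; sketch_*, trace_before and plugin_after are simplified stand-ins, not QEMU's declarations.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uintptr_t abi_ptr;

static __thread uintptr_t helper_retaddr;

static void set_helper_retaddr(uintptr_t ra) { helper_retaddr = ra; }
static void clear_helper_retaddr(void) { helper_retaddr = 0; }

/* Stand-ins for the trace and plugin hooks fired around each access. */
static void trace_before(abi_ptr ptr) { (void)ptr; }
static void plugin_after(abi_ptr ptr) { (void)ptr; }

/* In user-only mode the guest address is directly host-addressable,
 * so the access itself is a plain host load. */
static uint32_t sketch_ldub_data(abi_ptr ptr)
{
    uint8_t ret;

    trace_before(ptr);
    memcpy(&ret, (const void *)ptr, sizeof(ret));
    plugin_after(ptr);
    return ret;
}

/* The _ra variant adds only the retaddr bracketing, so a fault taken
 * inside the access can be attributed to the calling helper. */
static uint32_t sketch_ldub_data_ra(abi_ptr ptr, uintptr_t retaddr)
{
    uint32_t ret;

    set_helper_retaddr(retaddr);
    ret = sketch_ldub_data(ptr);
    clear_helper_retaddr();
    return ret;
}

int main(void)
{
    uint8_t buf = 0x5a;
    printf("%u\n", sketch_ldub_data_ra((abi_ptr)&buf, 0));
    return 0;
}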
1
The generated *_user functions are unused. The *_kernel functions
1
A bool is a more appropriate type for the alloc parameter.
2
have a couple of users in op_helper.c; use *_mmuidx_ra instead,
3
with MMU_KERNEL_IDX.
4
2
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
6
---
10
v2: Use *_mmuidx_ra directly, without intermediate macros.
7
accel/tcg/translate-all.c | 14 +++++++-------
11
---
8
1 file changed, 7 insertions(+), 7 deletions(-)
12
target/m68k/cpu.h | 2 --
13
target/m68k/op_helper.c | 77 +++++++++++++++++++++++++----------------
14
2 files changed, 47 insertions(+), 32 deletions(-)
15
9
16
diff --git a/target/m68k/cpu.h b/target/m68k/cpu.h
10
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
17
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
18
--- a/target/m68k/cpu.h
12
--- a/accel/tcg/translate-all.c
19
+++ b/target/m68k/cpu.h
13
+++ b/accel/tcg/translate-all.c
20
@@ -XXX,XX +XXX,XX @@ enum {
14
@@ -XXX,XX +XXX,XX @@ void page_init(void)
21
#define cpu_list m68k_cpu_list
15
#endif
22
23
/* MMU modes definitions */
24
-#define MMU_MODE0_SUFFIX _kernel
25
-#define MMU_MODE1_SUFFIX _user
26
#define MMU_KERNEL_IDX 0
27
#define MMU_USER_IDX 1
28
static inline int cpu_mmu_index (CPUM68KState *env, bool ifetch)
29
diff --git a/target/m68k/op_helper.c b/target/m68k/op_helper.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/m68k/op_helper.c
32
+++ b/target/m68k/op_helper.c
33
@@ -XXX,XX +XXX,XX @@ static void cf_rte(CPUM68KState *env)
34
uint32_t fmt;
35
36
sp = env->aregs[7];
37
- fmt = cpu_ldl_kernel(env, sp);
38
- env->pc = cpu_ldl_kernel(env, sp + 4);
39
+ fmt = cpu_ldl_mmuidx_ra(env, sp, MMU_KERNEL_IDX, 0);
40
+ env->pc = cpu_ldl_mmuidx_ra(env, sp + 4, MMU_KERNEL_IDX, 0);
41
sp |= (fmt >> 28) & 3;
42
env->aregs[7] = sp + 8;
43
44
@@ -XXX,XX +XXX,XX @@ static void m68k_rte(CPUM68KState *env)
45
46
sp = env->aregs[7];
47
throwaway:
48
- sr = cpu_lduw_kernel(env, sp);
49
+ sr = cpu_lduw_mmuidx_ra(env, sp, MMU_KERNEL_IDX, 0);
50
sp += 2;
51
- env->pc = cpu_ldl_kernel(env, sp);
52
+ env->pc = cpu_ldl_mmuidx_ra(env, sp, MMU_KERNEL_IDX, 0);
53
sp += 4;
54
if (m68k_feature(env, M68K_FEATURE_QUAD_MULDIV)) {
55
/* all except 68000 */
56
- fmt = cpu_lduw_kernel(env, sp);
57
+ fmt = cpu_lduw_mmuidx_ra(env, sp, MMU_KERNEL_IDX, 0);
58
sp += 2;
59
switch (fmt >> 12) {
60
case 0:
61
@@ -XXX,XX +XXX,XX @@ static void cf_interrupt_all(CPUM68KState *env, int is_hw)
62
/* ??? This could cause MMU faults. */
63
sp &= ~3;
64
sp -= 4;
65
- cpu_stl_kernel(env, sp, retaddr);
66
+ cpu_stl_mmuidx_ra(env, sp, retaddr, MMU_KERNEL_IDX, 0);
67
sp -= 4;
68
- cpu_stl_kernel(env, sp, fmt);
69
+ cpu_stl_mmuidx_ra(env, sp, fmt, MMU_KERNEL_IDX, 0);
70
env->aregs[7] = sp;
71
/* Jump to vector. */
72
- env->pc = cpu_ldl_kernel(env, env->vbr + vector);
73
+ env->pc = cpu_ldl_mmuidx_ra(env, env->vbr + vector, MMU_KERNEL_IDX, 0);
74
}
16
}
75
17
76
static inline void do_stack_frame(CPUM68KState *env, uint32_t *sp,
18
-static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
77
@@ -XXX,XX +XXX,XX @@ static inline void do_stack_frame(CPUM68KState *env, uint32_t *sp,
19
+static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
78
switch (format) {
20
{
79
case 4:
21
PageDesc *pd;
80
*sp -= 4;
22
void **lp;
81
- cpu_stl_kernel(env, *sp, env->pc);
23
@@ -XXX,XX +XXX,XX @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
82
+ cpu_stl_mmuidx_ra(env, *sp, env->pc, MMU_KERNEL_IDX, 0);
24
83
*sp -= 4;
25
static inline PageDesc *page_find(tb_page_addr_t index)
84
- cpu_stl_kernel(env, *sp, addr);
26
{
85
+ cpu_stl_mmuidx_ra(env, *sp, addr, MMU_KERNEL_IDX, 0);
27
- return page_find_alloc(index, 0);
86
break;
28
+ return page_find_alloc(index, false);
87
case 3:
88
case 2:
89
*sp -= 4;
90
- cpu_stl_kernel(env, *sp, addr);
91
+ cpu_stl_mmuidx_ra(env, *sp, addr, MMU_KERNEL_IDX, 0);
92
break;
93
}
94
*sp -= 2;
95
- cpu_stw_kernel(env, *sp, (format << 12) + (cs->exception_index << 2));
96
+ cpu_stw_mmuidx_ra(env, *sp, (format << 12) + (cs->exception_index << 2),
97
+ MMU_KERNEL_IDX, 0);
98
}
99
*sp -= 4;
100
- cpu_stl_kernel(env, *sp, retaddr);
101
+ cpu_stl_mmuidx_ra(env, *sp, retaddr, MMU_KERNEL_IDX, 0);
102
*sp -= 2;
103
- cpu_stw_kernel(env, *sp, sr);
104
+ cpu_stw_mmuidx_ra(env, *sp, sr, MMU_KERNEL_IDX, 0);
105
}
29
}
106
30
107
static void m68k_interrupt_all(CPUM68KState *env, int is_hw)
31
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
108
@@ -XXX,XX +XXX,XX @@ static void m68k_interrupt_all(CPUM68KState *env, int is_hw)
32
- PageDesc **ret_p2, tb_page_addr_t phys2, int alloc);
109
cpu_abort(cs, "DOUBLE MMU FAULT\n");
33
+ PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc);
110
}
34
111
env->mmu.fault = true;
35
/* In user-mode page locks aren't used; mmap_lock is enough */
112
+ /* push data 3 */
36
#ifdef CONFIG_USER_ONLY
113
sp -= 4;
37
@@ -XXX,XX +XXX,XX @@ static inline void page_unlock(PageDesc *pd)
114
- cpu_stl_kernel(env, sp, 0); /* push data 3 */
38
/* lock the page(s) of a TB in the correct acquisition order */
115
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
39
static inline void page_lock_tb(const TranslationBlock *tb)
116
+ /* push data 2 */
40
{
117
sp -= 4;
41
- page_lock_pair(NULL, tb->page_addr[0], NULL, tb->page_addr[1], 0);
118
- cpu_stl_kernel(env, sp, 0); /* push data 2 */
42
+ page_lock_pair(NULL, tb->page_addr[0], NULL, tb->page_addr[1], false);
119
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
120
+ /* push data 1 */
121
sp -= 4;
122
- cpu_stl_kernel(env, sp, 0); /* push data 1 */
123
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
124
+ /* write back 1 / push data 0 */
125
sp -= 4;
126
- cpu_stl_kernel(env, sp, 0); /* write back 1 / push data 0 */
127
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
128
+ /* write back 1 address */
129
sp -= 4;
130
- cpu_stl_kernel(env, sp, 0); /* write back 1 address */
131
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
132
+ /* write back 2 data */
133
sp -= 4;
134
- cpu_stl_kernel(env, sp, 0); /* write back 2 data */
135
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
136
+ /* write back 2 address */
137
sp -= 4;
138
- cpu_stl_kernel(env, sp, 0); /* write back 2 address */
139
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
140
+ /* write back 3 data */
141
sp -= 4;
142
- cpu_stl_kernel(env, sp, 0); /* write back 3 data */
143
+ cpu_stl_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
144
+ /* write back 3 address */
145
sp -= 4;
146
- cpu_stl_kernel(env, sp, env->mmu.ar); /* write back 3 address */
147
+ cpu_stl_mmuidx_ra(env, sp, env->mmu.ar, MMU_KERNEL_IDX, 0);
148
+ /* fault address */
149
sp -= 4;
150
- cpu_stl_kernel(env, sp, env->mmu.ar); /* fault address */
151
+ cpu_stl_mmuidx_ra(env, sp, env->mmu.ar, MMU_KERNEL_IDX, 0);
152
+ /* write back 1 status */
153
sp -= 2;
154
- cpu_stw_kernel(env, sp, 0); /* write back 1 status */
155
+ cpu_stw_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
156
+ /* write back 2 status */
157
sp -= 2;
158
- cpu_stw_kernel(env, sp, 0); /* write back 2 status */
159
+ cpu_stw_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
160
+ /* write back 3 status */
161
sp -= 2;
162
- cpu_stw_kernel(env, sp, 0); /* write back 3 status */
163
+ cpu_stw_mmuidx_ra(env, sp, 0, MMU_KERNEL_IDX, 0);
164
+ /* special status word */
165
sp -= 2;
166
- cpu_stw_kernel(env, sp, env->mmu.ssw); /* special status word */
167
+ cpu_stw_mmuidx_ra(env, sp, env->mmu.ssw, MMU_KERNEL_IDX, 0);
168
+ /* effective address */
169
sp -= 4;
170
- cpu_stl_kernel(env, sp, env->mmu.ar); /* effective address */
171
+ cpu_stl_mmuidx_ra(env, sp, env->mmu.ar, MMU_KERNEL_IDX, 0);
172
+
173
do_stack_frame(env, &sp, 7, oldsr, 0, retaddr);
174
env->mmu.fault = false;
175
if (qemu_loglevel_mask(CPU_LOG_INT)) {
176
@@ -XXX,XX +XXX,XX @@ static void m68k_interrupt_all(CPUM68KState *env, int is_hw)
177
178
env->aregs[7] = sp;
179
/* Jump to vector. */
180
- env->pc = cpu_ldl_kernel(env, env->vbr + vector);
181
+ env->pc = cpu_ldl_mmuidx_ra(env, env->vbr + vector, MMU_KERNEL_IDX, 0);
182
}
43
}
183
44
184
static void do_interrupt_all(CPUM68KState *env, int is_hw)
45
static inline void page_unlock_tb(const TranslationBlock *tb)
46
@@ -XXX,XX +XXX,XX @@ void page_collection_unlock(struct page_collection *set)
47
#endif /* !CONFIG_USER_ONLY */
48
49
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
50
- PageDesc **ret_p2, tb_page_addr_t phys2, int alloc)
51
+ PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
52
{
53
PageDesc *p1, *p2;
54
tb_page_addr_t page1;
55
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
56
* Note that inserting into the hash table first isn't an option, since
57
* we can only insert TBs that are fully initialized.
58
*/
59
- page_lock_pair(&p, phys_pc, &p2, phys_page2, 1);
60
+ page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
61
tb_page_add(p, tb, 0, phys_pc & TARGET_PAGE_MASK);
62
if (p2) {
63
tb_page_add(p2, tb, 1, phys_page2);
64
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
65
for (addr = start, len = end - start;
66
len != 0;
67
len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
68
- PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
69
+ PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);
70
71
/* If the write protection bit is set, then we invalidate
72
the code inside. */
185
--
73
--
186
2.20.1
74
2.34.1
187
75
188
76
diff view generated by jsdifflib
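For reference, the conversion above is mechanical: every cpu_<op>_kernel(env, addr[, val]) becomes cpu_<op>_mmuidx_ra(env, addr[, val], MMU_KERNEL_IDX, 0). A hypothetical shim showing the exact mapping is sketched below; the patch edits each call site directly instead, and CPUM68KState is left opaque here.

#include <stdint.h>

typedef struct CPUM68KState CPUM68KState;

enum { MMU_KERNEL_IDX = 0, MMU_USER_IDX = 1 };

/* The generic accessor from the cpu_ldst.h interface. */
uint32_t cpu_ldl_mmuidx_ra(CPUM68KState *env, uint32_t addr,
                           int mmu_idx, uintptr_t retaddr);

/* retaddr 0 matches the converted call sites, which run outside
 * translated code, so there is no host return address to unwind. */
static inline uint32_t cpu_ldl_kernel_shim(CPUM68KState *env,
                                           uint32_t addr)
{
    return cpu_ldl_mmuidx_ra(env, addr, MMU_KERNEL_IDX, 0);
}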
1
Recent toolchains support static and pie at the same time.
1
Use the pc coming from db->pc_first rather than the TB.
2
2
3
As with normal dynamic builds, allow --static to default to PIE
3
Use the cached host_addr rather than re-computing for the
4
if supported by the toolchain. Allow --enable/--disable-pie to
4
first page. We still need a separate lookup for the second
5
override the default.
5
page because it won't be computed for DisasContextBase until
6
the translator actually performs a read from the page.
6
7
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
10
---
10
v2: Fix --disable-pie --static
11
include/exec/plugin-gen.h | 7 ++++---
11
---
12
accel/tcg/plugin-gen.c | 22 +++++++++++-----------
12
configure | 19 ++++++++++++-------
13
accel/tcg/translator.c | 2 +-
13
1 file changed, 12 insertions(+), 7 deletions(-)
14
3 files changed, 16 insertions(+), 15 deletions(-)
14
15
15
diff --git a/configure b/configure
16
diff --git a/include/exec/plugin-gen.h b/include/exec/plugin-gen.h
16
index XXXXXXX..XXXXXXX 100755
17
index XXXXXXX..XXXXXXX 100644
17
--- a/configure
18
--- a/include/exec/plugin-gen.h
18
+++ b/configure
19
+++ b/include/exec/plugin-gen.h
19
@@ -XXX,XX +XXX,XX @@ for opt do
20
@@ -XXX,XX +XXX,XX @@ struct DisasContextBase;
20
;;
21
21
--static)
22
#ifdef CONFIG_PLUGIN
22
static="yes"
23
23
- LDFLAGS="-static $LDFLAGS"
24
-bool plugin_gen_tb_start(CPUState *cpu, const TranslationBlock *tb, bool supress);
24
QEMU_PKG_CONFIG_FLAGS="--static $QEMU_PKG_CONFIG_FLAGS"
25
+bool plugin_gen_tb_start(CPUState *cpu, const struct DisasContextBase *db,
25
;;
26
+ bool supress);
26
--mandir=*) mandir="$optarg"
27
void plugin_gen_tb_end(CPUState *cpu);
27
@@ -XXX,XX +XXX,XX @@ if test "$static" = "yes" ; then
28
void plugin_gen_insn_start(CPUState *cpu, const struct DisasContextBase *db);
28
if test "$modules" = "yes" ; then
29
void plugin_gen_insn_end(void);
29
error_exit "static and modules are mutually incompatible"
30
@@ -XXX,XX +XXX,XX @@ static inline void plugin_insn_append(abi_ptr pc, const void *from, size_t size)
30
fi
31
31
- if test "$pie" = "yes" ; then
32
#else /* !CONFIG_PLUGIN */
32
- error_exit "static and pie are mutually incompatible"
33
33
- else
34
-static inline
34
- pie="no"
35
-bool plugin_gen_tb_start(CPUState *cpu, const TranslationBlock *tb, bool supress)
35
- fi
36
+static inline bool
36
fi
37
+plugin_gen_tb_start(CPUState *cpu, const struct DisasContextBase *db, bool sup)
37
38
{
38
# Unconditional check for compiler __thread support
39
return false;
39
@@ -XXX,XX +XXX,XX @@ if compile_prog "-Werror -fno-pie" "-no-pie"; then
40
}
40
LDFLAGS_NOPIE="-no-pie"
41
diff --git a/accel/tcg/plugin-gen.c b/accel/tcg/plugin-gen.c
41
fi
42
index XXXXXXX..XXXXXXX 100644
42
43
--- a/accel/tcg/plugin-gen.c
43
-if test "$pie" = "no"; then
44
+++ b/accel/tcg/plugin-gen.c
44
+if test "$static" = "yes"; then
45
@@ -XXX,XX +XXX,XX @@ static void plugin_gen_inject(const struct qemu_plugin_tb *plugin_tb)
45
+ if test "$pie" != "no" && compile_prog "-fPIE -DPIE" "-static-pie"; then
46
pr_ops();
46
+ QEMU_CFLAGS="-fPIE -DPIE $QEMU_CFLAGS"
47
}
47
+ LDFLAGS="-static-pie $LDFLAGS"
48
48
+ pie="yes"
49
-bool plugin_gen_tb_start(CPUState *cpu, const TranslationBlock *tb, bool mem_only)
49
+ elif test "$pie" = "yes"; then
50
+bool plugin_gen_tb_start(CPUState *cpu, const DisasContextBase *db,
50
+ error_exit "-static-pie not available due to missing toolchain support"
51
+ bool mem_only)
51
+ else
52
{
52
+ LDFLAGS="-static $LDFLAGS"
53
bool ret = false;
53
+ pie="no"
54
54
+ fi
55
@@ -XXX,XX +XXX,XX @@ bool plugin_gen_tb_start(CPUState *cpu, const TranslationBlock *tb, bool mem_onl
55
+elif test "$pie" = "no"; then
56
56
QEMU_CFLAGS="$CFLAGS_NOPIE $QEMU_CFLAGS"
57
ret = true;
57
LDFLAGS="$LDFLAGS_NOPIE $LDFLAGS"
58
58
elif compile_prog "-fPIE -DPIE" "-pie"; then
59
- ptb->vaddr = tb->pc;
60
+ ptb->vaddr = db->pc_first;
61
ptb->vaddr2 = -1;
62
- get_page_addr_code_hostp(cpu->env_ptr, tb->pc, &ptb->haddr1);
63
+ ptb->haddr1 = db->host_addr[0];
64
ptb->haddr2 = NULL;
65
ptb->mem_only = mem_only;
66
67
@@ -XXX,XX +XXX,XX @@ void plugin_gen_insn_start(CPUState *cpu, const DisasContextBase *db)
68
* Note that we skip this when haddr1 == NULL, e.g. when we're
69
* fetching instructions from a region not backed by RAM.
70
*/
71
- if (likely(ptb->haddr1 != NULL && ptb->vaddr2 == -1) &&
72
- unlikely((db->pc_next & TARGET_PAGE_MASK) !=
73
- (db->pc_first & TARGET_PAGE_MASK))) {
74
- get_page_addr_code_hostp(cpu->env_ptr, db->pc_next,
75
- &ptb->haddr2);
76
- ptb->vaddr2 = db->pc_next;
77
- }
78
- if (likely(ptb->vaddr2 == -1)) {
79
+ if (ptb->haddr1 == NULL) {
80
+ pinsn->haddr = NULL;
81
+ } else if (is_same_page(db, db->pc_next)) {
82
pinsn->haddr = ptb->haddr1 + pinsn->vaddr - ptb->vaddr;
83
} else {
84
+ if (ptb->vaddr2 == -1) {
85
+ ptb->vaddr2 = TARGET_PAGE_ALIGN(db->pc_first);
86
+ get_page_addr_code_hostp(cpu->env_ptr, ptb->vaddr2, &ptb->haddr2);
87
+ }
88
pinsn->haddr = ptb->haddr2 + pinsn->vaddr - ptb->vaddr2;
89
}
90
}
91
diff --git a/accel/tcg/translator.c b/accel/tcg/translator.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/accel/tcg/translator.c
94
+++ b/accel/tcg/translator.c
95
@@ -XXX,XX +XXX,XX @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int max_insns,
96
ops->tb_start(db, cpu);
97
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
98
99
- plugin_enabled = plugin_gen_tb_start(cpu, tb, cflags & CF_MEMI_ONLY);
100
+ plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
101
102
while (true) {
103
db->num_insns++;
59
--
104
--
60
2.20.1
105
2.34.1
61
106
62
107
diff view generated by jsdifflib
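The haddr resolution that the plugin hunks above implement can be condensed into one function. A sketch assuming 4KiB pages, with ptb_sketch and lookup_fn as simplified stand-ins for the plugin TB bookkeeping and get_page_addr_code_hostp():

#include <stddef.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE ((uint64_t)1 << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))
#define TARGET_PAGE_ALIGN(a) (((a) + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK)

struct ptb_sketch {
    uint64_t vaddr1, vaddr2;   /* guest start of first/second mapping */
    void *haddr1, *haddr2;     /* host mapping of each, or NULL */
};

typedef void *(*lookup_fn)(uint64_t page);

static void *insn_haddr(struct ptb_sketch *p, uint64_t pc_first,
                        uint64_t insn_va, lookup_fn lookup)
{
    if (p->haddr1 == NULL) {
        /* Not backed by RAM: no host address at all. */
        return NULL;
    }
    if ((insn_va & TARGET_PAGE_MASK) == (pc_first & TARGET_PAGE_MASK)) {
        /* is_same_page(): offset within the cached first mapping. */
        return (char *)p->haddr1 + (insn_va - p->vaddr1);
    }
    if (p->vaddr2 == (uint64_t)-1) {
        /* Second page resolved lazily, at most once per TB. */
        p->vaddr2 = TARGET_PAGE_ALIGN(pc_first);
        p->haddr2 = lookup(p->vaddr2);
    }
    return (char *)p->haddr2 + (insn_va - p->vaddr2);
}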
1
The generated functions aside from *_real are unused.
1
Let tb->page_addr[0] contain the address of the first byte of the
2
The *_real functions have a couple of users in mem_helper.c;
2
translated block, rather than the address of the page containing the
3
use *_mmuidx_ra instead, with MMU_REAL_IDX.
3
start of the translated block. We need to recover this value anyway
4
at various points, and it is easier to discard a page offset when it
5
is not needed, which happens naturally via the existing find_page shift.
4
6
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: David Hildenbrand <david@redhat.com>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
9
---
10
v2: Use *_mmuidx_ra directly, without intermediate macros.
10
accel/tcg/cpu-exec.c | 16 ++++++++--------
11
---
11
accel/tcg/cputlb.c | 3 ++-
12
target/s390x/cpu.h | 5 -----
12
accel/tcg/translate-all.c | 9 +++++----
13
target/s390x/mem_helper.c | 10 +++++-----
13
3 files changed, 15 insertions(+), 13 deletions(-)
14
2 files changed, 5 insertions(+), 10 deletions(-)
15
14
16
diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
15
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/target/s390x/cpu.h
17
--- a/accel/tcg/cpu-exec.c
19
+++ b/target/s390x/cpu.h
18
+++ b/accel/tcg/cpu-exec.c
20
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ struct tb_desc {
21
20
target_ulong pc;
22
#define TARGET_INSN_START_EXTRA_WORDS 2
21
target_ulong cs_base;
23
22
CPUArchState *env;
24
-#define MMU_MODE0_SUFFIX _primary
23
- tb_page_addr_t phys_page1;
25
-#define MMU_MODE1_SUFFIX _secondary
24
+ tb_page_addr_t page_addr0;
26
-#define MMU_MODE2_SUFFIX _home
25
uint32_t flags;
27
-#define MMU_MODE3_SUFFIX _real
26
uint32_t cflags;
28
-
27
uint32_t trace_vcpu_dstate;
29
#define MMU_USER_IDX 0
28
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
30
29
const struct tb_desc *desc = d;
31
#define S390_MAX_CPUS 248
30
32
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
31
if (tb->pc == desc->pc &&
33
index XXXXXXX..XXXXXXX 100644
32
- tb->page_addr[0] == desc->phys_page1 &&
34
--- a/target/s390x/mem_helper.c
33
+ tb->page_addr[0] == desc->page_addr0 &&
35
+++ b/target/s390x/mem_helper.c
34
tb->cs_base == desc->cs_base &&
36
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(testblock)(CPUS390XState *env, uint64_t real_addr)
35
tb->flags == desc->flags &&
37
real_addr = wrap_address(env, real_addr) & TARGET_PAGE_MASK;
36
tb->trace_vcpu_dstate == desc->trace_vcpu_dstate &&
38
37
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
39
for (i = 0; i < TARGET_PAGE_SIZE; i += 8) {
38
if (tb->page_addr[1] == -1) {
40
- cpu_stq_real_ra(env, real_addr + i, 0, ra);
39
return true;
41
+ cpu_stq_mmuidx_ra(env, real_addr + i, 0, MMU_REAL_IDX, ra);
40
} else {
42
}
41
- tb_page_addr_t phys_page2;
43
42
- target_ulong virt_page2;
44
return 0;
43
+ tb_page_addr_t phys_page1;
45
@@ -XXX,XX +XXX,XX @@ void HELPER(idte)(CPUS390XState *env, uint64_t r1, uint64_t r2, uint32_t m4)
44
+ target_ulong virt_page1;
46
for (i = 0; i < entries; i++) {
45
47
/* addresses are not wrapped in 24/31bit mode but table index is */
46
/*
48
raddr = table + ((index + i) & 0x7ff) * sizeof(entry);
47
* We know that the first page matched, and an otherwise valid TB
49
- entry = cpu_ldq_real_ra(env, raddr, ra);
48
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
50
+ entry = cpu_ldq_mmuidx_ra(env, raddr, MMU_REAL_IDX, ra);
49
* is different for the new TB. Therefore any exception raised
51
if (!(entry & REGION_ENTRY_I)) {
50
* here by the faulting lookup is not premature.
52
/* we are allowed to not store if already invalid */
51
*/
53
entry |= REGION_ENTRY_I;
52
- virt_page2 = TARGET_PAGE_ALIGN(desc->pc);
54
- cpu_stq_real_ra(env, raddr, entry, ra);
53
- phys_page2 = get_page_addr_code(desc->env, virt_page2);
55
+ cpu_stq_mmuidx_ra(env, raddr, entry, MMU_REAL_IDX, ra);
54
- if (tb->page_addr[1] == phys_page2) {
55
+ virt_page1 = TARGET_PAGE_ALIGN(desc->pc);
56
+ phys_page1 = get_page_addr_code(desc->env, virt_page1);
57
+ if (tb->page_addr[1] == phys_page1) {
58
return true;
56
}
59
}
57
}
60
}
61
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
62
if (phys_pc == -1) {
63
return NULL;
58
}
64
}
59
@@ -XXX,XX +XXX,XX @@ void HELPER(ipte)(CPUS390XState *env, uint64_t pto, uint64_t vaddr,
65
- desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
60
pte_addr += VADDR_PAGE_TX(vaddr) * 8;
66
+ desc.page_addr0 = phys_pc;
61
67
h = tb_hash_func(phys_pc, pc, flags, cflags, *cpu->trace_dstate);
62
/* Mark the page table entry as invalid */
68
return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
63
- pte = cpu_ldq_real_ra(env, pte_addr, ra);
69
}
64
+ pte = cpu_ldq_mmuidx_ra(env, pte_addr, MMU_REAL_IDX, ra);
70
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
65
pte |= PAGE_ENTRY_I;
71
index XXXXXXX..XXXXXXX 100644
66
- cpu_stq_real_ra(env, pte_addr, pte, ra);
72
--- a/accel/tcg/cputlb.c
67
+ cpu_stq_mmuidx_ra(env, pte_addr, pte, MMU_REAL_IDX, ra);
73
+++ b/accel/tcg/cputlb.c
68
74
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
69
/* XXX we exploit the fact that Linux passes the exact virtual
75
can be detected */
70
address here - it's not obliged to! */
76
void tlb_protect_code(ram_addr_t ram_addr)
77
{
78
- cpu_physical_memory_test_and_clear_dirty(ram_addr, TARGET_PAGE_SIZE,
79
+ cpu_physical_memory_test_and_clear_dirty(ram_addr & TARGET_PAGE_MASK,
80
+ TARGET_PAGE_SIZE,
81
DIRTY_MEMORY_CODE);
82
}
83
84
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/accel/tcg/translate-all.c
87
+++ b/accel/tcg/translate-all.c
88
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
89
qemu_spin_unlock(&tb->jmp_lock);
90
91
/* remove the TB from the hash list */
92
- phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
93
+ phys_pc = tb->page_addr[0];
94
h = tb_hash_func(phys_pc, tb->pc, tb->flags, orig_cflags,
95
tb->trace_vcpu_dstate);
96
if (!qht_remove(&tb_ctx.htable, tb, h)) {
97
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
98
* we can only insert TBs that are fully initialized.
99
*/
100
page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
101
- tb_page_add(p, tb, 0, phys_pc & TARGET_PAGE_MASK);
102
+ tb_page_add(p, tb, 0, phys_pc);
103
if (p2) {
104
tb_page_add(p2, tb, 1, phys_page2);
105
} else {
106
@@ -XXX,XX +XXX,XX @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
107
if (n == 0) {
108
/* NOTE: tb_end may be after the end of the page, but
109
it is not a problem */
110
- tb_start = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
111
+ tb_start = tb->page_addr[0];
112
tb_end = tb_start + tb->size;
113
} else {
114
tb_start = tb->page_addr[1];
115
- tb_end = tb_start + ((tb->pc + tb->size) & ~TARGET_PAGE_MASK);
116
+ tb_end = tb_start + ((tb->page_addr[0] + tb->size)
117
+ & ~TARGET_PAGE_MASK);
118
}
119
if (!(tb_end <= start || tb_start >= end)) {
120
#ifdef TARGET_HAS_PRECISE_SMC
71
--
121
--
72
2.20.1
122
2.34.1
73
123
74
124
diff view generated by jsdifflib
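A small sketch of what the new page_addr[0] convention means for its consumers, assuming 4KiB pages and with tb_sketch standing in for TranslationBlock: discarding the offset is a single mask, and the second-page extent no longer needs tb->pc at all.

#include <stdint.h>

typedef uint64_t tb_page_addr_t;

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(tb_page_addr_t)((1u << TARGET_PAGE_BITS) - 1))

struct tb_sketch {
    /* [0] is now the physical address of the first byte;
     * [1] is still the second page, or -1 if unused. */
    tb_page_addr_t page_addr[2];
    uint32_t size;
};

static tb_page_addr_t tb_page0(const struct tb_sketch *tb)
{
    return tb->page_addr[0] & TARGET_PAGE_MASK;
}

/* Matches the tb_invalidate_phys_page_range__locked() hunk above. */
static tb_page_addr_t tb_end_on_page1(const struct tb_sketch *tb)
{
    return tb->page_addr[1]
           + ((tb->page_addr[0] + tb->size) & ~TARGET_PAGE_MASK);
}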
1
The separate suffixed functions were used to construct
1
This function has two users, which use it incompatibly.
2
a do_##insn function switched on mmu_idx. The interface
2
In tlb_flush_page_by_mmuidx_async_0, when flushing a
3
is identical to that of the *_mmuidx_ra functions. Replace
3
single page, we need to flush exactly two pages.
4
them directly and remove the constructions.
4
In tlb_flush_range_by_mmuidx_async_0, when flushing a
5
range of pages, we need to flush N+1 pages.
5
6
6
Cc: Aurelien Jarno <aurelien@aurel32.net>
7
This avoids double-flushing of jmp cache pages in a range.
7
Cc: Aleksandar Rikalo <aleksandar.rikalo@rt-rk.com>
8
8
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
---
11
---
13
target/mips/cpu.h | 4 -
12
accel/tcg/cputlb.c | 25 ++++++++++++++-----------
14
target/mips/op_helper.c | 182 +++++++++++++---------------------------
13
1 file changed, 14 insertions(+), 11 deletions(-)
15
2 files changed, 60 insertions(+), 126 deletions(-)
16
14
17
diff --git a/target/mips/cpu.h b/target/mips/cpu.h
15
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/mips/cpu.h
17
--- a/accel/tcg/cputlb.c
20
+++ b/target/mips/cpu.h
18
+++ b/accel/tcg/cputlb.c
21
@@ -XXX,XX +XXX,XX @@ extern uint32_t cpu_rddsp(uint32_t mask_num, CPUMIPSState *env);
19
@@ -XXX,XX +XXX,XX @@ static void tb_jmp_cache_clear_page(CPUState *cpu, target_ulong page_addr)
22
* MMU modes definitions. We carefully match the indices with our
23
* hflags layout.
24
*/
25
-#define MMU_MODE0_SUFFIX _kernel
26
-#define MMU_MODE1_SUFFIX _super
27
-#define MMU_MODE2_SUFFIX _user
28
-#define MMU_MODE3_SUFFIX _error
29
#define MMU_USER_IDX 2
30
31
static inline int hflags_mmu_index(uint32_t hflags)
32
diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/mips/op_helper.c
35
+++ b/target/mips/op_helper.c
36
@@ -XXX,XX +XXX,XX @@ static void raise_exception(CPUMIPSState *env, uint32_t exception)
37
do_raise_exception(env, exception, 0);
38
}
39
40
-#if defined(CONFIG_USER_ONLY)
41
-#define HELPER_LD(name, insn, type) \
42
-static inline type do_##name(CPUMIPSState *env, target_ulong addr, \
43
- int mem_idx, uintptr_t retaddr) \
44
-{ \
45
- return (type) cpu_##insn##_data_ra(env, addr, retaddr); \
46
-}
47
-#else
48
-#define HELPER_LD(name, insn, type) \
49
-static inline type do_##name(CPUMIPSState *env, target_ulong addr, \
50
- int mem_idx, uintptr_t retaddr) \
51
-{ \
52
- switch (mem_idx) { \
53
- case 0: return (type) cpu_##insn##_kernel_ra(env, addr, retaddr); \
54
- case 1: return (type) cpu_##insn##_super_ra(env, addr, retaddr); \
55
- default: \
56
- case 2: return (type) cpu_##insn##_user_ra(env, addr, retaddr); \
57
- case 3: return (type) cpu_##insn##_error_ra(env, addr, retaddr); \
58
- } \
59
-}
60
-#endif
61
-HELPER_LD(lw, ldl, int32_t)
62
-#if defined(TARGET_MIPS64)
63
-HELPER_LD(ld, ldq, int64_t)
64
-#endif
65
-#undef HELPER_LD
66
-
67
-#if defined(CONFIG_USER_ONLY)
68
-#define HELPER_ST(name, insn, type) \
69
-static inline void do_##name(CPUMIPSState *env, target_ulong addr, \
70
- type val, int mem_idx, uintptr_t retaddr) \
71
-{ \
72
- cpu_##insn##_data_ra(env, addr, val, retaddr); \
73
-}
74
-#else
75
-#define HELPER_ST(name, insn, type) \
76
-static inline void do_##name(CPUMIPSState *env, target_ulong addr, \
77
- type val, int mem_idx, uintptr_t retaddr) \
78
-{ \
79
- switch (mem_idx) { \
80
- case 0: \
81
- cpu_##insn##_kernel_ra(env, addr, val, retaddr); \
82
- break; \
83
- case 1: \
84
- cpu_##insn##_super_ra(env, addr, val, retaddr); \
85
- break; \
86
- default: \
87
- case 2: \
88
- cpu_##insn##_user_ra(env, addr, val, retaddr); \
89
- break; \
90
- case 3: \
91
- cpu_##insn##_error_ra(env, addr, val, retaddr); \
92
- break; \
93
- } \
94
-}
95
-#endif
96
-HELPER_ST(sb, stb, uint8_t)
97
-HELPER_ST(sw, stl, uint32_t)
98
-#if defined(TARGET_MIPS64)
99
-HELPER_ST(sd, stq, uint64_t)
100
-#endif
101
-#undef HELPER_ST
102
-
103
/* 64 bits arithmetic for 32 bits hosts */
104
static inline uint64_t get_HILO(CPUMIPSState *env)
105
{
106
@@ -XXX,XX +XXX,XX @@ target_ulong helper_##name(CPUMIPSState *env, target_ulong arg, int mem_idx) \
107
} \
108
env->CP0_LLAddr = do_translate_address(env, arg, 0, GETPC()); \
109
env->lladdr = arg; \
110
- env->llval = do_##insn(env, arg, mem_idx, GETPC()); \
111
+ env->llval = cpu_##insn##_mmuidx_ra(env, arg, mem_idx, GETPC()); \
112
return env->llval; \
113
}
114
-HELPER_LD_ATOMIC(ll, lw, 0x3)
115
+HELPER_LD_ATOMIC(ll, ldl, 0x3)
116
#ifdef TARGET_MIPS64
117
-HELPER_LD_ATOMIC(lld, ld, 0x7)
118
+HELPER_LD_ATOMIC(lld, ldq, 0x7)
119
#endif
120
#undef HELPER_LD_ATOMIC
121
#endif
122
@@ -XXX,XX +XXX,XX @@ HELPER_LD_ATOMIC(lld, ld, 0x7)
123
void helper_swl(CPUMIPSState *env, target_ulong arg1, target_ulong arg2,
124
int mem_idx)
125
{
126
- do_sb(env, arg2, (uint8_t)(arg1 >> 24), mem_idx, GETPC());
127
+ cpu_stb_mmuidx_ra(env, arg2, (uint8_t)(arg1 >> 24), mem_idx, GETPC());
128
129
if (GET_LMASK(arg2) <= 2) {
130
- do_sb(env, GET_OFFSET(arg2, 1), (uint8_t)(arg1 >> 16), mem_idx,
131
- GETPC());
132
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 1), (uint8_t)(arg1 >> 16),
133
+ mem_idx, GETPC());
134
}
135
136
if (GET_LMASK(arg2) <= 1) {
137
- do_sb(env, GET_OFFSET(arg2, 2), (uint8_t)(arg1 >> 8), mem_idx,
138
- GETPC());
139
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 2), (uint8_t)(arg1 >> 8),
140
+ mem_idx, GETPC());
141
}
142
143
if (GET_LMASK(arg2) == 0) {
144
- do_sb(env, GET_OFFSET(arg2, 3), (uint8_t)arg1, mem_idx,
145
- GETPC());
146
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 3), (uint8_t)arg1,
147
+ mem_idx, GETPC());
148
}
20
}
149
}
21
}
150
22
151
void helper_swr(CPUMIPSState *env, target_ulong arg1, target_ulong arg2,
23
-static void tb_flush_jmp_cache(CPUState *cpu, target_ulong addr)
152
int mem_idx)
24
-{
153
{
25
- /* Discard jump cache entries for any tb which might potentially
154
- do_sb(env, arg2, (uint8_t)arg1, mem_idx, GETPC());
26
- overlap the flushed page. */
155
+ cpu_stb_mmuidx_ra(env, arg2, (uint8_t)arg1, mem_idx, GETPC());
27
- tb_jmp_cache_clear_page(cpu, addr - TARGET_PAGE_SIZE);
156
28
- tb_jmp_cache_clear_page(cpu, addr);
157
if (GET_LMASK(arg2) >= 1) {
29
-}
158
- do_sb(env, GET_OFFSET(arg2, -1), (uint8_t)(arg1 >> 8), mem_idx,
30
-
159
- GETPC());
31
/**
160
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -1), (uint8_t)(arg1 >> 8),
32
* tlb_mmu_resize_locked() - perform TLB resize bookkeeping; resize if necessary
161
+ mem_idx, GETPC());
33
* @desc: The CPUTLBDesc portion of the TLB
34
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
162
}
35
}
163
36
qemu_spin_unlock(&env_tlb(env)->c.lock);
164
if (GET_LMASK(arg2) >= 2) {
37
165
- do_sb(env, GET_OFFSET(arg2, -2), (uint8_t)(arg1 >> 16), mem_idx,
38
- tb_flush_jmp_cache(cpu, addr);
166
- GETPC());
39
+ /*
167
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -2), (uint8_t)(arg1 >> 16),
40
+ * Discard jump cache entries for any tb which might potentially
168
+ mem_idx, GETPC());
41
+ * overlap the flushed page, which includes the previous.
42
+ */
43
+ tb_jmp_cache_clear_page(cpu, addr - TARGET_PAGE_SIZE);
44
+ tb_jmp_cache_clear_page(cpu, addr);
45
}
46
47
/**
48
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
49
return;
169
}
50
}
170
51
171
if (GET_LMASK(arg2) == 3) {
52
- for (target_ulong i = 0; i < d.len; i += TARGET_PAGE_SIZE) {
172
- do_sb(env, GET_OFFSET(arg2, -3), (uint8_t)(arg1 >> 24), mem_idx,
53
- tb_flush_jmp_cache(cpu, d.addr + i);
173
- GETPC());
54
+ /*
174
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -3), (uint8_t)(arg1 >> 24),
55
+ * Discard jump cache entries for any tb which might potentially
175
+ mem_idx, GETPC());
56
+ * overlap the flushed pages, which includes the previous.
57
+ */
58
+ d.addr -= TARGET_PAGE_SIZE;
59
+ for (target_ulong i = 0, n = d.len / TARGET_PAGE_SIZE + 1; i < n; i++) {
60
+ tb_jmp_cache_clear_page(cpu, d.addr);
61
+ d.addr += TARGET_PAGE_SIZE;
176
}
62
}
177
}
63
}
178
64
179
@@ -XXX,XX +XXX,XX @@ void helper_swr(CPUMIPSState *env, target_ulong arg1, target_ulong arg2,
180
void helper_sdl(CPUMIPSState *env, target_ulong arg1, target_ulong arg2,
181
int mem_idx)
182
{
183
- do_sb(env, arg2, (uint8_t)(arg1 >> 56), mem_idx, GETPC());
184
+ cpu_stb_mmuidx_ra(env, arg2, (uint8_t)(arg1 >> 56), mem_idx, GETPC());
185
186
if (GET_LMASK64(arg2) <= 6) {
187
- do_sb(env, GET_OFFSET(arg2, 1), (uint8_t)(arg1 >> 48), mem_idx,
188
- GETPC());
189
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 1), (uint8_t)(arg1 >> 48),
190
+ mem_idx, GETPC());
191
}
192
193
if (GET_LMASK64(arg2) <= 5) {
194
- do_sb(env, GET_OFFSET(arg2, 2), (uint8_t)(arg1 >> 40), mem_idx,
195
- GETPC());
196
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 2), (uint8_t)(arg1 >> 40),
197
+ mem_idx, GETPC());
198
}
199
200
if (GET_LMASK64(arg2) <= 4) {
201
- do_sb(env, GET_OFFSET(arg2, 3), (uint8_t)(arg1 >> 32), mem_idx,
202
- GETPC());
203
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 3), (uint8_t)(arg1 >> 32),
204
+ mem_idx, GETPC());
205
}
206
207
if (GET_LMASK64(arg2) <= 3) {
208
- do_sb(env, GET_OFFSET(arg2, 4), (uint8_t)(arg1 >> 24), mem_idx,
209
- GETPC());
210
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 4), (uint8_t)(arg1 >> 24),
211
+ mem_idx, GETPC());
212
}
213
214
if (GET_LMASK64(arg2) <= 2) {
215
- do_sb(env, GET_OFFSET(arg2, 5), (uint8_t)(arg1 >> 16), mem_idx,
216
- GETPC());
217
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 5), (uint8_t)(arg1 >> 16),
218
+ mem_idx, GETPC());
219
}
220
221
if (GET_LMASK64(arg2) <= 1) {
222
- do_sb(env, GET_OFFSET(arg2, 6), (uint8_t)(arg1 >> 8), mem_idx,
223
- GETPC());
224
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 6), (uint8_t)(arg1 >> 8),
225
+ mem_idx, GETPC());
226
}
227
228
if (GET_LMASK64(arg2) <= 0) {
229
- do_sb(env, GET_OFFSET(arg2, 7), (uint8_t)arg1, mem_idx,
230
- GETPC());
231
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, 7), (uint8_t)arg1,
232
+ mem_idx, GETPC());
233
}
234
}
235
236
void helper_sdr(CPUMIPSState *env, target_ulong arg1, target_ulong arg2,
237
int mem_idx)
238
{
239
- do_sb(env, arg2, (uint8_t)arg1, mem_idx, GETPC());
240
+ cpu_stb_mmuidx_ra(env, arg2, (uint8_t)arg1, mem_idx, GETPC());
241
242
if (GET_LMASK64(arg2) >= 1) {
243
- do_sb(env, GET_OFFSET(arg2, -1), (uint8_t)(arg1 >> 8), mem_idx,
244
- GETPC());
245
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -1), (uint8_t)(arg1 >> 8),
246
+ mem_idx, GETPC());
247
}
248
249
if (GET_LMASK64(arg2) >= 2) {
250
- do_sb(env, GET_OFFSET(arg2, -2), (uint8_t)(arg1 >> 16), mem_idx,
251
- GETPC());
252
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -2), (uint8_t)(arg1 >> 16),
253
+ mem_idx, GETPC());
254
}
255
256
if (GET_LMASK64(arg2) >= 3) {
257
- do_sb(env, GET_OFFSET(arg2, -3), (uint8_t)(arg1 >> 24), mem_idx,
258
- GETPC());
259
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -3), (uint8_t)(arg1 >> 24),
260
+ mem_idx, GETPC());
261
}
262
263
if (GET_LMASK64(arg2) >= 4) {
264
- do_sb(env, GET_OFFSET(arg2, -4), (uint8_t)(arg1 >> 32), mem_idx,
265
- GETPC());
266
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -4), (uint8_t)(arg1 >> 32),
267
+ mem_idx, GETPC());
268
}
269
270
if (GET_LMASK64(arg2) >= 5) {
271
- do_sb(env, GET_OFFSET(arg2, -5), (uint8_t)(arg1 >> 40), mem_idx,
272
- GETPC());
273
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -5), (uint8_t)(arg1 >> 40),
274
+ mem_idx, GETPC());
275
}
276
277
if (GET_LMASK64(arg2) >= 6) {
278
- do_sb(env, GET_OFFSET(arg2, -6), (uint8_t)(arg1 >> 48), mem_idx,
279
- GETPC());
280
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -6), (uint8_t)(arg1 >> 48),
281
+ mem_idx, GETPC());
282
}
283
284
if (GET_LMASK64(arg2) == 7) {
285
- do_sb(env, GET_OFFSET(arg2, -7), (uint8_t)(arg1 >> 56), mem_idx,
286
- GETPC());
287
+ cpu_stb_mmuidx_ra(env, GET_OFFSET(arg2, -7), (uint8_t)(arg1 >> 56),
288
+ mem_idx, GETPC());
289
}
290
}
291
#endif /* TARGET_MIPS64 */
292
@@ -XXX,XX +XXX,XX @@ void helper_lwm(CPUMIPSState *env, target_ulong addr, target_ulong reglist,
293
294
for (i = 0; i < base_reglist; i++) {
295
env->active_tc.gpr[multiple_regs[i]] =
296
- (target_long)do_lw(env, addr, mem_idx, GETPC());
297
+ (target_long)cpu_ldl_mmuidx_ra(env, addr, mem_idx, GETPC());
298
addr += 4;
299
}
300
}
301
302
if (do_r31) {
303
- env->active_tc.gpr[31] = (target_long)do_lw(env, addr, mem_idx,
304
- GETPC());
305
+ env->active_tc.gpr[31] =
306
+ (target_long)cpu_ldl_mmuidx_ra(env, addr, mem_idx, GETPC());
307
}
308
}
309
310
@@ -XXX,XX +XXX,XX @@ void helper_swm(CPUMIPSState *env, target_ulong addr, target_ulong reglist,
311
target_ulong i;
312
313
for (i = 0; i < base_reglist; i++) {
314
- do_sw(env, addr, env->active_tc.gpr[multiple_regs[i]], mem_idx,
315
- GETPC());
316
+ cpu_stw_mmuidx_ra(env, addr, env->active_tc.gpr[multiple_regs[i]],
317
+ mem_idx, GETPC());
318
addr += 4;
319
}
320
}
321
322
if (do_r31) {
323
- do_sw(env, addr, env->active_tc.gpr[31], mem_idx, GETPC());
324
+ cpu_stw_mmuidx_ra(env, addr, env->active_tc.gpr[31], mem_idx, GETPC());
325
}
326
}
327
328
@@ -XXX,XX +XXX,XX @@ void helper_ldm(CPUMIPSState *env, target_ulong addr, target_ulong reglist,
329
target_ulong i;
330
331
for (i = 0; i < base_reglist; i++) {
332
- env->active_tc.gpr[multiple_regs[i]] = do_ld(env, addr, mem_idx,
333
- GETPC());
334
+ env->active_tc.gpr[multiple_regs[i]] =
335
+ cpu_ldq_mmuidx_ra(env, addr, mem_idx, GETPC());
336
addr += 8;
337
}
338
}
339
340
if (do_r31) {
341
- env->active_tc.gpr[31] = do_ld(env, addr, mem_idx, GETPC());
342
+ env->active_tc.gpr[31] =
343
+ cpu_ldq_mmuidx_ra(env, addr, mem_idx, GETPC());
344
}
345
}
346
347
@@ -XXX,XX +XXX,XX @@ void helper_sdm(CPUMIPSState *env, target_ulong addr, target_ulong reglist,
348
target_ulong i;
349
350
for (i = 0; i < base_reglist; i++) {
351
- do_sd(env, addr, env->active_tc.gpr[multiple_regs[i]], mem_idx,
352
- GETPC());
353
+ cpu_stq_mmuidx_ra(env, addr, env->active_tc.gpr[multiple_regs[i]],
354
+ mem_idx, GETPC());
355
addr += 8;
356
}
357
}
358
359
if (do_r31) {
360
- do_sd(env, addr, env->active_tc.gpr[31], mem_idx, GETPC());
361
+ cpu_stq_mmuidx_ra(env, addr, env->active_tc.gpr[31], mem_idx, GETPC());
362
}
363
}
364
#endif
365
--
65
--
366
2.20.1
66
2.34.1
367
67
368
68
diff view generated by jsdifflib
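The extra page in both cases exists because a TB that begins on the page before the flushed range may extend into it, so its jump-cache entry must be dropped as well. A standalone sketch of the resulting loop, assuming 4KiB pages and with clear_page() standing in for tb_jmp_cache_clear_page():

#include <stdint.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE ((uint64_t)1 << TARGET_PAGE_BITS)

static void clear_page(uint64_t page)
{
    (void)page;   /* would clear the jmp cache entries for this page */
}

/* Flush [addr, addr + len) plus the preceding page: len / PAGE + 1
 * pages in total, which is exactly two when len == TARGET_PAGE_SIZE. */
static void flush_jmp_cache_range(uint64_t addr, uint64_t len)
{
    addr -= TARGET_PAGE_SIZE;
    for (uint64_t i = 0, n = len / TARGET_PAGE_SIZE + 1; i < n; i++) {
        clear_page(addr);
        addr += TARGET_PAGE_SIZE;
    }
}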
1
Reduce the amount of preprocessor obfuscation by expanding
1
Wrap the bare TranslationBlock pointer into a structure.
2
the text of each of the functions generated. The result is
3
only slightly smaller than the original.
4
2
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
6
---
10
include/exec/cpu_ldst.h | 67 +++++++-----------
7
accel/tcg/tb-hash.h | 1 +
11
include/exec/cpu_ldst_template.h | 117 -------------------------------
8
accel/tcg/tb-jmp-cache.h | 24 ++++++++++++++++++++++++
12
accel/tcg/cputlb.c | 107 +++++++++++++++++++++++++++-
9
include/exec/cpu-common.h | 1 +
13
3 files changed, 130 insertions(+), 161 deletions(-)
10
include/hw/core/cpu.h | 15 +--------------
14
delete mode 100644 include/exec/cpu_ldst_template.h
11
include/qemu/typedefs.h | 1 +
12
accel/stubs/tcg-stub.c | 4 ++++
13
accel/tcg/cpu-exec.c | 10 +++++++---
14
accel/tcg/cputlb.c | 9 +++++----
15
accel/tcg/translate-all.c | 28 +++++++++++++++++++++++++---
16
hw/core/cpu-common.c | 3 +--
17
plugins/core.c | 2 +-
18
trace/control-target.c | 2 +-
19
12 files changed, 72 insertions(+), 28 deletions(-)
20
create mode 100644 accel/tcg/tb-jmp-cache.h
15
21
16
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
22
diff --git a/accel/tcg/tb-hash.h b/accel/tcg/tb-hash.h
17
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
18
--- a/include/exec/cpu_ldst.h
24
--- a/accel/tcg/tb-hash.h
19
+++ b/include/exec/cpu_ldst.h
25
+++ b/accel/tcg/tb-hash.h
20
@@ -XXX,XX +XXX,XX @@ typedef target_ulong abi_ptr;
26
@@ -XXX,XX +XXX,XX @@
21
#define TARGET_ABI_FMT_ptr TARGET_ABI_FMT_lx
27
#include "exec/cpu-defs.h"
22
#endif
28
#include "exec/exec-all.h"
23
29
#include "qemu/xxhash.h"
24
-#if defined(CONFIG_USER_ONLY)
30
+#include "tb-jmp-cache.h"
31
32
#ifdef CONFIG_SOFTMMU
33
34
diff --git a/accel/tcg/tb-jmp-cache.h b/accel/tcg/tb-jmp-cache.h
35
new file mode 100644
36
index XXXXXXX..XXXXXXX
37
--- /dev/null
38
+++ b/accel/tcg/tb-jmp-cache.h
39
@@ -XXX,XX +XXX,XX @@
40
+/*
41
+ * The per-CPU TranslationBlock jump cache.
42
+ *
43
+ * Copyright (c) 2003 Fabrice Bellard
44
+ *
45
+ * SPDX-License-Identifier: GPL-2.0-or-later
46
+ */
47
+
48
+#ifndef ACCEL_TCG_TB_JMP_CACHE_H
49
+#define ACCEL_TCG_TB_JMP_CACHE_H
50
+
51
+#define TB_JMP_CACHE_BITS 12
52
+#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
53
+
54
+/*
55
+ * Accessed in parallel; all accesses to 'tb' must be atomic.
56
+ */
57
+struct CPUJumpCache {
58
+ struct {
59
+ TranslationBlock *tb;
60
+ } array[TB_JMP_CACHE_SIZE];
61
+};
62
+
63
+#endif /* ACCEL_TCG_TB_JMP_CACHE_H */
64
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
65
index XXXXXXX..XXXXXXX 100644
66
--- a/include/exec/cpu-common.h
67
+++ b/include/exec/cpu-common.h
68
@@ -XXX,XX +XXX,XX @@ void cpu_list_unlock(void);
69
unsigned int cpu_list_generation_id_get(void);
70
71
void tcg_flush_softmmu_tlb(CPUState *cs);
72
+void tcg_flush_jmp_cache(CPUState *cs);
73
74
void tcg_iommu_init_notifier_list(CPUState *cpu);
75
void tcg_iommu_free_notifier_list(CPUState *cpu);
76
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
77
index XXXXXXX..XXXXXXX 100644
78
--- a/include/hw/core/cpu.h
79
+++ b/include/hw/core/cpu.h
80
@@ -XXX,XX +XXX,XX @@ struct kvm_run;
81
struct hax_vcpu_state;
82
struct hvf_vcpu_state;
83
84
-#define TB_JMP_CACHE_BITS 12
85
-#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
25
-
86
-
26
-extern __thread uintptr_t helper_retaddr;
87
/* work queue */
88
89
/* The union type allows passing of 64 bit target pointers on 32 bit
90
@@ -XXX,XX +XXX,XX @@ struct CPUState {
91
CPUArchState *env_ptr;
92
IcountDecr *icount_decr_ptr;
93
94
- /* Accessed in parallel; all accesses must be atomic */
95
- TranslationBlock *tb_jmp_cache[TB_JMP_CACHE_SIZE];
96
+ CPUJumpCache *tb_jmp_cache;
97
98
struct GDBRegisterState *gdb_regs;
99
int gdb_num_regs;
100
@@ -XXX,XX +XXX,XX @@ extern CPUTailQ cpus;
101
102
extern __thread CPUState *current_cpu;
103
104
-static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
105
-{
106
- unsigned int i;
27
-
107
-
28
-static inline void set_helper_retaddr(uintptr_t ra)
108
- for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
29
-{
109
- qatomic_set(&cpu->tb_jmp_cache[i], NULL);
30
- helper_retaddr = ra;
110
- }
31
- /*
32
- * Ensure that this write is visible to the SIGSEGV handler that
33
- * may be invoked due to a subsequent invalid memory operation.
34
- */
35
- signal_barrier();
36
-}
111
-}
37
-
112
-
38
-static inline void clear_helper_retaddr(void)
113
/**
39
-{
114
* qemu_tcg_mttcg_enabled:
40
- /*
115
* Check whether we are running MultiThread TCG or not.
41
- * Ensure that previous memory operations have succeeded before
116
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
42
- * removing the data visible to the signal handler.
117
index XXXXXXX..XXXXXXX 100644
43
- */
118
--- a/include/qemu/typedefs.h
44
- signal_barrier();
119
+++ b/include/qemu/typedefs.h
45
- helper_retaddr = 0;
120
@@ -XXX,XX +XXX,XX @@ typedef struct CoMutex CoMutex;
46
-}
121
typedef struct ConfidentialGuestSupport ConfidentialGuestSupport;
47
-
122
typedef struct CPUAddressSpace CPUAddressSpace;
48
-/* In user-only mode we provide only the _code and _data accessors. */
123
typedef struct CPUArchState CPUArchState;
49
-
124
+typedef struct CPUJumpCache CPUJumpCache;
50
uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr);
125
typedef struct CPUState CPUState;
51
uint32_t cpu_lduw_data(CPUArchState *env, abi_ptr ptr);
126
typedef struct CPUTLBEntryFull CPUTLBEntryFull;
52
uint32_t cpu_ldl_data(CPUArchState *env, abi_ptr ptr);
127
typedef struct DeviceListener DeviceListener;
53
@@ -XXX,XX +XXX,XX @@ void cpu_stl_data_ra(CPUArchState *env, abi_ptr ptr,
128
diff --git a/accel/stubs/tcg-stub.c b/accel/stubs/tcg-stub.c
54
void cpu_stq_data_ra(CPUArchState *env, abi_ptr ptr,
129
index XXXXXXX..XXXXXXX 100644
55
uint64_t val, uintptr_t retaddr);
130
--- a/accel/stubs/tcg-stub.c
56
131
+++ b/accel/stubs/tcg-stub.c
57
+#if defined(CONFIG_USER_ONLY)
132
@@ -XXX,XX +XXX,XX @@ void tlb_set_dirty(CPUState *cpu, target_ulong vaddr)
58
+
133
{
59
+extern __thread uintptr_t helper_retaddr;
134
}
60
+
135
61
+static inline void set_helper_retaddr(uintptr_t ra)
136
+void tcg_flush_jmp_cache(CPUState *cpu)
62
+{
137
+{
63
+ helper_retaddr = ra;
64
+ /*
65
+ * Ensure that this write is visible to the SIGSEGV handler that
66
+ * may be invoked due to a subsequent invalid memory operation.
67
+ */
68
+ signal_barrier();
69
+}
138
+}
70
+
139
+
71
+static inline void clear_helper_retaddr(void)
140
int probe_access_flags(CPUArchState *env, target_ulong addr,
72
+{
141
MMUAccessType access_type, int mmu_idx,
73
+ /*
142
bool nonfault, void **phost, uintptr_t retaddr)
74
+ * Ensure that previous memory operations have succeeded before
143
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
75
+ * removing the data visible to the signal handler.
144
index XXXXXXX..XXXXXXX 100644
76
+ */
145
--- a/accel/tcg/cpu-exec.c
77
+ signal_barrier();
146
+++ b/accel/tcg/cpu-exec.c
78
+ helper_retaddr = 0;
79
+}
80
+
81
/*
82
* Provide the same *_mmuidx_ra interface as for softmmu.
83
* The mmu_idx argument is ignored.
84
@@ -XXX,XX +XXX,XX @@ void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
85
void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
86
int mmu_idx, uintptr_t retaddr);
87
88
-/* these access are slower, they must be as rare as possible */
89
-#define CPU_MMU_INDEX (cpu_mmu_index(env, false))
90
-#define MEMSUFFIX _data
91
-#define DATA_SIZE 1
92
-#include "exec/cpu_ldst_template.h"
93
-
94
-#define DATA_SIZE 2
95
-#include "exec/cpu_ldst_template.h"
96
-
97
-#define DATA_SIZE 4
98
-#include "exec/cpu_ldst_template.h"
99
-
100
-#define DATA_SIZE 8
101
-#include "exec/cpu_ldst_template.h"
102
-#undef CPU_MMU_INDEX
103
-#undef MEMSUFFIX
104
-
105
#endif /* defined(CONFIG_USER_ONLY) */
106
107
uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
108
diff --git a/include/exec/cpu_ldst_template.h b/include/exec/cpu_ldst_template.h
109
deleted file mode 100644
110
index XXXXXXX..XXXXXXX
111
--- a/include/exec/cpu_ldst_template.h
112
+++ /dev/null
113
@@ -XXX,XX +XXX,XX @@
147
@@ -XXX,XX +XXX,XX @@
114
-/*
148
#include "sysemu/replay.h"
115
- * Software MMU support
149
#include "sysemu/tcg.h"
116
- *
150
#include "exec/helper-proto.h"
117
- * Generate inline load/store functions for one MMU mode and data
151
+#include "tb-jmp-cache.h"
118
- * size.
152
#include "tb-hash.h"
119
- *
153
#include "tb-context.h"
120
- * Generate a store function as well as signed and unsigned loads.
154
#include "internal.h"
121
- *
155
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
122
- * Not used directly but included from cpu_ldst.h.
156
tcg_debug_assert(!(cflags & CF_INVALID));
123
- *
157
124
- * Copyright (c) 2003 Fabrice Bellard
158
hash = tb_jmp_cache_hash_func(pc);
125
- *
159
- tb = qatomic_rcu_read(&cpu->tb_jmp_cache[hash]);
126
- * This library is free software; you can redistribute it and/or
160
+ tb = qatomic_rcu_read(&cpu->tb_jmp_cache->array[hash].tb);
127
- * modify it under the terms of the GNU Lesser General Public
161
128
- * License as published by the Free Software Foundation; either
162
if (likely(tb &&
129
- * version 2 of the License, or (at your option) any later version.
163
tb->pc == pc &&
130
- *
164
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
131
- * This library is distributed in the hope that it will be useful,
165
if (tb == NULL) {
132
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
166
return NULL;
133
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
167
}
134
- * Lesser General Public License for more details.
168
- qatomic_set(&cpu->tb_jmp_cache[hash], tb);
135
- *
169
+ qatomic_set(&cpu->tb_jmp_cache->array[hash].tb, tb);
136
- * You should have received a copy of the GNU Lesser General Public
170
return tb;
137
- * License along with this library; if not, see <http://www.gnu.org/licenses/>.
171
}
138
- */
172
139
-
173
@@ -XXX,XX +XXX,XX @@ int cpu_exec(CPUState *cpu)
140
-#if DATA_SIZE == 8
174
141
-#define SUFFIX q
175
tb = tb_lookup(cpu, pc, cs_base, flags, cflags);
142
-#define USUFFIX q
176
if (tb == NULL) {
143
-#define DATA_TYPE uint64_t
177
+ uint32_t h;
144
-#define SHIFT 3
178
+
145
-#elif DATA_SIZE == 4
179
mmap_lock();
146
-#define SUFFIX l
180
tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
147
-#define USUFFIX l
181
mmap_unlock();
148
-#define DATA_TYPE uint32_t
182
@@ -XXX,XX +XXX,XX @@ int cpu_exec(CPUState *cpu)
149
-#define SHIFT 2
183
* We add the TB in the virtual pc hash table
150
-#elif DATA_SIZE == 2
184
* for the fast lookup
151
-#define SUFFIX w
185
*/
152
-#define USUFFIX uw
186
- qatomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
153
-#define DATA_TYPE uint16_t
187
+ h = tb_jmp_cache_hash_func(pc);
154
-#define DATA_STYPE int16_t
188
+ qatomic_set(&cpu->tb_jmp_cache->array[h].tb, tb);
155
-#define SHIFT 1
189
}
156
-#elif DATA_SIZE == 1
190
157
-#define SUFFIX b
191
#ifndef CONFIG_USER_ONLY
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2 of the License, or (at your option) any later version.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with this library; if not, see <http://www.gnu.org/licenses/>.
- */
-
-#if DATA_SIZE == 8
-#define SUFFIX q
-#define USUFFIX q
-#define DATA_TYPE uint64_t
-#define SHIFT 3
-#elif DATA_SIZE == 4
-#define SUFFIX l
-#define USUFFIX l
-#define DATA_TYPE uint32_t
-#define SHIFT 2
-#elif DATA_SIZE == 2
-#define SUFFIX w
-#define USUFFIX uw
-#define DATA_TYPE uint16_t
-#define DATA_STYPE int16_t
-#define SHIFT 1
-#elif DATA_SIZE == 1
-#define SUFFIX b
-#define USUFFIX ub
-#define DATA_TYPE uint8_t
-#define DATA_STYPE int8_t
-#define SHIFT 0
-#else
-#error unsupported data size
-#endif
-
-#if DATA_SIZE == 8
-#define RES_TYPE uint64_t
-#else
-#define RES_TYPE uint32_t
-#endif
-
-/* generic load/store macros */
-
-static inline RES_TYPE
-glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
- target_ulong ptr,
- uintptr_t retaddr)
-{
- return glue(glue(cpu_ld, USUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX,
- retaddr);
-}
-
-static inline RES_TYPE
-glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
-{
- return glue(glue(cpu_ld, USUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX, 0);
-}
-
-#if DATA_SIZE <= 2
-static inline int
-glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
- target_ulong ptr,
- uintptr_t retaddr)
-{
- return glue(glue(cpu_lds, SUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX,
- retaddr);
-}
-
-static inline int
-glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
-{
- return glue(glue(cpu_lds, SUFFIX), _mmuidx_ra)(env, ptr, CPU_MMU_INDEX, 0);
-}
-#endif
-
-/* generic store macro */
-
-static inline void
-glue(glue(glue(cpu_st, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
- target_ulong ptr,
- RES_TYPE v, uintptr_t retaddr)
-{
- glue(glue(cpu_st, SUFFIX), _mmuidx_ra)(env, ptr, v, CPU_MMU_INDEX,
- retaddr);
-}
-
-static inline void
-glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
- RES_TYPE v)
-{
- glue(glue(cpu_st, SUFFIX), _mmuidx_ra)(env, ptr, v, CPU_MMU_INDEX, 0);
-}
-
-#undef RES_TYPE
-#undef DATA_TYPE
-#undef DATA_STYPE
-#undef SUFFIX
-#undef USUFFIX
-#undef DATA_SIZE
-#undef SHIFT
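To see what the deleted template used to generate, here is the DATA_SIZE == 4 instantiation written out by hand, assuming the includer defined MEMSUFFIX as _data and CPU_MMU_INDEX as the current data mmu index (the exact definitions varied per includer). Compare with the out-of-line replacements the patch adds to cputlb.c below; they are the same bodies, once per size instead of once per template expansion:

    /* glue(glue(cpu_ld, USUFFIX), MEMSUFFIX) with USUFFIX == l and
     * MEMSUFFIX == _data expands to cpu_ldl_data / cpu_ldl_data_ra: */
    static inline uint32_t cpu_ldl_data_ra(CPUArchState *env, target_ulong ptr,
                                           uintptr_t retaddr)
    {
        return cpu_ldl_mmuidx_ra(env, ptr, CPU_MMU_INDEX, retaddr);
    }

    static inline uint32_t cpu_ldl_data(CPUArchState *env, target_ulong ptr)
    {
        return cpu_ldl_mmuidx_ra(env, ptr, CPU_MMU_INDEX, 0);
    }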
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/atomic128.h"
#include "translate-all.h"
#include "trace-root.h"
-#include "qemu/plugin.h"
#include "trace/mem.h"
#ifdef CONFIG_PLUGIN
#include "qemu/plugin-memory.h"
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
? helper_le_ldq_mmu : helper_be_ldq_mmu);
}

+uint32_t cpu_ldub_data_ra(CPUArchState *env, target_ulong ptr,
+ uintptr_t retaddr)
+{
+ return cpu_ldub_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+int cpu_ldsb_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+{
+ return cpu_ldsb_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint32_t cpu_lduw_data_ra(CPUArchState *env, target_ulong ptr,
+ uintptr_t retaddr)
+{
+ return cpu_lduw_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+int cpu_ldsw_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+{
+ return cpu_ldsw_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint32_t cpu_ldl_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+{
+ return cpu_ldl_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint64_t cpu_ldq_data_ra(CPUArchState *env, target_ulong ptr, uintptr_t retaddr)
+{
+ return cpu_ldq_mmuidx_ra(env, ptr, cpu_mmu_index(env, false), retaddr);
+}
+
+uint32_t cpu_ldub_data(CPUArchState *env, target_ulong ptr)
+{
+ return cpu_ldub_data_ra(env, ptr, 0);
+}
+
+int cpu_ldsb_data(CPUArchState *env, target_ulong ptr)
+{
+ return cpu_ldsb_data_ra(env, ptr, 0);
+}
+
+uint32_t cpu_lduw_data(CPUArchState *env, target_ulong ptr)
+{
+ return cpu_lduw_data_ra(env, ptr, 0);
+}
+
+int cpu_ldsw_data(CPUArchState *env, target_ulong ptr)
+{
+ return cpu_ldsw_data_ra(env, ptr, 0);
+}
+
+uint32_t cpu_ldl_data(CPUArchState *env, target_ulong ptr)
+{
+ return cpu_ldl_data_ra(env, ptr, 0);
+}
+
+uint64_t cpu_ldq_data(CPUArchState *env, target_ulong ptr)
+{
+ return cpu_ldq_data_ra(env, ptr, 0);
+}
+
/*
* Store Helpers
*/
@@ -XXX,XX +XXX,XX @@ void cpu_stq_mmuidx_ra(CPUArchState *env, target_ulong addr, uint64_t val,
cpu_store_helper(env, addr, val, mmu_idx, retaddr, MO_TEQ);
}

+void cpu_stb_data_ra(CPUArchState *env, target_ulong ptr,
+ uint32_t val, uintptr_t retaddr)
+{
+ cpu_stb_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
+}
+
+void cpu_stw_data_ra(CPUArchState *env, target_ulong ptr,
+ uint32_t val, uintptr_t retaddr)
+{
+ cpu_stw_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
+}
+
+void cpu_stl_data_ra(CPUArchState *env, target_ulong ptr,
+ uint32_t val, uintptr_t retaddr)
+{
+ cpu_stl_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
+}
+
+void cpu_stq_data_ra(CPUArchState *env, target_ulong ptr,
+ uint64_t val, uintptr_t retaddr)
+{
+ cpu_stq_mmuidx_ra(env, ptr, val, cpu_mmu_index(env, false), retaddr);
+}
+
+void cpu_stb_data(CPUArchState *env, target_ulong ptr, uint32_t val)
+{
+ cpu_stb_data_ra(env, ptr, val, 0);
+}
+
+void cpu_stw_data(CPUArchState *env, target_ulong ptr, uint32_t val)
+{
+ cpu_stw_data_ra(env, ptr, val, 0);
+}
+
+void cpu_stl_data(CPUArchState *env, target_ulong ptr, uint32_t val)
+{
+ cpu_stl_data_ra(env, ptr, val, 0);
+}
+
+void cpu_stq_data(CPUArchState *env, target_ulong ptr, uint64_t val)
+{
+ cpu_stq_data_ra(env, ptr, val, 0);
+}
+
/* First set of helpers allows passing in of OI and RETADDR. This makes
them callable from other helpers. */

--
2.20.1

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,

static void tb_jmp_cache_clear_page(CPUState *cpu, target_ulong page_addr)
{
- unsigned int i, i0 = tb_jmp_cache_hash_page(page_addr);
+ int i, i0 = tb_jmp_cache_hash_page(page_addr);
+ CPUJumpCache *jc = cpu->tb_jmp_cache;

for (i = 0; i < TB_JMP_PAGE_SIZE; i++) {
- qatomic_set(&cpu->tb_jmp_cache[i0 + i], NULL);
+ qatomic_set(&jc->array[i0 + i].tb, NULL);
}
}

@@ -XXX,XX +XXX,XX @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)

qemu_spin_unlock(&env_tlb(env)->c.lock);

- cpu_tb_jmp_cache_clear(cpu);
+ tcg_flush_jmp_cache(cpu);

if (to_clean == ALL_MMUIDX_BITS) {
qatomic_set(&env_tlb(env)->c.full_flush_count,
@@ -XXX,XX +XXX,XX @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
* longer to clear each entry individually than it will to clear it all.
*/
if (d.len >= (TARGET_PAGE_SIZE * TB_JMP_CACHE_SIZE)) {
- cpu_tb_jmp_cache_clear(cpu);
+ tcg_flush_jmp_cache(cpu);
return;
}

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@
#include "sysemu/tcg.h"
#include "qapi/error.h"
#include "hw/core/tcg-cpu-ops.h"
+#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal.h"
@@ -XXX,XX +XXX,XX @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
}

CPU_FOREACH(cpu) {
- cpu_tb_jmp_cache_clear(cpu);
+ tcg_flush_jmp_cache(cpu);
}

qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
/* remove the TB from the hash list */
h = tb_jmp_cache_hash_func(tb->pc);
CPU_FOREACH(cpu) {
- if (qatomic_read(&cpu->tb_jmp_cache[h]) == tb) {
- qatomic_set(&cpu->tb_jmp_cache[h], NULL);
+ CPUJumpCache *jc = cpu->tb_jmp_cache;
+ if (qatomic_read(&jc->array[h].tb) == tb) {
+ qatomic_set(&jc->array[h].tb, NULL);
}
}

@@ -XXX,XX +XXX,XX @@ int page_unprotect(target_ulong address, uintptr_t pc)
}
#endif /* CONFIG_USER_ONLY */

+/*
+ * Called by generic code at e.g. cpu reset after cpu creation,
+ * therefore we must be prepared to allocate the jump cache.
+ */
+void tcg_flush_jmp_cache(CPUState *cpu)
+{
+ CPUJumpCache *jc = cpu->tb_jmp_cache;
+
+ if (likely(jc)) {
+ for (int i = 0; i < TB_JMP_CACHE_SIZE; i++) {
+ qatomic_set(&jc->array[i].tb, NULL);
+ }
+ } else {
+ /* This should happen once during realize, and thus never race. */
+ jc = g_new0(CPUJumpCache, 1);
+ jc = qatomic_xchg(&cpu->tb_jmp_cache, jc);
+ assert(jc == NULL);
+ }
+}
+
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/cpu-common.c
+++ b/hw/core/cpu-common.c
@@ -XXX,XX +XXX,XX @@ static void cpu_common_reset(DeviceState *dev)
cpu->cflags_next_tb = -1;

if (tcg_enabled()) {
- cpu_tb_jmp_cache_clear(cpu);
-
+ tcg_flush_jmp_cache(cpu);
tcg_flush_softmmu_tlb(cpu);
}
}
diff --git a/plugins/core.c b/plugins/core.c
index XXXXXXX..XXXXXXX 100644
--- a/plugins/core.c
+++ b/plugins/core.c
@@ -XXX,XX +XXX,XX @@ struct qemu_plugin_ctx *plugin_id_to_ctx_locked(qemu_plugin_id_t id)
static void plugin_cpu_update__async(CPUState *cpu, run_on_cpu_data data)
{
bitmap_copy(cpu->plugin_mask, &data.host_ulong, QEMU_PLUGIN_EV_MAX);
- cpu_tb_jmp_cache_clear(cpu);
+ tcg_flush_jmp_cache(cpu);
}

static void plugin_cpu_update__locked(gpointer k, gpointer v, gpointer udata)
diff --git a/trace/control-target.c b/trace/control-target.c
index XXXXXXX..XXXXXXX 100644
--- a/trace/control-target.c
+++ b/trace/control-target.c
@@ -XXX,XX +XXX,XX @@ static void trace_event_synchronize_vcpu_state_dynamic(
{
bitmap_copy(vcpu->trace_dstate, vcpu->trace_dstate_delayed,
CPU_TRACE_DSTATE_MAX_EVENTS);
- cpu_tb_jmp_cache_clear(vcpu);
+ tcg_flush_jmp_cache(vcpu);
}

void trace_event_set_vcpu_state_dynamic(CPUState *vcpu,

--
2.34.1
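A detail worth noting in tcg_flush_jmp_cache above: the same function serves as both the flush path and the first-time allocator, using an atomic exchange to assert that allocation happens exactly once. Here is a self-contained sketch of that idiom, with invented names and C11 atomics standing in for QEMU's qatomic helpers:

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    struct demo_cache { void *slot[4096]; };
    static _Atomic(struct demo_cache *) demo_cache_ptr;

    static void demo_flush_or_alloc(void)
    {
        struct demo_cache *c = atomic_load(&demo_cache_ptr);

        if (c) {
            /* Common case: the cache exists, just clear every slot. */
            for (size_t i = 0; i < 4096; i++) {
                c->slot[i] = NULL;
            }
        } else {
            /* First call: install a zeroed cache. The exchange must
             * return NULL, or two threads raced on initialization. */
            c = calloc(1, sizeof(*c));
            c = atomic_exchange(&demo_cache_ptr, c);
            assert(c == NULL);
        }
    }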
There are only two uses. Within dcbz_common, the local variable
mmu_idx already contains the epid computation, and we can avoid
repeating it for the store. Within helper_icbiep, the usage is
trivially expanded using PPC_TLB_EPID_LOAD.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/ppc/cpu.h | 2 --
target/ppc/mem_helper.c | 11 ++---------
2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ppc_radix_page_info {
* + real/paged mode combinations. The other two modes are for
* external PID load/store.
*/
-#define MMU_MODE8_SUFFIX _epl
-#define MMU_MODE9_SUFFIX _eps
#define PPC_TLB_EPID_LOAD 8
#define PPC_TLB_EPID_STORE 9

diff --git a/target/ppc/mem_helper.c b/target/ppc/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/mem_helper.c
+++ b/target/ppc/mem_helper.c
@@ -XXX,XX +XXX,XX @@ static void dcbz_common(CPUPPCState *env, target_ulong addr,
} else {
/* Slow path */
for (i = 0; i < dcbz_size; i += 8) {
- if (epid) {
-#if !defined(CONFIG_USER_ONLY)
- /* Does not make sense on USER_ONLY config */
- cpu_stq_eps_ra(env, addr + i, 0, retaddr);
-#endif
- } else {
- cpu_stq_data_ra(env, addr + i, 0, retaddr);
- }
+ cpu_stq_mmuidx_ra(env, addr + i, 0, mmu_idx, retaddr);
}
}
}
@@ -XXX,XX +XXX,XX @@ void helper_icbiep(CPUPPCState *env, target_ulong addr)

#if !defined(CONFIG_USER_ONLY)
/* See comments above */
addr &= ~(env->dcache_line_size - 1);
- cpu_ldl_epl_ra(env, addr, GETPC());
+ cpu_ldl_mmuidx_ra(env, addr, PPC_TLB_EPID_LOAD, GETPC());
#endif
}

--
2.20.1

Populate this new method for all targets. Always match
the result that would be given by cpu_get_tb_cpu_state,
as we will want these values to correspond in the logs.

Reviewed-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (target/sparc)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
Cc: Eduardo Habkost <eduardo@habkost.net> (supporter:Machine core)
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com> (supporter:Machine core)
Cc: "Philippe Mathieu-Daudé" <f4bug@amsat.org> (reviewer:Machine core)
Cc: Yanan Wang <wangyanan55@huawei.com> (reviewer:Machine core)
Cc: Michael Rolnik <mrolnik@gmail.com> (maintainer:AVR TCG CPUs)
Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com> (maintainer:CRIS TCG CPUs)
Cc: Taylor Simpson <tsimpson@quicinc.com> (supporter:Hexagon TCG CPUs)
Cc: Song Gao <gaosong@loongson.cn> (maintainer:LoongArch TCG CPUs)
Cc: Xiaojuan Yang <yangxiaojuan@loongson.cn> (maintainer:LoongArch TCG CPUs)
Cc: Laurent Vivier <laurent@vivier.eu> (maintainer:M68K TCG CPUs)
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> (reviewer:MIPS TCG CPUs)
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> (reviewer:MIPS TCG CPUs)
Cc: Chris Wulff <crwulff@gmail.com> (maintainer:NiosII TCG CPUs)
Cc: Marek Vasut <marex@denx.de> (maintainer:NiosII TCG CPUs)
Cc: Stafford Horne <shorne@gmail.com> (odd fixer:OpenRISC TCG CPUs)
Cc: Yoshinori Sato <ysato@users.sourceforge.jp> (reviewer:RENESAS RX CPUs)
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> (maintainer:SPARC TCG CPUs)
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> (maintainer:TriCore TCG CPUs)
Cc: Max Filippov <jcmvbkbc@gmail.com> (maintainer:Xtensa TCG CPUs)
Cc: qemu-arm@nongnu.org (open list:ARM TCG CPUs)
Cc: qemu-ppc@nongnu.org (open list:PowerPC TCG CPUs)
Cc: qemu-riscv@nongnu.org (open list:RISC-V TCG CPUs)
Cc: qemu-s390x@nongnu.org (open list:S390 TCG CPUs)
---
include/hw/core/cpu.h | 3 +++
target/alpha/cpu.c | 9 +++++++++
target/arm/cpu.c | 13 +++++++++++++
target/avr/cpu.c | 8 ++++++++
target/cris/cpu.c | 8 ++++++++
target/hexagon/cpu.c | 8 ++++++++
target/hppa/cpu.c | 8 ++++++++
target/i386/cpu.c | 9 +++++++++
target/loongarch/cpu.c | 9 +++++++++
target/m68k/cpu.c | 8 ++++++++
target/microblaze/cpu.c | 8 ++++++++
target/mips/cpu.c | 8 ++++++++
target/nios2/cpu.c | 9 +++++++++
target/openrisc/cpu.c | 8 ++++++++
target/ppc/cpu_init.c | 8 ++++++++
target/riscv/cpu.c | 13 +++++++++++++
target/rx/cpu.c | 8 ++++++++
target/s390x/cpu.c | 8 ++++++++
target/sh4/cpu.c | 8 ++++++++
target/sparc/cpu.c | 8 ++++++++
target/tricore/cpu.c | 9 +++++++++
target/xtensa/cpu.c | 8 ++++++++
22 files changed, 186 insertions(+)

diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -XXX,XX +XXX,XX @@ struct SysemuCPUOps;
* If the target behaviour here is anything other than "set
* the PC register to the value passed in" then the target must
* also implement the synchronize_from_tb hook.
+ * @get_pc: Callback for getting the Program Counter register.
+ * As above, with the semantics of the target architecture.
* @gdb_read_register: Callback for letting GDB read a register.
* @gdb_write_register: Callback for letting GDB write a register.
* @gdb_adjust_breakpoint: Callback for adjusting the address of a
@@ -XXX,XX +XXX,XX @@ struct CPUClass {
void (*dump_state)(CPUState *cpu, FILE *, int flags);
int64_t (*get_arch_id)(CPUState *cpu);
void (*set_pc)(CPUState *cpu, vaddr value);
+ vaddr (*get_pc)(CPUState *cpu);
int (*gdb_read_register)(CPUState *cpu, GByteArray *buf, int reg);
int (*gdb_write_register)(CPUState *cpu, uint8_t *buf, int reg);
vaddr (*gdb_adjust_breakpoint)(CPUState *cpu, vaddr addr);
diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc = value;
}

+static vaddr alpha_cpu_get_pc(CPUState *cs)
+{
+ AlphaCPU *cpu = ALPHA_CPU(cs);
+
+ return cpu->env.pc;
+}
+
+
static bool alpha_cpu_has_work(CPUState *cs)
{
/* Here we are checking to see if the CPU should wake up from HALT.
@@ -XXX,XX +XXX,XX @@ static void alpha_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = alpha_cpu_has_work;
cc->dump_state = alpha_cpu_dump_state;
cc->set_pc = alpha_cpu_set_pc;
+ cc->get_pc = alpha_cpu_get_pc;
cc->gdb_read_register = alpha_cpu_gdb_read_register;
cc->gdb_write_register = alpha_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_set_pc(CPUState *cs, vaddr value)
}
}

+static vaddr arm_cpu_get_pc(CPUState *cs)
+{
+ ARMCPU *cpu = ARM_CPU(cs);
+ CPUARMState *env = &cpu->env;
+
+ if (is_a64(env)) {
+ return env->pc;
+ } else {
+ return env->regs[15];
+ }
+}
+
#ifdef CONFIG_TCG
void arm_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = arm_cpu_has_work;
cc->dump_state = arm_cpu_dump_state;
cc->set_pc = arm_cpu_set_pc;
+ cc->get_pc = arm_cpu_get_pc;
cc->gdb_read_register = arm_cpu_gdb_read_register;
cc->gdb_write_register = arm_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/avr/cpu.c
+++ b/target/avr/cpu.c
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc_w = value / 2; /* internally PC points to words */
}

+static vaddr avr_cpu_get_pc(CPUState *cs)
+{
+ AVRCPU *cpu = AVR_CPU(cs);
+
+ return cpu->env.pc_w * 2;
+}
+
static bool avr_cpu_has_work(CPUState *cs)
{
AVRCPU *cpu = AVR_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = avr_cpu_has_work;
cc->dump_state = avr_cpu_dump_state;
cc->set_pc = avr_cpu_set_pc;
+ cc->get_pc = avr_cpu_get_pc;
dc->vmsd = &vms_avr_cpu;
cc->sysemu_ops = &avr_sysemu_ops;
cc->disas_set_info = avr_cpu_disas_set_info;
diff --git a/target/cris/cpu.c b/target/cris/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/cris/cpu.c
+++ b/target/cris/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cris_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc = value;
}

+static vaddr cris_cpu_get_pc(CPUState *cs)
+{
+ CRISCPU *cpu = CRIS_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static bool cris_cpu_has_work(CPUState *cs)
{
return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
@@ -XXX,XX +XXX,XX @@ static void cris_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = cris_cpu_has_work;
cc->dump_state = cris_cpu_dump_state;
cc->set_pc = cris_cpu_set_pc;
+ cc->get_pc = cris_cpu_get_pc;
cc->gdb_read_register = cris_cpu_gdb_read_register;
cc->gdb_write_register = cris_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_set_pc(CPUState *cs, vaddr value)
env->gpr[HEX_REG_PC] = value;
}

+static vaddr hexagon_cpu_get_pc(CPUState *cs)
+{
+ HexagonCPU *cpu = HEXAGON_CPU(cs);
+ CPUHexagonState *env = &cpu->env;
+ return env->gpr[HEX_REG_PC];
+}
+
static void hexagon_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_class_init(ObjectClass *c, void *data)
cc->has_work = hexagon_cpu_has_work;
cc->dump_state = hexagon_dump_state;
cc->set_pc = hexagon_cpu_set_pc;
+ cc->get_pc = hexagon_cpu_get_pc;
cc->gdb_read_register = hexagon_gdb_read_register;
cc->gdb_write_register = hexagon_gdb_write_register;
cc->gdb_num_core_regs = TOTAL_PER_THREAD_REGS + NUM_VREGS + NUM_QREGS;
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.iaoq_b = value + 4;
}

+static vaddr hppa_cpu_get_pc(CPUState *cs)
+{
+ HPPACPU *cpu = HPPA_CPU(cs);
+
+ return cpu->env.iaoq_f;
+}
+
static void hppa_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = hppa_cpu_has_work;
cc->dump_state = hppa_cpu_dump_state;
cc->set_pc = hppa_cpu_set_pc;
+ cc->get_pc = hppa_cpu_get_pc;
cc->gdb_read_register = hppa_cpu_gdb_read_register;
cc->gdb_write_register = hppa_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.eip = value;
}

+static vaddr x86_cpu_get_pc(CPUState *cs)
+{
+ X86CPU *cpu = X86_CPU(cs);
+
+ /* Match cpu_get_tb_cpu_state. */
+ return cpu->env.eip + cpu->env.segs[R_CS].base;
+}
+
int x86_cpu_pending_interrupt(CPUState *cs, int interrupt_request)
{
X86CPU *cpu = X86_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_common_class_init(ObjectClass *oc, void *data)
cc->has_work = x86_cpu_has_work;
cc->dump_state = x86_cpu_dump_state;
cc->set_pc = x86_cpu_set_pc;
+ cc->get_pc = x86_cpu_get_pc;
cc->gdb_read_register = x86_cpu_gdb_read_register;
cc->gdb_write_register = x86_cpu_gdb_write_register;
cc->get_arch_id = x86_cpu_get_arch_id;
diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/loongarch/cpu.c
+++ b/target/loongarch/cpu.c
@@ -XXX,XX +XXX,XX @@ static void loongarch_cpu_set_pc(CPUState *cs, vaddr value)
env->pc = value;
}

+static vaddr loongarch_cpu_get_pc(CPUState *cs)
+{
+ LoongArchCPU *cpu = LOONGARCH_CPU(cs);
+ CPULoongArchState *env = &cpu->env;
+
+ return env->pc;
+}
+
#ifndef CONFIG_USER_ONLY
#include "hw/loongarch/virt.h"

@@ -XXX,XX +XXX,XX @@ static void loongarch_cpu_class_init(ObjectClass *c, void *data)
cc->has_work = loongarch_cpu_has_work;
cc->dump_state = loongarch_cpu_dump_state;
cc->set_pc = loongarch_cpu_set_pc;
+ cc->get_pc = loongarch_cpu_get_pc;
#ifndef CONFIG_USER_ONLY
dc->vmsd = &vmstate_loongarch_cpu;
cc->sysemu_ops = &loongarch_sysemu_ops;
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc = value;
}

+static vaddr m68k_cpu_get_pc(CPUState *cs)
+{
+ M68kCPU *cpu = M68K_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static bool m68k_cpu_has_work(CPUState *cs)
{
return cs->interrupt_request & CPU_INTERRUPT_HARD;
@@ -XXX,XX +XXX,XX @@ static void m68k_cpu_class_init(ObjectClass *c, void *data)
cc->has_work = m68k_cpu_has_work;
cc->dump_state = m68k_cpu_dump_state;
cc->set_pc = m68k_cpu_set_pc;
+ cc->get_pc = m68k_cpu_get_pc;
cc->gdb_read_register = m68k_cpu_gdb_read_register;
cc->gdb_write_register = m68k_cpu_gdb_write_register;
#if defined(CONFIG_SOFTMMU)
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.iflags = 0;
}

+static vaddr mb_cpu_get_pc(CPUState *cs)
+{
+ MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static void mb_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_class_init(ObjectClass *oc, void *data)

cc->dump_state = mb_cpu_dump_state;
cc->set_pc = mb_cpu_set_pc;
+ cc->get_pc = mb_cpu_get_pc;
cc->gdb_read_register = mb_cpu_gdb_read_register;
cc->gdb_write_register = mb_cpu_gdb_write_register;

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -XXX,XX +XXX,XX @@ static void mips_cpu_set_pc(CPUState *cs, vaddr value)
mips_env_set_pc(&cpu->env, value);
}

+static vaddr mips_cpu_get_pc(CPUState *cs)
+{
+ MIPSCPU *cpu = MIPS_CPU(cs);
+
+ return cpu->env.active_tc.PC;
+}
+
static bool mips_cpu_has_work(CPUState *cs)
{
MIPSCPU *cpu = MIPS_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static void mips_cpu_class_init(ObjectClass *c, void *data)
cc->has_work = mips_cpu_has_work;
cc->dump_state = mips_cpu_dump_state;
cc->set_pc = mips_cpu_set_pc;
+ cc->get_pc = mips_cpu_get_pc;
cc->gdb_read_register = mips_cpu_gdb_read_register;
cc->gdb_write_register = mips_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/nios2/cpu.c b/target/nios2/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/nios2/cpu.c
+++ b/target/nios2/cpu.c
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_set_pc(CPUState *cs, vaddr value)
env->pc = value;
}

+static vaddr nios2_cpu_get_pc(CPUState *cs)
+{
+ Nios2CPU *cpu = NIOS2_CPU(cs);
+ CPUNios2State *env = &cpu->env;
+
+ return env->pc;
+}
+
static bool nios2_cpu_has_work(CPUState *cs)
{
return cs->interrupt_request & CPU_INTERRUPT_HARD;
@@ -XXX,XX +XXX,XX @@ static void nios2_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = nios2_cpu_has_work;
cc->dump_state = nios2_cpu_dump_state;
cc->set_pc = nios2_cpu_set_pc;
+ cc->get_pc = nios2_cpu_get_pc;
cc->disas_set_info = nios2_cpu_disas_set_info;
#ifndef CONFIG_USER_ONLY
cc->sysemu_ops = &nios2_sysemu_ops;
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.dflag = 0;
}

+static vaddr openrisc_cpu_get_pc(CPUState *cs)
+{
+ OpenRISCCPU *cpu = OPENRISC_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static void openrisc_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = openrisc_cpu_has_work;
cc->dump_state = openrisc_cpu_dump_state;
cc->set_pc = openrisc_cpu_set_pc;
+ cc->get_pc = openrisc_cpu_get_pc;
cc->gdb_read_register = openrisc_cpu_gdb_read_register;
cc->gdb_write_register = openrisc_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.nip = value;
}

+static vaddr ppc_cpu_get_pc(CPUState *cs)
+{
+ PowerPCCPU *cpu = POWERPC_CPU(cs);
+
+ return cpu->env.nip;
+}
+
static bool ppc_cpu_has_work(CPUState *cs)
{
PowerPCCPU *cpu = POWERPC_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = ppc_cpu_has_work;
cc->dump_state = ppc_cpu_dump_state;
cc->set_pc = ppc_cpu_set_pc;
+ cc->get_pc = ppc_cpu_get_pc;
cc->gdb_read_register = ppc_cpu_gdb_read_register;
cc->gdb_write_register = ppc_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_set_pc(CPUState *cs, vaddr value)
}
}

+static vaddr riscv_cpu_get_pc(CPUState *cs)
+{
+ RISCVCPU *cpu = RISCV_CPU(cs);
+ CPURISCVState *env = &cpu->env;
+
+ /* Match cpu_get_tb_cpu_state. */
+ if (env->xl == MXL_RV32) {
+ return env->pc & UINT32_MAX;
+ }
+ return env->pc;
+}
+
static void riscv_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
cc->has_work = riscv_cpu_has_work;
cc->dump_state = riscv_cpu_dump_state;
cc->set_pc = riscv_cpu_set_pc;
+ cc->get_pc = riscv_cpu_get_pc;
cc->gdb_read_register = riscv_cpu_gdb_read_register;
cc->gdb_write_register = riscv_cpu_gdb_write_register;
cc->gdb_num_core_regs = 33;
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc = value;
}

+static vaddr rx_cpu_get_pc(CPUState *cs)
+{
+ RXCPU *cpu = RX_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static void rx_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_class_init(ObjectClass *klass, void *data)
cc->has_work = rx_cpu_has_work;
cc->dump_state = rx_cpu_dump_state;
cc->set_pc = rx_cpu_set_pc;
+ cc->get_pc = rx_cpu_get_pc;

#ifndef CONFIG_USER_ONLY
cc->sysemu_ops = &rx_sysemu_ops;
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.psw.addr = value;
}

+static vaddr s390_cpu_get_pc(CPUState *cs)
+{
+ S390CPU *cpu = S390_CPU(cs);
+
+ return cpu->env.psw.addr;
+}
+
static bool s390_cpu_has_work(CPUState *cs)
{
S390CPU *cpu = S390_CPU(cs);
@@ -XXX,XX +XXX,XX @@ static void s390_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = s390_cpu_has_work;
cc->dump_state = s390_cpu_dump_state;
cc->set_pc = s390_cpu_set_pc;
+ cc->get_pc = s390_cpu_get_pc;
cc->gdb_read_register = s390_cpu_gdb_read_register;
cc->gdb_write_register = s390_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc = value;
}

+static vaddr superh_cpu_get_pc(CPUState *cs)
+{
+ SuperHCPU *cpu = SUPERH_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static void superh_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = superh_cpu_has_work;
cc->dump_state = superh_cpu_dump_state;
cc->set_pc = superh_cpu_set_pc;
+ cc->get_pc = superh_cpu_get_pc;
cc->gdb_read_register = superh_cpu_gdb_read_register;
cc->gdb_write_register = superh_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.npc = value + 4;
}

+static vaddr sparc_cpu_get_pc(CPUState *cs)
+{
+ SPARCCPU *cpu = SPARC_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static void sparc_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_class_init(ObjectClass *oc, void *data)
cc->memory_rw_debug = sparc_cpu_memory_rw_debug;
#endif
cc->set_pc = sparc_cpu_set_pc;
+ cc->get_pc = sparc_cpu_get_pc;
cc->gdb_read_register = sparc_cpu_gdb_read_register;
cc->gdb_write_register = sparc_cpu_gdb_write_register;
#ifndef CONFIG_USER_ONLY
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/cpu.c
+++ b/target/tricore/cpu.c
@@ -XXX,XX +XXX,XX @@ static void tricore_cpu_set_pc(CPUState *cs, vaddr value)
env->PC = value & ~(target_ulong)1;
}

+static vaddr tricore_cpu_get_pc(CPUState *cs)
+{
+ TriCoreCPU *cpu = TRICORE_CPU(cs);
+ CPUTriCoreState *env = &cpu->env;
+
+ return env->PC;
+}
+
static void tricore_cpu_synchronize_from_tb(CPUState *cs,
const TranslationBlock *tb)
{
@@ -XXX,XX +XXX,XX @@ static void tricore_cpu_class_init(ObjectClass *c, void *data)

cc->dump_state = tricore_cpu_dump_state;
cc->set_pc = tricore_cpu_set_pc;
+ cc->get_pc = tricore_cpu_get_pc;
cc->sysemu_ops = &tricore_sysemu_ops;
cc->tcg_ops = &tricore_tcg_ops;
}
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_set_pc(CPUState *cs, vaddr value)
cpu->env.pc = value;
}

+static vaddr xtensa_cpu_get_pc(CPUState *cs)
+{
+ XtensaCPU *cpu = XTENSA_CPU(cs);
+
+ return cpu->env.pc;
+}
+
static bool xtensa_cpu_has_work(CPUState *cs)
{
#ifndef CONFIG_USER_ONLY
@@ -XXX,XX +XXX,XX @@ static void xtensa_cpu_class_init(ObjectClass *oc, void *data)
cc->has_work = xtensa_cpu_has_work;
cc->dump_state = xtensa_cpu_dump_state;
cc->set_pc = xtensa_cpu_set_pc;
+ cc->get_pc = xtensa_cpu_get_pc;
cc->gdb_read_register = xtensa_cpu_gdb_read_register;
cc->gdb_write_register = xtensa_cpu_gdb_write_register;
cc->gdb_stop_before_watchpoint = true;

--
2.34.1
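With every target now filling in cc->get_pc, generic code can read the guest program counter without per-target ifdefs. A sketch of a consumer, assuming only the CPUClass hook added above (the logging helper itself is invented for illustration, not part of the patch):

    /* Hypothetical generic helper: log the current guest PC. */
    static void demo_log_pc(CPUState *cpu)
    {
        CPUClass *cc = CPU_GET_CLASS(cpu);

        if (cc->get_pc) {
            qemu_log("cpu %d: pc=0x%" PRIx64 "\n",
                     cpu->cpu_index, (uint64_t)cc->get_pc(cpu));
        }
    }

Because each implementation matches cpu_get_tb_cpu_state (e.g. eip plus the CS base on x86, the sign-truncated pc on RV32), the value logged here corresponds to what appears in the translation logs.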
The availability of tb->pc will shortly be conditional.
Introduce accessor functions to minimize ifdefs.

Pass around a known pc to places like tcg_gen_code,
where the caller must already have the value.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/internal.h | 6 ++++
include/exec/exec-all.h | 6 ++++
include/tcg/tcg.h | 2 +-
accel/tcg/cpu-exec.c | 46 ++++++++++++++-----------
accel/tcg/translate-all.c | 37 +++++++++++---------
target/arm/cpu.c | 4 +--
target/avr/cpu.c | 2 +-
target/hexagon/cpu.c | 2 +-
target/hppa/cpu.c | 4 +--
target/i386/tcg/tcg-cpu.c | 2 +-
target/loongarch/cpu.c | 2 +-
target/microblaze/cpu.c | 2 +-
target/mips/tcg/exception.c | 2 +-
target/mips/tcg/sysemu/special_helper.c | 2 +-
target/openrisc/cpu.c | 2 +-
target/riscv/cpu.c | 4 +--
target/rx/cpu.c | 2 +-
target/sh4/cpu.c | 4 +--
target/sparc/cpu.c | 2 +-
target/tricore/cpu.c | 2 +-
tcg/tcg.c | 8 ++---
21 files changed, 82 insertions(+), 61 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -XXX,XX +XXX,XX @@ G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
void page_init(void);
void tb_htable_init(void);

+/* Return the current PC from CPU, which may be cached in TB. */
+static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
+{
+ return tb_pc(tb);
+}
+
#endif /* ACCEL_TCG_INTERNAL_H */
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ struct TranslationBlock {
uintptr_t jmp_dest[2];
};

+/* Hide the read to avoid ifdefs for TARGET_TB_PCREL. */
+static inline target_ulong tb_pc(const TranslationBlock *tb)
+{
+ return tb->pc;
+}
+
/* Hide the qatomic_read to make code a little easier on the eyes */
static inline uint32_t tb_cflags(const TranslationBlock *tb)
{
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void);
void tcg_prologue_init(TCGContext *s);
void tcg_func_start(TCGContext *s);

-int tcg_gen_code(TCGContext *s, TranslationBlock *tb);
+int tcg_gen_code(TCGContext *s, TranslationBlock *tb, target_ulong pc_start);

void tcg_set_frame(TCGContext *s, TCGReg reg, intptr_t start, intptr_t size);

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
const TranslationBlock *tb = p;
const struct tb_desc *desc = d;

- if (tb->pc == desc->pc &&
+ if (tb_pc(tb) == desc->pc &&
tb->page_addr[0] == desc->page_addr0 &&
tb->cs_base == desc->cs_base &&
tb->flags == desc->flags &&
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
return tb;
}

-static inline void log_cpu_exec(target_ulong pc, CPUState *cpu,
- const TranslationBlock *tb)
+static void log_cpu_exec(target_ulong pc, CPUState *cpu,
+ const TranslationBlock *tb)
{
- if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_CPU | CPU_LOG_EXEC))
- && qemu_log_in_addr_range(pc)) {
-
+ if (qemu_log_in_addr_range(pc)) {
qemu_log_mask(CPU_LOG_EXEC,
"Trace %d: %p [" TARGET_FMT_lx
"/" TARGET_FMT_lx "/%08x/%08x] %s\n",
@@ -XXX,XX +XXX,XX @@ const void *HELPER(lookup_tb_ptr)(CPUArchState *env)
return tcg_code_gen_epilogue;
}

- log_cpu_exec(pc, cpu, tb);
+ if (qemu_loglevel_mask(CPU_LOG_TB_CPU | CPU_LOG_EXEC)) {
+ log_cpu_exec(pc, cpu, tb);
+ }

return tb->tc.ptr;
}
@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
TranslationBlock *last_tb;
const void *tb_ptr = itb->tc.ptr;

- log_cpu_exec(itb->pc, cpu, itb);
+ if (qemu_loglevel_mask(CPU_LOG_TB_CPU | CPU_LOG_EXEC)) {
+ log_cpu_exec(log_pc(cpu, itb), cpu, itb);
+ }

qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(env, tb_ptr);
@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
* of the start of the TB.
*/
CPUClass *cc = CPU_GET_CLASS(cpu);
- qemu_log_mask_and_addr(CPU_LOG_EXEC, last_tb->pc,
- "Stopped execution of TB chain before %p ["
- TARGET_FMT_lx "] %s\n",
- last_tb->tc.ptr, last_tb->pc,
- lookup_symbol(last_tb->pc));
+
if (cc->tcg_ops->synchronize_from_tb) {
cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
} else {
assert(cc->set_pc);
- cc->set_pc(cpu, last_tb->pc);
+ cc->set_pc(cpu, tb_pc(last_tb));
+ }
+ if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
+ target_ulong pc = log_pc(cpu, last_tb);
+ if (qemu_log_in_addr_range(pc)) {
+ qemu_log("Stopped execution of TB chain before %p ["
+ TARGET_FMT_lx "] %s\n",
+ last_tb->tc.ptr, pc, lookup_symbol(pc));
+ }
}
}

@@ -XXX,XX +XXX,XX @@ static inline void tb_add_jump(TranslationBlock *tb, int n,

qemu_spin_unlock(&tb_next->jmp_lock);

- qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
- "Linking TBs %p [" TARGET_FMT_lx
- "] index %d -> %p [" TARGET_FMT_lx "]\n",
- tb->tc.ptr, tb->pc, n,
- tb_next->tc.ptr, tb_next->pc);
+ qemu_log_mask(CPU_LOG_EXEC, "Linking TBs %p index %d -> %p\n",
+ tb->tc.ptr, n, tb_next->tc.ptr);
return;

out_unlock_next:
@@ -XXX,XX +XXX,XX @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}

static inline void cpu_loop_exec_tb(CPUState *cpu, TranslationBlock *tb,
+ target_ulong pc,
TranslationBlock **last_tb, int *tb_exit)
{
int32_t insns_left;

- trace_exec_tb(tb, tb->pc);
+ trace_exec_tb(tb, pc);
tb = cpu_tb_exec(cpu, tb, tb_exit);
if (*tb_exit != TB_EXIT_REQUESTED) {
*last_tb = tb;
@@ -XXX,XX +XXX,XX @@ int cpu_exec(CPUState *cpu)
tb_add_jump(last_tb, tb_exit, tb);
}

- cpu_loop_exec_tb(cpu, tb, &last_tb, &tb_exit);
+ cpu_loop_exec_tb(cpu, tb, pc, &last_tb, &tb_exit);

/* Try to align the host and virtual clocks
if the guest is in advance */
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static int encode_search(TranslationBlock *tb, uint8_t *block)

for (j = 0; j < TARGET_INSN_START_WORDS; ++j) {
if (i == 0) {
- prev = (j == 0 ? tb->pc : 0);
+ prev = (j == 0 ? tb_pc(tb) : 0);
} else {
prev = tcg_ctx->gen_insn_data[i - 1][j];
}
@@ -XXX,XX +XXX,XX @@ static int encode_search(TranslationBlock *tb, uint8_t *block)
static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t searched_pc, bool reset_icount)
{
- target_ulong data[TARGET_INSN_START_WORDS] = { tb->pc };
+ target_ulong data[TARGET_INSN_START_WORDS] = { tb_pc(tb) };
uintptr_t host_pc = (uintptr_t)tb->tc.ptr;
CPUArchState *env = cpu->env_ptr;
const uint8_t *p = tb->tc.ptr + tb->tc.size;
@@ -XXX,XX +XXX,XX @@ static bool tb_cmp(const void *ap, const void *bp)
const TranslationBlock *a = ap;
const TranslationBlock *b = bp;

- return a->pc == b->pc &&
+ return tb_pc(a) == tb_pc(b) &&
a->cs_base == b->cs_base &&
a->flags == b->flags &&
(tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
@@ -XXX,XX +XXX,XX @@ static void do_tb_invalidate_check(void *p, uint32_t hash, void *userp)
TranslationBlock *tb = p;
target_ulong addr = *(target_ulong *)userp;

- if (!(addr + TARGET_PAGE_SIZE <= tb->pc || addr >= tb->pc + tb->size)) {
+ if (!(addr + TARGET_PAGE_SIZE <= tb_pc(tb) ||
+ addr >= tb_pc(tb) + tb->size)) {
printf("ERROR invalidate: address=" TARGET_FMT_lx
- " PC=%08lx size=%04x\n", addr, (long)tb->pc, tb->size);
+ " PC=%08lx size=%04x\n", addr, (long)tb_pc(tb), tb->size);
}
}

@@ -XXX,XX +XXX,XX @@ static void do_tb_page_check(void *p, uint32_t hash, void *userp)
TranslationBlock *tb = p;
int flags1, flags2;

- flags1 = page_get_flags(tb->pc);
- flags2 = page_get_flags(tb->pc + tb->size - 1);
+ flags1 = page_get_flags(tb_pc(tb));
+ flags2 = page_get_flags(tb_pc(tb) + tb->size - 1);
if ((flags1 & PAGE_WRITE) || (flags2 & PAGE_WRITE)) {
printf("ERROR page flags: PC=%08lx size=%04x f1=%x f2=%x\n",
- (long)tb->pc, tb->size, flags1, flags2);
+ (long)tb_pc(tb), tb->size, flags1, flags2);
}
}

@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)

/* remove the TB from the hash list */
phys_pc = tb->page_addr[0];
- h = tb_hash_func(phys_pc, tb->pc, tb->flags, orig_cflags,
+ h = tb_hash_func(phys_pc, tb_pc(tb), tb->flags, orig_cflags,
tb->trace_vcpu_dstate);
if (!qht_remove(&tb_ctx.htable, tb, h)) {
return;
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
}

/* add in the hash table */
- h = tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags,
+ h = tb_hash_func(phys_pc, tb_pc(tb), tb->flags, tb->cflags,
tb->trace_vcpu_dstate);
qht_insert(&tb_ctx.htable, tb, h, &existing_tb);

@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->cpu = NULL;
max_insns = tb->icount;

- trace_translate_block(tb, tb->pc, tb->tc.ptr);
+ trace_translate_block(tb, pc, tb->tc.ptr);

/* generate machine code */
tb->jmp_reset_offset[0] = TB_JMP_RESET_OFFSET_INVALID;
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
ti = profile_getclock();
#endif

- gen_code_size = tcg_gen_code(tcg_ctx, tb);
+ gen_code_size = tcg_gen_code(tcg_ctx, tb, pc);
if (unlikely(gen_code_size < 0)) {
error_return:
switch (gen_code_size) {
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,

#ifdef DEBUG_DISAS
if (qemu_loglevel_mask(CPU_LOG_TB_OUT_ASM) &&
- qemu_log_in_addr_range(tb->pc)) {
+ qemu_log_in_addr_range(pc)) {
FILE *logfile = qemu_log_trylock();
if (logfile) {
int code_size, data_size;
@@ -XXX,XX +XXX,XX @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
*/
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;

- qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
- "cpu_io_recompile: rewound execution of TB to "
- TARGET_FMT_lx "\n", tb->pc);
+ if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
+ target_ulong pc = log_pc(cpu, tb);
+ if (qemu_log_in_addr_range(pc)) {
+ qemu_log("cpu_io_recompile: rewound execution of TB to "
+ TARGET_FMT_lx "\n", pc);
+ }
+ }

cpu_loop_exit_noexc(cpu);
}
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_synchronize_from_tb(CPUState *cs,
* never possible for an AArch64 TB to chain to an AArch32 TB.
*/
if (is_a64(env)) {
- env->pc = tb->pc;
+ env->pc = tb_pc(tb);
} else {
- env->regs[15] = tb->pc;
+ env->regs[15] = tb_pc(tb);
}
}
#endif /* CONFIG_TCG */
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/avr/cpu.c
+++ b/target/avr/cpu.c
@@ -XXX,XX +XXX,XX @@ static void avr_cpu_synchronize_from_tb(CPUState *cs,
AVRCPU *cpu = AVR_CPU(cs);
CPUAVRState *env = &cpu->env;

- env->pc_w = tb->pc / 2; /* internally PC points to words */
+ env->pc_w = tb_pc(tb) / 2; /* internally PC points to words */
}

static void avr_cpu_reset(DeviceState *ds)
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -XXX,XX +XXX,XX @@ static void hexagon_cpu_synchronize_from_tb(CPUState *cs,
{
HexagonCPU *cpu = HEXAGON_CPU(cs);
CPUHexagonState *env = &cpu->env;
- env->gpr[HEX_REG_PC] = tb->pc;
+ env->gpr[HEX_REG_PC] = tb_pc(tb);
}

static bool hexagon_cpu_has_work(CPUState *cs)
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
HPPACPU *cpu = HPPA_CPU(cs);

#ifdef CONFIG_USER_ONLY
- cpu->env.iaoq_f = tb->pc;
+ cpu->env.iaoq_f = tb_pc(tb);
cpu->env.iaoq_b = tb->cs_base;
#else
/* Recover the IAOQ values from the GVA + PRIV. */
@@ -XXX,XX +XXX,XX @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
int32_t diff = cs_base;

cpu->env.iasq_f = iasq_f;
- cpu->env.iaoq_f = (tb->pc & ~iasq_f) + priv;
+ cpu->env.iaoq_f = (tb_pc(tb) & ~iasq_f) + priv;
if (diff) {
cpu->env.iaoq_b = cpu->env.iaoq_f + diff;
}
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -XXX,XX +XXX,XX @@ static void x86_cpu_synchronize_from_tb(CPUState *cs,
{
X86CPU *cpu = X86_CPU(cs);

- cpu->env.eip = tb->pc - tb->cs_base;
+ cpu->env.eip = tb_pc(tb) - tb->cs_base;
}

From: Philippe Mathieu-Daudé <philmd@redhat.com>

We currently search both the root and the tcg/ directories for tcg
files:

$ git grep '#include "tcg/' | wc -l
28

$ git grep '#include "tcg[^/]' | wc -l
94

To simplify the preprocessor search path, unify by making the tcg/
directory explicit.

Patch created mechanically by running:

$ for x in \
tcg.h tcg-mo.h tcg-op.h tcg-opc.h \
tcg-op-gvec.h tcg-gvec-desc.h; do \
sed -i "s,#include \"$x\",#include \"tcg/$x\"," \
$(git grep -l "#include \"$x\""); \
done

Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200101112303.20724-2-philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/cpu_ldst.h | 2 +-
tcg/i386/tcg-target.h | 2 +-
tcg/tcg-op.h | 2 +-
tcg/tcg.h | 4 ++--
accel/tcg/cpu-exec.c | 2 +-
accel/tcg/tcg-runtime-gvec.c | 2 +-
accel/tcg/tcg-runtime.c | 2 +-
accel/tcg/translate-all.c | 2 +-
accel/tcg/user-exec.c | 2 +-
bsd-user/main.c | 2 +-
cpus.c | 2 +-
exec.c | 2 +-
linux-user/main.c | 2 +-
linux-user/syscall.c | 2 +-
target/alpha/translate.c | 2 +-
target/arm/helper-a64.c | 2 +-
target/arm/sve_helper.c | 2 +-
target/arm/translate-a64.c | 4 ++--
target/arm/translate-sve.c | 6 +++---
target/arm/translate.c | 4 ++--
target/cris/translate.c | 2 +-
target/hppa/translate.c | 2 +-
target/i386/mem_helper.c | 2 +-
target/i386/translate.c | 2 +-
target/lm32/translate.c | 2 +-
target/m68k/translate.c | 2 +-
target/microblaze/translate.c | 2 +-
target/mips/translate.c | 2 +-
target/moxie/translate.c | 2 +-
target/nios2/translate.c | 2 +-
target/openrisc/translate.c | 2 +-
target/ppc/mem_helper.c | 2 +-
target/ppc/translate.c | 4 ++--
target/riscv/cpu_helper.c | 2 +-
target/riscv/translate.c | 2 +-
target/s390x/mem_helper.c | 2 +-
target/s390x/translate.c | 4 ++--
target/sh4/translate.c | 2 +-
target/sparc/ldst_helper.c | 2 +-
target/sparc/translate.c | 2 +-
target/tilegx/translate.c | 2 +-
target/tricore/translate.c | 2 +-
target/unicore32/translate.c | 2 +-
target/xtensa/translate.c | 2 +-
tcg/optimize.c | 2 +-
tcg/tcg-common.c | 2 +-
tcg/tcg-op-gvec.c | 8 ++++----
tcg/tcg-op-vec.c | 6 +++---
tcg/tcg-op.c | 6 +++---
tcg/tcg.c | 2 +-
tcg/tci.c | 2 +-
51 files changed, 65 insertions(+), 65 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ static inline void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
#else

/* Needed for TCG_OVERSIZED_GUEST */
-#include "tcg.h"
+#include "tcg/tcg.h"

static inline target_ulong tlb_addr_write(const CPUTLBEntry *entry)
{
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/i386/tcg-target.h
+++ b/tcg/i386/tcg-target.h
@@ -XXX,XX +XXX,XX @@ static inline void tb_target_set_jmp_target(uintptr_t tc_ptr,
* The x86 has a pretty strong memory ordering which only really
* allows for some stores to be re-ordered after loads.
*/
-#include "tcg-mo.h"
+#include "tcg/tcg-mo.h"

#define TCG_TARGET_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD)

diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -XXX,XX +XXX,XX @@
#ifndef TCG_TCG_OP_H
#define TCG_TCG_OP_H

-#include "tcg.h"
+#include "tcg/tcg.h"
#include "exec/helper-proto.h"
#include "exec/helper-gen.h"

diff --git a/tcg/tcg.h b/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@
#include "qemu/bitops.h"
#include "qemu/plugin.h"
#include "qemu/queue.h"
-#include "tcg-mo.h"
+#include "tcg/tcg-mo.h"
#include "tcg-target.h"
#include "qemu/int128.h"

@@ -XXX,XX +XXX,XX @@ typedef uint64_t TCGRegSet;

typedef enum TCGOpcode {
#define DEF(name, oargs, iargs, cargs, flags) INDEX_op_ ## name,
-#include "tcg-opc.h"
+#include "tcg/tcg-opc.h"
#undef DEF
NB_OPS,
} TCGOpcode;
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@
#include "trace.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "qemu/atomic.h"
#include "sysemu/qtest.h"
#include "qemu/timer.h"
diff --git a/accel/tcg/tcg-runtime-gvec.c b/accel/tcg/tcg-runtime-gvec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tcg-runtime-gvec.c
+++ b/accel/tcg/tcg-runtime-gvec.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/host-utils.h"
#include "cpu.h"
#include "exec/helper-proto.h"
-#include "tcg-gvec-desc.h"
+#include "tcg/tcg-gvec-desc.h"


/* Virtually all hosts support 16-byte vectors. Those that don't can emulate
diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -XXX,XX +XXX,XX @@
#include "exec/tb-lookup.h"
#include "disas/disas.h"
#include "exec/log.h"
-#include "tcg.h"
+#include "tcg/tcg.h"

/* 32-bit helpers */

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@
#include "trace.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#if defined(CONFIG_USER_ONLY)
#include "qemu.h"
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "qemu/bitops.h"
#include "exec/cpu_ldst.h"
#include "translate-all.h"
diff --git a/bsd-user/main.c b/bsd-user/main.c
index XXXXXXX..XXXXXXX 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/module.h"
#include "cpu.h"
#include "exec/exec-all.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "qemu/timer.h"
#include "qemu/envlist.h"
#include "exec/log.h"
diff --git a/cpus.c b/cpus.c
index XXXXXXX..XXXXXXX 100644
--- a/cpus.c
+++ b/cpus.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/bitmap.h"
#include "qemu/seqlock.h"
#include "qemu/guest-random.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "hw/nmi.h"
#include "sysemu/replay.h"
#include "sysemu/runstate.h"
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "exec/exec-all.h"
#include "exec/target_page.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "hw/qdev-core.h"
#include "hw/qdev-properties.h"
#if !defined(CONFIG_USER_ONLY)
diff --git a/linux-user/main.c b/linux-user/main.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/plugin.h"
#include "cpu.h"
#include "exec/exec-all.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "qemu/timer.h"
#include "qemu/envlist.h"
#include "qemu/guest-random.h"
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@
#include "user/syscall-trace.h"
#include "qapi/error.h"
#include "fd-trans.h"
-#include "tcg.h"
+#include "tcg/tcg.h"

#ifndef CLONE_IO
#define CLONE_IO 0x80000000 /* Clone io context */
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -XXX,XX +XXX,XX @@
#include "disas/disas.h"
#include "qemu/host-utils.h"
#include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
#include "exec/cpu_ldst.h"
#include "exec/helper-proto.h"
#include "exec/helper-gen.h"
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@
#include "exec/cpu_ldst.h"
#include "qemu/int128.h"
#include "qemu/atomic128.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
#include "fpu/softfloat.h"
#include <zlib.h> /* For crc32 */

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@
#include "exec/helper-proto.h"
#include "tcg/tcg-gvec-desc.h"
#include "fpu/softfloat.h"
-#include "tcg.h"
+#include "tcg/tcg.h"


/* Note that vector data is stored in host-endian 64-bit chunks,
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
#include "qemu/log.h"
#include "arm_ldst.h"
#include "translate.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
-#include "tcg-gvec-desc.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
+#include "tcg/tcg-gvec-desc.h"
#include "qemu/log.h"
#include "arm_ldst.h"
#include "translate.h"
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
#include "internals.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
#include "qemu/log.h"
#include "qemu/bitops.h"
#include "arm_ldst.h"
diff --git a/target/cris/translate.c b/target/cris/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/cris/translate.c
+++ b/target/cris/translate.c
@@ -XXX,XX +XXX,XX @@
#include "cpu.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
#include "exec/helper-proto.h"
#include "mmu.h"
#include "exec/cpu_ldst.h"
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -XXX,XX +XXX,XX @@
#include "disas/disas.h"
#include "qemu/host-utils.h"
#include "exec/exec-all.h"
390
#ifndef CONFIG_USER_ONLY
384
-#include "tcg-op.h"
391
diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
385
+#include "tcg/tcg-op.h"
392
index XXXXXXX..XXXXXXX 100644
386
#include "exec/cpu_ldst.h"
393
--- a/target/loongarch/cpu.c
387
#include "exec/helper-proto.h"
394
+++ b/target/loongarch/cpu.c
388
#include "exec/helper-gen.h"
395
@@ -XXX,XX +XXX,XX @@ static void loongarch_cpu_synchronize_from_tb(CPUState *cs,
389
diff --git a/target/i386/mem_helper.c b/target/i386/mem_helper.c
396
LoongArchCPU *cpu = LOONGARCH_CPU(cs);
390
index XXXXXXX..XXXXXXX 100644
397
CPULoongArchState *env = &cpu->env;
391
--- a/target/i386/mem_helper.c
398
392
+++ b/target/i386/mem_helper.c
399
- env->pc = tb->pc;
393
@@ -XXX,XX +XXX,XX @@
400
+ env->pc = tb_pc(tb);
394
#include "exec/cpu_ldst.h"
401
}
395
#include "qemu/int128.h"
402
#endif /* CONFIG_TCG */
396
#include "qemu/atomic128.h"
403
397
-#include "tcg.h"
404
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
398
+#include "tcg/tcg.h"
405
index XXXXXXX..XXXXXXX 100644
399
406
--- a/target/microblaze/cpu.c
400
void helper_cmpxchg8b_unlocked(CPUX86State *env, target_ulong a0)
407
+++ b/target/microblaze/cpu.c
401
{
408
@@ -XXX,XX +XXX,XX @@ static void mb_cpu_synchronize_from_tb(CPUState *cs,
402
diff --git a/target/i386/translate.c b/target/i386/translate.c
409
{
403
index XXXXXXX..XXXXXXX 100644
410
MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);
404
--- a/target/i386/translate.c
411
405
+++ b/target/i386/translate.c
412
- cpu->env.pc = tb->pc;
406
@@ -XXX,XX +XXX,XX @@
413
+ cpu->env.pc = tb_pc(tb);
407
#include "cpu.h"
414
cpu->env.iflags = tb->flags & IFLAGS_TB_MASK;
408
#include "disas/disas.h"
415
}
409
#include "exec/exec-all.h"
416
410
-#include "tcg-op.h"
417
diff --git a/target/mips/tcg/exception.c b/target/mips/tcg/exception.c
411
+#include "tcg/tcg-op.h"
418
index XXXXXXX..XXXXXXX 100644
412
#include "exec/cpu_ldst.h"
419
--- a/target/mips/tcg/exception.c
413
#include "exec/translator.h"
420
+++ b/target/mips/tcg/exception.c
414
421
@@ -XXX,XX +XXX,XX @@ void mips_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb)
415
diff --git a/target/lm32/translate.c b/target/lm32/translate.c
422
MIPSCPU *cpu = MIPS_CPU(cs);
416
index XXXXXXX..XXXXXXX 100644
423
CPUMIPSState *env = &cpu->env;
417
--- a/target/lm32/translate.c
424
418
+++ b/target/lm32/translate.c
425
- env->active_tc.PC = tb->pc;
419
@@ -XXX,XX +XXX,XX @@
426
+ env->active_tc.PC = tb_pc(tb);
420
#include "exec/helper-proto.h"
427
env->hflags &= ~MIPS_HFLAG_BMASK;
421
#include "exec/exec-all.h"
428
env->hflags |= tb->flags & MIPS_HFLAG_BMASK;
422
#include "exec/translator.h"
429
}
423
-#include "tcg-op.h"
430
diff --git a/target/mips/tcg/sysemu/special_helper.c b/target/mips/tcg/sysemu/special_helper.c
424
+#include "tcg/tcg-op.h"
431
index XXXXXXX..XXXXXXX 100644
425
#include "qemu/qemu-print.h"
432
--- a/target/mips/tcg/sysemu/special_helper.c
426
433
+++ b/target/mips/tcg/sysemu/special_helper.c
427
#include "exec/cpu_ldst.h"
434
@@ -XXX,XX +XXX,XX @@ bool mips_io_recompile_replay_branch(CPUState *cs, const TranslationBlock *tb)
428
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
435
CPUMIPSState *env = &cpu->env;
429
index XXXXXXX..XXXXXXX 100644
436
430
--- a/target/m68k/translate.c
437
if ((env->hflags & MIPS_HFLAG_BMASK) != 0
431
+++ b/target/m68k/translate.c
438
- && env->active_tc.PC != tb->pc) {
432
@@ -XXX,XX +XXX,XX @@
439
+ && env->active_tc.PC != tb_pc(tb)) {
433
#include "cpu.h"
440
env->active_tc.PC -= (env->hflags & MIPS_HFLAG_B16 ? 2 : 4);
434
#include "disas/disas.h"
441
env->hflags &= ~MIPS_HFLAG_BMASK;
435
#include "exec/exec-all.h"
442
return true;
436
-#include "tcg-op.h"
443
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
437
+#include "tcg/tcg-op.h"
444
index XXXXXXX..XXXXXXX 100644
438
#include "qemu/log.h"
445
--- a/target/openrisc/cpu.c
439
#include "qemu/qemu-print.h"
446
+++ b/target/openrisc/cpu.c
440
#include "exec/cpu_ldst.h"
447
@@ -XXX,XX +XXX,XX @@ static void openrisc_cpu_synchronize_from_tb(CPUState *cs,
441
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
448
{
442
index XXXXXXX..XXXXXXX 100644
449
OpenRISCCPU *cpu = OPENRISC_CPU(cs);
443
--- a/target/microblaze/translate.c
450
444
+++ b/target/microblaze/translate.c
451
- cpu->env.pc = tb->pc;
445
@@ -XXX,XX +XXX,XX @@
452
+ cpu->env.pc = tb_pc(tb);
446
#include "cpu.h"
453
}
447
#include "disas/disas.h"
454
448
#include "exec/exec-all.h"
455
449
-#include "tcg-op.h"
456
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
450
+#include "tcg/tcg-op.h"
457
index XXXXXXX..XXXXXXX 100644
451
#include "exec/helper-proto.h"
458
--- a/target/riscv/cpu.c
452
#include "microblaze-decode.h"
459
+++ b/target/riscv/cpu.c
453
#include "exec/cpu_ldst.h"
460
@@ -XXX,XX +XXX,XX @@ static void riscv_cpu_synchronize_from_tb(CPUState *cs,
454
diff --git a/target/mips/translate.c b/target/mips/translate.c
461
RISCVMXL xl = FIELD_EX32(tb->flags, TB_FLAGS, XL);
455
index XXXXXXX..XXXXXXX 100644
462
456
--- a/target/mips/translate.c
463
if (xl == MXL_RV32) {
457
+++ b/target/mips/translate.c
464
- env->pc = (int32_t)tb->pc;
458
@@ -XXX,XX +XXX,XX @@
465
+ env->pc = (int32_t)tb_pc(tb);
459
#include "internal.h"
466
} else {
460
#include "disas/disas.h"
467
- env->pc = tb->pc;
461
#include "exec/exec-all.h"
468
+ env->pc = tb_pc(tb);
462
-#include "tcg-op.h"
469
}
463
+#include "tcg/tcg-op.h"
470
}
464
#include "exec/cpu_ldst.h"
471
465
#include "hw/mips/cpudevs.h"
472
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
466
473
index XXXXXXX..XXXXXXX 100644
467
diff --git a/target/moxie/translate.c b/target/moxie/translate.c
474
--- a/target/rx/cpu.c
468
index XXXXXXX..XXXXXXX 100644
475
+++ b/target/rx/cpu.c
469
--- a/target/moxie/translate.c
476
@@ -XXX,XX +XXX,XX @@ static void rx_cpu_synchronize_from_tb(CPUState *cs,
470
+++ b/target/moxie/translate.c
477
{
471
@@ -XXX,XX +XXX,XX @@
478
RXCPU *cpu = RX_CPU(cs);
472
#include "cpu.h"
479
473
#include "exec/exec-all.h"
480
- cpu->env.pc = tb->pc;
474
#include "disas/disas.h"
481
+ cpu->env.pc = tb_pc(tb);
475
-#include "tcg-op.h"
482
}
476
+#include "tcg/tcg-op.h"
483
477
#include "exec/cpu_ldst.h"
484
static bool rx_cpu_has_work(CPUState *cs)
478
#include "qemu/qemu-print.h"
485
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
479
486
index XXXXXXX..XXXXXXX 100644
480
diff --git a/target/nios2/translate.c b/target/nios2/translate.c
487
--- a/target/sh4/cpu.c
481
index XXXXXXX..XXXXXXX 100644
488
+++ b/target/sh4/cpu.c
482
--- a/target/nios2/translate.c
489
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_synchronize_from_tb(CPUState *cs,
483
+++ b/target/nios2/translate.c
490
{
484
@@ -XXX,XX +XXX,XX @@
491
SuperHCPU *cpu = SUPERH_CPU(cs);
485
492
486
#include "qemu/osdep.h"
493
- cpu->env.pc = tb->pc;
487
#include "cpu.h"
494
+ cpu->env.pc = tb_pc(tb);
488
-#include "tcg-op.h"
495
cpu->env.flags = tb->flags & TB_FLAG_ENVFLAGS_MASK;
489
+#include "tcg/tcg-op.h"
496
}
490
#include "exec/exec-all.h"
497
491
#include "disas/disas.h"
498
@@ -XXX,XX +XXX,XX @@ static bool superh_io_recompile_replay_branch(CPUState *cs,
492
#include "exec/helper-proto.h"
499
CPUSH4State *env = &cpu->env;
493
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
500
494
index XXXXXXX..XXXXXXX 100644
501
if ((env->flags & ((DELAY_SLOT | DELAY_SLOT_CONDITIONAL))) != 0
495
--- a/target/openrisc/translate.c
502
- && env->pc != tb->pc) {
496
+++ b/target/openrisc/translate.c
503
+ && env->pc != tb_pc(tb)) {
497
@@ -XXX,XX +XXX,XX @@
504
env->pc -= 2;
498
#include "cpu.h"
505
env->flags &= ~(DELAY_SLOT | DELAY_SLOT_CONDITIONAL);
499
#include "exec/exec-all.h"
506
return true;
500
#include "disas/disas.h"
507
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
501
-#include "tcg-op.h"
508
index XXXXXXX..XXXXXXX 100644
502
+#include "tcg/tcg-op.h"
509
--- a/target/sparc/cpu.c
503
#include "qemu/log.h"
510
+++ b/target/sparc/cpu.c
504
#include "qemu/bitops.h"
511
@@ -XXX,XX +XXX,XX @@ static void sparc_cpu_synchronize_from_tb(CPUState *cs,
505
#include "qemu/qemu-print.h"
512
{
506
diff --git a/target/ppc/mem_helper.c b/target/ppc/mem_helper.c
513
SPARCCPU *cpu = SPARC_CPU(cs);
507
index XXXXXXX..XXXXXXX 100644
514
508
--- a/target/ppc/mem_helper.c
515
- cpu->env.pc = tb->pc;
509
+++ b/target/ppc/mem_helper.c
516
+ cpu->env.pc = tb_pc(tb);
510
@@ -XXX,XX +XXX,XX @@
517
cpu->env.npc = tb->cs_base;
511
#include "exec/helper-proto.h"
518
}
512
#include "helper_regs.h"
519
513
#include "exec/cpu_ldst.h"
520
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
514
-#include "tcg.h"
521
index XXXXXXX..XXXXXXX 100644
515
+#include "tcg/tcg.h"
522
--- a/target/tricore/cpu.c
516
#include "internal.h"
523
+++ b/target/tricore/cpu.c
517
#include "qemu/atomic128.h"
524
@@ -XXX,XX +XXX,XX @@ static void tricore_cpu_synchronize_from_tb(CPUState *cs,
518
525
TriCoreCPU *cpu = TRICORE_CPU(cs);
519
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
526
CPUTriCoreState *env = &cpu->env;
520
index XXXXXXX..XXXXXXX 100644
527
521
--- a/target/ppc/translate.c
528
- env->PC = tb->pc;
522
+++ b/target/ppc/translate.c
529
+ env->PC = tb_pc(tb);
523
@@ -XXX,XX +XXX,XX @@
530
}
524
#include "internal.h"
531
525
#include "disas/disas.h"
532
static void tricore_cpu_reset(DeviceState *dev)
526
#include "exec/exec-all.h"
527
-#include "tcg-op.h"
528
-#include "tcg-op-gvec.h"
529
+#include "tcg/tcg-op.h"
530
+#include "tcg/tcg-op-gvec.h"
531
#include "qemu/host-utils.h"
532
#include "qemu/main-loop.h"
533
#include "exec/cpu_ldst.h"
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "disas/disas.h"
 #include "qemu/host-utils.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "exec/helper-proto.h"
 #include "exec/helper-gen.h"
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "qemu/int128.h"
 #include "qemu/atomic128.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
 #include "fpu/softfloat.h"
 #include <zlib.h> /* For crc32 */

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/helper-proto.h"
 #include "tcg/tcg-gvec-desc.h"
 #include "fpu/softfloat.h"
-#include "tcg.h"
+#include "tcg/tcg.h"

 /* Note that vector data is stored in host-endian 64-bit chunks,
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "qemu/log.h"
 #include "arm_ldst.h"
 #include "translate.h"
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
-#include "tcg-gvec-desc.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
+#include "tcg/tcg-gvec-desc.h"
 #include "qemu/log.h"
 #include "arm_ldst.h"
 #include "translate.h"
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "internals.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "qemu/log.h"
 #include "qemu/bitops.h"
 #include "arm_ldst.h"
diff --git a/target/cris/translate.c b/target/cris/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/cris/translate.c
+++ b/target/cris/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/helper-proto.h"
 #include "mmu.h"
 #include "exec/cpu_ldst.h"
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "disas/disas.h"
 #include "qemu/host-utils.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "exec/helper-proto.h"
 #include "exec/helper-gen.h"
diff --git a/target/i386/mem_helper.c b/target/i386/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/mem_helper.c
+++ b/target/i386/mem_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "qemu/int128.h"
 #include "qemu/atomic128.h"
-#include "tcg.h"
+#include "tcg/tcg.h"

 void helper_cmpxchg8b_unlocked(CPUX86State *env, target_ulong a0)
 {
diff --git a/target/i386/translate.c b/target/i386/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "exec/translator.h"

diff --git a/target/lm32/translate.c b/target/lm32/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/lm32/translate.c
+++ b/target/lm32/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/translator.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "qemu/qemu-print.h"

 #include "exec/cpu_ldst.h"
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "qemu/log.h"
 #include "qemu/qemu-print.h"
 #include "exec/cpu_ldst.h"
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/helper-proto.h"
 #include "microblaze-decode.h"
 #include "exec/cpu_ldst.h"
diff --git a/target/mips/translate.c b/target/mips/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "internal.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "hw/mips/cpudevs.h"

diff --git a/target/moxie/translate.c b/target/moxie/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/moxie/translate.c
+++ b/target/moxie/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "exec/exec-all.h"
 #include "disas/disas.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "qemu/qemu-print.h"

diff --git a/target/nios2/translate.c b/target/nios2/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/nios2/translate.c
+++ b/target/nios2/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/exec-all.h"
 #include "disas/disas.h"
 #include "exec/helper-proto.h"
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "exec/exec-all.h"
 #include "disas/disas.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "qemu/log.h"
 #include "qemu/bitops.h"
 #include "qemu/qemu-print.h"
diff --git a/target/ppc/mem_helper.c b/target/ppc/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/mem_helper.c
+++ b/target/ppc/mem_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/helper-proto.h"
 #include "helper_regs.h"
 #include "exec/cpu_ldst.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
 #include "internal.h"
 #include "qemu/atomic128.h"

diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "internal.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "qemu/host-utils.h"
 #include "qemu/main-loop.h"
 #include "exec/cpu_ldst.h"
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/main-loop.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "trace.h"

 int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "qemu/log.h"
 #include "cpu.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "disas/disas.h"
 #include "exec/cpu_ldst.h"
 #include "exec/exec-all.h"
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/mem_helper.c
+++ b/target/s390x/mem_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "qemu/int128.h"
 #include "qemu/atomic128.h"
-#include "tcg.h"
+#include "tcg/tcg.h"

 #if !defined(CONFIG_USER_ONLY)
 #include "hw/s390x/storage-keys.h"
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "internal.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "qemu/log.h"
 #include "qemu/host-utils.h"
 #include "exec/cpu_ldst.h"
diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "exec/helper-proto.h"
 #include "exec/helper-gen.h"
diff --git a/target/sparc/ldst_helper.c b/target/sparc/ldst_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/ldst_helper.c
+++ b/target/sparc/ldst_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "tcg.h"
+#include "tcg/tcg.h"
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "disas/disas.h"
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"

 #include "exec/helper-gen.h"
diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/log.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "linux-user/syscall_defs.h"

diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "exec/cpu_ldst.h"
 #include "qemu/qemu-print.h"

diff --git a/target/unicore32/translate.c b/target/unicore32/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/unicore32/translate.c
+++ b/target/unicore32/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "disas/disas.h"
 #include "exec/exec-all.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "qemu/log.h"
 #include "exec/cpu_ldst.h"
 #include "exec/translator.h"
diff --git a/target/xtensa/translate.c b/target/xtensa/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/translate.c
+++ b/target/xtensa/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "exec/exec-all.h"
 #include "disas/disas.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"
 #include "qemu/log.h"
 #include "qemu/qemu-print.h"
 #include "exec/cpu_ldst.h"
diff --git a/tcg/optimize.c b/tcg/optimize.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -XXX,XX +XXX,XX @@
 */

 #include "qemu/osdep.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"

 #define CASE_OP_32_64(x) \
 glue(glue(case INDEX_op_, x), _i32): \
diff --git a/tcg/tcg-common.c b/tcg/tcg-common.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-common.c
+++ b/tcg/tcg-common.c
@@ -XXX,XX +XXX,XX @@ uintptr_t tci_tb_ptr;
 TCGOpDef tcg_op_defs[] = {
 #define DEF(s, oargs, iargs, cargs, flags) \
 { #s, oargs, iargs, cargs, iargs + oargs + cargs, flags },
-#include "tcg-opc.h"
+#include "tcg/tcg-opc.h"
 #undef DEF
 };
 const size_t tcg_op_defs_max = ARRAY_SIZE(tcg_op_defs);
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -XXX,XX +XXX,XX @@
 */

 #include "qemu/osdep.h"
-#include "tcg.h"
-#include "tcg-op.h"
-#include "tcg-op-gvec.h"
+#include "tcg/tcg.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
 #include "qemu/main-loop.h"
-#include "tcg-gvec-desc.h"
+#include "tcg/tcg-gvec-desc.h"

 #define MAX_UNROLL 4

diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "tcg.h"
-#include "tcg-op.h"
-#include "tcg-mo.h"
+#include "tcg/tcg.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-mo.h"

 /* Reduce the number of ifdefs below. This assumes that all uses of
 TCGV_HIGH and TCGV_LOW are properly protected by a conditional that
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
-#include "tcg.h"
-#include "tcg-op.h"
-#include "tcg-mo.h"
+#include "tcg/tcg.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-mo.h"
 #include "trace-tcg.h"
 #include "trace/mem.h"
 #include "exec/plugin-gen.h"
diff --git a/tcg/tcg.c b/tcg/tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/boards.h"
 #endif

-#include "tcg-op.h"
+#include "tcg/tcg-op.h"

 #if UINTPTR_MAX == UINT32_MAX
 # define ELF_CLASS ELFCLASS32
diff --git a/tcg/tci.c b/tcg/tci.c
index XXXXXXX..XXXXXXX 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu-common.h"
 #include "tcg/tcg.h" /* MAX_OPC_PARAM_IARGS */
 #include "exec/cpu_ldst.h"
-#include "tcg-op.h"
+#include "tcg/tcg-op.h"

 /* Marker for missing code. */
 #define TODO() \
--
2.20.1
Prepare for targets to be able to produce TBs that can
run in more than one virtual context.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/internal.h | 4 +++
accel/tcg/tb-jmp-cache.h | 41 +++++++++++++++++++++++++
include/exec/cpu-defs.h | 3 ++
include/exec/exec-all.h | 32 ++++++++++++++++++--
accel/tcg/cpu-exec.c | 16 ++++++----
accel/tcg/translate-all.c | 64 ++++++++++++++++++++++++++-------------
6 files changed, 131 insertions(+), 29 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -XXX,XX +XXX,XX @@ void tb_htable_init(void);
 /* Return the current PC from CPU, which may be cached in TB. */
 static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb)
 {
+#if TARGET_TB_PCREL
+ return cpu->cc->get_pc(cpu);
+#else
 return tb_pc(tb);
+#endif
 }

 #endif /* ACCEL_TCG_INTERNAL_H */
diff --git a/accel/tcg/tb-jmp-cache.h b/accel/tcg/tb-jmp-cache.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tb-jmp-cache.h
+++ b/accel/tcg/tb-jmp-cache.h
@@ -XXX,XX +XXX,XX @@
 /*
 * Accessed in parallel; all accesses to 'tb' must be atomic.
+ * For TARGET_TB_PCREL, accesses to 'pc' must be protected by
+ * a load_acquire/store_release to 'tb'.
 */
 struct CPUJumpCache {
 struct {
 TranslationBlock *tb;
+#if TARGET_TB_PCREL
+ target_ulong pc;
+#endif
 } array[TB_JMP_CACHE_SIZE];
 };

+static inline TranslationBlock *
+tb_jmp_cache_get_tb(CPUJumpCache *jc, uint32_t hash)
+{
+#if TARGET_TB_PCREL
+ /* Use acquire to ensure current load of pc from jc. */
+ return qatomic_load_acquire(&jc->array[hash].tb);
+#else
+ /* Use rcu_read to ensure current load of pc from *tb. */
+ return qatomic_rcu_read(&jc->array[hash].tb);
+#endif
+}
+
+static inline target_ulong
+tb_jmp_cache_get_pc(CPUJumpCache *jc, uint32_t hash, TranslationBlock *tb)
+{
+#if TARGET_TB_PCREL
+ return jc->array[hash].pc;
+#else
+ return tb_pc(tb);
+#endif
+}
+
+static inline void
+tb_jmp_cache_set(CPUJumpCache *jc, uint32_t hash,
+ TranslationBlock *tb, target_ulong pc)
+{
+#if TARGET_TB_PCREL
+ jc->array[hash].pc = pc;
+ /* Use store_release on tb to ensure pc is written first. */
+ qatomic_store_release(&jc->array[hash].tb, tb);
+#else
+ /* Use the pc value already stored in tb->pc. */
+ qatomic_set(&jc->array[hash].tb, tb);
+#endif
+}
+
 #endif /* ACCEL_TCG_TB_JMP_CACHE_H */
diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -XXX,XX +XXX,XX @@
 # error TARGET_PAGE_BITS must be defined in cpu-param.h
 # endif
 #endif
+#ifndef TARGET_TB_PCREL
+# define TARGET_TB_PCREL 0
+#endif

 #define TARGET_LONG_SIZE (TARGET_LONG_BITS / 8)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ struct tb_tc {
 };

 struct TranslationBlock {
- target_ulong pc; /* simulated PC corresponding to this block (EIP + CS base) */
- target_ulong cs_base; /* CS base for this block */
+#if !TARGET_TB_PCREL
+ /*
+ * Guest PC corresponding to this block. This must be the true
+ * virtual address. Therefore e.g. x86 stores EIP + CS_BASE, and
+ * targets like Arm, MIPS, HP-PA, which reuse low bits for ISA or
+ * privilege, must store those bits elsewhere.
+ *
+ * If TARGET_TB_PCREL, the opcodes for the TranslationBlock are
+ * written such that the TB is associated only with the physical
+ * page and may be run in any virtual address context. In this case,
+ * PC must always be taken from ENV in a target-specific manner.
+ * Unwind information is taken as offsets from the page, to be
+ * deposited into the "current" PC.
+ */
+ target_ulong pc;
+#endif
+
+ /*
+ * Target-specific data associated with the TranslationBlock, e.g.:
+ * x86: the original user, the Code Segment virtual base,
+ * arm: an extension of tb->flags,
+ * s390x: instruction data for EXECUTE,
+ * sparc: the next pc of the instruction queue (for delay slots).
+ */
+ target_ulong cs_base;
+
 uint32_t flags; /* flags defining in which context the code was generated */
 uint32_t cflags; /* compile flags */

@@ -XXX,XX +XXX,XX @@ struct TranslationBlock {
 /* Hide the read to avoid ifdefs for TARGET_TB_PCREL. */
 static inline target_ulong tb_pc(const TranslationBlock *tb)
 {
+#if TARGET_TB_PCREL
+ qemu_build_not_reached();
+#else
 return tb->pc;
+#endif
 }

 /* Hide the qatomic_read to make code a little easier on the eyes */
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static bool tb_lookup_cmp(const void *p, const void *d)
 const TranslationBlock *tb = p;
 const struct tb_desc *desc = d;

- if (tb_pc(tb) == desc->pc &&
+ if ((TARGET_TB_PCREL || tb_pc(tb) == desc->pc) &&
 tb->page_addr[0] == desc->page_addr0 &&
 tb->cs_base == desc->cs_base &&
 tb->flags == desc->flags &&
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
 return NULL;
 }
 desc.page_addr0 = phys_pc;
- h = tb_hash_func(phys_pc, pc, flags, cflags, *cpu->trace_dstate);
+ h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : pc),
+ flags, cflags, *cpu->trace_dstate);
 return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
 }

@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
 uint32_t flags, uint32_t cflags)
 {
 TranslationBlock *tb;
+ CPUJumpCache *jc;
 uint32_t hash;

 /* we should never be trying to look up an INVALID tb */
 tcg_debug_assert(!(cflags & CF_INVALID));

 hash = tb_jmp_cache_hash_func(pc);
- tb = qatomic_rcu_read(&cpu->tb_jmp_cache->array[hash].tb);
+ jc = cpu->tb_jmp_cache;
+ tb = tb_jmp_cache_get_tb(jc, hash);

 if (likely(tb &&
- tb->pc == pc &&
+ tb_jmp_cache_get_pc(jc, hash, tb) == pc &&
 tb->cs_base == cs_base &&
 tb->flags == flags &&
 tb->trace_vcpu_dstate == *cpu->trace_dstate &&
@@ -XXX,XX +XXX,XX @@ static inline TranslationBlock *tb_lookup(CPUState *cpu, target_ulong pc,
 if (tb == NULL) {
 return NULL;
 }
- qatomic_set(&cpu->tb_jmp_cache->array[hash].tb, tb);
+ tb_jmp_cache_set(jc, hash, tb, pc);
 return tb;
 }

@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
 if (cc->tcg_ops->synchronize_from_tb) {
 cc->tcg_ops->synchronize_from_tb(cpu, last_tb);
 } else {
+ assert(!TARGET_TB_PCREL);
 assert(cc->set_pc);
 cc->set_pc(cpu, tb_pc(last_tb));
 }
@@ -XXX,XX +XXX,XX @@ int cpu_exec(CPUState *cpu)
 * for the fast lookup
 */
 h = tb_jmp_cache_hash_func(pc);
- qatomic_set(&cpu->tb_jmp_cache->array[h].tb, tb);
+ tb_jmp_cache_set(cpu->tb_jmp_cache, h, tb, pc);
 }

 #ifndef CONFIG_USER_ONLY
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static int encode_search(TranslationBlock *tb, uint8_t *block)

 for (j = 0; j < TARGET_INSN_START_WORDS; ++j) {
 if (i == 0) {
- prev = (j == 0 ? tb_pc(tb) : 0);
+ prev = (!TARGET_TB_PCREL && j == 0 ? tb_pc(tb) : 0);
 } else {
 prev = tcg_ctx->gen_insn_data[i - 1][j];
 }
@@ -XXX,XX +XXX,XX @@ static int encode_search(TranslationBlock *tb, uint8_t *block)
 static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
 uintptr_t searched_pc, bool reset_icount)
 {
- target_ulong data[TARGET_INSN_START_WORDS] = { tb_pc(tb) };
+ target_ulong data[TARGET_INSN_START_WORDS];
 uintptr_t host_pc = (uintptr_t)tb->tc.ptr;
 CPUArchState *env = cpu->env_ptr;
 const uint8_t *p = tb->tc.ptr + tb->tc.size;
@@ -XXX,XX +XXX,XX @@ static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
 return -1;
 }

+ memset(data, 0, sizeof(data));
+ if (!TARGET_TB_PCREL) {
+ data[0] = tb_pc(tb);
+ }
+
 /* Reconstruct the stored insn data while looking for the point at
 which the end of the insn exceeds the searched_pc. */
 for (i = 0; i < num_insns; ++i) {
@@ -XXX,XX +XXX,XX @@ static bool tb_cmp(const void *ap, const void *bp)
 const TranslationBlock *a = ap;
 const TranslationBlock *b = bp;

- return tb_pc(a) == tb_pc(b) &&
- a->cs_base == b->cs_base &&
- a->flags == b->flags &&
- (tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
- a->trace_vcpu_dstate == b->trace_vcpu_dstate &&
- a->page_addr[0] == b->page_addr[0] &&
- a->page_addr[1] == b->page_addr[1];
+ return ((TARGET_TB_PCREL || tb_pc(a) == tb_pc(b)) &&
+ a->cs_base == b->cs_base &&
+ a->flags == b->flags &&
+ (tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
+ a->trace_vcpu_dstate == b->trace_vcpu_dstate &&
+ a->page_addr[0] == b->page_addr[0] &&
+ a->page_addr[1] == b->page_addr[1]);
 }

 void tb_htable_init(void)
@@ -XXX,XX +XXX,XX @@ static inline void tb_jmp_unlink(TranslationBlock *dest)
 qemu_spin_unlock(&dest->jmp_lock);
 }

+static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
+{
+ CPUState *cpu;
+
+ if (TARGET_TB_PCREL) {
+ /* A TB may be at any virtual address */
+ CPU_FOREACH(cpu) {
+ tcg_flush_jmp_cache(cpu);
+ }
+ } else {
+ uint32_t h = tb_jmp_cache_hash_func(tb_pc(tb));
+
+ CPU_FOREACH(cpu) {
+ CPUJumpCache *jc = cpu->tb_jmp_cache;
+
+ if (qatomic_read(&jc->array[h].tb) == tb) {
+ qatomic_set(&jc->array[h].tb, NULL);
+ }
+ }
+ }
+}
+
 /*
 * In user-mode, call with mmap_lock held.
 * In !user-mode, if @rm_from_page_list is set, call with the TB's pages'
@@ -XXX,XX +XXX,XX @@ static inline void tb_jmp_unlink(TranslationBlock *dest)
 */
 static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 {
- CPUState *cpu;
 PageDesc *p;
 uint32_t h;
 tb_page_addr_t phys_pc;
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)

 /* remove the TB from the hash list */
 phys_pc = tb->page_addr[0];
- h = tb_hash_func(phys_pc, tb_pc(tb), tb->flags, orig_cflags,
- tb->trace_vcpu_dstate);
+ h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
+ tb->flags, orig_cflags, tb->trace_vcpu_dstate);
 if (!qht_remove(&tb_ctx.htable, tb, h)) {
 return;
 }
@@ -XXX,XX +XXX,XX @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 }

 /* remove the TB from the hash list */
- h = tb_jmp_cache_hash_func(tb->pc);
- CPU_FOREACH(cpu) {
- CPUJumpCache *jc = cpu->tb_jmp_cache;
- if (qatomic_read(&jc->array[h].tb) == tb) {
- qatomic_set(&jc->array[h].tb, NULL);
- }
- }
+ tb_jmp_cache_inval_tb(tb);

 /* suppress this TB from the two jump lists */
 tb_remove_from_jmp_list(tb, 0);
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
 }

 /* add in the hash table */
- h = tb_hash_func(phys_pc, tb_pc(tb), tb->flags, tb->cflags,
- tb->trace_vcpu_dstate);
+ h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
+ tb->flags, tb->cflags, tb->trace_vcpu_dstate);
 qht_insert(&tb_ctx.htable, tb, h, &existing_tb);

 /* remove TB from the page(s) if we couldn't insert it */
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,

 gen_code_buf = tcg_ctx->code_gen_ptr;
 tb->tc.ptr = tcg_splitwx_to_rx(gen_code_buf);
+#if !TARGET_TB_PCREL
 tb->pc = pc;
+#endif
 tb->cs_base = cs_base;
 tb->flags = flags;
 tb->cflags = cflags;
--
2.34.1
The commentary talks about "in concert with the addresses
assigned in the relevant linker script", except there is no
linker script for softmmu, nor has there been for some time.

(Do not confuse the user-only linker script editing that was
removed in the previous patch, because user-only does not
use this code_gen_buffer allocation method.)

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/translate-all.c | 37 +++++--------------------------------
1 file changed, 5 insertions(+), 32 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static inline void *alloc_code_gen_buffer(void)
 {
 int prot = PROT_WRITE | PROT_READ | PROT_EXEC;
 int flags = MAP_PRIVATE | MAP_ANONYMOUS;
- uintptr_t start = 0;
 size_t size = tcg_ctx->code_gen_buffer_size;
 void *buf;

- /* Constrain the position of the buffer based on the host cpu.
- Note that these addresses are chosen in concert with the
- addresses assigned in the relevant linker script file. */
-# if defined(__PIE__) || defined(__PIC__)
- /* Don't bother setting a preferred location if we're building
- a position-independent executable. We're more likely to get
- an address near the main executable if we let the kernel
- choose the address. */
-# elif defined(__x86_64__) && defined(MAP_32BIT)
- /* Force the memory down into low memory with the executable.
- Leave the choice of exact location with the kernel. */
- flags |= MAP_32BIT;
- /* Cannot expect to map more than 800MB in low memory. */
- if (size > 800u * 1024 * 1024) {
- tcg_ctx->code_gen_buffer_size = size = 800u * 1024 * 1024;
- }
-# elif defined(__sparc__)
- start = 0x40000000ul;
-# elif defined(__s390x__)
- start = 0x90000000ul;
-# elif defined(__mips__)
-# if _MIPS_SIM == _ABI64
- start = 0x128000000ul;
-# else
- start = 0x08000000ul;
-# endif
-# endif
-
- buf = mmap((void *)start, size, prot, flags, -1, 0);
+ buf = mmap(NULL, size, prot, flags, -1, 0);
 if (buf == MAP_FAILED) {
 return NULL;
 }

 #ifdef __mips__
 if (cross_256mb(buf, size)) {
- /* Try again, with the original still mapped, to avoid re-acquiring
- that 256mb crossing. This time don't specify an address. */
+ /*
+ * Try again, with the original still mapped, to avoid re-acquiring
+ * the same 256mb crossing.
+ */
 size_t size2;
 void *buf2 = mmap(NULL, size, prot, flags, -1, 0);
 switch ((int)(buf2 != MAP_FAILED)) {
--
2.20.1
PIE is supported on many other hosts besides x86.

The default for non-x86 is now the same as x86: pie is used
if supported, and may be forced via --enable/--disable-pie.

The original commit (40d6444e91c) said:

"Non-x86 are not changed, as they require TCG changes"

but I think that's wrong -- there's nothing about PIE that
affects TCG one way or another.

Tested on aarch64 (bionic) and ppc64le (centos 7) hosts.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
configure | 10 ----------
1 file changed, 10 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ if ! compile_prog "-Werror" "" ; then
    "Thread-Local Storage (TLS). Please upgrade to a version that does."
 fi

-if test "$pie" = ""; then
- case "$cpu-$targetos" in
- i386-Linux|x86_64-Linux|x32-Linux|i386-OpenBSD|x86_64-OpenBSD)
- ;;
- *)
- pie="no"
- ;;
- esac
-fi
-
 if test "$pie" != "no" ; then
 cat > $TMPC << EOF
--
2.20.1
The CFLAGS_NOPIE and LDFLAGS_NOPIE variables are used
in pc-bios/optionrom/Makefile, which has nothing to do
with the PIE setting of the main qemu executables.

This overrides any operating system default to build
all executables as PIE, which is important for ROMs.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
configure | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ if ! compile_prog "-Werror" "" ; then
    "Thread-Local Storage (TLS). Please upgrade to a version that does."
 fi

-if test "$pie" != "no" ; then
- cat > $TMPC << EOF
+cat > $TMPC << EOF

 #ifdef __linux__
 # define THREAD __thread
 #else
 # define THREAD
 #endif
-
 static THREAD int tls_var;
-
 int main(void) { return tls_var; }
-
 EOF
- # check we support --no-pie first...
- if compile_prog "-Werror -fno-pie" "-no-pie"; then
- CFLAGS_NOPIE="-fno-pie"
- LDFLAGS_NOPIE="-nopie"
- fi

+# Check we support --no-pie first; we will need this for building ROMs.
+if compile_prog "-Werror -fno-pie" "-no-pie"; then
+ CFLAGS_NOPIE="-fno-pie"
+ LDFLAGS_NOPIE="-no-pie"
+fi
+
+if test "$pie" != "no" ; then
 if compile_prog "-fPIE -DPIE" "-pie"; then
 QEMU_CFLAGS="-fPIE -DPIE $QEMU_CFLAGS"
 LDFLAGS="-pie $LDFLAGS"
--
2.20.1
We don't actually need the result of the read, only to probe that the
memory mapping exists. This is exactly what probe_access does.

This is also the only user of any cpu_ld*_code_ra function.
Removing this allows the interface to be removed shortly.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/xtensa/mmu_helper.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/xtensa/mmu_helper.c b/target/xtensa/mmu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/mmu_helper.c
+++ b/target/xtensa/mmu_helper.c
@@ -XXX,XX +XXX,XX @@
 void HELPER(itlb_hit_test)(CPUXtensaState *env, uint32_t vaddr)
 {
 /*
- * Attempt the memory load; we don't care about the result but
+ * Probe the memory; we don't care about the result but
 * only the side-effects (ie any MMU or other exception)
 */
- cpu_ldub_code_ra(env, vaddr, GETPC());
+ probe_access(env, vaddr, 1, MMU_INST_FETCH,
+ cpu_mmu_index(env, true), GETPC());
 }

 void HELPER(wsr_rasid)(CPUXtensaState *env, uint32_t v)
--
2.20.1
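The idea in the hunk above is that a "probe" triggers the fault side effects of an access without producing a value. A self-contained toy model of that split, not QEMU code (the fault mechanism here is simulated with setjmp/longjmp):

/* toy_probe.c -- check an access for faults without using its value */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf fault;

static void probe(const unsigned char *base, unsigned long addr,
                  unsigned long limit)
{
    if (addr >= limit) {
        longjmp(fault, 1);   /* take the "MMU exception" path */
    }
    (void)base[addr];        /* touch the mapping, discard the value */
}

int main(void)
{
    static unsigned char ram[16];
    if (setjmp(fault) == 0) {
        probe(ram, 4, sizeof(ram));    /* ok, value is discarded */
        probe(ram, 99, sizeof(ram));   /* out of range: faults */
        puts("unreachable");
    } else {
        puts("probe faulted as expected");
    }
    return 0;
}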
In the cpu_ldst templates, we already require a MemOp, and it
is cleaner and clearer to pass that instead of 3 separate
arguments describing the memory operation.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/cpu_ldst_template.h | 22 +++++++++++-----------
include/exec/cpu_ldst_useronly_template.h | 12 ++++++------
2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/exec/cpu_ldst_template.h b/include/exec/cpu_ldst_template.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst_template.h
+++ b/include/exec/cpu_ldst_template.h
@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
 RES_TYPE res;
 target_ulong addr;
 int mmu_idx = CPU_MMU_INDEX;
- TCGMemOpIdx oi;
+ MemOp op = MO_TE | SHIFT;
 #if !defined(SOFTMMU_CODE_ACCESS)
- uint16_t meminfo = trace_mem_build_info(SHIFT, false, MO_TE, false, mmu_idx);
+ uint16_t meminfo = trace_mem_get_info(op, mmu_idx, false);
 trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
 #endif

@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_ld, USUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
 entry = tlb_entry(env, mmu_idx, addr);
 if (unlikely(entry->ADDR_READ !=
 (addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
- oi = make_memop_idx(SHIFT, mmu_idx);
+ TCGMemOpIdx oi = make_memop_idx(op, mmu_idx);
 res = glue(glue(helper_ret_ld, URETSUFFIX), MMUSUFFIX)(env, addr,
- oi, retaddr);
+ oi, retaddr);
 } else {
 uintptr_t hostaddr = addr + entry->addend;
 res = glue(glue(ld, USUFFIX), _p)((uint8_t *)hostaddr);
@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
 int res;
 target_ulong addr;
 int mmu_idx = CPU_MMU_INDEX;
- TCGMemOpIdx oi;
-#if !defined(SOFTMMU_CODE_ACCESS)
- uint16_t meminfo = trace_mem_build_info(SHIFT, true, MO_TE, false, mmu_idx);
+ MemOp op = MO_TE | MO_SIGN | SHIFT;
+#ifndef SOFTMMU_CODE_ACCESS
+ uint16_t meminfo = trace_mem_get_info(op, mmu_idx, false);
 trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
 #endif

@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_lds, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
 entry = tlb_entry(env, mmu_idx, addr);
 if (unlikely(entry->ADDR_READ !=
 (addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
- oi = make_memop_idx(SHIFT, mmu_idx);
+ TCGMemOpIdx oi = make_memop_idx(op & ~MO_SIGN, mmu_idx);
 res = (DATA_STYPE)glue(glue(helper_ret_ld, SRETSUFFIX),
 MMUSUFFIX)(env, addr, oi, retaddr);
 } else {
@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_st, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
 CPUTLBEntry *entry;
 target_ulong addr;
 int mmu_idx = CPU_MMU_INDEX;
- TCGMemOpIdx oi;
+ MemOp op = MO_TE | SHIFT;
 #if !defined(SOFTMMU_CODE_ACCESS)
- uint16_t meminfo = trace_mem_build_info(SHIFT, false, MO_TE, true, mmu_idx);
+ uint16_t meminfo = trace_mem_get_info(op, mmu_idx, true);
 trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
 #endif

@@ -XXX,XX +XXX,XX @@ glue(glue(glue(cpu_st, SUFFIX), MEMSUFFIX), _ra)(CPUArchState *env,
 entry = tlb_entry(env, mmu_idx, addr);
 if (unlikely(tlb_addr_write(entry) !=
 (addr & (TARGET_PAGE_MASK | (DATA_SIZE - 1))))) {
- oi = make_memop_idx(SHIFT, mmu_idx);
+ TCGMemOpIdx oi = make_memop_idx(op, mmu_idx);
 glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)(env, addr, v, oi,
 retaddr);
 } else {
diff --git a/include/exec/cpu_ldst_useronly_template.h b/include/exec/cpu_ldst_useronly_template.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst_useronly_template.h
+++ b/include/exec/cpu_ldst_useronly_template.h
@@ -XXX,XX +XXX,XX @@ glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr)
 ret = glue(glue(ld, USUFFIX), _p)(g2h(ptr));
 clear_helper_retaddr();
 #else
- uint16_t meminfo = trace_mem_build_info(SHIFT, false, MO_TE, false,
- MMU_USER_IDX);
+ MemOp op = MO_TE | SHIFT;
+ uint16_t meminfo = trace_mem_get_info(op, MMU_USER_IDX, false);
 trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
 ret = glue(glue(ld, USUFFIX), _p)(g2h(ptr));
 #endif
@@ -XXX,XX +XXX,XX @@ glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr)
 ret = glue(glue(lds, SUFFIX), _p)(g2h(ptr));
 clear_helper_retaddr();
 #else
- uint16_t meminfo = trace_mem_build_info(SHIFT, true, MO_TE, false,
- MMU_USER_IDX);
+ MemOp op = MO_TE | MO_SIGN | SHIFT;
+ uint16_t meminfo = trace_mem_get_info(op, MMU_USER_IDX, false);
 trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
 ret = glue(glue(lds, SUFFIX), _p)(g2h(ptr));
 qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
@@ -XXX,XX +XXX,XX @@ static inline void
 glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, abi_ptr ptr,
 RES_TYPE v)
 {
- uint16_t meminfo = trace_mem_build_info(SHIFT, false, MO_TE, true,
- MMU_USER_IDX);
+ MemOp op = MO_TE | SHIFT;
+ uint16_t meminfo = trace_mem_get_info(op, MMU_USER_IDX, true);
 trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
 glue(glue(st, SUFFIX), _p)(g2h(ptr), v);
 qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
--
2.20.1
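The pattern above packs size, signedness and endianness into one flags word (`MO_TE | MO_SIGN | SHIFT`) and strips flags that a given consumer does not want (`op & ~MO_SIGN`). A self-contained toy model of that style of flags composition, not QEMU code and with invented bit values:

/* toy_memop.c -- one flags word instead of three separate arguments */
#include <stdio.h>

enum {
    TOY_SIZE_MASK = 0x3,   /* log2(size): 0=1B, 1=2B, 2=4B, 3=8B */
    TOY_SIGN      = 0x4,   /* sign-extend the load */
    TOY_BE        = 0x8,   /* big-endian access */
};

static void describe(unsigned op)
{
    printf("size=%u bytes, %s, %s-endian\n",
           1u << (op & TOY_SIZE_MASK),
           (op & TOY_SIGN) ? "signed" : "unsigned",
           (op & TOY_BE) ? "big" : "little");
}

int main(void)
{
    unsigned op = 1 | TOY_SIGN;    /* like MO_TE | MO_SIGN | SHIFT */
    describe(op);
    describe(op & ~TOY_SIGN);      /* like make_memop_idx(op & ~MO_SIGN, ...) */
    return 0;
}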
It is easy for the atomic helpers to use trace_mem_build_info
directly, without resorting to symbol pasting. For this usage,
we cannot use trace_mem_get_info, because the MemOp does not
support 16-byte accesses.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/atomic_template.h | 67 +++++++++++++------------------------
trace/mem-internal.h | 17 ----------
2 files changed, 24 insertions(+), 60 deletions(-)

diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/atomic_template.h
+++ b/accel/tcg/atomic_template.h
@@ -XXX,XX +XXX,XX @@
 the ATOMIC_NAME macro, and redefined below. */
 #if DATA_SIZE == 1
 # define END
-# define MEND _be /* either le or be would be fine */
 #elif defined(HOST_WORDS_BIGENDIAN)
 # define END _be
-# define MEND _be
 #else
 # define END _le
-# define MEND _le
 #endif

 ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
 ATOMIC_MMU_DECLS;
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 DATA_TYPE ret;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, false,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, 0, false,
+ ATOMIC_MMU_IDX);

 atomic_trace_rmw_pre(env, addr, info);
 #if DATA_SIZE == 16
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 {
 ATOMIC_MMU_DECLS;
 DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, false,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, 0, false,
+ ATOMIC_MMU_IDX);

 atomic_trace_ld_pre(env, addr, info);
 val = atomic16_read(haddr);
@@ -XXX,XX +XXX,XX @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
 {
 ATOMIC_MMU_DECLS;
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, true,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, 0, true,
+ ATOMIC_MMU_IDX);

 atomic_trace_st_pre(env, addr, info);
 atomic16_set(haddr, val);
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
 ATOMIC_MMU_DECLS;
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 DATA_TYPE ret;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, false,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, 0, false,
+ ATOMIC_MMU_IDX);

 atomic_trace_rmw_pre(env, addr, info);
 ret = atomic_xchg__nocheck(haddr, val);
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
 ATOMIC_MMU_DECLS; \
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
 DATA_TYPE ret; \
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, \
- false, \
- ATOMIC_MMU_IDX); \
- \
+ uint16_t info = trace_mem_build_info(SHIFT, false, 0, false, \
+ ATOMIC_MMU_IDX); \
 atomic_trace_rmw_pre(env, addr, info); \
 ret = atomic_##X(haddr, val); \
 ATOMIC_MMU_CLEANUP; \
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
 ATOMIC_MMU_DECLS; \
 XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
 XDATA_TYPE cmp, old, new, val = xval; \
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, \
- false, \
- ATOMIC_MMU_IDX); \
- \
+ uint16_t info = trace_mem_build_info(SHIFT, false, 0, false, \
+ ATOMIC_MMU_IDX); \
 atomic_trace_rmw_pre(env, addr, info); \
 smp_mb(); \
 cmp = atomic_read__nocheck(haddr); \
@@ -XXX,XX +XXX,XX @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
 #endif /* DATA SIZE >= 16 */

 #undef END
-#undef MEND

 #if DATA_SIZE > 1

@@ -XXX,XX +XXX,XX @@ GEN_ATOMIC_HELPER_FN(umax_fetch, MAX, DATA_TYPE, new)
 within the ATOMIC_NAME macro. */
 #ifdef HOST_WORDS_BIGENDIAN
 # define END _le
-# define MEND _le
 #else
 # define END _be
-# define MEND _be
 #endif

 ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, target_ulong addr,
 ATOMIC_MMU_DECLS;
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 DATA_TYPE ret;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT,
- false,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, false,
+ ATOMIC_MMU_IDX);

 atomic_trace_rmw_pre(env, addr, info);
 #if DATA_SIZE == 16
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(ld)(CPUArchState *env, target_ulong addr EXTRA_ARGS)
 {
 ATOMIC_MMU_DECLS;
 DATA_TYPE val, *haddr = ATOMIC_MMU_LOOKUP;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT,
- false,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, false,
+ ATOMIC_MMU_IDX);

 atomic_trace_ld_pre(env, addr, info);
 val = atomic16_read(haddr);
@@ -XXX,XX +XXX,XX @@ void ATOMIC_NAME(st)(CPUArchState *env, target_ulong addr,
 {
 ATOMIC_MMU_DECLS;
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT,
- true,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, true,
+ ATOMIC_MMU_IDX);

 val = BSWAP(val);
 atomic_trace_st_pre(env, addr, info);
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, target_ulong addr,
 ATOMIC_MMU_DECLS;
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP;
 ABI_TYPE ret;
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT,
- false,
- ATOMIC_MMU_IDX);
+ uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, false,
+ ATOMIC_MMU_IDX);

 atomic_trace_rmw_pre(env, addr, info);
 ret = atomic_xchg__nocheck(haddr, BSWAP(val));
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
 ATOMIC_MMU_DECLS; \
 DATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
 DATA_TYPE ret; \
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, \
- false, \
- ATOMIC_MMU_IDX); \
- \
+ uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, \
+ false, ATOMIC_MMU_IDX); \
 atomic_trace_rmw_pre(env, addr, info); \
 ret = atomic_##X(haddr, BSWAP(val)); \
 ATOMIC_MMU_CLEANUP; \
@@ -XXX,XX +XXX,XX @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, target_ulong addr, \
 ATOMIC_MMU_DECLS; \
 XDATA_TYPE *haddr = ATOMIC_MMU_LOOKUP; \
 XDATA_TYPE ldo, ldn, old, new, val = xval; \
- uint16_t info = glue(trace_mem_build_info_no_se, MEND)(SHIFT, \
- false, \
- ATOMIC_MMU_IDX); \
- \
+ uint16_t info = trace_mem_build_info(SHIFT, false, MO_BSWAP, \
+ false, ATOMIC_MMU_IDX); \
 atomic_trace_rmw_pre(env, addr, info); \
 smp_mb(); \
 ldn = atomic_read__nocheck(haddr); \
@@ -XXX,XX +XXX,XX @@ GEN_ATOMIC_HELPER_FN(add_fetch, ADD, DATA_TYPE, new)
 #endif /* DATA_SIZE >= 16 */

 #undef END
-#undef MEND
 #endif /* DATA_SIZE > 1 */

 #undef BSWAP
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -XXX,XX +XXX,XX @@ static inline uint16_t trace_mem_get_info(MemOp op,
 mmu_idx);
 }

-/* Used by the atomic helpers */
-static inline
-uint16_t trace_mem_build_info_no_se_be(int size_shift, bool store,
- TCGMemOpIdx oi)
-{
- return trace_mem_build_info(size_shift, false, MO_BE, store,
- get_mmuidx(oi));
-}
-
-static inline
-uint16_t trace_mem_build_info_no_se_le(int size_shift, bool store,
- TCGMemOpIdx oi)
-{
- return trace_mem_build_info(size_shift, false, MO_LE, store,
- get_mmuidx(oi));
-}
-
 #endif /* TRACE__MEM_INTERNAL_H */
--
2.20.1
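For context, trace_mem_build_info conceptually packs the access attributes (size shift, sign extension, endianness, store flag, mmu index) into a single uint16_t. A toy sketch of that kind of bit-packing in plain C; the field layout below is invented for illustration and is not QEMU's actual encoding:

/* toy_traceinfo.c -- pack access attributes into one uint16_t */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t build_info(unsigned size_shift, bool sign_extend,
                           unsigned big_endian, bool store, unsigned mmu_idx)
{
    return (uint16_t)((size_shift & 0xf)
                      | ((unsigned)sign_extend << 4)
                      | ((big_endian & 1) << 5)
                      | ((unsigned)store << 6)
                      | ((mmu_idx & 0xf) << 8));
}

int main(void)
{
    /* e.g. a 16-byte (shift 4) little-endian unsigned store on mmu index 1 */
    printf("info=0x%04x\n", build_info(4, false, 0, true, 1));
    return 0;
}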
Code movement in an upcoming patch will show that this file
was implicitly depending on tcg.h being included indirectly.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/s390x/mem_helper.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/mem_helper.c
+++ b/target/s390x/mem_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "qemu/int128.h"
 #include "qemu/atomic128.h"
+#include "tcg.h"

 #if !defined(CONFIG_USER_ONLY)
 #include "hw/s390x/storage-keys.h"
--
2.20.1
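This patch and the next few all apply the same "include what you use" rule: a file should include its own dependencies rather than rely on another header pulling them in, so that later include-path churn cannot break it. A toy illustration in C, not QEMU code:

/* toy_trace.h -- a header that includes what it uses */
#ifndef TOY_TRACE_H
#define TOY_TRACE_H

#include <stdio.h>   /* used below, so included here, not in callers */

static inline void toy_trace_event(int num)
{
    printf("event %d\n", num);
}

#endif /* TOY_TRACE_H */

With that, a .c file that includes only toy_trace.h still compiles even if it never includes stdio.h itself, which is exactly the property the tcg.h additions restore.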
Code movement in an upcoming patch will show that this file
was implicitly depending on tcg.h being included indirectly.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/tcg-runtime.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/accel/tcg/tcg-runtime.c b/accel/tcg/tcg-runtime.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/tcg-runtime.c
+++ b/accel/tcg/tcg-runtime.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/tb-lookup.h"
 #include "disas/disas.h"
 #include "exec/log.h"
+#include "tcg.h"

 /* 32-bit helpers */

--
2.20.1
Deleted patch
1
Code movement in an upcoming patch will show that this file
2
was implicitly depending on tcg.h being included indirectly.
3
1
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
linux-user/syscall.c | 1 +
10
1 file changed, 1 insertion(+)
11
12
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/linux-user/syscall.c
15
+++ b/linux-user/syscall.c
16
@@ -XXX,XX +XXX,XX @@
17
#include "user/syscall-trace.h"
18
#include "qapi/error.h"
19
#include "fd-trans.h"
20
+#include "tcg.h"
21
22
#ifndef CLONE_IO
23
#define CLONE_IO 0x80000000 /* Clone io context */
24
--
25
2.20.1
26
27
Deleted patch
1
Code movement in an upcoming patch will show that this file
2
was implicitly depending on trace-root.h being included beforehand.
3
1
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
include/user/syscall-trace.h | 2 ++
10
1 file changed, 2 insertions(+)
11
12
diff --git a/include/user/syscall-trace.h b/include/user/syscall-trace.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/include/user/syscall-trace.h
15
+++ b/include/user/syscall-trace.h
16
@@ -XXX,XX +XXX,XX @@
17
#ifndef _SYSCALL_TRACE_H_
18
#define _SYSCALL_TRACE_H_
19
20
+#include "trace-root.h"
21
+
22
/*
23
* These helpers just provide a common place for the various
24
* subsystems that want to track syscalls to put their hooks in. We
25
--
26
2.20.1
27
28
Deleted patch
1
Code movement in an upcoming patch will show that this file
2
was implicitly depending on trace/mem.h being included beforehand.
3
1
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reported-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
plugins/api.c | 1 +
10
1 file changed, 1 insertion(+)
11
12
diff --git a/plugins/api.c b/plugins/api.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/plugins/api.c
15
+++ b/plugins/api.c
16
@@ -XXX,XX +XXX,XX @@
17
#include "qemu/plugin-memory.h"
18
#include "hw/boards.h"
19
#endif
20
+#include "trace/mem.h"
21
22
/* Uninstall and Reset handlers */
23
24
--
25
2.20.1
26
27
Deleted patch
1
The DO_LOAD macros replicate the distinction already performed
2
by the cpu_ldst.h functions. Use them.
3
1
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
include/exec/cpu_ldst.h | 11 ---------
9
include/exec/translator.h | 48 +++++++++++----------------------------
10
2 files changed, 13 insertions(+), 46 deletions(-)
11
12
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/include/exec/cpu_ldst.h
15
+++ b/include/exec/cpu_ldst.h
16
@@ -XXX,XX +XXX,XX @@ static inline void clear_helper_retaddr(void)
17
#include "exec/cpu_ldst_useronly_template.h"
18
#undef MEMSUFFIX
19
20
-/*
21
- * Code access is deprecated in favour of translator_ld* functions
22
- * (see translator.h). However there are still users that need to
23
- * converted so for now these stay.
24
- */
25
#define MEMSUFFIX _code
26
#define CODE_ACCESS
27
#define DATA_SIZE 1
28
@@ -XXX,XX +XXX,XX @@ void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
29
#undef CPU_MMU_INDEX
30
#undef MEMSUFFIX
31
32
-/*
33
- * Code access is deprecated in favour of translator_ld* functions
34
- * (see translator.h). However there are still users that need to
35
- * converted so for now these stay.
36
- */
37
-
38
#define CPU_MMU_INDEX (cpu_mmu_index(env, true))
39
#define MEMSUFFIX _code
40
#define SOFTMMU_CODE_ACCESS
41
diff --git a/include/exec/translator.h b/include/exec/translator.h
42
index XXXXXXX..XXXXXXX 100644
43
--- a/include/exec/translator.h
44
+++ b/include/exec/translator.h
45
@@ -XXX,XX +XXX,XX @@ void translator_loop_temp_check(DisasContextBase *db);
46
/*
47
* Translator Load Functions
48
*
49
- * These are intended to replace the old cpu_ld*_code functions and
50
- * are mandatory for front-ends that have been migrated to the common
51
- * translator_loop. These functions are only intended to be called
52
- * from the translation stage and should not be called from helper
53
- * functions. Those functions should be converted to encode the
54
- * relevant information at translation time.
55
+ * These are intended to replace the direct usage of the cpu_ld*_code
56
+ * functions and are mandatory for front-ends that have been migrated
57
+ * to the common translator_loop. These functions are only intended
58
+ * to be called from the translation stage and should not be called
59
+ * from helper functions. Those functions should be converted to encode
60
+ * the relevant information at translation time.
61
*/
62
63
-#ifdef CONFIG_USER_ONLY
64
-
65
-#define DO_LOAD(type, name, shift) \
66
- do { \
67
- set_helper_retaddr(1); \
68
- ret = name ## _p(g2h(pc)); \
69
- clear_helper_retaddr(); \
70
- } while (0)
71
-
72
-#else
73
-
74
-#define DO_LOAD(type, name, shift) \
75
- do { \
76
- int mmu_idx = cpu_mmu_index(env, true); \
77
- TCGMemOpIdx oi = make_memop_idx(shift, mmu_idx); \
78
- ret = helper_ret_ ## name ## _cmmu(env, pc, oi, 0); \
79
- } while (0)
80
-
81
-#endif
82
-
83
-#define GEN_TRANSLATOR_LD(fullname, name, type, shift, swap_fn) \
84
+#define GEN_TRANSLATOR_LD(fullname, type, load_fn, swap_fn) \
85
static inline type \
86
fullname ## _swap(CPUArchState *env, abi_ptr pc, bool do_swap) \
87
{ \
88
- type ret; \
89
- DO_LOAD(type, name, shift); \
90
- \
91
+ type ret = load_fn(env, pc); \
92
if (do_swap) { \
93
ret = swap_fn(ret); \
94
} \
95
@@ -XXX,XX +XXX,XX @@ void translator_loop_temp_check(DisasContextBase *db);
96
return fullname ## _swap(env, pc, false); \
97
}
98
99
-GEN_TRANSLATOR_LD(translator_ldub, ldub, uint8_t, 0, /* no swap */ )
100
-GEN_TRANSLATOR_LD(translator_ldsw, ldsw, int16_t, 1, bswap16)
101
-GEN_TRANSLATOR_LD(translator_lduw, lduw, uint16_t, 1, bswap16)
102
-GEN_TRANSLATOR_LD(translator_ldl, ldl, uint32_t, 2, bswap32)
103
-GEN_TRANSLATOR_LD(translator_ldq, ldq, uint64_t, 3, bswap64)
104
+GEN_TRANSLATOR_LD(translator_ldub, uint8_t, cpu_ldub_code, /* no swap */)
105
+GEN_TRANSLATOR_LD(translator_ldsw, int16_t, cpu_ldsw_code, bswap16)
106
+GEN_TRANSLATOR_LD(translator_lduw, uint16_t, cpu_lduw_code, bswap16)
107
+GEN_TRANSLATOR_LD(translator_ldl, uint32_t, cpu_ldl_code, bswap32)
108
+GEN_TRANSLATOR_LD(translator_ldq, uint64_t, cpu_ldq_code, bswap64)
109
#undef GEN_TRANSLATOR_LD
110
111
#endif /* EXEC__TRANSLATOR_H */
112
--
113
2.20.1
114
115
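For reference, here is roughly what GEN_TRANSLATOR_LD now expands to for the lduw case (a sketch reconstructed from the hunk above, not a verbatim preprocessor dump):

    static inline uint16_t
    translator_lduw_swap(CPUArchState *env, abi_ptr pc, bool do_swap)
    {
        /* The open-coded DO_LOAD is replaced by the cpu_ldst.h function. */
        uint16_t ret = cpu_lduw_code(env, pc);
        if (do_swap) {
            ret = bswap16(ret);
        }
        return ret;
    }

    static inline uint16_t
    translator_lduw(CPUArchState *env, abi_ptr pc)
    {
        return translator_lduw_swap(env, pc, false);
    }

A front end would call, for example, translator_lduw_swap(env, dc->base.pc_next, need_swap) from its translate_insn hook; the dc and need_swap names are made up for the illustration.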
1
This finishes the new interface begun with the previous patch.
1
From: Leandro Lupori <leandro.lupori@eldorado.org.br>
2
Document the interface and deprecate MMU_MODE<N>_SUFFIX.
3
2
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
3
PowerPC64 processors handle direct branches better than indirect
5
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
4
ones, resulting in fewer stalled cycles and branch misses.
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
6
However, PPC's tb_target_set_jmp_target() was only using direct
7
branches for 16-bit jumps, while PowerPC64's unconditional branch
8
instructions are able to handle displacements of up to 26 bits.
9
To take advantage of this, jumps whose displacements fit in
10
17 to 26 bits are now also converted to direct branches.
11
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
14
[rth: Expanded some commentary.]
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
16
---
9
include/exec/cpu_ldst.h | 80 +++++++++++++-
17
tcg/ppc/tcg-target.c.inc | 119 +++++++++++++++++++++++++++++----------
10
docs/devel/loads-stores.rst | 211 ++++++++++++++++++++++++++----------
18
1 file changed, 88 insertions(+), 31 deletions(-)
11
2 files changed, 230 insertions(+), 61 deletions(-)
12
19
13
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
20
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
14
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
15
--- a/include/exec/cpu_ldst.h
22
--- a/tcg/ppc/tcg-target.c.inc
16
+++ b/include/exec/cpu_ldst.h
23
+++ b/tcg/ppc/tcg-target.c.inc
17
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@ static void tcg_out_mb(TCGContext *s, TCGArg a0)
18
*
25
tcg_out32(s, insn);
19
* The syntax for the accessors is:
26
}
20
*
27
21
- * load: cpu_ld{sign}{size}_{mmusuffix}(env, ptr)
28
+static inline uint64_t make_pair(tcg_insn_unit i1, tcg_insn_unit i2)
22
+ * load: cpu_ld{sign}{size}_{mmusuffix}(env, ptr)
23
+ * cpu_ld{sign}{size}_{mmusuffix}_ra(env, ptr, retaddr)
24
+ * cpu_ld{sign}{size}_mmuidx_ra(env, ptr, mmu_idx, retaddr)
25
*
26
- * store: cpu_st{sign}{size}_{mmusuffix}(env, ptr, val)
27
+ * store: cpu_st{size}_{mmusuffix}(env, ptr, val)
28
+ * cpu_st{size}_{mmusuffix}_ra(env, ptr, val, retaddr)
29
+ * cpu_st{size}_mmuidx_ra(env, ptr, val, mmu_idx, retaddr)
30
*
31
* sign is:
32
* (empty): for 32 and 64 bit sizes
33
@@ -XXX,XX +XXX,XX @@
34
* l: 32 bits
35
* q: 64 bits
36
*
37
- * mmusuffix is one of the generic suffixes "data" or "code", or
38
- * (for softmmu configs) a target-specific MMU mode suffix as defined
39
- * in target cpu.h.
40
+ * mmusuffix is one of the generic suffixes "data" or "code", or "mmuidx".
41
+ * The "mmuidx" suffix carries an extra mmu_idx argument that specifies
42
+ * the index to use; the "data" and "code" suffixes take the index from
43
+ * cpu_mmu_index().
44
*/
45
#ifndef CPU_LDST_H
46
#define CPU_LDST_H
47
@@ -XXX,XX +XXX,XX @@ static inline void clear_helper_retaddr(void)
48
#undef MEMSUFFIX
49
#undef CODE_ACCESS
50
51
+/*
52
+ * Provide the same *_mmuidx_ra interface as for softmmu.
53
+ * The mmu_idx argument is ignored.
54
+ */
55
+
56
+static inline uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
57
+ int mmu_idx, uintptr_t ra)
58
+{
29
+{
59
+ return cpu_ldub_data_ra(env, addr, ra);
30
+ if (HOST_BIG_ENDIAN) {
31
+ return (uint64_t)i1 << 32 | i2;
32
+ }
33
+ return (uint64_t)i2 << 32 | i1;
60
+}
34
+}
61
+
35
+
62
+static inline uint32_t cpu_lduw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
36
+static inline void ppc64_replace2(uintptr_t rx, uintptr_t rw,
63
+ int mmu_idx, uintptr_t ra)
37
+ tcg_insn_unit i0, tcg_insn_unit i1)
64
+{
38
+{
65
+ return cpu_lduw_data_ra(env, addr, ra);
39
+#if TCG_TARGET_REG_BITS == 64
40
+ qatomic_set((uint64_t *)rw, make_pair(i0, i1));
41
+ flush_idcache_range(rx, rw, 8);
42
+#else
43
+ qemu_build_not_reached();
44
+#endif
66
+}
45
+}
67
+
46
+
68
+static inline uint32_t cpu_ldl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
47
+static inline void ppc64_replace4(uintptr_t rx, uintptr_t rw,
69
+ int mmu_idx, uintptr_t ra)
48
+ tcg_insn_unit i0, tcg_insn_unit i1,
49
+ tcg_insn_unit i2, tcg_insn_unit i3)
70
+{
50
+{
71
+ return cpu_ldl_data_ra(env, addr, ra);
51
+ uint64_t p[2];
52
+
53
+ p[!HOST_BIG_ENDIAN] = make_pair(i0, i1);
54
+ p[HOST_BIG_ENDIAN] = make_pair(i2, i3);
55
+
56
+ /*
57
+ * There's no convenient way to get the compiler to allocate a pair
58
+ * of registers at an even index, so copy into r6/r7 and clobber.
59
+ */
60
+ asm("mr %%r6, %1\n\t"
61
+ "mr %%r7, %2\n\t"
62
+ "stq %%r6, %0"
63
+ : "=Q"(*(__int128 *)rw) : "r"(p[0]), "r"(p[1]) : "r6", "r7");
64
+ flush_idcache_range(rx, rw, 16);
72
+}
65
+}
73
+
66
+
74
+static inline uint64_t cpu_ldq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
67
void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_rx,
75
+ int mmu_idx, uintptr_t ra)
68
uintptr_t jmp_rw, uintptr_t addr)
76
+{
69
{
77
+ return cpu_ldq_data_ra(env, addr, ra);
70
- if (TCG_TARGET_REG_BITS == 64) {
78
+}
71
- tcg_insn_unit i1, i2;
72
- intptr_t tb_diff = addr - tc_ptr;
73
- intptr_t br_diff = addr - (jmp_rx + 4);
74
- uint64_t pair;
75
+ tcg_insn_unit i0, i1, i2, i3;
76
+ intptr_t tb_diff = addr - tc_ptr;
77
+ intptr_t br_diff = addr - (jmp_rx + 4);
78
+ intptr_t lo, hi;
79
80
- /* This does not exercise the range of the branch, but we do
81
- still need to be able to load the new value of TCG_REG_TB.
82
- But this does still happen quite often. */
83
- if (tb_diff == (int16_t)tb_diff) {
84
- i1 = ADDI | TAI(TCG_REG_TB, TCG_REG_TB, tb_diff);
85
- i2 = B | (br_diff & 0x3fffffc);
86
- } else {
87
- intptr_t lo = (int16_t)tb_diff;
88
- intptr_t hi = (int32_t)(tb_diff - lo);
89
- assert(tb_diff == hi + lo);
90
- i1 = ADDIS | TAI(TCG_REG_TB, TCG_REG_TB, hi >> 16);
91
- i2 = ADDI | TAI(TCG_REG_TB, TCG_REG_TB, lo);
92
- }
93
-#if HOST_BIG_ENDIAN
94
- pair = (uint64_t)i1 << 32 | i2;
95
-#else
96
- pair = (uint64_t)i2 << 32 | i1;
97
-#endif
98
-
99
- /* As per the enclosing if, this is ppc64. Avoid the _Static_assert
100
- within qatomic_set that would fail to build a ppc32 host. */
101
- qatomic_set__nocheck((uint64_t *)jmp_rw, pair);
102
- flush_idcache_range(jmp_rx, jmp_rw, 8);
103
- } else {
104
+ if (TCG_TARGET_REG_BITS == 32) {
105
intptr_t diff = addr - jmp_rx;
106
tcg_debug_assert(in_range_b(diff));
107
qatomic_set((uint32_t *)jmp_rw, B | (diff & 0x3fffffc));
108
flush_idcache_range(jmp_rx, jmp_rw, 4);
109
+ return;
110
}
79
+
111
+
80
+static inline int cpu_ldsb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
112
+ /*
81
+ int mmu_idx, uintptr_t ra)
113
+ * For 16-bit displacements, we can use a single add + branch.
82
+{
114
+ * This happens quite often.
83
+ return cpu_ldsb_data_ra(env, addr, ra);
115
+ */
84
+}
116
+ if (tb_diff == (int16_t)tb_diff) {
117
+ i0 = ADDI | TAI(TCG_REG_TB, TCG_REG_TB, tb_diff);
118
+ i1 = B | (br_diff & 0x3fffffc);
119
+ ppc64_replace2(jmp_rx, jmp_rw, i0, i1);
120
+ return;
121
+ }
85
+
122
+
86
+static inline int cpu_ldsw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
123
+ lo = (int16_t)tb_diff;
87
+ int mmu_idx, uintptr_t ra)
124
+ hi = (int32_t)(tb_diff - lo);
88
+{
125
+ assert(tb_diff == hi + lo);
89
+ return cpu_ldsw_data_ra(env, addr, ra);
126
+ i0 = ADDIS | TAI(TCG_REG_TB, TCG_REG_TB, hi >> 16);
90
+}
127
+ i1 = ADDI | TAI(TCG_REG_TB, TCG_REG_TB, lo);
91
+
128
+
92
+static inline void cpu_stb_mmuidx_ra(CPUArchState *env, abi_ptr addr,
129
+ /*
93
+ uint32_t val, int mmu_idx, uintptr_t ra)
130
+ * Without stq from 2.07, we can only update two insns,
94
+{
131
+ * and those must be the ones that load the target address.
95
+ cpu_stb_data_ra(env, addr, val, ra);
132
+ */
96
+}
133
+ if (!have_isa_2_07) {
134
+ ppc64_replace2(jmp_rx, jmp_rw, i0, i1);
135
+ return;
136
+ }
97
+
137
+
98
+static inline void cpu_stw_mmuidx_ra(CPUArchState *env, abi_ptr addr,
138
+ /*
99
+ uint32_t val, int mmu_idx, uintptr_t ra)
139
+ * For 26-bit displacements, we can use a direct branch.
100
+{
140
+ * Otherwise we still need the indirect branch, which we
101
+ cpu_stw_data_ra(env, addr, val, ra);
141
+ * must restore after a potential direct branch write.
102
+}
142
+ */
103
+
143
+ br_diff -= 4;
104
+static inline void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr,
144
+ if (in_range_b(br_diff)) {
105
+ uint32_t val, int mmu_idx, uintptr_t ra)
145
+ i2 = B | (br_diff & 0x3fffffc);
106
+{
146
+ i3 = NOP;
107
+ cpu_stl_data_ra(env, addr, val, ra);
147
+ } else {
108
+}
148
+ i2 = MTSPR | RS(TCG_REG_TB) | CTR;
109
+
149
+ i3 = BCCTR | BO_ALWAYS;
110
+static inline void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr,
150
+ }
111
+ uint64_t val, int mmu_idx, uintptr_t ra)
151
+ ppc64_replace4(jmp_rx, jmp_rw, i0, i1, i2, i3);
112
+{
152
}
113
+ cpu_stq_data_ra(env, addr, val, ra);
153
114
+}
154
static void tcg_out_call_int(TCGContext *s, int lk,
115
+
155
@@ -XXX,XX +XXX,XX @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
116
#else
156
if (s->tb_jmp_insn_offset) {
117
157
/* Direct jump. */
118
/* Needed for TCG_OVERSIZED_GUEST */
158
if (TCG_TARGET_REG_BITS == 64) {
119
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
159
- /* Ensure the next insns are 8-byte aligned. */
120
index XXXXXXX..XXXXXXX 100644
160
- if ((uintptr_t)s->code_ptr & 7) {
121
--- a/docs/devel/loads-stores.rst
161
+ /* Ensure the next insns are 8 or 16-byte aligned. */
122
+++ b/docs/devel/loads-stores.rst
162
+ while ((uintptr_t)s->code_ptr & (have_isa_2_07 ? 15 : 7)) {
123
@@ -XXX,XX +XXX,XX @@ Regexes for git grep
163
tcg_out32(s, NOP);
124
- ``\<ldn_\([hbl]e\)?_p\>``
164
}
125
- ``\<stn_\([hbl]e\)?_p\>``
165
s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
126
127
-``cpu_{ld,st}_*``
128
-~~~~~~~~~~~~~~~~~
129
+``cpu_{ld,st}*_mmuidx_ra``
130
+~~~~~~~~~~~~~~~~~~~~~~~~~~
131
132
-These functions operate on a guest virtual address. Be aware
133
-that these functions may cause a guest CPU exception to be
134
-taken (e.g. for an alignment fault or MMU fault) which will
135
-result in guest CPU state being updated and control longjumping
136
-out of the function call. They should therefore only be used
137
-in code that is implementing emulation of the target CPU.
138
+These functions operate on a guest virtual address plus a context,
139
+known as a "mmu index" or ``mmuidx``, which controls how that virtual
140
+address is translated. The meaning of the indexes is target specific,
141
+but specifying a particular index might be necessary if, for instance,
142
+the helper requires an "always as non-privileged" access rather than
143
+the default access for the current state of the guest CPU.
144
145
-These functions may throw an exception (longjmp() back out
146
-to the top level TCG loop). This means they must only be used
147
-from helper functions where the translator has saved all
148
-necessary CPU state before generating the helper function call.
149
-It's usually better to use the ``_ra`` variants described below
150
-from helper functions, but these functions are the right choice
151
-for calls made from hooks like the CPU do_interrupt hook or
152
-when you know for certain that the translator had to save all
153
-the CPU state that ``cpu_restore_state()`` would restore anyway.
154
+These functions may cause a guest CPU exception to be taken
155
+(e.g. for an alignment fault or MMU fault) which will result in
156
+guest CPU state being updated and control longjmp'ing out of the
157
+function call. They should therefore only be used in code that is
158
+implementing emulation of the guest CPU.
159
+
160
+The ``retaddr`` parameter is used to control unwinding of the
161
+guest CPU state in case of a guest CPU exception. This is passed
162
+to ``cpu_restore_state()``. Therefore the value should either be 0,
163
+to indicate that the guest CPU state is already synchronized, or
164
+the result of ``GETPC()`` from the top level ``HELPER(foo)``
165
+function, which is a return address into the generated code.
166
167
Function names follow the pattern:
168
169
-load: ``cpu_ld{sign}{size}_{mmusuffix}(env, ptr)``
170
+load: ``cpu_ld{sign}{size}_mmuidx_ra(env, ptr, mmuidx, retaddr)``
171
172
-store: ``cpu_st{size}_{mmusuffix}(env, ptr, val)``
173
+store: ``cpu_st{size}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``
174
175
``sign``
176
- (empty) : for 32 or 64 bit sizes
177
@@ -XXX,XX +XXX,XX @@ store: ``cpu_st{size}_{mmusuffix}(env, ptr, val)``
178
- ``l`` : 32 bits
179
- ``q`` : 64 bits
180
181
-``mmusuffix`` is one of the generic suffixes ``data`` or ``code``, or
182
-(for softmmu configs) a target-specific MMU mode suffix as defined
183
-in the target's ``cpu.h``.
184
+Regexes for git grep:
185
+ - ``\<cpu_ld[us]\?[bwlq]_mmuidx_ra\>``
186
+ - ``\<cpu_st[bwlq]_mmuidx_ra\>``
187
188
-Regexes for git grep
189
- - ``\<cpu_ld[us]\?[bwlq]_[a-zA-Z0-9]\+\>``
190
- - ``\<cpu_st[bwlq]_[a-zA-Z0-9]\+\>``
191
+``cpu_{ld,st}*_data_ra``
192
+~~~~~~~~~~~~~~~~~~~~~~~~
193
194
-``cpu_{ld,st}_*_ra``
195
-~~~~~~~~~~~~~~~~~~~~
196
-
197
-These functions work like the ``cpu_{ld,st}_*`` functions except
198
-that they also take a ``retaddr`` argument. This extra argument
199
-allows for correct unwinding of any exception that is taken,
200
-and should generally be the result of GETPC() called directly
201
-from the top level HELPER(foo) function (i.e. the return address
202
-in the generated code).
203
+These functions work like the ``cpu_{ld,st}_mmuidx_ra`` functions
204
+except that the ``mmuidx`` parameter is taken from the current mode
205
+of the guest CPU, as determined by ``cpu_mmu_index(env, false)``.
206
207
These are generally the preferred way to do accesses by guest
208
-virtual address from helper functions; see the documentation
209
-of the non-``_ra`` variants for when those would be better.
210
-
211
-Calling these functions with a ``retaddr`` argument of 0 is
212
-equivalent to calling the non-``_ra`` version of the function.
213
+virtual address from helper functions, unless the access should
214
+be performed with a context other than the default.
215
216
Function names follow the pattern:
217
218
-load: ``cpu_ld{sign}{size}_{mmusuffix}_ra(env, ptr, retaddr)``
219
+load: ``cpu_ld{sign}{size}_data_ra(env, ptr, ra)``
220
221
-store: ``cpu_st{sign}{size}_{mmusuffix}_ra(env, ptr, val, retaddr)``
222
+store: ``cpu_st{size}_data_ra(env, ptr, val, ra)``
223
+
224
+``sign``
225
+ - (empty) : for 32 or 64 bit sizes
226
+ - ``u`` : unsigned
227
+ - ``s`` : signed
228
+
229
+``size``
230
+ - ``b`` : 8 bits
231
+ - ``w`` : 16 bits
232
+ - ``l`` : 32 bits
233
+ - ``q`` : 64 bits
234
+
235
+Regexes for git grep:
236
+ - ``\<cpu_ld[us]\?[bwlq]_data_ra\>``
237
+ - ``\<cpu_st[bwlq]_data_ra\>``
238
+
239
+``cpu_{ld,st}*_data``
240
+~~~~~~~~~~~~~~~~~~~~~
241
+
242
+These functions work like the ``cpu_{ld,st}_data_ra`` functions
243
+except that the ``retaddr`` parameter is 0, and thus does not
244
+unwind guest CPU state.
245
+
246
+This means they must only be used from helper functions where the
247
+translator has saved all necessary CPU state. These functions are
248
+the right choice for calls made from hooks like the CPU ``do_interrupt``
249
+hook or when you know for certain that the translator had to save all
250
+the CPU state anyway.
251
+
252
+Function names follow the pattern:
253
+
254
+load: ``cpu_ld{sign}{size}_data(env, ptr)``
255
+
256
+store: ``cpu_st{size}_data(env, ptr, val)``
257
+
258
+``sign``
259
+ - (empty) : for 32 or 64 bit sizes
260
+ - ``u`` : unsigned
261
+ - ``s`` : signed
262
+
263
+``size``
264
+ - ``b`` : 8 bits
265
+ - ``w`` : 16 bits
266
+ - ``l`` : 32 bits
267
+ - ``q`` : 64 bits
268
269
Regexes for git grep
270
- - ``\<cpu_ld[us]\?[bwlq]_[a-zA-Z0-9]\+_ra\>``
271
- - ``\<cpu_st[bwlq]_[a-zA-Z0-9]\+_ra\>``
272
+ - ``\<cpu_ld[us]\?[bwlq]_data\>``
273
+ - ``\<cpu_st[bwlq]_data\+\>``
274
275
-``helper_*_{ld,st}*mmu``
276
-~~~~~~~~~~~~~~~~~~~~~~~~
277
+``cpu_ld*_code``
278
+~~~~~~~~~~~~~~~~
279
+
280
+These functions perform a read for instruction execution. The ``mmuidx``
281
+parameter is taken from the current mode of the guest CPU, as determined
282
+by ``cpu_mmu_index(env, true)``. The ``retaddr`` parameter is 0, and
283
+thus does not unwind guest CPU state, because CPU state is always
284
+synchronized while translating instructions. Any guest CPU exception
285
+that is raised will indicate an instruction execution fault rather than
286
+a data read fault.
287
+
288
+In general these functions should not be used directly during translation.
289
+There are wrapper functions that should be used instead, which also
290
+take care of calling the tracing plugins.
291
+
292
+Function names follow the pattern:
293
+
294
+load: ``cpu_ld{sign}{size}_code(env, ptr)``
295
+
296
+``sign``
297
+ - (empty) : for 32 or 64 bit sizes
298
+ - ``u`` : unsigned
299
+ - ``s`` : signed
300
+
301
+``size``
302
+ - ``b`` : 8 bits
303
+ - ``w`` : 16 bits
304
+ - ``l`` : 32 bits
305
+ - ``q`` : 64 bits
306
+
307
+Regexes for git grep:
308
+ - ``\<cpu_ld[us]\?[bwlq]_code\>``
309
+
310
+``translator_ld*``
311
+~~~~~~~~~~~~~~~~~~
312
+
313
+These functions are a wrapper for ``cpu_ld*_code`` which also perform
314
+any actions required by any tracing plugins. They are only to be
315
+called during the translator callback ``translate_insn``.
316
+
317
+There is a set of functions ending in ``_swap`` which, if the parameter
318
+is true, returns the value in the endianness that is the reverse of
319
+the guest native endianness, as determined by ``TARGET_WORDS_BIGENDIAN``.
320
+
321
+Function names follow the pattern:
322
+
323
+load: ``translator_ld{sign}{size}(env, ptr)``
324
+
325
+swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``
326
+
327
+``sign``
328
+ - (empty) : for 32 or 64 bit sizes
329
+ - ``u`` : unsigned
330
+ - ``s`` : signed
331
+
332
+``size``
333
+ - ``b`` : 8 bits
334
+ - ``w`` : 16 bits
335
+ - ``l`` : 32 bits
336
+ - ``q`` : 64 bits
337
+
338
+Regexes for git grep
339
+ - ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
340
+
341
+``helper_*_{ld,st}*_mmu``
342
+~~~~~~~~~~~~~~~~~~~~~~~~~
343
344
These functions are intended primarily to be called by the code
345
generated by the TCG backend. They may also be called by target
346
-CPU helper function code. Like the ``cpu_{ld,st}_*_ra`` functions
347
-they perform accesses by guest virtual address; the difference is
348
-that these functions allow you to specify an ``opindex`` parameter
349
-which encodes (among other things) the mmu index to use for the
350
-access. This is necessary if your helper needs to make an access
351
-via a specific mmu index (for instance, an "always as non-privileged"
352
-access) rather than using the default mmu index for the current state
353
-of the guest CPU.
354
+CPU helper function code. Like the ``cpu_{ld,st}_mmuidx_ra`` functions
355
+they perform accesses by guest virtual address, with a given ``mmuidx``.
356
357
-The ``opindex`` parameter should be created by calling ``make_memop_idx()``.
358
+These functions specify an ``opindex`` parameter which encodes
359
+(among other things) the mmu index to use for the access. This parameter
360
+should be created by calling ``make_memop_idx()``.
361
362
The ``retaddr`` parameter should be the result of GETPC() called directly
363
from the top level HELPER(foo) function (or 0 if no guest CPU state
364
@@ -XXX,XX +XXX,XX @@ unwinding is required).
365
366
**TODO** The names of these functions are a bit odd for historical
367
reasons because they were originally expected to be called only from
368
-within generated code. We should rename them to bring them
369
-more in line with the other memory access functions.
370
+within generated code. We should rename them to bring them more in
371
+line with the other memory access functions. The explicit endianness
372
+is the only feature they have beyond ``*_mmuidx_ra``.
373
374
load: ``helper_{endian}_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``
375
376
--
166
--
377
2.20.1
167
2.34.1
378
379
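As a usage sketch of the interface documented above (the helper name and mmu index value are invented for the example, and CPUArchState stands in for the target's concrete CPU state type; only the cpu_ldl_mmuidx_ra/GETPC pattern comes from the documentation):

    /* Hypothetical target helper doing a load with an explicit mmu index. */
    uint32_t HELPER(ldl_as_kernel)(CPUArchState *env, target_ulong addr)
    {
        /*
         * GETPC() must be taken in the top-level HELPER() function itself,
         * so that a faulting access can unwind guest state correctly.
         */
        uintptr_t ra = GETPC();
        int mmu_idx = 0;    /* e.g. this target's kernel-mode index */

        return cpu_ldl_mmuidx_ra(env, addr, mmu_idx, ra);
    }

Passing 0 instead of GETPC() would assert that guest CPU state is already synchronized, per the retaddr rules described above.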
Deleted patch
1
Do not use exec/cpu_ldst_{,useronly_}template.h directly,
2
but instead use the functional interface.
3
1
4
Cc: Eduardo Habkost <ehabkost@redhat.com>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
10
target/i386/seg_helper.c | 56 ++++++++++++++++++++--------------------
11
1 file changed, 28 insertions(+), 28 deletions(-)
12
13
diff --git a/target/i386/seg_helper.c b/target/i386/seg_helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/i386/seg_helper.c
16
+++ b/target/i386/seg_helper.c
17
@@ -XXX,XX +XXX,XX @@
18
# define LOG_PCALL_STATE(cpu) do { } while (0)
19
#endif
20
21
-#ifdef CONFIG_USER_ONLY
22
-#define MEMSUFFIX _kernel
23
-#define DATA_SIZE 1
24
-#include "exec/cpu_ldst_useronly_template.h"
25
+/*
26
+ * TODO: Convert callers to compute cpu_mmu_index_kernel once
27
+ * and use *_mmuidx_ra directly.
28
+ */
29
+#define cpu_ldub_kernel_ra(e, p, r) \
30
+ cpu_ldub_mmuidx_ra(e, p, cpu_mmu_index_kernel(e), r)
31
+#define cpu_lduw_kernel_ra(e, p, r) \
32
+ cpu_lduw_mmuidx_ra(e, p, cpu_mmu_index_kernel(e), r)
33
+#define cpu_ldl_kernel_ra(e, p, r) \
34
+ cpu_ldl_mmuidx_ra(e, p, cpu_mmu_index_kernel(e), r)
35
+#define cpu_ldq_kernel_ra(e, p, r) \
36
+ cpu_ldq_mmuidx_ra(e, p, cpu_mmu_index_kernel(e), r)
37
38
-#define DATA_SIZE 2
39
-#include "exec/cpu_ldst_useronly_template.h"
40
+#define cpu_stb_kernel_ra(e, p, v, r) \
41
+ cpu_stb_mmuidx_ra(e, p, v, cpu_mmu_index_kernel(e), r)
42
+#define cpu_stw_kernel_ra(e, p, v, r) \
43
+ cpu_stw_mmuidx_ra(e, p, v, cpu_mmu_index_kernel(e), r)
44
+#define cpu_stl_kernel_ra(e, p, v, r) \
45
+ cpu_stl_mmuidx_ra(e, p, v, cpu_mmu_index_kernel(e), r)
46
+#define cpu_stq_kernel_ra(e, p, v, r) \
47
+ cpu_stq_mmuidx_ra(e, p, v, cpu_mmu_index_kernel(e), r)
48
49
-#define DATA_SIZE 4
50
-#include "exec/cpu_ldst_useronly_template.h"
51
+#define cpu_ldub_kernel(e, p) cpu_ldub_kernel_ra(e, p, 0)
52
+#define cpu_lduw_kernel(e, p) cpu_lduw_kernel_ra(e, p, 0)
53
+#define cpu_ldl_kernel(e, p) cpu_ldl_kernel_ra(e, p, 0)
54
+#define cpu_ldq_kernel(e, p) cpu_ldq_kernel_ra(e, p, 0)
55
56
-#define DATA_SIZE 8
57
-#include "exec/cpu_ldst_useronly_template.h"
58
-#undef MEMSUFFIX
59
-#else
60
-#define CPU_MMU_INDEX (cpu_mmu_index_kernel(env))
61
-#define MEMSUFFIX _kernel
62
-#define DATA_SIZE 1
63
-#include "exec/cpu_ldst_template.h"
64
-
65
-#define DATA_SIZE 2
66
-#include "exec/cpu_ldst_template.h"
67
-
68
-#define DATA_SIZE 4
69
-#include "exec/cpu_ldst_template.h"
70
-
71
-#define DATA_SIZE 8
72
-#include "exec/cpu_ldst_template.h"
73
-#undef CPU_MMU_INDEX
74
-#undef MEMSUFFIX
75
-#endif
76
+#define cpu_stb_kernel(e, p, v) cpu_stb_kernel_ra(e, p, v, 0)
77
+#define cpu_stw_kernel(e, p, v) cpu_stw_kernel_ra(e, p, v, 0)
78
+#define cpu_stl_kernel(e, p, v) cpu_stl_kernel_ra(e, p, v, 0)
79
+#define cpu_stq_kernel(e, p, v) cpu_stq_kernel_ra(e, p, v, 0)
80
81
/* return non zero if error */
82
static inline int load_segment_ra(CPUX86State *env, uint32_t *e1_ptr,
83
--
84
2.20.1
85
86
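The TODO in the hunk above suggests a follow-up conversion; a hedged sketch of what it might look like for a helper that makes several kernel-mode accesses (the function name is invented, but cpu_mmu_index_kernel() and DESC_A_MASK come from target/i386):

    static void set_accessed_bit(CPUX86State *env, target_ulong ptr,
                                 uintptr_t ra)
    {
        /* Compute the kernel mmu index once, rather than once per access. */
        int mmu_idx = cpu_mmu_index_kernel(env);
        uint32_t e2 = cpu_ldl_mmuidx_ra(env, ptr + 4, mmu_idx, ra);

        if (!(e2 & DESC_A_MASK)) {
            cpu_stl_mmuidx_ra(env, ptr + 4, e2 | DESC_A_MASK, mmu_idx, ra);
        }
    }

Each cpu_*_kernel_ra macro expansion above re-evaluates cpu_mmu_index_kernel(env); hoisting it as sketched removes that repeated work.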
Deleted patch
1
The functions generated by these macros are unused.
2
1
3
Cc: Chris Wulff <crwulff@gmail.com>
4
Cc: Marek Vasut <marex@denx.de>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
target/nios2/cpu.h | 2 --
10
1 file changed, 2 deletions(-)
11
12
diff --git a/target/nios2/cpu.h b/target/nios2/cpu.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/nios2/cpu.h
15
+++ b/target/nios2/cpu.h
16
@@ -XXX,XX +XXX,XX @@ void do_nios2_semihosting(CPUNios2State *env);
17
#define CPU_SAVE_VERSION 1
18
19
/* MMU modes definitions */
20
-#define MMU_MODE0_SUFFIX _kernel
21
-#define MMU_MODE1_SUFFIX _user
22
#define MMU_SUPERVISOR_IDX 0
23
#define MMU_USER_IDX 1
24
25
--
26
2.20.1
27
28
Deleted patch
1
The functions generated by these macros are unused.
2
1
3
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
---
7
target/alpha/cpu.h | 2 --
8
1 file changed, 2 deletions(-)
9
10
diff --git a/target/alpha/cpu.h b/target/alpha/cpu.h
11
index XXXXXXX..XXXXXXX 100644
12
--- a/target/alpha/cpu.h
13
+++ b/target/alpha/cpu.h
14
@@ -XXX,XX +XXX,XX @@ enum {
15
PALcode cheats and uses the KSEG mapping for its code+data rather than
16
physical addresses. */
17
18
-#define MMU_MODE0_SUFFIX _kernel
19
-#define MMU_MODE1_SUFFIX _user
20
#define MMU_KERNEL_IDX 0
21
#define MMU_USER_IDX 1
22
#define MMU_PHYS_IDX 2
23
--
24
2.20.1
25
26
Deleted patch
1
The functions generated by these macros are unused.
2
1
3
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
target/cris/cpu.h | 2 --
9
1 file changed, 2 deletions(-)
10
11
diff --git a/target/cris/cpu.h b/target/cris/cpu.h
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/cris/cpu.h
14
+++ b/target/cris/cpu.h
15
@@ -XXX,XX +XXX,XX @@ enum {
16
#define cpu_signal_handler cpu_cris_signal_handler
17
18
/* MMU modes definitions */
19
-#define MMU_MODE0_SUFFIX _kernel
20
-#define MMU_MODE1_SUFFIX _user
21
#define MMU_USER_IDX 1
22
static inline int cpu_mmu_index (CPUCRISState *env, bool ifetch)
23
{
24
--
25
2.20.1
26
27
Deleted patch
1
The functions generated by these macros are unused.
2
1
3
Cc: Eduardo Habkost <ehabkost@redhat.com>
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
target/i386/cpu.h | 3 ---
10
1 file changed, 3 deletions(-)
11
12
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/i386/cpu.h
15
+++ b/target/i386/cpu.h
16
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_get_tsc(CPUX86State *env);
17
#define cpu_list x86_cpu_list
18
19
/* MMU modes definitions */
20
-#define MMU_MODE0_SUFFIX _ksmap
21
-#define MMU_MODE1_SUFFIX _user
22
-#define MMU_MODE2_SUFFIX _knosmap /* SMAP disabled or CPL<3 && AC=1 */
23
#define MMU_KSMAP_IDX 0
24
#define MMU_USER_IDX 1
25
#define MMU_KNOSMAP_IDX 2
26
--
27
2.20.1
28
29
Deleted patch
1
The functions generated by these macros are unused.
2
1
3
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
target/microblaze/cpu.h | 3 ---
9
1 file changed, 3 deletions(-)
10
11
diff --git a/target/microblaze/cpu.h b/target/microblaze/cpu.h
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/microblaze/cpu.h
14
+++ b/target/microblaze/cpu.h
15
@@ -XXX,XX +XXX,XX @@ int cpu_mb_signal_handler(int host_signum, void *pinfo,
16
#define cpu_signal_handler cpu_mb_signal_handler
17
18
/* MMU modes definitions */
19
-#define MMU_MODE0_SUFFIX _nommu
20
-#define MMU_MODE1_SUFFIX _kernel
21
-#define MMU_MODE2_SUFFIX _user
22
#define MMU_NOMMU_IDX 0
23
#define MMU_KERNEL_IDX 1
24
#define MMU_USER_IDX 2
25
--
26
2.20.1
27
28
1
The functions generated by these macros are unused.
1
The value previously chosen overlaps GUSA_MASK.
2
2
3
Cc: Aurelien Jarno <aurelien@aurel32.net>
3
Rename all DELAY_SLOT_* and GUSA_* defines to emphasize
4
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
that they are included in TB_FLAGs. Add aliases for the
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
FPSCR and SR bits that are included in TB_FLAGS, so that
6
we don't accidentally reassign those bits.
7
8
Fixes: 4da06fb3062 ("target/sh4: Implement prctl_unalign_sigbus")
9
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/856
10
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
12
---
8
target/sh4/cpu.h | 2 --
13
target/sh4/cpu.h | 56 +++++++++++++------------
9
1 file changed, 2 deletions(-)
14
linux-user/sh4/signal.c | 6 +--
15
target/sh4/cpu.c | 6 +--
16
target/sh4/helper.c | 6 +--
17
target/sh4/translate.c | 90 ++++++++++++++++++++++-------------------
18
5 files changed, 88 insertions(+), 76 deletions(-)
10
19
11
diff --git a/target/sh4/cpu.h b/target/sh4/cpu.h
20
diff --git a/target/sh4/cpu.h b/target/sh4/cpu.h
12
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
13
--- a/target/sh4/cpu.h
22
--- a/target/sh4/cpu.h
14
+++ b/target/sh4/cpu.h
23
+++ b/target/sh4/cpu.h
15
@@ -XXX,XX +XXX,XX @@ void cpu_load_tlb(CPUSH4State * env);
24
@@ -XXX,XX +XXX,XX @@
16
#define cpu_list sh4_cpu_list
25
#define FPSCR_RM_NEAREST (0 << 0)
17
26
#define FPSCR_RM_ZERO (1 << 0)
18
/* MMU modes definitions */
27
19
-#define MMU_MODE0_SUFFIX _kernel
28
-#define DELAY_SLOT_MASK 0x7
20
-#define MMU_MODE1_SUFFIX _user
29
-#define DELAY_SLOT (1 << 0)
21
#define MMU_USER_IDX 1
30
-#define DELAY_SLOT_CONDITIONAL (1 << 1)
22
static inline int cpu_mmu_index (CPUSH4State *env, bool ifetch)
31
-#define DELAY_SLOT_RTE (1 << 2)
32
+#define TB_FLAG_DELAY_SLOT (1 << 0)
33
+#define TB_FLAG_DELAY_SLOT_COND (1 << 1)
34
+#define TB_FLAG_DELAY_SLOT_RTE (1 << 2)
35
+#define TB_FLAG_PENDING_MOVCA (1 << 3)
36
+#define TB_FLAG_GUSA_SHIFT 4 /* [11:4] */
37
+#define TB_FLAG_GUSA_EXCLUSIVE (1 << 12)
38
+#define TB_FLAG_UNALIGN (1 << 13)
39
+#define TB_FLAG_SR_FD (1 << SR_FD) /* 15 */
40
+#define TB_FLAG_FPSCR_PR FPSCR_PR /* 19 */
41
+#define TB_FLAG_FPSCR_SZ FPSCR_SZ /* 20 */
42
+#define TB_FLAG_FPSCR_FR FPSCR_FR /* 21 */
43
+#define TB_FLAG_SR_RB (1 << SR_RB) /* 29 */
44
+#define TB_FLAG_SR_MD (1 << SR_MD) /* 30 */
45
46
-#define TB_FLAG_PENDING_MOVCA (1 << 3)
47
-#define TB_FLAG_UNALIGN (1 << 4)
48
-
49
-#define GUSA_SHIFT 4
50
-#ifdef CONFIG_USER_ONLY
51
-#define GUSA_EXCLUSIVE (1 << 12)
52
-#define GUSA_MASK ((0xff << GUSA_SHIFT) | GUSA_EXCLUSIVE)
53
-#else
54
-/* Provide dummy versions of the above to allow tests against tbflags
55
- to be elided while avoiding ifdefs. */
56
-#define GUSA_EXCLUSIVE 0
57
-#define GUSA_MASK 0
58
-#endif
59
-
60
-#define TB_FLAG_ENVFLAGS_MASK (DELAY_SLOT_MASK | GUSA_MASK)
61
+#define TB_FLAG_DELAY_SLOT_MASK (TB_FLAG_DELAY_SLOT | \
62
+ TB_FLAG_DELAY_SLOT_COND | \
63
+ TB_FLAG_DELAY_SLOT_RTE)
64
+#define TB_FLAG_GUSA_MASK ((0xff << TB_FLAG_GUSA_SHIFT) | \
65
+ TB_FLAG_GUSA_EXCLUSIVE)
66
+#define TB_FLAG_FPSCR_MASK (TB_FLAG_FPSCR_PR | \
67
+ TB_FLAG_FPSCR_SZ | \
68
+ TB_FLAG_FPSCR_FR)
69
+#define TB_FLAG_SR_MASK (TB_FLAG_SR_FD | \
70
+ TB_FLAG_SR_RB | \
71
+ TB_FLAG_SR_MD)
72
+#define TB_FLAG_ENVFLAGS_MASK (TB_FLAG_DELAY_SLOT_MASK | \
73
+ TB_FLAG_GUSA_MASK)
74
75
typedef struct tlb_t {
76
uint32_t vpn;        /* virtual page number */
77
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index (CPUSH4State *env, bool ifetch)
23
{
78
{
79
/* The instruction in a RTE delay slot is fetched in privileged
80
mode, but executed in user mode. */
81
- if (ifetch && (env->flags & DELAY_SLOT_RTE)) {
82
+ if (ifetch && (env->flags & TB_FLAG_DELAY_SLOT_RTE)) {
83
return 0;
84
} else {
85
return (env->sr & (1u << SR_MD)) == 0 ? 1 : 0;
86
@@ -XXX,XX +XXX,XX @@ static inline void cpu_get_tb_cpu_state(CPUSH4State *env, target_ulong *pc,
87
{
88
*pc = env->pc;
89
/* For a gUSA region, notice the end of the region. */
90
- *cs_base = env->flags & GUSA_MASK ? env->gregs[0] : 0;
91
- *flags = env->flags /* TB_FLAG_ENVFLAGS_MASK: bits 0-2, 4-12 */
92
- | (env->fpscr & (FPSCR_FR | FPSCR_SZ | FPSCR_PR)) /* Bits 19-21 */
93
- | (env->sr & ((1u << SR_MD) | (1u << SR_RB))) /* Bits 29-30 */
94
- | (env->sr & (1u << SR_FD)) /* Bit 15 */
95
+ *cs_base = env->flags & TB_FLAG_GUSA_MASK ? env->gregs[0] : 0;
96
+ *flags = env->flags
97
+ | (env->fpscr & TB_FLAG_FPSCR_MASK)
98
+ | (env->sr & TB_FLAG_SR_MASK)
99
| (env->movcal_backup ? TB_FLAG_PENDING_MOVCA : 0); /* Bit 3 */
100
#ifdef CONFIG_USER_ONLY
101
*flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
102
diff --git a/linux-user/sh4/signal.c b/linux-user/sh4/signal.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/linux-user/sh4/signal.c
105
+++ b/linux-user/sh4/signal.c
106
@@ -XXX,XX +XXX,XX @@ static void restore_sigcontext(CPUSH4State *regs, struct target_sigcontext *sc)
107
__get_user(regs->fpul, &sc->sc_fpul);
108
109
regs->tra = -1; /* disable syscall checks */
110
- regs->flags &= ~(DELAY_SLOT_MASK | GUSA_MASK);
111
+ regs->flags = 0;
112
}
113
114
void setup_frame(int sig, struct target_sigaction *ka,
115
@@ -XXX,XX +XXX,XX @@ void setup_frame(int sig, struct target_sigaction *ka,
116
regs->gregs[5] = 0;
117
regs->gregs[6] = frame_addr += offsetof(typeof(*frame), sc);
118
regs->pc = (unsigned long) ka->_sa_handler;
119
- regs->flags &= ~(DELAY_SLOT_MASK | GUSA_MASK);
120
+ regs->flags &= ~(TB_FLAG_DELAY_SLOT_MASK | TB_FLAG_GUSA_MASK);
121
122
unlock_user_struct(frame, frame_addr, 1);
123
return;
124
@@ -XXX,XX +XXX,XX @@ void setup_rt_frame(int sig, struct target_sigaction *ka,
125
regs->gregs[5] = frame_addr + offsetof(typeof(*frame), info);
126
regs->gregs[6] = frame_addr + offsetof(typeof(*frame), uc);
127
regs->pc = (unsigned long) ka->_sa_handler;
128
- regs->flags &= ~(DELAY_SLOT_MASK | GUSA_MASK);
129
+ regs->flags &= ~(TB_FLAG_DELAY_SLOT_MASK | TB_FLAG_GUSA_MASK);
130
131
unlock_user_struct(frame, frame_addr, 1);
132
return;
133
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/target/sh4/cpu.c
136
+++ b/target/sh4/cpu.c
137
@@ -XXX,XX +XXX,XX @@ static void superh_cpu_synchronize_from_tb(CPUState *cs,
138
SuperHCPU *cpu = SUPERH_CPU(cs);
139
140
cpu->env.pc = tb_pc(tb);
141
- cpu->env.flags = tb->flags & TB_FLAG_ENVFLAGS_MASK;
142
+ cpu->env.flags = tb->flags;
143
}
144
145
#ifndef CONFIG_USER_ONLY
146
@@ -XXX,XX +XXX,XX @@ static bool superh_io_recompile_replay_branch(CPUState *cs,
147
SuperHCPU *cpu = SUPERH_CPU(cs);
148
CPUSH4State *env = &cpu->env;
149
150
- if ((env->flags & ((DELAY_SLOT | DELAY_SLOT_CONDITIONAL))) != 0
151
+ if ((env->flags & (TB_FLAG_DELAY_SLOT | TB_FLAG_DELAY_SLOT_COND))
152
&& env->pc != tb_pc(tb)) {
153
env->pc -= 2;
154
- env->flags &= ~(DELAY_SLOT | DELAY_SLOT_CONDITIONAL);
155
+ env->flags &= ~(TB_FLAG_DELAY_SLOT | TB_FLAG_DELAY_SLOT_COND);
156
return true;
157
}
158
return false;
159
diff --git a/target/sh4/helper.c b/target/sh4/helper.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/target/sh4/helper.c
162
+++ b/target/sh4/helper.c
163
@@ -XXX,XX +XXX,XX @@ void superh_cpu_do_interrupt(CPUState *cs)
164
env->sr |= (1u << SR_BL) | (1u << SR_MD) | (1u << SR_RB);
165
env->lock_addr = -1;
166
167
- if (env->flags & DELAY_SLOT_MASK) {
168
+ if (env->flags & TB_FLAG_DELAY_SLOT_MASK) {
169
/* Branch instruction should be executed again before delay slot. */
170
    env->spc -= 2;
171
    /* Clear flags for exception/interrupt routine. */
172
- env->flags &= ~DELAY_SLOT_MASK;
173
+ env->flags &= ~TB_FLAG_DELAY_SLOT_MASK;
174
}
175
176
if (do_exp) {
177
@@ -XXX,XX +XXX,XX @@ bool superh_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
178
CPUSH4State *env = &cpu->env;
179
180
/* Delay slots are indivisible, ignore interrupts */
181
- if (env->flags & DELAY_SLOT_MASK) {
182
+ if (env->flags & TB_FLAG_DELAY_SLOT_MASK) {
183
return false;
184
} else {
185
superh_cpu_do_interrupt(cs);
186
diff --git a/target/sh4/translate.c b/target/sh4/translate.c
187
index XXXXXXX..XXXXXXX 100644
188
--- a/target/sh4/translate.c
189
+++ b/target/sh4/translate.c
190
@@ -XXX,XX +XXX,XX @@ void superh_cpu_dump_state(CPUState *cs, FILE *f, int flags)
191
         i, env->gregs[i], i + 1, env->gregs[i + 1],
192
         i + 2, env->gregs[i + 2], i + 3, env->gregs[i + 3]);
193
}
194
- if (env->flags & DELAY_SLOT) {
195
+ if (env->flags & TB_FLAG_DELAY_SLOT) {
196
qemu_printf("in delay slot (delayed_pc=0x%08x)\n",
197
         env->delayed_pc);
198
- } else if (env->flags & DELAY_SLOT_CONDITIONAL) {
199
+ } else if (env->flags & TB_FLAG_DELAY_SLOT_COND) {
200
qemu_printf("in conditional delay slot (delayed_pc=0x%08x)\n",
201
         env->delayed_pc);
202
- } else if (env->flags & DELAY_SLOT_RTE) {
203
+ } else if (env->flags & TB_FLAG_DELAY_SLOT_RTE) {
204
qemu_fprintf(f, "in rte delay slot (delayed_pc=0x%08x)\n",
205
env->delayed_pc);
206
}
207
@@ -XXX,XX +XXX,XX @@ static inline void gen_save_cpu_state(DisasContext *ctx, bool save_pc)
208
209
static inline bool use_exit_tb(DisasContext *ctx)
210
{
211
- return (ctx->tbflags & GUSA_EXCLUSIVE) != 0;
212
+ return (ctx->tbflags & TB_FLAG_GUSA_EXCLUSIVE) != 0;
213
}
214
215
static bool use_goto_tb(DisasContext *ctx, target_ulong dest)
216
@@ -XXX,XX +XXX,XX @@ static void gen_conditional_jump(DisasContext *ctx, target_ulong dest,
217
TCGLabel *l1 = gen_new_label();
218
TCGCond cond_not_taken = jump_if_true ? TCG_COND_EQ : TCG_COND_NE;
219
220
- if (ctx->tbflags & GUSA_EXCLUSIVE) {
221
+ if (ctx->tbflags & TB_FLAG_GUSA_EXCLUSIVE) {
222
/* When in an exclusive region, we must continue to the end.
223
Therefore, exit the region on a taken branch, but otherwise
224
fall through to the next instruction. */
225
tcg_gen_brcondi_i32(cond_not_taken, cpu_sr_t, 0, l1);
226
- tcg_gen_movi_i32(cpu_flags, ctx->envflags & ~GUSA_MASK);
227
+ tcg_gen_movi_i32(cpu_flags, ctx->envflags & ~TB_FLAG_GUSA_MASK);
228
/* Note that this won't actually use a goto_tb opcode because we
229
disallow it in use_goto_tb, but it handles exit + singlestep. */
230
gen_goto_tb(ctx, 0, dest);
231
@@ -XXX,XX +XXX,XX @@ static void gen_delayed_conditional_jump(DisasContext * ctx)
232
tcg_gen_mov_i32(ds, cpu_delayed_cond);
233
tcg_gen_discard_i32(cpu_delayed_cond);
234
235
- if (ctx->tbflags & GUSA_EXCLUSIVE) {
236
+ if (ctx->tbflags & TB_FLAG_GUSA_EXCLUSIVE) {
237
/* When in an exclusive region, we must continue to the end.
238
Therefore, exit the region on a taken branch, but otherwise
239
fall through to the next instruction. */
240
tcg_gen_brcondi_i32(TCG_COND_EQ, ds, 0, l1);
241
242
/* Leave the gUSA region. */
243
- tcg_gen_movi_i32(cpu_flags, ctx->envflags & ~GUSA_MASK);
244
+ tcg_gen_movi_i32(cpu_flags, ctx->envflags & ~TB_FLAG_GUSA_MASK);
245
gen_jump(ctx);
246
247
gen_set_label(l1);
248
@@ -XXX,XX +XXX,XX @@ static inline void gen_store_fpr64(DisasContext *ctx, TCGv_i64 t, int reg)
249
#define XHACK(x) ((((x) & 1 ) << 4) | ((x) & 0xe))
250
251
#define CHECK_NOT_DELAY_SLOT \
252
- if (ctx->envflags & DELAY_SLOT_MASK) { \
253
- goto do_illegal_slot; \
254
+ if (ctx->envflags & TB_FLAG_DELAY_SLOT_MASK) { \
255
+ goto do_illegal_slot; \
256
}
257
258
#define CHECK_PRIVILEGED \
259
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
260
case 0x000b:        /* rts */
261
    CHECK_NOT_DELAY_SLOT
262
    tcg_gen_mov_i32(cpu_delayed_pc, cpu_pr);
263
- ctx->envflags |= DELAY_SLOT;
264
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
265
    ctx->delayed_pc = (uint32_t) - 1;
266
    return;
267
case 0x0028:        /* clrmac */
268
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
269
    CHECK_NOT_DELAY_SLOT
270
gen_write_sr(cpu_ssr);
271
    tcg_gen_mov_i32(cpu_delayed_pc, cpu_spc);
272
- ctx->envflags |= DELAY_SLOT_RTE;
273
+ ctx->envflags |= TB_FLAG_DELAY_SLOT_RTE;
274
    ctx->delayed_pc = (uint32_t) - 1;
275
ctx->base.is_jmp = DISAS_STOP;
276
    return;
277
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
278
    return;
279
case 0xe000:        /* mov #imm,Rn */
280
#ifdef CONFIG_USER_ONLY
281
- /* Detect the start of a gUSA region. If so, update envflags
282
- and end the TB. This will allow us to see the end of the
283
- region (stored in R0) in the next TB. */
284
+ /*
285
+ * Detect the start of a gUSA region (mov #-n, r15).
286
+ * If so, update envflags and end the TB. This will allow us
287
+ * to see the end of the region (stored in R0) in the next TB.
288
+ */
289
if (B11_8 == 15 && B7_0s < 0 &&
290
(tb_cflags(ctx->base.tb) & CF_PARALLEL)) {
291
- ctx->envflags = deposit32(ctx->envflags, GUSA_SHIFT, 8, B7_0s);
292
+ ctx->envflags =
293
+ deposit32(ctx->envflags, TB_FLAG_GUSA_SHIFT, 8, B7_0s);
294
ctx->base.is_jmp = DISAS_STOP;
295
}
296
#endif
297
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
298
case 0xa000:        /* bra disp */
299
    CHECK_NOT_DELAY_SLOT
300
ctx->delayed_pc = ctx->base.pc_next + 4 + B11_0s * 2;
301
- ctx->envflags |= DELAY_SLOT;
302
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
303
    return;
304
case 0xb000:        /* bsr disp */
305
    CHECK_NOT_DELAY_SLOT
306
tcg_gen_movi_i32(cpu_pr, ctx->base.pc_next + 4);
307
ctx->delayed_pc = ctx->base.pc_next + 4 + B11_0s * 2;
308
- ctx->envflags |= DELAY_SLOT;
309
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
310
    return;
311
}
312
313
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
314
    CHECK_NOT_DELAY_SLOT
315
tcg_gen_xori_i32(cpu_delayed_cond, cpu_sr_t, 1);
316
ctx->delayed_pc = ctx->base.pc_next + 4 + B7_0s * 2;
317
- ctx->envflags |= DELAY_SLOT_CONDITIONAL;
318
+ ctx->envflags |= TB_FLAG_DELAY_SLOT_COND;
319
    return;
320
case 0x8900:        /* bt label */
321
    CHECK_NOT_DELAY_SLOT
322
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
323
    CHECK_NOT_DELAY_SLOT
324
tcg_gen_mov_i32(cpu_delayed_cond, cpu_sr_t);
325
ctx->delayed_pc = ctx->base.pc_next + 4 + B7_0s * 2;
326
- ctx->envflags |= DELAY_SLOT_CONDITIONAL;
327
+ ctx->envflags |= TB_FLAG_DELAY_SLOT_COND;
328
    return;
329
case 0x8800:        /* cmp/eq #imm,R0 */
330
tcg_gen_setcondi_i32(TCG_COND_EQ, cpu_sr_t, REG(0), B7_0s);
331
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
332
case 0x0023:        /* braf Rn */
333
    CHECK_NOT_DELAY_SLOT
334
tcg_gen_addi_i32(cpu_delayed_pc, REG(B11_8), ctx->base.pc_next + 4);
335
- ctx->envflags |= DELAY_SLOT;
336
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
337
    ctx->delayed_pc = (uint32_t) - 1;
338
    return;
339
case 0x0003:        /* bsrf Rn */
340
    CHECK_NOT_DELAY_SLOT
341
tcg_gen_movi_i32(cpu_pr, ctx->base.pc_next + 4);
342
    tcg_gen_add_i32(cpu_delayed_pc, REG(B11_8), cpu_pr);
343
- ctx->envflags |= DELAY_SLOT;
344
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
345
    ctx->delayed_pc = (uint32_t) - 1;
346
    return;
347
case 0x4015:        /* cmp/pl Rn */
348
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
349
case 0x402b:        /* jmp @Rn */
350
    CHECK_NOT_DELAY_SLOT
351
    tcg_gen_mov_i32(cpu_delayed_pc, REG(B11_8));
352
- ctx->envflags |= DELAY_SLOT;
353
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
354
    ctx->delayed_pc = (uint32_t) - 1;
355
    return;
356
case 0x400b:        /* jsr @Rn */
357
    CHECK_NOT_DELAY_SLOT
358
tcg_gen_movi_i32(cpu_pr, ctx->base.pc_next + 4);
359
    tcg_gen_mov_i32(cpu_delayed_pc, REG(B11_8));
360
- ctx->envflags |= DELAY_SLOT;
361
+ ctx->envflags |= TB_FLAG_DELAY_SLOT;
362
    ctx->delayed_pc = (uint32_t) - 1;
363
    return;
364
case 0x400e:        /* ldc Rm,SR */
365
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
366
fflush(stderr);
367
#endif
368
do_illegal:
369
- if (ctx->envflags & DELAY_SLOT_MASK) {
370
+ if (ctx->envflags & TB_FLAG_DELAY_SLOT_MASK) {
371
do_illegal_slot:
372
gen_save_cpu_state(ctx, true);
373
gen_helper_raise_slot_illegal_instruction(cpu_env);
374
@@ -XXX,XX +XXX,XX @@ static void _decode_opc(DisasContext * ctx)
375
376
do_fpu_disabled:
377
gen_save_cpu_state(ctx, true);
378
- if (ctx->envflags & DELAY_SLOT_MASK) {
379
+ if (ctx->envflags & TB_FLAG_DELAY_SLOT_MASK) {
380
gen_helper_raise_slot_fpu_disable(cpu_env);
381
} else {
382
gen_helper_raise_fpu_disable(cpu_env);
383
@@ -XXX,XX +XXX,XX @@ static void decode_opc(DisasContext * ctx)
384
385
_decode_opc(ctx);
386
387
- if (old_flags & DELAY_SLOT_MASK) {
388
+ if (old_flags & TB_FLAG_DELAY_SLOT_MASK) {
389
/* go out of the delay slot */
390
- ctx->envflags &= ~DELAY_SLOT_MASK;
391
+ ctx->envflags &= ~TB_FLAG_DELAY_SLOT_MASK;
392
393
/* When in an exclusive region, we must continue to the end
394
for conditional branches. */
395
- if (ctx->tbflags & GUSA_EXCLUSIVE
396
- && old_flags & DELAY_SLOT_CONDITIONAL) {
397
+ if (ctx->tbflags & TB_FLAG_GUSA_EXCLUSIVE
398
+ && old_flags & TB_FLAG_DELAY_SLOT_COND) {
399
gen_delayed_conditional_jump(ctx);
400
return;
401
}
402
/* Otherwise this is probably an invalid gUSA region.
403
Drop the GUSA bits so the next TB doesn't see them. */
404
- ctx->envflags &= ~GUSA_MASK;
405
+ ctx->envflags &= ~TB_FLAG_GUSA_MASK;
406
407
tcg_gen_movi_i32(cpu_flags, ctx->envflags);
408
- if (old_flags & DELAY_SLOT_CONDITIONAL) {
409
+ if (old_flags & TB_FLAG_DELAY_SLOT_COND) {
410
     gen_delayed_conditional_jump(ctx);
411
} else {
412
gen_jump(ctx);
413
@@ -XXX,XX +XXX,XX @@ static void decode_gusa(DisasContext *ctx, CPUSH4State *env)
414
}
415
416
/* The entire region has been translated. */
417
- ctx->envflags &= ~GUSA_MASK;
418
+ ctx->envflags &= ~TB_FLAG_GUSA_MASK;
419
ctx->base.pc_next = pc_end;
420
ctx->base.num_insns += max_insns - 1;
421
return;
422
@@ -XXX,XX +XXX,XX @@ static void decode_gusa(DisasContext *ctx, CPUSH4State *env)
423
424
/* Restart with the EXCLUSIVE bit set, within a TB run via
425
cpu_exec_step_atomic holding the exclusive lock. */
426
- ctx->envflags |= GUSA_EXCLUSIVE;
427
+ ctx->envflags |= TB_FLAG_GUSA_EXCLUSIVE;
428
gen_save_cpu_state(ctx, false);
429
gen_helper_exclusive(cpu_env);
430
ctx->base.is_jmp = DISAS_NORETURN;
431
@@ -XXX,XX +XXX,XX @@ static void sh4_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
432
(tbflags & (1 << SR_RB))) * 0x10;
433
ctx->fbank = tbflags & FPSCR_FR ? 0x10 : 0;
434
435
- if (tbflags & GUSA_MASK) {
436
+#ifdef CONFIG_USER_ONLY
437
+ if (tbflags & TB_FLAG_GUSA_MASK) {
438
+ /* In gUSA exclusive region. */
439
uint32_t pc = ctx->base.pc_next;
440
uint32_t pc_end = ctx->base.tb->cs_base;
441
- int backup = sextract32(ctx->tbflags, GUSA_SHIFT, 8);
442
+ int backup = sextract32(ctx->tbflags, TB_FLAG_GUSA_SHIFT, 8);
443
int max_insns = (pc_end - pc) / 2;
444
445
if (pc != pc_end + backup || max_insns < 2) {
446
/* This is a malformed gUSA region. Don't do anything special,
447
since the interpreter is likely to get confused. */
448
- ctx->envflags &= ~GUSA_MASK;
449
- } else if (tbflags & GUSA_EXCLUSIVE) {
450
+ ctx->envflags &= ~TB_FLAG_GUSA_MASK;
451
+ } else if (tbflags & TB_FLAG_GUSA_EXCLUSIVE) {
452
/* Regardless of single-stepping or the end of the page,
453
we must complete execution of the gUSA region while
454
holding the exclusive lock. */
455
@@ -XXX,XX +XXX,XX @@ static void sh4_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
456
return;
457
}
458
}
459
+#endif
460
461
/* Since the ISA is fixed-width, we can bound by the number
462
of instructions remaining on the page. */
463
@@ -XXX,XX +XXX,XX @@ static void sh4_tr_translate_insn(DisasContextBase *dcbase, CPUState *cs)
464
DisasContext *ctx = container_of(dcbase, DisasContext, base);
465
466
#ifdef CONFIG_USER_ONLY
467
- if (unlikely(ctx->envflags & GUSA_MASK)
468
- && !(ctx->envflags & GUSA_EXCLUSIVE)) {
469
+ if (unlikely(ctx->envflags & TB_FLAG_GUSA_MASK)
470
+ && !(ctx->envflags & TB_FLAG_GUSA_EXCLUSIVE)) {
471
/* We're in an gUSA region, and we have not already fallen
472
back on using an exclusive region. Attempt to parse the
473
region into a single supported atomic operation. Failure
474
@@ -XXX,XX +XXX,XX @@ static void sh4_tr_tb_stop(DisasContextBase *dcbase, CPUState *cs)
475
{
476
DisasContext *ctx = container_of(dcbase, DisasContext, base);
477
478
- if (ctx->tbflags & GUSA_EXCLUSIVE) {
479
+ if (ctx->tbflags & TB_FLAG_GUSA_EXCLUSIVE) {
480
/* Ending the region of exclusivity. Clear the bits. */
481
- ctx->envflags &= ~GUSA_MASK;
482
+ ctx->envflags &= ~TB_FLAG_GUSA_MASK;
483
}
484
485
switch (ctx->base.is_jmp) {
24
--
486
--
25
2.20.1
487
2.34.1
26
27
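To spell out the bug being fixed: the pre-patch TB_FLAG_UNALIGN was (1 << 4), which lands inside the gUSA displacement field occupying bits [11:4], so a gUSA displacement could be misread as the unaligned-access flag (and vice versa). A compile-time sketch using only constants taken from the patch (the OLD_/NEW_ names are made up for the illustration):

    #define TB_FLAG_GUSA_SHIFT      4                 /* field is [11:4] */
    #define TB_FLAG_GUSA_EXCLUSIVE  (1 << 12)
    #define TB_FLAG_GUSA_MASK       ((0xff << TB_FLAG_GUSA_SHIFT) | \
                                     TB_FLAG_GUSA_EXCLUSIVE)

    #define OLD_TB_FLAG_UNALIGN     (1 << 4)          /* pre-patch value */
    #define NEW_TB_FLAG_UNALIGN     (1 << 13)         /* post-patch value */

    /* The old flag aliased the low bit of the gUSA field... */
    QEMU_BUILD_BUG_ON(!(OLD_TB_FLAG_UNALIGN & TB_FLAG_GUSA_MASK));
    /* ...while the new bit lies outside the gUSA field entirely. */
    QEMU_BUILD_BUG_ON(NEW_TB_FLAG_UNALIGN & TB_FLAG_GUSA_MASK);

Both checks pass: the first documents exactly the collision that motivated the change, the second that the replacement bit is free.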
Deleted patch

The functions generated by these macros are unused.

Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/unicore32/cpu.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/target/unicore32/cpu.h b/target/unicore32/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/unicore32/cpu.h
+++ b/target/unicore32/cpu.h
@@ -XXX,XX +XXX,XX @@ void cpu_asr_write(CPUUniCore32State *env1, target_ulong val, target_ulong mask)
 int uc32_cpu_signal_handler(int host_signum, void *pinfo, void *puc);
 
 /* MMU modes definitions */
-#define MMU_MODE0_SUFFIX _kernel
-#define MMU_MODE1_SUFFIX _user
 #define MMU_USER_IDX 1
 static inline int cpu_mmu_index(CPUUniCore32State *env, bool ifetch)
 {
--
2.20.1
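The reason these definitions can simply be dropped is that MMU_MODE*_SUFFIX only fed a token-pasting template that stamped out one accessor per MMU mode. A toy sketch of that mechanism follows; the GLUE helpers and the accessor body are illustrative, not QEMU's actual cpu_ldst_template.h.

    /* Toy sketch of suffix-based accessor generation. */
    #include <stdio.h>

    #define MMU_MODE0_SUFFIX _kernel
    #define GLUE_(x, y) x##y
    #define GLUE(x, y) GLUE_(x, y)

    /* Expands to a function literally named cpu_ldub_kernel(). */
    static unsigned GLUE(cpu_ldub, MMU_MODE0_SUFFIX)(const unsigned char *p)
    {
        return p[0];
    }

    int main(void)
    {
        unsigned char ram[1] = { 0x2a };
        printf("%u\n", cpu_ldub_kernel(ram));   /* prints 42 */
        return 0;
    }

If no caller ever spells cpu_ldub_kernel(), the generated function is dead code, which is exactly what this patch and the next one clean up.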
Deleted patch

The functions generated by these macros are unused.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/xtensa/cpu.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/target/xtensa/cpu.h b/target/xtensa/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/cpu.h
+++ b/target/xtensa/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t xtensa_replicate_windowstart(CPUXtensaState *env)
 }
 
 /* MMU modes definitions */
-#define MMU_MODE0_SUFFIX _ring0
-#define MMU_MODE1_SUFFIX _ring1
-#define MMU_MODE2_SUFFIX _ring2
-#define MMU_MODE3_SUFFIX _ring3
 #define MMU_USER_IDX 3
 
 static inline int cpu_mmu_index(CPUXtensaState *env, bool ifetch)
--
2.20.1
Deleted patch

All users have now been converted to cpu_*_mmuidx_ra.

Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu_ldst.h | 230 ----------------------------------------
 1 file changed, 230 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -XXX,XX +XXX,XX @@ void cpu_stl_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint32_t val,
 void cpu_stq_mmuidx_ra(CPUArchState *env, abi_ptr addr, uint64_t val,
                        int mmu_idx, uintptr_t retaddr);
 
-#ifdef MMU_MODE0_SUFFIX
-#define CPU_MMU_INDEX 0
-#define MEMSUFFIX MMU_MODE0_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif
-
-#if (NB_MMU_MODES >= 2) && defined(MMU_MODE1_SUFFIX)
-#define CPU_MMU_INDEX 1
-#define MEMSUFFIX MMU_MODE1_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif
-
-#if (NB_MMU_MODES >= 3) && defined(MMU_MODE2_SUFFIX)
-
-#define CPU_MMU_INDEX 2
-#define MEMSUFFIX MMU_MODE2_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 3) */
-
-#if (NB_MMU_MODES >= 4) && defined(MMU_MODE3_SUFFIX)
-
-#define CPU_MMU_INDEX 3
-#define MEMSUFFIX MMU_MODE3_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 4) */
-
-#if (NB_MMU_MODES >= 5) && defined(MMU_MODE4_SUFFIX)
-
-#define CPU_MMU_INDEX 4
-#define MEMSUFFIX MMU_MODE4_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 5) */
-
-#if (NB_MMU_MODES >= 6) && defined(MMU_MODE5_SUFFIX)
-
-#define CPU_MMU_INDEX 5
-#define MEMSUFFIX MMU_MODE5_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 6) */
-
-#if (NB_MMU_MODES >= 7) && defined(MMU_MODE6_SUFFIX)
-
-#define CPU_MMU_INDEX 6
-#define MEMSUFFIX MMU_MODE6_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 7) */
-
-#if (NB_MMU_MODES >= 8) && defined(MMU_MODE7_SUFFIX)
-
-#define CPU_MMU_INDEX 7
-#define MEMSUFFIX MMU_MODE7_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 8) */
-
-#if (NB_MMU_MODES >= 9) && defined(MMU_MODE8_SUFFIX)
-
-#define CPU_MMU_INDEX 8
-#define MEMSUFFIX MMU_MODE8_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 9) */
-
-#if (NB_MMU_MODES >= 10) && defined(MMU_MODE9_SUFFIX)
-
-#define CPU_MMU_INDEX 9
-#define MEMSUFFIX MMU_MODE9_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 10) */
-
-#if (NB_MMU_MODES >= 11) && defined(MMU_MODE10_SUFFIX)
-
-#define CPU_MMU_INDEX 10
-#define MEMSUFFIX MMU_MODE10_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 11) */
-
-#if (NB_MMU_MODES >= 12) && defined(MMU_MODE11_SUFFIX)
-
-#define CPU_MMU_INDEX 11
-#define MEMSUFFIX MMU_MODE11_SUFFIX
-#define DATA_SIZE 1
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 2
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 4
-#include "exec/cpu_ldst_template.h"
-
-#define DATA_SIZE 8
-#include "exec/cpu_ldst_template.h"
-#undef CPU_MMU_INDEX
-#undef MEMSUFFIX
-#endif /* (NB_MMU_MODES >= 12) */
-
-#if (NB_MMU_MODES > 12)
-#error "NB_MMU_MODES > 12 is not supported for now"
-#endif /* (NB_MMU_MODES > 12) */
-
 /* these access are slower, they must be as rare as possible */
 #define CPU_MMU_INDEX (cpu_mmu_index(env, false))
 #define MEMSUFFIX _data
--
2.20.1
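What replaces all of the deleted machinery is the explicit-index API declared above. A hedged sketch of a converted call site follows; do_swap_inc() is a made-up helper for illustration, while cpu_ldl_mmuidx_ra(), cpu_stl_mmuidx_ra() and GETPC() are the real interfaces.

    /* Sketch: the MMU index is an explicit runtime argument instead
     * of being baked into a _kernel/_user function suffix. */
    static uint32_t do_swap_inc(CPUArchState *env, abi_ptr addr, int mmu_idx)
    {
        uintptr_t ra = GETPC();   /* unwind info for a possible TLB fault */
        uint32_t old = cpu_ldl_mmuidx_ra(env, addr, mmu_idx, ra);

        cpu_stl_mmuidx_ra(env, addr, old + 1, mmu_idx, ra);
        return old;
    }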
Deleted patch

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200101112303.20724-4-philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 {tcg => include/tcg}/tcg-gvec-desc.h | 0
 {tcg => include/tcg}/tcg-mo.h        | 0
 {tcg => include/tcg}/tcg-op-gvec.h   | 0
 {tcg => include/tcg}/tcg-op.h        | 0
 {tcg => include/tcg}/tcg-opc.h       | 0
 {tcg => include/tcg}/tcg.h           | 0
 MAINTAINERS                          | 1 +
 7 files changed, 1 insertion(+)
 rename {tcg => include/tcg}/tcg-gvec-desc.h (100%)
 rename {tcg => include/tcg}/tcg-mo.h (100%)
 rename {tcg => include/tcg}/tcg-op-gvec.h (100%)
 rename {tcg => include/tcg}/tcg-op.h (100%)
 rename {tcg => include/tcg}/tcg-opc.h (100%)
 rename {tcg => include/tcg}/tcg.h (100%)

diff --git a/tcg/tcg-gvec-desc.h b/include/tcg/tcg-gvec-desc.h
similarity index 100%
rename from tcg/tcg-gvec-desc.h
rename to include/tcg/tcg-gvec-desc.h
diff --git a/tcg/tcg-mo.h b/include/tcg/tcg-mo.h
similarity index 100%
rename from tcg/tcg-mo.h
rename to include/tcg/tcg-mo.h
diff --git a/tcg/tcg-op-gvec.h b/include/tcg/tcg-op-gvec.h
similarity index 100%
rename from tcg/tcg-op-gvec.h
rename to include/tcg/tcg-op-gvec.h
diff --git a/tcg/tcg-op.h b/include/tcg/tcg-op.h
similarity index 100%
rename from tcg/tcg-op.h
rename to include/tcg/tcg-op.h
diff --git a/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
similarity index 100%
rename from tcg/tcg-opc.h
rename to include/tcg/tcg-opc.h
diff --git a/tcg/tcg.h b/include/tcg/tcg.h
similarity index 100%
rename from tcg/tcg.h
rename to include/tcg/tcg.h
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ Common TCG code
 M: Richard Henderson <rth@twiddle.net>
 S: Maintained
 F: tcg/
+F: include/tcg/
 
 TCG Plugins
 M: Alex Bennée <alex.bennee@linaro.org>
--
2.20.1
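With the headers moved under include/tcg/, users reach them through the normal include root rather than a tcg/-specific -I path. An illustrative include block (not taken from this patch) in a file that uses the TCG API:

    /* Illustrative only: paths match the renames above. */
    #include "tcg/tcg.h"
    #include "tcg/tcg-op.h"
    #include "tcg/tcg-op-gvec.h"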