This is mostly my code_gen_buffer cleanup, plus a few other random
changes thrown in, including a fix for a recent float32_exp2 bug.

r~

The following changes since commit 894fc4fd670aaf04a67dc7507739f914ff4bacf2:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2021-06-11 09:21:48 +0100)

are available in the Git repository at:

  https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20210611

for you to fetch changes up to 60afaddc208d34f6dc86dd974f6e02724fba6eb6:

  docs/devel: Explain in more detail the TB chaining mechanisms (2021-06-11 09:41:25 -0700)

----------------------------------------------------------------
Clean up code_gen_buffer allocation.
Add tcg_remove_ops_after.
Fix tcg_constant_* documentation.
Improve TB chaining documentation.
Fix float32_exp2.

----------------------------------------------------------------
Jose R. Ziviani (1):
      tcg/arm: Fix tcg_out_op function signature

Luis Pires (1):
      docs/devel: Explain in more detail the TB chaining mechanisms

Richard Henderson (32):
      meson: Split out tcg/meson.build
      meson: Split out fpu/meson.build
      tcg: Re-order tcg_region_init vs tcg_prologue_init
      tcg: Remove error return from tcg_region_initial_alloc__locked
      tcg: Split out tcg_region_initial_alloc
      tcg: Split out tcg_region_prologue_set
      tcg: Split out region.c
      accel/tcg: Inline cpu_gen_init
      accel/tcg: Move alloc_code_gen_buffer to tcg/region.c
      accel/tcg: Rename tcg_init to tcg_init_machine
      tcg: Create tcg_init
      accel/tcg: Merge tcg_exec_init into tcg_init_machine
      accel/tcg: Use MiB in tcg_init_machine
      accel/tcg: Pass down max_cpus to tcg_init
      tcg: Introduce tcg_max_ctxs
      tcg: Move MAX_CODE_GEN_BUFFER_SIZE to tcg-target.h
      tcg: Replace region.end with region.total_size
      tcg: Rename region.start to region.after_prologue
      tcg: Tidy tcg_n_regions
      tcg: Tidy split_cross_256mb
      tcg: Move in_code_gen_buffer and tests to region.c
      tcg: Allocate code_gen_buffer into struct tcg_region_state
      tcg: Return the map protection from alloc_code_gen_buffer
      tcg: Sink qemu_madvise call to common code
      util/osdep: Add qemu_mprotect_rw
      tcg: Round the tb_size default from qemu_get_host_physmem
      tcg: Merge buffer protection and guard page protection
      tcg: When allocating for !splitwx, begin with PROT_NONE
      tcg: Move tcg_init_ctx and tcg_ctx from accel/tcg/
      tcg: Introduce tcg_remove_ops_after
      tcg: Fix documentation for tcg_constant_* vs tcg_temp_free_*
      softfloat: Fix tp init in float32_exp2

 docs/devel/tcg.rst | 101 ++++-
 meson.build | 12 +-
 accel/tcg/internal.h | 2 +
 include/qemu/osdep.h | 1 +
 include/sysemu/tcg.h | 2 -
 include/tcg/tcg.h | 28 +-
 tcg/aarch64/tcg-target.h | 1 +
 tcg/arm/tcg-target.h | 1 +
 tcg/i386/tcg-target.h | 2 +
 tcg/mips/tcg-target.h | 6 +
 tcg/ppc/tcg-target.h | 2 +
 tcg/riscv/tcg-target.h | 1 +
 tcg/s390/tcg-target.h | 3 +
 tcg/sparc/tcg-target.h | 1 +
 tcg/tcg-internal.h | 40 ++
 tcg/tci/tcg-target.h | 1 +
 accel/tcg/tcg-all.c | 32 +-
 accel/tcg/translate-all.c | 439 +-------------------
 bsd-user/main.c | 3 +-
 fpu/softfloat.c | 2 +-
 linux-user/main.c | 1 -
 tcg/region.c | 999 ++++++++++++++++++++++++++++++++++++++++++++++
 tcg/tcg.c | 649 +++---------------------
 util/osdep.c | 9 +
 tcg/arm/tcg-target.c.inc | 3 +-
 fpu/meson.build | 1 +
 tcg/meson.build | 14 +
 27 files changed, 1266 insertions(+), 1090 deletions(-)
 create mode 100644 tcg/tcg-internal.h
 create mode 100644 tcg/region.c
 create mode 100644 fpu/meson.build
 create mode 100644 tcg/meson.build
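One item in the list above, tcg_remove_ops_after, is easiest to describe with a
usage sketch.  The following is an illustration only, assuming the interface
introduced by "tcg: Introduce tcg_remove_ops_after" (save a marker with
tcg_last_op(), then discard anything emitted after it); the condition name is
made up, and none of this is code taken from the series itself.

    /* Minimal sketch, not from this series. */
    TCGOp *mark = tcg_last_op();       /* last op already emitted */

    /* ... speculatively emit ops for an expansion that may not pan out ... */

    if (need_to_back_out) {            /* hypothetical condition */
        tcg_remove_ops_after(mark);    /* drop everything emitted after 'mark' */
    }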
New patch
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 meson.build | 8 +-------
 tcg/meson.build | 13 +++++++++++++
 2 files changed, 14 insertions(+), 7 deletions(-)
 create mode 100644 tcg/meson.build

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ common_ss.add(capstone)
 specific_ss.add(files('cpu.c', 'disas.c', 'gdbstub.c'), capstone)
 specific_ss.add(when: 'CONFIG_TCG', if_true: files(
   'fpu/softfloat.c',
-  'tcg/optimize.c',
-  'tcg/tcg-common.c',
-  'tcg/tcg-op-gvec.c',
-  'tcg/tcg-op-vec.c',
-  'tcg/tcg-op.c',
-  'tcg/tcg.c',
 ))
-specific_ss.add(when: 'CONFIG_TCG_INTERPRETER', if_true: files('tcg/tci.c'))

 # Work around a gcc bug/misfeature wherein constant propagation looks
 # through an alias:
@@ -XXX,XX +XXX,XX @@ subdir('net')
 subdir('replay')
 subdir('semihosting')
 subdir('hw')
+subdir('tcg')
 subdir('accel')
 subdir('plugins')
 subdir('bsd-user')
diff --git a/tcg/meson.build b/tcg/meson.build
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tcg/meson.build
@@ -XXX,XX +XXX,XX @@
+tcg_ss = ss.source_set()
+
+tcg_ss.add(files(
+  'optimize.c',
+  'tcg.c',
+  'tcg-common.c',
+  'tcg-op.c',
+  'tcg-op-gvec.c',
+  'tcg-op-vec.c',
+))
+tcg_ss.add(when: 'CONFIG_TCG_INTERPRETER', if_true: files('tci.c'))
+
+specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)
--
2.25.1
New patch
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 meson.build | 4 +---
 fpu/meson.build | 1 +
 2 files changed, 2 insertions(+), 3 deletions(-)
 create mode 100644 fpu/meson.build

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ subdir('softmmu')

 common_ss.add(capstone)
 specific_ss.add(files('cpu.c', 'disas.c', 'gdbstub.c'), capstone)
-specific_ss.add(when: 'CONFIG_TCG', if_true: files(
-  'fpu/softfloat.c',
-))

 # Work around a gcc bug/misfeature wherein constant propagation looks
 # through an alias:
@@ -XXX,XX +XXX,XX @@ subdir('replay')
 subdir('semihosting')
 subdir('hw')
 subdir('tcg')
+subdir('fpu')
 subdir('accel')
 subdir('plugins')
 subdir('bsd-user')
diff --git a/fpu/meson.build b/fpu/meson.build
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/fpu/meson.build
@@ -0,0 +1 @@
+specific_ss.add(when: 'CONFIG_TCG', if_true: files('softfloat.c'))
--
2.25.1
New patch
Instead of delaying tcg_region_init until after tcg_prologue_init
is complete, do tcg_region_init first and let tcg_prologue_init
shrink the first region by the size of the generated prologue.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/tcg-all.c | 11 ---------
 accel/tcg/translate-all.c | 3 +++
 bsd-user/main.c | 1 -
 linux-user/main.c | 1 -
 tcg/tcg.c | 52 ++++++++++++++-------------------
 5 files changed, 22 insertions(+), 46 deletions(-)

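Condensed for reference, the reordering amounts to the following; this is a
sketch distilled from the hunks below, not standalone code.

    /* Before (user-mode shown): regions were carved from whatever was left
     * after the prologue had already been generated. */
    tcg_prologue_init(tcg_ctx);
    tcg_region_init();

    /* After: regions are set up as soon as the buffer exists, and
     * tcg_prologue_init() shrinks region 0 by the size of the prologue. */
    tcg_region_init();              /* now called from tcg_exec_init() */
    tcg_prologue_init(tcg_ctx);     /* emits into region 0, then advances region.start */
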
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/accel/tcg/tcg-all.c
19
+++ b/accel/tcg/tcg-all.c
20
@@ -XXX,XX +XXX,XX @@ static int tcg_init(MachineState *ms)
21
22
tcg_exec_init(s->tb_size * 1024 * 1024, s->splitwx_enabled);
23
mttcg_enabled = s->mttcg_enabled;
24
-
25
- /*
26
- * Initialize TCG regions only for softmmu.
27
- *
28
- * This needs to be done later for user mode, because the prologue
29
- * generation needs to be delayed so that GUEST_BASE is already set.
30
- */
31
-#ifndef CONFIG_USER_ONLY
32
- tcg_region_init();
33
-#endif /* !CONFIG_USER_ONLY */
34
-
35
return 0;
36
}
37
38
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/accel/tcg/translate-all.c
41
+++ b/accel/tcg/translate-all.c
42
@@ -XXX,XX +XXX,XX @@ void tcg_exec_init(unsigned long tb_size, int splitwx)
43
splitwx, &error_fatal);
44
assert(ok);
45
46
+ /* TODO: allocating regions is hand-in-glove with code_gen_buffer. */
47
+ tcg_region_init();
48
+
49
#if defined(CONFIG_SOFTMMU)
50
/* There's no guest base to take into account, so go ahead and
51
initialize the prologue now. */
52
diff --git a/bsd-user/main.c b/bsd-user/main.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/bsd-user/main.c
55
+++ b/bsd-user/main.c
56
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
57
* the real value of GUEST_BASE into account.
58
*/
59
tcg_prologue_init(tcg_ctx);
60
- tcg_region_init();
61
62
/* build Task State */
63
memset(ts, 0, sizeof(TaskState));
64
diff --git a/linux-user/main.c b/linux-user/main.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/linux-user/main.c
67
+++ b/linux-user/main.c
68
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
69
generating the prologue until now so that the prologue can take
70
the real value of GUEST_BASE into account. */
71
tcg_prologue_init(tcg_ctx);
72
- tcg_region_init();
73
74
target_cpu_copy_regs(env, regs);
75
76
diff --git a/tcg/tcg.c b/tcg/tcg.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/tcg/tcg.c
79
+++ b/tcg/tcg.c
80
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tcg_tb_alloc(TCGContext *s)
81
82
void tcg_prologue_init(TCGContext *s)
83
{
84
- size_t prologue_size, total_size;
85
- void *buf0, *buf1;
86
+ size_t prologue_size;
87
88
/* Put the prologue at the beginning of code_gen_buffer. */
89
- buf0 = s->code_gen_buffer;
90
- total_size = s->code_gen_buffer_size;
91
- s->code_ptr = buf0;
92
- s->code_buf = buf0;
93
+ tcg_region_assign(s, 0);
94
+ s->code_ptr = s->code_gen_ptr;
95
+ s->code_buf = s->code_gen_ptr;
96
s->data_gen_ptr = NULL;
97
98
- /*
99
- * The region trees are not yet configured, but tcg_splitwx_to_rx
100
- * needs the bounds for an assert.
101
- */
102
- region.start = buf0;
103
- region.end = buf0 + total_size;
104
-
105
#ifndef CONFIG_TCG_INTERPRETER
106
- tcg_qemu_tb_exec = (tcg_prologue_fn *)tcg_splitwx_to_rx(buf0);
107
+ tcg_qemu_tb_exec = (tcg_prologue_fn *)tcg_splitwx_to_rx(s->code_ptr);
108
#endif
109
110
- /* Compute a high-water mark, at which we voluntarily flush the buffer
111
- and start over. The size here is arbitrary, significantly larger
112
- than we expect the code generation for any one opcode to require. */
113
- s->code_gen_highwater = s->code_gen_buffer + (total_size - TCG_HIGHWATER);
114
-
115
#ifdef TCG_TARGET_NEED_POOL_LABELS
116
s->pool_labels = NULL;
117
#endif
118
@@ -XXX,XX +XXX,XX @@ void tcg_prologue_init(TCGContext *s)
119
}
120
#endif
121
122
- buf1 = s->code_ptr;
123
+ prologue_size = tcg_current_code_size(s);
124
+
125
#ifndef CONFIG_TCG_INTERPRETER
126
- flush_idcache_range((uintptr_t)tcg_splitwx_to_rx(buf0), (uintptr_t)buf0,
127
- tcg_ptr_byte_diff(buf1, buf0));
128
+ flush_idcache_range((uintptr_t)tcg_splitwx_to_rx(s->code_buf),
129
+ (uintptr_t)s->code_buf, prologue_size);
130
#endif
131
132
- /* Deduct the prologue from the buffer. */
133
- prologue_size = tcg_current_code_size(s);
134
- s->code_gen_ptr = buf1;
135
- s->code_gen_buffer = buf1;
136
- s->code_buf = buf1;
137
- total_size -= prologue_size;
138
- s->code_gen_buffer_size = total_size;
139
+ /* Deduct the prologue from the first region. */
140
+ region.start = s->code_ptr;
141
142
- tcg_register_jit(tcg_splitwx_to_rx(s->code_gen_buffer), total_size);
143
+ /* Recompute boundaries of the first region. */
144
+ tcg_region_assign(s, 0);
145
+
146
+ tcg_register_jit(tcg_splitwx_to_rx(region.start),
147
+ region.end - region.start);
148
149
#ifdef DEBUG_DISAS
150
if (qemu_loglevel_mask(CPU_LOG_TB_OUT_ASM)) {
151
FILE *logfile = qemu_log_lock();
152
qemu_log("PROLOGUE: [size=%zu]\n", prologue_size);
153
if (s->data_gen_ptr) {
154
- size_t code_size = s->data_gen_ptr - buf0;
155
+ size_t code_size = s->data_gen_ptr - s->code_gen_ptr;
156
size_t data_size = prologue_size - code_size;
157
size_t i;
158
159
- log_disas(buf0, code_size);
160
+ log_disas(s->code_gen_ptr, code_size);
161
162
for (i = 0; i < data_size; i += sizeof(tcg_target_ulong)) {
163
if (sizeof(tcg_target_ulong) == 8) {
164
@@ -XXX,XX +XXX,XX @@ void tcg_prologue_init(TCGContext *s)
165
}
166
}
167
} else {
168
- log_disas(buf0, prologue_size);
169
+ log_disas(s->code_gen_ptr, prologue_size);
170
}
171
qemu_log("\n");
172
qemu_log_flush();
173
--
174
2.25.1
175
176
diff view generated by jsdifflib
New patch
1
All callers immediately assert on error, so move the assert
into the function itself.
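Condensed for reference, the pattern change is (distilled from the diff
below, not standalone code):

    /* Before: every caller had to capture the result and assert. */
    bool err = tcg_region_initial_alloc__locked(s);
    g_assert(!err);

    /* After: the helper returns void and asserts internally. */
    static void tcg_region_initial_alloc__locked(TCGContext *s)
    {
        bool err = tcg_region_alloc__locked(s);
        g_assert(!err);
    }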
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
tcg/tcg.c | 19 ++++++-------------
10
1 file changed, 6 insertions(+), 13 deletions(-)
11
12
diff --git a/tcg/tcg.c b/tcg/tcg.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/tcg.c
15
+++ b/tcg/tcg.c
16
@@ -XXX,XX +XXX,XX @@ static bool tcg_region_alloc(TCGContext *s)
17
* Perform a context's first region allocation.
18
* This function does _not_ increment region.agg_size_full.
19
*/
20
-static inline bool tcg_region_initial_alloc__locked(TCGContext *s)
21
+static void tcg_region_initial_alloc__locked(TCGContext *s)
22
{
23
- return tcg_region_alloc__locked(s);
24
+ bool err = tcg_region_alloc__locked(s);
25
+ g_assert(!err);
26
}
27
28
/* Call from a safe-work context */
29
@@ -XXX,XX +XXX,XX @@ void tcg_region_reset_all(void)
30
31
for (i = 0; i < n_ctxs; i++) {
32
TCGContext *s = qatomic_read(&tcg_ctxs[i]);
33
- bool err = tcg_region_initial_alloc__locked(s);
34
-
35
- g_assert(!err);
36
+ tcg_region_initial_alloc__locked(s);
37
}
38
qemu_mutex_unlock(&region.lock);
39
40
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(void)
41
42
/* In user-mode we support only one ctx, so do the initial allocation now */
43
#ifdef CONFIG_USER_ONLY
44
- {
45
- bool err = tcg_region_initial_alloc__locked(tcg_ctx);
46
-
47
- g_assert(!err);
48
- }
49
+ tcg_region_initial_alloc__locked(tcg_ctx);
50
#endif
51
}
52
53
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
54
MachineState *ms = MACHINE(qdev_get_machine());
55
TCGContext *s = g_malloc(sizeof(*s));
56
unsigned int i, n;
57
- bool err;
58
59
*s = tcg_init_ctx;
60
61
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
62
63
tcg_ctx = s;
64
qemu_mutex_lock(&region.lock);
65
- err = tcg_region_initial_alloc__locked(tcg_ctx);
66
- g_assert(!err);
67
+ tcg_region_initial_alloc__locked(s);
68
qemu_mutex_unlock(&region.lock);
69
}
70
#endif /* !CONFIG_USER_ONLY */
71
--
72
2.25.1
73
74
diff view generated by jsdifflib
New patch
1
This has only one user, and currently needs an ifdef,
but will make more sense after some code motion.
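For reference, the new wrapper follows the usual __locked convention:
tcg_region_initial_alloc__locked() expects region.lock to be held, and the
new helper (quoted from the hunk below) simply takes the lock around that
call.

    static void tcg_region_initial_alloc(TCGContext *s)
    {
        qemu_mutex_lock(&region.lock);
        tcg_region_initial_alloc__locked(s);
        qemu_mutex_unlock(&region.lock);
    }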
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
tcg/tcg.c | 13 ++++++++++---
9
1 file changed, 10 insertions(+), 3 deletions(-)
10
11
diff --git a/tcg/tcg.c b/tcg/tcg.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/tcg/tcg.c
14
+++ b/tcg/tcg.c
15
@@ -XXX,XX +XXX,XX @@ static void tcg_region_initial_alloc__locked(TCGContext *s)
16
g_assert(!err);
17
}
18
19
+#ifndef CONFIG_USER_ONLY
20
+static void tcg_region_initial_alloc(TCGContext *s)
21
+{
22
+ qemu_mutex_lock(&region.lock);
23
+ tcg_region_initial_alloc__locked(s);
24
+ qemu_mutex_unlock(&region.lock);
25
+}
26
+#endif
27
+
28
/* Call from a safe-work context */
29
void tcg_region_reset_all(void)
30
{
31
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
32
}
33
34
tcg_ctx = s;
35
- qemu_mutex_lock(&region.lock);
36
- tcg_region_initial_alloc__locked(s);
37
- qemu_mutex_unlock(&region.lock);
38
+ tcg_region_initial_alloc(s);
39
}
40
#endif /* !CONFIG_USER_ONLY */
41
42
--
43
2.25.1
44
45
diff view generated by jsdifflib
New patch
1
This has only one user, but will make more sense after some
code motion.

Always leave the tcg_init_ctx initialized to the first region,
in preparation for tcg_prologue_init().  This also requires
that we don't re-allocate the region for the first cpu, lest
we hit the assertion for total number of regions allocated.
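For tcg_register_thread() this reduces to the following (taken from the hunk
below): context 0 is tcg_init_ctx, which already owns the first region, so
only secondary threads allocate a region here.

    if (n > 0) {
        alloc_tcg_plugin_context(s);
        tcg_region_initial_alloc(s);    /* secondary contexts only */
    }

    tcg_ctx = s;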
8
9
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
---
13
tcg/tcg.c | 37 ++++++++++++++++++++++---------------
14
1 file changed, 22 insertions(+), 15 deletions(-)
15
16
diff --git a/tcg/tcg.c b/tcg/tcg.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/tcg/tcg.c
19
+++ b/tcg/tcg.c
20
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(void)
21
22
tcg_region_trees_init();
23
24
- /* In user-mode we support only one ctx, so do the initial allocation now */
25
-#ifdef CONFIG_USER_ONLY
26
- tcg_region_initial_alloc__locked(tcg_ctx);
27
-#endif
28
+ /*
29
+ * Leave the initial context initialized to the first region.
30
+ * This will be the context into which we generate the prologue.
31
+ * It is also the only context for CONFIG_USER_ONLY.
32
+ */
33
+ tcg_region_initial_alloc__locked(&tcg_init_ctx);
34
+}
35
+
36
+static void tcg_region_prologue_set(TCGContext *s)
37
+{
38
+ /* Deduct the prologue from the first region. */
39
+ g_assert(region.start == s->code_gen_buffer);
40
+ region.start = s->code_ptr;
41
+
42
+ /* Recompute boundaries of the first region. */
43
+ tcg_region_assign(s, 0);
44
+
45
+ /* Register the balance of the buffer with gdb. */
46
+ tcg_register_jit(tcg_splitwx_to_rx(region.start),
47
+ region.end - region.start);
48
}
49
50
#ifdef CONFIG_DEBUG_TCG
51
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
52
53
if (n > 0) {
54
alloc_tcg_plugin_context(s);
55
+ tcg_region_initial_alloc(s);
56
}
57
58
tcg_ctx = s;
59
- tcg_region_initial_alloc(s);
60
}
61
#endif /* !CONFIG_USER_ONLY */
62
63
@@ -XXX,XX +XXX,XX @@ void tcg_prologue_init(TCGContext *s)
64
{
65
size_t prologue_size;
66
67
- /* Put the prologue at the beginning of code_gen_buffer. */
68
- tcg_region_assign(s, 0);
69
s->code_ptr = s->code_gen_ptr;
70
s->code_buf = s->code_gen_ptr;
71
s->data_gen_ptr = NULL;
72
@@ -XXX,XX +XXX,XX @@ void tcg_prologue_init(TCGContext *s)
73
(uintptr_t)s->code_buf, prologue_size);
74
#endif
75
76
- /* Deduct the prologue from the first region. */
77
- region.start = s->code_ptr;
78
-
79
- /* Recompute boundaries of the first region. */
80
- tcg_region_assign(s, 0);
81
-
82
- tcg_register_jit(tcg_splitwx_to_rx(region.start),
83
- region.end - region.start);
84
+ tcg_region_prologue_set(s);
85
86
#ifdef DEBUG_DISAS
87
if (qemu_loglevel_mask(CPU_LOG_TB_OUT_ASM)) {
88
--
89
2.25.1
90
91
diff view generated by jsdifflib
1
From: Claudio Fontana <cfontana@suse.de>
1
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
2
2
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
split up the CpusAccel tcg_cpus into three TCG variants:
4
5
tcg_cpus_rr (single threaded, round robin cpus)
6
tcg_cpus_icount (same as rr, but with instruction counting enabled)
7
tcg_cpus_mttcg (multi-threaded cpus)
8
9
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Claudio Fontana <cfontana@suse.de>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Message-Id: <20201015143217.29337-2-cfontana@suse.de>
14
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
---
4
---
16
accel/tcg/tcg-cpus-icount.h | 17 ++
5
tcg/tcg-internal.h | 37 +++
17
accel/tcg/tcg-cpus-mttcg.h | 21 ++
6
tcg/region.c | 572 +++++++++++++++++++++++++++++++++++++++++++++
18
accel/tcg/tcg-cpus-rr.h | 20 ++
7
tcg/tcg.c | 547 +------------------------------------------
19
accel/tcg/tcg-cpus.h | 13 +-
8
tcg/meson.build | 1 +
20
accel/tcg/tcg-all.c | 8 +-
9
4 files changed, 613 insertions(+), 544 deletions(-)
21
accel/tcg/tcg-cpus-icount.c | 147 +++++++++++
10
create mode 100644 tcg/tcg-internal.h
22
accel/tcg/tcg-cpus-mttcg.c | 117 +++++++++
11
create mode 100644 tcg/region.c
23
accel/tcg/tcg-cpus-rr.c | 270 ++++++++++++++++++++
24
accel/tcg/tcg-cpus.c | 484 ++----------------------------------
25
softmmu/icount.c | 2 +-
26
accel/tcg/meson.build | 9 +-
27
11 files changed, 646 insertions(+), 462 deletions(-)
28
create mode 100644 accel/tcg/tcg-cpus-icount.h
29
create mode 100644 accel/tcg/tcg-cpus-mttcg.h
30
create mode 100644 accel/tcg/tcg-cpus-rr.h
31
create mode 100644 accel/tcg/tcg-cpus-icount.c
32
create mode 100644 accel/tcg/tcg-cpus-mttcg.c
33
create mode 100644 accel/tcg/tcg-cpus-rr.c
34
12
35
diff --git a/accel/tcg/tcg-cpus-icount.h b/accel/tcg/tcg-cpus-icount.h
13
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
36
new file mode 100644
14
new file mode 100644
37
index XXXXXXX..XXXXXXX
15
index XXXXXXX..XXXXXXX
38
--- /dev/null
16
--- /dev/null
39
+++ b/accel/tcg/tcg-cpus-icount.h
17
+++ b/tcg/tcg-internal.h
40
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@
41
+/*
19
+/*
42
+ * QEMU TCG Single Threaded vCPUs implementation using instruction counting
20
+ * Internal declarations for Tiny Code Generator for QEMU
43
+ *
21
+ *
44
+ * Copyright 2020 SUSE LLC
22
+ * Copyright (c) 2008 Fabrice Bellard
45
+ *
46
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
47
+ * See the COPYING file in the top-level directory.
48
+ */
49
+
50
+#ifndef TCG_CPUS_ICOUNT_H
51
+#define TCG_CPUS_ICOUNT_H
52
+
53
+void handle_icount_deadline(void);
54
+void prepare_icount_for_run(CPUState *cpu);
55
+void process_icount_data(CPUState *cpu);
56
+
57
+#endif /* TCG_CPUS_ICOUNT_H */
58
diff --git a/accel/tcg/tcg-cpus-mttcg.h b/accel/tcg/tcg-cpus-mttcg.h
59
new file mode 100644
60
index XXXXXXX..XXXXXXX
61
--- /dev/null
62
+++ b/accel/tcg/tcg-cpus-mttcg.h
63
@@ -XXX,XX +XXX,XX @@
64
+/*
65
+ * QEMU TCG Multi Threaded vCPUs implementation
66
+ *
67
+ * Copyright 2020 SUSE LLC
68
+ *
69
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
70
+ * See the COPYING file in the top-level directory.
71
+ */
72
+
73
+#ifndef TCG_CPUS_MTTCG_H
74
+#define TCG_CPUS_MTTCG_H
75
+
76
+/*
77
+ * In the multi-threaded case each vCPU has its own thread. The TLS
78
+ * variable current_cpu can be used deep in the code to find the
79
+ * current CPUState for a given thread.
80
+ */
81
+
82
+void *tcg_cpu_thread_fn(void *arg);
83
+
84
+#endif /* TCG_CPUS_MTTCG_H */
85
diff --git a/accel/tcg/tcg-cpus-rr.h b/accel/tcg/tcg-cpus-rr.h
86
new file mode 100644
87
index XXXXXXX..XXXXXXX
88
--- /dev/null
89
+++ b/accel/tcg/tcg-cpus-rr.h
90
@@ -XXX,XX +XXX,XX @@
91
+/*
92
+ * QEMU TCG Single Threaded vCPUs implementation
93
+ *
94
+ * Copyright 2020 SUSE LLC
95
+ *
96
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
97
+ * See the COPYING file in the top-level directory.
98
+ */
99
+
100
+#ifndef TCG_CPUS_RR_H
101
+#define TCG_CPUS_RR_H
102
+
103
+#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
104
+
105
+/* Kick all RR vCPUs. */
106
+void qemu_cpu_kick_rr_cpus(CPUState *unused);
107
+
108
+void *tcg_rr_cpu_thread_fn(void *arg);
109
+
110
+#endif /* TCG_CPUS_RR_H */
111
diff --git a/accel/tcg/tcg-cpus.h b/accel/tcg/tcg-cpus.h
112
index XXXXXXX..XXXXXXX 100644
113
--- a/accel/tcg/tcg-cpus.h
114
+++ b/accel/tcg/tcg-cpus.h
115
@@ -XXX,XX +XXX,XX @@
116
/*
117
- * Accelerator CPUS Interface
118
+ * QEMU TCG vCPU common functionality
119
+ *
120
+ * Functionality common to all TCG vcpu variants: mttcg, rr and icount.
121
*
122
* Copyright 2020 SUSE LLC
123
*
124
@@ -XXX,XX +XXX,XX @@
125
126
#include "sysemu/cpus.h"
127
128
-extern const CpusAccel tcg_cpus;
129
+extern const CpusAccel tcg_cpus_mttcg;
130
+extern const CpusAccel tcg_cpus_icount;
131
+extern const CpusAccel tcg_cpus_rr;
132
+
133
+void tcg_start_vcpu_thread(CPUState *cpu);
134
+void qemu_tcg_destroy_vcpu(CPUState *cpu);
135
+int tcg_cpu_exec(CPUState *cpu);
136
+void tcg_handle_interrupt(CPUState *cpu, int mask);
137
138
#endif /* TCG_CPUS_H */
139
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
140
index XXXXXXX..XXXXXXX 100644
141
--- a/accel/tcg/tcg-all.c
142
+++ b/accel/tcg/tcg-all.c
143
@@ -XXX,XX +XXX,XX @@ static int tcg_init(MachineState *ms)
144
145
tcg_exec_init(s->tb_size * 1024 * 1024);
146
mttcg_enabled = s->mttcg_enabled;
147
- cpus_register_accel(&tcg_cpus);
148
149
+ if (mttcg_enabled) {
150
+ cpus_register_accel(&tcg_cpus_mttcg);
151
+ } else if (icount_enabled()) {
152
+ cpus_register_accel(&tcg_cpus_icount);
153
+ } else {
154
+ cpus_register_accel(&tcg_cpus_rr);
155
+ }
156
return 0;
157
}
158
159
diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c
160
new file mode 100644
161
index XXXXXXX..XXXXXXX
162
--- /dev/null
163
+++ b/accel/tcg/tcg-cpus-icount.c
164
@@ -XXX,XX +XXX,XX @@
165
+/*
166
+ * QEMU TCG Single Threaded vCPUs implementation using instruction counting
167
+ *
168
+ * Copyright (c) 2003-2008 Fabrice Bellard
169
+ * Copyright (c) 2014 Red Hat Inc.
170
+ *
23
+ *
171
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
24
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
172
+ * of this software and associated documentation files (the "Software"), to deal
25
+ * of this software and associated documentation files (the "Software"), to deal
173
+ * in the Software without restriction, including without limitation the rights
26
+ * in the Software without restriction, including without limitation the rights
174
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
27
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
...
...
185
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
38
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
186
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
39
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
187
+ * THE SOFTWARE.
40
+ * THE SOFTWARE.
188
+ */
41
+ */
189
+
42
+
190
+#include "qemu/osdep.h"
43
+#ifndef TCG_INTERNAL_H
191
+#include "qemu-common.h"
44
+#define TCG_INTERNAL_H 1
192
+#include "sysemu/tcg.h"
45
+
193
+#include "sysemu/replay.h"
46
+#define TCG_HIGHWATER 1024
194
+#include "qemu/main-loop.h"
47
+
195
+#include "qemu/guest-random.h"
48
+extern TCGContext **tcg_ctxs;
196
+#include "exec/exec-all.h"
49
+extern unsigned int n_tcg_ctxs;
197
+#include "hw/boards.h"
50
+
198
+
51
+bool tcg_region_alloc(TCGContext *s);
199
+#include "tcg-cpus.h"
52
+void tcg_region_initial_alloc(TCGContext *s);
200
+#include "tcg-cpus-icount.h"
53
+void tcg_region_prologue_set(TCGContext *s);
201
+#include "tcg-cpus-rr.h"
54
+
202
+
55
+#endif /* TCG_INTERNAL_H */
203
+static int64_t tcg_get_icount_limit(void)
56
diff --git a/tcg/region.c b/tcg/region.c
204
+{
205
+ int64_t deadline;
206
+
207
+ if (replay_mode != REPLAY_MODE_PLAY) {
208
+ /*
209
+ * Include all the timers, because they may need an attention.
210
+ * Too long CPU execution may create unnecessary delay in UI.
211
+ */
212
+ deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
213
+ QEMU_TIMER_ATTR_ALL);
214
+ /* Check realtime timers, because they help with input processing */
215
+ deadline = qemu_soonest_timeout(deadline,
216
+ qemu_clock_deadline_ns_all(QEMU_CLOCK_REALTIME,
217
+ QEMU_TIMER_ATTR_ALL));
218
+
219
+ /*
220
+ * Maintain prior (possibly buggy) behaviour where if no deadline
221
+ * was set (as there is no QEMU_CLOCK_VIRTUAL timer) or it is more than
222
+ * INT32_MAX nanoseconds ahead, we still use INT32_MAX
223
+ * nanoseconds.
224
+ */
225
+ if ((deadline < 0) || (deadline > INT32_MAX)) {
226
+ deadline = INT32_MAX;
227
+ }
228
+
229
+ return icount_round(deadline);
230
+ } else {
231
+ return replay_get_instructions();
232
+ }
233
+}
234
+
235
+static void notify_aio_contexts(void)
236
+{
237
+ /* Wake up other AioContexts. */
238
+ qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
239
+ qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
240
+}
241
+
242
+void handle_icount_deadline(void)
243
+{
244
+ assert(qemu_in_vcpu_thread());
245
+ int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
246
+ QEMU_TIMER_ATTR_ALL);
247
+
248
+ if (deadline == 0) {
249
+ notify_aio_contexts();
250
+ }
251
+}
252
+
253
+void prepare_icount_for_run(CPUState *cpu)
254
+{
255
+ int insns_left;
256
+
257
+ /*
258
+ * These should always be cleared by process_icount_data after
259
+ * each vCPU execution. However u16.high can be raised
260
+ * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
261
+ */
262
+ g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
263
+ g_assert(cpu->icount_extra == 0);
264
+
265
+ cpu->icount_budget = tcg_get_icount_limit();
266
+ insns_left = MIN(0xffff, cpu->icount_budget);
267
+ cpu_neg(cpu)->icount_decr.u16.low = insns_left;
268
+ cpu->icount_extra = cpu->icount_budget - insns_left;
269
+
270
+ replay_mutex_lock();
271
+
272
+ if (cpu->icount_budget == 0 && replay_has_checkpoint()) {
273
+ notify_aio_contexts();
274
+ }
275
+}
276
+
277
+void process_icount_data(CPUState *cpu)
278
+{
279
+ /* Account for executed instructions */
280
+ icount_update(cpu);
281
+
282
+ /* Reset the counters */
283
+ cpu_neg(cpu)->icount_decr.u16.low = 0;
284
+ cpu->icount_extra = 0;
285
+ cpu->icount_budget = 0;
286
+
287
+ replay_account_executed_instructions();
288
+
289
+ replay_mutex_unlock();
290
+}
291
+
292
+static void icount_handle_interrupt(CPUState *cpu, int mask)
293
+{
294
+ int old_mask = cpu->interrupt_request;
295
+
296
+ tcg_handle_interrupt(cpu, mask);
297
+ if (qemu_cpu_is_self(cpu) &&
298
+ !cpu->can_do_io
299
+ && (mask & ~old_mask) != 0) {
300
+ cpu_abort(cpu, "Raised interrupt while not in I/O function");
301
+ }
302
+}
303
+
304
+const CpusAccel tcg_cpus_icount = {
305
+ .create_vcpu_thread = tcg_start_vcpu_thread,
306
+ .kick_vcpu_thread = qemu_cpu_kick_rr_cpus,
307
+
308
+ .handle_interrupt = icount_handle_interrupt,
309
+ .get_virtual_clock = icount_get,
310
+ .get_elapsed_ticks = icount_get,
311
+};
312
diff --git a/accel/tcg/tcg-cpus-mttcg.c b/accel/tcg/tcg-cpus-mttcg.c
313
new file mode 100644
57
new file mode 100644
314
index XXXXXXX..XXXXXXX
58
index XXXXXXX..XXXXXXX
315
--- /dev/null
59
--- /dev/null
316
+++ b/accel/tcg/tcg-cpus-mttcg.c
60
+++ b/tcg/region.c
317
@@ -XXX,XX +XXX,XX @@
61
@@ -XXX,XX +XXX,XX @@
318
+/*
62
+/*
319
+ * QEMU TCG Multi Threaded vCPUs implementation
63
+ * Memory region management for Tiny Code Generator for QEMU
320
+ *
64
+ *
321
+ * Copyright (c) 2003-2008 Fabrice Bellard
65
+ * Copyright (c) 2008 Fabrice Bellard
322
+ * Copyright (c) 2014 Red Hat Inc.
323
+ *
66
+ *
324
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
67
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
325
+ * of this software and associated documentation files (the "Software"), to deal
68
+ * of this software and associated documentation files (the "Software"), to deal
326
+ * in the Software without restriction, including without limitation the rights
69
+ * in the Software without restriction, including without limitation the rights
327
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
70
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
...
...
339
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
82
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
340
+ * THE SOFTWARE.
83
+ * THE SOFTWARE.
341
+ */
84
+ */
342
+
85
+
343
+#include "qemu/osdep.h"
86
+#include "qemu/osdep.h"
344
+#include "qemu-common.h"
345
+#include "sysemu/tcg.h"
346
+#include "sysemu/replay.h"
347
+#include "qemu/main-loop.h"
348
+#include "qemu/guest-random.h"
349
+#include "exec/exec-all.h"
87
+#include "exec/exec-all.h"
88
+#include "tcg/tcg.h"
89
+#if !defined(CONFIG_USER_ONLY)
350
+#include "hw/boards.h"
90
+#include "hw/boards.h"
351
+
91
+#endif
352
+#include "tcg-cpus.h"
92
+#include "tcg-internal.h"
353
+#include "tcg-cpus-mttcg.h"
93
+
94
+
95
+struct tcg_region_tree {
96
+ QemuMutex lock;
97
+ GTree *tree;
98
+ /* padding to avoid false sharing is computed at run-time */
99
+};
354
+
100
+
355
+/*
101
+/*
356
+ * In the multi-threaded case each vCPU has its own thread. The TLS
102
+ * We divide code_gen_buffer into equally-sized "regions" that TCG threads
357
+ * variable current_cpu can be used deep in the code to find the
103
+ * dynamically allocate from as demand dictates. Given appropriate region
358
+ * current CPUState for a given thread.
104
+ * sizing, this minimizes flushes even when some TCG threads generate a lot
105
+ * more code than others.
359
+ */
106
+ */
360
+
107
+struct tcg_region_state {
361
+void *tcg_cpu_thread_fn(void *arg)
108
+ QemuMutex lock;
362
+{
109
+
363
+ CPUState *cpu = arg;
110
+ /* fields set at init time */
364
+
111
+ void *start;
365
+ assert(tcg_enabled());
112
+ void *start_aligned;
366
+ g_assert(!icount_enabled());
113
+ void *end;
367
+
114
+ size_t n;
368
+ rcu_register_thread();
115
+ size_t size; /* size of one region */
369
+ tcg_register_thread();
116
+ size_t stride; /* .size + guard size */
370
+
117
+
371
+ qemu_mutex_lock_iothread();
118
+ /* fields protected by the lock */
372
+ qemu_thread_get_self(cpu->thread);
119
+ size_t current; /* current region index */
373
+
120
+ size_t agg_size_full; /* aggregate size of full regions */
374
+ cpu->thread_id = qemu_get_thread_id();
121
+};
375
+ cpu->can_do_io = 1;
122
+
376
+ current_cpu = cpu;
123
+static struct tcg_region_state region;
377
+ cpu_thread_signal_created(cpu);
124
+
378
+ qemu_guest_random_seed_thread_part2(cpu->random_seed);
125
+/*
379
+
126
+ * This is an array of struct tcg_region_tree's, with padding.
380
+ /* process any pending work */
127
+ * We use void * to simplify the computation of region_trees[i]; each
381
+ cpu->exit_request = 1;
128
+ * struct is found every tree_size bytes.
382
+
129
+ */
383
+ do {
130
+static void *region_trees;
384
+ if (cpu_can_run(cpu)) {
131
+static size_t tree_size;
385
+ int r;
132
+
386
+ qemu_mutex_unlock_iothread();
133
+/* compare a pointer @ptr and a tb_tc @s */
387
+ r = tcg_cpu_exec(cpu);
134
+static int ptr_cmp_tb_tc(const void *ptr, const struct tb_tc *s)
388
+ qemu_mutex_lock_iothread();
135
+{
389
+ switch (r) {
136
+ if (ptr >= s->ptr + s->size) {
390
+ case EXCP_DEBUG:
137
+ return 1;
391
+ cpu_handle_guest_debug(cpu);
138
+ } else if (ptr < s->ptr) {
392
+ break;
139
+ return -1;
393
+ case EXCP_HALTED:
140
+ }
394
+ /*
141
+ return 0;
395
+ * during start-up the vCPU is reset and the thread is
142
+}
396
+ * kicked several times. If we don't ensure we go back
143
+
397
+ * to sleep in the halted state we won't cleanly
144
+static gint tb_tc_cmp(gconstpointer ap, gconstpointer bp)
398
+ * start-up when the vCPU is enabled.
145
+{
399
+ *
146
+ const struct tb_tc *a = ap;
400
+ * cpu->halted should ensure we sleep in wait_io_event
147
+ const struct tb_tc *b = bp;
401
+ */
148
+
402
+ g_assert(cpu->halted);
149
+ /*
403
+ break;
150
+ * When both sizes are set, we know this isn't a lookup.
404
+ case EXCP_ATOMIC:
151
+ * This is the most likely case: every TB must be inserted; lookups
405
+ qemu_mutex_unlock_iothread();
152
+ * are a lot less frequent.
406
+ cpu_exec_step_atomic(cpu);
153
+ */
407
+ qemu_mutex_lock_iothread();
154
+ if (likely(a->size && b->size)) {
408
+ default:
155
+ if (a->ptr > b->ptr) {
409
+ /* Ignore everything else? */
156
+ return 1;
410
+ break;
157
+ } else if (a->ptr < b->ptr) {
411
+ }
158
+ return -1;
412
+ }
159
+ }
413
+
160
+ /* a->ptr == b->ptr should happen only on deletions */
414
+ qatomic_mb_set(&cpu->exit_request, 0);
161
+ g_assert(a->size == b->size);
415
+ qemu_wait_io_event(cpu);
162
+ return 0;
416
+ } while (!cpu->unplug || cpu_can_run(cpu));
163
+ }
417
+
164
+ /*
418
+ qemu_tcg_destroy_vcpu(cpu);
165
+ * All lookups have either .size field set to 0.
419
+ qemu_mutex_unlock_iothread();
166
+ * From the glib sources we see that @ap is always the lookup key. However
420
+ rcu_unregister_thread();
167
+ * the docs provide no guarantee, so we just mark this case as likely.
421
+ return NULL;
168
+ */
422
+}
169
+ if (likely(a->size == 0)) {
423
+
170
+ return ptr_cmp_tb_tc(a->ptr, b);
424
+static void mttcg_kick_vcpu_thread(CPUState *cpu)
171
+ }
425
+{
172
+ return ptr_cmp_tb_tc(b->ptr, a);
426
+ cpu_exit(cpu);
173
+}
427
+}
174
+
428
+
175
+static void tcg_region_trees_init(void)
429
+const CpusAccel tcg_cpus_mttcg = {
176
+{
430
+ .create_vcpu_thread = tcg_start_vcpu_thread,
177
+ size_t i;
431
+ .kick_vcpu_thread = mttcg_kick_vcpu_thread,
178
+
432
+
179
+ tree_size = ROUND_UP(sizeof(struct tcg_region_tree), qemu_dcache_linesize);
433
+ .handle_interrupt = tcg_handle_interrupt,
180
+ region_trees = qemu_memalign(qemu_dcache_linesize, region.n * tree_size);
434
+};
181
+ for (i = 0; i < region.n; i++) {
435
diff --git a/accel/tcg/tcg-cpus-rr.c b/accel/tcg/tcg-cpus-rr.c
182
+ struct tcg_region_tree *rt = region_trees + i * tree_size;
436
new file mode 100644
183
+
437
index XXXXXXX..XXXXXXX
184
+ qemu_mutex_init(&rt->lock);
438
--- /dev/null
185
+ rt->tree = g_tree_new(tb_tc_cmp);
439
+++ b/accel/tcg/tcg-cpus-rr.c
186
+ }
187
+}
188
+
189
+static struct tcg_region_tree *tc_ptr_to_region_tree(const void *p)
190
+{
191
+ size_t region_idx;
192
+
193
+ /*
194
+ * Like tcg_splitwx_to_rw, with no assert. The pc may come from
195
+ * a signal handler over which the caller has no control.
196
+ */
197
+ if (!in_code_gen_buffer(p)) {
198
+ p -= tcg_splitwx_diff;
199
+ if (!in_code_gen_buffer(p)) {
200
+ return NULL;
201
+ }
202
+ }
203
+
204
+ if (p < region.start_aligned) {
205
+ region_idx = 0;
206
+ } else {
207
+ ptrdiff_t offset = p - region.start_aligned;
208
+
209
+ if (offset > region.stride * (region.n - 1)) {
210
+ region_idx = region.n - 1;
211
+ } else {
212
+ region_idx = offset / region.stride;
213
+ }
214
+ }
215
+ return region_trees + region_idx * tree_size;
216
+}
217
+
218
+void tcg_tb_insert(TranslationBlock *tb)
219
+{
220
+ struct tcg_region_tree *rt = tc_ptr_to_region_tree(tb->tc.ptr);
221
+
222
+ g_assert(rt != NULL);
223
+ qemu_mutex_lock(&rt->lock);
224
+ g_tree_insert(rt->tree, &tb->tc, tb);
225
+ qemu_mutex_unlock(&rt->lock);
226
+}
227
+
228
+void tcg_tb_remove(TranslationBlock *tb)
229
+{
230
+ struct tcg_region_tree *rt = tc_ptr_to_region_tree(tb->tc.ptr);
231
+
232
+ g_assert(rt != NULL);
233
+ qemu_mutex_lock(&rt->lock);
234
+ g_tree_remove(rt->tree, &tb->tc);
235
+ qemu_mutex_unlock(&rt->lock);
236
+}
237
+
238
+/*
239
+ * Find the TB 'tb' such that
240
+ * tb->tc.ptr <= tc_ptr < tb->tc.ptr + tb->tc.size
241
+ * Return NULL if not found.
242
+ */
243
+TranslationBlock *tcg_tb_lookup(uintptr_t tc_ptr)
244
+{
245
+ struct tcg_region_tree *rt = tc_ptr_to_region_tree((void *)tc_ptr);
246
+ TranslationBlock *tb;
247
+ struct tb_tc s = { .ptr = (void *)tc_ptr };
248
+
249
+ if (rt == NULL) {
250
+ return NULL;
251
+ }
252
+
253
+ qemu_mutex_lock(&rt->lock);
254
+ tb = g_tree_lookup(rt->tree, &s);
255
+ qemu_mutex_unlock(&rt->lock);
256
+ return tb;
257
+}
258
+
259
+static void tcg_region_tree_lock_all(void)
260
+{
261
+ size_t i;
262
+
263
+ for (i = 0; i < region.n; i++) {
264
+ struct tcg_region_tree *rt = region_trees + i * tree_size;
265
+
266
+ qemu_mutex_lock(&rt->lock);
267
+ }
268
+}
269
+
270
+static void tcg_region_tree_unlock_all(void)
271
+{
272
+ size_t i;
273
+
274
+ for (i = 0; i < region.n; i++) {
275
+ struct tcg_region_tree *rt = region_trees + i * tree_size;
276
+
277
+ qemu_mutex_unlock(&rt->lock);
278
+ }
279
+}
280
+
281
+void tcg_tb_foreach(GTraverseFunc func, gpointer user_data)
282
+{
283
+ size_t i;
284
+
285
+ tcg_region_tree_lock_all();
286
+ for (i = 0; i < region.n; i++) {
287
+ struct tcg_region_tree *rt = region_trees + i * tree_size;
288
+
289
+ g_tree_foreach(rt->tree, func, user_data);
290
+ }
291
+ tcg_region_tree_unlock_all();
292
+}
293
+
294
+size_t tcg_nb_tbs(void)
295
+{
296
+ size_t nb_tbs = 0;
297
+ size_t i;
298
+
299
+ tcg_region_tree_lock_all();
300
+ for (i = 0; i < region.n; i++) {
301
+ struct tcg_region_tree *rt = region_trees + i * tree_size;
302
+
303
+ nb_tbs += g_tree_nnodes(rt->tree);
304
+ }
305
+ tcg_region_tree_unlock_all();
306
+ return nb_tbs;
307
+}
308
+
309
+static gboolean tcg_region_tree_traverse(gpointer k, gpointer v, gpointer data)
310
+{
311
+ TranslationBlock *tb = v;
312
+
313
+ tb_destroy(tb);
314
+ return FALSE;
315
+}
316
+
317
+static void tcg_region_tree_reset_all(void)
318
+{
319
+ size_t i;
320
+
321
+ tcg_region_tree_lock_all();
322
+ for (i = 0; i < region.n; i++) {
323
+ struct tcg_region_tree *rt = region_trees + i * tree_size;
324
+
325
+ g_tree_foreach(rt->tree, tcg_region_tree_traverse, NULL);
326
+ /* Increment the refcount first so that destroy acts as a reset */
327
+ g_tree_ref(rt->tree);
328
+ g_tree_destroy(rt->tree);
329
+ }
330
+ tcg_region_tree_unlock_all();
331
+}
332
+
333
+static void tcg_region_bounds(size_t curr_region, void **pstart, void **pend)
334
+{
335
+ void *start, *end;
336
+
337
+ start = region.start_aligned + curr_region * region.stride;
338
+ end = start + region.size;
339
+
340
+ if (curr_region == 0) {
341
+ start = region.start;
342
+ }
343
+ if (curr_region == region.n - 1) {
344
+ end = region.end;
345
+ }
346
+
347
+ *pstart = start;
348
+ *pend = end;
349
+}
350
+
351
+static void tcg_region_assign(TCGContext *s, size_t curr_region)
352
+{
353
+ void *start, *end;
354
+
355
+ tcg_region_bounds(curr_region, &start, &end);
356
+
357
+ s->code_gen_buffer = start;
358
+ s->code_gen_ptr = start;
359
+ s->code_gen_buffer_size = end - start;
360
+ s->code_gen_highwater = end - TCG_HIGHWATER;
361
+}
362
+
363
+static bool tcg_region_alloc__locked(TCGContext *s)
364
+{
365
+ if (region.current == region.n) {
366
+ return true;
367
+ }
368
+ tcg_region_assign(s, region.current);
369
+ region.current++;
370
+ return false;
371
+}
372
+
373
+/*
374
+ * Request a new region once the one in use has filled up.
375
+ * Returns true on error.
376
+ */
377
+bool tcg_region_alloc(TCGContext *s)
378
+{
379
+ bool err;
380
+ /* read the region size now; alloc__locked will overwrite it on success */
381
+ size_t size_full = s->code_gen_buffer_size;
382
+
383
+ qemu_mutex_lock(&region.lock);
384
+ err = tcg_region_alloc__locked(s);
385
+ if (!err) {
386
+ region.agg_size_full += size_full - TCG_HIGHWATER;
387
+ }
388
+ qemu_mutex_unlock(&region.lock);
389
+ return err;
390
+}
391
+
392
+/*
393
+ * Perform a context's first region allocation.
394
+ * This function does _not_ increment region.agg_size_full.
395
+ */
396
+static void tcg_region_initial_alloc__locked(TCGContext *s)
397
+{
398
+ bool err = tcg_region_alloc__locked(s);
399
+ g_assert(!err);
400
+}
401
+
402
+void tcg_region_initial_alloc(TCGContext *s)
403
+{
404
+ qemu_mutex_lock(&region.lock);
405
+ tcg_region_initial_alloc__locked(s);
406
+ qemu_mutex_unlock(&region.lock);
407
+}
408
+
409
+/* Call from a safe-work context */
410
+void tcg_region_reset_all(void)
411
+{
412
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
413
+ unsigned int i;
414
+
415
+ qemu_mutex_lock(&region.lock);
416
+ region.current = 0;
417
+ region.agg_size_full = 0;
418
+
419
+ for (i = 0; i < n_ctxs; i++) {
420
+ TCGContext *s = qatomic_read(&tcg_ctxs[i]);
421
+ tcg_region_initial_alloc__locked(s);
422
+ }
423
+ qemu_mutex_unlock(&region.lock);
424
+
425
+ tcg_region_tree_reset_all();
426
+}
427
+
428
+#ifdef CONFIG_USER_ONLY
429
+static size_t tcg_n_regions(void)
430
+{
431
+ return 1;
432
+}
433
+#else
434
+/*
435
+ * It is likely that some vCPUs will translate more code than others, so we
436
+ * first try to set more regions than max_cpus, with those regions being of
437
+ * reasonable size. If that's not possible we make do by evenly dividing
438
+ * the code_gen_buffer among the vCPUs.
439
+ */
440
+static size_t tcg_n_regions(void)
441
+{
442
+ size_t i;
443
+
444
+ /* Use a single region if all we have is one vCPU thread */
445
+#if !defined(CONFIG_USER_ONLY)
446
+ MachineState *ms = MACHINE(qdev_get_machine());
447
+ unsigned int max_cpus = ms->smp.max_cpus;
448
+#endif
449
+ if (max_cpus == 1 || !qemu_tcg_mttcg_enabled()) {
450
+ return 1;
451
+ }
452
+
453
+ /* Try to have more regions than max_cpus, with each region being >= 2 MB */
454
+ for (i = 8; i > 0; i--) {
455
+ size_t regions_per_thread = i;
456
+ size_t region_size;
457
+
458
+ region_size = tcg_init_ctx.code_gen_buffer_size;
459
+ region_size /= max_cpus * regions_per_thread;
460
+
461
+ if (region_size >= 2 * 1024u * 1024) {
462
+ return max_cpus * regions_per_thread;
463
+ }
464
+ }
465
+ /* If we can't, then just allocate one region per vCPU thread */
466
+ return max_cpus;
467
+}
468
+#endif
469
+
470
+/*
471
+ * Initializes region partitioning.
472
+ *
473
+ * Called at init time from the parent thread (i.e. the one calling
474
+ * tcg_context_init), after the target's TCG globals have been set.
475
+ *
476
+ * Region partitioning works by splitting code_gen_buffer into separate regions,
477
+ * and then assigning regions to TCG threads so that the threads can translate
478
+ * code in parallel without synchronization.
479
+ *
480
+ * In softmmu the number of TCG threads is bounded by max_cpus, so we use at
481
+ * least max_cpus regions in MTTCG. In !MTTCG we use a single region.
482
+ * Note that the TCG options from the command-line (i.e. -accel accel=tcg,[...])
483
+ * must have been parsed before calling this function, since it calls
484
+ * qemu_tcg_mttcg_enabled().
485
+ *
486
+ * In user-mode we use a single region. Having multiple regions in user-mode
487
+ * is not supported, because the number of vCPU threads (recall that each thread
488
+ * spawned by the guest corresponds to a vCPU thread) is only bounded by the
489
+ * OS, and usually this number is huge (tens of thousands is not uncommon).
490
+ * Thus, given this large bound on the number of vCPU threads and the fact
491
+ * that code_gen_buffer is allocated at compile-time, we cannot guarantee
492
+ * that the availability of at least one region per vCPU thread.
493
+ *
494
+ * However, this user-mode limitation is unlikely to be a significant problem
495
+ * in practice. Multi-threaded guests share most if not all of their translated
496
+ * code, which makes parallel code generation less appealing than in softmmu.
497
+ */
498
+void tcg_region_init(void)
499
+{
500
+ void *buf = tcg_init_ctx.code_gen_buffer;
501
+ void *aligned;
502
+ size_t size = tcg_init_ctx.code_gen_buffer_size;
503
+ size_t page_size = qemu_real_host_page_size;
504
+ size_t region_size;
505
+ size_t n_regions;
506
+ size_t i;
507
+
508
+ n_regions = tcg_n_regions();
509
+
510
+ /* The first region will be 'aligned - buf' bytes larger than the others */
511
+ aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
512
+ g_assert(aligned < tcg_init_ctx.code_gen_buffer + size);
513
+ /*
514
+ * Make region_size a multiple of page_size, using aligned as the start.
515
+ * As a result of this we might end up with a few extra pages at the end of
516
+ * the buffer; we will assign those to the last region.
517
+ */
518
+ region_size = (size - (aligned - buf)) / n_regions;
519
+ region_size = QEMU_ALIGN_DOWN(region_size, page_size);
520
+
521
+ /* A region must have at least 2 pages; one code, one guard */
522
+ g_assert(region_size >= 2 * page_size);
523
+
524
+ /* init the region struct */
525
+ qemu_mutex_init(&region.lock);
526
+ region.n = n_regions;
527
+ region.size = region_size - page_size;
528
+ region.stride = region_size;
529
+ region.start = buf;
530
+ region.start_aligned = aligned;
531
+ /* page-align the end, since its last page will be a guard page */
532
+ region.end = QEMU_ALIGN_PTR_DOWN(buf + size, page_size);
533
+ /* account for that last guard page */
534
+ region.end -= page_size;
535
+
536
+ /*
537
+ * Set guard pages in the rw buffer, as that's the one into which
538
+ * buffer overruns could occur. Do not set guard pages in the rx
539
+ * buffer -- let that one use hugepages throughout.
540
+ */
541
+ for (i = 0; i < region.n; i++) {
542
+ void *start, *end;
543
+
544
+ tcg_region_bounds(i, &start, &end);
545
+
546
+ /*
547
+ * macOS 11.2 has a bug (Apple Feedback FB8994773) in which mprotect
548
+ * rejects a permission change from RWX -> NONE. Guard pages are
549
+ * nice for bug detection but are not essential; ignore any failure.
550
+ */
551
+ (void)qemu_mprotect_none(end, page_size);
552
+ }
553
+
554
+ tcg_region_trees_init();
555
+
556
+ /*
557
+ * Leave the initial context initialized to the first region.
558
+ * This will be the context into which we generate the prologue.
559
+ * It is also the only context for CONFIG_USER_ONLY.
560
+ */
561
+ tcg_region_initial_alloc__locked(&tcg_init_ctx);
562
+}
563
+
564
+void tcg_region_prologue_set(TCGContext *s)
565
+{
566
+ /* Deduct the prologue from the first region. */
567
+ g_assert(region.start == s->code_gen_buffer);
568
+ region.start = s->code_ptr;
569
+
570
+ /* Recompute boundaries of the first region. */
571
+ tcg_region_assign(s, 0);
572
+
573
+ /* Register the balance of the buffer with gdb. */
574
+ tcg_register_jit(tcg_splitwx_to_rx(region.start),
575
+ region.end - region.start);
576
+}
577
+
578
+/*
579
+ * Returns the size (in bytes) of all translated code (i.e. from all regions)
580
+ * currently in the cache.
581
+ * See also: tcg_code_capacity()
582
+ * Do not confuse with tcg_current_code_size(); that one applies to a single
583
+ * TCG context.
584
+ */
585
+size_t tcg_code_size(void)
586
+{
587
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
588
+ unsigned int i;
589
+ size_t total;
590
+
591
+ qemu_mutex_lock(&region.lock);
592
+ total = region.agg_size_full;
593
+ for (i = 0; i < n_ctxs; i++) {
594
+ const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
595
+ size_t size;
596
+
597
+ size = qatomic_read(&s->code_gen_ptr) - s->code_gen_buffer;
598
+ g_assert(size <= s->code_gen_buffer_size);
599
+ total += size;
600
+ }
601
+ qemu_mutex_unlock(&region.lock);
602
+ return total;
603
+}
604
+
605
+/*
606
+ * Returns the code capacity (in bytes) of the entire cache, i.e. including all
607
+ * regions.
608
+ * See also: tcg_code_size()
609
+ */
610
+size_t tcg_code_capacity(void)
611
+{
612
+ size_t guard_size, capacity;
613
+
614
+ /* no need for synchronization; these variables are set at init time */
615
+ guard_size = region.stride - region.size;
616
+ capacity = region.end + guard_size - region.start;
617
+ capacity -= region.n * (guard_size + TCG_HIGHWATER);
618
+ return capacity;
619
+}
620
+
621
+size_t tcg_tb_phys_invalidate_count(void)
622
+{
623
+ unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
624
+ unsigned int i;
625
+ size_t total = 0;
626
+
627
+ for (i = 0; i < n_ctxs; i++) {
628
+ const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
629
+
630
+ total += qatomic_read(&s->tb_phys_invalidate_count);
631
+ }
632
+ return total;
633
+}
634
diff --git a/tcg/tcg.c b/tcg/tcg.c
635
index XXXXXXX..XXXXXXX 100644
636
--- a/tcg/tcg.c
637
+++ b/tcg/tcg.c
440
@@ -XXX,XX +XXX,XX @@
638
@@ -XXX,XX +XXX,XX @@
441
+/*
639
442
+ * QEMU TCG Single Threaded vCPUs implementation
640
#include "elf.h"
443
+ *
641
#include "exec/log.h"
444
+ * Copyright (c) 2003-2008 Fabrice Bellard
642
+#include "tcg-internal.h"
445
+ * Copyright (c) 2014 Red Hat Inc.
643
446
+ *
644
/* Forward declarations for functions declared in tcg-target.c.inc and
447
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
645
used here. */
448
+ * of this software and associated documentation files (the "Software"), to deal
646
@@ -XXX,XX +XXX,XX @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct);
449
+ * in the Software without restriction, including without limitation the rights
647
static int tcg_out_ldst_finalize(TCGContext *s);
450
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
648
#endif
451
+ * copies of the Software, and to permit persons to whom the Software is
649
452
+ * furnished to do so, subject to the following conditions:
650
-#define TCG_HIGHWATER 1024
453
+ *
651
-
454
+ * The above copyright notice and this permission notice shall be included in
652
-static TCGContext **tcg_ctxs;
455
+ * all copies or substantial portions of the Software.
653
-static unsigned int n_tcg_ctxs;
456
+ *
654
+TCGContext **tcg_ctxs;
457
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
655
+unsigned int n_tcg_ctxs;
458
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
656
TCGv_env cpu_env = 0;
459
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
657
const void *tcg_code_gen_epilogue;
460
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
658
uintptr_t tcg_splitwx_diff;
461
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
659
@@ -XXX,XX +XXX,XX @@ uintptr_t tcg_splitwx_diff;
462
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
660
tcg_prologue_fn *tcg_qemu_tb_exec;
463
+ * THE SOFTWARE.
661
#endif
464
+ */
662
465
+
663
-struct tcg_region_tree {
466
+#include "qemu/osdep.h"
664
- QemuMutex lock;
467
+#include "qemu-common.h"
665
- GTree *tree;
468
+#include "sysemu/tcg.h"
666
- /* padding to avoid false sharing is computed at run-time */
469
+#include "sysemu/replay.h"
667
-};
470
+#include "qemu/main-loop.h"
668
-
471
+#include "qemu/guest-random.h"
669
-/*
472
+#include "exec/exec-all.h"
670
- * We divide code_gen_buffer into equally-sized "regions" that TCG threads
473
+#include "hw/boards.h"
671
- * dynamically allocate from as demand dictates. Given appropriate region
474
+
672
- * sizing, this minimizes flushes even when some TCG threads generate a lot
475
+#include "tcg-cpus.h"
673
- * more code than others.
476
+#include "tcg-cpus-rr.h"
674
- */
477
+#include "tcg-cpus-icount.h"
675
-struct tcg_region_state {
478
+
676
- QemuMutex lock;
479
+/* Kick all RR vCPUs */
677
-
480
+void qemu_cpu_kick_rr_cpus(CPUState *unused)
678
- /* fields set at init time */
481
+{
679
- void *start;
482
+ CPUState *cpu;
680
- void *start_aligned;
483
+
681
- void *end;
484
+ CPU_FOREACH(cpu) {
682
- size_t n;
485
+ cpu_exit(cpu);
683
- size_t size; /* size of one region */
486
+ };
684
- size_t stride; /* .size + guard size */
487
+}
685
-
488
+
686
- /* fields protected by the lock */
489
+/*
687
- size_t current; /* current region index */
490
+ * TCG vCPU kick timer
688
- size_t agg_size_full; /* aggregate size of full regions */
491
+ *
689
-};
492
+ * The kick timer is responsible for moving single threaded vCPU
690
-
493
+ * emulation on to the next vCPU. If more than one vCPU is running a
691
-static struct tcg_region_state region;
494
+ * timer event with force a cpu->exit so the next vCPU can get
692
-/*
495
+ * scheduled.
693
- * This is an array of struct tcg_region_tree's, with padding.
496
+ *
694
- * We use void * to simplify the computation of region_trees[i]; each
497
+ * The timer is removed if all vCPUs are idle and restarted again once
695
- * struct is found every tree_size bytes.
498
+ * idleness is complete.
696
- */
499
+ */
697
-static void *region_trees;
500
+
698
-static size_t tree_size;
501
+static QEMUTimer *tcg_kick_vcpu_timer;
699
static TCGRegSet tcg_target_available_regs[TCG_TYPE_COUNT];
502
+static CPUState *tcg_current_rr_cpu;
700
static TCGRegSet tcg_target_call_clobber_regs;
503
+
701
504
+#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
702
@@ -XXX,XX +XXX,XX @@ static const TCGTargetOpDef constraint_sets[] = {
505
+
703
506
+static inline int64_t qemu_tcg_next_kick(void)
704
#include "tcg-target.c.inc"
507
+{
705
508
+ return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
706
-/* compare a pointer @ptr and a tb_tc @s */
509
+}
707
-static int ptr_cmp_tb_tc(const void *ptr, const struct tb_tc *s)
510
+
708
-{
511
+/* Kick the currently round-robin scheduled vCPU to next */
709
- if (ptr >= s->ptr + s->size) {
512
+static void qemu_cpu_kick_rr_next_cpu(void)
710
- return 1;
513
+{
711
- } else if (ptr < s->ptr) {
514
+ CPUState *cpu;
712
- return -1;
515
+ do {
713
- }
516
+ cpu = qatomic_mb_read(&tcg_current_rr_cpu);
714
- return 0;
517
+ if (cpu) {
715
-}
518
+ cpu_exit(cpu);
716
-
519
+ }
717
-static gint tb_tc_cmp(gconstpointer ap, gconstpointer bp)
520
+ } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu));
718
-{
521
+}
719
- const struct tb_tc *a = ap;
522
+
720
- const struct tb_tc *b = bp;
523
+static void kick_tcg_thread(void *opaque)
721
-
524
+{
722
- /*
525
+ timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
723
- * When both sizes are set, we know this isn't a lookup.
526
+ qemu_cpu_kick_rr_next_cpu();
724
- * This is the most likely case: every TB must be inserted; lookups
527
+}
725
- * are a lot less frequent.
528
+
726
- */
529
+static void start_tcg_kick_timer(void)
727
- if (likely(a->size && b->size)) {
530
+{
728
- if (a->ptr > b->ptr) {
531
+ if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
729
- return 1;
532
+ tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
730
- } else if (a->ptr < b->ptr) {
533
+ kick_tcg_thread, NULL);
731
- return -1;
534
+ }
732
- }
535
+ if (tcg_kick_vcpu_timer && !timer_pending(tcg_kick_vcpu_timer)) {
733
- /* a->ptr == b->ptr should happen only on deletions */
536
+ timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
734
- g_assert(a->size == b->size);
537
+ }
735
- return 0;
538
+}
736
- }
539
+
737
- /*
540
+static void stop_tcg_kick_timer(void)
738
- * All lookups have either .size field set to 0.
541
+{
739
- * From the glib sources we see that @ap is always the lookup key. However
542
+ if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
740
- * the docs provide no guarantee, so we just mark this case as likely.
543
+ timer_del(tcg_kick_vcpu_timer);
741
- */
544
+ }
742
- if (likely(a->size == 0)) {
545
+}
743
- return ptr_cmp_tb_tc(a->ptr, b);
546
+
744
- }
547
+static void qemu_tcg_rr_wait_io_event(void)
745
- return ptr_cmp_tb_tc(b->ptr, a);
548
+{
746
-}
549
+ CPUState *cpu;
747
-
550
+
748
-static void tcg_region_trees_init(void)
551
+ while (all_cpu_threads_idle()) {
749
-{
552
+ stop_tcg_kick_timer();
750
- size_t i;
553
+ qemu_cond_wait_iothread(first_cpu->halt_cond);
751
-
554
+ }
752
- tree_size = ROUND_UP(sizeof(struct tcg_region_tree), qemu_dcache_linesize);
555
+
753
- region_trees = qemu_memalign(qemu_dcache_linesize, region.n * tree_size);
556
+ start_tcg_kick_timer();
754
- for (i = 0; i < region.n; i++) {
557
+
755
- struct tcg_region_tree *rt = region_trees + i * tree_size;
558
+ CPU_FOREACH(cpu) {
756
-
559
+ qemu_wait_io_event_common(cpu);
757
- qemu_mutex_init(&rt->lock);
560
+ }
758
- rt->tree = g_tree_new(tb_tc_cmp);
561
+}
759
- }
562
+
760
-}
563
+/*
761
-
564
+ * Destroy any remaining vCPUs which have been unplugged and have
762
-static struct tcg_region_tree *tc_ptr_to_region_tree(const void *p)
565
+ * finished running
763
-{
566
+ */
764
- size_t region_idx;
567
+static void deal_with_unplugged_cpus(void)
765
-
568
+{
766
- /*
569
+ CPUState *cpu;
767
- * Like tcg_splitwx_to_rw, with no assert. The pc may come from
570
+
768
- * a signal handler over which the caller has no control.
571
+ CPU_FOREACH(cpu) {
769
- */
572
+ if (cpu->unplug && !cpu_can_run(cpu)) {
770
- if (!in_code_gen_buffer(p)) {
573
+ qemu_tcg_destroy_vcpu(cpu);
771
- p -= tcg_splitwx_diff;
574
+ break;
772
- if (!in_code_gen_buffer(p)) {
575
+ }
773
- return NULL;
576
+ }
774
- }
577
+}
775
- }
578
+
776
-
579
+/*
777
- if (p < region.start_aligned) {
580
+ * In the single-threaded case each vCPU is simulated in turn. If
778
- region_idx = 0;
581
+ * there is more than a single vCPU we create a simple timer to kick
779
- } else {
582
+ * the vCPU and ensure we don't get stuck in a tight loop in one vCPU.
780
- ptrdiff_t offset = p - region.start_aligned;
583
+ * This is done explicitly rather than relying on side-effects
781
-
584
+ * elsewhere.
782
- if (offset > region.stride * (region.n - 1)) {
585
+ */
783
- region_idx = region.n - 1;
586
+
784
- } else {
587
+void *tcg_rr_cpu_thread_fn(void *arg)
785
- region_idx = offset / region.stride;
588
+{
786
- }
589
+ CPUState *cpu = arg;
787
- }
590
+
788
- return region_trees + region_idx * tree_size;
591
+ assert(tcg_enabled());
789
-}
592
+ rcu_register_thread();
790
-
593
+ tcg_register_thread();
791
-void tcg_tb_insert(TranslationBlock *tb)
594
+
792
-{
595
+ qemu_mutex_lock_iothread();
793
- struct tcg_region_tree *rt = tc_ptr_to_region_tree(tb->tc.ptr);
596
+ qemu_thread_get_self(cpu->thread);
794
-
597
+
795
- g_assert(rt != NULL);
598
+ cpu->thread_id = qemu_get_thread_id();
796
- qemu_mutex_lock(&rt->lock);
599
+ cpu->can_do_io = 1;
797
- g_tree_insert(rt->tree, &tb->tc, tb);
600
+ cpu_thread_signal_created(cpu);
798
- qemu_mutex_unlock(&rt->lock);
601
+ qemu_guest_random_seed_thread_part2(cpu->random_seed);
799
-}
602
+
800
-
603
+ /* wait for initial kick-off after machine start */
801
-void tcg_tb_remove(TranslationBlock *tb)
604
+ while (first_cpu->stopped) {
802
-{
605
+ qemu_cond_wait_iothread(first_cpu->halt_cond);
803
- struct tcg_region_tree *rt = tc_ptr_to_region_tree(tb->tc.ptr);
606
+
804
-
607
+ /* process any pending work */
805
- g_assert(rt != NULL);
608
+ CPU_FOREACH(cpu) {
806
- qemu_mutex_lock(&rt->lock);
609
+ current_cpu = cpu;
807
- g_tree_remove(rt->tree, &tb->tc);
610
+ qemu_wait_io_event_common(cpu);
808
- qemu_mutex_unlock(&rt->lock);
611
+ }
809
-}
612
+ }
810
-
613
+
811
-/*
614
+ start_tcg_kick_timer();
812
- * Find the TB 'tb' such that
615
+
813
- * tb->tc.ptr <= tc_ptr < tb->tc.ptr + tb->tc.size
616
+ cpu = first_cpu;
814
- * Return NULL if not found.
617
+
815
- */
618
+ /* process any pending work */
816
-TranslationBlock *tcg_tb_lookup(uintptr_t tc_ptr)
619
+ cpu->exit_request = 1;
817
-{
620
+
818
- struct tcg_region_tree *rt = tc_ptr_to_region_tree((void *)tc_ptr);
621
+ while (1) {
819
- TranslationBlock *tb;
622
+ qemu_mutex_unlock_iothread();
820
- struct tb_tc s = { .ptr = (void *)tc_ptr };
623
+ replay_mutex_lock();
821
-
624
+ qemu_mutex_lock_iothread();
822
- if (rt == NULL) {
625
+
823
- return NULL;
626
+ if (icount_enabled()) {
824
- }
627
+ /* Account partial waits to QEMU_CLOCK_VIRTUAL. */
825
-
628
+ icount_account_warp_timer();
826
- qemu_mutex_lock(&rt->lock);
629
+ /*
827
- tb = g_tree_lookup(rt->tree, &s);
630
+ * Run the timers here. This is much more efficient than
828
- qemu_mutex_unlock(&rt->lock);
631
+ * waking up the I/O thread and waiting for completion.
829
- return tb;
632
+ */
830
-}
633
+ handle_icount_deadline();
831
-
634
+ }
832
-static void tcg_region_tree_lock_all(void)
635
+
833
-{
636
+ replay_mutex_unlock();
834
- size_t i;
637
+
835
-
638
+ if (!cpu) {
836
- for (i = 0; i < region.n; i++) {
639
+ cpu = first_cpu;
837
- struct tcg_region_tree *rt = region_trees + i * tree_size;
640
+ }
838
-
641
+
839
- qemu_mutex_lock(&rt->lock);
642
+ while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
840
- }
643
+
841
-}
644
+ qatomic_mb_set(&tcg_current_rr_cpu, cpu);
842
-
645
+ current_cpu = cpu;
843
-static void tcg_region_tree_unlock_all(void)
646
+
844
-{
647
+ qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
845
- size_t i;
648
+ (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
846
-
649
+
847
- for (i = 0; i < region.n; i++) {
650
+ if (cpu_can_run(cpu)) {
848
- struct tcg_region_tree *rt = region_trees + i * tree_size;
651
+ int r;
849
-
652
+
850
- qemu_mutex_unlock(&rt->lock);
653
+ qemu_mutex_unlock_iothread();
851
- }
654
+ if (icount_enabled()) {
852
-}
655
+ prepare_icount_for_run(cpu);
853
-
656
+ }
854
-void tcg_tb_foreach(GTraverseFunc func, gpointer user_data)
657
+ r = tcg_cpu_exec(cpu);
855
-{
658
+ if (icount_enabled()) {
856
- size_t i;
659
+ process_icount_data(cpu);
857
-
660
+ }
858
- tcg_region_tree_lock_all();
661
+ qemu_mutex_lock_iothread();
859
- for (i = 0; i < region.n; i++) {
662
+
860
- struct tcg_region_tree *rt = region_trees + i * tree_size;
663
+ if (r == EXCP_DEBUG) {
861
-
664
+ cpu_handle_guest_debug(cpu);
862
- g_tree_foreach(rt->tree, func, user_data);
665
+ break;
863
- }
666
+ } else if (r == EXCP_ATOMIC) {
864
- tcg_region_tree_unlock_all();
667
+ qemu_mutex_unlock_iothread();
865
-}
668
+ cpu_exec_step_atomic(cpu);
866
-
669
+ qemu_mutex_lock_iothread();
867
-size_t tcg_nb_tbs(void)
670
+ break;
868
-{
671
+ }
869
- size_t nb_tbs = 0;
672
+ } else if (cpu->stop) {
870
- size_t i;
673
+ if (cpu->unplug) {
871
-
674
+ cpu = CPU_NEXT(cpu);
872
- tcg_region_tree_lock_all();
675
+ }
873
- for (i = 0; i < region.n; i++) {
676
+ break;
874
- struct tcg_region_tree *rt = region_trees + i * tree_size;
677
+ }
875
-
678
+
876
- nb_tbs += g_tree_nnodes(rt->tree);
679
+ cpu = CPU_NEXT(cpu);
877
- }
680
+ } /* while (cpu && !cpu->exit_request).. */
878
- tcg_region_tree_unlock_all();
681
+
879
- return nb_tbs;
682
+ /* Does not need qatomic_mb_set because a spurious wakeup is okay. */
880
-}
683
+ qatomic_set(&tcg_current_rr_cpu, NULL);
881
-
684
+
882
-static gboolean tcg_region_tree_traverse(gpointer k, gpointer v, gpointer data)
685
+ if (cpu && cpu->exit_request) {
883
-{
686
+ qatomic_mb_set(&cpu->exit_request, 0);
884
- TranslationBlock *tb = v;
687
+ }
885
-
688
+
886
- tb_destroy(tb);
689
+ if (icount_enabled() && all_cpu_threads_idle()) {
887
- return FALSE;
690
+ /*
888
-}
691
+ * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
889
-
692
+ * in the main_loop, wake it up in order to start the warp timer.
890
-static void tcg_region_tree_reset_all(void)
693
+ */
891
-{
694
+ qemu_notify_event();
892
- size_t i;
695
+ }
893
-
696
+
894
- tcg_region_tree_lock_all();
697
+ qemu_tcg_rr_wait_io_event();
895
- for (i = 0; i < region.n; i++) {
698
+ deal_with_unplugged_cpus();
896
- struct tcg_region_tree *rt = region_trees + i * tree_size;
699
+ }
897
-
700
+
898
- g_tree_foreach(rt->tree, tcg_region_tree_traverse, NULL);
701
+ rcu_unregister_thread();
899
- /* Increment the refcount first so that destroy acts as a reset */
702
+ return NULL;
900
- g_tree_ref(rt->tree);
703
+}
901
- g_tree_destroy(rt->tree);
704
+
902
- }
705
+const CpusAccel tcg_cpus_rr = {
903
- tcg_region_tree_unlock_all();
706
+ .create_vcpu_thread = tcg_start_vcpu_thread,
904
-}
707
+ .kick_vcpu_thread = qemu_cpu_kick_rr_cpus,
905
-
708
+
906
-static void tcg_region_bounds(size_t curr_region, void **pstart, void **pend)
709
+ .handle_interrupt = tcg_handle_interrupt,
907
-{
710
+};
908
- void *start, *end;
711
diff --git a/accel/tcg/tcg-cpus.c b/accel/tcg/tcg-cpus.c
909
-
910
- start = region.start_aligned + curr_region * region.stride;
911
- end = start + region.size;
912
-
913
- if (curr_region == 0) {
914
- start = region.start;
915
- }
916
- if (curr_region == region.n - 1) {
917
- end = region.end;
918
- }
919
-
920
- *pstart = start;
921
- *pend = end;
922
-}
923
-
924
-static void tcg_region_assign(TCGContext *s, size_t curr_region)
925
-{
926
- void *start, *end;
927
-
928
- tcg_region_bounds(curr_region, &start, &end);
929
-
930
- s->code_gen_buffer = start;
931
- s->code_gen_ptr = start;
932
- s->code_gen_buffer_size = end - start;
933
- s->code_gen_highwater = end - TCG_HIGHWATER;
934
-}
935
-
936
-static bool tcg_region_alloc__locked(TCGContext *s)
937
-{
938
- if (region.current == region.n) {
939
- return true;
940
- }
941
- tcg_region_assign(s, region.current);
942
- region.current++;
943
- return false;
944
-}
945
-
946
-/*
947
- * Request a new region once the one in use has filled up.
948
- * Returns true on error.
949
- */
950
-static bool tcg_region_alloc(TCGContext *s)
951
-{
952
- bool err;
953
- /* read the region size now; alloc__locked will overwrite it on success */
954
- size_t size_full = s->code_gen_buffer_size;
955
-
956
- qemu_mutex_lock(&region.lock);
957
- err = tcg_region_alloc__locked(s);
958
- if (!err) {
959
- region.agg_size_full += size_full - TCG_HIGHWATER;
960
- }
961
- qemu_mutex_unlock(&region.lock);
962
- return err;
963
-}
964
-
965
-/*
966
- * Perform a context's first region allocation.
967
- * This function does _not_ increment region.agg_size_full.
968
- */
969
-static void tcg_region_initial_alloc__locked(TCGContext *s)
970
-{
971
- bool err = tcg_region_alloc__locked(s);
972
- g_assert(!err);
973
-}
974
-
975
-#ifndef CONFIG_USER_ONLY
976
-static void tcg_region_initial_alloc(TCGContext *s)
977
-{
978
- qemu_mutex_lock(&region.lock);
979
- tcg_region_initial_alloc__locked(s);
980
- qemu_mutex_unlock(&region.lock);
981
-}
982
-#endif
983
-
984
-/* Call from a safe-work context */
985
-void tcg_region_reset_all(void)
986
-{
987
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
988
- unsigned int i;
989
-
990
- qemu_mutex_lock(&region.lock);
991
- region.current = 0;
992
- region.agg_size_full = 0;
993
-
994
- for (i = 0; i < n_ctxs; i++) {
995
- TCGContext *s = qatomic_read(&tcg_ctxs[i]);
996
- tcg_region_initial_alloc__locked(s);
997
- }
998
- qemu_mutex_unlock(&region.lock);
999
-
1000
- tcg_region_tree_reset_all();
1001
-}
1002
-
1003
-#ifdef CONFIG_USER_ONLY
1004
-static size_t tcg_n_regions(void)
1005
-{
1006
- return 1;
1007
-}
1008
-#else
1009
-/*
1010
- * It is likely that some vCPUs will translate more code than others, so we
1011
- * first try to set more regions than max_cpus, with those regions being of
1012
- * reasonable size. If that's not possible we make do by evenly dividing
1013
- * the code_gen_buffer among the vCPUs.
1014
- */
1015
-static size_t tcg_n_regions(void)
1016
-{
1017
- size_t i;
1018
-
1019
- /* Use a single region if all we have is one vCPU thread */
1020
-#if !defined(CONFIG_USER_ONLY)
1021
- MachineState *ms = MACHINE(qdev_get_machine());
1022
- unsigned int max_cpus = ms->smp.max_cpus;
1023
-#endif
1024
- if (max_cpus == 1 || !qemu_tcg_mttcg_enabled()) {
1025
- return 1;
1026
- }
1027
-
1028
- /* Try to have more regions than max_cpus, with each region being >= 2 MB */
1029
- for (i = 8; i > 0; i--) {
1030
- size_t regions_per_thread = i;
1031
- size_t region_size;
1032
-
1033
- region_size = tcg_init_ctx.code_gen_buffer_size;
1034
- region_size /= max_cpus * regions_per_thread;
1035
-
1036
- if (region_size >= 2 * 1024u * 1024) {
1037
- return max_cpus * regions_per_thread;
1038
- }
1039
- }
1040
- /* If we can't, then just allocate one region per vCPU thread */
1041
- return max_cpus;
1042
-}
1043
-#endif
1044
-
1045
-/*
1046
- * Initializes region partitioning.
1047
- *
1048
- * Called at init time from the parent thread (i.e. the one calling
1049
- * tcg_context_init), after the target's TCG globals have been set.
1050
- *
1051
- * Region partitioning works by splitting code_gen_buffer into separate regions,
1052
- * and then assigning regions to TCG threads so that the threads can translate
1053
- * code in parallel without synchronization.
1054
- *
1055
- * In softmmu the number of TCG threads is bounded by max_cpus, so we use at
1056
- * least max_cpus regions in MTTCG. In !MTTCG we use a single region.
1057
- * Note that the TCG options from the command-line (i.e. -accel accel=tcg,[...])
1058
- * must have been parsed before calling this function, since it calls
1059
- * qemu_tcg_mttcg_enabled().
1060
- *
1061
- * In user-mode we use a single region. Having multiple regions in user-mode
1062
- * is not supported, because the number of vCPU threads (recall that each thread
1063
- * spawned by the guest corresponds to a vCPU thread) is only bounded by the
1064
- * OS, and usually this number is huge (tens of thousands is not uncommon).
1065
- * Thus, given this large bound on the number of vCPU threads and the fact
1066
- * that code_gen_buffer is allocated at compile-time, we cannot guarantee
1067
- * that the availability of at least one region per vCPU thread.
1068
- *
1069
- * However, this user-mode limitation is unlikely to be a significant problem
1070
- * in practice. Multi-threaded guests share most if not all of their translated
1071
- * code, which makes parallel code generation less appealing than in softmmu.
1072
- */
1073
-void tcg_region_init(void)
1074
-{
1075
- void *buf = tcg_init_ctx.code_gen_buffer;
1076
- void *aligned;
1077
- size_t size = tcg_init_ctx.code_gen_buffer_size;
1078
- size_t page_size = qemu_real_host_page_size;
1079
- size_t region_size;
1080
- size_t n_regions;
1081
- size_t i;
1082
-
1083
- n_regions = tcg_n_regions();
1084
-
1085
- /* The first region will be 'aligned - buf' bytes larger than the others */
1086
- aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
1087
- g_assert(aligned < tcg_init_ctx.code_gen_buffer + size);
1088
- /*
1089
- * Make region_size a multiple of page_size, using aligned as the start.
1090
- * As a result of this we might end up with a few extra pages at the end of
1091
- * the buffer; we will assign those to the last region.
1092
- */
1093
- region_size = (size - (aligned - buf)) / n_regions;
1094
- region_size = QEMU_ALIGN_DOWN(region_size, page_size);
1095
-
1096
- /* A region must have at least 2 pages; one code, one guard */
1097
- g_assert(region_size >= 2 * page_size);
1098
-
1099
- /* init the region struct */
1100
- qemu_mutex_init(&region.lock);
1101
- region.n = n_regions;
1102
- region.size = region_size - page_size;
1103
- region.stride = region_size;
1104
- region.start = buf;
1105
- region.start_aligned = aligned;
1106
- /* page-align the end, since its last page will be a guard page */
1107
- region.end = QEMU_ALIGN_PTR_DOWN(buf + size, page_size);
1108
- /* account for that last guard page */
1109
- region.end -= page_size;
1110
-
1111
- /*
1112
- * Set guard pages in the rw buffer, as that's the one into which
1113
- * buffer overruns could occur. Do not set guard pages in the rx
1114
- * buffer -- let that one use hugepages throughout.
1115
- */
1116
- for (i = 0; i < region.n; i++) {
1117
- void *start, *end;
1118
-
1119
- tcg_region_bounds(i, &start, &end);
1120
-
1121
- /*
1122
- * macOS 11.2 has a bug (Apple Feedback FB8994773) in which mprotect
1123
- * rejects a permission change from RWX -> NONE. Guard pages are
1124
- * nice for bug detection but are not essential; ignore any failure.
1125
- */
1126
- (void)qemu_mprotect_none(end, page_size);
1127
- }
1128
-
1129
- tcg_region_trees_init();
1130
-
1131
- /*
1132
- * Leave the initial context initialized to the first region.
1133
- * This will be the context into which we generate the prologue.
1134
- * It is also the only context for CONFIG_USER_ONLY.
1135
- */
1136
- tcg_region_initial_alloc__locked(&tcg_init_ctx);
1137
-}
1138
-
1139
-static void tcg_region_prologue_set(TCGContext *s)
1140
-{
1141
- /* Deduct the prologue from the first region. */
1142
- g_assert(region.start == s->code_gen_buffer);
1143
- region.start = s->code_ptr;
1144
-
1145
- /* Recompute boundaries of the first region. */
1146
- tcg_region_assign(s, 0);
1147
-
1148
- /* Register the balance of the buffer with gdb. */
1149
- tcg_register_jit(tcg_splitwx_to_rx(region.start),
1150
- region.end - region.start);
1151
-}
1152
-
1153
#ifdef CONFIG_DEBUG_TCG
1154
const void *tcg_splitwx_to_rx(void *rw)
1155
{
1156
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
1157
}
1158
#endif /* !CONFIG_USER_ONLY */
1159
1160
-/*
1161
- * Returns the size (in bytes) of all translated code (i.e. from all regions)
1162
- * currently in the cache.
1163
- * See also: tcg_code_capacity()
1164
- * Do not confuse with tcg_current_code_size(); that one applies to a single
1165
- * TCG context.
1166
- */
1167
-size_t tcg_code_size(void)
1168
-{
1169
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
1170
- unsigned int i;
1171
- size_t total;
1172
-
1173
- qemu_mutex_lock(&region.lock);
1174
- total = region.agg_size_full;
1175
- for (i = 0; i < n_ctxs; i++) {
1176
- const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
1177
- size_t size;
1178
-
1179
- size = qatomic_read(&s->code_gen_ptr) - s->code_gen_buffer;
1180
- g_assert(size <= s->code_gen_buffer_size);
1181
- total += size;
1182
- }
1183
- qemu_mutex_unlock(&region.lock);
1184
- return total;
1185
-}
1186
-
1187
-/*
1188
- * Returns the code capacity (in bytes) of the entire cache, i.e. including all
1189
- * regions.
1190
- * See also: tcg_code_size()
1191
- */
1192
-size_t tcg_code_capacity(void)
1193
-{
1194
- size_t guard_size, capacity;
1195
-
1196
- /* no need for synchronization; these variables are set at init time */
1197
- guard_size = region.stride - region.size;
1198
- capacity = region.end + guard_size - region.start;
1199
- capacity -= region.n * (guard_size + TCG_HIGHWATER);
1200
- return capacity;
1201
-}
1202
-
1203
-size_t tcg_tb_phys_invalidate_count(void)
1204
-{
1205
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
1206
- unsigned int i;
1207
- size_t total = 0;
1208
-
1209
- for (i = 0; i < n_ctxs; i++) {
1210
- const TCGContext *s = qatomic_read(&tcg_ctxs[i]);
1211
-
1212
- total += qatomic_read(&s->tb_phys_invalidate_count);
1213
- }
1214
- return total;
1215
-}
1216
-
1217
/* pool based memory allocation */
1218
void *tcg_malloc_internal(TCGContext *s, int size)
1219
{
1220
diff --git a/tcg/meson.build b/tcg/meson.build
712
index XXXXXXX..XXXXXXX 100644
1221
index XXXXXXX..XXXXXXX 100644
713
--- a/accel/tcg/tcg-cpus.c
1222
--- a/tcg/meson.build
714
+++ b/accel/tcg/tcg-cpus.c
1223
+++ b/tcg/meson.build
715
@@ -XXX,XX +XXX,XX @@
1224
@@ -XXX,XX +XXX,XX @@ tcg_ss = ss.source_set()
716
/*
1225
717
- * QEMU System Emulator
1226
tcg_ss.add(files(
718
+ * QEMU TCG vCPU common functionality
1227
'optimize.c',
719
+ *
1228
+ 'region.c',
720
+ * Functionality common to all TCG vCPU variants: mttcg, rr and icount.
1229
'tcg.c',
721
*
1230
'tcg-common.c',
722
* Copyright (c) 2003-2008 Fabrice Bellard
1231
'tcg-op.c',
723
* Copyright (c) 2014 Red Hat Inc.
724
@@ -XXX,XX +XXX,XX @@
725
#include "hw/boards.h"
726
727
#include "tcg-cpus.h"
728
+#include "tcg-cpus-mttcg.h"
729
+#include "tcg-cpus-rr.h"
730
731
-/* Kick all RR vCPUs */
732
-static void qemu_cpu_kick_rr_cpus(void)
733
-{
734
- CPUState *cpu;
735
+/* common functionality among all TCG variants */
736
737
- CPU_FOREACH(cpu) {
738
- cpu_exit(cpu);
739
- };
740
-}
741
-
742
-static void tcg_kick_vcpu_thread(CPUState *cpu)
743
-{
744
- if (qemu_tcg_mttcg_enabled()) {
745
- cpu_exit(cpu);
746
- } else {
747
- qemu_cpu_kick_rr_cpus();
748
- }
749
-}
750
-
751
-/*
752
- * TCG vCPU kick timer
753
- *
754
- * The kick timer is responsible for moving single threaded vCPU
755
- * emulation on to the next vCPU. If more than one vCPU is running a
756
- * timer event with force a cpu->exit so the next vCPU can get
757
- * scheduled.
758
- *
759
- * The timer is removed if all vCPUs are idle and restarted again once
760
- * idleness is complete.
761
- */
762
-
763
-static QEMUTimer *tcg_kick_vcpu_timer;
764
-static CPUState *tcg_current_rr_cpu;
765
-
766
-#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
767
-
768
-static inline int64_t qemu_tcg_next_kick(void)
769
-{
770
- return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
771
-}
772
-
773
-/* Kick the currently round-robin scheduled vCPU to next */
774
-static void qemu_cpu_kick_rr_next_cpu(void)
775
-{
776
- CPUState *cpu;
777
- do {
778
- cpu = qatomic_mb_read(&tcg_current_rr_cpu);
779
- if (cpu) {
780
- cpu_exit(cpu);
781
- }
782
- } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu));
783
-}
784
-
785
-static void kick_tcg_thread(void *opaque)
786
-{
787
- timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
788
- qemu_cpu_kick_rr_next_cpu();
789
-}
790
-
791
-static void start_tcg_kick_timer(void)
792
-{
793
- assert(!mttcg_enabled);
794
- if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
795
- tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
796
- kick_tcg_thread, NULL);
797
- }
798
- if (tcg_kick_vcpu_timer && !timer_pending(tcg_kick_vcpu_timer)) {
799
- timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
800
- }
801
-}
802
-
803
-static void stop_tcg_kick_timer(void)
804
-{
805
- assert(!mttcg_enabled);
806
- if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
807
- timer_del(tcg_kick_vcpu_timer);
808
- }
809
-}
810
-
811
-static void qemu_tcg_destroy_vcpu(CPUState *cpu)
812
-{
813
-}
814
-
815
-static void qemu_tcg_rr_wait_io_event(void)
816
-{
817
- CPUState *cpu;
818
-
819
- while (all_cpu_threads_idle()) {
820
- stop_tcg_kick_timer();
821
- qemu_cond_wait_iothread(first_cpu->halt_cond);
822
- }
823
-
824
- start_tcg_kick_timer();
825
-
826
- CPU_FOREACH(cpu) {
827
- qemu_wait_io_event_common(cpu);
828
- }
829
-}
830
-
831
-static int64_t tcg_get_icount_limit(void)
832
-{
833
- int64_t deadline;
834
-
835
- if (replay_mode != REPLAY_MODE_PLAY) {
836
- /*
837
- * Include all the timers, because they may need an attention.
838
- * Too long CPU execution may create unnecessary delay in UI.
839
- */
840
- deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
841
- QEMU_TIMER_ATTR_ALL);
842
- /* Check realtime timers, because they help with input processing */
843
- deadline = qemu_soonest_timeout(deadline,
844
- qemu_clock_deadline_ns_all(QEMU_CLOCK_REALTIME,
845
- QEMU_TIMER_ATTR_ALL));
846
-
847
- /*
848
- * Maintain prior (possibly buggy) behaviour where if no deadline
849
- * was set (as there is no QEMU_CLOCK_VIRTUAL timer) or it is more than
850
- * INT32_MAX nanoseconds ahead, we still use INT32_MAX
851
- * nanoseconds.
852
- */
853
- if ((deadline < 0) || (deadline > INT32_MAX)) {
854
- deadline = INT32_MAX;
855
- }
856
-
857
- return icount_round(deadline);
858
- } else {
859
- return replay_get_instructions();
860
- }
861
-}
862
-
863
-static void notify_aio_contexts(void)
864
-{
865
- /* Wake up other AioContexts. */
866
- qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
867
- qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
868
-}
869
-
870
-static void handle_icount_deadline(void)
871
-{
872
- assert(qemu_in_vcpu_thread());
873
- if (icount_enabled()) {
874
- int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
875
- QEMU_TIMER_ATTR_ALL);
876
-
877
- if (deadline == 0) {
878
- notify_aio_contexts();
879
- }
880
- }
881
-}
882
-
883
-static void prepare_icount_for_run(CPUState *cpu)
884
-{
885
- if (icount_enabled()) {
886
- int insns_left;
887
-
888
- /*
889
- * These should always be cleared by process_icount_data after
890
- * each vCPU execution. However u16.high can be raised
891
- * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
892
- */
893
- g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
894
- g_assert(cpu->icount_extra == 0);
895
-
896
- cpu->icount_budget = tcg_get_icount_limit();
897
- insns_left = MIN(0xffff, cpu->icount_budget);
898
- cpu_neg(cpu)->icount_decr.u16.low = insns_left;
899
- cpu->icount_extra = cpu->icount_budget - insns_left;
900
-
901
- replay_mutex_lock();
902
-
903
- if (cpu->icount_budget == 0 && replay_has_checkpoint()) {
904
- notify_aio_contexts();
905
- }
906
- }
907
-}
908
-
909
-static void process_icount_data(CPUState *cpu)
910
-{
911
- if (icount_enabled()) {
912
- /* Account for executed instructions */
913
- icount_update(cpu);
914
-
915
- /* Reset the counters */
916
- cpu_neg(cpu)->icount_decr.u16.low = 0;
917
- cpu->icount_extra = 0;
918
- cpu->icount_budget = 0;
919
-
920
- replay_account_executed_instructions();
921
-
922
- replay_mutex_unlock();
923
- }
924
-}
925
-
926
-static int tcg_cpu_exec(CPUState *cpu)
927
-{
928
- int ret;
929
-#ifdef CONFIG_PROFILER
930
- int64_t ti;
931
-#endif
932
-
933
- assert(tcg_enabled());
934
-#ifdef CONFIG_PROFILER
935
- ti = profile_getclock();
936
-#endif
937
- cpu_exec_start(cpu);
938
- ret = cpu_exec(cpu);
939
- cpu_exec_end(cpu);
940
-#ifdef CONFIG_PROFILER
941
- qatomic_set(&tcg_ctx->prof.cpu_exec_time,
942
- tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti);
943
-#endif
944
- return ret;
945
-}
946
-
947
-/*
948
- * Destroy any remaining vCPUs which have been unplugged and have
949
- * finished running
950
- */
951
-static void deal_with_unplugged_cpus(void)
952
-{
953
- CPUState *cpu;
954
-
955
- CPU_FOREACH(cpu) {
956
- if (cpu->unplug && !cpu_can_run(cpu)) {
957
- qemu_tcg_destroy_vcpu(cpu);
958
- cpu_thread_signal_destroyed(cpu);
959
- break;
960
- }
961
- }
962
-}
963
-
964
-/*
965
- * Single-threaded TCG
966
- *
967
- * In the single-threaded case each vCPU is simulated in turn. If
968
- * there is more than a single vCPU we create a simple timer to kick
969
- * the vCPU and ensure we don't get stuck in a tight loop in one vCPU.
970
- * This is done explicitly rather than relying on side-effects
971
- * elsewhere.
972
- */
973
-
974
-static void *tcg_rr_cpu_thread_fn(void *arg)
975
-{
976
- CPUState *cpu = arg;
977
-
978
- assert(tcg_enabled());
979
- rcu_register_thread();
980
- tcg_register_thread();
981
-
982
- qemu_mutex_lock_iothread();
983
- qemu_thread_get_self(cpu->thread);
984
-
985
- cpu->thread_id = qemu_get_thread_id();
986
- cpu->can_do_io = 1;
987
- cpu_thread_signal_created(cpu);
988
- qemu_guest_random_seed_thread_part2(cpu->random_seed);
989
-
990
- /* wait for initial kick-off after machine start */
991
- while (first_cpu->stopped) {
992
- qemu_cond_wait_iothread(first_cpu->halt_cond);
993
-
994
- /* process any pending work */
995
- CPU_FOREACH(cpu) {
996
- current_cpu = cpu;
997
- qemu_wait_io_event_common(cpu);
998
- }
999
- }
1000
-
1001
- start_tcg_kick_timer();
1002
-
1003
- cpu = first_cpu;
1004
-
1005
- /* process any pending work */
1006
- cpu->exit_request = 1;
1007
-
1008
- while (1) {
1009
- qemu_mutex_unlock_iothread();
1010
- replay_mutex_lock();
1011
- qemu_mutex_lock_iothread();
1012
- /* Account partial waits to QEMU_CLOCK_VIRTUAL. */
1013
- icount_account_warp_timer();
1014
-
1015
- /*
1016
- * Run the timers here. This is much more efficient than
1017
- * waking up the I/O thread and waiting for completion.
1018
- */
1019
- handle_icount_deadline();
1020
-
1021
- replay_mutex_unlock();
1022
-
1023
- if (!cpu) {
1024
- cpu = first_cpu;
1025
- }
1026
-
1027
- while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
1028
-
1029
- qatomic_mb_set(&tcg_current_rr_cpu, cpu);
1030
- current_cpu = cpu;
1031
-
1032
- qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
1033
- (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
1034
-
1035
- if (cpu_can_run(cpu)) {
1036
- int r;
1037
-
1038
- qemu_mutex_unlock_iothread();
1039
- prepare_icount_for_run(cpu);
1040
-
1041
- r = tcg_cpu_exec(cpu);
1042
-
1043
- process_icount_data(cpu);
1044
- qemu_mutex_lock_iothread();
1045
-
1046
- if (r == EXCP_DEBUG) {
1047
- cpu_handle_guest_debug(cpu);
1048
- break;
1049
- } else if (r == EXCP_ATOMIC) {
1050
- qemu_mutex_unlock_iothread();
1051
- cpu_exec_step_atomic(cpu);
1052
- qemu_mutex_lock_iothread();
1053
- break;
1054
- }
1055
- } else if (cpu->stop) {
1056
- if (cpu->unplug) {
1057
- cpu = CPU_NEXT(cpu);
1058
- }
1059
- break;
1060
- }
1061
-
1062
- cpu = CPU_NEXT(cpu);
1063
- } /* while (cpu && !cpu->exit_request).. */
1064
-
1065
- /* Does not need qatomic_mb_set because a spurious wakeup is okay. */
1066
- qatomic_set(&tcg_current_rr_cpu, NULL);
1067
-
1068
- if (cpu && cpu->exit_request) {
1069
- qatomic_mb_set(&cpu->exit_request, 0);
1070
- }
1071
-
1072
- if (icount_enabled() && all_cpu_threads_idle()) {
1073
- /*
1074
- * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
1075
- * in the main_loop, wake it up in order to start the warp timer.
1076
- */
1077
- qemu_notify_event();
1078
- }
1079
-
1080
- qemu_tcg_rr_wait_io_event();
1081
- deal_with_unplugged_cpus();
1082
- }
1083
-
1084
- rcu_unregister_thread();
1085
- return NULL;
1086
-}
1087
-
1088
-/*
1089
- * Multi-threaded TCG
1090
- *
1091
- * In the multi-threaded case each vCPU has its own thread. The TLS
1092
- * variable current_cpu can be used deep in the code to find the
1093
- * current CPUState for a given thread.
1094
- */
1095
-
1096
-static void *tcg_cpu_thread_fn(void *arg)
1097
-{
1098
- CPUState *cpu = arg;
1099
-
1100
- assert(tcg_enabled());
1101
- g_assert(!icount_enabled());
1102
-
1103
- rcu_register_thread();
1104
- tcg_register_thread();
1105
-
1106
- qemu_mutex_lock_iothread();
1107
- qemu_thread_get_self(cpu->thread);
1108
-
1109
- cpu->thread_id = qemu_get_thread_id();
1110
- cpu->can_do_io = 1;
1111
- current_cpu = cpu;
1112
- cpu_thread_signal_created(cpu);
1113
- qemu_guest_random_seed_thread_part2(cpu->random_seed);
1114
-
1115
- /* process any pending work */
1116
- cpu->exit_request = 1;
1117
-
1118
- do {
1119
- if (cpu_can_run(cpu)) {
1120
- int r;
1121
- qemu_mutex_unlock_iothread();
1122
- r = tcg_cpu_exec(cpu);
1123
- qemu_mutex_lock_iothread();
1124
- switch (r) {
1125
- case EXCP_DEBUG:
1126
- cpu_handle_guest_debug(cpu);
1127
- break;
1128
- case EXCP_HALTED:
1129
- /*
1130
- * during start-up the vCPU is reset and the thread is
1131
- * kicked several times. If we don't ensure we go back
1132
- * to sleep in the halted state we won't cleanly
1133
- * start-up when the vCPU is enabled.
1134
- *
1135
- * cpu->halted should ensure we sleep in wait_io_event
1136
- */
1137
- g_assert(cpu->halted);
1138
- break;
1139
- case EXCP_ATOMIC:
1140
- qemu_mutex_unlock_iothread();
1141
- cpu_exec_step_atomic(cpu);
1142
- qemu_mutex_lock_iothread();
1143
- default:
1144
- /* Ignore everything else? */
1145
- break;
1146
- }
1147
- }
1148
-
1149
- qatomic_mb_set(&cpu->exit_request, 0);
1150
- qemu_wait_io_event(cpu);
1151
- } while (!cpu->unplug || cpu_can_run(cpu));
1152
-
1153
- qemu_tcg_destroy_vcpu(cpu);
1154
- cpu_thread_signal_destroyed(cpu);
1155
- qemu_mutex_unlock_iothread();
1156
- rcu_unregister_thread();
1157
- return NULL;
1158
-}
1159
-
1160
-static void tcg_start_vcpu_thread(CPUState *cpu)
+void tcg_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
static QemuCond *single_tcg_halt_cond;
@@ -XXX,XX +XXX,XX @@ static void tcg_start_vcpu_thread(CPUState *cpu)
}
}

-static int64_t tcg_get_virtual_clock(void)
+void qemu_tcg_destroy_vcpu(CPUState *cpu)
{
- if (icount_enabled()) {
- return icount_get();
- }
- return cpu_get_clock();
+ cpu_thread_signal_destroyed(cpu);
}

-static int64_t tcg_get_elapsed_ticks(void)
+int tcg_cpu_exec(CPUState *cpu)
{
- if (icount_enabled()) {
- return icount_get();
- }
- return cpu_get_ticks();
+ int ret;
+#ifdef CONFIG_PROFILER
+ int64_t ti;
+#endif
+ assert(tcg_enabled());
+#ifdef CONFIG_PROFILER
+ ti = profile_getclock();
+#endif
+ cpu_exec_start(cpu);
+ ret = cpu_exec(cpu);
+ cpu_exec_end(cpu);
+#ifdef CONFIG_PROFILER
+ qatomic_set(&tcg_ctx->prof.cpu_exec_time,
+ tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti);
+#endif
+ return ret;
}

/* mask must never be zero, except for A20 change call */
-static void tcg_handle_interrupt(CPUState *cpu, int mask)
+void tcg_handle_interrupt(CPUState *cpu, int mask)
{
- int old_mask;
g_assert(qemu_mutex_iothread_locked());

- old_mask = cpu->interrupt_request;
cpu->interrupt_request |= mask;

/*
@@ -XXX,XX +XXX,XX @@ static void tcg_handle_interrupt(CPUState *cpu, int mask)
qemu_cpu_kick(cpu);
} else {
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
- if (icount_enabled() &&
- !cpu->can_do_io
- && (mask & ~old_mask) != 0) {
- cpu_abort(cpu, "Raised interrupt while not in I/O function");
- }
}
}
-
-const CpusAccel tcg_cpus = {
- .create_vcpu_thread = tcg_start_vcpu_thread,
- .kick_vcpu_thread = tcg_kick_vcpu_thread,
-
- .handle_interrupt = tcg_handle_interrupt,
-
- .get_virtual_clock = tcg_get_virtual_clock,
- .get_elapsed_ticks = tcg_get_elapsed_ticks,
-};
diff --git a/softmmu/icount.c b/softmmu/icount.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/icount.c
+++ b/softmmu/icount.c
@@ -XXX,XX +XXX,XX @@ void icount_start_warp_timer(void)

void icount_account_warp_timer(void)
{
- if (!icount_enabled() || !icount_sleep) {
+ if (!icount_sleep) {
return;
}

diff --git a/accel/tcg/meson.build b/accel/tcg/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/meson.build
+++ b/accel/tcg/meson.build
@@ -XXX,XX +XXX,XX @@ tcg_ss.add(when: 'CONFIG_SOFTMMU', if_false: files('user-exec-stub.c'))
tcg_ss.add(when: 'CONFIG_PLUGIN', if_true: [files('plugin-gen.c'), libdl])
specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss)

-specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files('tcg-all.c', 'cputlb.c', 'tcg-cpus.c'))
+specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files(
+ 'tcg-all.c',
+ 'cputlb.c',
+ 'tcg-cpus.c',
+ 'tcg-cpus-mttcg.c',
+ 'tcg-cpus-icount.c',
+ 'tcg-cpus-rr.c'
+))
--
2.25.1
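Aside: the const CpusAccel tcg_cpus table removed above, and the per-variant tables such as tcg_cpus_rr that replace it, follow the usual ops-table pattern: each TCG scheduling variant fills in a const struct of callbacks, and the generic cpus layer then calls through whichever table was registered. A minimal standalone sketch of that pattern follows (hypothetical names, not QEMU source; shown only to illustrate the shape of the change):

/*
 * Illustrative sketch only -- hypothetical names, not QEMU code.
 * Each vCPU scheduling variant provides a const table of callbacks;
 * the core loop invokes them without knowing which variant is active.
 */
#include <stdio.h>

typedef struct VCPUOps {
    void (*create_vcpu_thread)(int cpu_index);
    void (*kick_vcpu_thread)(int cpu_index);
} VCPUOps;

static void rr_create(int cpu_index) { printf("rr: create vCPU %d\n", cpu_index); }
static void rr_kick(int cpu_index)   { printf("rr: kick vCPU %d\n", cpu_index); }

static const VCPUOps rr_ops = {
    .create_vcpu_thread = rr_create,
    .kick_vcpu_thread   = rr_kick,
};

int main(void)
{
    /* Selected once at accelerator setup, e.g. from -accel options. */
    const VCPUOps *ops = &rr_ops;

    ops->create_vcpu_thread(0);
    ops->kick_vcpu_thread(0);
    return 0;
}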
New patch

It consists of one function call and has only one caller.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/translate-all.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static void page_table_config_init(void)
assert(v_l2_levels >= 0);
}

-static void cpu_gen_init(void)
-{
- tcg_context_init(&tcg_init_ctx);
-}
-
/* Encode VAL as a signed leb128 sequence at P.
Return P incremented past the encoded value. */
static uint8_t *encode_sleb128(uint8_t *p, target_long val)
@@ -XXX,XX +XXX,XX @@ void tcg_exec_init(unsigned long tb_size, int splitwx)
bool ok;

tcg_allowed = true;
- cpu_gen_init();
+ tcg_context_init(&tcg_init_ctx);
page_init();
tb_htable_init();

--
2.25.1
New patch

Buffer management is integral to tcg. Do not leave the allocation
to code outside of tcg/. This is code movement, with further
cleanups to follow.

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/tcg/tcg.h | 2 +-
accel/tcg/translate-all.c | 414 +-----------------------------------
tcg/region.c | 431 +++++++++++++++++++++++++++++++++++++-
3 files changed, 428 insertions(+), 419 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index XXXXXXX..XXXXXXX 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -XXX,XX +XXX,XX @@ void *tcg_malloc_internal(TCGContext *s, int size);
void tcg_pool_reset(TCGContext *s);
TranslationBlock *tcg_tb_alloc(TCGContext *s);

-void tcg_region_init(void);
+void tcg_region_init(size_t tb_size, int splitwx);
void tb_destroy(TranslationBlock *tb);
void tcg_region_reset_all(void);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/accel/tcg/translate-all.c
30
+++ b/accel/tcg/translate-all.c
31
@@ -XXX,XX +XXX,XX @@
32
*/
33
34
#include "qemu/osdep.h"
35
-#include "qemu/units.h"
36
#include "qemu-common.h"
37
38
#define NO_CPU_IO_DEFS
39
@@ -XXX,XX +XXX,XX @@
40
#include "exec/cputlb.h"
41
#include "exec/translate-all.h"
42
#include "qemu/bitmap.h"
43
-#include "qemu/error-report.h"
44
#include "qemu/qemu-print.h"
45
#include "qemu/timer.h"
46
#include "qemu/main-loop.h"
47
@@ -XXX,XX +XXX,XX @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
48
}
49
}
50
51
-/* Minimum size of the code gen buffer. This number is randomly chosen,
52
- but not so small that we can't have a fair number of TB's live. */
53
-#define MIN_CODE_GEN_BUFFER_SIZE (1 * MiB)
54
-
55
-/* Maximum size of the code gen buffer we'd like to use. Unless otherwise
56
- indicated, this is constrained by the range of direct branches on the
57
- host cpu, as used by the TCG implementation of goto_tb. */
58
-#if defined(__x86_64__)
59
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
60
-#elif defined(__sparc__)
61
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
62
-#elif defined(__powerpc64__)
63
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
64
-#elif defined(__powerpc__)
65
-# define MAX_CODE_GEN_BUFFER_SIZE (32 * MiB)
66
-#elif defined(__aarch64__)
67
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
68
-#elif defined(__s390x__)
69
- /* We have a +- 4GB range on the branches; leave some slop. */
70
-# define MAX_CODE_GEN_BUFFER_SIZE (3 * GiB)
71
-#elif defined(__mips__)
72
- /* We have a 256MB branch region, but leave room to make sure the
73
- main executable is also within that region. */
74
-# define MAX_CODE_GEN_BUFFER_SIZE (128 * MiB)
75
-#else
76
-# define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
77
-#endif
78
-
79
-#if TCG_TARGET_REG_BITS == 32
80
-#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (32 * MiB)
81
-#ifdef CONFIG_USER_ONLY
82
-/*
83
- * For user mode on smaller 32 bit systems we may run into trouble
84
- * allocating big chunks of data in the right place. On these systems
85
- * we utilise a static code generation buffer directly in the binary.
86
- */
87
-#define USE_STATIC_CODE_GEN_BUFFER
88
-#endif
89
-#else /* TCG_TARGET_REG_BITS == 64 */
90
-#ifdef CONFIG_USER_ONLY
91
-/*
92
- * As user-mode emulation typically means running multiple instances
93
- * of the translator don't go too nuts with our default code gen
94
- * buffer lest we make things too hard for the OS.
95
- */
96
-#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (128 * MiB)
97
-#else
98
-/*
99
- * We expect most system emulation to run one or two guests per host.
100
- * Users running large scale system emulation may want to tweak their
101
- * runtime setup via the tb-size control on the command line.
102
- */
103
-#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (1 * GiB)
104
-#endif
105
-#endif
106
-
107
-#define DEFAULT_CODE_GEN_BUFFER_SIZE \
108
- (DEFAULT_CODE_GEN_BUFFER_SIZE_1 < MAX_CODE_GEN_BUFFER_SIZE \
109
- ? DEFAULT_CODE_GEN_BUFFER_SIZE_1 : MAX_CODE_GEN_BUFFER_SIZE)
110
-
111
-static size_t size_code_gen_buffer(size_t tb_size)
112
-{
113
- /* Size the buffer. */
114
- if (tb_size == 0) {
115
- size_t phys_mem = qemu_get_host_physmem();
116
- if (phys_mem == 0) {
117
- tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
118
- } else {
119
- tb_size = MIN(DEFAULT_CODE_GEN_BUFFER_SIZE, phys_mem / 8);
120
- }
121
- }
122
- if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
123
- tb_size = MIN_CODE_GEN_BUFFER_SIZE;
124
- }
125
- if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
126
- tb_size = MAX_CODE_GEN_BUFFER_SIZE;
127
- }
128
- return tb_size;
129
-}
130
-
131
-#ifdef __mips__
132
-/* In order to use J and JAL within the code_gen_buffer, we require
133
- that the buffer not cross a 256MB boundary. */
134
-static inline bool cross_256mb(void *addr, size_t size)
135
-{
136
- return ((uintptr_t)addr ^ ((uintptr_t)addr + size)) & ~0x0ffffffful;
137
-}
138
-
139
-/* We weren't able to allocate a buffer without crossing that boundary,
140
- so make do with the larger portion of the buffer that doesn't cross.
141
- Returns the new base of the buffer, and adjusts code_gen_buffer_size. */
142
-static inline void *split_cross_256mb(void *buf1, size_t size1)
143
-{
144
- void *buf2 = (void *)(((uintptr_t)buf1 + size1) & ~0x0ffffffful);
145
- size_t size2 = buf1 + size1 - buf2;
146
-
147
- size1 = buf2 - buf1;
148
- if (size1 < size2) {
149
- size1 = size2;
150
- buf1 = buf2;
151
- }
152
-
153
- tcg_ctx->code_gen_buffer_size = size1;
154
- return buf1;
155
-}
156
-#endif
157
-
158
-#ifdef USE_STATIC_CODE_GEN_BUFFER
159
-static uint8_t static_code_gen_buffer[DEFAULT_CODE_GEN_BUFFER_SIZE]
160
- __attribute__((aligned(CODE_GEN_ALIGN)));
161
-
162
-static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
163
-{
164
- void *buf, *end;
165
- size_t size;
166
-
167
- if (splitwx > 0) {
168
- error_setg(errp, "jit split-wx not supported");
169
- return false;
170
- }
171
-
172
- /* page-align the beginning and end of the buffer */
173
- buf = static_code_gen_buffer;
174
- end = static_code_gen_buffer + sizeof(static_code_gen_buffer);
175
- buf = QEMU_ALIGN_PTR_UP(buf, qemu_real_host_page_size);
176
- end = QEMU_ALIGN_PTR_DOWN(end, qemu_real_host_page_size);
177
-
178
- size = end - buf;
179
-
180
- /* Honor a command-line option limiting the size of the buffer. */
181
- if (size > tb_size) {
182
- size = QEMU_ALIGN_DOWN(tb_size, qemu_real_host_page_size);
183
- }
184
- tcg_ctx->code_gen_buffer_size = size;
185
-
186
-#ifdef __mips__
187
- if (cross_256mb(buf, size)) {
188
- buf = split_cross_256mb(buf, size);
189
- size = tcg_ctx->code_gen_buffer_size;
190
- }
191
-#endif
192
-
193
- if (qemu_mprotect_rwx(buf, size)) {
194
- error_setg_errno(errp, errno, "mprotect of jit buffer");
195
- return false;
196
- }
197
- qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
198
-
199
- tcg_ctx->code_gen_buffer = buf;
200
- return true;
201
-}
202
-#elif defined(_WIN32)
203
-static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
204
-{
205
- void *buf;
206
-
207
- if (splitwx > 0) {
208
- error_setg(errp, "jit split-wx not supported");
209
- return false;
210
- }
211
-
212
- buf = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
213
- PAGE_EXECUTE_READWRITE);
214
- if (buf == NULL) {
215
- error_setg_win32(errp, GetLastError(),
216
- "allocate %zu bytes for jit buffer", size);
217
- return false;
218
- }
219
-
220
- tcg_ctx->code_gen_buffer = buf;
221
- tcg_ctx->code_gen_buffer_size = size;
222
- return true;
223
-}
224
-#else
225
-static bool alloc_code_gen_buffer_anon(size_t size, int prot,
226
- int flags, Error **errp)
227
-{
228
- void *buf;
229
-
230
- buf = mmap(NULL, size, prot, flags, -1, 0);
231
- if (buf == MAP_FAILED) {
232
- error_setg_errno(errp, errno,
233
- "allocate %zu bytes for jit buffer", size);
234
- return false;
235
- }
236
- tcg_ctx->code_gen_buffer_size = size;
237
-
238
-#ifdef __mips__
239
- if (cross_256mb(buf, size)) {
240
- /*
241
- * Try again, with the original still mapped, to avoid re-acquiring
242
- * the same 256mb crossing.
243
- */
244
- size_t size2;
245
- void *buf2 = mmap(NULL, size, prot, flags, -1, 0);
246
- switch ((int)(buf2 != MAP_FAILED)) {
247
- case 1:
248
- if (!cross_256mb(buf2, size)) {
249
- /* Success! Use the new buffer. */
250
- munmap(buf, size);
251
- break;
252
- }
253
- /* Failure. Work with what we had. */
254
- munmap(buf2, size);
255
- /* fallthru */
256
- default:
257
- /* Split the original buffer. Free the smaller half. */
258
- buf2 = split_cross_256mb(buf, size);
259
- size2 = tcg_ctx->code_gen_buffer_size;
260
- if (buf == buf2) {
261
- munmap(buf + size2, size - size2);
262
- } else {
263
- munmap(buf, size - size2);
264
- }
265
- size = size2;
266
- break;
267
- }
268
- buf = buf2;
269
- }
270
-#endif
271
-
272
- /* Request large pages for the buffer. */
273
- qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
274
-
275
- tcg_ctx->code_gen_buffer = buf;
276
- return true;
277
-}
278
-
279
-#ifndef CONFIG_TCG_INTERPRETER
280
-#ifdef CONFIG_POSIX
281
-#include "qemu/memfd.h"
282
-
283
-static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
284
-{
285
- void *buf_rw = NULL, *buf_rx = MAP_FAILED;
286
- int fd = -1;
287
-
288
-#ifdef __mips__
289
- /* Find space for the RX mapping, vs the 256MiB regions. */
290
- if (!alloc_code_gen_buffer_anon(size, PROT_NONE,
291
- MAP_PRIVATE | MAP_ANONYMOUS |
292
- MAP_NORESERVE, errp)) {
293
- return false;
294
- }
295
- /* The size of the mapping may have been adjusted. */
296
- size = tcg_ctx->code_gen_buffer_size;
297
- buf_rx = tcg_ctx->code_gen_buffer;
298
-#endif
299
-
300
- buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
301
- if (buf_rw == NULL) {
302
- goto fail;
303
- }
304
-
305
-#ifdef __mips__
306
- void *tmp = mmap(buf_rx, size, PROT_READ | PROT_EXEC,
307
- MAP_SHARED | MAP_FIXED, fd, 0);
308
- if (tmp != buf_rx) {
309
- goto fail_rx;
310
- }
311
-#else
312
- buf_rx = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
313
- if (buf_rx == MAP_FAILED) {
314
- goto fail_rx;
315
- }
316
-#endif
317
-
318
- close(fd);
319
- tcg_ctx->code_gen_buffer = buf_rw;
320
- tcg_ctx->code_gen_buffer_size = size;
321
- tcg_splitwx_diff = buf_rx - buf_rw;
322
-
323
- /* Request large pages for the buffer and the splitwx. */
324
- qemu_madvise(buf_rw, size, QEMU_MADV_HUGEPAGE);
325
- qemu_madvise(buf_rx, size, QEMU_MADV_HUGEPAGE);
326
- return true;
327
-
328
- fail_rx:
329
- error_setg_errno(errp, errno, "failed to map shared memory for execute");
330
- fail:
331
- if (buf_rx != MAP_FAILED) {
332
- munmap(buf_rx, size);
333
- }
334
- if (buf_rw) {
335
- munmap(buf_rw, size);
336
- }
337
- if (fd >= 0) {
338
- close(fd);
339
- }
340
- return false;
341
-}
342
-#endif /* CONFIG_POSIX */
343
-
344
-#ifdef CONFIG_DARWIN
345
-#include <mach/mach.h>
346
-
347
-extern kern_return_t mach_vm_remap(vm_map_t target_task,
348
- mach_vm_address_t *target_address,
349
- mach_vm_size_t size,
350
- mach_vm_offset_t mask,
351
- int flags,
352
- vm_map_t src_task,
353
- mach_vm_address_t src_address,
354
- boolean_t copy,
355
- vm_prot_t *cur_protection,
356
- vm_prot_t *max_protection,
357
- vm_inherit_t inheritance);
358
-
359
-static bool alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
360
-{
361
- kern_return_t ret;
362
- mach_vm_address_t buf_rw, buf_rx;
363
- vm_prot_t cur_prot, max_prot;
364
-
365
- /* Map the read-write portion via normal anon memory. */
366
- if (!alloc_code_gen_buffer_anon(size, PROT_READ | PROT_WRITE,
367
- MAP_PRIVATE | MAP_ANONYMOUS, errp)) {
368
- return false;
369
- }
370
-
371
- buf_rw = (mach_vm_address_t)tcg_ctx->code_gen_buffer;
372
- buf_rx = 0;
373
- ret = mach_vm_remap(mach_task_self(),
374
- &buf_rx,
375
- size,
376
- 0,
377
- VM_FLAGS_ANYWHERE,
378
- mach_task_self(),
379
- buf_rw,
380
- false,
381
- &cur_prot,
382
- &max_prot,
383
- VM_INHERIT_NONE);
384
- if (ret != KERN_SUCCESS) {
385
- /* TODO: Convert "ret" to a human readable error message. */
386
- error_setg(errp, "vm_remap for jit splitwx failed");
387
- munmap((void *)buf_rw, size);
388
- return false;
389
- }
390
-
391
- if (mprotect((void *)buf_rx, size, PROT_READ | PROT_EXEC) != 0) {
392
- error_setg_errno(errp, errno, "mprotect for jit splitwx");
393
- munmap((void *)buf_rx, size);
394
- munmap((void *)buf_rw, size);
395
- return false;
396
- }
397
-
398
- tcg_splitwx_diff = buf_rx - buf_rw;
399
- return true;
400
-}
401
-#endif /* CONFIG_DARWIN */
402
-#endif /* CONFIG_TCG_INTERPRETER */
403
-
404
-static bool alloc_code_gen_buffer_splitwx(size_t size, Error **errp)
405
-{
406
-#ifndef CONFIG_TCG_INTERPRETER
407
-# ifdef CONFIG_DARWIN
408
- return alloc_code_gen_buffer_splitwx_vmremap(size, errp);
409
-# endif
410
-# ifdef CONFIG_POSIX
411
- return alloc_code_gen_buffer_splitwx_memfd(size, errp);
412
-# endif
413
-#endif
414
- error_setg(errp, "jit split-wx not supported");
415
- return false;
416
-}
417
-
418
-static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
419
-{
420
- ERRP_GUARD();
421
- int prot, flags;
422
-
423
- if (splitwx) {
424
- if (alloc_code_gen_buffer_splitwx(size, errp)) {
425
- return true;
426
- }
427
- /*
428
- * If splitwx force-on (1), fail;
429
- * if splitwx default-on (-1), fall through to splitwx off.
430
- */
431
- if (splitwx > 0) {
432
- return false;
433
- }
434
- error_free_or_abort(errp);
435
- }
436
-
437
- prot = PROT_READ | PROT_WRITE | PROT_EXEC;
438
- flags = MAP_PRIVATE | MAP_ANONYMOUS;
439
-#ifdef CONFIG_TCG_INTERPRETER
440
- /* The tcg interpreter does not need execute permission. */
441
- prot = PROT_READ | PROT_WRITE;
442
-#elif defined(CONFIG_DARWIN)
443
- /* Applicable to both iOS and macOS (Apple Silicon). */
444
- if (!splitwx) {
445
- flags |= MAP_JIT;
446
- }
447
-#endif
448
-
449
- return alloc_code_gen_buffer_anon(size, prot, flags, errp);
450
-}
451
-#endif /* USE_STATIC_CODE_GEN_BUFFER, WIN32, POSIX */
452
-
453
static bool tb_cmp(const void *ap, const void *bp)
454
{
455
const TranslationBlock *a = ap;
456
@@ -XXX,XX +XXX,XX @@ static void tb_htable_init(void)
457
size. */
458
void tcg_exec_init(unsigned long tb_size, int splitwx)
459
{
460
- bool ok;
461
-
462
tcg_allowed = true;
463
tcg_context_init(&tcg_init_ctx);
464
page_init();
465
tb_htable_init();
466
-
467
- ok = alloc_code_gen_buffer(size_code_gen_buffer(tb_size),
468
- splitwx, &error_fatal);
469
- assert(ok);
470
-
471
- /* TODO: allocating regions is hand-in-glove with code_gen_buffer. */
472
- tcg_region_init();
473
+ tcg_region_init(tb_size, splitwx);
474
475
#if defined(CONFIG_SOFTMMU)
476
/* There's no guest base to take into account, so go ahead and
477
diff --git a/tcg/region.c b/tcg/region.c
478
index XXXXXXX..XXXXXXX 100644
479
--- a/tcg/region.c
480
+++ b/tcg/region.c
481
@@ -XXX,XX +XXX,XX @@
482
*/
483
484
#include "qemu/osdep.h"
485
+#include "qemu/units.h"
486
+#include "qapi/error.h"
487
#include "exec/exec-all.h"
488
#include "tcg/tcg.h"
489
#if !defined(CONFIG_USER_ONLY)
490
@@ -XXX,XX +XXX,XX @@ static size_t tcg_n_regions(void)
491
}
492
#endif
493
494
+/*
495
+ * Minimum size of the code gen buffer. This number is randomly chosen,
496
+ * but not so small that we can't have a fair number of TB's live.
497
+ */
498
+#define MIN_CODE_GEN_BUFFER_SIZE (1 * MiB)
499
+
500
+/*
501
+ * Maximum size of the code gen buffer we'd like to use. Unless otherwise
502
+ * indicated, this is constrained by the range of direct branches on the
503
+ * host cpu, as used by the TCG implementation of goto_tb.
504
+ */
505
+#if defined(__x86_64__)
506
+# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
507
+#elif defined(__sparc__)
508
+# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
509
+#elif defined(__powerpc64__)
510
+# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
511
+#elif defined(__powerpc__)
512
+# define MAX_CODE_GEN_BUFFER_SIZE (32 * MiB)
513
+#elif defined(__aarch64__)
514
+# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
515
+#elif defined(__s390x__)
516
+ /* We have a +- 4GB range on the branches; leave some slop. */
517
+# define MAX_CODE_GEN_BUFFER_SIZE (3 * GiB)
518
+#elif defined(__mips__)
519
+ /*
520
+ * We have a 256MB branch region, but leave room to make sure the
521
+ * main executable is also within that region.
522
+ */
523
+# define MAX_CODE_GEN_BUFFER_SIZE (128 * MiB)
524
+#else
525
+# define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
526
+#endif
527
+
528
+#if TCG_TARGET_REG_BITS == 32
529
+#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (32 * MiB)
530
+#ifdef CONFIG_USER_ONLY
531
+/*
532
+ * For user mode on smaller 32 bit systems we may run into trouble
533
+ * allocating big chunks of data in the right place. On these systems
534
+ * we utilise a static code generation buffer directly in the binary.
535
+ */
536
+#define USE_STATIC_CODE_GEN_BUFFER
537
+#endif
538
+#else /* TCG_TARGET_REG_BITS == 64 */
539
+#ifdef CONFIG_USER_ONLY
540
+/*
541
+ * As user-mode emulation typically means running multiple instances
542
+ * of the translator don't go too nuts with our default code gen
543
+ * buffer lest we make things too hard for the OS.
544
+ */
545
+#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (128 * MiB)
546
+#else
547
+/*
548
+ * We expect most system emulation to run one or two guests per host.
549
+ * Users running large scale system emulation may want to tweak their
550
+ * runtime setup via the tb-size control on the command line.
551
+ */
552
+#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (1 * GiB)
553
+#endif
554
+#endif
555
+
556
+#define DEFAULT_CODE_GEN_BUFFER_SIZE \
557
+ (DEFAULT_CODE_GEN_BUFFER_SIZE_1 < MAX_CODE_GEN_BUFFER_SIZE \
558
+ ? DEFAULT_CODE_GEN_BUFFER_SIZE_1 : MAX_CODE_GEN_BUFFER_SIZE)
559
+
560
+static size_t size_code_gen_buffer(size_t tb_size)
561
+{
562
+ /* Size the buffer. */
563
+ if (tb_size == 0) {
564
+ size_t phys_mem = qemu_get_host_physmem();
565
+ if (phys_mem == 0) {
566
+ tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
567
+ } else {
568
+ tb_size = MIN(DEFAULT_CODE_GEN_BUFFER_SIZE, phys_mem / 8);
569
+ }
570
+ }
571
+ if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
572
+ tb_size = MIN_CODE_GEN_BUFFER_SIZE;
573
+ }
574
+ if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
575
+ tb_size = MAX_CODE_GEN_BUFFER_SIZE;
576
+ }
577
+ return tb_size;
578
+}
579
+
580
+#ifdef __mips__
581
+/*
582
+ * In order to use J and JAL within the code_gen_buffer, we require
583
+ * that the buffer not cross a 256MB boundary.
584
+ */
585
+static inline bool cross_256mb(void *addr, size_t size)
586
+{
587
+ return ((uintptr_t)addr ^ ((uintptr_t)addr + size)) & ~0x0ffffffful;
588
+}
589
+
590
+/*
591
+ * We weren't able to allocate a buffer without crossing that boundary,
592
+ * so make do with the larger portion of the buffer that doesn't cross.
593
+ * Returns the new base of the buffer, and adjusts code_gen_buffer_size.
594
+ */
595
+static inline void *split_cross_256mb(void *buf1, size_t size1)
596
+{
597
+ void *buf2 = (void *)(((uintptr_t)buf1 + size1) & ~0x0ffffffful);
598
+ size_t size2 = buf1 + size1 - buf2;
599
+
600
+ size1 = buf2 - buf1;
601
+ if (size1 < size2) {
602
+ size1 = size2;
603
+ buf1 = buf2;
604
+ }
605
+
606
+ tcg_ctx->code_gen_buffer_size = size1;
607
+ return buf1;
608
+}
609
+#endif
610
+
611
+#ifdef USE_STATIC_CODE_GEN_BUFFER
612
+static uint8_t static_code_gen_buffer[DEFAULT_CODE_GEN_BUFFER_SIZE]
613
+ __attribute__((aligned(CODE_GEN_ALIGN)));
614
+
615
+static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
616
+{
617
+ void *buf, *end;
618
+ size_t size;
619
+
620
+ if (splitwx > 0) {
621
+ error_setg(errp, "jit split-wx not supported");
622
+ return false;
623
+ }
624
+
625
+ /* page-align the beginning and end of the buffer */
626
+ buf = static_code_gen_buffer;
627
+ end = static_code_gen_buffer + sizeof(static_code_gen_buffer);
628
+ buf = QEMU_ALIGN_PTR_UP(buf, qemu_real_host_page_size);
629
+ end = QEMU_ALIGN_PTR_DOWN(end, qemu_real_host_page_size);
630
+
631
+ size = end - buf;
632
+
633
+ /* Honor a command-line option limiting the size of the buffer. */
634
+ if (size > tb_size) {
635
+ size = QEMU_ALIGN_DOWN(tb_size, qemu_real_host_page_size);
636
+ }
637
+ tcg_ctx->code_gen_buffer_size = size;
638
+
639
+#ifdef __mips__
640
+ if (cross_256mb(buf, size)) {
641
+ buf = split_cross_256mb(buf, size);
642
+ size = tcg_ctx->code_gen_buffer_size;
643
+ }
644
+#endif
645
+
646
+ if (qemu_mprotect_rwx(buf, size)) {
647
+ error_setg_errno(errp, errno, "mprotect of jit buffer");
648
+ return false;
649
+ }
650
+ qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
651
+
652
+ tcg_ctx->code_gen_buffer = buf;
653
+ return true;
654
+}
655
+#elif defined(_WIN32)
656
+static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
657
+{
658
+ void *buf;
659
+
660
+ if (splitwx > 0) {
661
+ error_setg(errp, "jit split-wx not supported");
662
+ return false;
663
+ }
664
+
665
+ buf = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
666
+ PAGE_EXECUTE_READWRITE);
667
+ if (buf == NULL) {
668
+ error_setg_win32(errp, GetLastError(),
669
+ "allocate %zu bytes for jit buffer", size);
670
+ return false;
671
+ }
672
+
673
+ tcg_ctx->code_gen_buffer = buf;
674
+ tcg_ctx->code_gen_buffer_size = size;
675
+ return true;
676
+}
677
+#else
678
+static bool alloc_code_gen_buffer_anon(size_t size, int prot,
679
+ int flags, Error **errp)
680
+{
681
+ void *buf;
682
+
683
+ buf = mmap(NULL, size, prot, flags, -1, 0);
684
+ if (buf == MAP_FAILED) {
685
+ error_setg_errno(errp, errno,
686
+ "allocate %zu bytes for jit buffer", size);
687
+ return false;
688
+ }
689
+ tcg_ctx->code_gen_buffer_size = size;
690
+
691
+#ifdef __mips__
692
+ if (cross_256mb(buf, size)) {
693
+ /*
694
+ * Try again, with the original still mapped, to avoid re-acquiring
695
+ * the same 256mb crossing.
696
+ */
697
+ size_t size2;
698
+ void *buf2 = mmap(NULL, size, prot, flags, -1, 0);
699
+ switch ((int)(buf2 != MAP_FAILED)) {
700
+ case 1:
701
+ if (!cross_256mb(buf2, size)) {
702
+ /* Success! Use the new buffer. */
703
+ munmap(buf, size);
704
+ break;
705
+ }
706
+ /* Failure. Work with what we had. */
707
+ munmap(buf2, size);
708
+ /* fallthru */
709
+ default:
710
+ /* Split the original buffer. Free the smaller half. */
711
+ buf2 = split_cross_256mb(buf, size);
712
+ size2 = tcg_ctx->code_gen_buffer_size;
713
+ if (buf == buf2) {
714
+ munmap(buf + size2, size - size2);
715
+ } else {
716
+ munmap(buf, size - size2);
717
+ }
718
+ size = size2;
719
+ break;
720
+ }
721
+ buf = buf2;
722
+ }
723
+#endif
724
+
725
+ /* Request large pages for the buffer. */
726
+ qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
727
+
728
+ tcg_ctx->code_gen_buffer = buf;
729
+ return true;
730
+}
731
+
732
+#ifndef CONFIG_TCG_INTERPRETER
733
+#ifdef CONFIG_POSIX
734
+#include "qemu/memfd.h"
735
+
736
+static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
737
+{
738
+ void *buf_rw = NULL, *buf_rx = MAP_FAILED;
739
+ int fd = -1;
740
+
741
+#ifdef __mips__
742
+ /* Find space for the RX mapping, vs the 256MiB regions. */
743
+ if (!alloc_code_gen_buffer_anon(size, PROT_NONE,
744
+ MAP_PRIVATE | MAP_ANONYMOUS |
745
+ MAP_NORESERVE, errp)) {
746
+ return false;
747
+ }
748
+ /* The size of the mapping may have been adjusted. */
749
+ size = tcg_ctx->code_gen_buffer_size;
750
+ buf_rx = tcg_ctx->code_gen_buffer;
751
+#endif
752
+
753
+ buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
754
+ if (buf_rw == NULL) {
755
+ goto fail;
756
+ }
757
+
758
+#ifdef __mips__
759
+ void *tmp = mmap(buf_rx, size, PROT_READ | PROT_EXEC,
760
+ MAP_SHARED | MAP_FIXED, fd, 0);
761
+ if (tmp != buf_rx) {
762
+ goto fail_rx;
763
+ }
764
+#else
765
+ buf_rx = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
766
+ if (buf_rx == MAP_FAILED) {
767
+ goto fail_rx;
768
+ }
769
+#endif
770
+
771
+ close(fd);
772
+ tcg_ctx->code_gen_buffer = buf_rw;
773
+ tcg_ctx->code_gen_buffer_size = size;
774
+ tcg_splitwx_diff = buf_rx - buf_rw;
775
+
776
+ /* Request large pages for the buffer and the splitwx. */
777
+ qemu_madvise(buf_rw, size, QEMU_MADV_HUGEPAGE);
778
+ qemu_madvise(buf_rx, size, QEMU_MADV_HUGEPAGE);
779
+ return true;
780
+
781
+ fail_rx:
782
+ error_setg_errno(errp, errno, "failed to map shared memory for execute");
783
+ fail:
784
+ if (buf_rx != MAP_FAILED) {
785
+ munmap(buf_rx, size);
786
+ }
787
+ if (buf_rw) {
788
+ munmap(buf_rw, size);
789
+ }
790
+ if (fd >= 0) {
791
+ close(fd);
792
+ }
793
+ return false;
794
+}
795
+#endif /* CONFIG_POSIX */
796
+
797
+#ifdef CONFIG_DARWIN
798
+#include <mach/mach.h>
799
+
800
+extern kern_return_t mach_vm_remap(vm_map_t target_task,
801
+ mach_vm_address_t *target_address,
802
+ mach_vm_size_t size,
803
+ mach_vm_offset_t mask,
804
+ int flags,
805
+ vm_map_t src_task,
806
+ mach_vm_address_t src_address,
807
+ boolean_t copy,
808
+ vm_prot_t *cur_protection,
809
+ vm_prot_t *max_protection,
810
+ vm_inherit_t inheritance);
811
+
812
+static bool alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
813
+{
814
+ kern_return_t ret;
815
+ mach_vm_address_t buf_rw, buf_rx;
816
+ vm_prot_t cur_prot, max_prot;
817
+
818
+ /* Map the read-write portion via normal anon memory. */
819
+ if (!alloc_code_gen_buffer_anon(size, PROT_READ | PROT_WRITE,
820
+ MAP_PRIVATE | MAP_ANONYMOUS, errp)) {
821
+ return false;
822
+ }
823
+
824
+ buf_rw = (mach_vm_address_t)tcg_ctx->code_gen_buffer;
825
+ buf_rx = 0;
826
+ ret = mach_vm_remap(mach_task_self(),
827
+ &buf_rx,
828
+ size,
829
+ 0,
830
+ VM_FLAGS_ANYWHERE,
831
+ mach_task_self(),
832
+ buf_rw,
833
+ false,
834
+ &cur_prot,
835
+ &max_prot,
836
+ VM_INHERIT_NONE);
837
+ if (ret != KERN_SUCCESS) {
838
+ /* TODO: Convert "ret" to a human readable error message. */
839
+ error_setg(errp, "vm_remap for jit splitwx failed");
840
+ munmap((void *)buf_rw, size);
841
+ return false;
842
+ }
843
+
844
+ if (mprotect((void *)buf_rx, size, PROT_READ | PROT_EXEC) != 0) {
845
+ error_setg_errno(errp, errno, "mprotect for jit splitwx");
846
+ munmap((void *)buf_rx, size);
847
+ munmap((void *)buf_rw, size);
848
+ return false;
849
+ }
850
+
851
+ tcg_splitwx_diff = buf_rx - buf_rw;
852
+ return true;
853
+}
854
+#endif /* CONFIG_DARWIN */
855
+#endif /* CONFIG_TCG_INTERPRETER */
856
+
857
+static bool alloc_code_gen_buffer_splitwx(size_t size, Error **errp)
858
+{
859
+#ifndef CONFIG_TCG_INTERPRETER
860
+# ifdef CONFIG_DARWIN
861
+ return alloc_code_gen_buffer_splitwx_vmremap(size, errp);
862
+# endif
863
+# ifdef CONFIG_POSIX
864
+ return alloc_code_gen_buffer_splitwx_memfd(size, errp);
865
+# endif
866
+#endif
867
+ error_setg(errp, "jit split-wx not supported");
868
+ return false;
869
+}
870
+
871
+static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
872
+{
873
+ ERRP_GUARD();
874
+ int prot, flags;
875
+
876
+ if (splitwx) {
877
+ if (alloc_code_gen_buffer_splitwx(size, errp)) {
878
+ return true;
879
+ }
880
+ /*
881
+ * If splitwx force-on (1), fail;
882
+ * if splitwx default-on (-1), fall through to splitwx off.
883
+ */
884
+ if (splitwx > 0) {
885
+ return false;
886
+ }
887
+ error_free_or_abort(errp);
888
+ }
889
+
890
+ prot = PROT_READ | PROT_WRITE | PROT_EXEC;
891
+ flags = MAP_PRIVATE | MAP_ANONYMOUS;
892
+#ifdef CONFIG_TCG_INTERPRETER
893
+ /* The tcg interpreter does not need execute permission. */
894
+ prot = PROT_READ | PROT_WRITE;
895
+#elif defined(CONFIG_DARWIN)
896
+ /* Applicable to both iOS and macOS (Apple Silicon). */
897
+ if (!splitwx) {
898
+ flags |= MAP_JIT;
899
+ }
900
+#endif
901
+
902
+ return alloc_code_gen_buffer_anon(size, prot, flags, errp);
903
+}
904
+#endif /* USE_STATIC_CODE_GEN_BUFFER, WIN32, POSIX */
905
+
906
/*
907
* Initializes region partitioning.
908
*
909
@@ -XXX,XX +XXX,XX @@ static size_t tcg_n_regions(void)
910
* in practice. Multi-threaded guests share most if not all of their translated
911
* code, which makes parallel code generation less appealing than in softmmu.
912
*/
913
-void tcg_region_init(void)
914
+void tcg_region_init(size_t tb_size, int splitwx)
915
{
916
- void *buf = tcg_init_ctx.code_gen_buffer;
917
- void *aligned;
918
- size_t size = tcg_init_ctx.code_gen_buffer_size;
919
- size_t page_size = qemu_real_host_page_size;
920
+ void *buf, *aligned;
921
+ size_t size;
922
+ size_t page_size;
923
size_t region_size;
924
size_t n_regions;
925
size_t i;
926
+ bool ok;
927
928
+ ok = alloc_code_gen_buffer(size_code_gen_buffer(tb_size),
929
+ splitwx, &error_fatal);
930
+ assert(ok);
931
+
932
+ buf = tcg_init_ctx.code_gen_buffer;
933
+ size = tcg_init_ctx.code_gen_buffer_size;
934
+ page_size = qemu_real_host_page_size;
935
n_regions = tcg_n_regions();
936
937
/* The first region will be 'aligned - buf' bytes larger than the others */
938
--
939
2.25.1
940
941
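As a worked example of the buffer sizing above, assuming a 64-bit
softmmu host (so DEFAULT_CODE_GEN_BUFFER_SIZE is 1 GiB): with no
explicit tb-size and 16 GiB of host RAM, size_code_gen_buffer()
returns MIN(1 GiB, 16 GiB / 8) = 1 GiB; with 2 GiB of RAM it returns
256 MiB. A request below MIN_CODE_GEN_BUFFER_SIZE (1 MiB) is rounded
up, and anything above the per-host MAX_CODE_GEN_BUFFER_SIZE is
clamped down.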
New patch
1
We will shortly want to use tcg_init for something else.
2
Since the hook is called init_machine, match that.
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
accel/tcg/tcg-all.c | 4 ++--
10
1 file changed, 2 insertions(+), 2 deletions(-)
11
12
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/accel/tcg/tcg-all.c
15
+++ b/accel/tcg/tcg-all.c
16
@@ -XXX,XX +XXX,XX @@ static void tcg_accel_instance_init(Object *obj)
17
18
bool mttcg_enabled;
19
20
-static int tcg_init(MachineState *ms)
21
+static int tcg_init_machine(MachineState *ms)
22
{
23
TCGState *s = TCG_STATE(current_accel());
24
25
@@ -XXX,XX +XXX,XX @@ static void tcg_accel_class_init(ObjectClass *oc, void *data)
26
{
27
AccelClass *ac = ACCEL_CLASS(oc);
28
ac->name = "tcg";
29
- ac->init_machine = tcg_init;
30
+ ac->init_machine = tcg_init_machine;
31
ac->allowed = &tcg_allowed;
32
33
object_class_property_add_str(oc, "thread",
34
--
35
2.25.1
36
37
diff view generated by jsdifflib
New patch
1
Perform both tcg_context_init and tcg_region_init.
2
Do not leave this split to the caller.
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
include/tcg/tcg.h | 3 +--
9
tcg/tcg-internal.h | 1 +
10
accel/tcg/translate-all.c | 3 +--
11
tcg/tcg.c | 9 ++++++++-
12
4 files changed, 11 insertions(+), 5 deletions(-)
13
14
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/tcg/tcg.h
17
+++ b/include/tcg/tcg.h
18
@@ -XXX,XX +XXX,XX @@ void *tcg_malloc_internal(TCGContext *s, int size);
19
void tcg_pool_reset(TCGContext *s);
20
TranslationBlock *tcg_tb_alloc(TCGContext *s);
21
22
-void tcg_region_init(size_t tb_size, int splitwx);
23
void tb_destroy(TranslationBlock *tb);
24
void tcg_region_reset_all(void);
25
26
@@ -XXX,XX +XXX,XX @@ static inline void *tcg_malloc(int size)
27
}
28
}
29
30
-void tcg_context_init(TCGContext *s);
31
+void tcg_init(size_t tb_size, int splitwx);
32
void tcg_register_thread(void);
33
void tcg_prologue_init(TCGContext *s);
34
void tcg_func_start(TCGContext *s);
35
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
36
index XXXXXXX..XXXXXXX 100644
37
--- a/tcg/tcg-internal.h
38
+++ b/tcg/tcg-internal.h
39
@@ -XXX,XX +XXX,XX @@
40
extern TCGContext **tcg_ctxs;
41
extern unsigned int n_tcg_ctxs;
42
43
+void tcg_region_init(size_t tb_size, int splitwx);
44
bool tcg_region_alloc(TCGContext *s);
45
void tcg_region_initial_alloc(TCGContext *s);
46
void tcg_region_prologue_set(TCGContext *s);
47
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/accel/tcg/translate-all.c
50
+++ b/accel/tcg/translate-all.c
51
@@ -XXX,XX +XXX,XX @@ static void tb_htable_init(void)
52
void tcg_exec_init(unsigned long tb_size, int splitwx)
53
{
54
tcg_allowed = true;
55
- tcg_context_init(&tcg_init_ctx);
56
page_init();
57
tb_htable_init();
58
- tcg_region_init(tb_size, splitwx);
59
+ tcg_init(tb_size, splitwx);
60
61
#if defined(CONFIG_SOFTMMU)
62
/* There's no guest base to take into account, so go ahead and
63
diff --git a/tcg/tcg.c b/tcg/tcg.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/tcg/tcg.c
66
+++ b/tcg/tcg.c
67
@@ -XXX,XX +XXX,XX @@ static void process_op_defs(TCGContext *s);
68
static TCGTemp *tcg_global_reg_new_internal(TCGContext *s, TCGType type,
69
TCGReg reg, const char *name);
70
71
-void tcg_context_init(TCGContext *s)
72
+static void tcg_context_init(void)
73
{
74
+ TCGContext *s = &tcg_init_ctx;
75
int op, total_args, n, i;
76
TCGOpDef *def;
77
TCGArgConstraint *args_ct;
78
@@ -XXX,XX +XXX,XX @@ void tcg_context_init(TCGContext *s)
79
cpu_env = temp_tcgv_ptr(ts);
80
}
81
82
+void tcg_init(size_t tb_size, int splitwx)
83
+{
84
+ tcg_context_init();
85
+ tcg_region_init(tb_size, splitwx);
86
+}
87
+
88
/*
89
* Allocate TBs right before their corresponding translated code, making
90
* sure that TBs and code are on different cache lines.
91
--
92
2.25.1
93
94
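For reference, a sketch of the intended call pattern for the new
combined entry point; this mirrors the tcg_init_machine() hunk in a
later patch of this series and is illustrative only, not extra code
to apply:

    /* Accelerator init hook: one call now performs the whole TCG
     * start-up instead of tcg_context_init() + tcg_region_init(). */
    tcg_allowed = true;
    page_init();
    tb_htable_init();
    tcg_init(s->tb_size * 1024 * 1024, s->splitwx_enabled);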
1
From: Claudio Fontana <cfontana@suse.de>
1
There is only one caller, and shortly we will need access
2
to the MachineState, which tcg_init_machine already has.
2
3
3
after the initial split into 3 tcg variants, we proceed to also
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
4
split tcg_start_vcpu_thread.
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
6
We actually split it in 2 this time, since the icount variant
7
just uses the round robin function.
8
9
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Claudio Fontana <cfontana@suse.de>
11
Message-Id: <20201015143217.29337-3-cfontana@suse.de>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
---
7
---
14
accel/tcg/tcg-cpus-mttcg.h | 21 --------------
8
accel/tcg/internal.h | 2 ++
15
accel/tcg/tcg-cpus-rr.h | 3 +-
9
include/sysemu/tcg.h | 2 --
16
accel/tcg/tcg-cpus.h | 1 -
10
accel/tcg/tcg-all.c | 16 +++++++++++++++-
17
accel/tcg/tcg-all.c | 5 ++++
11
accel/tcg/translate-all.c | 21 ++-------------------
18
accel/tcg/tcg-cpus-icount.c | 2 +-
12
bsd-user/main.c | 2 +-
19
accel/tcg/tcg-cpus-mttcg.c | 29 +++++++++++++++++--
13
5 files changed, 20 insertions(+), 23 deletions(-)
20
accel/tcg/tcg-cpus-rr.c | 39 +++++++++++++++++++++++--
21
accel/tcg/tcg-cpus.c | 58 -------------------------------------
22
8 files changed, 71 insertions(+), 87 deletions(-)
23
delete mode 100644 accel/tcg/tcg-cpus-mttcg.h
24
14
25
diff --git a/accel/tcg/tcg-cpus-mttcg.h b/accel/tcg/tcg-cpus-mttcg.h
15
diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
26
deleted file mode 100644
16
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX
17
--- a/accel/tcg/internal.h
28
--- a/accel/tcg/tcg-cpus-mttcg.h
18
+++ b/accel/tcg/internal.h
29
+++ /dev/null
19
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
20
int cflags);
21
22
void QEMU_NORETURN cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
23
+void page_init(void);
24
+void tb_htable_init(void);
25
26
#endif /* ACCEL_TCG_INTERNAL_H */
27
diff --git a/include/sysemu/tcg.h b/include/sysemu/tcg.h
28
index XXXXXXX..XXXXXXX 100644
29
--- a/include/sysemu/tcg.h
30
+++ b/include/sysemu/tcg.h
30
@@ -XXX,XX +XXX,XX @@
31
@@ -XXX,XX +XXX,XX @@
31
-/*
32
#ifndef SYSEMU_TCG_H
32
- * QEMU TCG Multi Threaded vCPUs implementation
33
#define SYSEMU_TCG_H
33
- *
34
34
- * Copyright 2020 SUSE LLC
35
-void tcg_exec_init(unsigned long tb_size, int splitwx);
35
- *
36
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
37
- * See the COPYING file in the top-level directory.
38
- */
39
-
36
-
40
-#ifndef TCG_CPUS_MTTCG_H
37
#ifdef CONFIG_TCG
41
-#define TCG_CPUS_MTTCG_H
38
extern bool tcg_allowed;
42
-
39
#define tcg_enabled() (tcg_allowed)
43
-/*
44
- * In the multi-threaded case each vCPU has its own thread. The TLS
45
- * variable current_cpu can be used deep in the code to find the
46
- * current CPUState for a given thread.
47
- */
48
-
49
-void *tcg_cpu_thread_fn(void *arg);
50
-
51
-#endif /* TCG_CPUS_MTTCG_H */
52
diff --git a/accel/tcg/tcg-cpus-rr.h b/accel/tcg/tcg-cpus-rr.h
53
index XXXXXXX..XXXXXXX 100644
54
--- a/accel/tcg/tcg-cpus-rr.h
55
+++ b/accel/tcg/tcg-cpus-rr.h
56
@@ -XXX,XX +XXX,XX @@
57
/* Kick all RR vCPUs. */
58
void qemu_cpu_kick_rr_cpus(CPUState *unused);
59
60
-void *tcg_rr_cpu_thread_fn(void *arg);
61
+/* start the round robin vcpu thread */
62
+void rr_start_vcpu_thread(CPUState *cpu);
63
64
#endif /* TCG_CPUS_RR_H */
65
diff --git a/accel/tcg/tcg-cpus.h b/accel/tcg/tcg-cpus.h
66
index XXXXXXX..XXXXXXX 100644
67
--- a/accel/tcg/tcg-cpus.h
68
+++ b/accel/tcg/tcg-cpus.h
69
@@ -XXX,XX +XXX,XX @@ extern const CpusAccel tcg_cpus_mttcg;
70
extern const CpusAccel tcg_cpus_icount;
71
extern const CpusAccel tcg_cpus_rr;
72
73
-void tcg_start_vcpu_thread(CPUState *cpu);
74
void qemu_tcg_destroy_vcpu(CPUState *cpu);
75
int tcg_cpu_exec(CPUState *cpu);
76
void tcg_handle_interrupt(CPUState *cpu, int mask);
77
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
40
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
78
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
79
--- a/accel/tcg/tcg-all.c
42
--- a/accel/tcg/tcg-all.c
80
+++ b/accel/tcg/tcg-all.c
43
+++ b/accel/tcg/tcg-all.c
81
@@ -XXX,XX +XXX,XX @@ static int tcg_init(MachineState *ms)
44
@@ -XXX,XX +XXX,XX @@
82
tcg_exec_init(s->tb_size * 1024 * 1024);
45
#include "qemu/error-report.h"
46
#include "qemu/accel.h"
47
#include "qapi/qapi-builtin-visit.h"
48
+#include "internal.h"
49
50
struct TCGState {
51
AccelState parent_obj;
52
@@ -XXX,XX +XXX,XX @@ static int tcg_init_machine(MachineState *ms)
53
{
54
TCGState *s = TCG_STATE(current_accel());
55
56
- tcg_exec_init(s->tb_size * 1024 * 1024, s->splitwx_enabled);
57
+ tcg_allowed = true;
83
mttcg_enabled = s->mttcg_enabled;
58
mttcg_enabled = s->mttcg_enabled;
84
59
+
60
+ page_init();
61
+ tb_htable_init();
62
+ tcg_init(s->tb_size * 1024 * 1024, s->splitwx_enabled);
63
+
64
+#if defined(CONFIG_SOFTMMU)
85
+ /*
65
+ /*
86
+ * Initialize TCG regions
66
+ * There's no guest base to take into account, so go ahead and
67
+ * initialize the prologue now.
87
+ */
68
+ */
88
+ tcg_region_init();
69
+ tcg_prologue_init(tcg_ctx);
70
+#endif
89
+
71
+
90
if (mttcg_enabled) {
72
return 0;
91
cpus_register_accel(&tcg_cpus_mttcg);
73
}
92
} else if (icount_enabled()) {
74
93
diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c
75
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
94
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
95
--- a/accel/tcg/tcg-cpus-icount.c
77
--- a/accel/tcg/translate-all.c
96
+++ b/accel/tcg/tcg-cpus-icount.c
78
+++ b/accel/tcg/translate-all.c
97
@@ -XXX,XX +XXX,XX @@ static void icount_handle_interrupt(CPUState *cpu, int mask)
79
@@ -XXX,XX +XXX,XX @@ bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc, bool will_exit)
80
return false;
98
}
81
}
99
82
100
const CpusAccel tcg_cpus_icount = {
83
-static void page_init(void)
101
- .create_vcpu_thread = tcg_start_vcpu_thread,
84
+void page_init(void)
102
+ .create_vcpu_thread = rr_start_vcpu_thread,
103
.kick_vcpu_thread = qemu_cpu_kick_rr_cpus,
104
105
.handle_interrupt = icount_handle_interrupt,
106
diff --git a/accel/tcg/tcg-cpus-mttcg.c b/accel/tcg/tcg-cpus-mttcg.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/accel/tcg/tcg-cpus-mttcg.c
109
+++ b/accel/tcg/tcg-cpus-mttcg.c
110
@@ -XXX,XX +XXX,XX @@
111
#include "hw/boards.h"
112
113
#include "tcg-cpus.h"
114
-#include "tcg-cpus-mttcg.h"
115
116
/*
117
* In the multi-threaded case each vCPU has its own thread. The TLS
118
@@ -XXX,XX +XXX,XX @@
119
* current CPUState for a given thread.
120
*/
121
122
-void *tcg_cpu_thread_fn(void *arg)
123
+static void *tcg_cpu_thread_fn(void *arg)
124
{
85
{
125
CPUState *cpu = arg;
86
page_size_init();
126
87
page_table_config_init();
127
@@ -XXX,XX +XXX,XX @@ static void mttcg_kick_vcpu_thread(CPUState *cpu)
88
@@ -XXX,XX +XXX,XX @@ static bool tb_cmp(const void *ap, const void *bp)
128
cpu_exit(cpu);
89
a->page_addr[1] == b->page_addr[1];
129
}
90
}
130
91
131
+static void mttcg_start_vcpu_thread(CPUState *cpu)
92
-static void tb_htable_init(void)
132
+{
93
+void tb_htable_init(void)
133
+ char thread_name[VCPU_THREAD_NAME_SIZE];
134
+
135
+ g_assert(tcg_enabled());
136
+
137
+ parallel_cpus = (current_machine->smp.max_cpus > 1);
138
+
139
+ cpu->thread = g_malloc0(sizeof(QemuThread));
140
+ cpu->halt_cond = g_malloc0(sizeof(QemuCond));
141
+ qemu_cond_init(cpu->halt_cond);
142
+
143
+ /* create a thread per vCPU with TCG (MTTCG) */
144
+ snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
145
+ cpu->cpu_index);
146
+
147
+ qemu_thread_create(cpu->thread, thread_name, tcg_cpu_thread_fn,
148
+ cpu, QEMU_THREAD_JOINABLE);
149
+
150
+#ifdef _WIN32
151
+ cpu->hThread = qemu_thread_get_handle(cpu->thread);
152
+#endif
153
+}
154
+
155
const CpusAccel tcg_cpus_mttcg = {
156
- .create_vcpu_thread = tcg_start_vcpu_thread,
157
+ .create_vcpu_thread = mttcg_start_vcpu_thread,
158
.kick_vcpu_thread = mttcg_kick_vcpu_thread,
159
160
.handle_interrupt = tcg_handle_interrupt,
161
diff --git a/accel/tcg/tcg-cpus-rr.c b/accel/tcg/tcg-cpus-rr.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/accel/tcg/tcg-cpus-rr.c
164
+++ b/accel/tcg/tcg-cpus-rr.c
165
@@ -XXX,XX +XXX,XX @@ static void deal_with_unplugged_cpus(void)
166
* elsewhere.
167
*/
168
169
-void *tcg_rr_cpu_thread_fn(void *arg)
170
+static void *tcg_rr_cpu_thread_fn(void *arg)
171
{
94
{
172
CPUState *cpu = arg;
95
unsigned int mode = QHT_MODE_AUTO_RESIZE;
173
96
174
@@ -XXX,XX +XXX,XX @@ void *tcg_rr_cpu_thread_fn(void *arg)
97
qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
175
return NULL;
176
}
98
}
177
99
178
+void rr_start_vcpu_thread(CPUState *cpu)
100
-/* Must be called before using the QEMU cpus. 'tb_size' is the size
179
+{
101
- (in bytes) allocated to the translation buffer. Zero means default
180
+ char thread_name[VCPU_THREAD_NAME_SIZE];
102
- size. */
181
+ static QemuCond *single_tcg_halt_cond;
103
-void tcg_exec_init(unsigned long tb_size, int splitwx)
182
+ static QemuThread *single_tcg_cpu_thread;
183
+
184
+ g_assert(tcg_enabled());
185
+ parallel_cpus = false;
186
+
187
+ if (!single_tcg_cpu_thread) {
188
+ cpu->thread = g_malloc0(sizeof(QemuThread));
189
+ cpu->halt_cond = g_malloc0(sizeof(QemuCond));
190
+ qemu_cond_init(cpu->halt_cond);
191
+
192
+ /* share a single thread for all cpus with TCG */
193
+ snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
194
+ qemu_thread_create(cpu->thread, thread_name,
195
+ tcg_rr_cpu_thread_fn,
196
+ cpu, QEMU_THREAD_JOINABLE);
197
+
198
+ single_tcg_halt_cond = cpu->halt_cond;
199
+ single_tcg_cpu_thread = cpu->thread;
200
+#ifdef _WIN32
201
+ cpu->hThread = qemu_thread_get_handle(cpu->thread);
202
+#endif
203
+ } else {
204
+ /* we share the thread */
205
+ cpu->thread = single_tcg_cpu_thread;
206
+ cpu->halt_cond = single_tcg_halt_cond;
207
+ cpu->thread_id = first_cpu->thread_id;
208
+ cpu->can_do_io = 1;
209
+ cpu->created = true;
210
+ }
211
+}
212
+
213
const CpusAccel tcg_cpus_rr = {
214
- .create_vcpu_thread = tcg_start_vcpu_thread,
215
+ .create_vcpu_thread = rr_start_vcpu_thread,
216
.kick_vcpu_thread = qemu_cpu_kick_rr_cpus,
217
218
.handle_interrupt = tcg_handle_interrupt,
219
diff --git a/accel/tcg/tcg-cpus.c b/accel/tcg/tcg-cpus.c
220
index XXXXXXX..XXXXXXX 100644
221
--- a/accel/tcg/tcg-cpus.c
222
+++ b/accel/tcg/tcg-cpus.c
223
@@ -XXX,XX +XXX,XX @@
224
#include "hw/boards.h"
225
226
#include "tcg-cpus.h"
227
-#include "tcg-cpus-mttcg.h"
228
-#include "tcg-cpus-rr.h"
229
230
/* common functionality among all TCG variants */
231
232
-void tcg_start_vcpu_thread(CPUState *cpu)
233
-{
104
-{
234
- char thread_name[VCPU_THREAD_NAME_SIZE];
105
- tcg_allowed = true;
235
- static QemuCond *single_tcg_halt_cond;
106
- page_init();
236
- static QemuThread *single_tcg_cpu_thread;
107
- tb_htable_init();
237
- static int tcg_region_inited;
108
- tcg_init(tb_size, splitwx);
238
-
109
-
239
- assert(tcg_enabled());
110
-#if defined(CONFIG_SOFTMMU)
240
- /*
111
- /* There's no guest base to take into account, so go ahead and
241
- * Initialize TCG regions--once. Now is a good time, because:
112
- initialize the prologue now. */
242
- * (1) TCG's init context, prologue and target globals have been set up.
113
- tcg_prologue_init(tcg_ctx);
243
- * (2) qemu_tcg_mttcg_enabled() works now (TCG init code runs before the
244
- * -accel flag is processed, so the check doesn't work then).
245
- */
246
- if (!tcg_region_inited) {
247
- tcg_region_inited = 1;
248
- tcg_region_init();
249
- parallel_cpus = qemu_tcg_mttcg_enabled() && current_machine->smp.max_cpus > 1;
250
- }
251
-
252
- if (qemu_tcg_mttcg_enabled() || !single_tcg_cpu_thread) {
253
- cpu->thread = g_malloc0(sizeof(QemuThread));
254
- cpu->halt_cond = g_malloc0(sizeof(QemuCond));
255
- qemu_cond_init(cpu->halt_cond);
256
-
257
- if (qemu_tcg_mttcg_enabled()) {
258
- /* create a thread per vCPU with TCG (MTTCG) */
259
- snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
260
- cpu->cpu_index);
261
-
262
- qemu_thread_create(cpu->thread, thread_name, tcg_cpu_thread_fn,
263
- cpu, QEMU_THREAD_JOINABLE);
264
-
265
- } else {
266
- /* share a single thread for all cpus with TCG */
267
- snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
268
- qemu_thread_create(cpu->thread, thread_name,
269
- tcg_rr_cpu_thread_fn,
270
- cpu, QEMU_THREAD_JOINABLE);
271
-
272
- single_tcg_halt_cond = cpu->halt_cond;
273
- single_tcg_cpu_thread = cpu->thread;
274
- }
275
-#ifdef _WIN32
276
- cpu->hThread = qemu_thread_get_handle(cpu->thread);
277
-#endif
114
-#endif
278
- } else {
279
- /* For non-MTTCG cases we share the thread */
280
- cpu->thread = single_tcg_cpu_thread;
281
- cpu->halt_cond = single_tcg_halt_cond;
282
- cpu->thread_id = first_cpu->thread_id;
283
- cpu->can_do_io = 1;
284
- cpu->created = true;
285
- }
286
-}
115
-}
287
-
116
-
288
void qemu_tcg_destroy_vcpu(CPUState *cpu)
117
/* call with @p->lock held */
118
static inline void invalidate_page_bitmap(PageDesc *p)
289
{
119
{
290
cpu_thread_signal_destroyed(cpu);
120
diff --git a/bsd-user/main.c b/bsd-user/main.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/bsd-user/main.c
123
+++ b/bsd-user/main.c
124
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
125
envlist_free(envlist);
126
127
/*
128
- * Now that page sizes are configured in tcg_exec_init() we can do
129
+ * Now that page sizes are configured we can do
130
* proper page alignment for guest_base.
131
*/
132
guest_base = HOST_PAGE_ALIGN(guest_base);
291
--
133
--
292
2.25.1
134
2.25.1
293
135
294
136
New patch
1
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
3
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
---
6
accel/tcg/tcg-all.c | 3 ++-
7
1 file changed, 2 insertions(+), 1 deletion(-)
1
8
9
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
10
index XXXXXXX..XXXXXXX 100644
11
--- a/accel/tcg/tcg-all.c
12
+++ b/accel/tcg/tcg-all.c
13
@@ -XXX,XX +XXX,XX @@
14
#include "qemu/error-report.h"
15
#include "qemu/accel.h"
16
#include "qapi/qapi-builtin-visit.h"
17
+#include "qemu/units.h"
18
#include "internal.h"
19
20
struct TCGState {
21
@@ -XXX,XX +XXX,XX @@ static int tcg_init_machine(MachineState *ms)
22
23
page_init();
24
tb_htable_init();
25
- tcg_init(s->tb_size * 1024 * 1024, s->splitwx_enabled);
26
+ tcg_init(s->tb_size * MiB, s->splitwx_enabled);
27
28
#if defined(CONFIG_SOFTMMU)
29
/*
30
--
31
2.25.1
32
33
1
From: Claudio Fontana <cfontana@suse.de>
1
Start removing the include of hw/boards.h from tcg/.
2
Pass down the max_cpus value from tcg_init_machine,
3
where we have the MachineState already.
2
4
3
Signed-off-by: Claudio Fontana <cfontana@suse.de>
5
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Message-Id: <20201015143217.29337-4-cfontana@suse.de>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
9
---
8
accel/tcg/tcg-cpus-icount.h | 6 +--
10
include/tcg/tcg.h | 2 +-
9
accel/tcg/tcg-cpus-rr.h | 2 +-
11
tcg/tcg-internal.h | 2 +-
10
accel/tcg/tcg-cpus.h | 6 +--
12
accel/tcg/tcg-all.c | 10 +++++++++-
11
accel/tcg/tcg-cpus-icount.c | 24 ++++++------
13
tcg/region.c | 32 +++++++++++---------------------
12
accel/tcg/tcg-cpus-mttcg.c | 10 ++---
14
tcg/tcg.c | 10 ++++------
13
accel/tcg/tcg-cpus-rr.c | 74 ++++++++++++++++++-------------------
15
5 files changed, 26 insertions(+), 30 deletions(-)
14
accel/tcg/tcg-cpus.c | 6 +--
15
7 files changed, 64 insertions(+), 64 deletions(-)
16
16
17
diff --git a/accel/tcg/tcg-cpus-icount.h b/accel/tcg/tcg-cpus-icount.h
17
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/tcg-cpus-icount.h
19
--- a/include/tcg/tcg.h
20
+++ b/accel/tcg/tcg-cpus-icount.h
20
+++ b/include/tcg/tcg.h
21
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static inline void *tcg_malloc(int size)
22
#ifndef TCG_CPUS_ICOUNT_H
23
#define TCG_CPUS_ICOUNT_H
24
25
-void handle_icount_deadline(void);
26
-void prepare_icount_for_run(CPUState *cpu);
27
-void process_icount_data(CPUState *cpu);
28
+void icount_handle_deadline(void);
29
+void icount_prepare_for_run(CPUState *cpu);
30
+void icount_process_data(CPUState *cpu);
31
32
#endif /* TCG_CPUS_ICOUNT_H */
33
diff --git a/accel/tcg/tcg-cpus-rr.h b/accel/tcg/tcg-cpus-rr.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/accel/tcg/tcg-cpus-rr.h
36
+++ b/accel/tcg/tcg-cpus-rr.h
37
@@ -XXX,XX +XXX,XX @@
38
#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
39
40
/* Kick all RR vCPUs. */
41
-void qemu_cpu_kick_rr_cpus(CPUState *unused);
42
+void rr_kick_vcpu_thread(CPUState *unused);
43
44
/* start the round robin vcpu thread */
45
void rr_start_vcpu_thread(CPUState *cpu);
46
diff --git a/accel/tcg/tcg-cpus.h b/accel/tcg/tcg-cpus.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/accel/tcg/tcg-cpus.h
49
+++ b/accel/tcg/tcg-cpus.h
50
@@ -XXX,XX +XXX,XX @@ extern const CpusAccel tcg_cpus_mttcg;
51
extern const CpusAccel tcg_cpus_icount;
52
extern const CpusAccel tcg_cpus_rr;
53
54
-void qemu_tcg_destroy_vcpu(CPUState *cpu);
55
-int tcg_cpu_exec(CPUState *cpu);
56
-void tcg_handle_interrupt(CPUState *cpu, int mask);
57
+void tcg_cpus_destroy(CPUState *cpu);
58
+int tcg_cpus_exec(CPUState *cpu);
59
+void tcg_cpus_handle_interrupt(CPUState *cpu, int mask);
60
61
#endif /* TCG_CPUS_H */
62
diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c
63
index XXXXXXX..XXXXXXX 100644
64
--- a/accel/tcg/tcg-cpus-icount.c
65
+++ b/accel/tcg/tcg-cpus-icount.c
66
@@ -XXX,XX +XXX,XX @@
67
#include "tcg-cpus-icount.h"
68
#include "tcg-cpus-rr.h"
69
70
-static int64_t tcg_get_icount_limit(void)
71
+static int64_t icount_get_limit(void)
72
{
73
int64_t deadline;
74
75
@@ -XXX,XX +XXX,XX @@ static int64_t tcg_get_icount_limit(void)
76
}
22
}
77
}
23
}
78
24
79
-static void notify_aio_contexts(void)
25
-void tcg_init(size_t tb_size, int splitwx);
80
+static void icount_notify_aio_contexts(void)
26
+void tcg_init(size_t tb_size, int splitwx, unsigned max_cpus);
27
void tcg_register_thread(void);
28
void tcg_prologue_init(TCGContext *s);
29
void tcg_func_start(TCGContext *s);
30
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
31
index XXXXXXX..XXXXXXX 100644
32
--- a/tcg/tcg-internal.h
33
+++ b/tcg/tcg-internal.h
34
@@ -XXX,XX +XXX,XX @@
35
extern TCGContext **tcg_ctxs;
36
extern unsigned int n_tcg_ctxs;
37
38
-void tcg_region_init(size_t tb_size, int splitwx);
39
+void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus);
40
bool tcg_region_alloc(TCGContext *s);
41
void tcg_region_initial_alloc(TCGContext *s);
42
void tcg_region_prologue_set(TCGContext *s);
43
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/accel/tcg/tcg-all.c
46
+++ b/accel/tcg/tcg-all.c
47
@@ -XXX,XX +XXX,XX @@
48
#include "qemu/accel.h"
49
#include "qapi/qapi-builtin-visit.h"
50
#include "qemu/units.h"
51
+#if !defined(CONFIG_USER_ONLY)
52
+#include "hw/boards.h"
53
+#endif
54
#include "internal.h"
55
56
struct TCGState {
57
@@ -XXX,XX +XXX,XX @@ bool mttcg_enabled;
58
static int tcg_init_machine(MachineState *ms)
81
{
59
{
82
/* Wake up other AioContexts. */
60
TCGState *s = TCG_STATE(current_accel());
83
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
61
+#ifdef CONFIG_USER_ONLY
84
qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
62
+ unsigned max_cpus = 1;
63
+#else
64
+ unsigned max_cpus = ms->smp.max_cpus;
65
+#endif
66
67
tcg_allowed = true;
68
mttcg_enabled = s->mttcg_enabled;
69
70
page_init();
71
tb_htable_init();
72
- tcg_init(s->tb_size * MiB, s->splitwx_enabled);
73
+ tcg_init(s->tb_size * MiB, s->splitwx_enabled, max_cpus);
74
75
#if defined(CONFIG_SOFTMMU)
76
/*
77
diff --git a/tcg/region.c b/tcg/region.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/tcg/region.c
80
+++ b/tcg/region.c
81
@@ -XXX,XX +XXX,XX @@
82
#include "qapi/error.h"
83
#include "exec/exec-all.h"
84
#include "tcg/tcg.h"
85
-#if !defined(CONFIG_USER_ONLY)
86
-#include "hw/boards.h"
87
-#endif
88
#include "tcg-internal.h"
89
90
91
@@ -XXX,XX +XXX,XX @@ void tcg_region_reset_all(void)
92
tcg_region_tree_reset_all();
85
}
93
}
86
94
87
-void handle_icount_deadline(void)
95
+static size_t tcg_n_regions(unsigned max_cpus)
88
+void icount_handle_deadline(void)
96
+{
97
#ifdef CONFIG_USER_ONLY
98
-static size_t tcg_n_regions(void)
99
-{
100
return 1;
101
-}
102
#else
103
-/*
104
- * It is likely that some vCPUs will translate more code than others, so we
105
- * first try to set more regions than max_cpus, with those regions being of
106
- * reasonable size. If that's not possible we make do by evenly dividing
107
- * the code_gen_buffer among the vCPUs.
108
- */
109
-static size_t tcg_n_regions(void)
110
-{
111
+ /*
112
+ * It is likely that some vCPUs will translate more code than others,
113
+ * so we first try to set more regions than max_cpus, with those regions
114
+ * being of reasonable size. If that's not possible we make do by evenly
115
+ * dividing the code_gen_buffer among the vCPUs.
116
+ */
117
size_t i;
118
119
/* Use a single region if all we have is one vCPU thread */
120
-#if !defined(CONFIG_USER_ONLY)
121
- MachineState *ms = MACHINE(qdev_get_machine());
122
- unsigned int max_cpus = ms->smp.max_cpus;
123
-#endif
124
if (max_cpus == 1 || !qemu_tcg_mttcg_enabled()) {
125
return 1;
126
}
127
@@ -XXX,XX +XXX,XX @@ static size_t tcg_n_regions(void)
128
}
129
/* If we can't, then just allocate one region per vCPU thread */
130
return max_cpus;
131
-}
132
#endif
133
+}
134
135
/*
136
* Minimum size of the code gen buffer. This number is randomly chosen,
137
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
138
* in practice. Multi-threaded guests share most if not all of their translated
139
* code, which makes parallel code generation less appealing than in softmmu.
140
*/
141
-void tcg_region_init(size_t tb_size, int splitwx)
142
+void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
89
{
143
{
90
assert(qemu_in_vcpu_thread());
144
void *buf, *aligned;
91
int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
145
size_t size;
92
QEMU_TIMER_ATTR_ALL);
146
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx)
93
147
buf = tcg_init_ctx.code_gen_buffer;
94
if (deadline == 0) {
148
size = tcg_init_ctx.code_gen_buffer_size;
95
- notify_aio_contexts();
149
page_size = qemu_real_host_page_size;
96
+ icount_notify_aio_contexts();
150
- n_regions = tcg_n_regions();
97
}
151
+ n_regions = tcg_n_regions(max_cpus);
152
153
/* The first region will be 'aligned - buf' bytes larger than the others */
154
aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
155
diff --git a/tcg/tcg.c b/tcg/tcg.c
156
index XXXXXXX..XXXXXXX 100644
157
--- a/tcg/tcg.c
158
+++ b/tcg/tcg.c
159
@@ -XXX,XX +XXX,XX @@ static void process_op_defs(TCGContext *s);
160
static TCGTemp *tcg_global_reg_new_internal(TCGContext *s, TCGType type,
161
TCGReg reg, const char *name);
162
163
-static void tcg_context_init(void)
164
+static void tcg_context_init(unsigned max_cpus)
165
{
166
TCGContext *s = &tcg_init_ctx;
167
int op, total_args, n, i;
168
@@ -XXX,XX +XXX,XX @@ static void tcg_context_init(void)
169
tcg_ctxs = &tcg_ctx;
170
n_tcg_ctxs = 1;
171
#else
172
- MachineState *ms = MACHINE(qdev_get_machine());
173
- unsigned int max_cpus = ms->smp.max_cpus;
174
tcg_ctxs = g_new(TCGContext *, max_cpus);
175
#endif
176
177
@@ -XXX,XX +XXX,XX @@ static void tcg_context_init(void)
178
cpu_env = temp_tcgv_ptr(ts);
98
}
179
}
99
180
100
-void prepare_icount_for_run(CPUState *cpu)
181
-void tcg_init(size_t tb_size, int splitwx)
101
+void icount_prepare_for_run(CPUState *cpu)
182
+void tcg_init(size_t tb_size, int splitwx, unsigned max_cpus)
102
{
183
{
103
int insns_left;
184
- tcg_context_init();
104
185
- tcg_region_init(tb_size, splitwx);
105
/*
186
+ tcg_context_init(max_cpus);
106
- * These should always be cleared by process_icount_data after
187
+ tcg_region_init(tb_size, splitwx, max_cpus);
107
+ * These should always be cleared by icount_process_data after
108
* each vCPU execution. However u16.high can be raised
109
- * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
110
+ * asynchronously by cpu_exit/cpu_interrupt/tcg_cpus_handle_interrupt
111
*/
112
g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0);
113
g_assert(cpu->icount_extra == 0);
114
115
- cpu->icount_budget = tcg_get_icount_limit();
116
+ cpu->icount_budget = icount_get_limit();
117
insns_left = MIN(0xffff, cpu->icount_budget);
118
cpu_neg(cpu)->icount_decr.u16.low = insns_left;
119
cpu->icount_extra = cpu->icount_budget - insns_left;
120
@@ -XXX,XX +XXX,XX @@ void prepare_icount_for_run(CPUState *cpu)
121
replay_mutex_lock();
122
123
if (cpu->icount_budget == 0 && replay_has_checkpoint()) {
124
- notify_aio_contexts();
125
+ icount_notify_aio_contexts();
126
}
127
}
188
}
128
189
129
-void process_icount_data(CPUState *cpu)
190
/*
130
+void icount_process_data(CPUState *cpu)
131
{
132
/* Account for executed instructions */
133
icount_update(cpu);
134
@@ -XXX,XX +XXX,XX @@ static void icount_handle_interrupt(CPUState *cpu, int mask)
135
{
136
int old_mask = cpu->interrupt_request;
137
138
- tcg_handle_interrupt(cpu, mask);
139
+ tcg_cpus_handle_interrupt(cpu, mask);
140
if (qemu_cpu_is_self(cpu) &&
141
!cpu->can_do_io
142
&& (mask & ~old_mask) != 0) {
143
@@ -XXX,XX +XXX,XX @@ static void icount_handle_interrupt(CPUState *cpu, int mask)
144
145
const CpusAccel tcg_cpus_icount = {
146
.create_vcpu_thread = rr_start_vcpu_thread,
147
- .kick_vcpu_thread = qemu_cpu_kick_rr_cpus,
148
+ .kick_vcpu_thread = rr_kick_vcpu_thread,
149
150
.handle_interrupt = icount_handle_interrupt,
151
.get_virtual_clock = icount_get,
152
diff --git a/accel/tcg/tcg-cpus-mttcg.c b/accel/tcg/tcg-cpus-mttcg.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/accel/tcg/tcg-cpus-mttcg.c
155
+++ b/accel/tcg/tcg-cpus-mttcg.c
156
@@ -XXX,XX +XXX,XX @@
157
* current CPUState for a given thread.
158
*/
159
160
-static void *tcg_cpu_thread_fn(void *arg)
161
+static void *mttcg_cpu_thread_fn(void *arg)
162
{
163
CPUState *cpu = arg;
164
165
@@ -XXX,XX +XXX,XX @@ static void *tcg_cpu_thread_fn(void *arg)
166
if (cpu_can_run(cpu)) {
167
int r;
168
qemu_mutex_unlock_iothread();
169
- r = tcg_cpu_exec(cpu);
170
+ r = tcg_cpus_exec(cpu);
171
qemu_mutex_lock_iothread();
172
switch (r) {
173
case EXCP_DEBUG:
174
@@ -XXX,XX +XXX,XX @@ static void *tcg_cpu_thread_fn(void *arg)
175
qemu_wait_io_event(cpu);
176
} while (!cpu->unplug || cpu_can_run(cpu));
177
178
- qemu_tcg_destroy_vcpu(cpu);
179
+ tcg_cpus_destroy(cpu);
180
qemu_mutex_unlock_iothread();
181
rcu_unregister_thread();
182
return NULL;
183
@@ -XXX,XX +XXX,XX @@ static void mttcg_start_vcpu_thread(CPUState *cpu)
184
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG",
185
cpu->cpu_index);
186
187
- qemu_thread_create(cpu->thread, thread_name, tcg_cpu_thread_fn,
188
+ qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn,
189
cpu, QEMU_THREAD_JOINABLE);
190
191
#ifdef _WIN32
192
@@ -XXX,XX +XXX,XX @@ const CpusAccel tcg_cpus_mttcg = {
193
.create_vcpu_thread = mttcg_start_vcpu_thread,
194
.kick_vcpu_thread = mttcg_kick_vcpu_thread,
195
196
- .handle_interrupt = tcg_handle_interrupt,
197
+ .handle_interrupt = tcg_cpus_handle_interrupt,
198
};
199
diff --git a/accel/tcg/tcg-cpus-rr.c b/accel/tcg/tcg-cpus-rr.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/accel/tcg/tcg-cpus-rr.c
202
+++ b/accel/tcg/tcg-cpus-rr.c
203
@@ -XXX,XX +XXX,XX @@
204
#include "tcg-cpus-icount.h"
205
206
/* Kick all RR vCPUs */
207
-void qemu_cpu_kick_rr_cpus(CPUState *unused)
208
+void rr_kick_vcpu_thread(CPUState *unused)
209
{
210
CPUState *cpu;
211
212
@@ -XXX,XX +XXX,XX @@ void qemu_cpu_kick_rr_cpus(CPUState *unused)
213
* idleness is complete.
214
*/
215
216
-static QEMUTimer *tcg_kick_vcpu_timer;
217
-static CPUState *tcg_current_rr_cpu;
218
+static QEMUTimer *rr_kick_vcpu_timer;
219
+static CPUState *rr_current_cpu;
220
221
#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10)
222
223
-static inline int64_t qemu_tcg_next_kick(void)
224
+static inline int64_t rr_next_kick_time(void)
225
{
226
return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD;
227
}
228
229
/* Kick the currently round-robin scheduled vCPU to next */
230
-static void qemu_cpu_kick_rr_next_cpu(void)
231
+static void rr_kick_next_cpu(void)
232
{
233
CPUState *cpu;
234
do {
235
- cpu = qatomic_mb_read(&tcg_current_rr_cpu);
236
+ cpu = qatomic_mb_read(&rr_current_cpu);
237
if (cpu) {
238
cpu_exit(cpu);
239
}
240
- } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu));
241
+ } while (cpu != qatomic_mb_read(&rr_current_cpu));
242
}
243
244
-static void kick_tcg_thread(void *opaque)
245
+static void rr_kick_thread(void *opaque)
246
{
247
- timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
248
- qemu_cpu_kick_rr_next_cpu();
249
+ timer_mod(rr_kick_vcpu_timer, rr_next_kick_time());
250
+ rr_kick_next_cpu();
251
}
252
253
-static void start_tcg_kick_timer(void)
254
+static void rr_start_kick_timer(void)
255
{
256
- if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
257
- tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
258
- kick_tcg_thread, NULL);
259
+ if (!rr_kick_vcpu_timer && CPU_NEXT(first_cpu)) {
260
+ rr_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
261
+ rr_kick_thread, NULL);
262
}
263
- if (tcg_kick_vcpu_timer && !timer_pending(tcg_kick_vcpu_timer)) {
264
- timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick());
265
+ if (rr_kick_vcpu_timer && !timer_pending(rr_kick_vcpu_timer)) {
266
+ timer_mod(rr_kick_vcpu_timer, rr_next_kick_time());
267
}
268
}
269
270
-static void stop_tcg_kick_timer(void)
271
+static void rr_stop_kick_timer(void)
272
{
273
- if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) {
274
- timer_del(tcg_kick_vcpu_timer);
275
+ if (rr_kick_vcpu_timer && timer_pending(rr_kick_vcpu_timer)) {
276
+ timer_del(rr_kick_vcpu_timer);
277
}
278
}
279
280
-static void qemu_tcg_rr_wait_io_event(void)
281
+static void rr_wait_io_event(void)
282
{
283
CPUState *cpu;
284
285
while (all_cpu_threads_idle()) {
286
- stop_tcg_kick_timer();
287
+ rr_stop_kick_timer();
288
qemu_cond_wait_iothread(first_cpu->halt_cond);
289
}
290
291
- start_tcg_kick_timer();
292
+ rr_start_kick_timer();
293
294
CPU_FOREACH(cpu) {
295
qemu_wait_io_event_common(cpu);
296
@@ -XXX,XX +XXX,XX @@ static void qemu_tcg_rr_wait_io_event(void)
297
* Destroy any remaining vCPUs which have been unplugged and have
298
* finished running
299
*/
300
-static void deal_with_unplugged_cpus(void)
301
+static void rr_deal_with_unplugged_cpus(void)
302
{
303
CPUState *cpu;
304
305
CPU_FOREACH(cpu) {
306
if (cpu->unplug && !cpu_can_run(cpu)) {
307
- qemu_tcg_destroy_vcpu(cpu);
308
+ tcg_cpus_destroy(cpu);
309
break;
310
}
311
}
312
@@ -XXX,XX +XXX,XX @@ static void deal_with_unplugged_cpus(void)
313
* elsewhere.
314
*/
315
316
-static void *tcg_rr_cpu_thread_fn(void *arg)
317
+static void *rr_cpu_thread_fn(void *arg)
318
{
319
CPUState *cpu = arg;
320
321
@@ -XXX,XX +XXX,XX @@ static void *tcg_rr_cpu_thread_fn(void *arg)
322
}
323
}
324
325
- start_tcg_kick_timer();
326
+ rr_start_kick_timer();
327
328
cpu = first_cpu;
329
330
@@ -XXX,XX +XXX,XX @@ static void *tcg_rr_cpu_thread_fn(void *arg)
331
* Run the timers here. This is much more efficient than
332
* waking up the I/O thread and waiting for completion.
333
*/
334
- handle_icount_deadline();
335
+ icount_handle_deadline();
336
}
337
338
replay_mutex_unlock();
339
@@ -XXX,XX +XXX,XX @@ static void *tcg_rr_cpu_thread_fn(void *arg)
340
341
while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
342
343
- qatomic_mb_set(&tcg_current_rr_cpu, cpu);
344
+ qatomic_mb_set(&rr_current_cpu, cpu);
345
current_cpu = cpu;
346
347
qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
348
@@ -XXX,XX +XXX,XX @@ static void *tcg_rr_cpu_thread_fn(void *arg)
349
350
qemu_mutex_unlock_iothread();
351
if (icount_enabled()) {
352
- prepare_icount_for_run(cpu);
353
+ icount_prepare_for_run(cpu);
354
}
355
- r = tcg_cpu_exec(cpu);
356
+ r = tcg_cpus_exec(cpu);
357
if (icount_enabled()) {
358
- process_icount_data(cpu);
359
+ icount_process_data(cpu);
360
}
361
qemu_mutex_lock_iothread();
362
363
@@ -XXX,XX +XXX,XX @@ static void *tcg_rr_cpu_thread_fn(void *arg)
364
} /* while (cpu && !cpu->exit_request).. */
365
366
/* Does not need qatomic_mb_set because a spurious wakeup is okay. */
367
- qatomic_set(&tcg_current_rr_cpu, NULL);
368
+ qatomic_set(&rr_current_cpu, NULL);
369
370
if (cpu && cpu->exit_request) {
371
qatomic_mb_set(&cpu->exit_request, 0);
372
@@ -XXX,XX +XXX,XX @@ static void *tcg_rr_cpu_thread_fn(void *arg)
373
qemu_notify_event();
374
}
375
376
- qemu_tcg_rr_wait_io_event();
377
- deal_with_unplugged_cpus();
378
+ rr_wait_io_event();
379
+ rr_deal_with_unplugged_cpus();
380
}
381
382
rcu_unregister_thread();
383
@@ -XXX,XX +XXX,XX @@ void rr_start_vcpu_thread(CPUState *cpu)
384
/* share a single thread for all cpus with TCG */
385
snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG");
386
qemu_thread_create(cpu->thread, thread_name,
387
- tcg_rr_cpu_thread_fn,
388
+ rr_cpu_thread_fn,
389
cpu, QEMU_THREAD_JOINABLE);
390
391
single_tcg_halt_cond = cpu->halt_cond;
392
@@ -XXX,XX +XXX,XX @@ void rr_start_vcpu_thread(CPUState *cpu)
393
394
const CpusAccel tcg_cpus_rr = {
395
.create_vcpu_thread = rr_start_vcpu_thread,
396
- .kick_vcpu_thread = qemu_cpu_kick_rr_cpus,
397
+ .kick_vcpu_thread = rr_kick_vcpu_thread,
398
399
- .handle_interrupt = tcg_handle_interrupt,
400
+ .handle_interrupt = tcg_cpus_handle_interrupt,
401
};
402
diff --git a/accel/tcg/tcg-cpus.c b/accel/tcg/tcg-cpus.c
403
index XXXXXXX..XXXXXXX 100644
404
--- a/accel/tcg/tcg-cpus.c
405
+++ b/accel/tcg/tcg-cpus.c
406
@@ -XXX,XX +XXX,XX @@
407
408
/* common functionality among all TCG variants */
409
410
-void qemu_tcg_destroy_vcpu(CPUState *cpu)
411
+void tcg_cpus_destroy(CPUState *cpu)
412
{
413
cpu_thread_signal_destroyed(cpu);
414
}
415
416
-int tcg_cpu_exec(CPUState *cpu)
417
+int tcg_cpus_exec(CPUState *cpu)
418
{
419
int ret;
420
#ifdef CONFIG_PROFILER
421
@@ -XXX,XX +XXX,XX @@ int tcg_cpu_exec(CPUState *cpu)
422
}
423
424
/* mask must never be zero, except for A20 change call */
425
-void tcg_handle_interrupt(CPUState *cpu, int mask)
426
+void tcg_cpus_handle_interrupt(CPUState *cpu, int mask)
427
{
428
g_assert(qemu_mutex_iothread_locked());
429
430
--
191
--
431
2.25.1
192
2.25.1
432
193
433
194
New patch
1
Finish the divorce of tcg/ from hw/, and do not take
2
the max_cpus value from MachineState; just remember what
3
we were passed in tcg_init.
1
4
5
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
10
tcg/tcg-internal.h | 3 ++-
11
tcg/region.c | 6 +++---
12
tcg/tcg.c | 23 ++++++++++-------------
13
3 files changed, 15 insertions(+), 17 deletions(-)
14
15
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/tcg/tcg-internal.h
18
+++ b/tcg/tcg-internal.h
19
@@ -XXX,XX +XXX,XX @@
20
#define TCG_HIGHWATER 1024
21
22
extern TCGContext **tcg_ctxs;
23
-extern unsigned int n_tcg_ctxs;
24
+extern unsigned int tcg_cur_ctxs;
25
+extern unsigned int tcg_max_ctxs;
26
27
void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus);
28
bool tcg_region_alloc(TCGContext *s);
29
diff --git a/tcg/region.c b/tcg/region.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/tcg/region.c
32
+++ b/tcg/region.c
33
@@ -XXX,XX +XXX,XX @@ void tcg_region_initial_alloc(TCGContext *s)
34
/* Call from a safe-work context */
35
void tcg_region_reset_all(void)
36
{
37
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
38
+ unsigned int n_ctxs = qatomic_read(&tcg_cur_ctxs);
39
unsigned int i;
40
41
qemu_mutex_lock(&region.lock);
42
@@ -XXX,XX +XXX,XX @@ void tcg_region_prologue_set(TCGContext *s)
43
*/
44
size_t tcg_code_size(void)
45
{
46
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
47
+ unsigned int n_ctxs = qatomic_read(&tcg_cur_ctxs);
48
unsigned int i;
49
size_t total;
50
51
@@ -XXX,XX +XXX,XX @@ size_t tcg_code_capacity(void)
52
53
size_t tcg_tb_phys_invalidate_count(void)
54
{
55
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
56
+ unsigned int n_ctxs = qatomic_read(&tcg_cur_ctxs);
57
unsigned int i;
58
size_t total = 0;
59
60
diff --git a/tcg/tcg.c b/tcg/tcg.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/tcg/tcg.c
63
+++ b/tcg/tcg.c
64
@@ -XXX,XX +XXX,XX @@
65
#define NO_CPU_IO_DEFS
66
67
#include "exec/exec-all.h"
68
-
69
-#if !defined(CONFIG_USER_ONLY)
70
-#include "hw/boards.h"
71
-#endif
72
-
73
#include "tcg/tcg-op.h"
74
75
#if UINTPTR_MAX == UINT32_MAX
76
@@ -XXX,XX +XXX,XX @@ static int tcg_out_ldst_finalize(TCGContext *s);
77
#endif
78
79
TCGContext **tcg_ctxs;
80
-unsigned int n_tcg_ctxs;
81
+unsigned int tcg_cur_ctxs;
82
+unsigned int tcg_max_ctxs;
83
TCGv_env cpu_env = 0;
84
const void *tcg_code_gen_epilogue;
85
uintptr_t tcg_splitwx_diff;
86
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
87
#else
88
void tcg_register_thread(void)
89
{
90
- MachineState *ms = MACHINE(qdev_get_machine());
91
TCGContext *s = g_malloc(sizeof(*s));
92
unsigned int i, n;
93
94
@@ -XXX,XX +XXX,XX @@ void tcg_register_thread(void)
95
}
96
97
/* Claim an entry in tcg_ctxs */
98
- n = qatomic_fetch_inc(&n_tcg_ctxs);
99
- g_assert(n < ms->smp.max_cpus);
100
+ n = qatomic_fetch_inc(&tcg_cur_ctxs);
101
+ g_assert(n < tcg_max_ctxs);
102
qatomic_set(&tcg_ctxs[n], s);
103
104
if (n > 0) {
105
@@ -XXX,XX +XXX,XX @@ static void tcg_context_init(unsigned max_cpus)
106
*/
107
#ifdef CONFIG_USER_ONLY
108
tcg_ctxs = &tcg_ctx;
109
- n_tcg_ctxs = 1;
110
+ tcg_cur_ctxs = 1;
111
+ tcg_max_ctxs = 1;
112
#else
113
- tcg_ctxs = g_new(TCGContext *, max_cpus);
114
+ tcg_max_ctxs = max_cpus;
115
+ tcg_ctxs = g_new0(TCGContext *, max_cpus);
116
#endif
117
118
tcg_debug_assert(!tcg_regset_test_reg(s->reserved_regs, TCG_AREG0));
119
@@ -XXX,XX +XXX,XX @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
120
static inline
121
void tcg_profile_snapshot(TCGProfile *prof, bool counters, bool table)
122
{
123
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
124
+ unsigned int n_ctxs = qatomic_read(&tcg_cur_ctxs);
125
unsigned int i;
126
127
for (i = 0; i < n_ctxs; i++) {
128
@@ -XXX,XX +XXX,XX @@ void tcg_dump_op_count(void)
129
130
int64_t tcg_cpu_exec_time(void)
131
{
132
- unsigned int n_ctxs = qatomic_read(&n_tcg_ctxs);
133
+ unsigned int n_ctxs = qatomic_read(&tcg_cur_ctxs);
134
unsigned int i;
135
int64_t ret = 0;
136
137
--
138
2.25.1
139
140
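The per-thread effect of this change can be seen in a condensed
sketch of the tcg_register_thread() hunk above (illustrative only):

    /* Claim an entry in tcg_ctxs; the bound is the max_cpus value
     * remembered at tcg_init() time, not a MachineState lookup. */
    n = qatomic_fetch_inc(&tcg_cur_ctxs);
    g_assert(n < tcg_max_ctxs);
    qatomic_set(&tcg_ctxs[n], s);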
New patch
1
Remove the ifdef ladder and move each define into the
2
appropriate header file.
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
tcg/aarch64/tcg-target.h | 1 +
9
tcg/arm/tcg-target.h | 1 +
10
tcg/i386/tcg-target.h | 2 ++
11
tcg/mips/tcg-target.h | 6 ++++++
12
tcg/ppc/tcg-target.h | 2 ++
13
tcg/riscv/tcg-target.h | 1 +
14
tcg/s390/tcg-target.h | 3 +++
15
tcg/sparc/tcg-target.h | 1 +
16
tcg/tci/tcg-target.h | 1 +
17
tcg/region.c | 33 +++++----------------------------
18
10 files changed, 23 insertions(+), 28 deletions(-)
19
20
diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/tcg/aarch64/tcg-target.h
23
+++ b/tcg/aarch64/tcg-target.h
24
@@ -XXX,XX +XXX,XX @@
25
26
#define TCG_TARGET_INSN_UNIT_SIZE 4
27
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 24
28
+#define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
29
#undef TCG_TARGET_STACK_GROWSUP
30
31
typedef enum {
32
diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/tcg/arm/tcg-target.h
35
+++ b/tcg/arm/tcg-target.h
36
@@ -XXX,XX +XXX,XX @@ extern int arm_arch;
37
#undef TCG_TARGET_STACK_GROWSUP
38
#define TCG_TARGET_INSN_UNIT_SIZE 4
39
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 16
40
+#define MAX_CODE_GEN_BUFFER_SIZE UINT32_MAX
41
42
typedef enum {
43
TCG_REG_R0 = 0,
44
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
45
index XXXXXXX..XXXXXXX 100644
46
--- a/tcg/i386/tcg-target.h
47
+++ b/tcg/i386/tcg-target.h
48
@@ -XXX,XX +XXX,XX @@
49
#ifdef __x86_64__
50
# define TCG_TARGET_REG_BITS 64
51
# define TCG_TARGET_NB_REGS 32
52
+# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
53
#else
54
# define TCG_TARGET_REG_BITS 32
55
# define TCG_TARGET_NB_REGS 24
56
+# define MAX_CODE_GEN_BUFFER_SIZE UINT32_MAX
57
#endif
58
59
typedef enum {
60
diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h
61
index XXXXXXX..XXXXXXX 100644
62
--- a/tcg/mips/tcg-target.h
63
+++ b/tcg/mips/tcg-target.h
64
@@ -XXX,XX +XXX,XX @@
65
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 16
66
#define TCG_TARGET_NB_REGS 32
67
68
+/*
69
+ * We have a 256MB branch region, but leave room to make sure the
70
+ * main executable is also within that region.
71
+ */
72
+#define MAX_CODE_GEN_BUFFER_SIZE (128 * MiB)
73
+
74
typedef enum {
75
TCG_REG_ZERO = 0,
76
TCG_REG_AT,
77
diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h
78
index XXXXXXX..XXXXXXX 100644
79
--- a/tcg/ppc/tcg-target.h
80
+++ b/tcg/ppc/tcg-target.h
81
@@ -XXX,XX +XXX,XX @@
82
83
#ifdef _ARCH_PPC64
84
# define TCG_TARGET_REG_BITS 64
85
+# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
86
#else
87
# define TCG_TARGET_REG_BITS 32
88
+# define MAX_CODE_GEN_BUFFER_SIZE (32 * MiB)
89
#endif
90
91
#define TCG_TARGET_NB_REGS 64
92
diff --git a/tcg/riscv/tcg-target.h b/tcg/riscv/tcg-target.h
93
index XXXXXXX..XXXXXXX 100644
94
--- a/tcg/riscv/tcg-target.h
95
+++ b/tcg/riscv/tcg-target.h
96
@@ -XXX,XX +XXX,XX @@
97
#define TCG_TARGET_INSN_UNIT_SIZE 4
98
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 20
99
#define TCG_TARGET_NB_REGS 32
100
+#define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
101
102
typedef enum {
103
TCG_REG_ZERO,
104
diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h
105
index XXXXXXX..XXXXXXX 100644
106
--- a/tcg/s390/tcg-target.h
107
+++ b/tcg/s390/tcg-target.h
108
@@ -XXX,XX +XXX,XX @@
109
#define TCG_TARGET_INSN_UNIT_SIZE 2
110
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 19
111
112
+/* We have a +- 4GB range on the branches; leave some slop. */
113
+#define MAX_CODE_GEN_BUFFER_SIZE (3 * GiB)
114
+
115
typedef enum TCGReg {
116
TCG_REG_R0 = 0,
117
TCG_REG_R1,
118
diff --git a/tcg/sparc/tcg-target.h b/tcg/sparc/tcg-target.h
119
index XXXXXXX..XXXXXXX 100644
120
--- a/tcg/sparc/tcg-target.h
121
+++ b/tcg/sparc/tcg-target.h
122
@@ -XXX,XX +XXX,XX @@
123
#define TCG_TARGET_INSN_UNIT_SIZE 4
124
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 32
125
#define TCG_TARGET_NB_REGS 32
126
+#define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
127
128
typedef enum {
129
TCG_REG_G0 = 0,
130
diff --git a/tcg/tci/tcg-target.h b/tcg/tci/tcg-target.h
131
index XXXXXXX..XXXXXXX 100644
132
--- a/tcg/tci/tcg-target.h
133
+++ b/tcg/tci/tcg-target.h
134
@@ -XXX,XX +XXX,XX @@
135
#define TCG_TARGET_INTERPRETER 1
136
#define TCG_TARGET_INSN_UNIT_SIZE 1
137
#define TCG_TARGET_TLB_DISPLACEMENT_BITS 32
138
+#define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
139
140
#if UINTPTR_MAX == UINT32_MAX
141
# define TCG_TARGET_REG_BITS 32
142
diff --git a/tcg/region.c b/tcg/region.c
143
index XXXXXXX..XXXXXXX 100644
144
--- a/tcg/region.c
145
+++ b/tcg/region.c
146
@@ -XXX,XX +XXX,XX @@ static size_t tcg_n_regions(unsigned max_cpus)
147
/*
148
* Minimum size of the code gen buffer. This number is randomly chosen,
149
* but not so small that we can't have a fair number of TB's live.
150
+ *
151
+ * Maximum size, MAX_CODE_GEN_BUFFER_SIZE, is defined in tcg-target.h.
152
+ * Unless otherwise indicated, this is constrained by the range of
153
+ * direct branches on the host cpu, as used by the TCG implementation
154
+ * of goto_tb.
155
*/
156
#define MIN_CODE_GEN_BUFFER_SIZE (1 * MiB)
157
158
-/*
159
- * Maximum size of the code gen buffer we'd like to use. Unless otherwise
160
- * indicated, this is constrained by the range of direct branches on the
161
- * host cpu, as used by the TCG implementation of goto_tb.
162
- */
163
-#if defined(__x86_64__)
164
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
165
-#elif defined(__sparc__)
166
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
167
-#elif defined(__powerpc64__)
168
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
169
-#elif defined(__powerpc__)
170
-# define MAX_CODE_GEN_BUFFER_SIZE (32 * MiB)
171
-#elif defined(__aarch64__)
172
-# define MAX_CODE_GEN_BUFFER_SIZE (2 * GiB)
173
-#elif defined(__s390x__)
174
- /* We have a +- 4GB range on the branches; leave some slop. */
175
-# define MAX_CODE_GEN_BUFFER_SIZE (3 * GiB)
176
-#elif defined(__mips__)
177
- /*
178
- * We have a 256MB branch region, but leave room to make sure the
179
- * main executable is also within that region.
180
- */
181
-# define MAX_CODE_GEN_BUFFER_SIZE (128 * MiB)
182
-#else
183
-# define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1)
184
-#endif
185
-
186
#if TCG_TARGET_REG_BITS == 32
187
#define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (32 * MiB)
188
#ifdef CONFIG_USER_ONLY
189
--
190
2.25.1
191
192
New patch
1
A size is easier to work with than an end point,
2
particularly during initial buffer allocation.
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
tcg/region.c | 30 ++++++++++++++++++------------
9
1 file changed, 18 insertions(+), 12 deletions(-)
10
11
diff --git a/tcg/region.c b/tcg/region.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/tcg/region.c
14
+++ b/tcg/region.c
15
@@ -XXX,XX +XXX,XX @@ struct tcg_region_state {
16
/* fields set at init time */
17
void *start;
18
void *start_aligned;
19
- void *end;
20
size_t n;
21
size_t size; /* size of one region */
22
size_t stride; /* .size + guard size */
23
+ size_t total_size; /* size of entire buffer, >= n * stride */
24
25
/* fields protected by the lock */
26
size_t current; /* current region index */
27
@@ -XXX,XX +XXX,XX @@ static void tcg_region_bounds(size_t curr_region, void **pstart, void **pend)
28
if (curr_region == 0) {
29
start = region.start;
30
}
31
+ /* The final region may have a few extra pages due to earlier rounding. */
32
if (curr_region == region.n - 1) {
33
- end = region.end;
34
+ end = region.start_aligned + region.total_size;
35
}
36
37
*pstart = start;
38
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
39
*/
40
void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
41
{
42
- void *buf, *aligned;
43
- size_t size;
44
+ void *buf, *aligned, *end;
45
+ size_t total_size;
46
size_t page_size;
47
size_t region_size;
48
size_t n_regions;
49
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
50
assert(ok);
51
52
buf = tcg_init_ctx.code_gen_buffer;
53
- size = tcg_init_ctx.code_gen_buffer_size;
54
+ total_size = tcg_init_ctx.code_gen_buffer_size;
55
page_size = qemu_real_host_page_size;
56
n_regions = tcg_n_regions(max_cpus);
57
58
/* The first region will be 'aligned - buf' bytes larger than the others */
59
aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
60
- g_assert(aligned < tcg_init_ctx.code_gen_buffer + size);
61
+ g_assert(aligned < tcg_init_ctx.code_gen_buffer + total_size);
62
+
63
/*
64
* Make region_size a multiple of page_size, using aligned as the start.
65
* As a result of this we might end up with a few extra pages at the end of
66
* the buffer; we will assign those to the last region.
67
*/
68
- region_size = (size - (aligned - buf)) / n_regions;
69
+ region_size = (total_size - (aligned - buf)) / n_regions;
70
region_size = QEMU_ALIGN_DOWN(region_size, page_size);
71
72
/* A region must have at least 2 pages; one code, one guard */
73
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
74
region.start = buf;
75
region.start_aligned = aligned;
76
/* page-align the end, since its last page will be a guard page */
77
- region.end = QEMU_ALIGN_PTR_DOWN(buf + size, page_size);
78
+ end = QEMU_ALIGN_PTR_DOWN(buf + total_size, page_size);
79
/* account for that last guard page */
80
- region.end -= page_size;
81
+ end -= page_size;
82
+ total_size = end - aligned;
83
+ region.total_size = total_size;
84
85
/*
86
* Set guard pages in the rw buffer, as that's the one into which
87
@@ -XXX,XX +XXX,XX @@ void tcg_region_prologue_set(TCGContext *s)
88
89
/* Register the balance of the buffer with gdb. */
90
tcg_register_jit(tcg_splitwx_to_rx(region.start),
91
- region.end - region.start);
92
+ region.start_aligned + region.total_size - region.start);
93
}
94
95
/*
96
@@ -XXX,XX +XXX,XX @@ size_t tcg_code_capacity(void)
97
98
/* no need for synchronization; these variables are set at init time */
99
guard_size = region.stride - region.size;
100
- capacity = region.end + guard_size - region.start;
101
- capacity -= region.n * (guard_size + TCG_HIGHWATER);
102
+ capacity = region.total_size;
103
+ capacity -= (region.n - 1) * guard_size;
104
+ capacity -= region.n * TCG_HIGHWATER;
105
+
106
return capacity;
107
}
108
109
--
110
2.25.1
111
112
New patch
1
Give the field a name reflecting its actual meaning.
1
2
3
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
---
7
tcg/region.c | 15 ++++++++-------
8
1 file changed, 8 insertions(+), 7 deletions(-)
9
10
diff --git a/tcg/region.c b/tcg/region.c
11
index XXXXXXX..XXXXXXX 100644
12
--- a/tcg/region.c
13
+++ b/tcg/region.c
14
@@ -XXX,XX +XXX,XX @@ struct tcg_region_state {
15
QemuMutex lock;
16
17
/* fields set at init time */
18
- void *start;
19
void *start_aligned;
20
+ void *after_prologue;
21
size_t n;
22
size_t size; /* size of one region */
23
size_t stride; /* .size + guard size */
24
@@ -XXX,XX +XXX,XX @@ static void tcg_region_bounds(size_t curr_region, void **pstart, void **pend)
25
end = start + region.size;
26
27
if (curr_region == 0) {
28
- start = region.start;
29
+ start = region.after_prologue;
30
}
31
/* The final region may have a few extra pages due to earlier rounding. */
32
if (curr_region == region.n - 1) {
33
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
34
region.n = n_regions;
35
region.size = region_size - page_size;
36
region.stride = region_size;
37
- region.start = buf;
38
+ region.after_prologue = buf;
39
region.start_aligned = aligned;
40
/* page-align the end, since its last page will be a guard page */
41
end = QEMU_ALIGN_PTR_DOWN(buf + total_size, page_size);
42
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
43
void tcg_region_prologue_set(TCGContext *s)
44
{
45
/* Deduct the prologue from the first region. */
46
- g_assert(region.start == s->code_gen_buffer);
47
- region.start = s->code_ptr;
48
+ g_assert(region.start_aligned == s->code_gen_buffer);
49
+ region.after_prologue = s->code_ptr;
50
51
/* Recompute boundaries of the first region. */
52
tcg_region_assign(s, 0);
53
54
/* Register the balance of the buffer with gdb. */
55
- tcg_register_jit(tcg_splitwx_to_rx(region.start),
56
- region.start_aligned + region.total_size - region.start);
57
+ tcg_register_jit(tcg_splitwx_to_rx(region.after_prologue),
58
+ region.start_aligned + region.total_size -
59
+ region.after_prologue);
60
}
61
62
/*
63
--
64
2.25.1
65
66
New patch
1
Compute the value using straight division and bounds,
2
rather than a loop. Pass in tb_size rather than reading
3
from tcg_init_ctx.code_gen_buffer_size.
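
For example (illustrative numbers only): with an 8-vCPU MTTCG guest and a
1 GiB buffer, tb_size / (2 MiB) = 512 > max_cpus, so the result is
MIN(512, 8 * 8) = 64 regions; with an 8 MiB buffer, 8 / 2 = 4 <= max_cpus,
so we fall back to one region per vCPU, i.e. 8.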
1
4
5
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
tcg/region.c | 29 ++++++++++++-----------------
10
1 file changed, 12 insertions(+), 17 deletions(-)
11
12
diff --git a/tcg/region.c b/tcg/region.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/region.c
15
+++ b/tcg/region.c
16
@@ -XXX,XX +XXX,XX @@ void tcg_region_reset_all(void)
17
tcg_region_tree_reset_all();
18
}
19
20
-static size_t tcg_n_regions(unsigned max_cpus)
21
+static size_t tcg_n_regions(size_t tb_size, unsigned max_cpus)
22
{
23
#ifdef CONFIG_USER_ONLY
24
return 1;
25
#else
26
+ size_t n_regions;
27
+
28
/*
29
* It is likely that some vCPUs will translate more code than others,
30
* so we first try to set more regions than max_cpus, with those regions
31
* being of reasonable size. If that's not possible we make do by evenly
32
* dividing the code_gen_buffer among the vCPUs.
33
*/
34
- size_t i;
35
-
36
/* Use a single region if all we have is one vCPU thread */
37
if (max_cpus == 1 || !qemu_tcg_mttcg_enabled()) {
38
return 1;
39
}
40
41
- /* Try to have more regions than max_cpus, with each region being >= 2 MB */
42
- for (i = 8; i > 0; i--) {
43
- size_t regions_per_thread = i;
44
- size_t region_size;
45
-
46
- region_size = tcg_init_ctx.code_gen_buffer_size;
47
- region_size /= max_cpus * regions_per_thread;
48
-
49
- if (region_size >= 2 * 1024u * 1024) {
50
- return max_cpus * regions_per_thread;
51
- }
52
+ /*
53
+ * Try to have more regions than max_cpus, with each region being >= 2 MB.
54
+ * If we can't, then just allocate one region per vCPU thread.
55
+ */
56
+ n_regions = tb_size / (2 * MiB);
57
+ if (n_regions <= max_cpus) {
58
+ return max_cpus;
59
}
60
- /* If we can't, then just allocate one region per vCPU thread */
61
- return max_cpus;
62
+ return MIN(n_regions, max_cpus * 8);
63
#endif
64
}
65
66
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
67
buf = tcg_init_ctx.code_gen_buffer;
68
total_size = tcg_init_ctx.code_gen_buffer_size;
69
page_size = qemu_real_host_page_size;
70
- n_regions = tcg_n_regions(max_cpus);
71
+ n_regions = tcg_n_regions(total_size, max_cpus);
72
73
/* The first region will be 'aligned - buf' bytes larger than the others */
74
aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
75
--
76
2.25.1
77
78
New patch
1
Return output buffer and size via output pointer arguments,
2
rather than returning size via tcg_ctx->code_gen_buffer_size.
1
3
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
tcg/region.c | 19 +++++++++----------
9
1 file changed, 9 insertions(+), 10 deletions(-)
10
11
diff --git a/tcg/region.c b/tcg/region.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/tcg/region.c
14
+++ b/tcg/region.c
15
@@ -XXX,XX +XXX,XX @@ static inline bool cross_256mb(void *addr, size_t size)
16
/*
17
* We weren't able to allocate a buffer without crossing that boundary,
18
* so make do with the larger portion of the buffer that doesn't cross.
19
- * Returns the new base of the buffer, and adjusts code_gen_buffer_size.
20
+ * Returns the new base and size of the buffer in *obuf and *osize.
21
*/
22
-static inline void *split_cross_256mb(void *buf1, size_t size1)
23
+static inline void split_cross_256mb(void **obuf, size_t *osize,
24
+ void *buf1, size_t size1)
25
{
26
void *buf2 = (void *)(((uintptr_t)buf1 + size1) & ~0x0ffffffful);
27
size_t size2 = buf1 + size1 - buf2;
28
@@ -XXX,XX +XXX,XX @@ static inline void *split_cross_256mb(void *buf1, size_t size1)
29
buf1 = buf2;
30
}
31
32
- tcg_ctx->code_gen_buffer_size = size1;
33
- return buf1;
34
+ *obuf = buf1;
35
+ *osize = size1;
36
}
37
#endif
38
39
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
40
if (size > tb_size) {
41
size = QEMU_ALIGN_DOWN(tb_size, qemu_real_host_page_size);
42
}
43
- tcg_ctx->code_gen_buffer_size = size;
44
45
#ifdef __mips__
46
if (cross_256mb(buf, size)) {
47
- buf = split_cross_256mb(buf, size);
48
- size = tcg_ctx->code_gen_buffer_size;
49
+ split_cross_256mb(&buf, &size, buf, size);
50
}
51
#endif
52
53
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
54
qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
55
56
tcg_ctx->code_gen_buffer = buf;
57
+ tcg_ctx->code_gen_buffer_size = size;
58
return true;
59
}
60
#elif defined(_WIN32)
61
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
62
"allocate %zu bytes for jit buffer", size);
63
return false;
64
}
65
- tcg_ctx->code_gen_buffer_size = size;
66
67
#ifdef __mips__
68
if (cross_256mb(buf, size)) {
69
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
70
/* fallthru */
71
default:
72
/* Split the original buffer. Free the smaller half. */
73
- buf2 = split_cross_256mb(buf, size);
74
- size2 = tcg_ctx->code_gen_buffer_size;
75
+ split_cross_256mb(&buf2, &size2, buf, size);
76
if (buf == buf2) {
77
munmap(buf + size2, size - size2);
78
} else {
79
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
80
qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
81
82
tcg_ctx->code_gen_buffer = buf;
83
+ tcg_ctx->code_gen_buffer_size = size;
84
return true;
85
}
86
87
--
88
2.25.1
89
90
New patch
1
Shortly, the full code_gen_buffer will only be visible
2
to region.c, so move in_code_gen_buffer out-of-line.
1
3
4
Move the debugging versions of tcg_splitwx_to_{rx,rw}
5
to region.c as well, so that the compiler gets to see
6
the implementation of in_code_gen_buffer.
7
8
This leaves exactly one use of in_code_gen_buffer outside
9
of region.c, in cpu_restore_state, which, being on the
10
exception path, is not performance critical.
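
As a worked reminder of the (unchanged) test, with illustrative addresses:
for a buffer at 0x1000 of size 0x100, p == 0x1100 gives
(size_t)(p - buf) == 0x100 <= 0x100, so the one-past-the-end pointer is
accepted; p == 0x1101 gives 0x101 and is rejected, and p == 0x0f00 wraps
to a huge unsigned value and is likewise rejected.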
11
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
14
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
---
16
include/tcg/tcg.h | 11 +----------
17
tcg/region.c | 34 ++++++++++++++++++++++++++++++++++
18
tcg/tcg.c | 23 -----------------------
19
3 files changed, 35 insertions(+), 33 deletions(-)
20
21
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/include/tcg/tcg.h
24
+++ b/include/tcg/tcg.h
25
@@ -XXX,XX +XXX,XX @@ extern const void *tcg_code_gen_epilogue;
26
extern uintptr_t tcg_splitwx_diff;
27
extern TCGv_env cpu_env;
28
29
-static inline bool in_code_gen_buffer(const void *p)
30
-{
31
- const TCGContext *s = &tcg_init_ctx;
32
- /*
33
- * Much like it is valid to have a pointer to the byte past the
34
- * end of an array (so long as you don't dereference it), allow
35
- * a pointer to the byte past the end of the code gen buffer.
36
- */
37
- return (size_t)(p - s->code_gen_buffer) <= s->code_gen_buffer_size;
38
-}
39
+bool in_code_gen_buffer(const void *p);
40
41
#ifdef CONFIG_DEBUG_TCG
42
const void *tcg_splitwx_to_rx(void *rw);
43
diff --git a/tcg/region.c b/tcg/region.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/tcg/region.c
46
+++ b/tcg/region.c
47
@@ -XXX,XX +XXX,XX @@ static struct tcg_region_state region;
48
static void *region_trees;
49
static size_t tree_size;
50
51
+bool in_code_gen_buffer(const void *p)
52
+{
53
+ const TCGContext *s = &tcg_init_ctx;
54
+ /*
55
+ * Much like it is valid to have a pointer to the byte past the
56
+ * end of an array (so long as you don't dereference it), allow
57
+ * a pointer to the byte past the end of the code gen buffer.
58
+ */
59
+ return (size_t)(p - s->code_gen_buffer) <= s->code_gen_buffer_size;
60
+}
61
+
62
+#ifdef CONFIG_DEBUG_TCG
63
+const void *tcg_splitwx_to_rx(void *rw)
64
+{
65
+ /* Pass NULL pointers unchanged. */
66
+ if (rw) {
67
+ g_assert(in_code_gen_buffer(rw));
68
+ rw += tcg_splitwx_diff;
69
+ }
70
+ return rw;
71
+}
72
+
73
+void *tcg_splitwx_to_rw(const void *rx)
74
+{
75
+ /* Pass NULL pointers unchanged. */
76
+ if (rx) {
77
+ rx -= tcg_splitwx_diff;
78
+ /* Assert that we end with a pointer in the rw region. */
79
+ g_assert(in_code_gen_buffer(rx));
80
+ }
81
+ return (void *)rx;
82
+}
83
+#endif /* CONFIG_DEBUG_TCG */
84
+
85
/* compare a pointer @ptr and a tb_tc @s */
86
static int ptr_cmp_tb_tc(const void *ptr, const struct tb_tc *s)
87
{
88
diff --git a/tcg/tcg.c b/tcg/tcg.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/tcg/tcg.c
91
+++ b/tcg/tcg.c
92
@@ -XXX,XX +XXX,XX @@ static const TCGTargetOpDef constraint_sets[] = {
93
94
#include "tcg-target.c.inc"
95
96
-#ifdef CONFIG_DEBUG_TCG
97
-const void *tcg_splitwx_to_rx(void *rw)
98
-{
99
- /* Pass NULL pointers unchanged. */
100
- if (rw) {
101
- g_assert(in_code_gen_buffer(rw));
102
- rw += tcg_splitwx_diff;
103
- }
104
- return rw;
105
-}
106
-
107
-void *tcg_splitwx_to_rw(const void *rx)
108
-{
109
- /* Pass NULL pointers unchanged. */
110
- if (rx) {
111
- rx -= tcg_splitwx_diff;
112
- /* Assert that we end with a pointer in the rw region. */
113
- g_assert(in_code_gen_buffer(rx));
114
- }
115
- return (void *)rx;
116
-}
117
-#endif /* CONFIG_DEBUG_TCG */
118
-
119
static void alloc_tcg_plugin_context(TCGContext *s)
120
{
121
#ifdef CONFIG_PLUGIN
122
--
123
2.25.1
124
125
New patch
1
Do not mess around with setting values within tcg_init_ctx.
2
Put the values into 'region' directly, which is where they
3
will live for the lifetime of the program.
1
4
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
tcg/region.c | 64 ++++++++++++++++++++++------------------------------
10
1 file changed, 27 insertions(+), 37 deletions(-)
11
12
diff --git a/tcg/region.c b/tcg/region.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/region.c
15
+++ b/tcg/region.c
16
@@ -XXX,XX +XXX,XX @@ static size_t tree_size;
17
18
bool in_code_gen_buffer(const void *p)
19
{
20
- const TCGContext *s = &tcg_init_ctx;
21
/*
22
* Much like it is valid to have a pointer to the byte past the
23
* end of an array (so long as you don't dereference it), allow
24
* a pointer to the byte past the end of the code gen buffer.
25
*/
26
- return (size_t)(p - s->code_gen_buffer) <= s->code_gen_buffer_size;
27
+ return (size_t)(p - region.start_aligned) <= region.total_size;
28
}
29
30
#ifdef CONFIG_DEBUG_TCG
31
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
32
}
33
qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
34
35
- tcg_ctx->code_gen_buffer = buf;
36
- tcg_ctx->code_gen_buffer_size = size;
37
+ region.start_aligned = buf;
38
+ region.total_size = size;
39
return true;
40
}
41
#elif defined(_WIN32)
42
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
43
return false;
44
}
45
46
- tcg_ctx->code_gen_buffer = buf;
47
- tcg_ctx->code_gen_buffer_size = size;
48
+ region.start_aligned = buf;
49
+ region.total_size = size;
50
return true;
51
}
52
#else
53
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
54
/* Request large pages for the buffer. */
55
qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
56
57
- tcg_ctx->code_gen_buffer = buf;
58
- tcg_ctx->code_gen_buffer_size = size;
59
+ region.start_aligned = buf;
60
+ region.total_size = size;
61
return true;
62
}
63
64
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
65
return false;
66
}
67
/* The size of the mapping may have been adjusted. */
68
- size = tcg_ctx->code_gen_buffer_size;
69
- buf_rx = tcg_ctx->code_gen_buffer;
70
+ buf_rx = region.start_aligned;
71
+ size = region.total_size;
72
#endif
73
74
buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
75
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
76
#endif
77
78
close(fd);
79
- tcg_ctx->code_gen_buffer = buf_rw;
80
- tcg_ctx->code_gen_buffer_size = size;
81
+ region.start_aligned = buf_rw;
82
+ region.total_size = size;
83
tcg_splitwx_diff = buf_rx - buf_rw;
84
85
/* Request large pages for the buffer and the splitwx. */
86
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
87
return false;
88
}
89
90
- buf_rw = (mach_vm_address_t)tcg_ctx->code_gen_buffer;
91
+ buf_rw = region.start_aligned;
92
buf_rx = 0;
93
ret = mach_vm_remap(mach_task_self(),
94
&buf_rx,
95
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
96
*/
97
void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
98
{
99
- void *buf, *aligned, *end;
100
- size_t total_size;
101
size_t page_size;
102
size_t region_size;
103
- size_t n_regions;
104
size_t i;
105
bool ok;
106
107
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
108
splitwx, &error_fatal);
109
assert(ok);
110
111
- buf = tcg_init_ctx.code_gen_buffer;
112
- total_size = tcg_init_ctx.code_gen_buffer_size;
113
- page_size = qemu_real_host_page_size;
114
- n_regions = tcg_n_regions(total_size, max_cpus);
115
-
116
- /* The first region will be 'aligned - buf' bytes larger than the others */
117
- aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
118
- g_assert(aligned < tcg_init_ctx.code_gen_buffer + total_size);
119
-
120
/*
121
* Make region_size a multiple of page_size, using aligned as the start.
122
* As a result of this we might end up with a few extra pages at the end of
123
* the buffer; we will assign those to the last region.
124
*/
125
- region_size = (total_size - (aligned - buf)) / n_regions;
126
+ region.n = tcg_n_regions(region.total_size, max_cpus);
127
+ page_size = qemu_real_host_page_size;
128
+ region_size = region.total_size / region.n;
129
region_size = QEMU_ALIGN_DOWN(region_size, page_size);
130
131
/* A region must have at least 2 pages; one code, one guard */
132
g_assert(region_size >= 2 * page_size);
133
+ region.stride = region_size;
134
+
135
+ /* Reserve space for guard pages. */
136
+ region.size = region_size - page_size;
137
+ region.total_size -= page_size;
138
+
139
+ /*
140
+ * The first region will be smaller than the others, via the prologue,
141
+ * which has yet to be allocated. For now, the first region begins at
142
+ * the page boundary.
143
+ */
144
+ region.after_prologue = region.start_aligned;
145
146
/* init the region struct */
147
qemu_mutex_init(&region.lock);
148
- region.n = n_regions;
149
- region.size = region_size - page_size;
150
- region.stride = region_size;
151
- region.after_prologue = buf;
152
- region.start_aligned = aligned;
153
- /* page-align the end, since its last page will be a guard page */
154
- end = QEMU_ALIGN_PTR_DOWN(buf + total_size, page_size);
155
- /* account for that last guard page */
156
- end -= page_size;
157
- total_size = end - aligned;
158
- region.total_size = total_size;
159
160
/*
161
* Set guard pages in the rw buffer, as that's the one into which
162
--
163
2.25.1
164
165
New patch
1
1
Change the interface from a boolean error indication to a
2
negative error vs a non-negative protection. For the moment
3
this is only an interface change, not yet making use of the new data.
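
Caller-side sketch of the new convention (illustrative; the real caller,
tcg_region_init, is converted in the hunk below):

    int have_prot = alloc_code_gen_buffer(size, splitwx, errp);
    if (have_prot < 0) {
        return false;   /* failed; errp has been set */
    }
    /* have_prot is the initial mapping protection, e.g.
     * PROT_READ | PROT_WRITE, or PAGE_READ | PAGE_WRITE | PAGE_EXEC
     * for the win32 path. */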
4
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
tcg/region.c | 63 +++++++++++++++++++++++++++-------------------------
10
1 file changed, 33 insertions(+), 30 deletions(-)
11
12
diff --git a/tcg/region.c b/tcg/region.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/region.c
15
+++ b/tcg/region.c
16
@@ -XXX,XX +XXX,XX @@ static inline void split_cross_256mb(void **obuf, size_t *osize,
17
static uint8_t static_code_gen_buffer[DEFAULT_CODE_GEN_BUFFER_SIZE]
18
__attribute__((aligned(CODE_GEN_ALIGN)));
19
20
-static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
21
+static int alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
22
{
23
void *buf, *end;
24
size_t size;
25
26
if (splitwx > 0) {
27
error_setg(errp, "jit split-wx not supported");
28
- return false;
29
+ return -1;
30
}
31
32
/* page-align the beginning and end of the buffer */
33
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
34
35
region.start_aligned = buf;
36
region.total_size = size;
37
- return true;
38
+
39
+ return PROT_READ | PROT_WRITE;
40
}
41
#elif defined(_WIN32)
42
-static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
43
+static int alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
44
{
45
void *buf;
46
47
if (splitwx > 0) {
48
error_setg(errp, "jit split-wx not supported");
49
- return false;
50
+ return -1;
51
}
52
53
buf = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
54
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
55
56
region.start_aligned = buf;
57
region.total_size = size;
58
- return true;
59
+
60
+ return PAGE_READ | PAGE_WRITE | PAGE_EXEC;
61
}
62
#else
63
-static bool alloc_code_gen_buffer_anon(size_t size, int prot,
64
- int flags, Error **errp)
65
+static int alloc_code_gen_buffer_anon(size_t size, int prot,
66
+ int flags, Error **errp)
67
{
68
void *buf;
69
70
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
71
if (buf == MAP_FAILED) {
72
error_setg_errno(errp, errno,
73
"allocate %zu bytes for jit buffer", size);
74
- return false;
75
+ return -1;
76
}
77
78
#ifdef __mips__
79
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
80
81
region.start_aligned = buf;
82
region.total_size = size;
83
- return true;
84
+ return prot;
85
}
86
87
#ifndef CONFIG_TCG_INTERPRETER
88
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
89
90
#ifdef __mips__
91
/* Find space for the RX mapping, vs the 256MiB regions. */
92
- if (!alloc_code_gen_buffer_anon(size, PROT_NONE,
93
- MAP_PRIVATE | MAP_ANONYMOUS |
94
- MAP_NORESERVE, errp)) {
95
+ if (alloc_code_gen_buffer_anon(size, PROT_NONE,
96
+ MAP_PRIVATE | MAP_ANONYMOUS |
97
+ MAP_NORESERVE, errp) < 0) {
98
return false;
99
}
100
/* The size of the mapping may have been adjusted. */
101
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
102
/* Request large pages for the buffer and the splitwx. */
103
qemu_madvise(buf_rw, size, QEMU_MADV_HUGEPAGE);
104
qemu_madvise(buf_rx, size, QEMU_MADV_HUGEPAGE);
105
- return true;
106
+ return PROT_READ | PROT_WRITE;
107
108
fail_rx:
109
error_setg_errno(errp, errno, "failed to map shared memory for execute");
110
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
111
if (fd >= 0) {
112
close(fd);
113
}
114
- return false;
115
+ return -1;
116
}
117
#endif /* CONFIG_POSIX */
118
119
@@ -XXX,XX +XXX,XX @@ extern kern_return_t mach_vm_remap(vm_map_t target_task,
120
vm_prot_t *max_protection,
121
vm_inherit_t inheritance);
122
123
-static bool alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
124
+static int alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
125
{
126
kern_return_t ret;
127
mach_vm_address_t buf_rw, buf_rx;
128
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
129
/* Map the read-write portion via normal anon memory. */
130
if (!alloc_code_gen_buffer_anon(size, PROT_READ | PROT_WRITE,
131
MAP_PRIVATE | MAP_ANONYMOUS, errp)) {
132
- return false;
133
+ return -1;
134
}
135
136
buf_rw = region.start_aligned;
137
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_vmremap(size_t size, Error **errp)
138
/* TODO: Convert "ret" to a human readable error message. */
139
error_setg(errp, "vm_remap for jit splitwx failed");
140
munmap((void *)buf_rw, size);
141
- return false;
142
+ return -1;
143
}
144
145
if (mprotect((void *)buf_rx, size, PROT_READ | PROT_EXEC) != 0) {
146
error_setg_errno(errp, errno, "mprotect for jit splitwx");
147
munmap((void *)buf_rx, size);
148
munmap((void *)buf_rw, size);
149
- return false;
150
+ return -1;
151
}
152
153
tcg_splitwx_diff = buf_rx - buf_rw;
154
- return true;
155
+ return PROT_READ | PROT_WRITE;
156
}
157
#endif /* CONFIG_DARWIN */
158
#endif /* CONFIG_TCG_INTERPRETER */
159
160
-static bool alloc_code_gen_buffer_splitwx(size_t size, Error **errp)
161
+static int alloc_code_gen_buffer_splitwx(size_t size, Error **errp)
162
{
163
#ifndef CONFIG_TCG_INTERPRETER
164
# ifdef CONFIG_DARWIN
165
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx(size_t size, Error **errp)
166
# endif
167
#endif
168
error_setg(errp, "jit split-wx not supported");
169
- return false;
170
+ return -1;
171
}
172
173
-static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
174
+static int alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
175
{
176
ERRP_GUARD();
177
int prot, flags;
178
179
if (splitwx) {
180
- if (alloc_code_gen_buffer_splitwx(size, errp)) {
181
- return true;
182
+ prot = alloc_code_gen_buffer_splitwx(size, errp);
183
+ if (prot >= 0) {
184
+ return prot;
185
}
186
/*
187
* If splitwx force-on (1), fail;
188
* if splitwx default-on (-1), fall through to splitwx off.
189
*/
190
if (splitwx > 0) {
191
- return false;
192
+ return -1;
193
}
194
error_free_or_abort(errp);
195
}
196
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
197
size_t page_size;
198
size_t region_size;
199
size_t i;
200
- bool ok;
201
+ int have_prot;
202
203
- ok = alloc_code_gen_buffer(size_code_gen_buffer(tb_size),
204
- splitwx, &error_fatal);
205
- assert(ok);
206
+ have_prot = alloc_code_gen_buffer(size_code_gen_buffer(tb_size),
207
+ splitwx, &error_fatal);
208
+ assert(have_prot >= 0);
209
210
/*
211
* Make region_size a multiple of page_size, using aligned as the start.
212
--
213
2.25.1
214
215
New patch
1
Move the call out of the N versions of alloc_code_gen_buffer
2
and into tcg_region_init.
1
3
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
tcg/region.c | 14 +++++++-------
9
1 file changed, 7 insertions(+), 7 deletions(-)
10
11
diff --git a/tcg/region.c b/tcg/region.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/tcg/region.c
14
+++ b/tcg/region.c
15
@@ -XXX,XX +XXX,XX @@ static int alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
16
error_setg_errno(errp, errno, "mprotect of jit buffer");
17
return false;
18
}
19
- qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
20
21
region.start_aligned = buf;
22
region.total_size = size;
23
@@ -XXX,XX +XXX,XX @@ static int alloc_code_gen_buffer_anon(size_t size, int prot,
24
}
25
#endif
26
27
- /* Request large pages for the buffer. */
28
- qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
29
-
30
region.start_aligned = buf;
31
region.total_size = size;
32
return prot;
33
@@ -XXX,XX +XXX,XX @@ static bool alloc_code_gen_buffer_splitwx_memfd(size_t size, Error **errp)
34
region.total_size = size;
35
tcg_splitwx_diff = buf_rx - buf_rw;
36
37
- /* Request large pages for the buffer and the splitwx. */
38
- qemu_madvise(buf_rw, size, QEMU_MADV_HUGEPAGE);
39
- qemu_madvise(buf_rx, size, QEMU_MADV_HUGEPAGE);
40
return PROT_READ | PROT_WRITE;
41
42
fail_rx:
43
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
44
splitwx, &error_fatal);
45
assert(have_prot >= 0);
46
47
+ /* Request large pages for the buffer and the splitwx. */
48
+ qemu_madvise(region.start_aligned, region.total_size, QEMU_MADV_HUGEPAGE);
49
+ if (tcg_splitwx_diff) {
50
+ qemu_madvise(region.start_aligned + tcg_splitwx_diff,
51
+ region.total_size, QEMU_MADV_HUGEPAGE);
52
+ }
53
+
54
/*
55
* Make region_size a multiple of page_size, using aligned as the start.
56
* As a result of this we might end up with a few extra pages at the end of
57
--
58
2.25.1
59
60
New patch
1
For --enable-tcg-interpreter on Windows, we will need this.
1
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
4
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
---
8
include/qemu/osdep.h | 1 +
9
util/osdep.c | 9 +++++++++
10
2 files changed, 10 insertions(+)
11
12
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/include/qemu/osdep.h
15
+++ b/include/qemu/osdep.h
16
@@ -XXX,XX +XXX,XX @@ void sigaction_invoke(struct sigaction *action,
17
#endif
18
19
int qemu_madvise(void *addr, size_t len, int advice);
20
+int qemu_mprotect_rw(void *addr, size_t size);
21
int qemu_mprotect_rwx(void *addr, size_t size);
22
int qemu_mprotect_none(void *addr, size_t size);
23
24
diff --git a/util/osdep.c b/util/osdep.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/util/osdep.c
27
+++ b/util/osdep.c
28
@@ -XXX,XX +XXX,XX @@ static int qemu_mprotect__osdep(void *addr, size_t size, int prot)
29
#endif
30
}
31
32
+int qemu_mprotect_rw(void *addr, size_t size)
33
+{
34
+#ifdef _WIN32
35
+ return qemu_mprotect__osdep(addr, size, PAGE_READWRITE);
36
+#else
37
+ return qemu_mprotect__osdep(addr, size, PROT_READ | PROT_WRITE);
38
+#endif
39
+}
40
+
41
int qemu_mprotect_rwx(void *addr, size_t size)
42
{
43
#ifdef _WIN32
44
--
45
2.25.1
46
47
New patch
1
If qemu_get_host_physmem returns an odd number of pages,
2
then physmem / 8 will not be a multiple of the page size.
1
3
4
The following was observed on a gitlab runner:
5
6
ERROR qtest-arm/boot-serial-test - Bail out!
7
ERROR:../util/osdep.c:80:qemu_mprotect__osdep: \
8
assertion failed: (!(size & ~qemu_real_host_page_mask))
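
As a concrete (illustrative) case with 4 KiB pages: if
qemu_get_host_physmem() reports 0x3ffff000 bytes (an odd 0x3ffff pages),
then phys_mem / 8 = 0x7fffe00, which is not page aligned; with this patch,
QEMU_ALIGN_DOWN(0x7fffe00, 0x1000) = 0x7fff000, so the later page-granular
mprotect calls no longer trip the assertion.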
9
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
---
14
tcg/region.c | 47 +++++++++++++++++++++--------------------------
15
1 file changed, 21 insertions(+), 26 deletions(-)
16
17
diff --git a/tcg/region.c b/tcg/region.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/tcg/region.c
20
+++ b/tcg/region.c
21
@@ -XXX,XX +XXX,XX @@ static size_t tcg_n_regions(size_t tb_size, unsigned max_cpus)
22
(DEFAULT_CODE_GEN_BUFFER_SIZE_1 < MAX_CODE_GEN_BUFFER_SIZE \
23
? DEFAULT_CODE_GEN_BUFFER_SIZE_1 : MAX_CODE_GEN_BUFFER_SIZE)
24
25
-static size_t size_code_gen_buffer(size_t tb_size)
26
-{
27
- /* Size the buffer. */
28
- if (tb_size == 0) {
29
- size_t phys_mem = qemu_get_host_physmem();
30
- if (phys_mem == 0) {
31
- tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
32
- } else {
33
- tb_size = MIN(DEFAULT_CODE_GEN_BUFFER_SIZE, phys_mem / 8);
34
- }
35
- }
36
- if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
37
- tb_size = MIN_CODE_GEN_BUFFER_SIZE;
38
- }
39
- if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
40
- tb_size = MAX_CODE_GEN_BUFFER_SIZE;
41
- }
42
- return tb_size;
43
-}
44
-
45
#ifdef __mips__
46
/*
47
* In order to use J and JAL within the code_gen_buffer, we require
48
@@ -XXX,XX +XXX,XX @@ static int alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
49
*/
50
void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
51
{
52
- size_t page_size;
53
+ const size_t page_size = qemu_real_host_page_size;
54
size_t region_size;
55
size_t i;
56
int have_prot;
57
58
- have_prot = alloc_code_gen_buffer(size_code_gen_buffer(tb_size),
59
- splitwx, &error_fatal);
60
+ /* Size the buffer. */
61
+ if (tb_size == 0) {
62
+ size_t phys_mem = qemu_get_host_physmem();
63
+ if (phys_mem == 0) {
64
+ tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
65
+ } else {
66
+ tb_size = QEMU_ALIGN_DOWN(phys_mem / 8, page_size);
67
+ tb_size = MIN(DEFAULT_CODE_GEN_BUFFER_SIZE, tb_size);
68
+ }
69
+ }
70
+ if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
71
+ tb_size = MIN_CODE_GEN_BUFFER_SIZE;
72
+ }
73
+ if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
74
+ tb_size = MAX_CODE_GEN_BUFFER_SIZE;
75
+ }
76
+
77
+ have_prot = alloc_code_gen_buffer(tb_size, splitwx, &error_fatal);
78
assert(have_prot >= 0);
79
80
/* Request large pages for the buffer and the splitwx. */
81
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
82
* As a result of this we might end up with a few extra pages at the end of
83
* the buffer; we will assign those to the last region.
84
*/
85
- region.n = tcg_n_regions(region.total_size, max_cpus);
86
- page_size = qemu_real_host_page_size;
87
- region_size = region.total_size / region.n;
88
+ region.n = tcg_n_regions(tb_size, max_cpus);
89
+ region_size = tb_size / region.n;
90
region_size = QEMU_ALIGN_DOWN(region_size, page_size);
91
92
/* A region must have at least 2 pages; one code, one guard */
93
--
94
2.25.1
95
96
New patch
1
Do not handle protections on a case-by-case basis in the
2
various alloc_code_gen_buffer instances; do it within a
3
single loop in tcg_region_init.
1
4
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
tcg/region.c | 45 +++++++++++++++++++++++++++++++--------------
10
1 file changed, 31 insertions(+), 14 deletions(-)
11
12
diff --git a/tcg/region.c b/tcg/region.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/tcg/region.c
15
+++ b/tcg/region.c
16
@@ -XXX,XX +XXX,XX @@ static int alloc_code_gen_buffer(size_t tb_size, int splitwx, Error **errp)
17
}
18
#endif
19
20
- if (qemu_mprotect_rwx(buf, size)) {
21
- error_setg_errno(errp, errno, "mprotect of jit buffer");
22
- return false;
23
- }
24
-
25
region.start_aligned = buf;
26
region.total_size = size;
27
28
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
29
{
30
const size_t page_size = qemu_real_host_page_size;
31
size_t region_size;
32
- size_t i;
33
- int have_prot;
34
+ int have_prot, need_prot;
35
36
/* Size the buffer. */
37
if (tb_size == 0) {
38
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
39
* Set guard pages in the rw buffer, as that's the one into which
40
* buffer overruns could occur. Do not set guard pages in the rx
41
* buffer -- let that one use hugepages throughout.
42
+ * Work with the page protections set up with the initial mapping.
43
*/
44
- for (i = 0; i < region.n; i++) {
45
+ need_prot = PAGE_READ | PAGE_WRITE;
46
+#ifndef CONFIG_TCG_INTERPRETER
47
+ if (tcg_splitwx_diff == 0) {
48
+ need_prot |= PAGE_EXEC;
49
+ }
50
+#endif
51
+ for (size_t i = 0, n = region.n; i < n; i++) {
52
void *start, *end;
53
54
tcg_region_bounds(i, &start, &end);
55
+ if (have_prot != need_prot) {
56
+ int rc;
57
58
- /*
59
- * macOS 11.2 has a bug (Apple Feedback FB8994773) in which mprotect
60
- * rejects a permission change from RWX -> NONE. Guard pages are
61
- * nice for bug detection but are not essential; ignore any failure.
62
- */
63
- (void)qemu_mprotect_none(end, page_size);
64
+ if (need_prot == (PAGE_READ | PAGE_WRITE | PAGE_EXEC)) {
65
+ rc = qemu_mprotect_rwx(start, end - start);
66
+ } else if (need_prot == (PAGE_READ | PAGE_WRITE)) {
67
+ rc = qemu_mprotect_rw(start, end - start);
68
+ } else {
69
+ g_assert_not_reached();
70
+ }
71
+ if (rc) {
72
+ error_setg_errno(&error_fatal, errno,
73
+ "mprotect of jit buffer");
74
+ }
75
+ }
76
+ if (have_prot != 0) {
77
+ /*
78
+ * macOS 11.2 has a bug (Apple Feedback FB8994773) in which mprotect
79
+ * rejects a permission change from RWX -> NONE. Guard pages are
80
+ * nice for bug detection but are not essential; ignore any failure.
81
+ */
82
+ (void)qemu_mprotect_none(end, page_size);
83
+ }
84
}
85
86
tcg_region_trees_init();
87
--
88
2.25.1
89
90
New patch
1
There's a change in mprotect() behaviour [1] in the latest macOS
2
on M1 and it's not yet clear if it's going to be fixed by Apple.
1
3
4
In this case, instead of changing permissions of N guard pages,
5
we change permissions of N rwx regions. The same number of
6
syscalls is required either way.
7
8
[1] https://gist.github.com/hikalium/75ae822466ee4da13cbbe486498a191f
9
10
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
---
13
tcg/region.c | 19 +++++++++----------
14
1 file changed, 9 insertions(+), 10 deletions(-)
15
16
diff --git a/tcg/region.c b/tcg/region.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/tcg/region.c
19
+++ b/tcg/region.c
20
@@ -XXX,XX +XXX,XX @@ static int alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
21
error_free_or_abort(errp);
22
}
23
24
- prot = PROT_READ | PROT_WRITE | PROT_EXEC;
25
+ /*
26
+ * macOS 11.2 has a bug (Apple Feedback FB8994773) in which mprotect
27
+ * rejects a permission change from RWX -> NONE when reserving the
28
+ * guard pages later. We can go the other way with the same number
29
+ * of syscalls, so always begin with PROT_NONE.
30
+ */
31
+ prot = PROT_NONE;
32
flags = MAP_PRIVATE | MAP_ANONYMOUS;
33
-#ifdef CONFIG_TCG_INTERPRETER
34
- /* The tcg interpreter does not need execute permission. */
35
- prot = PROT_READ | PROT_WRITE;
36
-#elif defined(CONFIG_DARWIN)
37
+#ifdef CONFIG_DARWIN
38
/* Applicable to both iOS and macOS (Apple Silicon). */
39
if (!splitwx) {
40
flags |= MAP_JIT;
41
@@ -XXX,XX +XXX,XX @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
42
}
43
}
44
if (have_prot != 0) {
45
- /*
46
- * macOS 11.2 has a bug (Apple Feedback FB8994773) in which mprotect
47
- * rejects a permission change from RWX -> NONE. Guard pages are
48
- * nice for bug detection but are not essential; ignore any failure.
49
- */
50
+ /* Guard pages are nice for bug detection but are not essential. */
51
(void)qemu_mprotect_none(end, page_size);
52
}
53
}
54
--
55
2.25.1
56
57
New patch
1
These variables belong to the jit side, not the user side.
1
2
3
Since tcg_init_ctx is no longer used outside of tcg/, move
4
the declaration to tcg-internal.h.
5
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
8
Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
---
11
include/tcg/tcg.h | 1 -
12
tcg/tcg-internal.h | 1 +
13
accel/tcg/translate-all.c | 3 ---
14
tcg/tcg.c | 3 +++
15
4 files changed, 4 insertions(+), 4 deletions(-)
16
17
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/tcg/tcg.h
20
+++ b/include/tcg/tcg.h
21
@@ -XXX,XX +XXX,XX @@ static inline bool temp_readonly(TCGTemp *ts)
22
return ts->kind >= TEMP_FIXED;
23
}
24
25
-extern TCGContext tcg_init_ctx;
26
extern __thread TCGContext *tcg_ctx;
27
extern const void *tcg_code_gen_epilogue;
28
extern uintptr_t tcg_splitwx_diff;
29
diff --git a/tcg/tcg-internal.h b/tcg/tcg-internal.h
30
index XXXXXXX..XXXXXXX 100644
31
--- a/tcg/tcg-internal.h
32
+++ b/tcg/tcg-internal.h
33
@@ -XXX,XX +XXX,XX @@
34
35
#define TCG_HIGHWATER 1024
36
37
+extern TCGContext tcg_init_ctx;
38
extern TCGContext **tcg_ctxs;
39
extern unsigned int tcg_cur_ctxs;
40
extern unsigned int tcg_max_ctxs;
41
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/translate-all.c
44
+++ b/accel/tcg/translate-all.c
45
@@ -XXX,XX +XXX,XX @@ static int v_l2_levels;
46
47
static void *l1_map[V_L1_MAX_SIZE];
48
49
-/* code generation context */
50
-TCGContext tcg_init_ctx;
51
-__thread TCGContext *tcg_ctx;
52
TBContext tb_ctx;
53
54
static void page_table_config_init(void)
55
diff --git a/tcg/tcg.c b/tcg/tcg.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/tcg/tcg.c
58
+++ b/tcg/tcg.c
59
@@ -XXX,XX +XXX,XX @@ static bool tcg_target_const_match(int64_t val, TCGType type, int ct);
60
static int tcg_out_ldst_finalize(TCGContext *s);
61
#endif
62
63
+TCGContext tcg_init_ctx;
64
+__thread TCGContext *tcg_ctx;
65
+
66
TCGContext **tcg_ctxs;
67
unsigned int tcg_cur_ctxs;
68
unsigned int tcg_max_ctxs;
69
--
70
2.25.1
71
72
diff view generated by jsdifflib
New patch
1
Introduce a function to remove everything emitted
2
since a given point.
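
A minimal usage sketch (hypothetical translator code, not taken from this
series):

    /* Save a rollback point, speculatively emit, then decide. */
    TCGOp *last = tcg_last_op();

    gen_speculative_sequence(ctx);              /* hypothetical helper */
    if (!speculation_turned_out_useful(ctx)) {  /* hypothetical predicate */
        tcg_remove_ops_after(last);     /* discard everything emitted above */
    }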
1
3
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
---
7
include/tcg/tcg.h | 10 ++++++++++
8
tcg/tcg.c | 13 +++++++++++++
9
2 files changed, 23 insertions(+)
10
11
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
12
index XXXXXXX..XXXXXXX 100644
13
--- a/include/tcg/tcg.h
14
+++ b/include/tcg/tcg.h
15
@@ -XXX,XX +XXX,XX @@ void tcg_op_remove(TCGContext *s, TCGOp *op);
16
TCGOp *tcg_op_insert_before(TCGContext *s, TCGOp *op, TCGOpcode opc);
17
TCGOp *tcg_op_insert_after(TCGContext *s, TCGOp *op, TCGOpcode opc);
18
19
+/**
20
+ * tcg_remove_ops_after:
21
+ * @op: target operation
22
+ *
23
+ * Discard any opcodes emitted since @op. Expected usage is to save
24
+ * a starting point with tcg_last_op(), speculatively emit opcodes,
25
+ * then decide whether or not to keep those opcodes after the fact.
26
+ */
27
+void tcg_remove_ops_after(TCGOp *op);
28
+
29
void tcg_optimize(TCGContext *s);
30
31
/* Allocate a new temporary and initialize it with a constant. */
32
diff --git a/tcg/tcg.c b/tcg/tcg.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/tcg/tcg.c
35
+++ b/tcg/tcg.c
36
@@ -XXX,XX +XXX,XX @@ void tcg_op_remove(TCGContext *s, TCGOp *op)
37
#endif
38
}
39
40
+void tcg_remove_ops_after(TCGOp *op)
41
+{
42
+ TCGContext *s = tcg_ctx;
43
+
44
+ while (true) {
45
+ TCGOp *last = tcg_last_op();
46
+ if (last == op) {
47
+ return;
48
+ }
49
+ tcg_op_remove(s, last);
50
+ }
51
+}
52
+
53
static TCGOp *tcg_op_alloc(TCGOpcode opc)
54
{
55
TCGContext *s = tcg_ctx;
56
--
57
2.25.1
58
59
New patch
1
At some point during the development of tcg_constant_*, I changed
2
my mind about whether such temps should be able to be passed to
3
tcg_temp_free_*. The final version committed allows this, but the
4
commentary was not updated to match.
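
For illustration, the behaviour now documented (a sketch; dst and src are
hypothetical temporaries):

    TCGv_i32 one = tcg_constant_i32(1);  /* read-only, pooled constant */
    tcg_gen_add_i32(dst, src, one);
    tcg_temp_free_i32(one);              /* permitted; silently ignored */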
1
5
6
Fixes: c0522136adf
7
Reported-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
---
12
include/tcg/tcg.h | 3 ++-
13
1 file changed, 2 insertions(+), 1 deletion(-)
14
15
diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/tcg/tcg.h
18
+++ b/include/tcg/tcg.h
19
@@ -XXX,XX +XXX,XX @@ TCGv_vec tcg_const_ones_vec_matching(TCGv_vec);
20
21
/*
22
* Locate or create a read-only temporary that is a constant.
23
- * This kind of temporary need not and should not be freed.
24
+ * This kind of temporary need not be freed, but for convenience
25
+ * will be silently ignored by tcg_temp_free_*.
26
*/
27
TCGTemp *tcg_constant_internal(TCGType type, int64_t val);
28
29
--
30
2.25.1
31
32
New patch
1
From: "Jose R. Ziviani" <jziviani@suse.de>
1
2
3
Commit 5e8892db93 fixed several function signatures but tcg_out_op for
4
arm was missed. This patch fixes it as well.
5
6
Signed-off-by: Jose R. Ziviani <jziviani@suse.de>
7
Message-Id: <20210610224450.23425-1-jziviani@suse.de>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
10
tcg/arm/tcg-target.c.inc | 3 ++-
11
1 file changed, 2 insertions(+), 1 deletion(-)
12
13
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
14
index XXXXXXX..XXXXXXX 100644
15
--- a/tcg/arm/tcg-target.c.inc
16
+++ b/tcg/arm/tcg-target.c.inc
17
@@ -XXX,XX +XXX,XX @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
18
static void tcg_out_epilogue(TCGContext *s);
19
20
static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
21
- const TCGArg *args, const int *const_args)
22
+ const TCGArg args[TCG_MAX_OP_ARGS],
23
+ const int const_args[TCG_MAX_OP_ARGS])
24
{
25
TCGArg a0, a1, a2, a3, a4, a5;
26
int c;
27
--
28
2.25.1
29
30
New patch
1
Typo in the conversion to FloatParts64.
1
2
3
Fixes: 572c4d862ff2
4
Fixes: Coverity CID 1457457
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Message-Id: <20210607223812.110596-1-richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
---
10
fpu/softfloat.c | 2 +-
11
1 file changed, 1 insertion(+), 1 deletion(-)
12
13
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/fpu/softfloat.c
16
+++ b/fpu/softfloat.c
17
@@ -XXX,XX +XXX,XX @@ float32 float32_exp2(float32 a, float_status *status)
18
19
float_raise(float_flag_inexact, status);
20
21
- float64_unpack_canonical(&xnp, float64_ln2, status);
22
+ float64_unpack_canonical(&tp, float64_ln2, status);
23
xp = *parts_mul(&xp, &tp, status);
24
xnp = xp;
25
26
--
27
2.25.1
28
29
New patch
1
From: Luis Pires <luis.pires@eldorado.org.br>
1
2
3
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
4
Message-Id: <20210601125143.191165-1-luis.pires@eldorado.org.br>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
---
7
docs/devel/tcg.rst | 101 ++++++++++++++++++++++++++++++++++++++++-----
8
1 file changed, 90 insertions(+), 11 deletions(-)
9
10
diff --git a/docs/devel/tcg.rst b/docs/devel/tcg.rst
11
index XXXXXXX..XXXXXXX 100644
12
--- a/docs/devel/tcg.rst
13
+++ b/docs/devel/tcg.rst
14
@@ -XXX,XX +XXX,XX @@ performances.
15
QEMU's dynamic translation backend is called TCG, for "Tiny Code
16
Generator". For more information, please take a look at ``tcg/README``.
17
18
-Some notable features of QEMU's dynamic translator are:
19
+The following sections outline some notable features and implementation
20
+details of QEMU's dynamic translator.
21
22
CPU state optimisations
23
-----------------------
24
25
-The target CPUs have many internal states which change the way it
26
-evaluates instructions. In order to achieve a good speed, the
27
+The target CPUs have many internal states which change the way they
28
+evaluate instructions. In order to achieve a good speed, the
29
translation phase considers that some state information of the virtual
30
CPU cannot change in it. The state is recorded in the Translation
31
Block (TB). If the state changes (e.g. privilege level), a new TB will
32
@@ -XXX,XX +XXX,XX @@ Direct block chaining
33
---------------------
34
35
After each translated basic block is executed, QEMU uses the simulated
36
-Program Counter (PC) and other cpu state information (such as the CS
37
+Program Counter (PC) and other CPU state information (such as the CS
38
segment base value) to find the next basic block.
39
40
-In order to accelerate the most common cases where the new simulated PC
41
-is known, QEMU can patch a basic block so that it jumps directly to the
42
-next one.
43
+In its simplest, less optimized form, this is done by exiting from the
44
+current TB, going through the TB epilogue, and then back to the
45
+main loop. That’s where QEMU looks for the next TB to execute,
46
+translating it from the guest architecture if it isn’t already available
47
+in memory. Then QEMU proceeds to execute this next TB, starting at the
48
+prologue and then moving on to the translated instructions.
49
50
-The most portable code uses an indirect jump. An indirect jump makes
51
-it easier to make the jump target modification atomic. On some host
52
-architectures (such as x86 or PowerPC), the ``JUMP`` opcode is
53
-directly patched so that the block chaining has no overhead.
54
+Exiting from the TB this way will cause the ``cpu_exec_interrupt()``
55
+callback to be re-evaluated before executing additional instructions.
56
+It is mandatory to exit this way after any CPU state changes that may
57
+unmask interrupts.
58
+
59
+In order to accelerate the cases where the TB for the new
60
+simulated PC is already available, QEMU has mechanisms that allow
61
+multiple TBs to be chained directly, without having to go back to the
62
+main loop as described above. These mechanisms are:
63
+
64
+``lookup_and_goto_ptr``
65
+^^^^^^^^^^^^^^^^^^^^^^^
66
+
67
+Calling ``tcg_gen_lookup_and_goto_ptr()`` will emit a call to
68
+``helper_lookup_tb_ptr``. This helper will look for an existing TB that
69
+matches the current CPU state. If the destination TB is available its
70
+code address is returned, otherwise the address of the JIT epilogue is
71
+returned. The call to the helper is always followed by the tcg ``goto_ptr``
72
+opcode, which branches to the returned address. In this way, we either
73
+branch to the next TB or return to the main loop.
74
+
75
+``goto_tb + exit_tb``
76
+^^^^^^^^^^^^^^^^^^^^^
77
+
78
+The translation code usually implements branching by performing the
79
+following steps:
80
+
81
+1. Call ``tcg_gen_goto_tb()`` passing a jump slot index (either 0 or 1)
82
+ as a parameter.
83
+
84
+2. Emit TCG instructions to update the CPU state with any information
85
+ that has been assumed constant and is required by the main loop to
86
+ correctly locate and execute the next TB. For most guests, this is
87
+ just the PC of the branch destination, but others may store additional
88
+ data. The information updated in this step must be inferable from both
89
+ ``cpu_get_tb_cpu_state()`` and ``cpu_restore_state()``.
90
+
91
+3. Call ``tcg_gen_exit_tb()`` passing the address of the current TB and
92
+ the jump slot index again.
93
+
94
+Step 1, ``tcg_gen_goto_tb()``, will emit a ``goto_tb`` TCG
95
+instruction that later on gets translated to a jump to an address
96
+associated with the specified jump slot. Initially, this is the address
97
+of step 2's instructions, which update the CPU state information. Step 3,
98
+``tcg_gen_exit_tb()``, exits from the current TB returning a tagged
99
+pointer composed of the last executed TB’s address and the jump slot
100
+index.
101
+
102
+The first time this whole sequence is executed, step 1 simply jumps
103
+to step 2. Then the CPU state information gets updated and we exit from
104
+the current TB. As a result, the behavior is very similar to the less
105
+optimized form described earlier in this section.
106
+
107
+Next, the main loop looks for the next TB to execute using the
108
+current CPU state information (creating the TB if it wasn’t already
109
+available) and, before starting to execute the new TB’s instructions,
110
+patches the previously executed TB by associating one of its jump
111
+slots (the one specified in the call to ``tcg_gen_exit_tb()``) with the
112
+address of the new TB.
113
+
114
+The next time this previous TB is executed and we get to that same
115
+``goto_tb`` step, it will already be patched (assuming the destination TB
116
+is still in memory) and will jump directly to the first instruction of
117
+the destination TB, without going back to the main loop.
118
+
119
+For the ``goto_tb + exit_tb`` mechanism to be used, the following
120
+conditions need to be satisfied:
121
+
122
+* The change in CPU state must be constant, e.g., a direct branch and
123
+ not an indirect branch.
124
+
125
+* The direct branch cannot cross a page boundary. Memory mappings
126
+ may change, causing the code at the destination address to change.
127
+
128
+Note that, on step 3 (``tcg_gen_exit_tb()``), in addition to the
129
+jump slot index, the address of the TB just executed is also returned.
130
+This address corresponds to the TB that will be patched; it may be
131
+different than the one that was directly executed from the main loop
132
+if the latter had already been chained to other TBs.
133
134
Self-modifying code and translated code invalidation
135
----------------------------------------------------
136
--
137
2.25.1
138
139
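
As a postscript to the documentation patch above, a minimal front-end
sketch of the goto_tb + exit_tb sequence it describes (gen_goto_tb,
use_goto_tb and cpu_pc are illustrative names, not taken from any
particular target):

    static void gen_goto_tb(DisasContext *dc, unsigned slot, target_ulong dest)
    {
        if (use_goto_tb(dc, dest)) {            /* direct branch, same guest page */
            tcg_gen_goto_tb(slot);              /* step 1: patchable jump slot */
            tcg_gen_movi_tl(cpu_pc, dest);      /* step 2: update assumed-constant state */
            tcg_gen_exit_tb(dc->base.tb, slot); /* step 3: return tagged TB pointer */
        } else {
            tcg_gen_movi_tl(cpu_pc, dest);
            tcg_gen_lookup_and_goto_ptr();      /* fall back to helper_lookup_tb_ptr */
        }
    }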