The following changes since commit 4c41341af76cfc85b5a6c0f87de4838672ab9f89:

  Merge remote-tracking branch 'remotes/aperard/tags/pull-xen-20201020' into staging (2020-10-20 11:20:36 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20201020

for you to fetch changes up to 6358890cb939192f6169fdf7664d903bf9b1d338:

  tests/tcg/aarch64: Add bti smoke tests (2020-10-20 16:12:02 +0100)

----------------------------------------------------------------
target-arm queue:
 * Fix AArch32 SMLAD incorrect setting of Q bit
 * AArch32 VCVT fixed-point to float is always round-to-nearest
 * strongarm: Fix 'time to transmit a char' unit comment
 * Restrict APEI tables generation to the 'virt' machine
 * bcm2835: minor code cleanups
 * correctly flush TLBs when TBI is enabled
 * tests/qtest: Add npcm7xx timer test
 * loads-stores.rst: add footnote that clarifies GETPC usage
 * Fix reported EL for mte_check_fail
 * Ignore HCR_EL2.ATA when {E2H,TGE} != 11
 * microbit_i2c: Fix coredump when dump-vmstate
 * nseries: Fix loading kernel image on n8x0 machines
 * Implement v8.1M low-overhead-loops
 * linux-user: Support AArch64 BTI

----------------------------------------------------------------
Emanuele Giuseppe Esposito (1):
      loads-stores.rst: add footnote that clarifies GETPC usage

Havard Skinnemoen (1):
      tests/qtest: Add npcm7xx timer test

Peng Liang (1):
      microbit_i2c: Fix coredump when dump-vmstate

Peter Maydell (12):
      target/arm: Fix SMLAD incorrect setting of Q bit
      target/arm: AArch32 VCVT fixed-point to float is always round-to-nearest
      decodetree: Fix codegen for non-overlapping group inside overlapping group
      target/arm: Implement v8.1M NOCP handling
      target/arm: Implement v8.1M conditional-select insns
      target/arm: Make the t32 insn[25:23]=111 group non-overlapping
      target/arm: Don't allow BLX imm for M-profile
      target/arm: Implement v8.1M branch-future insns (as NOPs)
      target/arm: Implement v8.1M low-overhead-loop instructions
      target/arm: Fix has_vfp/has_neon ID reg squashing for M-profile
      target/arm: Allow M-profile CPUs with FP16 to set FPSCR.FP16
      target/arm: Implement FPSCR.LTPSIZE for M-profile LOB extension

Philippe Mathieu-Daudé (10):
      hw/arm/strongarm: Fix 'time to transmit a char' unit comment
      hw/arm: Restrict APEI tables generation to the 'virt' machine
      hw/timer/bcm2835: Introduce BCM2835_SYSTIMER_COUNT definition
      hw/timer/bcm2835: Rename variable holding CTRL_STATUS register
      hw/timer/bcm2835: Support the timer COMPARE registers
      hw/arm/bcm2835_peripherals: Correctly wire the SYS_timer IRQs
      hw/intc/bcm2835_ic: Trace GPU/CPU IRQ handlers
      hw/intc/bcm2836_control: Use IRQ definitions instead of magic numbers
      hw/arm/nseries: Fix loading kernel image on n8x0 machines
      linux-user/elfload: Avoid leaking interp_name using GLib memory API

Richard Henderson (16):
      accel/tcg: Add tlb_flush_page_bits_by_mmuidx*
      target/arm: Use tlb_flush_page_bits_by_mmuidx*
      target/arm: Remove redundant mmu_idx lookup
      target/arm: Fix reported EL for mte_check_fail
      target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != 11
      linux-user/aarch64: Reset btype for signals
      linux-user: Set PAGE_TARGET_1 for TARGET_PROT_BTI
      include/elf: Add defines related to GNU property notes for AArch64
      linux-user/elfload: Fix coding style in load_elf_image
      linux-user/elfload: Adjust iteration over phdr
      linux-user/elfload: Move PT_INTERP detection to first loop
      linux-user/elfload: Use Error for load_elf_image
      linux-user/elfload: Use Error for load_elf_interp
      linux-user/elfload: Parse NT_GNU_PROPERTY_TYPE_0 notes
      linux-user/elfload: Parse GNU_PROPERTY_AARCH64_FEATURE_1_AND
      tests/tcg/aarch64: Add bti smoke tests

 docs/devel/loads-stores.rst | 8 +-
 default-configs/devices/arm-softmmu.mak | 1 -
 include/elf.h | 22 ++
 include/exec/cpu-all.h | 2 +
 include/exec/exec-all.h | 36 ++
 include/hw/timer/bcm2835_systmr.h | 17 +-
 linux-user/qemu.h | 4 +
 linux-user/syscall_defs.h | 4 +
 target/arm/cpu.h | 13 +
 target/arm/helper.h | 13 +
 target/arm/internals.h | 9 +-
 target/arm/m-nocp.decode | 10 +-
 target/arm/t32.decode | 50 ++-
 accel/tcg/cputlb.c | 275 +++++++++++++++-
 hw/arm/bcm2835_peripherals.c | 13 +-
 hw/arm/nseries.c | 1 +
 hw/arm/strongarm.c | 2 +-
 hw/i2c/microbit_i2c.c | 1 +
 hw/intc/bcm2835_ic.c | 4 +-
 hw/intc/bcm2836_control.c | 8 +-
 hw/timer/bcm2835_systmr.c | 57 ++--
 linux-user/aarch64/signal.c | 10 +-
 linux-user/elfload.c | 326 ++++++++++++++----
 linux-user/mmap.c | 16 +
 target/arm/cpu.c | 38 ++-
 target/arm/helper.c | 55 +++-
 target/arm/mte_helper.c | 13 +-
 target/arm/translate-a64.c | 6 +-
 target/arm/translate.c | 239 +++++++++++++-
 target/arm/vfp_helper.c | 76 +++--
 tests/qtest/npcm7xx_timer-test.c | 562 ++++++++++++++++++++++++++++++++
 tests/tcg/aarch64/bti-1.c | 62 ++++
 tests/tcg/aarch64/bti-2.c | 108 ++++++
 tests/tcg/aarch64/bti-crt.inc.c | 51 +++
 hw/arm/Kconfig | 1 +
 hw/intc/trace-events | 4 +
 hw/timer/trace-events | 6 +-
 scripts/decodetree.py | 2 +-
 target/arm/translate-vfp.c.inc | 41 ++-
 tests/qtest/meson.build | 1 +
 tests/tcg/aarch64/Makefile.target | 10 +
 tests/tcg/configure.sh | 4 +
 42 files changed, 1973 insertions(+), 208 deletions(-)
 create mode 100644 tests/qtest/npcm7xx_timer-test.c
 create mode 100644 tests/tcg/aarch64/bti-1.c
 create mode 100644 tests/tcg/aarch64/bti-2.c
 create mode 100644 tests/tcg/aarch64/bti-crt.inc.c


The following changes since commit 003ba52a8b327180e284630b289c6ece5a3e08b9:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2023-02-16 11:16:39 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230216

for you to fetch changes up to caf01d6a435d9f4a95aeae2f9fc6cb8b889b1fb8:

  tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG (2023-02-16 16:28:53 +0000)

----------------------------------------------------------------
target-arm queue:
 * Some mostly M-profile-related code cleanups
 * avocado: Retire the boot_linux.py AArch64 TCG tests
 * hw/arm/smmuv3: Add GBPA register
 * arm/virt: don't try to spell out the accelerator
 * hw/arm: Attach PSPI module to NPCM7XX SoC
 * Some cleanup/refactoring patches aiming towards
   allowing building Arm targets without CONFIG_TCG

----------------------------------------------------------------
Alex Bennée (1):
      tests/avocado: retire the Aarch64 TCG tests from boot_linux.py

Claudio Fontana (3):
      target/arm: rename handle_semihosting to tcg_handle_semihosting
      target/arm: wrap psci call with tcg_enabled
      target/arm: wrap call to aarch64_sve_change_el in tcg_enabled()

Cornelia Huck (1):
      arm/virt: don't try to spell out the accelerator

Fabiano Rosas (7):
      target/arm: Move PC alignment check
      target/arm: Move cpregs code out of cpu.h
      tests/avocado: Skip tests that require a missing accelerator
      tests/avocado: Tag TCG tests with accel:tcg
      target/arm: Use "max" as default cpu for the virt machine with KVM
      tests/qtest: arm-cpu-features: Match tests to required accelerators
      tests/qtest: Restrict tpm-tis-devices-{swtpm}-test to CONFIG_TCG

Hao Wu (3):
      MAINTAINERS: Add myself to maintainers and remove Havard
      hw/ssi: Add Nuvoton PSPI Module
      hw/arm: Attach PSPI module to NPCM7XX SoC

Jean-Philippe Brucker (2):
      hw/arm/smmu-common: Support 64-bit addresses
      hw/arm/smmu-common: Fix TTB1 handling

Mostafa Saleh (1):
      hw/arm/smmuv3: Add GBPA register

Philippe Mathieu-Daudé (12):
      hw/intc/armv7m_nvic: Use OBJECT_DECLARE_SIMPLE_TYPE() macro
      target/arm: Simplify arm_v7m_mmu_idx_for_secstate() for user emulation
      target/arm: Reduce arm_v7m_mmu_idx_[all/for_secstate_and_priv]() scope
      target/arm: Constify ID_PFR1 on user emulation
      target/arm: Convert CPUARMState::eabi to boolean
      target/arm: Avoid resetting CPUARMState::eabi field
      target/arm: Restrict CPUARMState::gicv3state to sysemu
      target/arm: Restrict CPUARMState::arm_boot_info to sysemu
      target/arm: Restrict CPUARMState::nvic to sysemu
      target/arm: Store CPUARMState::nvic as NVICState*
      target/arm: Declare CPU <-> NVIC helpers in 'hw/intc/armv7m_nvic.h'
      hw/arm: Add missing XLNX_ZYNQMP_ARM -> USB_DWC3 Kconfig dependency

 MAINTAINERS | 8 +-
 docs/system/arm/nuvoton.rst | 2 +-
 hw/arm/smmuv3-internal.h | 7 +
 include/hw/arm/npcm7xx.h | 2 +
 include/hw/arm/smmu-common.h | 2 -
 include/hw/arm/smmuv3.h | 1 +
 include/hw/intc/armv7m_nvic.h | 128 +++++++++++++++++-
 include/hw/ssi/npcm_pspi.h | 53 ++++++++
 linux-user/user-internals.h | 2 +-
 target/arm/cpregs.h | 98 ++++++++++++++
 target/arm/cpu.h | 228 ++-------------------------------
 target/arm/internals.h | 14 --
 hw/arm/npcm7xx.c | 25 +++-
 hw/arm/smmu-common.c | 4 +-
 hw/arm/smmuv3.c | 43 ++++++-
 hw/arm/virt.c | 10 +-
 hw/intc/armv7m_nvic.c | 38 ++----
 hw/ssi/npcm_pspi.c | 221 ++++++++++++++++++++++++++++++++
 linux-user/arm/cpu_loop.c | 4 +-
 target/arm/cpu.c | 5 +-
 target/arm/cpu_tcg.c | 3 +
 target/arm/helper.c | 31 +++--
 target/arm/m_helper.c | 86 +++++++------
 target/arm/machine.c | 18 +--
 tests/qtest/arm-cpu-features.c | 28 ++--
 hw/arm/Kconfig | 1 +
 hw/ssi/meson.build | 2 +-
 hw/ssi/trace-events | 5 +
 tests/avocado/avocado_qemu/__init__.py | 4 +
 tests/avocado/boot_linux.py | 48 ++-----
 tests/avocado/boot_linux_console.py | 1 +
 tests/avocado/machine_aarch64_virt.py | 63 ++++++++-
 tests/avocado/reverse_debugging.py | 8 ++
 tests/qtest/meson.build | 4 +-
 34 files changed, 798 insertions(+), 399 deletions(-)
 create mode 100644 include/hw/ssi/npcm_pspi.h
 create mode 100644 hw/ssi/npcm_pspi.c
The SMLAD instruction is supposed to:
 * signed multiply Rn[15:0] * Rm[15:0]
 * signed multiply Rn[31:16] * Rm[31:16]
 * perform a signed addition of the products and Ra
 * set Rd to the low 32 bits of the theoretical
   infinite-precision result
 * set the Q flag if the sign-extension of Rd
   would differ from the infinite-precision result
   (ie on overflow)

Our current implementation doesn't quite do this, though: it performs
an addition of the products setting Q on overflow, and then it adds
Ra, again possibly setting Q. This sometimes incorrectly sets Q when
the architecturally mandated only-check-for-overflow-once algorithm
does not. For instance:
 r1 = 0x80008000; r2 = 0x80008000; r3 = 0xffffffff
 smlad r0, r1, r2, r3
This is (-32768 * -32768) + (-32768 * -32768) - 1

The products are both 0x4000_0000, so when added together as 32-bit
signed numbers they overflow (and QEMU sets Q), but because the
addition of Ra == -1 brings the total back down to 0x7fff_ffff
there is no overflow for the complete operation and setting Q is
incorrect.

Fix this edge case by resorting to 64-bit arithmetic for the
case where we need to add three values together.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201009144712.11187-1-peter.maydell@linaro.org
---
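As an aside, the only-check-once rule can be illustrated with a small
standalone C model (a sketch only, not part of the patch; the helper
name is invented for illustration):

```c
#include <stdint.h>

/* Hypothetical reference model: compute the SMLAD result and whether Q
 * should be set, using a single 64-bit accumulation as the architecture
 * describes, rather than two separate saturating 32-bit additions. */
static uint32_t smlad_ref(uint32_t rn, uint32_t rm, uint32_t ra, int *q)
{
    int64_t p1 = (int64_t)(int16_t)rn * (int16_t)rm;                   /* low halves  */
    int64_t p2 = (int64_t)(int16_t)(rn >> 16) * (int16_t)(rm >> 16);   /* high halves */
    int64_t sum = p1 + p2 + (int32_t)ra;    /* one infinite-precision sum */

    /* Q is set only if truncating to 32 bits loses information, i.e.
     * the complete operation overflowed. */
    *q = (sum != (int32_t)sum);
    return (uint32_t)sum;
}
```

For the example above (rn = rm = 0x80008000, ra = 0xffffffff) the 64-bit
sum is 0x7fff_ffff, so this model leaves Q clear, which is the behaviour
the TCG change below implements by widening to 64 bits before the final
comparison.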
 target/arm/translate.c | 58 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 48 insertions(+), 10 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_smlad(DisasContext *s, arg_rrrr *a, bool m_swap, bool sub)
     gen_smul_dual(t1, t2);
 
     if (sub) {
-        /* This subtraction cannot overflow. */
+        /*
+         * This subtraction cannot overflow, so we can do a simple
+         * 32-bit subtraction and then a possible 32-bit saturating
+         * addition of Ra.
+         */
         tcg_gen_sub_i32(t1, t1, t2);
+        tcg_temp_free_i32(t2);
+
+        if (a->ra != 15) {
+            t2 = load_reg(s, a->ra);
+            gen_helper_add_setq(t1, cpu_env, t1, t2);
+            tcg_temp_free_i32(t2);
+        }
+    } else if (a->ra == 15) {
+        /* Single saturation-checking addition */
+        gen_helper_add_setq(t1, cpu_env, t1, t2);
+        tcg_temp_free_i32(t2);
     } else {
         /*
-         * This addition cannot overflow 32 bits; however it may
-         * overflow considered as a signed operation, in which case
-         * we must set the Q flag.
+         * We need to add the products and Ra together and then
+         * determine whether the final result overflowed. Doing
+         * this as two separate add-and-check-overflow steps incorrectly
+         * sets Q for cases like (-32768 * -32768) + (-32768 * -32768) + -1.
+         * Do all the arithmetic at 64-bits and then check for overflow.
          */
-        gen_helper_add_setq(t1, cpu_env, t1, t2);
-    }
-    tcg_temp_free_i32(t2);
+        TCGv_i64 p64, q64;
+        TCGv_i32 t3, qf, one;
 
-    if (a->ra != 15) {
-        t2 = load_reg(s, a->ra);
-        gen_helper_add_setq(t1, cpu_env, t1, t2);
+        p64 = tcg_temp_new_i64();
+        q64 = tcg_temp_new_i64();
+        tcg_gen_ext_i32_i64(p64, t1);
+        tcg_gen_ext_i32_i64(q64, t2);
+        tcg_gen_add_i64(p64, p64, q64);
+        load_reg_var(s, t2, a->ra);
+        tcg_gen_ext_i32_i64(q64, t2);
+        tcg_gen_add_i64(p64, p64, q64);
+        tcg_temp_free_i64(q64);
+
+        tcg_gen_extr_i64_i32(t1, t2, p64);
+        tcg_temp_free_i64(p64);
+        /*
+         * t1 is the low half of the result which goes into Rd.
+         * We have overflow and must set Q if the high half (t2)
+         * is different from the sign-extension of t1.
+         */
+        t3 = tcg_temp_new_i32();
+        tcg_gen_sari_i32(t3, t1, 31);
+        qf = load_cpu_field(QF);
+        one = tcg_const_i32(1);
+        tcg_gen_movcond_i32(TCG_COND_NE, qf, t2, t3, one, qf);
+        store_cpu_field(qf, QF);
+        tcg_temp_free_i32(one);
+        tcg_temp_free_i32(t3);
         tcg_temp_free_i32(t2);
     }
     store_reg(s, a->rd, t1);
-- 
2.20.1
For AArch32, unlike the VCVT of integer to float, which honours the
rounding mode specified by the FPSCR, VCVT of fixed-point to float is
always round-to-nearest. (AArch64 fixed-point-to-float conversions
always honour the FPCR rounding mode.)

Implement this by providing _round_to_nearest versions of the
relevant helpers which set the rounding mode temporarily when making
the call to the underlying softfloat function.

We only need to change the VFP VCVT instructions, because the
standard-FPSCR value used by the Neon VCVT is always set to
round-to-nearest, so we don't need to do the extra work of saving
and restoring the rounding mode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201013103532.13391-1-peter.maydell@linaro.org
---
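The save/override/restore pattern used around the softfloat conversion
can be illustrated with the standard C floating-point environment (a
sketch only, not part of the patch; the patch itself manipulates QEMU's
float_status rather than fenv.h, and the function name is invented):

```c
#include <fenv.h>
#include <stdint.h>

/* Sketch: convert a signed 16.16 fixed-point value to float with
 * round-to-nearest forced regardless of the caller's rounding mode,
 * then restore that mode, the same shape as the
 * VFP_CONV_FIX_FLOAT_ROUND helpers added below. */
static float fixed16_to_float_nearest(int32_t x)
{
    int old = fegetround();
    fesetround(FE_TONEAREST);
    float r = (float)x / 65536.0f;   /* scale by 2^-16, rounded to nearest */
    fesetround(old);
    return r;
}
```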
 target/arm/helper.h | 13 +++++++++++++
 target/arm/vfp_helper.c | 23 ++++++++++++++++++++++-
 target/arm/translate-vfp.c.inc | 24 ++++++++++++------------
 3 files changed, 47 insertions(+), 13 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(vfp_ultoh, f16, i32, i32, ptr)
 DEF_HELPER_3(vfp_sqtoh, f16, i64, i32, ptr)
 DEF_HELPER_3(vfp_uqtoh, f16, i64, i32, ptr)
 
+DEF_HELPER_3(vfp_shtos_round_to_nearest, f32, i32, i32, ptr)
+DEF_HELPER_3(vfp_sltos_round_to_nearest, f32, i32, i32, ptr)
+DEF_HELPER_3(vfp_uhtos_round_to_nearest, f32, i32, i32, ptr)
+DEF_HELPER_3(vfp_ultos_round_to_nearest, f32, i32, i32, ptr)
+DEF_HELPER_3(vfp_shtod_round_to_nearest, f64, i64, i32, ptr)
+DEF_HELPER_3(vfp_sltod_round_to_nearest, f64, i64, i32, ptr)
+DEF_HELPER_3(vfp_uhtod_round_to_nearest, f64, i64, i32, ptr)
+DEF_HELPER_3(vfp_ultod_round_to_nearest, f64, i64, i32, ptr)
+DEF_HELPER_3(vfp_shtoh_round_to_nearest, f16, i32, i32, ptr)
+DEF_HELPER_3(vfp_uhtoh_round_to_nearest, f16, i32, i32, ptr)
+DEF_HELPER_3(vfp_sltoh_round_to_nearest, f16, i32, i32, ptr)
+DEF_HELPER_3(vfp_ultoh_round_to_nearest, f16, i32, i32, ptr)
+
 DEF_HELPER_FLAGS_2(set_rmode, TCG_CALL_NO_RWG, i32, i32, ptr)
 
 DEF_HELPER_FLAGS_3(vfp_fcvt_f16_to_f32, TCG_CALL_NO_RWG, f32, f16, ptr, i32)
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
     return float64_to_float32(x, &env->vfp.fp_status);
 }
 
-/* VFP3 fixed point conversion. */
+/*
+ * VFP3 fixed point conversion. The AArch32 versions of fix-to-float
+ * must always round-to-nearest; the AArch64 ones honour the FPSCR
+ * rounding mode. (For AArch32 Neon the standard-FPSCR is set to
+ * round-to-nearest so either helper will work.) AArch32 float-to-fix
+ * must round-to-zero.
+ */
 #define VFP_CONV_FIX_FLOAT(name, p, fsz, ftype, isz, itype) \
 ftype HELPER(vfp_##name##to##p)(uint##isz##_t x, uint32_t shift, \
                                 void *fpstp) \
 { return itype##_to_##float##fsz##_scalbn(x, -shift, fpstp); }
 
+#define VFP_CONV_FIX_FLOAT_ROUND(name, p, fsz, ftype, isz, itype) \
+    ftype HELPER(vfp_##name##to##p##_round_to_nearest)(uint##isz##_t x, \
+                                                       uint32_t shift, \
+                                                       void *fpstp) \
+    { \
+        ftype ret; \
+        float_status *fpst = fpstp; \
+        FloatRoundMode oldmode = fpst->float_rounding_mode; \
+        fpst->float_rounding_mode = float_round_nearest_even; \
+        ret = itype##_to_##float##fsz##_scalbn(x, -shift, fpstp); \
+        fpst->float_rounding_mode = oldmode; \
+        return ret; \
+    }
+
 #define VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, ftype, isz, itype, ROUND, suff) \
 uint##isz##_t HELPER(vfp_to##name##p##suff)(ftype x, uint32_t shift, \
                                             void *fpst) \
@@ -XXX,XX +XXX,XX @@ uint##isz##_t HELPER(vfp_to##name##p##suff)(ftype x, uint32_t shift, \
 
 #define VFP_CONV_FIX(name, p, fsz, ftype, isz, itype) \
 VFP_CONV_FIX_FLOAT(name, p, fsz, ftype, isz, itype) \
+VFP_CONV_FIX_FLOAT_ROUND(name, p, fsz, ftype, isz, itype) \
 VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, ftype, isz, itype, \
                          float_round_to_zero, _round_to_zero) \
 VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, ftype, isz, itype, \
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a)
     /* Switch on op:U:sx bits */
     switch (a->opc) {
     case 0:
-        gen_helper_vfp_shtoh(vd, vd, shift, fpst);
+        gen_helper_vfp_shtoh_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 1:
-        gen_helper_vfp_sltoh(vd, vd, shift, fpst);
+        gen_helper_vfp_sltoh_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 2:
-        gen_helper_vfp_uhtoh(vd, vd, shift, fpst);
+        gen_helper_vfp_uhtoh_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 3:
-        gen_helper_vfp_ultoh(vd, vd, shift, fpst);
+        gen_helper_vfp_ultoh_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 4:
         gen_helper_vfp_toshh_round_to_zero(vd, vd, shift, fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
     /* Switch on op:U:sx bits */
     switch (a->opc) {
     case 0:
-        gen_helper_vfp_shtos(vd, vd, shift, fpst);
+        gen_helper_vfp_shtos_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 1:
-        gen_helper_vfp_sltos(vd, vd, shift, fpst);
+        gen_helper_vfp_sltos_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 2:
-        gen_helper_vfp_uhtos(vd, vd, shift, fpst);
+        gen_helper_vfp_uhtos_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 3:
-        gen_helper_vfp_ultos(vd, vd, shift, fpst);
+        gen_helper_vfp_ultos_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 4:
         gen_helper_vfp_toshs_round_to_zero(vd, vd, shift, fpst);
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
     /* Switch on op:U:sx bits */
     switch (a->opc) {
     case 0:
-        gen_helper_vfp_shtod(vd, vd, shift, fpst);
+        gen_helper_vfp_shtod_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 1:
-        gen_helper_vfp_sltod(vd, vd, shift, fpst);
+        gen_helper_vfp_sltod_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 2:
-        gen_helper_vfp_uhtod(vd, vd, shift, fpst);
+        gen_helper_vfp_uhtod_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 3:
-        gen_helper_vfp_ultod(vd, vd, shift, fpst);
+        gen_helper_vfp_ultod_round_to_nearest(vd, vd, shift, fpst);
         break;
     case 4:
         gen_helper_vfp_toshd_round_to_zero(vd, vd, shift, fpst);
-- 
2.20.1
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Use the new generic support for NT_GNU_PROPERTY_TYPE_0.
3
Manually convert to OBJECT_DECLARE_SIMPLE_TYPE() macro,
4
similarly to automatic conversion from commit 8063396bf3
5
("Use OBJECT_DECLARE_SIMPLE_TYPE when possible").
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Message-id: 20201016184207.786698-12-richard.henderson@linaro.org
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20230206223502.25122-2-philmd@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
linux-user/elfload.c | 48 ++++++++++++++++++++++++++++++++++++++++++--
12
include/hw/intc/armv7m_nvic.h | 5 +----
11
1 file changed, 46 insertions(+), 2 deletions(-)
13
1 file changed, 1 insertion(+), 4 deletions(-)
12
14
13
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
15
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/linux-user/elfload.c
17
--- a/include/hw/intc/armv7m_nvic.h
16
+++ b/linux-user/elfload.c
18
+++ b/include/hw/intc/armv7m_nvic.h
17
@@ -XXX,XX +XXX,XX @@ static void elf_core_copy_regs(target_elf_gregset_t *regs,
19
@@ -XXX,XX +XXX,XX @@
18
20
#include "qom/object.h"
19
#include "elf.h"
21
20
22
#define TYPE_NVIC "armv7m_nvic"
21
+/* We must delay the following stanzas until after "elf.h". */
23
-
22
+#if defined(TARGET_AARCH64)
24
-typedef struct NVICState NVICState;
23
+
25
-DECLARE_INSTANCE_CHECKER(NVICState, NVIC,
24
+static bool arch_parse_elf_property(uint32_t pr_type, uint32_t pr_datasz,
26
- TYPE_NVIC)
25
+ const uint32_t *data,
27
+OBJECT_DECLARE_SIMPLE_TYPE(NVICState, NVIC)
26
+ struct image_info *info,
28
27
+ Error **errp)
29
/* Highest permitted number of exceptions (architectural limit) */
28
+{
30
#define NVIC_MAX_VECTORS 512
29
+ if (pr_type == GNU_PROPERTY_AARCH64_FEATURE_1_AND) {
30
+ if (pr_datasz != sizeof(uint32_t)) {
31
+ error_setg(errp, "Ill-formed GNU_PROPERTY_AARCH64_FEATURE_1_AND");
32
+ return false;
33
+ }
34
+ /* We will extract GNU_PROPERTY_AARCH64_FEATURE_1_BTI later. */
35
+ info->note_flags = *data;
36
+ }
37
+ return true;
38
+}
39
+#define ARCH_USE_GNU_PROPERTY 1
40
+
41
+#else
42
+
43
static bool arch_parse_elf_property(uint32_t pr_type, uint32_t pr_datasz,
44
const uint32_t *data,
45
struct image_info *info,
46
@@ -XXX,XX +XXX,XX @@ static bool arch_parse_elf_property(uint32_t pr_type, uint32_t pr_datasz,
47
}
48
#define ARCH_USE_GNU_PROPERTY 0
49
50
+#endif
51
+
52
struct exec
53
{
54
unsigned int a_info; /* Use macros N_MAGIC, etc for access */
55
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
56
struct elfhdr *ehdr = (struct elfhdr *)bprm_buf;
57
struct elf_phdr *phdr;
58
abi_ulong load_addr, load_bias, loaddr, hiaddr, error;
59
- int i, retval;
60
+ int i, retval, prot_exec;
61
Error *err = NULL;
62
63
/* First of all, some simple consistency checks */
64
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
65
info->brk = 0;
66
info->elf_flags = ehdr->e_flags;
67
68
+ prot_exec = PROT_EXEC;
69
+#ifdef TARGET_AARCH64
70
+ /*
71
+ * If the BTI feature is present, this indicates that the executable
72
+ * pages of the startup binary should be mapped with PROT_BTI, so that
73
+ * branch targets are enforced.
74
+ *
75
+ * The startup binary is either the interpreter or the static executable.
76
+ * The interpreter is responsible for all pages of a dynamic executable.
77
+ *
78
+ * Elf notes are backward compatible to older cpus.
79
+ * Do not enable BTI unless it is supported.
80
+ */
81
+ if ((info->note_flags & GNU_PROPERTY_AARCH64_FEATURE_1_BTI)
82
+ && (pinterp_name == NULL || *pinterp_name == 0)
83
+ && cpu_isar_feature(aa64_bti, ARM_CPU(thread_cpu))) {
84
+ prot_exec |= TARGET_PROT_BTI;
85
+ }
86
+#endif
87
+
88
for (i = 0; i < ehdr->e_phnum; i++) {
89
struct elf_phdr *eppnt = phdr + i;
90
if (eppnt->p_type == PT_LOAD) {
91
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
92
elf_prot |= PROT_WRITE;
93
}
94
if (eppnt->p_flags & PF_X) {
95
- elf_prot |= PROT_EXEC;
96
+ elf_prot |= prot_exec;
97
}
98
99
vaddr = load_bias + eppnt->p_vaddr;
100
--
31
--
101
2.20.1
32
2.34.1
102
33
103
34
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
This is a bit clearer than open-coding some of this
3
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
4
with a bare c string.
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230206223502.25122-3-philmd@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20201016184207.786698-9-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
8
---
11
linux-user/elfload.c | 37 ++++++++++++++++++++-----------------
9
target/arm/m_helper.c | 11 ++++++++---
12
1 file changed, 20 insertions(+), 17 deletions(-)
10
1 file changed, 8 insertions(+), 3 deletions(-)
13
11
14
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/elfload.c
14
--- a/target/arm/m_helper.c
17
+++ b/linux-user/elfload.c
15
+++ b/target/arm/m_helper.c
18
@@ -XXX,XX +XXX,XX @@
16
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
19
#include "qemu/guest-random.h"
17
return 0;
20
#include "qemu/units.h"
18
}
21
#include "qemu/selfmap.h"
19
22
+#include "qapi/error.h"
20
-#else
23
21
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
24
#ifdef _ARCH_PPC64
22
+{
25
#undef ARCH_DLINFO
23
+ return ARMMMUIdx_MUser;
26
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
24
+}
27
struct elf_phdr *phdr;
28
abi_ulong load_addr, load_bias, loaddr, hiaddr, error;
29
int i, retval;
30
- const char *errmsg;
31
+ Error *err = NULL;
32
33
/* First of all, some simple consistency checks */
34
- errmsg = "Invalid ELF image for this architecture";
35
if (!elf_check_ident(ehdr)) {
36
+ error_setg(&err, "Invalid ELF image for this architecture");
37
goto exit_errmsg;
38
}
39
bswap_ehdr(ehdr);
40
if (!elf_check_ehdr(ehdr)) {
41
+ error_setg(&err, "Invalid ELF image for this architecture");
42
goto exit_errmsg;
43
}
44
45
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
46
g_autofree char *interp_name = NULL;
47
48
if (*pinterp_name) {
49
- errmsg = "Multiple PT_INTERP entries";
50
+ error_setg(&err, "Multiple PT_INTERP entries");
51
goto exit_errmsg;
52
}
53
+
25
+
54
interp_name = g_malloc(eppnt->p_filesz);
26
+#else /* !CONFIG_USER_ONLY */
55
- if (!interp_name) {
27
56
- goto exit_perror;
28
/*
57
- }
29
* What kind of stack write are we doing? This affects how exceptions
58
30
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
59
if (eppnt->p_offset + eppnt->p_filesz <= BPRM_BUF_SIZE) {
31
return tt_resp;
60
memcpy(interp_name, bprm_buf + eppnt->p_offset,
61
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
62
retval = pread(image_fd, interp_name, eppnt->p_filesz,
63
eppnt->p_offset);
64
if (retval != eppnt->p_filesz) {
65
- goto exit_perror;
66
+ goto exit_read;
67
}
68
}
69
if (interp_name[eppnt->p_filesz - 1] != 0) {
70
- errmsg = "Invalid PT_INTERP entry";
71
+ error_setg(&err, "Invalid PT_INTERP entry");
72
goto exit_errmsg;
73
}
74
*pinterp_name = g_steal_pointer(&interp_name);
75
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
76
(ehdr->e_type == ET_EXEC ? MAP_FIXED : 0),
77
-1, 0);
78
if (load_addr == -1) {
79
- goto exit_perror;
80
+ goto exit_mmap;
81
}
82
load_bias = load_addr - loaddr;
83
84
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
85
image_fd, eppnt->p_offset - vaddr_po);
86
87
if (error == -1) {
88
- goto exit_perror;
89
+ goto exit_mmap;
90
}
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
94
} else if (eppnt->p_type == PT_MIPS_ABIFLAGS) {
95
Mips_elf_abiflags_v0 abiflags;
96
if (eppnt->p_filesz < sizeof(Mips_elf_abiflags_v0)) {
97
- errmsg = "Invalid PT_MIPS_ABIFLAGS entry";
98
+ error_setg(&err, "Invalid PT_MIPS_ABIFLAGS entry");
99
goto exit_errmsg;
100
}
101
if (eppnt->p_offset + eppnt->p_filesz <= BPRM_BUF_SIZE) {
102
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
103
retval = pread(image_fd, &abiflags, sizeof(Mips_elf_abiflags_v0),
104
eppnt->p_offset);
105
if (retval != sizeof(Mips_elf_abiflags_v0)) {
106
- goto exit_perror;
107
+ goto exit_read;
108
}
109
}
110
bswap_mips_abiflags(&abiflags);
111
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
112
113
exit_read:
114
if (retval >= 0) {
115
- errmsg = "Incomplete read of file header";
116
- goto exit_errmsg;
117
+ error_setg(&err, "Incomplete read of file header");
118
+ } else {
119
+ error_setg_errno(&err, errno, "Error reading file header");
120
}
121
- exit_perror:
122
- errmsg = strerror(errno);
123
+ goto exit_errmsg;
124
+ exit_mmap:
125
+ error_setg_errno(&err, errno, "Error mapping file");
126
+ goto exit_errmsg;
127
exit_errmsg:
128
- fprintf(stderr, "%s: %s\n", image_name, errmsg);
129
+ error_reportf_err(err, "%s: ", image_name);
130
exit(-1);
131
}
32
}
132
33
34
-#endif /* !CONFIG_USER_ONLY */
35
-
36
ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
37
bool secstate, bool priv, bool negpri)
38
{
39
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
40
41
return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
42
}
43
+
44
+#endif /* !CONFIG_USER_ONLY */
133
--
45
--
134
2.20.1
46
2.34.1
135
47
136
48
diff view generated by jsdifflib
1
v8.1M brings four new insns to M-profile:
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
* CSEL : Rd = cond ? Rn : Rm
3
* CSINC : Rd = cond ? Rn : Rm+1
4
* CSINV : Rd = cond ? Rn : ~Rm
5
* CSNEG : Rd = cond ? Rn : -Rm
6
2
7
Implement these.
3
arm_v7m_mmu_idx_all() and arm_v7m_mmu_idx_for_secstate_and_priv()
4
are only used for system emulation in m_helper.c.
5
Move the definitions to avoid prototype forward declarations.
8
6
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230206223502.25122-4-philmd@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20201019151301.2046-4-peter.maydell@linaro.org
12
---
11
---
13
target/arm/t32.decode | 3 +++
12
target/arm/internals.h | 14 --------
14
target/arm/translate.c | 60 ++++++++++++++++++++++++++++++++++++++++++
13
target/arm/m_helper.c | 74 +++++++++++++++++++++---------------------
15
2 files changed, 63 insertions(+)
14
2 files changed, 37 insertions(+), 51 deletions(-)
16
15
17
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/t32.decode
18
--- a/target/arm/internals.h
20
+++ b/target/arm/t32.decode
19
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ SBC_rrri 1110101 1011 . .... 0 ... .... .... .... @s_rrr_shi
20
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx core_to_aa64_mmu_idx(int mmu_idx)
22
}
21
23
RSB_rrri 1110101 1110 . .... 0 ... .... .... .... @s_rrr_shi
22
int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);
24
23
25
+# v8.1M CSEL and friends
24
-/*
26
+CSEL 1110101 0010 1 rn:4 10 op:2 rd:4 fcond:4 rm:4
25
- * Return the MMU index for a v7M CPU with all relevant information
26
- * manually specified.
27
- */
28
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
29
- bool secstate, bool priv, bool negpri);
30
-
31
-/*
32
- * Return the MMU index for a v7M CPU in the specified security and
33
- * privilege state.
34
- */
35
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
36
- bool secstate, bool priv);
37
-
38
/* Return the MMU index for a v7M CPU in the specified security state */
39
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
40
41
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/m_helper.c
44
+++ b/target/arm/m_helper.c
45
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
46
47
#else /* !CONFIG_USER_ONLY */
48
49
+static ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
50
+ bool secstate, bool priv, bool negpri)
51
+{
52
+ ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
27
+
53
+
28
# Data-processing (register-shifted register)
54
+ if (priv) {
29
55
+ mmu_idx |= ARM_MMU_IDX_M_PRIV;
30
MOV_rxrr 1111 1010 0 shty:2 s:1 rm:4 1111 rd:4 0000 rs:4 \
31
diff --git a/target/arm/translate.c b/target/arm/translate.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate.c
34
+++ b/target/arm/translate.c
35
@@ -XXX,XX +XXX,XX @@ static bool trans_IT(DisasContext *s, arg_IT *a)
36
return true;
37
}
38
39
+/* v8.1M CSEL/CSINC/CSNEG/CSINV */
40
+static bool trans_CSEL(DisasContext *s, arg_CSEL *a)
41
+{
42
+ TCGv_i32 rn, rm, zero;
43
+ DisasCompare c;
44
+
45
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
46
+ return false;
47
+ }
56
+ }
48
+
57
+
49
+ if (a->rm == 13) {
58
+ if (negpri) {
50
+ /* SEE "Related encodings" (MVE shifts) */
59
+ mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
51
+ return false;
52
+ }
60
+ }
53
+
61
+
54
+ if (a->rd == 13 || a->rd == 15 || a->rn == 13 || a->fcond >= 14) {
62
+ if (secstate) {
55
+ /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
63
+ mmu_idx |= ARM_MMU_IDX_M_S;
56
+ return false;
57
+ }
64
+ }
58
+
65
+
59
+ /* In this insn input reg fields of 0b1111 mean "zero", not "PC" */
66
+ return mmu_idx;
60
+ if (a->rn == 15) {
67
+}
61
+ rn = tcg_const_i32(0);
62
+ } else {
63
+ rn = load_reg(s, a->rn);
64
+ }
65
+ if (a->rm == 15) {
66
+ rm = tcg_const_i32(0);
67
+ } else {
68
+ rm = load_reg(s, a->rm);
69
+ }
70
+
68
+
71
+ switch (a->op) {
69
+static ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
72
+ case 0: /* CSEL */
70
+ bool secstate, bool priv)
73
+ break;
71
+{
74
+ case 1: /* CSINC */
72
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
75
+ tcg_gen_addi_i32(rm, rm, 1);
76
+ break;
77
+ case 2: /* CSINV */
78
+ tcg_gen_not_i32(rm, rm);
79
+ break;
80
+ case 3: /* CSNEG */
81
+ tcg_gen_neg_i32(rm, rm);
82
+ break;
83
+ default:
84
+ g_assert_not_reached();
85
+ }
86
+
73
+
87
+ arm_test_cc(&c, a->fcond);
74
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
88
+ zero = tcg_const_i32(0);
75
+}
89
+ tcg_gen_movcond_i32(c.cond, rn, c.value, zero, rn, rm);
90
+ arm_free_cc(&c);
91
+ tcg_temp_free_i32(zero);
92
+
76
+
93
+ store_reg(s, a->rd, rn);
77
+/* Return the MMU index for a v7M CPU in the specified security state */
94
+ tcg_temp_free_i32(rm);
78
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
79
+{
80
+ bool priv = arm_v7m_is_handler_mode(env) ||
81
+ !(env->v7m.control[secstate] & 1);
95
+
82
+
96
+ return true;
83
+ return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
97
+}
84
+}
98
+
85
+
99
/*
86
/*
100
* Legacy decoder.
87
* What kind of stack write are we doing? This affects how exceptions
101
*/
88
* generated during the stacking are treated.
89
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
90
return tt_resp;
91
}
92
93
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
94
- bool secstate, bool priv, bool negpri)
95
-{
96
- ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
97
-
98
- if (priv) {
99
- mmu_idx |= ARM_MMU_IDX_M_PRIV;
100
- }
101
-
102
- if (negpri) {
103
- mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
104
- }
105
-
106
- if (secstate) {
107
- mmu_idx |= ARM_MMU_IDX_M_S;
108
- }
109
-
110
- return mmu_idx;
111
-}
112
-
113
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
114
- bool secstate, bool priv)
115
-{
116
- bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
117
-
118
- return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
119
-}
120
-
121
-/* Return the MMU index for a v7M CPU in the specified security state */
122
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
123
-{
124
- bool priv = arm_v7m_is_handler_mode(env) ||
125
- !(env->v7m.control[secstate] & 1);
126
-
127
- return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
128
-}
129
-
130
#endif /* !CONFIG_USER_ONLY */
102
--
131
--
103
2.20.1
132
2.34.1
104
133
105
134
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
When TBI is enabled in a given regime, 56 bits of the address
3
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
are significant and we need to clear out any other matching
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
virtual addresses with differing tags.
5
Message-id: 20230206223502.25122-5-philmd@linaro.org
6
7
The other uses of tlb_flush_page (without mmuidx) in this file
8
are only used by aarch32 mode.
9
10
Fixes: 38d931687fa1
11
Reported-by: Jordan Frank <jordanfrank@fb.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
15
Message-id: 20201016210754.818257-3-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
7
---
18
target/arm/helper.c | 46 ++++++++++++++++++++++++++++++++++++++-------
8
target/arm/helper.c | 12 ++++++++++--
19
1 file changed, 39 insertions(+), 7 deletions(-)
9
1 file changed, 10 insertions(+), 2 deletions(-)
20
10
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
13
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
14
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
15
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
26
#endif
27
28
static void switch_mode(CPUARMState *env, int mode);
29
+static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx);
30
31
static int vfp_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg)
32
{
33
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
34
}
16
}
35
}
17
}
36
18
37
+/* Return 56 if TBI is enabled, 64 otherwise. */
19
+#ifndef CONFIG_USER_ONLY
38
+static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
20
/*
39
+ uint64_t addr)
21
* We don't know until after realize whether there's a GICv3
40
+{
22
* attached, and that is what registers the gicv3 sysregs.
41
+ uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
23
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
42
+ int tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
24
return pfr1;
43
+ int select = extract64(addr, 55, 1);
25
}
44
+
26
45
+ return (tbi >> select) & 1 ? 56 : 64;
27
-#ifndef CONFIG_USER_ONLY
46
+}
28
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
47
+
48
+static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
49
+{
50
+ ARMMMUIdx mmu_idx;
51
+
52
+ /* Only the regime of the mmu_idx below is significant. */
53
+ if (arm_is_secure_below_el3(env)) {
54
+ mmu_idx = ARMMMUIdx_SE10_0;
55
+ } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
56
+ == (HCR_E2H | HCR_TGE)) {
57
+ mmu_idx = ARMMMUIdx_E20_0;
58
+ } else {
59
+ mmu_idx = ARMMMUIdx_E10_0;
60
+ }
61
+ return tlbbits_for_regime(env, mmu_idx, addr);
62
+}
63
+
64
static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
65
uint64_t value)
66
{
29
{
67
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
30
ARMCPU *cpu = env_archcpu(env);
68
CPUState *cs = env_cpu(env);
31
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
69
int mask = vae1_tlbmask(env);
32
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1,
70
uint64_t pageaddr = sextract64(value << 12, 0, 56);
33
.access = PL1_R, .type = ARM_CP_NO_RAW,
71
+ int bits = vae1_tlbbits(env, pageaddr);
34
.accessfn = access_aa32_tid3,
72
35
+#ifdef CONFIG_USER_ONLY
73
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
36
+ .type = ARM_CP_CONST,
74
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
37
+ .resetvalue = cpu->isar.id_pfr1,
75
}
38
+#else
76
39
+ .type = ARM_CP_NO_RAW,
77
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
40
+ .accessfn = access_aa32_tid3,
78
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
41
.readfn = id_pfr1_read,
79
CPUState *cs = env_cpu(env);
42
- .writefn = arm_cp_write_ignore },
80
int mask = vae1_tlbmask(env);
43
+ .writefn = arm_cp_write_ignore
81
uint64_t pageaddr = sextract64(value << 12, 0, 56);
44
+#endif
82
+ int bits = vae1_tlbbits(env, pageaddr);
45
+ },
83
46
{ .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH,
84
if (tlb_force_broadcast(env)) {
47
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
85
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
48
.access = PL1_R, .type = ARM_CP_CONST,
86
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
87
} else {
88
- tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
89
+ tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
90
}
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
94
{
95
CPUState *cs = env_cpu(env);
96
uint64_t pageaddr = sextract64(value << 12, 0, 56);
97
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
98
99
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
100
- ARMMMUIdxBit_E2);
101
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
102
+ ARMMMUIdxBit_E2, bits);
103
}
104
105
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
106
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
107
{
108
CPUState *cs = env_cpu(env);
109
uint64_t pageaddr = sextract64(value << 12, 0, 56);
110
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr);
111
112
- tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
113
- ARMMMUIdxBit_SE3);
114
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
115
+ ARMMMUIdxBit_SE3, bits);
116
}
117
118
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
119
--
49
--
120
2.20.1
50
2.34.1
121
51
122
52
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
The second loop uses a loop induction variable, and the first
3
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
4
does not. Transform the first to match the second, to simplify
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
a following patch moving code between them.
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
6
Message-id: 20230206223502.25122-6-philmd@linaro.org
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20201016184207.786698-7-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
8
---
12
linux-user/elfload.c | 9 +++++----
9
linux-user/user-internals.h | 2 +-
13
1 file changed, 5 insertions(+), 4 deletions(-)
10
target/arm/cpu.h | 2 +-
11
linux-user/arm/cpu_loop.c | 4 ++--
12
3 files changed, 4 insertions(+), 4 deletions(-)
14
13
15
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
14
diff --git a/linux-user/user-internals.h b/linux-user/user-internals.h
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/linux-user/elfload.c
16
--- a/linux-user/user-internals.h
18
+++ b/linux-user/elfload.c
17
+++ b/linux-user/user-internals.h
19
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
18
@@ -XXX,XX +XXX,XX @@ void print_termios(void *arg);
20
loaddr = -1, hiaddr = 0;
19
#ifdef TARGET_ARM
21
info->alignment = 0;
20
static inline int regpairs_aligned(CPUArchState *cpu_env, int num)
22
for (i = 0; i < ehdr->e_phnum; ++i) {
21
{
23
- if (phdr[i].p_type == PT_LOAD) {
22
- return cpu_env->eabi == 1;
24
- abi_ulong a = phdr[i].p_vaddr - phdr[i].p_offset;
23
+ return cpu_env->eabi;
25
+ struct elf_phdr *eppnt = phdr + i;
24
}
26
+ if (eppnt->p_type == PT_LOAD) {
25
#elif defined(TARGET_MIPS) && defined(TARGET_ABI_MIPSO32)
27
+ abi_ulong a = eppnt->p_vaddr - eppnt->p_offset;
26
static inline int regpairs_aligned(CPUArchState *cpu_env, int num) { return 1; }
28
if (a < loaddr) {
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
loaddr = a;
28
index XXXXXXX..XXXXXXX 100644
30
}
29
--- a/target/arm/cpu.h
31
- a = phdr[i].p_vaddr + phdr[i].p_memsz;
30
+++ b/target/arm/cpu.h
32
+ a = eppnt->p_vaddr + eppnt->p_memsz;
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
33
if (a > hiaddr) {
32
34
hiaddr = a;
33
#if defined(CONFIG_USER_ONLY)
35
}
34
/* For usermode syscall translation. */
36
++info->nsegs;
35
- int eabi;
37
- info->alignment |= phdr[i].p_align;
36
+ bool eabi;
38
+ info->alignment |= eppnt->p_align;
37
#endif
39
}
38
40
}
39
struct CPUBreakpoint *cpu_breakpoint[16];
40
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/linux-user/arm/cpu_loop.c
43
+++ b/linux-user/arm/cpu_loop.c
44
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
45
break;
46
case EXCP_SWI:
47
{
48
- env->eabi = 1;
49
+ env->eabi = true;
50
/* system call */
51
if (env->thumb) {
52
/* Thumb is always EABI style with syscall number in r7 */
53
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
54
* > 0xfffff and are handled below as out-of-range.
55
*/
56
n ^= ARM_SYSCALL_BASE;
57
- env->eabi = 0;
58
+ env->eabi = false;
59
}
60
}
41
61
42
--
62
--
43
2.20.1
63
2.34.1
44
64
45
65
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
Transform the prot bit to a qemu internal page bit, and save
3
Although the 'eabi' field is only used in user emulation where
4
it in the page tables.
4
CPU reset doesn't occur, it doesn't belong to the area to reset.
5
Move it after the 'end_reset_fields' for consistency.
5
6
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20201016184207.786698-3-richard.henderson@linaro.org
9
Message-id: 20230206223502.25122-7-philmd@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
include/exec/cpu-all.h | 2 ++
12
target/arm/cpu.h | 9 ++++-----
12
linux-user/syscall_defs.h | 4 ++++
13
1 file changed, 4 insertions(+), 5 deletions(-)
13
target/arm/cpu.h | 5 +++++
14
linux-user/mmap.c | 16 ++++++++++++++++
15
target/arm/translate-a64.c | 6 +++---
16
5 files changed, 30 insertions(+), 3 deletions(-)
17
14
18
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/include/exec/cpu-all.h
21
+++ b/include/exec/cpu-all.h
22
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
23
/* FIXME: Code that sets/uses this is broken and needs to go away. */
24
#define PAGE_RESERVED 0x0020
25
#endif
26
+/* Target-specific bits that will be used via page_get_flags(). */
27
+#define PAGE_TARGET_1 0x0080
28
29
#if defined(CONFIG_USER_ONLY)
30
void page_dump(FILE *f);
31
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/linux-user/syscall_defs.h
34
+++ b/linux-user/syscall_defs.h
35
@@ -XXX,XX +XXX,XX @@ struct target_winsize {
36
#define TARGET_PROT_SEM 0x08
37
#endif
38
39
+#ifdef TARGET_AARCH64
40
+#define TARGET_PROT_BTI 0x10
41
+#endif
42
+
43
/* Common */
44
#define TARGET_MAP_SHARED    0x01        /* Share changes */
45
#define TARGET_MAP_PRIVATE    0x02        /* Changes are private */
46
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
47
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
49
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
50
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
51
#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
20
ARMVectorReg zarray[ARM_MAX_VQ * 16];
52
#define arm_tlb_mte_tagged(x) (typecheck_memtxattrs(x)->target_tlb_bit1)
21
#endif
53
22
54
+/*
23
-#if defined(CONFIG_USER_ONLY)
55
+ * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
24
- /* For usermode syscall translation. */
56
+ */
25
- bool eabi;
57
+#define PAGE_BTI PAGE_TARGET_1
26
-#endif
58
+
27
-
59
/*
28
struct CPUBreakpoint *cpu_breakpoint[16];
60
* Naming convention for isar_feature functions:
29
struct CPUWatchpoint *cpu_watchpoint[16];
61
* Functions which test 32-bit ID registers should have _aa32_ in
30
62
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
63
index XXXXXXX..XXXXXXX 100644
32
const struct arm_boot_info *boot_info;
64
--- a/linux-user/mmap.c
33
/* Store GICv3CPUState to access from this struct */
65
+++ b/linux-user/mmap.c
34
void *gicv3state;
66
@@ -XXX,XX +XXX,XX @@ static int validate_prot_to_pageflags(int *host_prot, int prot)
35
+#if defined(CONFIG_USER_ONLY)
67
*host_prot = (prot & (PROT_READ | PROT_WRITE))
36
+ /* For usermode syscall translation. */
68
| (prot & PROT_EXEC ? PROT_READ : 0);
37
+ bool eabi;
69
38
+#endif /* CONFIG_USER_ONLY */
70
+#ifdef TARGET_AARCH64
39
71
+ /*
40
#ifdef TARGET_TAGGED_ADDRESSES
72
+ * The PROT_BTI bit is only accepted if the cpu supports the feature.
41
/* Linux syscall tagged address support */
73
+ * Since this is the unusual case, don't bother checking unless
74
+ * the bit has been requested. If set and valid, record the bit
75
+ * within QEMU's page_flags.
76
+ */
77
+ if (prot & TARGET_PROT_BTI) {
78
+ ARMCPU *cpu = ARM_CPU(thread_cpu);
79
+ if (cpu_isar_feature(aa64_bti, cpu)) {
80
+ valid |= TARGET_PROT_BTI;
81
+ page_flags |= PAGE_BTI;
82
+ }
83
+ }
84
+#endif
85
+
86
return prot & ~valid ? 0 : page_flags;
87
}
88
89
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/translate-a64.c
92
+++ b/target/arm/translate-a64.c
93
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
94
*/
95
static bool is_guarded_page(CPUARMState *env, DisasContext *s)
96
{
97
-#ifdef CONFIG_USER_ONLY
98
- return false; /* FIXME */
99
-#else
100
uint64_t addr = s->base.pc_first;
101
+#ifdef CONFIG_USER_ONLY
102
+ return page_get_flags(addr) & PAGE_BTI;
103
+#else
104
int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
105
unsigned int index = tlb_index(env, mmu_idx, addr);
106
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
107
--
42
--
108
2.20.1
43
2.34.1
109
44
110
45
diff view generated by jsdifflib
If the M-profile low-overhead-branch extension is implemented, FPSCR
bits [18:16] are a new field LTPSIZE. If MVE is not implemented
(currently always true for us) then this field always reads as 4 and
ignores writes.

These bits used to be the vector-length field for the old
short-vector extension, so we need to take care that they are not
misinterpreted as setting vec_len. We do this with a rearrangement
of the vfp_set_fpscr() code that deals with vec_len, vec_stride
and also the QC bit; this obviates the need for the M-profile
only masking step that we used to have at the start of the function.

We provide a new field in CPUState for LTPSIZE, even though this
will always be 4, in preparation for MVE, so we don't have to
come back later and split it out of the vfp.xregs[FPSCR] value.
(This state struct field will be saved and restored as part of
the FPSCR value via the vmstate_fpscr in machine.c.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-11-peter.maydell@linaro.org
---
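For reference, a minimal sketch of how the LTPSIZE field described above
sits inside an FPSCR value (an illustration only, not part of the patch;
the helper name is invented):

```c
#include <stdint.h>

/* FPSCR.LTPSIZE is bits [18:16]; with MVE absent it reads as a
 * constant 4, as the commit message notes. */
static inline uint32_t fpscr_ltpsize(uint32_t fpscr)
{
    return (fpscr >> 16) & 0x7;
}
```

So, for example, an FPSCR value of 0x00040000 decodes to LTPSIZE == 4.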
7
---
23
target/arm/cpu.h | 1 +
8
target/arm/cpu.h | 3 ++-
24
target/arm/cpu.c | 9 +++++++++
9
1 file changed, 2 insertions(+), 1 deletion(-)
25
target/arm/vfp_helper.c | 6 ++++++
26
3 files changed, 16 insertions(+)
27
10
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.h
13
--- a/target/arm/cpu.h
31
+++ b/target/arm/cpu.h
14
+++ b/target/arm/cpu.h
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
33
uint32_t fpdscr[M_REG_NUM_BANKS];
16
34
uint32_t cpacr[M_REG_NUM_BANKS];
17
void *nvic;
35
uint32_t nsacr;
18
const struct arm_boot_info *boot_info;
36
+ int ltpsize;
19
+#if !defined(CONFIG_USER_ONLY)
37
} v7m;
20
/* Store GICv3CPUState to access from this struct */
38
21
void *gicv3state;
39
/* Information associated with an exception about to be taken:
22
-#if defined(CONFIG_USER_ONLY)
40
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
23
+#else /* CONFIG_USER_ONLY */
41
index XXXXXXX..XXXXXXX 100644
24
/* For usermode syscall translation. */
42
--- a/target/arm/cpu.c
25
bool eabi;
43
+++ b/target/arm/cpu.c
26
#endif /* CONFIG_USER_ONLY */
44
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
45
uint8_t *rom;
46
uint32_t vecbase;
47
48
+ if (cpu_isar_feature(aa32_lob, cpu)) {
49
+ /*
50
+ * LTPSIZE is constant 4 if MVE not implemented, and resets
51
+ * to an UNKNOWN value if MVE is implemented. We choose to
52
+ * always reset to 4.
53
+ */
54
+ env->v7m.ltpsize = 4;
55
+ }
56
+
57
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
58
env->v7m.secure = true;
59
} else {
60
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/vfp_helper.c
63
+++ b/target/arm/vfp_helper.c
64
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(vfp_get_fpscr)(CPUARMState *env)
65
| (env->vfp.vec_len << 16)
66
| (env->vfp.vec_stride << 20);
67
68
+ /*
69
+ * M-profile LTPSIZE overlaps A-profile Stride; whichever of the
70
+ * two is not applicable to this CPU will always be zero.
71
+ */
72
+ fpscr |= env->v7m.ltpsize << 16;
73
+
74
fpscr |= vfp_get_fpscr_from_host(env);
75
76
i = env->vfp.qc[0] | env->vfp.qc[1] | env->vfp.qc[2] | env->vfp.qc[3];
77
--
27
--
78
2.20.1
28
2.34.1
79
29
80
30
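To make the new FPSCR layout concrete: once the vfp_get_fpscr hunk above folds ltpsize into bits [18:16], the field can be read back with the generic helper from "qemu/bitops.h". A small illustrative sketch, not part of the patch:

    uint32_t fpscr = vfp_get_fpscr(env);        /* includes LTPSIZE on M-profile */
    unsigned ltpsize = extract32(fpscr, 16, 3); /* FPSCR.LTPSIZE is bits [18:16] */
    /* With the LOB extension but no MVE this always yields 4. */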
1
v8.1M implements a new 'branch future' feature, which is a
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
set of instructions that request the CPU to perform a branch
3
"in the future", when it reaches a particular execution address.
4
In hardware, the expected implementation is that the information
5
about the branch location and destination is cached and then
6
acted upon when execution reaches the specified address.
7
However the architecture permits an implementation to discard
8
this cached information at any point, and so guest code must
9
always include a normal branch insn at the branch point as
10
a fallback. In particular, an implementation is specifically
11
permitted to treat all BF insns as NOPs (which is equivalent
12
to discarding the cached information immediately).
13
14
For QEMU, implementing this caching of branch information
15
would be complicated and would not improve the speed of
16
execution at all, so we make the IMPDEF choice to implement
17
all BF insns as NOPs.
18
2
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
3
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
5
Message-id: 20230206223502.25122-9-philmd@linaro.org
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Message-id: 20201019151301.2046-7-peter.maydell@linaro.org
22
---
7
---
23
target/arm/cpu.h | 6 ++++++
8
target/arm/cpu.h | 2 +-
24
target/arm/t32.decode | 13 ++++++++++++-
9
1 file changed, 1 insertion(+), 1 deletion(-)
25
target/arm/translate.c | 20 ++++++++++++++++++++
26
3 files changed, 38 insertions(+), 1 deletion(-)
27
10
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.h
13
--- a/target/arm/cpu.h
31
+++ b/target/arm/cpu.h
14
+++ b/target/arm/cpu.h
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_arm_div(const ARMISARegisters *id)
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
33
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
16
} sau;
34
}
17
35
18
void *nvic;
36
+static inline bool isar_feature_aa32_lob(const ARMISARegisters *id)
19
- const struct arm_boot_info *boot_info;
37
+{
20
#if !defined(CONFIG_USER_ONLY)
38
+ /* (M-profile) low-overhead loops and branch future */
21
+ const struct arm_boot_info *boot_info;
39
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, CMPBRANCH) >= 3;
22
/* Store GICv3CPUState to access from this struct */
40
+}
23
void *gicv3state;
41
+
24
#else /* CONFIG_USER_ONLY */
42
static inline bool isar_feature_aa32_jazelle(const ARMISARegisters *id)
43
{
44
return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
45
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/t32.decode
48
+++ b/target/arm/t32.decode
49
@@ -XXX,XX +XXX,XX @@ MRC 1110 1110 ... 1 .... .... .... ... 1 .... @mcr
50
51
B 1111 0. .......... 10.1 ............ @branch24
52
BL 1111 0. .......... 11.1 ............ @branch24
53
-BLX_i 1111 0. .......... 11.0 ............ @branch24
54
+{
55
+ # BLX_i is non-M-profile only
56
+ BLX_i 1111 0. .......... 11.0 ............ @branch24
57
+ # M-profile only: loop and branch insns
58
+ [
59
+ # All these BF insns have boff != 0b0000; we NOP them all
60
+ BF 1111 0 boff:4 ------- 1100 - ---------- 1 # BFL
61
+ BF 1111 0 boff:4 0 ------ 1110 - ---------- 1 # BFCSEL
62
+ BF 1111 0 boff:4 10 ----- 1110 - ---------- 1 # BF
63
+ BF 1111 0 boff:4 11 ----- 1110 0 0000000000 1 # BFX, BFLX
64
+ ]
65
+}
66
diff --git a/target/arm/translate.c b/target/arm/translate.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/translate.c
69
+++ b/target/arm/translate.c
70
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_suffix(DisasContext *s, arg_BLX_suffix *a)
71
return true;
72
}
73
74
+static bool trans_BF(DisasContext *s, arg_BF *a)
75
+{
76
+ /*
77
+ * M-profile branch future insns. The architecture permits an
78
+ * implementation to implement these as NOPs (equivalent to
79
+ * discarding the LO_BRANCH_INFO cache immediately), and we
80
+ * take that IMPDEF option because for QEMU a "real" implementation
81
+ * would be complicated and wouldn't execute any faster.
82
+ */
83
+ if (!dc_isar_feature(aa32_lob, s)) {
84
+ return false;
85
+ }
86
+ if (a->boff == 0) {
87
+ /* SEE "Related encodings" (loop insns) */
88
+ return false;
89
+ }
90
+ /* Handle as NOP */
91
+ return true;
92
+}
93
+
94
static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
95
{
96
TCGv_i32 addr, tmp;
97
--
25
--
98
2.20.1
26
2.34.1
99
27
100
28
1
From v8.1M, disabled-coprocessor handling changes slightly:
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
* coprocessors 8, 9, 14 and 15 are also governed by the
3
cp10 enable bit, like cp11
4
* an extra range of instruction patterns is considered
5
to be inside the coprocessor space
6
2
7
We previously marked these up with TODO comments; implement the
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
correct behaviour.
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
5
Message-id: 20230206223502.25122-10-philmd@linaro.org
10
Unfortunately there is no ID register field which indicates this
11
behaviour. We could in theory test an unrelated ID register which
12
indicates guaranteed-to-be-in-v8.1M behaviour like ID_ISAR0.CmpBranch
13
>= 3 (low-overhead-loops), but it seems better to simply define a new
14
ARM_FEATURE_V8_1M feature flag and use it for this and other
15
new-in-v8.1M behaviour that isn't identifiable from the ID registers.
16
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20201019151301.2046-3-peter.maydell@linaro.org
20
---
7
---
21
target/arm/cpu.h | 1 +
8
target/arm/cpu.h | 2 +-
22
target/arm/m-nocp.decode | 10 ++++++----
9
1 file changed, 1 insertion(+), 1 deletion(-)
23
target/arm/translate-vfp.c.inc | 17 +++++++++++++++--
24
3 files changed, 22 insertions(+), 6 deletions(-)
25
10
26
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
11
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
27
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpu.h
13
--- a/target/arm/cpu.h
29
+++ b/target/arm/cpu.h
14
+++ b/target/arm/cpu.h
30
@@ -XXX,XX +XXX,XX @@ enum arm_features {
15
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
31
ARM_FEATURE_VBAR, /* has cp15 VBAR */
16
uint32_t ctrl;
32
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
17
} sau;
33
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
18
34
+ ARM_FEATURE_V8_1M, /* M profile extras only in v8.1M and later */
19
- void *nvic;
35
};
20
#if !defined(CONFIG_USER_ONLY)
36
21
+ void *nvic;
37
static inline int arm_feature(CPUARMState *env, int feature)
22
const struct arm_boot_info *boot_info;
38
diff --git a/target/arm/m-nocp.decode b/target/arm/m-nocp.decode
23
/* Store GICv3CPUState to access from this struct */
39
index XXXXXXX..XXXXXXX 100644
24
void *gicv3state;
40
--- a/target/arm/m-nocp.decode
41
+++ b/target/arm/m-nocp.decode
42
@@ -XXX,XX +XXX,XX @@
43
# If the coprocessor is not present or disabled then we will generate
44
# the NOCP exception; otherwise we let the insn through to the main decode.
45
46
+&nocp cp
47
+
48
{
49
# Special cases which do not take an early NOCP: VLLDM and VLSTM
50
VLLDM_VLSTM 1110 1100 001 l:1 rn:4 0000 1010 0000 0000
51
# TODO: VSCCLRM (new in v8.1M) is similar:
52
#VSCCLRM 1110 1100 1-01 1111 ---- 1011 ---- ---0
53
54
- NOCP 111- 1110 ---- ---- ---- cp:4 ---- ----
55
- NOCP 111- 110- ---- ---- ---- cp:4 ---- ----
56
- # TODO: From v8.1M onwards we will also want this range to NOCP
57
- #NOCP_8_1 111- 1111 ---- ---- ---- ---- ---- ---- cp=10
58
+ NOCP 111- 1110 ---- ---- ---- cp:4 ---- ---- &nocp
59
+ NOCP 111- 110- ---- ---- ---- cp:4 ---- ---- &nocp
60
+ # From v8.1M onwards this range will also NOCP:
61
+ NOCP_8_1 111- 1111 ---- ---- ---- ---- ---- ---- &nocp cp=10
62
}
63
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/translate-vfp.c.inc
66
+++ b/target/arm/translate-vfp.c.inc
67
@@ -XXX,XX +XXX,XX @@ static bool trans_VLLDM_VLSTM(DisasContext *s, arg_VLLDM_VLSTM *a)
68
return true;
69
}
70
71
-static bool trans_NOCP(DisasContext *s, arg_NOCP *a)
72
+static bool trans_NOCP(DisasContext *s, arg_nocp *a)
73
{
74
/*
75
* Handle M-profile early check for disabled coprocessor:
76
@@ -XXX,XX +XXX,XX @@ static bool trans_NOCP(DisasContext *s, arg_NOCP *a)
77
if (a->cp == 11) {
78
a->cp = 10;
79
}
80
- /* TODO: in v8.1M cp 8, 9, 14, 15 also are governed by the cp10 enable */
81
+ if (arm_dc_feature(s, ARM_FEATURE_V8_1M) &&
82
+ (a->cp == 8 || a->cp == 9 || a->cp == 14 || a->cp == 15)) {
83
+ /* in v8.1M cp 8, 9, 14, 15 also are governed by the cp10 enable */
84
+ a->cp = 10;
85
+ }
86
87
if (a->cp != 10) {
88
gen_exception_insn(s, s->pc_curr, EXCP_NOCP,
89
@@ -XXX,XX +XXX,XX @@ static bool trans_NOCP(DisasContext *s, arg_NOCP *a)
90
return false;
91
}
92
93
+static bool trans_NOCP_8_1(DisasContext *s, arg_nocp *a)
94
+{
95
+ /* This range needs a coprocessor check for v8.1M and later only */
96
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_1M)) {
97
+ return false;
98
+ }
99
+ return trans_NOCP(s, a);
100
+}
101
+
102
static bool trans_VINS(DisasContext *s, arg_VINS *a)
103
{
104
TCGv_i32 rd, rm;
105
--
25
--
106
2.20.1
26
2.34.1
107
27
108
28
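In other words, the rule the NOCP changes above implement can be summarised in a single predicate; this is an editor's sketch for illustration only (the helper name is made up, the actual code folds the other coprocessor numbers onto cp10 as shown in the hunks):

    /* v8.1M: the cp10 enable also gates cp8, cp9, cp11, cp14 and cp15. */
    static bool cp_gated_by_cp10(int cp)
    {
        return cp == 8 || cp == 9 || cp == 10 || cp == 11 ||
               cp == 14 || cp == 15;
    }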
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
On ARM, the Top Byte Ignore feature means that only 56 bits of
3
There is no point in using a void pointer to access the NVIC.
4
the address are significant in the virtual address. We are
4
Use the real type to avoid casting it while debugging.
5
required to give the entire 64-bit address to FAR_ELx on fault,
5
6
which means that we do not "clean" the top byte early in TCG.
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
This new interface allows us to flush all 256 possible aliases
8
Message-id: 20230206223502.25122-11-philmd@linaro.org
9
for a given page, currently missed by tlb_flush_page*.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 20201016210754.818257-2-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
10
---
17
include/exec/exec-all.h | 36 ++++++
11
target/arm/cpu.h | 46 ++++++++++++++++++++++---------------------
18
accel/tcg/cputlb.c | 275 ++++++++++++++++++++++++++++++++++++++--
12
hw/intc/armv7m_nvic.c | 38 ++++++++++++-----------------------
19
2 files changed, 302 insertions(+), 9 deletions(-)
13
target/arm/cpu.c | 1 +
20
14
target/arm/m_helper.c | 2 +-
21
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
15
4 files changed, 39 insertions(+), 48 deletions(-)
16
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
23
--- a/include/exec/exec-all.h
19
--- a/target/arm/cpu.h
24
+++ b/include/exec/exec-all.h
20
+++ b/target/arm/cpu.h
25
@@ -XXX,XX +XXX,XX @@ void tlb_flush_by_mmuidx_all_cpus(CPUState *cpu, uint16_t idxmap);
21
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
26
* depend on when the guests translation ends the TB.
22
27
*/
23
typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
28
void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu, uint16_t idxmap);
24
25
+typedef struct NVICState NVICState;
29
+
26
+
30
+/**
27
typedef struct CPUArchState {
31
+ * tlb_flush_page_bits_by_mmuidx
28
/* Regs for current mode. */
32
+ * @cpu: CPU whose TLB should be flushed
29
uint32_t regs[16];
33
+ * @addr: virtual address of page to be flushed
30
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
34
+ * @idxmap: bitmap of mmu indexes to flush
31
} sau;
35
+ * @bits: number of significant bits in address
32
36
+ *
33
#if !defined(CONFIG_USER_ONLY)
37
+ * Similar to tlb_flush_page_mask, but with a bitmap of indexes.
34
- void *nvic;
38
+ */
35
+ NVICState *nvic;
39
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
36
const struct arm_boot_info *boot_info;
40
+ uint16_t idxmap, unsigned bits);
37
/* Store GICv3CPUState to access from this struct */
41
+
38
void *gicv3state;
42
+/* Similarly, with broadcast and syncing. */
39
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
43
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu, target_ulong addr,
40
44
+ uint16_t idxmap, unsigned bits);
41
/* Interface between CPU and Interrupt controller. */
45
+void tlb_flush_page_bits_by_mmuidx_all_cpus_synced
42
#ifndef CONFIG_USER_ONLY
46
+ (CPUState *cpu, target_ulong addr, uint16_t idxmap, unsigned bits);
43
-bool armv7m_nvic_can_take_pending_exception(void *opaque);
47
+
44
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
48
/**
45
#else
49
* tlb_set_page_with_attrs:
46
-static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
50
* @cpu: CPU to add this TLB entry for
47
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
51
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
48
{
52
uint16_t idxmap)
49
return true;
53
{
50
}
54
}
55
+static inline void tlb_flush_page_bits_by_mmuidx(CPUState *cpu,
56
+ target_ulong addr,
57
+ uint16_t idxmap,
58
+ unsigned bits)
59
+{
60
+}
61
+static inline void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *cpu,
62
+ target_ulong addr,
63
+ uint16_t idxmap,
64
+ unsigned bits)
65
+{
66
+}
67
+static inline void
68
+tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *cpu, target_ulong addr,
69
+ uint16_t idxmap, unsigned bits)
70
+{
71
+}
72
#endif
51
#endif
73
/**
52
/**
74
* probe_access:
53
* armv7m_nvic_set_pending: mark the specified exception as pending
75
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
54
- * @opaque: the NVIC
55
+ * @s: the NVIC
56
* @irq: the exception number to mark pending
57
* @secure: false for non-banked exceptions or for the nonsecure
58
* version of a banked exception, true for the secure version of a banked
59
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
60
* if @secure is true and @irq does not specify one of the fixed set
61
* of architecturally banked exceptions.
62
*/
63
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
64
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
65
/**
66
* armv7m_nvic_set_pending_derived: mark this derived exception as pending
67
- * @opaque: the NVIC
68
+ * @s: the NVIC
69
* @irq: the exception number to mark pending
70
* @secure: false for non-banked exceptions or for the nonsecure
71
* version of a banked exception, true for the secure version of a banked
72
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
73
* exceptions (exceptions generated in the course of trying to take
74
* a different exception).
75
*/
76
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
77
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
78
/**
79
* armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
80
- * @opaque: the NVIC
81
+ * @s: the NVIC
82
* @irq: the exception number to mark pending
83
* @secure: false for non-banked exceptions or for the nonsecure
84
* version of a banked exception, true for the secure version of a banked
85
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
86
* Similar to armv7m_nvic_set_pending(), but specifically for exceptions
87
* generated in the course of lazy stacking of FP registers.
88
*/
89
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
90
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
91
/**
92
* armv7m_nvic_get_pending_irq_info: return highest priority pending
93
* exception, and whether it targets Secure state
94
- * @opaque: the NVIC
95
+ * @s: the NVIC
96
* @pirq: set to pending exception number
97
* @ptargets_secure: set to whether pending exception targets Secure
98
*
99
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
100
* to true if the current highest priority pending exception should
101
* be taken to Secure state, false for NS.
102
*/
103
-void armv7m_nvic_get_pending_irq_info(void *opaque, int *pirq,
104
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
105
bool *ptargets_secure);
106
/**
107
* armv7m_nvic_acknowledge_irq: make highest priority pending exception active
108
- * @opaque: the NVIC
109
+ * @s: the NVIC
110
*
111
* Move the current highest priority pending exception from the pending
112
* state to the active state, and update v7m.exception to indicate that
113
* it is the exception currently being handled.
114
*/
115
-void armv7m_nvic_acknowledge_irq(void *opaque);
116
+void armv7m_nvic_acknowledge_irq(NVICState *s);
117
/**
118
* armv7m_nvic_complete_irq: complete specified interrupt or exception
119
- * @opaque: the NVIC
120
+ * @s: the NVIC
121
* @irq: the exception number to complete
122
* @secure: true if this exception was secure
123
*
124
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
125
* 0 if there is still an irq active after this one was completed
126
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
127
*/
128
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
129
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
130
/**
131
* armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
132
- * @opaque: the NVIC
133
+ * @s: the NVIC
134
* @irq: the exception number to mark pending
135
* @secure: false for non-banked exceptions or for the nonsecure
136
* version of a banked exception, true for the secure version of a banked
137
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
138
* interrupt the current execution priority. This controls whether the
139
* RDY bit for it in the FPCCR is set.
140
*/
141
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
142
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
143
/**
144
* armv7m_nvic_raw_execution_priority: return the raw execution priority
145
- * @opaque: the NVIC
146
+ * @s: the NVIC
147
*
148
* Returns: the raw execution priority as defined by the v8M architecture.
149
* This is the execution priority minus the effects of AIRCR.PRIS,
150
* and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
151
* (v8M ARM ARM I_PKLD.)
152
*/
153
-int armv7m_nvic_raw_execution_priority(void *opaque);
154
+int armv7m_nvic_raw_execution_priority(NVICState *s);
155
/**
156
* armv7m_nvic_neg_prio_requested: return true if the requested execution
157
* priority is negative for the specified security state.
158
- * @opaque: the NVIC
159
+ * @s: the NVIC
160
* @secure: the security state to test
161
* This corresponds to the pseudocode IsReqExecPriNeg().
162
*/
163
#ifndef CONFIG_USER_ONLY
164
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
165
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
166
#else
167
-static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
168
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
169
{
170
return false;
171
}
172
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
76
index XXXXXXX..XXXXXXX 100644
173
index XXXXXXX..XXXXXXX 100644
77
--- a/accel/tcg/cputlb.c
174
--- a/hw/intc/armv7m_nvic.c
78
+++ b/accel/tcg/cputlb.c
175
+++ b/hw/intc/armv7m_nvic.c
79
@@ -XXX,XX +XXX,XX @@ void tlb_flush_all_cpus_synced(CPUState *src_cpu)
176
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
80
tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, ALL_MMUIDX_BITS);
177
return MIN(running, s->exception_prio);
81
}
178
}
82
179
83
+static bool tlb_hit_page_mask_anyprot(CPUTLBEntry *tlb_entry,
180
-bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
84
+ target_ulong page, target_ulong mask)
181
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
85
+{
182
{
86
+ page &= mask;
183
/* Return true if the requested execution priority is negative
87
+ mask &= TARGET_PAGE_MASK | TLB_INVALID_MASK;
184
* for the specified security state, ie that security state
88
+
185
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
89
+ return (page == (tlb_entry->addr_read & mask) ||
186
* mean we don't allow FAULTMASK_NS to actually make the execution
90
+ page == (tlb_addr_write(tlb_entry) & mask) ||
187
* priority negative). Compare pseudocode IsReqExcPriNeg().
91
+ page == (tlb_entry->addr_code & mask));
188
*/
92
+}
189
- NVICState *s = opaque;
93
+
190
-
94
static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
191
if (s->cpu->env.v7m.faultmask[secure]) {
95
target_ulong page)
96
{
97
- return tlb_hit_page(tlb_entry->addr_read, page) ||
98
- tlb_hit_page(tlb_addr_write(tlb_entry), page) ||
99
- tlb_hit_page(tlb_entry->addr_code, page);
100
+ return tlb_hit_page_mask_anyprot(tlb_entry, page, -1);
101
}
102
103
/**
104
@@ -XXX,XX +XXX,XX @@ static inline bool tlb_entry_is_empty(const CPUTLBEntry *te)
105
}
106
107
/* Called with tlb_c.lock held */
108
-static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
109
- target_ulong page)
110
+static bool tlb_flush_entry_mask_locked(CPUTLBEntry *tlb_entry,
111
+ target_ulong page,
112
+ target_ulong mask)
113
{
114
- if (tlb_hit_page_anyprot(tlb_entry, page)) {
115
+ if (tlb_hit_page_mask_anyprot(tlb_entry, page, mask)) {
116
memset(tlb_entry, -1, sizeof(*tlb_entry));
117
return true;
192
return true;
118
}
193
}
194
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
119
return false;
195
return false;
120
}
196
}
121
197
122
+static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry,
198
-bool armv7m_nvic_can_take_pending_exception(void *opaque)
123
+ target_ulong page)
199
+bool armv7m_nvic_can_take_pending_exception(NVICState *s)
124
+{
200
{
125
+ return tlb_flush_entry_mask_locked(tlb_entry, page, -1);
201
- NVICState *s = opaque;
126
+}
202
-
127
+
203
return nvic_exec_prio(s) > nvic_pending_prio(s);
128
/* Called with tlb_c.lock held */
204
}
129
-static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
205
130
- target_ulong page)
206
-int armv7m_nvic_raw_execution_priority(void *opaque)
131
+static void tlb_flush_vtlb_page_mask_locked(CPUArchState *env, int mmu_idx,
207
+int armv7m_nvic_raw_execution_priority(NVICState *s)
132
+ target_ulong page,
208
{
133
+ target_ulong mask)
209
- NVICState *s = opaque;
134
{
210
-
135
CPUTLBDesc *d = &env_tlb(env)->d[mmu_idx];
211
return s->exception_prio;
136
int k;
212
}
137
213
138
assert_cpu_is_self(env_cpu(env));
214
@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
139
for (k = 0; k < CPU_VTLB_SIZE; k++) {
215
* if @secure is true and @irq does not specify one of the fixed set
140
- if (tlb_flush_entry_locked(&d->vtable[k], page)) {
216
* of architecturally banked exceptions.
141
+ if (tlb_flush_entry_mask_locked(&d->vtable[k], page, mask)) {
217
*/
142
tlb_n_used_entries_dec(env, mmu_idx);
218
-static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
143
}
219
+static void armv7m_nvic_clear_pending(NVICState *s, int irq, bool secure)
220
{
221
- NVICState *s = (NVICState *)opaque;
222
VecInfo *vec;
223
224
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
225
@@ -XXX,XX +XXX,XX @@ static void do_armv7m_nvic_set_pending(void *opaque, int irq, bool secure,
144
}
226
}
145
}
227
}
146
228
147
+static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
229
-void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
148
+ target_ulong page)
230
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure)
149
+{
231
{
150
+ tlb_flush_vtlb_page_mask_locked(env, mmu_idx, page, -1);
232
- do_armv7m_nvic_set_pending(opaque, irq, secure, false);
151
+}
233
+ do_armv7m_nvic_set_pending(s, irq, secure, false);
152
+
234
}
153
static void tlb_flush_page_locked(CPUArchState *env, int midx,
235
154
target_ulong page)
236
-void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
155
{
237
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure)
156
@@ -XXX,XX +XXX,XX @@ void tlb_flush_page_all_cpus_synced(CPUState *src, target_ulong addr)
238
{
157
tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
239
- do_armv7m_nvic_set_pending(opaque, irq, secure, true);
158
}
240
+ do_armv7m_nvic_set_pending(s, irq, secure, true);
159
241
}
160
+static void tlb_flush_page_bits_locked(CPUArchState *env, int midx,
242
161
+ target_ulong page, unsigned bits)
243
-void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
162
+{
244
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure)
163
+ CPUTLBDesc *d = &env_tlb(env)->d[midx];
245
{
164
+ CPUTLBDescFast *f = &env_tlb(env)->f[midx];
246
/*
165
+ target_ulong mask = MAKE_64BIT_MASK(0, bits);
247
* Pend an exception during lazy FP stacking. This differs
166
+
248
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
167
+ /*
249
* whether we should escalate depends on the saved context
168
+ * If @bits is smaller than the tlb size, there may be multiple entries
250
* in the FPCCR register, not on the current state of the CPU/NVIC.
169
+ * within the TLB; otherwise all addresses that match under @mask hit
251
*/
170
+ * the same TLB entry.
252
- NVICState *s = (NVICState *)opaque;
171
+ *
253
bool banked = exc_is_banked(irq);
172
+ * TODO: Perhaps allow bits to be a few bits less than the size.
254
VecInfo *vec;
173
+ * For now, just flush the entire TLB.
255
bool targets_secure;
174
+ */
256
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
175
+ if (mask < f->mask) {
257
}
176
+ tlb_debug("forcing full flush midx %d ("
258
177
+ TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
259
/* Make pending IRQ active. */
178
+ midx, page, mask);
260
-void armv7m_nvic_acknowledge_irq(void *opaque)
179
+ tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
261
+void armv7m_nvic_acknowledge_irq(NVICState *s)
180
+ return;
262
{
181
+ }
263
- NVICState *s = (NVICState *)opaque;
182
+
264
CPUARMState *env = &s->cpu->env;
183
+ /* Check if we need to flush due to large pages. */
265
const int pending = s->vectpending;
184
+ if ((page & d->large_page_mask) == d->large_page_addr) {
266
const int running = nvic_exec_prio(s);
185
+ tlb_debug("forcing full flush midx %d ("
267
@@ -XXX,XX +XXX,XX @@ static bool vectpending_targets_secure(NVICState *s)
186
+ TARGET_FMT_lx "/" TARGET_FMT_lx ")\n",
268
exc_targets_secure(s, s->vectpending);
187
+ midx, d->large_page_addr, d->large_page_mask);
269
}
188
+ tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
270
189
+ return;
271
-void armv7m_nvic_get_pending_irq_info(void *opaque,
190
+ }
272
+void armv7m_nvic_get_pending_irq_info(NVICState *s,
191
+
273
int *pirq, bool *ptargets_secure)
192
+ if (tlb_flush_entry_mask_locked(tlb_entry(env, midx, page), page, mask)) {
274
{
193
+ tlb_n_used_entries_dec(env, midx);
275
- NVICState *s = (NVICState *)opaque;
194
+ }
276
const int pending = s->vectpending;
195
+ tlb_flush_vtlb_page_mask_locked(env, midx, page, mask);
277
bool targets_secure;
196
+}
278
197
+
279
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
198
+typedef struct {
280
*pirq = pending;
199
+ target_ulong addr;
281
}
200
+ uint16_t idxmap;
282
201
+ uint16_t bits;
283
-int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
202
+} TLBFlushPageBitsByMMUIdxData;
284
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure)
203
+
285
{
204
+static void
286
- NVICState *s = (NVICState *)opaque;
205
+tlb_flush_page_bits_by_mmuidx_async_0(CPUState *cpu,
287
VecInfo *vec = NULL;
206
+ TLBFlushPageBitsByMMUIdxData d)
288
int ret = 0;
207
+{
289
208
+ CPUArchState *env = cpu->env_ptr;
290
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
209
+ int mmu_idx;
291
return ret;
210
+
292
}
211
+ assert_cpu_is_self(cpu);
293
212
+
294
-bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
213
+ tlb_debug("page addr:" TARGET_FMT_lx "/%u mmu_map:0x%x\n",
295
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure)
214
+ d.addr, d.bits, d.idxmap);
296
{
215
+
297
/*
216
+ qemu_spin_lock(&env_tlb(env)->c.lock);
298
* Return whether an exception is "ready", i.e. it is enabled and is
217
+ for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
299
@@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
218
+ if ((d.idxmap >> mmu_idx) & 1) {
300
* for non-banked exceptions secure is always false; for banked exceptions
219
+ tlb_flush_page_bits_locked(env, mmu_idx, d.addr, d.bits);
301
* it indicates which of the exceptions is required.
220
+ }
302
*/
221
+ }
303
- NVICState *s = (NVICState *)opaque;
222
+ qemu_spin_unlock(&env_tlb(env)->c.lock);
304
bool banked = exc_is_banked(irq);
223
+
305
VecInfo *vec;
224
+ tb_flush_jmp_cache(cpu, d.addr);
306
int running = nvic_exec_prio(s);
225
+}
307
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
226
+
308
index XXXXXXX..XXXXXXX 100644
227
+static bool encode_pbm_to_runon(run_on_cpu_data *out,
309
--- a/target/arm/cpu.c
228
+ TLBFlushPageBitsByMMUIdxData d)
310
+++ b/target/arm/cpu.c
229
+{
311
@@ -XXX,XX +XXX,XX @@
230
+ /* We need 6 bits to hold @bits up to 63. */
312
#if !defined(CONFIG_USER_ONLY)
231
+ if (d.idxmap <= MAKE_64BIT_MASK(0, TARGET_PAGE_BITS - 6)) {
313
#include "hw/loader.h"
232
+ *out = RUN_ON_CPU_TARGET_PTR(d.addr | (d.idxmap << 6) | d.bits);
314
#include "hw/boards.h"
233
+ return true;
315
+#include "hw/intc/armv7m_nvic.h"
234
+ }
316
#endif
235
+ return false;
317
#include "sysemu/tcg.h"
236
+}
318
#include "sysemu/qtest.h"
237
+
319
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
238
+static TLBFlushPageBitsByMMUIdxData
320
index XXXXXXX..XXXXXXX 100644
239
+decode_runon_to_pbm(run_on_cpu_data data)
321
--- a/target/arm/m_helper.c
240
+{
322
+++ b/target/arm/m_helper.c
241
+ target_ulong addr_map_bits = (target_ulong) data.target_ptr;
323
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
242
+ return (TLBFlushPageBitsByMMUIdxData){
324
* that we will need later in order to do lazy FP reg stacking.
243
+ .addr = addr_map_bits & TARGET_PAGE_MASK,
325
*/
244
+ .idxmap = (addr_map_bits & ~TARGET_PAGE_MASK) >> 6,
326
bool is_secure = env->v7m.secure;
245
+ .bits = addr_map_bits & 0x3f
327
- void *nvic = env->nvic;
246
+ };
328
+ NVICState *nvic = env->nvic;
247
+}
329
/*
248
+
330
* Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
249
+static void tlb_flush_page_bits_by_mmuidx_async_1(CPUState *cpu,
331
* are banked and we want to update the bit in the bank for the
250
+ run_on_cpu_data runon)
251
+{
252
+ tlb_flush_page_bits_by_mmuidx_async_0(cpu, decode_runon_to_pbm(runon));
253
+}
254
+
255
+static void tlb_flush_page_bits_by_mmuidx_async_2(CPUState *cpu,
256
+ run_on_cpu_data data)
257
+{
258
+ TLBFlushPageBitsByMMUIdxData *d = data.host_ptr;
259
+ tlb_flush_page_bits_by_mmuidx_async_0(cpu, *d);
260
+ g_free(d);
261
+}
262
+
263
+void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
264
+ uint16_t idxmap, unsigned bits)
265
+{
266
+ TLBFlushPageBitsByMMUIdxData d;
267
+ run_on_cpu_data runon;
268
+
269
+ /* If all bits are significant, this devolves to tlb_flush_page. */
270
+ if (bits >= TARGET_LONG_BITS) {
271
+ tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
272
+ return;
273
+ }
274
+ /* If no page bits are significant, this devolves to tlb_flush. */
275
+ if (bits < TARGET_PAGE_BITS) {
276
+ tlb_flush_by_mmuidx(cpu, idxmap);
277
+ return;
278
+ }
279
+
280
+ /* This should already be page aligned */
281
+ d.addr = addr & TARGET_PAGE_MASK;
282
+ d.idxmap = idxmap;
283
+ d.bits = bits;
284
+
285
+ if (qemu_cpu_is_self(cpu)) {
286
+ tlb_flush_page_bits_by_mmuidx_async_0(cpu, d);
287
+ } else if (encode_pbm_to_runon(&runon, d)) {
288
+ async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
289
+ } else {
290
+ TLBFlushPageBitsByMMUIdxData *p
291
+ = g_new(TLBFlushPageBitsByMMUIdxData, 1);
292
+
293
+ /* Otherwise allocate a structure, freed by the worker. */
294
+ *p = d;
295
+ async_run_on_cpu(cpu, tlb_flush_page_bits_by_mmuidx_async_2,
296
+ RUN_ON_CPU_HOST_PTR(p));
297
+ }
298
+}
299
+
300
+void tlb_flush_page_bits_by_mmuidx_all_cpus(CPUState *src_cpu,
301
+ target_ulong addr,
302
+ uint16_t idxmap,
303
+ unsigned bits)
304
+{
305
+ TLBFlushPageBitsByMMUIdxData d;
306
+ run_on_cpu_data runon;
307
+
308
+ /* If all bits are significant, this devolves to tlb_flush_page. */
309
+ if (bits >= TARGET_LONG_BITS) {
310
+ tlb_flush_page_by_mmuidx_all_cpus(src_cpu, addr, idxmap);
311
+ return;
312
+ }
313
+ /* If no page bits are significant, this devolves to tlb_flush. */
314
+ if (bits < TARGET_PAGE_BITS) {
315
+ tlb_flush_by_mmuidx_all_cpus(src_cpu, idxmap);
316
+ return;
317
+ }
318
+
319
+ /* This should already be page aligned */
320
+ d.addr = addr & TARGET_PAGE_MASK;
321
+ d.idxmap = idxmap;
322
+ d.bits = bits;
323
+
324
+ if (encode_pbm_to_runon(&runon, d)) {
325
+ flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
326
+ } else {
327
+ CPUState *dst_cpu;
328
+ TLBFlushPageBitsByMMUIdxData *p;
329
+
330
+ /* Allocate a separate data block for each destination cpu. */
331
+ CPU_FOREACH(dst_cpu) {
332
+ if (dst_cpu != src_cpu) {
333
+ p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
334
+ *p = d;
335
+ async_run_on_cpu(dst_cpu,
336
+ tlb_flush_page_bits_by_mmuidx_async_2,
337
+ RUN_ON_CPU_HOST_PTR(p));
338
+ }
339
+ }
340
+ }
341
+
342
+ tlb_flush_page_bits_by_mmuidx_async_0(src_cpu, d);
343
+}
344
+
345
+void tlb_flush_page_bits_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
346
+ target_ulong addr,
347
+ uint16_t idxmap,
348
+ unsigned bits)
349
+{
350
+ TLBFlushPageBitsByMMUIdxData d;
351
+ run_on_cpu_data runon;
352
+
353
+ /* If all bits are significant, this devolves to tlb_flush_page. */
354
+ if (bits >= TARGET_LONG_BITS) {
355
+ tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
356
+ return;
357
+ }
358
+ /* If no page bits are significant, this devolves to tlb_flush. */
359
+ if (bits < TARGET_PAGE_BITS) {
360
+ tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
361
+ return;
362
+ }
363
+
364
+ /* This should already be page aligned */
365
+ d.addr = addr & TARGET_PAGE_MASK;
366
+ d.idxmap = idxmap;
367
+ d.bits = bits;
368
+
369
+ if (encode_pbm_to_runon(&runon, d)) {
370
+ flush_all_helper(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1, runon);
371
+ async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_1,
372
+ runon);
373
+ } else {
374
+ CPUState *dst_cpu;
375
+ TLBFlushPageBitsByMMUIdxData *p;
376
+
377
+ /* Allocate a separate data block for each destination cpu. */
378
+ CPU_FOREACH(dst_cpu) {
379
+ if (dst_cpu != src_cpu) {
380
+ p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
381
+ *p = d;
382
+ async_run_on_cpu(dst_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
383
+ RUN_ON_CPU_HOST_PTR(p));
384
+ }
385
+ }
386
+
387
+ p = g_new(TLBFlushPageBitsByMMUIdxData, 1);
388
+ *p = d;
389
+ async_safe_run_on_cpu(src_cpu, tlb_flush_page_bits_by_mmuidx_async_2,
390
+ RUN_ON_CPU_HOST_PTR(p));
391
+ }
392
+}
393
+
394
/* update the TLBs so that writes to code in the virtual page 'addr'
395
can be detected */
396
void tlb_protect_code(ram_addr_t ram_addr)
397
--
332
--
398
2.20.1
333
2.34.1
399
334
400
335
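To make the intended use concrete: with TBI enabled only the low 56 bits of the virtual address are significant, so a TLBI-by-VA helper can flush every top-byte alias of a page with a single call to the API declared above. An illustrative sketch (the wrapper name is made up; the function and its signature are the ones added by this patch):

    /* Flush one page in the given MMU indexes, ignoring the top byte. */
    static void flush_page_ignoring_top_byte(CPUState *cs, target_ulong va,
                                             uint16_t idxmap)
    {
        tlb_flush_page_bits_by_mmuidx(cs, va, idxmap, 56);
    }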
1
In arm_cpu_realizefn(), if the CPU has VFP or Neon disabled then we
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
squash the ID register fields so that we don't advertise it to the
2
3
guest. This code was written for A-profile and needs some tweaks to
3
While dozens of files include "cpu.h", only 3 files require
4
work correctly on M-profile:
4
these NVIC helper declarations.
5
6
* A-profile only fields should not be zeroed on M-profile:
7
- MVFR0.FPSHVEC,FPTRAP
8
- MVFR1.SIMDLS,SIMDINT,SIMDSP,SIMDHP
9
- MVFR2.SIMDMISC
10
* M-profile only fields should be zeroed on M-profile:
11
- MVFR1.FP16
12
13
In particular, because MVFR1.SIMDHP on A-profile is the same field as
14
MVFR1.FP16 on M-profile this code was incorrectly disabling FP16
15
support on an M-profile CPU (where has_neon is always false). This
16
isn't a visible bug yet because we don't have any M-profile CPUs with
17
FP16 support, but the change is necessary before we introduce any.
18
5
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Message-id: 20230206223502.25122-12-philmd@linaro.org
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Message-id: 20201019151301.2046-9-peter.maydell@linaro.org
22
---
10
---
23
target/arm/cpu.c | 29 ++++++++++++++++++-----------
11
include/hw/intc/armv7m_nvic.h | 123 ++++++++++++++++++++++++++++++++++
24
1 file changed, 18 insertions(+), 11 deletions(-)
12
target/arm/cpu.h | 123 ----------------------------------
25
13
target/arm/cpu.c | 4 +-
14
target/arm/cpu_tcg.c | 3 +
15
target/arm/m_helper.c | 3 +
16
5 files changed, 132 insertions(+), 124 deletions(-)
17
18
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/intc/armv7m_nvic.h
21
+++ b/include/hw/intc/armv7m_nvic.h
22
@@ -XXX,XX +XXX,XX @@ struct NVICState {
23
qemu_irq sysresetreq;
24
};
25
26
+/* Interface between CPU and Interrupt controller. */
27
+/**
28
+ * armv7m_nvic_set_pending: mark the specified exception as pending
29
+ * @s: the NVIC
30
+ * @irq: the exception number to mark pending
31
+ * @secure: false for non-banked exceptions or for the nonsecure
32
+ * version of a banked exception, true for the secure version of a banked
33
+ * exception.
34
+ *
35
+ * Marks the specified exception as pending. Note that we will assert()
36
+ * if @secure is true and @irq does not specify one of the fixed set
37
+ * of architecturally banked exceptions.
38
+ */
39
+void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
40
+/**
41
+ * armv7m_nvic_set_pending_derived: mark this derived exception as pending
42
+ * @s: the NVIC
43
+ * @irq: the exception number to mark pending
44
+ * @secure: false for non-banked exceptions or for the nonsecure
45
+ * version of a banked exception, true for the secure version of a banked
46
+ * exception.
47
+ *
48
+ * Similar to armv7m_nvic_set_pending(), but specifically for derived
49
+ * exceptions (exceptions generated in the course of trying to take
50
+ * a different exception).
51
+ */
52
+void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
53
+/**
54
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
55
+ * @s: the NVIC
56
+ * @irq: the exception number to mark pending
57
+ * @secure: false for non-banked exceptions or for the nonsecure
58
+ * version of a banked exception, true for the secure version of a banked
59
+ * exception.
60
+ *
61
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
62
+ * generated in the course of lazy stacking of FP registers.
63
+ */
64
+void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
65
+/**
66
+ * armv7m_nvic_get_pending_irq_info: return highest priority pending
67
+ * exception, and whether it targets Secure state
68
+ * @s: the NVIC
69
+ * @pirq: set to pending exception number
70
+ * @ptargets_secure: set to whether pending exception targets Secure
71
+ *
72
+ * This function writes the number of the highest priority pending
73
+ * exception (the one which would be made active by
74
+ * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
75
+ * to true if the current highest priority pending exception should
76
+ * be taken to Secure state, false for NS.
77
+ */
78
+void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
79
+ bool *ptargets_secure);
80
+/**
81
+ * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
82
+ * @s: the NVIC
83
+ *
84
+ * Move the current highest priority pending exception from the pending
85
+ * state to the active state, and update v7m.exception to indicate that
86
+ * it is the exception currently being handled.
87
+ */
88
+void armv7m_nvic_acknowledge_irq(NVICState *s);
89
+/**
90
+ * armv7m_nvic_complete_irq: complete specified interrupt or exception
91
+ * @s: the NVIC
92
+ * @irq: the exception number to complete
93
+ * @secure: true if this exception was secure
94
+ *
95
+ * Returns: -1 if the irq was not active
96
+ * 1 if completing this irq brought us back to base (no active irqs)
97
+ * 0 if there is still an irq active after this one was completed
98
+ * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
99
+ */
100
+int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
101
+/**
102
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
103
+ * @s: the NVIC
104
+ * @irq: the exception number to mark pending
105
+ * @secure: false for non-banked exceptions or for the nonsecure
106
+ * version of a banked exception, true for the secure version of a banked
107
+ * exception.
108
+ *
109
+ * Return whether an exception is "ready", i.e. whether the exception is
110
+ * enabled and is configured at a priority which would allow it to
111
+ * interrupt the current execution priority. This controls whether the
112
+ * RDY bit for it in the FPCCR is set.
113
+ */
114
+bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
115
+/**
116
+ * armv7m_nvic_raw_execution_priority: return the raw execution priority
117
+ * @s: the NVIC
118
+ *
119
+ * Returns: the raw execution priority as defined by the v8M architecture.
120
+ * This is the execution priority minus the effects of AIRCR.PRIS,
121
+ * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
122
+ * (v8M ARM ARM I_PKLD.)
123
+ */
124
+int armv7m_nvic_raw_execution_priority(NVICState *s);
125
+/**
126
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
127
+ * priority is negative for the specified security state.
128
+ * @s: the NVIC
129
+ * @secure: the security state to test
130
+ * This corresponds to the pseudocode IsReqExecPriNeg().
131
+ */
132
+#ifndef CONFIG_USER_ONLY
133
+bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
134
+#else
135
+static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
136
+{
137
+ return false;
138
+}
139
+#endif
140
+#ifndef CONFIG_USER_ONLY
141
+bool armv7m_nvic_can_take_pending_exception(NVICState *s);
142
+#else
143
+static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
144
+{
145
+ return true;
146
+}
147
+#endif
148
+
149
#endif
150
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
151
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/cpu.h
153
+++ b/target/arm/cpu.h
154
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
155
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
156
uint32_t cur_el, bool secure);
157
158
-/* Interface between CPU and Interrupt controller. */
159
-#ifndef CONFIG_USER_ONLY
160
-bool armv7m_nvic_can_take_pending_exception(NVICState *s);
161
-#else
162
-static inline bool armv7m_nvic_can_take_pending_exception(NVICState *s)
163
-{
164
- return true;
165
-}
166
-#endif
167
-/**
168
- * armv7m_nvic_set_pending: mark the specified exception as pending
169
- * @s: the NVIC
170
- * @irq: the exception number to mark pending
171
- * @secure: false for non-banked exceptions or for the nonsecure
172
- * version of a banked exception, true for the secure version of a banked
173
- * exception.
174
- *
175
- * Marks the specified exception as pending. Note that we will assert()
176
- * if @secure is true and @irq does not specify one of the fixed set
177
- * of architecturally banked exceptions.
178
- */
179
-void armv7m_nvic_set_pending(NVICState *s, int irq, bool secure);
180
-/**
181
- * armv7m_nvic_set_pending_derived: mark this derived exception as pending
182
- * @s: the NVIC
183
- * @irq: the exception number to mark pending
184
- * @secure: false for non-banked exceptions or for the nonsecure
185
- * version of a banked exception, true for the secure version of a banked
186
- * exception.
187
- *
188
- * Similar to armv7m_nvic_set_pending(), but specifically for derived
189
- * exceptions (exceptions generated in the course of trying to take
190
- * a different exception).
191
- */
192
-void armv7m_nvic_set_pending_derived(NVICState *s, int irq, bool secure);
193
-/**
194
- * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
195
- * @s: the NVIC
196
- * @irq: the exception number to mark pending
197
- * @secure: false for non-banked exceptions or for the nonsecure
198
- * version of a banked exception, true for the secure version of a banked
199
- * exception.
200
- *
201
- * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
202
- * generated in the course of lazy stacking of FP registers.
203
- */
204
-void armv7m_nvic_set_pending_lazyfp(NVICState *s, int irq, bool secure);
205
-/**
206
- * armv7m_nvic_get_pending_irq_info: return highest priority pending
207
- * exception, and whether it targets Secure state
208
- * @s: the NVIC
209
- * @pirq: set to pending exception number
210
- * @ptargets_secure: set to whether pending exception targets Secure
211
- *
212
- * This function writes the number of the highest priority pending
213
- * exception (the one which would be made active by
214
- * armv7m_nvic_acknowledge_irq()) to @pirq, and sets @ptargets_secure
215
- * to true if the current highest priority pending exception should
216
- * be taken to Secure state, false for NS.
217
- */
218
-void armv7m_nvic_get_pending_irq_info(NVICState *s, int *pirq,
219
- bool *ptargets_secure);
220
-/**
221
- * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
222
- * @s: the NVIC
223
- *
224
- * Move the current highest priority pending exception from the pending
225
- * state to the active state, and update v7m.exception to indicate that
226
- * it is the exception currently being handled.
227
- */
228
-void armv7m_nvic_acknowledge_irq(NVICState *s);
229
-/**
230
- * armv7m_nvic_complete_irq: complete specified interrupt or exception
231
- * @s: the NVIC
232
- * @irq: the exception number to complete
233
- * @secure: true if this exception was secure
234
- *
235
- * Returns: -1 if the irq was not active
236
- * 1 if completing this irq brought us back to base (no active irqs)
237
- * 0 if there is still an irq active after this one was completed
238
- * (Ignoring -1, this is the same as the RETTOBASE value before completion.)
239
- */
240
-int armv7m_nvic_complete_irq(NVICState *s, int irq, bool secure);
241
-/**
242
- * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
243
- * @s: the NVIC
244
- * @irq: the exception number to mark pending
245
- * @secure: false for non-banked exceptions or for the nonsecure
246
- * version of a banked exception, true for the secure version of a banked
247
- * exception.
248
- *
249
- * Return whether an exception is "ready", i.e. whether the exception is
250
- * enabled and is configured at a priority which would allow it to
251
- * interrupt the current execution priority. This controls whether the
252
- * RDY bit for it in the FPCCR is set.
253
- */
254
-bool armv7m_nvic_get_ready_status(NVICState *s, int irq, bool secure);
255
-/**
256
- * armv7m_nvic_raw_execution_priority: return the raw execution priority
257
- * @s: the NVIC
258
- *
259
- * Returns: the raw execution priority as defined by the v8M architecture.
260
- * This is the execution priority minus the effects of AIRCR.PRIS,
261
- * and minus any PRIMASK/FAULTMASK/BASEPRI priority boosting.
262
- * (v8M ARM ARM I_PKLD.)
263
- */
264
-int armv7m_nvic_raw_execution_priority(NVICState *s);
265
-/**
266
- * armv7m_nvic_neg_prio_requested: return true if the requested execution
267
- * priority is negative for the specified security state.
268
- * @s: the NVIC
269
- * @secure: the security state to test
270
- * This corresponds to the pseudocode IsReqExecPriNeg().
271
- */
272
-#ifndef CONFIG_USER_ONLY
273
-bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure);
274
-#else
275
-static inline bool armv7m_nvic_neg_prio_requested(NVICState *s, bool secure)
276
-{
277
- return false;
278
-}
279
-#endif
280
-
281
/* Interface for defining coprocessor registers.
282
* Registers are defined in tables of arm_cp_reginfo structs
283
* which are passed to define_arm_cp_regs().
26
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
284
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
27
index XXXXXXX..XXXXXXX 100644
285
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpu.c
286
--- a/target/arm/cpu.c
29
+++ b/target/arm/cpu.c
287
+++ b/target/arm/cpu.c
30
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
288
@@ -XXX,XX +XXX,XX @@
31
u = cpu->isar.mvfr0;
289
#if !defined(CONFIG_USER_ONLY)
32
u = FIELD_DP32(u, MVFR0, FPSP, 0);
290
#include "hw/loader.h"
33
u = FIELD_DP32(u, MVFR0, FPDP, 0);
291
#include "hw/boards.h"
34
- u = FIELD_DP32(u, MVFR0, FPTRAP, 0);
292
+#ifdef CONFIG_TCG
35
u = FIELD_DP32(u, MVFR0, FPDIVIDE, 0);
293
#include "hw/intc/armv7m_nvic.h"
36
u = FIELD_DP32(u, MVFR0, FPSQRT, 0);
294
-#endif
37
- u = FIELD_DP32(u, MVFR0, FPSHVEC, 0);
295
+#endif /* CONFIG_TCG */
38
u = FIELD_DP32(u, MVFR0, FPROUND, 0);
296
+#endif /* !CONFIG_USER_ONLY */
39
+ if (!arm_feature(env, ARM_FEATURE_M)) {
297
#include "sysemu/tcg.h"
40
+ u = FIELD_DP32(u, MVFR0, FPTRAP, 0);
298
#include "sysemu/qtest.h"
41
+ u = FIELD_DP32(u, MVFR0, FPSHVEC, 0);
299
#include "sysemu/hw_accel.h"
42
+ }
300
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
43
cpu->isar.mvfr0 = u;
301
index XXXXXXX..XXXXXXX 100644
44
302
--- a/target/arm/cpu_tcg.c
45
u = cpu->isar.mvfr1;
303
+++ b/target/arm/cpu_tcg.c
46
u = FIELD_DP32(u, MVFR1, FPFTZ, 0);
304
@@ -XXX,XX +XXX,XX @@
47
u = FIELD_DP32(u, MVFR1, FPDNAN, 0);
305
#include "hw/boards.h"
48
u = FIELD_DP32(u, MVFR1, FPHP, 0);
306
#endif
49
+ if (arm_feature(env, ARM_FEATURE_M)) {
307
#include "cpregs.h"
50
+ u = FIELD_DP32(u, MVFR1, FP16, 0);
308
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG)
51
+ }
309
+#include "hw/intc/armv7m_nvic.h"
52
cpu->isar.mvfr1 = u;
310
+#endif
53
311
54
u = cpu->isar.mvfr2;
312
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
313
/* Share AArch32 -cpu max features with AArch64. */
56
u = FIELD_DP32(u, ID_ISAR6, FHM, 0);
314
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
57
cpu->isar.id_isar6 = u;
315
index XXXXXXX..XXXXXXX 100644
58
316
--- a/target/arm/m_helper.c
59
- u = cpu->isar.mvfr1;
317
+++ b/target/arm/m_helper.c
60
- u = FIELD_DP32(u, MVFR1, SIMDLS, 0);
318
@@ -XXX,XX +XXX,XX @@
61
- u = FIELD_DP32(u, MVFR1, SIMDINT, 0);
319
#include "exec/cpu_ldst.h"
62
- u = FIELD_DP32(u, MVFR1, SIMDSP, 0);
320
#include "semihosting/common-semi.h"
63
- u = FIELD_DP32(u, MVFR1, SIMDHP, 0);
321
#endif
64
- cpu->isar.mvfr1 = u;
322
+#if !defined(CONFIG_USER_ONLY)
65
+ if (!arm_feature(env, ARM_FEATURE_M)) {
323
+#include "hw/intc/armv7m_nvic.h"
66
+ u = cpu->isar.mvfr1;
324
+#endif
67
+ u = FIELD_DP32(u, MVFR1, SIMDLS, 0);
325
68
+ u = FIELD_DP32(u, MVFR1, SIMDINT, 0);
326
static void v7m_msr_xpsr(CPUARMState *env, uint32_t mask,
69
+ u = FIELD_DP32(u, MVFR1, SIMDSP, 0);
327
uint32_t reg, uint32_t val)
70
+ u = FIELD_DP32(u, MVFR1, SIMDHP, 0);
71
+ cpu->isar.mvfr1 = u;
72
73
- u = cpu->isar.mvfr2;
74
- u = FIELD_DP32(u, MVFR2, SIMDMISC, 0);
75
- cpu->isar.mvfr2 = u;
76
+ u = cpu->isar.mvfr2;
77
+ u = FIELD_DP32(u, MVFR2, SIMDMISC, 0);
78
+ cpu->isar.mvfr2 = u;
79
+ }
80
}
81
82
if (!cpu->has_neon && !cpu->has_vfp) {
83
--
328
--
84
2.20.1
329
2.34.1
85
330
86
331
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Alex Bennée <alex.bennee@linaro.org>
2
2
3
This peripheral has 1 free-running timer and 4 compare registers.
3
The two TCG tests for GICv2 and GICv3 use very heavyweight distros
4
4
that take a long time to boot up, especially for an --enable-debug
5
Only the free-running timer is implemented. Add support for the
5
build. The total code coverage they give is:
6
COMPARE registers (each register is wired to an IRQ).
6
7
7
Overall coverage rate:
8
Reference: "BCM2835 ARM Peripherals" datasheet [*]
8
lines......: 11.2% (59584 of 530123 lines)
9
chapter 12 "System Timer":
9
functions..: 15.0% (7436 of 49443 functions)
10
10
branches...: 6.3% (19273 of 303933 branches)
11
The System Timer peripheral provides four 32-bit timer channels
11
12
and a single 64-bit free running counter. Each channel has an
12
We already get pretty close to that with the machine_aarch64_virt
13
output compare register, which is compared against the 32 least
13
tests, which only do one full boot (~120s vs ~600s) of alpine. We
14
significant bits of the free running counter values. When the
14
expand the kernel+initrd boot (~8s) to test both GICs and also add an
15
two values match, the system timer peripheral generates a signal
15
RNG device and a block device to generate a few IRQs and exercise the
16
to indicate a match for the appropriate channel. The match signal
16
storage layer. With that we get to a coverage of:
17
is then fed into the interrupt controller.
17
18
18
Overall coverage rate:
19
This peripheral is used since Linux 3.7, commit ee4af5696720
19
lines......: 11.0% (58121 of 530123 lines)
20
("ARM: bcm2835: add system timer").
20
functions..: 14.9% (7343 of 49443 functions)
21
21
branches...: 6.0% (18269 of 303933 branches)
22
[*] https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
22
23
23
which I feel is close enough given the massive time saving. If we want
24
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
24
to target any more sub-systems we can use lighter weight more directed
25
Reviewed-by: Luc Michel <luc@lmichel.fr>
25
tests.
26
Message-id: 20201010203709.3116542-4-f4bug@amsat.org
26
27
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
28
Reviewed-by: Fabiano Rosas <farosas@suse.de>
29
Acked-by: Richard Henderson <richard.henderson@linaro.org>
30
Message-id: 20230203181632.2919715-1-alex.bennee@linaro.org
31
Cc: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
32
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
---
33
---
29
include/hw/timer/bcm2835_systmr.h | 11 +++++--
34
tests/avocado/boot_linux.py | 48 ++++----------------
30
hw/timer/bcm2835_systmr.c | 48 ++++++++++++++++++++-----------
35
tests/avocado/machine_aarch64_virt.py | 63 ++++++++++++++++++++++++---
31
hw/timer/trace-events | 6 ++--
36
2 files changed, 65 insertions(+), 46 deletions(-)
32
3 files changed, 44 insertions(+), 21 deletions(-)
37
33
38
diff --git a/tests/avocado/boot_linux.py b/tests/avocado/boot_linux.py
34
diff --git a/include/hw/timer/bcm2835_systmr.h b/include/hw/timer/bcm2835_systmr.h
35
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
36
--- a/include/hw/timer/bcm2835_systmr.h
40
--- a/tests/avocado/boot_linux.py
37
+++ b/include/hw/timer/bcm2835_systmr.h
41
+++ b/tests/avocado/boot_linux.py
42
@@ -XXX,XX +XXX,XX @@ def test_pc_q35_kvm(self):
43
self.launch_and_wait(set_up_ssh_connection=False)
44
45
46
-# For Aarch64 we only boot KVM tests in CI as the TCG tests are very
47
-# heavyweight. There are lighter weight distros which we use in the
48
-# machine_aarch64_virt.py tests.
49
+# For Aarch64 we only boot KVM tests in CI as booting the current
50
+# Fedora OS in TCG tests is very heavyweight. There are lighter weight
51
+# distros which we use in the machine_aarch64_virt.py tests.
52
class BootLinuxAarch64(LinuxTest):
53
"""
54
:avocado: tags=arch:aarch64
55
:avocado: tags=machine:virt
56
- :avocado: tags=machine:gic-version=2
57
"""
58
timeout = 720
59
60
- def add_common_args(self):
61
- self.vm.add_args('-bios',
62
- os.path.join(BUILD_DIR, 'pc-bios',
63
- 'edk2-aarch64-code.fd'))
64
- self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
65
- self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
66
-
67
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
68
- def test_fedora_cloud_tcg_gicv2(self):
69
- """
70
- :avocado: tags=accel:tcg
71
- :avocado: tags=cpu:max
72
- :avocado: tags=device:gicv2
73
- """
74
- self.require_accelerator("tcg")
75
- self.vm.add_args("-accel", "tcg")
76
- self.vm.add_args("-cpu", "max,lpa2=off")
77
- self.vm.add_args("-machine", "virt,gic-version=2")
78
- self.add_common_args()
79
- self.launch_and_wait(set_up_ssh_connection=False)
80
-
81
- @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
82
- def test_fedora_cloud_tcg_gicv3(self):
83
- """
84
- :avocado: tags=accel:tcg
85
- :avocado: tags=cpu:max
86
- :avocado: tags=device:gicv3
87
- """
88
- self.require_accelerator("tcg")
89
- self.vm.add_args("-accel", "tcg")
90
- self.vm.add_args("-cpu", "max,lpa2=off")
91
- self.vm.add_args("-machine", "virt,gic-version=3")
92
- self.add_common_args()
93
- self.launch_and_wait(set_up_ssh_connection=False)
94
-
95
def test_virt_kvm(self):
96
"""
97
:avocado: tags=accel:kvm
98
@@ -XXX,XX +XXX,XX @@ def test_virt_kvm(self):
99
self.require_accelerator("kvm")
100
self.vm.add_args("-accel", "kvm")
101
self.vm.add_args("-machine", "virt,gic-version=host")
102
- self.add_common_args()
103
+ self.vm.add_args('-bios',
104
+ os.path.join(BUILD_DIR, 'pc-bios',
105
+ 'edk2-aarch64-code.fd'))
106
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
107
+ self.vm.add_args('-object', 'rng-random,id=rng0,filename=/dev/urandom')
108
self.launch_and_wait(set_up_ssh_connection=False)
109
110
111
diff --git a/tests/avocado/machine_aarch64_virt.py b/tests/avocado/machine_aarch64_virt.py
112
index XXXXXXX..XXXXXXX 100644
113
--- a/tests/avocado/machine_aarch64_virt.py
114
+++ b/tests/avocado/machine_aarch64_virt.py
38
@@ -XXX,XX +XXX,XX @@
115
@@ -XXX,XX +XXX,XX @@
39
116
40
#include "hw/sysbus.h"
117
import time
41
#include "hw/irq.h"
118
import os
42
+#include "qemu/timer.h"
119
+import logging
43
#include "qom/object.h"
120
44
121
from avocado_qemu import QemuSystemTest
45
#define TYPE_BCM2835_SYSTIMER "bcm2835-sys-timer"
122
from avocado_qemu import wait_for_console_pattern
46
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(BCM2835SystemTimerState, BCM2835_SYSTIMER)
123
from avocado_qemu import exec_command
47
124
from avocado_qemu import BUILD_DIR
48
#define BCM2835_SYSTIMER_COUNT 4
125
+from avocado.utils import process
49
126
+from avocado.utils.path import find_command
50
+typedef struct {
127
51
+ unsigned id;
128
class Aarch64VirtMachine(QemuSystemTest):
52
+ QEMUTimer timer;
129
KERNEL_COMMON_COMMAND_LINE = 'printk.time=0 '
53
+ qemu_irq irq;
130
@@ -XXX,XX +XXX,XX @@ def test_alpine_virt_tcg_gic_max(self):
54
+ BCM2835SystemTimerState *state;
131
self.wait_for_console_pattern('Welcome to Alpine Linux 3.16')
55
+} BCM2835SystemTimerCompare;
132
56
+
133
57
struct BCM2835SystemTimerState {
134
- def test_aarch64_virt(self):
58
/*< private >*/
135
+ def common_aarch64_virt(self, machine):
59
SysBusDevice parent_obj;
136
"""
60
137
- :avocado: tags=arch:aarch64
61
/*< public >*/
138
- :avocado: tags=machine:virt
62
MemoryRegion iomem;
139
- :avocado: tags=accel:tcg
63
- qemu_irq irq;
140
- :avocado: tags=cpu:max
64
-
141
+ Common code to launch basic virt machine with kernel+initrd
65
struct {
142
+ and a scratch disk.
66
uint32_t ctrl_status;
143
"""
67
uint32_t compare[BCM2835_SYSTIMER_COUNT];
144
+ logger = logging.getLogger('aarch64_virt')
68
} reg;
145
+
69
+ BCM2835SystemTimerCompare tmr[BCM2835_SYSTIMER_COUNT];
146
kernel_url = ('https://fileserver.linaro.org/s/'
70
};
147
'z6B2ARM7DQT3HWN/download')
71
148
-
72
#endif
149
kernel_hash = 'ed11daab50c151dde0e1e9c9cb8b2d9bd3215347'
73
diff --git a/hw/timer/bcm2835_systmr.c b/hw/timer/bcm2835_systmr.c
150
kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
74
index XXXXXXX..XXXXXXX 100644
151
75
--- a/hw/timer/bcm2835_systmr.c
152
@@ -XXX,XX +XXX,XX @@ def test_aarch64_virt(self):
76
+++ b/hw/timer/bcm2835_systmr.c
153
'console=ttyAMA0')
77
@@ -XXX,XX +XXX,XX @@ REG32(COMPARE1, 0x10)
154
self.require_accelerator("tcg")
78
REG32(COMPARE2, 0x14)
155
self.vm.add_args('-cpu', 'max,pauth-impdef=on',
79
REG32(COMPARE3, 0x18)
156
+ '-machine', machine,
80
157
'-accel', 'tcg',
81
-static void bcm2835_systmr_update_irq(BCM2835SystemTimerState *s)
158
'-kernel', kernel_path,
82
+static void bcm2835_systmr_timer_expire(void *opaque)
159
'-append', kernel_command_line)
83
{
160
+
84
- bool enable = !!s->reg.ctrl_status;
161
+ # A RNG offers an easy way to generate a few IRQs
85
+ BCM2835SystemTimerCompare *tmr = opaque;
162
+ self.vm.add_args('-device', 'virtio-rng-pci,rng=rng0')
86
163
+ self.vm.add_args('-object',
87
- trace_bcm2835_systmr_irq(enable);
164
+ 'rng-random,id=rng0,filename=/dev/urandom')
88
- qemu_set_irq(s->irq, enable);
165
+
89
-}
166
+ # Also add a scratch block device
90
-
167
+ logger.info('creating scratch qcow2 image')
91
-static void bcm2835_systmr_update_compare(BCM2835SystemTimerState *s,
168
+ image_path = os.path.join(self.workdir, 'scratch.qcow2')
92
- unsigned timer_index)
169
+ qemu_img = os.path.join(BUILD_DIR, 'qemu-img')
93
-{
170
+ if not os.path.exists(qemu_img):
94
- /* TODO fow now, since neither Linux nor U-boot use these timers. */
171
+ qemu_img = find_command('qemu-img', False)
95
- qemu_log_mask(LOG_UNIMP, "COMPARE register %u not implemented\n",
172
+ if qemu_img is False:
96
- timer_index);
173
+ self.cancel('Could not find "qemu-img", which is required to '
97
+ trace_bcm2835_systmr_timer_expired(tmr->id);
174
+ 'create the temporary qcow2 image')
98
+ tmr->state->reg.ctrl_status |= 1 << tmr->id;
175
+ cmd = '%s create -f qcow2 %s 8M' % (qemu_img, image_path)
99
+ qemu_set_irq(tmr->irq, 1);
176
+ process.run(cmd)
100
}
177
+
101
178
+ # Add the device
102
static uint64_t bcm2835_systmr_read(void *opaque, hwaddr offset,
179
+ self.vm.add_args('-blockdev',
103
@@ -XXX,XX +XXX,XX @@ static uint64_t bcm2835_systmr_read(void *opaque, hwaddr offset,
180
+ f"driver=qcow2,file.driver=file,file.filename={image_path},node-name=scratch")
104
}
181
+ self.vm.add_args('-device',
105
182
+ 'virtio-blk-device,drive=scratch')
106
static void bcm2835_systmr_write(void *opaque, hwaddr offset,
183
+
107
- uint64_t value, unsigned size)
184
self.vm.launch()
108
+ uint64_t value64, unsigned size)
185
self.wait_for_console_pattern('Welcome to Buildroot')
109
{
186
time.sleep(0.1)
110
BCM2835SystemTimerState *s = BCM2835_SYSTIMER(opaque);
187
exec_command(self, 'root')
111
+ int index;
188
time.sleep(0.1)
112
+ uint32_t value = value64;
189
+ exec_command(self, 'dd if=/dev/hwrng of=/dev/vda bs=512 count=4')
113
+ uint32_t triggers_delay_us;
190
+ time.sleep(0.1)
114
+ uint64_t now;
191
+ exec_command(self, 'md5sum /dev/vda')
115
192
+ time.sleep(0.1)
116
trace_bcm2835_systmr_write(offset, value);
193
+ exec_command(self, 'cat /proc/interrupts')
117
switch (offset) {
194
+ time.sleep(0.1)
118
case A_CTRL_STATUS:
195
exec_command(self, 'cat /proc/self/maps')
119
s->reg.ctrl_status &= ~value; /* Ack */
196
time.sleep(0.1)
120
- bcm2835_systmr_update_irq(s);
197
+
121
+ for (index = 0; index < ARRAY_SIZE(s->tmr); index++) {
198
+ def test_aarch64_virt_gicv3(self):
122
+ if (extract32(value, index, 1)) {
199
+ """
123
+ trace_bcm2835_systmr_irq_ack(index);
200
+ :avocado: tags=arch:aarch64
124
+ qemu_set_irq(s->tmr[index].irq, 0);
201
+ :avocado: tags=machine:virt
125
+ }
202
+ :avocado: tags=accel:tcg
126
+ }
203
+ :avocado: tags=cpu:max
127
break;
204
+ """
128
case A_COMPARE0 ... A_COMPARE3:
205
+ self.common_aarch64_virt("virt,gic_version=3")
129
- s->reg.compare[(offset - A_COMPARE0) >> 2] = value;
206
+
130
- bcm2835_systmr_update_compare(s, (offset - A_COMPARE0) >> 2);
207
+ def test_aarch64_virt_gicv2(self):
131
+ index = (offset - A_COMPARE0) >> 2;
208
+ """
132
+ s->reg.compare[index] = value;
209
+ :avocado: tags=arch:aarch64
133
+ now = qemu_clock_get_us(QEMU_CLOCK_VIRTUAL);
210
+ :avocado: tags=machine:virt
134
+ /* Compare lower 32-bits of the free-running counter. */
211
+ :avocado: tags=accel:tcg
135
+ triggers_delay_us = value - now;
212
+ :avocado: tags=cpu:max
136
+ trace_bcm2835_systmr_run(index, triggers_delay_us);
213
+ """
137
+ timer_mod(&s->tmr[index].timer, now + triggers_delay_us);
214
+ self.common_aarch64_virt("virt,gic-version=2")
138
break;
139
case A_COUNTER_LOW:
140
case A_COUNTER_HIGH:
141
@@ -XXX,XX +XXX,XX @@ static void bcm2835_systmr_realize(DeviceState *dev, Error **errp)
142
memory_region_init_io(&s->iomem, OBJECT(dev), &bcm2835_systmr_ops,
143
s, "bcm2835-sys-timer", 0x20);
144
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
145
- sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq);
146
+
147
+ for (size_t i = 0; i < ARRAY_SIZE(s->tmr); i++) {
148
+ s->tmr[i].id = i;
149
+ s->tmr[i].state = s;
150
+ sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->tmr[i].irq);
151
+ timer_init_us(&s->tmr[i].timer, QEMU_CLOCK_VIRTUAL,
152
+ bcm2835_systmr_timer_expire, &s->tmr[i]);
153
+ }
154
}
155
156
static const VMStateDescription bcm2835_systmr_vmstate = {
157
diff --git a/hw/timer/trace-events b/hw/timer/trace-events
158
index XXXXXXX..XXXXXXX 100644
159
--- a/hw/timer/trace-events
160
+++ b/hw/timer/trace-events
161
@@ -XXX,XX +XXX,XX @@ nrf51_timer_write(uint8_t timer_id, uint64_t addr, uint32_t value, unsigned size
162
nrf51_timer_set_count(uint8_t timer_id, uint8_t counter_id, uint32_t value) "timer %u counter %u count 0x%" PRIx32
163
164
# bcm2835_systmr.c
165
-bcm2835_systmr_irq(bool enable) "timer irq state %u"
166
+bcm2835_systmr_timer_expired(unsigned id) "timer #%u expired"
167
+bcm2835_systmr_irq_ack(unsigned id) "timer #%u acked"
168
bcm2835_systmr_read(uint64_t offset, uint64_t data) "timer read: offset 0x%" PRIx64 " data 0x%" PRIx64
169
-bcm2835_systmr_write(uint64_t offset, uint64_t data) "timer write: offset 0x%" PRIx64 " data 0x%" PRIx64
170
+bcm2835_systmr_write(uint64_t offset, uint32_t data) "timer write: offset 0x%" PRIx64 " data 0x%" PRIx32
171
+bcm2835_systmr_run(unsigned id, uint64_t delay_us) "timer #%u expiring in %"PRIu64" us"
172
173
# avr_timer16.c
174
avr_timer16_read(uint8_t addr, uint8_t value) "timer16 read addr:%u value:%u"
175
--
215
--
176
2.20.1
216
2.34.1
177
217
178
218
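
The COMPARE handling added above relies on modulo-2^32 arithmetic: the guest programs a 32-bit
COMPAREn value, and the delay until the match is the unsigned difference between that value and
the current QEMU_CLOCK_VIRTUAL microsecond count (in this model one timer tick is one
microsecond). A standalone C sketch of that arithmetic, not part of the patch
(delay_until_match is a made-up helper used only for illustration), might look like:

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Ticks until the low 32 bits of the free-running counter equal 'compare'. */
    static uint32_t delay_until_match(uint32_t counter_lo, uint32_t compare)
    {
        return compare - counter_lo;   /* modulo-2^32 subtraction */
    }

    int main(void)
    {
        /* Compare value ahead of the counter: plain difference. */
        printf("%" PRIu32 " ticks\n", delay_until_match(1000, 1256));          /* 256 */
        /* Counter about to wrap: the unsigned subtraction still works. */
        printf("%" PRIu32 " ticks\n", delay_until_match(0xFFFFFF00u, 0x100u)); /* 512 */
        return 0;
    }

This is the quantity the 'triggers_delay_us = value - now' line in bcm2835_systmr_write()
computes before arming the per-channel QEMUTimer.
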
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Mostafa Saleh <smostafa@google.com>
2
2
3
For BTI, we need to know if the executable is static or dynamic,
3
The GBPA register can be used to globally abort all
4
which means looking for PT_INTERP earlier.
4
transactions.
5
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
It is described in the SMMU manual in "6.3.14 SMMU_GBPA".
7
Message-id: 20201016184207.786698-8-richard.henderson@linaro.org
7
The ABORT reset value is IMPLEMENTATION DEFINED; it is chosen to
8
be zero (do not abort incoming transactions).
9
10
Other fields have default values of Use Incoming.
11
12
If UPDATE is not set, the write is ignored. This is the only permitted
13
behavior in SMMUv3.2 and later (6.3.14.1 Update procedure).
14
15
As this patch adds a new state to the SMMU (GBPA), it is added
16
in a new subsection for forward migration compatibility.
17
GBPA is only migrated if its value is different from the reset value.
18
This keeps backward migration compatibility if SW didn't write
19
the register.
20
21
Signed-off-by: Mostafa Saleh <smostafa@google.com>
22
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
Reviewed-by: Eric Auger <eric.auger@redhat.com>
24
Message-id: 20230214094009.2445653-1-smostafa@google.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
27
---
11
linux-user/elfload.c | 60 +++++++++++++++++++++++---------------------
28
hw/arm/smmuv3-internal.h | 7 +++++++
12
1 file changed, 31 insertions(+), 29 deletions(-)
29
include/hw/arm/smmuv3.h | 1 +
30
hw/arm/smmuv3.c | 43 +++++++++++++++++++++++++++++++++++++++-
31
3 files changed, 50 insertions(+), 1 deletion(-)
13
32
14
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
33
diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
15
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/elfload.c
35
--- a/hw/arm/smmuv3-internal.h
17
+++ b/linux-user/elfload.c
36
+++ b/hw/arm/smmuv3-internal.h
18
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
37
@@ -XXX,XX +XXX,XX @@ REG32(CR0ACK, 0x24)
19
38
REG32(CR1, 0x28)
20
mmap_lock();
39
REG32(CR2, 0x2c)
21
40
REG32(STATUSR, 0x40)
22
- /* Find the maximum size of the image and allocate an appropriate
41
+REG32(GBPA, 0x44)
23
- amount of memory to handle that. */
42
+ FIELD(GBPA, ABORT, 20, 1)
24
+ /*
43
+ FIELD(GBPA, UPDATE, 31, 1)
25
+ * Find the maximum size of the image and allocate an appropriate
26
+ * amount of memory to handle that. Locate the interpreter, if any.
27
+ */
28
loaddr = -1, hiaddr = 0;
29
info->alignment = 0;
30
for (i = 0; i < ehdr->e_phnum; ++i) {
31
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
32
}
33
++info->nsegs;
34
info->alignment |= eppnt->p_align;
35
+ } else if (eppnt->p_type == PT_INTERP && pinterp_name) {
36
+ g_autofree char *interp_name = NULL;
37
+
44
+
38
+ if (*pinterp_name) {
45
+/* Use incoming. */
39
+ errmsg = "Multiple PT_INTERP entries";
46
+#define SMMU_GBPA_RESET_VAL 0x1000
40
+ goto exit_errmsg;
41
+ }
42
+ interp_name = g_malloc(eppnt->p_filesz);
43
+ if (!interp_name) {
44
+ goto exit_perror;
45
+ }
46
+
47
+
47
+ if (eppnt->p_offset + eppnt->p_filesz <= BPRM_BUF_SIZE) {
48
REG32(IRQ_CTRL, 0x50)
48
+ memcpy(interp_name, bprm_buf + eppnt->p_offset,
49
FIELD(IRQ_CTRL, GERROR_IRQEN, 0, 1)
49
+ eppnt->p_filesz);
50
FIELD(IRQ_CTRL, PRI_IRQEN, 1, 1)
50
+ } else {
51
diff --git a/include/hw/arm/smmuv3.h b/include/hw/arm/smmuv3.h
51
+ retval = pread(image_fd, interp_name, eppnt->p_filesz,
52
index XXXXXXX..XXXXXXX 100644
52
+ eppnt->p_offset);
53
--- a/include/hw/arm/smmuv3.h
53
+ if (retval != eppnt->p_filesz) {
54
+++ b/include/hw/arm/smmuv3.h
54
+ goto exit_perror;
55
@@ -XXX,XX +XXX,XX @@ struct SMMUv3State {
55
+ }
56
uint32_t cr[3];
56
+ }
57
uint32_t cr0ack;
57
+ if (interp_name[eppnt->p_filesz - 1] != 0) {
58
uint32_t statusr;
58
+ errmsg = "Invalid PT_INTERP entry";
59
+ uint32_t gbpa;
59
+ goto exit_errmsg;
60
uint32_t irq_ctrl;
60
+ }
61
uint32_t gerror;
61
+ *pinterp_name = g_steal_pointer(&interp_name);
62
uint32_t gerrorn;
62
}
63
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/hw/arm/smmuv3.c
66
+++ b/hw/arm/smmuv3.c
67
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
68
s->gerror = 0;
69
s->gerrorn = 0;
70
s->statusr = 0;
71
+ s->gbpa = SMMU_GBPA_RESET_VAL;
72
}
73
74
static int smmu_get_ste(SMMUv3State *s, dma_addr_t addr, STE *buf,
75
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
76
qemu_mutex_lock(&s->mutex);
77
78
if (!smmu_enabled(s)) {
79
- status = SMMU_TRANS_DISABLE;
80
+ if (FIELD_EX32(s->gbpa, GBPA, ABORT)) {
81
+ status = SMMU_TRANS_ABORT;
82
+ } else {
83
+ status = SMMU_TRANS_DISABLE;
84
+ }
85
goto epilogue;
63
}
86
}
64
87
65
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
88
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_writel(SMMUv3State *s, hwaddr offset,
66
if (vaddr_em > info->brk) {
89
case A_GERROR_IRQ_CFG2:
67
info->brk = vaddr_em;
90
s->gerror_irq_cfg2 = data;
68
}
91
return MEMTX_OK;
69
- } else if (eppnt->p_type == PT_INTERP && pinterp_name) {
92
+ case A_GBPA:
70
- g_autofree char *interp_name = NULL;
93
+ /*
71
-
94
+ * If UPDATE is not set, the write is ignored. This is the only
72
- if (*pinterp_name) {
95
+ * permitted behavior in SMMUv3.2 and later.
73
- errmsg = "Multiple PT_INTERP entries";
96
+ */
74
- goto exit_errmsg;
97
+ if (data & R_GBPA_UPDATE_MASK) {
75
- }
98
+ /* Ignore update bit as write is synchronous. */
76
- interp_name = g_malloc(eppnt->p_filesz);
99
+ s->gbpa = data & ~R_GBPA_UPDATE_MASK;
77
- if (!interp_name) {
100
+ }
78
- goto exit_perror;
101
+ return MEMTX_OK;
79
- }
102
case A_STRTAB_BASE: /* 64b */
80
-
103
s->strtab_base = deposit64(s->strtab_base, 0, 32, data);
81
- if (eppnt->p_offset + eppnt->p_filesz <= BPRM_BUF_SIZE) {
104
return MEMTX_OK;
82
- memcpy(interp_name, bprm_buf + eppnt->p_offset,
105
@@ -XXX,XX +XXX,XX @@ static MemTxResult smmu_readl(SMMUv3State *s, hwaddr offset,
83
- eppnt->p_filesz);
106
case A_STATUSR:
84
- } else {
107
*data = s->statusr;
85
- retval = pread(image_fd, interp_name, eppnt->p_filesz,
108
return MEMTX_OK;
86
- eppnt->p_offset);
109
+ case A_GBPA:
87
- if (retval != eppnt->p_filesz) {
110
+ *data = s->gbpa;
88
- goto exit_perror;
111
+ return MEMTX_OK;
89
- }
112
case A_IRQ_CTRL:
90
- }
113
case A_IRQ_CTRL_ACK:
91
- if (interp_name[eppnt->p_filesz - 1] != 0) {
114
*data = s->irq_ctrl;
92
- errmsg = "Invalid PT_INTERP entry";
115
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3_queue = {
93
- goto exit_errmsg;
116
},
94
- }
117
};
95
- *pinterp_name = g_steal_pointer(&interp_name);
118
96
#ifdef TARGET_MIPS
119
+static bool smmuv3_gbpa_needed(void *opaque)
97
} else if (eppnt->p_type == PT_MIPS_ABIFLAGS) {
120
+{
98
Mips_elf_abiflags_v0 abiflags;
121
+ SMMUv3State *s = opaque;
122
+
123
+ /* Only migrate GBPA if it has different reset value. */
124
+ return s->gbpa != SMMU_GBPA_RESET_VAL;
125
+}
126
+
127
+static const VMStateDescription vmstate_gbpa = {
128
+ .name = "smmuv3/gbpa",
129
+ .version_id = 1,
130
+ .minimum_version_id = 1,
131
+ .needed = smmuv3_gbpa_needed,
132
+ .fields = (VMStateField[]) {
133
+ VMSTATE_UINT32(gbpa, SMMUv3State),
134
+ VMSTATE_END_OF_LIST()
135
+ }
136
+};
137
+
138
static const VMStateDescription vmstate_smmuv3 = {
139
.name = "smmuv3",
140
.version_id = 1,
141
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_smmuv3 = {
142
143
VMSTATE_END_OF_LIST(),
144
},
145
+ .subsections = (const VMStateDescription * []) {
146
+ &vmstate_gbpa,
147
+ NULL
148
+ }
149
};
150
151
static void smmuv3_instance_init(Object *obj)
99
--
152
--
100
2.20.1
153
2.34.1
101
102
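
The "6.3.14.1 Update procedure" the commit message cites is the guest-visible side of this:
software writes GBPA with UPDATE set and then polls until UPDATE reads back as zero, at which
point the new bypass/abort configuration is in effect. A minimal, self-contained sketch of that
handshake (a toy model, not QEMU code; the bit positions are the FIELD() definitions added to
smmuv3-internal.h above, and the synchronous update mirrors the smmu_writel() case in the patch):

    #include <stdint.h>
    #include <stdio.h>

    #define GBPA_ABORT  (1u << 20)   /* FIELD(GBPA, ABORT, 20, 1) */
    #define GBPA_UPDATE (1u << 31)   /* FIELD(GBPA, UPDATE, 31, 1) */

    /* Toy device-side register, mimicking the synchronous write in the patch. */
    static uint32_t gbpa = 0x1000;   /* SMMU_GBPA_RESET_VAL: "use incoming" */

    static void gbpa_write(uint32_t val)
    {
        if (val & GBPA_UPDATE) {         /* writes without UPDATE are ignored */
            gbpa = val & ~GBPA_UPDATE;   /* update completes immediately here */
        }
    }

    int main(void)
    {
        /* Guest side: request global abort, then wait for UPDATE to clear. */
        gbpa_write(GBPA_UPDATE | GBPA_ABORT);
        while (gbpa & GBPA_UPDATE) {
            /* a real driver would bound this loop with a timeout */
        }
        printf("GBPA.ABORT = %d\n", !!(gbpa & GBPA_ABORT));
        return 0;
    }

With ABORT set while the SMMU is disabled, smmuv3_translate() now returns SMMU_TRANS_ABORT
instead of SMMU_TRANS_DISABLE, which is exactly the new branch added above.
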
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
While APEI is a generic ACPI feature (usable by X86 and ARM64), only
3
Since commit acc0b8b05a, when running the ZynqMP ZCU102 board with
4
the 'virt' machine uses it, by enabling RAS virtualization. See
4
a QEMU configured using --without-default-devices, we get:
5
commit 2afa8c8519 ("hw/arm/virt: Introduce a RAS machine option").
6
5
7
Restrict the APEI tables generation code to the single user: the virt
6
$ qemu-system-aarch64 -M xlnx-zcu102
8
machine. If another machine wants to use it, it simply has to 'select
7
qemu-system-aarch64: missing object type 'usb_dwc3'
9
ACPI_APEI' in its Kconfig.
8
Abort trap: 6
10
9
11
Fixes: aa16508f1d ("ACPI: Build related register address fields via hardware error fw_cfg blob")
10
Fix by adding the missing Kconfig dependency.
12
Acked-by: Michael S. Tsirkin <mst@redhat.com>
11
13
Reviewed-by: Dongjiu Geng <gengdongjiu@huawei.com>
12
Fixes: acc0b8b05a ("hw/arm/xlnx-zynqmp: Connect ZynqMP's USB controllers")
14
Acked-by: Laszlo Ersek <lersek@redhat.com>
13
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
15
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
14
Message-id: 20230216092327.2203-1-philmd@linaro.org
16
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
17
Message-id: 20201008161414.2672569-1-philmd@redhat.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
17
---
20
default-configs/devices/arm-softmmu.mak | 1 -
18
hw/arm/Kconfig | 1 +
21
hw/arm/Kconfig | 1 +
19
1 file changed, 1 insertion(+)
22
2 files changed, 1 insertion(+), 1 deletion(-)
23
20
24
diff --git a/default-configs/devices/arm-softmmu.mak b/default-configs/devices/arm-softmmu.mak
25
index XXXXXXX..XXXXXXX 100644
26
--- a/default-configs/devices/arm-softmmu.mak
27
+++ b/default-configs/devices/arm-softmmu.mak
28
@@ -XXX,XX +XXX,XX @@ CONFIG_FSL_IMX7=y
29
CONFIG_FSL_IMX6UL=y
30
CONFIG_SEMIHOSTING=y
31
CONFIG_ALLWINNER_H3=y
32
-CONFIG_ACPI_APEI=y
33
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
21
diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig
34
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/arm/Kconfig
23
--- a/hw/arm/Kconfig
36
+++ b/hw/arm/Kconfig
24
+++ b/hw/arm/Kconfig
37
@@ -XXX,XX +XXX,XX @@ config ARM_VIRT
25
@@ -XXX,XX +XXX,XX @@ config XLNX_ZYNQMP_ARM
38
select ACPI_MEMORY_HOTPLUG
26
select XLNX_CSU_DMA
39
select ACPI_HW_REDUCED
27
select XLNX_ZYNQMP
40
select ACPI_NVDIMM
28
select XLNX_ZDMA
41
+ select ACPI_APEI
29
+ select USB_DWC3
42
30
43
config CHEETAH
31
config XLNX_VERSAL
44
bool
32
bool
45
--
33
--
46
2.20.1
34
2.34.1
47
35
48
36
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Cornelia Huck <cohuck@redhat.com>
2
2
3
This is slightly clearer than just using strerror, though
3
Just use current_accel_name() directly.
4
the different forms produced by error_setg_file_open and
5
error_setg_errno aren't entirely convenient.
6
4
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Message-id: 20201016184207.786698-10-richard.henderson@linaro.org
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
linux-user/elfload.c | 15 ++++++++-------
10
hw/arm/virt.c | 6 +++---
13
1 file changed, 8 insertions(+), 7 deletions(-)
11
1 file changed, 3 insertions(+), 3 deletions(-)
14
12
15
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
13
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/linux-user/elfload.c
15
--- a/hw/arm/virt.c
18
+++ b/linux-user/elfload.c
16
+++ b/hw/arm/virt.c
19
@@ -XXX,XX +XXX,XX @@ static void load_elf_interp(const char *filename, struct image_info *info,
17
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
20
char bprm_buf[BPRM_BUF_SIZE])
18
if (vms->secure && (kvm_enabled() || hvf_enabled())) {
21
{
19
error_report("mach-virt: %s does not support providing "
22
int fd, retval;
20
"Security extensions (TrustZone) to the guest CPU",
23
+ Error *err = NULL;
21
- kvm_enabled() ? "KVM" : "HVF");
24
22
+ current_accel_name());
25
fd = open(path(filename), O_RDONLY);
23
exit(1);
26
if (fd < 0) {
27
- goto exit_perror;
28
+ error_setg_file_open(&err, errno, filename);
29
+ error_report_err(err);
30
+ exit(-1);
31
}
24
}
32
25
33
retval = read(fd, bprm_buf, BPRM_BUF_SIZE);
26
if (vms->virt && (kvm_enabled() || hvf_enabled())) {
34
if (retval < 0) {
27
error_report("mach-virt: %s does not support providing "
35
- goto exit_perror;
28
"Virtualization extensions to the guest CPU",
36
+ error_setg_errno(&err, errno, "Error reading file header");
29
- kvm_enabled() ? "KVM" : "HVF");
37
+ error_reportf_err(err, "%s: ", filename);
30
+ current_accel_name());
38
+ exit(-1);
31
exit(1);
39
}
32
}
40
+
33
41
if (retval < BPRM_BUF_SIZE) {
34
if (vms->mte && (kvm_enabled() || hvf_enabled())) {
42
memset(bprm_buf + retval, 0, BPRM_BUF_SIZE - retval);
35
error_report("mach-virt: %s does not support providing "
36
"MTE to the guest CPU",
37
- kvm_enabled() ? "KVM" : "HVF");
38
+ current_accel_name());
39
exit(1);
43
}
40
}
44
41
45
load_elf_image(filename, fd, info, NULL, bprm_buf);
46
- return;
47
-
48
- exit_perror:
49
- fprintf(stderr, "%s: %s\n", filename, strerror(errno));
50
- exit(-1);
51
}
52
53
static int symfind(const void *s0, const void *s1)
54
--
42
--
55
2.20.1
43
2.34.1
56
57
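
The "different forms" being referred to come from the two error paths the patch adds:
error_setg_file_open() bakes the file name into the message itself, while the
error_setg_errno() plus error_reportf_err("%s: ", filename) pair ends up with the file name as
a prefix instead. The exact wording below is only an illustration built with plain strerror(),
not a verbatim copy of what the QEMU error API prints:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *filename = "vmlinux";   /* example value, not from the patch */

        /* Roughly the shape from error_setg_file_open() + error_report_err(). */
        fprintf(stderr, "Could not open '%s': %s\n", filename, strerror(ENOENT));

        /* Roughly the shape from error_setg_errno() + error_reportf_err("%s: ", ...). */
        fprintf(stderr, "%s: Error reading file header: %s\n",
                filename, strerror(EIO));
        return 0;
    }

Having the file name appear in two different positions is the inconvenience the commit message
alludes to, but both forms are still clearer than the bare strerror() output they replace.
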
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Hao Wu <wuhaotsh@google.com>
2
2
3
Fixing this now will clarify following patches.
3
Havard has not been working on the Nuvoton systems for a while
4
and won't be able to do any work on them in the future. So I'll
5
take over maintaining the Nuvoton systems from him.
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Hao Wu <wuhaotsh@google.com>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Acked-by: Havard Skinnemoen <hskinnemoen@google.com>
7
Message-id: 20201016184207.786698-6-richard.henderson@linaro.org
9
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
10
Message-id: 20230208235433.3989937-2-wuhaotsh@google.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
12
---
10
linux-user/elfload.c | 12 +++++++++---
13
MAINTAINERS | 2 +-
11
1 file changed, 9 insertions(+), 3 deletions(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
12
15
13
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
16
diff --git a/MAINTAINERS b/MAINTAINERS
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/linux-user/elfload.c
18
--- a/MAINTAINERS
16
+++ b/linux-user/elfload.c
19
+++ b/MAINTAINERS
17
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
20
@@ -XXX,XX +XXX,XX @@ F: include/hw/net/mv88w8618_eth.h
18
abi_ulong vaddr, vaddr_po, vaddr_ps, vaddr_ef, vaddr_em, vaddr_len;
21
F: docs/system/arm/musicpal.rst
19
int elf_prot = 0;
22
20
23
Nuvoton NPCM7xx
21
- if (eppnt->p_flags & PF_R) elf_prot = PROT_READ;
24
-M: Havard Skinnemoen <hskinnemoen@google.com>
22
- if (eppnt->p_flags & PF_W) elf_prot |= PROT_WRITE;
25
M: Tyrone Ting <kfting@nuvoton.com>
23
- if (eppnt->p_flags & PF_X) elf_prot |= PROT_EXEC;
26
+M: Hao Wu <wuhaotsh@google.com>
24
+ if (eppnt->p_flags & PF_R) {
27
L: qemu-arm@nongnu.org
25
+ elf_prot |= PROT_READ;
28
S: Supported
26
+ }
29
F: hw/*/npcm7xx*
27
+ if (eppnt->p_flags & PF_W) {
28
+ elf_prot |= PROT_WRITE;
29
+ }
30
+ if (eppnt->p_flags & PF_X) {
31
+ elf_prot |= PROT_EXEC;
32
+ }
33
34
vaddr = load_bias + eppnt->p_vaddr;
35
vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
36
--
30
--
37
2.20.1
31
2.34.1
38
39
1
From: Havard Skinnemoen <hskinnemoen@google.com>
1
From: Hao Wu <wuhaotsh@google.com>
2
2
3
This test exercises the various modes of the npcm7xx timer. In
3
Nuvoton's PSPI is a general purpose SPI module which enables
4
particular, it triggers the bug found by the fuzzer, as reported here:
4
connections to SPI-based peripheral devices.
5
5
6
https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg02992.html
6
Signed-off-by: Hao Wu <wuhaotsh@google.com>
7
7
Reviewed-by: Chris Rauer <crauer@google.com>
8
It also found several other bugs, especially related to interrupt
8
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
9
handling.
9
Message-id: 20230208235433.3989937-3-wuhaotsh@google.com
10
11
The test exercises all the timers in all the timer modules, which
12
expands to 180 test cases in total.
13
14
Reviewed-by: Tyrone Ting <kfting@nuvoton.com>
15
Signed-off-by: Havard Skinnemoen <hskinnemoen@google.com>
16
Message-id: 20201008232154.94221-2-hskinnemoen@google.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
11
---
19
tests/qtest/npcm7xx_timer-test.c | 562 +++++++++++++++++++++++++++++++
12
MAINTAINERS | 6 +-
20
tests/qtest/meson.build | 1 +
13
include/hw/ssi/npcm_pspi.h | 53 +++++++++
21
2 files changed, 563 insertions(+)
14
hw/ssi/npcm_pspi.c | 221 +++++++++++++++++++++++++++++++++++++
22
create mode 100644 tests/qtest/npcm7xx_timer-test.c
15
hw/ssi/meson.build | 2 +-
16
hw/ssi/trace-events | 5 +
17
5 files changed, 283 insertions(+), 4 deletions(-)
18
create mode 100644 include/hw/ssi/npcm_pspi.h
19
create mode 100644 hw/ssi/npcm_pspi.c
23
20
24
diff --git a/tests/qtest/npcm7xx_timer-test.c b/tests/qtest/npcm7xx_timer-test.c
21
diff --git a/MAINTAINERS b/MAINTAINERS
22
index XXXXXXX..XXXXXXX 100644
23
--- a/MAINTAINERS
24
+++ b/MAINTAINERS
25
@@ -XXX,XX +XXX,XX @@ M: Tyrone Ting <kfting@nuvoton.com>
26
M: Hao Wu <wuhaotsh@google.com>
27
L: qemu-arm@nongnu.org
28
S: Supported
29
-F: hw/*/npcm7xx*
30
-F: include/hw/*/npcm7xx*
31
-F: tests/qtest/npcm7xx*
32
+F: hw/*/npcm*
33
+F: include/hw/*/npcm*
34
+F: tests/qtest/npcm*
35
F: pc-bios/npcm7xx_bootrom.bin
36
F: roms/vbootrom
37
F: docs/system/arm/nuvoton.rst
38
diff --git a/include/hw/ssi/npcm_pspi.h b/include/hw/ssi/npcm_pspi.h
25
new file mode 100644
39
new file mode 100644
26
index XXXXXXX..XXXXXXX
40
index XXXXXXX..XXXXXXX
27
--- /dev/null
41
--- /dev/null
28
+++ b/tests/qtest/npcm7xx_timer-test.c
42
+++ b/include/hw/ssi/npcm_pspi.h
29
@@ -XXX,XX +XXX,XX @@
43
@@ -XXX,XX +XXX,XX @@
30
+/*
44
+/*
31
+ * QTest testcase for the Nuvoton NPCM7xx Timer
45
+ * Nuvoton Peripheral SPI Module
32
+ *
46
+ *
33
+ * Copyright 2020 Google LLC
47
+ * Copyright 2023 Google LLC
34
+ *
48
+ *
35
+ * This program is free software; you can redistribute it and/or modify it
49
+ * This program is free software; you can redistribute it and/or modify it
36
+ * under the terms of the GNU General Public License as published by the
50
+ * under the terms of the GNU General Public License as published by the
37
+ * Free Software Foundation; either version 2 of the License, or
51
+ * Free Software Foundation; either version 2 of the License, or
38
+ * (at your option) any later version.
52
+ * (at your option) any later version.
39
+ *
53
+ *
40
+ * This program is distributed in the hope that it will be useful, but WITHOUT
54
+ * This program is distributed in the hope that it will be useful, but WITHOUT
41
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
55
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
42
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
56
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
43
+ * for more details.
57
+ * for more details.
44
+ */
58
+ */
59
+#ifndef NPCM_PSPI_H
60
+#define NPCM_PSPI_H
61
+
62
+#include "hw/ssi/ssi.h"
63
+#include "hw/sysbus.h"
64
+
65
+/*
66
+ * Number of registers in our device state structure. Don't change this without
67
+ * incrementing the version_id in the vmstate.
68
+ */
69
+#define NPCM_PSPI_NR_REGS 3
70
+
71
+/**
72
+ * NPCMPSPIState - Device state for one Peripheral SPI Module.
73
+ * @parent: System bus device.
74
+ * @mmio: Memory region for register access.
75
+ * @spi: The SPI bus mastered by this controller.
76
+ * @regs: Register contents.
77
+ * @irq: The interrupt request queue for this module.
78
+ *
79
+ * Each PSPI has a small bank of control registers and masters a single SPI
+ * bus; data written to the DATA register is shifted out on that bus and the
+ * byte(s) received in exchange are latched back into the DATA register.
82
+ */
83
+typedef struct NPCMPSPIState {
84
+ SysBusDevice parent;
85
+
86
+ MemoryRegion mmio;
87
+
88
+ SSIBus *spi;
89
+ uint16_t regs[NPCM_PSPI_NR_REGS];
90
+ qemu_irq irq;
91
+} NPCMPSPIState;
92
+
93
+#define TYPE_NPCM_PSPI "npcm-pspi"
94
+OBJECT_DECLARE_SIMPLE_TYPE(NPCMPSPIState, NPCM_PSPI)
95
+
96
+#endif /* NPCM_PSPI_H */
97
diff --git a/hw/ssi/npcm_pspi.c b/hw/ssi/npcm_pspi.c
98
new file mode 100644
99
index XXXXXXX..XXXXXXX
100
--- /dev/null
101
+++ b/hw/ssi/npcm_pspi.c
102
@@ -XXX,XX +XXX,XX @@
103
+/*
104
+ * Nuvoton NPCM Peripheral SPI Module (PSPI)
105
+ *
106
+ * Copyright 2023 Google LLC
107
+ *
108
+ * This program is free software; you can redistribute it and/or modify it
109
+ * under the terms of the GNU General Public License as published by the
110
+ * Free Software Foundation; either version 2 of the License, or
111
+ * (at your option) any later version.
112
+ *
113
+ * This program is distributed in the hope that it will be useful, but WITHOUT
114
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
115
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
116
+ * for more details.
117
+ */
45
+
118
+
46
+#include "qemu/osdep.h"
119
+#include "qemu/osdep.h"
47
+#include "qemu/timer.h"
120
+
48
+#include "libqtest-single.h"
121
+#include "hw/irq.h"
49
+
122
+#include "hw/registerfields.h"
50
+#define TIM_REF_HZ (25000000)
123
+#include "hw/ssi/npcm_pspi.h"
51
+
124
+#include "migration/vmstate.h"
52
+/* Bits in TCSRx */
125
+#include "qapi/error.h"
53
+#define CEN BIT(30)
126
+#include "qemu/error-report.h"
54
+#define IE BIT(29)
127
+#include "qemu/log.h"
55
+#define MODE_ONESHOT (0 << 27)
128
+#include "qemu/module.h"
56
+#define MODE_PERIODIC (1 << 27)
129
+#include "qemu/units.h"
57
+#define CRST BIT(26)
130
+
58
+#define CACT BIT(25)
131
+#include "trace.h"
59
+#define PRESCALE(x) (x)
132
+
60
+
133
+REG16(PSPI_DATA, 0x0)
61
+/* Registers shared between all timers in a module. */
134
+REG16(PSPI_CTL1, 0x2)
62
+#define TISR 0x18
135
+ FIELD(PSPI_CTL1, SPIEN, 0, 1)
63
+#define WTCR 0x1c
136
+ FIELD(PSPI_CTL1, MOD, 2, 1)
64
+# define WTCLK(x) ((x) << 10)
137
+ FIELD(PSPI_CTL1, EIR, 5, 1)
65
+
138
+ FIELD(PSPI_CTL1, EIW, 6, 1)
66
+/* Power-on default; used to re-initialize timers before each test. */
139
+ FIELD(PSPI_CTL1, SCM, 7, 1)
67
+#define TCSR_DEFAULT PRESCALE(5)
140
+ FIELD(PSPI_CTL1, SCIDL, 8, 1)
68
+
141
+ FIELD(PSPI_CTL1, SCDV, 9, 7)
69
+/* Register offsets for a timer within a timer block. */
142
+REG16(PSPI_STAT, 0x4)
70
+typedef struct Timer {
143
+ FIELD(PSPI_STAT, BSY, 0, 1)
71
+ unsigned int tcsr_offset;
144
+ FIELD(PSPI_STAT, RBF, 1, 1)
72
+ unsigned int ticr_offset;
145
+
73
+ unsigned int tdr_offset;
146
+static void npcm_pspi_update_irq(NPCMPSPIState *s)
74
+} Timer;
147
+{
75
+
148
+ int level = 0;
76
+/* A timer block containing 5 timers. */
149
+
77
+typedef struct TimerBlock {
150
+ /* Only fire IRQ when the module is enabled. */
78
+ int irq_base;
151
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, SPIEN)) {
79
+ uint64_t base_addr;
152
+ /* Update interrupt as BSY is cleared. */
80
+} TimerBlock;
153
+ if ((!FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, BSY)) &&
81
+
154
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIW)) {
82
+/* Testdata for testing a particular timer within a timer block. */
155
+ level = 1;
83
+typedef struct TestData {
156
+ }
84
+ const TimerBlock *tim;
157
+
85
+ const Timer *timer;
158
+ /* Update interrupt as RBF is set. */
86
+} TestData;
159
+ if (FIELD_EX16(s->regs[R_PSPI_STAT], PSPI_STAT, RBF) &&
87
+
160
+ FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, EIR)) {
88
+const TimerBlock timer_block[] = {
161
+ level = 1;
89
+ {
162
+ }
90
+ .irq_base = 32,
163
+ }
91
+ .base_addr = 0xf0008000,
164
+ qemu_set_irq(s->irq, level);
165
+}
166
+
167
+static uint16_t npcm_pspi_read_data(NPCMPSPIState *s)
168
+{
169
+ uint16_t value = s->regs[R_PSPI_DATA];
170
+
171
+ /* Clear stat bits as the value is read out. */
172
+ s->regs[R_PSPI_STAT] = 0;
173
+
174
+ return value;
175
+}
176
+
177
+static void npcm_pspi_write_data(NPCMPSPIState *s, uint16_t data)
178
+{
179
+ uint16_t value = 0;
180
+
181
+ if (FIELD_EX16(s->regs[R_PSPI_CTL1], PSPI_CTL1, MOD)) {
182
+ value = ssi_transfer(s->spi, extract16(data, 8, 8)) << 8;
183
+ }
184
+ value |= ssi_transfer(s->spi, extract16(data, 0, 8));
185
+ s->regs[R_PSPI_DATA] = value;
186
+
187
+ /* Mark data as available */
188
+ s->regs[R_PSPI_STAT] = R_PSPI_STAT_BSY_MASK | R_PSPI_STAT_RBF_MASK;
189
+}
190
+
191
+/* Control register read handler. */
192
+static uint64_t npcm_pspi_ctrl_read(void *opaque, hwaddr addr,
193
+ unsigned int size)
194
+{
195
+ NPCMPSPIState *s = opaque;
196
+ uint16_t value;
197
+
198
+ switch (addr) {
199
+ case A_PSPI_DATA:
200
+ value = npcm_pspi_read_data(s);
201
+ break;
202
+
203
+ case A_PSPI_CTL1:
204
+ value = s->regs[R_PSPI_CTL1];
205
+ break;
206
+
207
+ case A_PSPI_STAT:
208
+ value = s->regs[R_PSPI_STAT];
209
+ break;
210
+
211
+ default:
212
+ qemu_log_mask(LOG_GUEST_ERROR,
213
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
214
+ DEVICE(s)->canonical_path, addr);
215
+ return 0;
216
+ }
217
+ trace_npcm_pspi_ctrl_read(DEVICE(s)->canonical_path, addr, value);
218
+ npcm_pspi_update_irq(s);
219
+
220
+ return value;
221
+}
222
+
223
+/* Control register write handler. */
224
+static void npcm_pspi_ctrl_write(void *opaque, hwaddr addr, uint64_t v,
225
+ unsigned int size)
226
+{
227
+ NPCMPSPIState *s = opaque;
228
+ uint16_t value = v;
229
+
230
+ trace_npcm_pspi_ctrl_write(DEVICE(s)->canonical_path, addr, value);
231
+
232
+ switch (addr) {
233
+ case A_PSPI_DATA:
234
+ npcm_pspi_write_data(s, value);
235
+ break;
236
+
237
+ case A_PSPI_CTL1:
238
+ s->regs[R_PSPI_CTL1] = value;
239
+ break;
240
+
241
+ case A_PSPI_STAT:
242
+ qemu_log_mask(LOG_GUEST_ERROR,
243
+ "%s: write to read-only register PSPI_STAT: 0x%08"
244
+ PRIx64 "\n", DEVICE(s)->canonical_path, v);
245
+ break;
246
+
247
+ default:
248
+ qemu_log_mask(LOG_GUEST_ERROR,
249
+ "%s: write to invalid offset 0x%" PRIx64 "\n",
250
+ DEVICE(s)->canonical_path, addr);
251
+ return;
252
+ }
253
+ npcm_pspi_update_irq(s);
254
+}
255
+
256
+static const MemoryRegionOps npcm_pspi_ctrl_ops = {
257
+ .read = npcm_pspi_ctrl_read,
258
+ .write = npcm_pspi_ctrl_write,
259
+ .endianness = DEVICE_LITTLE_ENDIAN,
260
+ .valid = {
261
+ .min_access_size = 1,
262
+ .max_access_size = 2,
263
+ .unaligned = false,
92
+ },
264
+ },
93
+ {
265
+ .impl = {
94
+ .irq_base = 37,
266
+ .min_access_size = 2,
95
+ .base_addr = 0xf0009000,
267
+ .max_access_size = 2,
96
+ },
268
+ .unaligned = false,
97
+ {
98
+ .irq_base = 42,
99
+ .base_addr = 0xf000a000,
100
+ },
269
+ },
101
+};
270
+};
102
+
271
+
103
+const Timer timer[] = {
272
+static void npcm_pspi_enter_reset(Object *obj, ResetType type)
104
+ {
273
+{
105
+ .tcsr_offset = 0x00,
274
+ NPCMPSPIState *s = NPCM_PSPI(obj);
106
+ .ticr_offset = 0x08,
275
+
107
+ .tdr_offset = 0x10,
276
+ trace_npcm_pspi_enter_reset(DEVICE(obj)->canonical_path, type);
108
+ }, {
277
+ memset(s->regs, 0, sizeof(s->regs));
109
+ .tcsr_offset = 0x04,
278
+}
110
+ .ticr_offset = 0x0c,
279
+
111
+ .tdr_offset = 0x14,
280
+static void npcm_pspi_realize(DeviceState *dev, Error **errp)
112
+ }, {
281
+{
113
+ .tcsr_offset = 0x20,
282
+ NPCMPSPIState *s = NPCM_PSPI(dev);
114
+ .ticr_offset = 0x28,
283
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
115
+ .tdr_offset = 0x30,
284
+ Object *obj = OBJECT(dev);
116
+ }, {
285
+
117
+ .tcsr_offset = 0x24,
286
+ s->spi = ssi_create_bus(dev, "pspi");
118
+ .ticr_offset = 0x2c,
287
+ memory_region_init_io(&s->mmio, obj, &npcm_pspi_ctrl_ops, s,
119
+ .tdr_offset = 0x34,
288
+ "mmio", 4 * KiB);
120
+ }, {
289
+ sysbus_init_mmio(sbd, &s->mmio);
121
+ .tcsr_offset = 0x40,
290
+ sysbus_init_irq(sbd, &s->irq);
122
+ .ticr_offset = 0x48,
291
+}
123
+ .tdr_offset = 0x50,
292
+
293
+static const VMStateDescription vmstate_npcm_pspi = {
294
+ .name = "npcm-pspi",
295
+ .version_id = 0,
296
+ .minimum_version_id = 0,
297
+ .fields = (VMStateField[]) {
298
+ VMSTATE_UINT16_ARRAY(regs, NPCMPSPIState, NPCM_PSPI_NR_REGS),
299
+ VMSTATE_END_OF_LIST(),
124
+ },
300
+ },
125
+};
301
+};
126
+
302
+
127
+/* Returns the index of the timer block. */
303
+
128
+static int tim_index(const TimerBlock *tim)
304
+static void npcm_pspi_class_init(ObjectClass *klass, void *data)
129
+{
305
+{
130
+ ptrdiff_t diff = tim - timer_block;
306
+ ResettableClass *rc = RESETTABLE_CLASS(klass);
131
+
307
+ DeviceClass *dc = DEVICE_CLASS(klass);
132
+ g_assert(diff >= 0 && diff < ARRAY_SIZE(timer_block));
308
+
133
+
309
+ dc->desc = "NPCM Peripheral SPI Module";
134
+ return diff;
310
+ dc->realize = npcm_pspi_realize;
135
+}
311
+ dc->vmsd = &vmstate_npcm_pspi;
136
+
312
+ rc->phases.enter = npcm_pspi_enter_reset;
137
+/* Returns the index of a timer within a timer block. */
313
+}
138
+static int timer_index(const Timer *t)
314
+
139
+{
315
+static const TypeInfo npcm_pspi_types[] = {
140
+ ptrdiff_t diff = t - timer;
316
+ {
141
+
317
+ .name = TYPE_NPCM_PSPI,
142
+ g_assert(diff >= 0 && diff < ARRAY_SIZE(timer));
318
+ .parent = TYPE_SYS_BUS_DEVICE,
143
+
319
+ .instance_size = sizeof(NPCMPSPIState),
144
+ return diff;
320
+ .class_init = npcm_pspi_class_init,
145
+}
321
+ },
146
+
322
+};
147
+/* Returns the irq line for a given timer. */
323
+DEFINE_TYPES(npcm_pspi_types);
148
+static int tim_timer_irq(const TestData *td)
324
diff --git a/hw/ssi/meson.build b/hw/ssi/meson.build
149
+{
150
+ return td->tim->irq_base + timer_index(td->timer);
151
+}
152
+
153
+/* Register read/write accessors. */
154
+
155
+static void tim_write(const TestData *td,
156
+ unsigned int offset, uint32_t value)
157
+{
158
+ writel(td->tim->base_addr + offset, value);
159
+}
160
+
161
+static uint32_t tim_read(const TestData *td, unsigned int offset)
162
+{
163
+ return readl(td->tim->base_addr + offset);
164
+}
165
+
166
+static void tim_write_tcsr(const TestData *td, uint32_t value)
167
+{
168
+ tim_write(td, td->timer->tcsr_offset, value);
169
+}
170
+
171
+static uint32_t tim_read_tcsr(const TestData *td)
172
+{
173
+ return tim_read(td, td->timer->tcsr_offset);
174
+}
175
+
176
+static void tim_write_ticr(const TestData *td, uint32_t value)
177
+{
178
+ tim_write(td, td->timer->ticr_offset, value);
179
+}
180
+
181
+static uint32_t tim_read_ticr(const TestData *td)
182
+{
183
+ return tim_read(td, td->timer->ticr_offset);
184
+}
185
+
186
+static uint32_t tim_read_tdr(const TestData *td)
187
+{
188
+ return tim_read(td, td->timer->tdr_offset);
189
+}
190
+
191
+/* Returns the number of nanoseconds to count the given number of cycles. */
192
+static int64_t tim_calculate_step(uint32_t count, uint32_t prescale)
193
+{
194
+ return (1000000000LL / TIM_REF_HZ) * count * (prescale + 1);
195
+}
196
+
197
+/* Returns a bitmask corresponding to the timer under test. */
198
+static uint32_t tim_timer_bit(const TestData *td)
199
+{
200
+ return BIT(timer_index(td->timer));
201
+}
202
+
203
+/* Resets all timers to power-on defaults. */
204
+static void tim_reset(const TestData *td)
205
+{
206
+ int i, j;
207
+
208
+ /* Reset all the timers, in case a previous test left a timer running. */
209
+ for (i = 0; i < ARRAY_SIZE(timer_block); i++) {
210
+ for (j = 0; j < ARRAY_SIZE(timer); j++) {
211
+ writel(timer_block[i].base_addr + timer[j].tcsr_offset,
212
+ CRST | TCSR_DEFAULT);
213
+ }
214
+ writel(timer_block[i].base_addr + TISR, -1);
215
+ }
216
+}
217
+
218
+/* Verifies the reset state of a timer. */
219
+static void test_reset(gconstpointer test_data)
220
+{
221
+ const TestData *td = test_data;
222
+
223
+ tim_reset(td);
224
+
225
+ g_assert_cmphex(tim_read_tcsr(td), ==, TCSR_DEFAULT);
226
+ g_assert_cmphex(tim_read_ticr(td), ==, 0);
227
+ g_assert_cmphex(tim_read_tdr(td), ==, 0);
228
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
229
+ g_assert_cmphex(tim_read(td, WTCR), ==, WTCLK(1));
230
+}
231
+
232
+/* Verifies that CRST wins if both CEN and CRST are set. */
233
+static void test_reset_overrides_enable(gconstpointer test_data)
234
+{
235
+ const TestData *td = test_data;
236
+
237
+ tim_reset(td);
238
+
239
+ /* CRST should force CEN to 0 */
240
+ tim_write_tcsr(td, CEN | CRST | TCSR_DEFAULT);
241
+
242
+ g_assert_cmphex(tim_read_tcsr(td), ==, TCSR_DEFAULT);
243
+ g_assert_cmphex(tim_read_tdr(td), ==, 0);
244
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
245
+}
246
+
247
+/* Verifies the behavior when CEN is set and then cleared. */
248
+static void test_oneshot_enable_then_disable(gconstpointer test_data)
249
+{
250
+ const TestData *td = test_data;
251
+
252
+ tim_reset(td);
253
+
254
+ /* Enable the timer with zero initial count, then disable it again. */
255
+ tim_write_tcsr(td, CEN | TCSR_DEFAULT);
256
+ tim_write_tcsr(td, TCSR_DEFAULT);
257
+
258
+ g_assert_cmphex(tim_read_tcsr(td), ==, TCSR_DEFAULT);
259
+ g_assert_cmphex(tim_read_tdr(td), ==, 0);
260
+ /* Timer interrupt flag should be set, but interrupts are not enabled. */
261
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
262
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
263
+}
264
+
265
+/* Verifies that a one-shot timer fires when expected with prescaler 5. */
266
+static void test_oneshot_ps5(gconstpointer test_data)
267
+{
268
+ const TestData *td = test_data;
269
+ unsigned int count = 256;
270
+ unsigned int ps = 5;
271
+
272
+ tim_reset(td);
273
+
274
+ tim_write_ticr(td, count);
275
+ tim_write_tcsr(td, CEN | PRESCALE(ps));
276
+ g_assert_cmphex(tim_read_tcsr(td), ==, CEN | CACT | PRESCALE(ps));
277
+ g_assert_cmpuint(tim_read_tdr(td), ==, count);
278
+
279
+ clock_step(tim_calculate_step(count, ps) - 1);
280
+
281
+ g_assert_cmphex(tim_read_tcsr(td), ==, CEN | CACT | PRESCALE(ps));
282
+ g_assert_cmpuint(tim_read_tdr(td), <, count);
283
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
284
+
285
+ clock_step(1);
286
+
287
+ g_assert_cmphex(tim_read_tcsr(td), ==, PRESCALE(ps));
288
+ g_assert_cmpuint(tim_read_tdr(td), ==, count);
289
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
290
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
291
+
292
+ /* Clear the interrupt flag. */
293
+ tim_write(td, TISR, tim_timer_bit(td));
294
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
295
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
296
+
297
+ /* Verify that this isn't a periodic timer. */
298
+ clock_step(2 * tim_calculate_step(count, ps));
299
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
300
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
301
+}
302
+
303
+/* Verifies that a one-shot timer fires when expected with prescaler 0. */
304
+static void test_oneshot_ps0(gconstpointer test_data)
305
+{
306
+ const TestData *td = test_data;
307
+ unsigned int count = 1;
308
+ unsigned int ps = 0;
309
+
310
+ tim_reset(td);
311
+
312
+ tim_write_ticr(td, count);
313
+ tim_write_tcsr(td, CEN | PRESCALE(ps));
314
+ g_assert_cmphex(tim_read_tcsr(td), ==, CEN | CACT | PRESCALE(ps));
315
+ g_assert_cmpuint(tim_read_tdr(td), ==, count);
316
+
317
+ clock_step(tim_calculate_step(count, ps) - 1);
318
+
319
+ g_assert_cmphex(tim_read_tcsr(td), ==, CEN | CACT | PRESCALE(ps));
320
+ g_assert_cmpuint(tim_read_tdr(td), <, count);
321
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
322
+
323
+ clock_step(1);
324
+
325
+ g_assert_cmphex(tim_read_tcsr(td), ==, PRESCALE(ps));
326
+ g_assert_cmpuint(tim_read_tdr(td), ==, count);
327
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
328
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
329
+}
330
+
331
+/* Verifies that a one-shot timer fires when expected with highest prescaler. */
332
+static void test_oneshot_ps255(gconstpointer test_data)
333
+{
334
+ const TestData *td = test_data;
335
+ unsigned int count = (1U << 24) - 1;
336
+ unsigned int ps = 255;
337
+
338
+ tim_reset(td);
339
+
340
+ tim_write_ticr(td, count);
341
+ tim_write_tcsr(td, CEN | PRESCALE(ps));
342
+ g_assert_cmphex(tim_read_tcsr(td), ==, CEN | CACT | PRESCALE(ps));
343
+ g_assert_cmpuint(tim_read_tdr(td), ==, count);
344
+
345
+ clock_step(tim_calculate_step(count, ps) - 1);
346
+
347
+ g_assert_cmphex(tim_read_tcsr(td), ==, CEN | CACT | PRESCALE(ps));
348
+ g_assert_cmpuint(tim_read_tdr(td), <, count);
349
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
350
+
351
+ clock_step(1);
352
+
353
+ g_assert_cmphex(tim_read_tcsr(td), ==, PRESCALE(ps));
354
+ g_assert_cmpuint(tim_read_tdr(td), ==, count);
355
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
356
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
357
+}
358
+
359
+/* Verifies that a oneshot timer fires an interrupt when expected. */
360
+static void test_oneshot_interrupt(gconstpointer test_data)
361
+{
362
+ const TestData *td = test_data;
363
+ unsigned int count = 256;
364
+ unsigned int ps = 7;
365
+
366
+ tim_reset(td);
367
+
368
+ tim_write_ticr(td, count);
369
+ tim_write_tcsr(td, IE | CEN | MODE_ONESHOT | PRESCALE(ps));
370
+
371
+ clock_step_next();
372
+
373
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
374
+ g_assert_true(qtest_get_irq(global_qtest, tim_timer_irq(td)));
375
+}
376
+
377
+/*
378
+ * Verifies that the timer can be paused and later resumed, and it still fires
379
+ * at the right moment.
380
+ */
381
+static void test_pause_resume(gconstpointer test_data)
382
+{
383
+ const TestData *td = test_data;
384
+ unsigned int count = 256;
385
+ unsigned int ps = 1;
386
+
387
+ tim_reset(td);
388
+
389
+ tim_write_ticr(td, count);
390
+ tim_write_tcsr(td, IE | CEN | MODE_ONESHOT | PRESCALE(ps));
391
+
392
+ /* Pause the timer halfway to expiration. */
393
+ clock_step(tim_calculate_step(count / 2, ps));
394
+ tim_write_tcsr(td, IE | MODE_ONESHOT | PRESCALE(ps));
395
+ g_assert_cmpuint(tim_read_tdr(td), ==, count / 2);
396
+
397
+ /* Counter should not advance during the following step. */
398
+ clock_step(2 * tim_calculate_step(count, ps));
399
+ g_assert_cmpuint(tim_read_tdr(td), ==, count / 2);
400
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
401
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
402
+
403
+ /* Resume the timer and run _almost_ to expiration. */
404
+ tim_write_tcsr(td, IE | CEN | MODE_ONESHOT | PRESCALE(ps));
405
+ clock_step(tim_calculate_step(count / 2, ps) - 1);
406
+ g_assert_cmpuint(tim_read_tdr(td), <, count);
407
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
408
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
409
+
410
+ /* Now, run the rest of the way and verify that the interrupt fires. */
411
+ clock_step(1);
412
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
413
+ g_assert_true(qtest_get_irq(global_qtest, tim_timer_irq(td)));
414
+}
415
+
416
+/* Verifies that the prescaler can be changed while the timer is running. */
417
+static void test_prescaler_change(gconstpointer test_data)
418
+{
419
+ const TestData *td = test_data;
420
+ unsigned int count = 256;
421
+ unsigned int ps = 5;
422
+
423
+ tim_reset(td);
424
+
425
+ tim_write_ticr(td, count);
426
+ tim_write_tcsr(td, CEN | MODE_ONESHOT | PRESCALE(ps));
427
+
428
+ /* Run a quarter of the way, and change the prescaler. */
429
+ clock_step(tim_calculate_step(count / 4, ps));
430
+ g_assert_cmpuint(tim_read_tdr(td), ==, 3 * count / 4);
431
+ ps = 2;
432
+ tim_write_tcsr(td, CEN | MODE_ONESHOT | PRESCALE(ps));
433
+ /* The counter must not change. */
434
+ g_assert_cmpuint(tim_read_tdr(td), ==, 3 * count / 4);
435
+
436
+ /* Run another quarter of the way, and change the prescaler again. */
437
+ clock_step(tim_calculate_step(count / 4, ps));
438
+ g_assert_cmpuint(tim_read_tdr(td), ==, count / 2);
439
+ ps = 8;
440
+ tim_write_tcsr(td, CEN | MODE_ONESHOT | PRESCALE(ps));
441
+ /* The counter must not change. */
442
+ g_assert_cmpuint(tim_read_tdr(td), ==, count / 2);
443
+
444
+ /* Run another quarter of the way, and change the prescaler again. */
445
+ clock_step(tim_calculate_step(count / 4, ps));
446
+ g_assert_cmpuint(tim_read_tdr(td), ==, count / 4);
447
+ ps = 0;
448
+ tim_write_tcsr(td, CEN | MODE_ONESHOT | PRESCALE(ps));
449
+ /* The counter must not change. */
450
+ g_assert_cmpuint(tim_read_tdr(td), ==, count / 4);
451
+
452
+ /* Run almost to expiration, and verify the timer didn't fire yet. */
453
+ clock_step(tim_calculate_step(count / 4, ps) - 1);
454
+ g_assert_cmpuint(tim_read_tdr(td), <, count);
455
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
456
+
457
+ /* Now, run the rest of the way and verify that the timer fires. */
458
+ clock_step(1);
459
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
460
+}
461
+
462
+/* Verifies that a periodic timer automatically restarts after expiration. */
463
+static void test_periodic_no_interrupt(gconstpointer test_data)
464
+{
465
+ const TestData *td = test_data;
466
+ unsigned int count = 2;
467
+ unsigned int ps = 3;
468
+ int i;
469
+
470
+ tim_reset(td);
471
+
472
+ tim_write_ticr(td, count);
473
+ tim_write_tcsr(td, CEN | MODE_PERIODIC | PRESCALE(ps));
474
+
475
+ for (i = 0; i < 4; i++) {
476
+ clock_step_next();
477
+
478
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
479
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
480
+
481
+ tim_write(td, TISR, tim_timer_bit(td));
482
+
483
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
484
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
485
+ }
486
+}
487
+
488
+/* Verifies that a periodict timer fires an interrupt every time it expires. */
489
+static void test_periodic_interrupt(gconstpointer test_data)
490
+{
491
+ const TestData *td = test_data;
492
+ unsigned int count = 65535;
493
+ unsigned int ps = 2;
494
+ int i;
495
+
496
+ tim_reset(td);
497
+
498
+ tim_write_ticr(td, count);
499
+ tim_write_tcsr(td, CEN | IE | MODE_PERIODIC | PRESCALE(ps));
500
+
501
+ for (i = 0; i < 4; i++) {
502
+ clock_step_next();
503
+
504
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
505
+ g_assert_true(qtest_get_irq(global_qtest, tim_timer_irq(td)));
506
+
507
+ tim_write(td, TISR, tim_timer_bit(td));
508
+
509
+ g_assert_cmphex(tim_read(td, TISR), ==, 0);
510
+ g_assert_false(qtest_get_irq(global_qtest, tim_timer_irq(td)));
511
+ }
512
+}
513
+
514
+/*
515
+ * Verifies that the timer behaves correctly when disabled right before and
516
+ * exactly when it's supposed to expire.
517
+ */
518
+static void test_disable_on_expiration(gconstpointer test_data)
519
+{
520
+ const TestData *td = test_data;
521
+ unsigned int count = 8;
522
+ unsigned int ps = 255;
523
+
524
+ tim_reset(td);
525
+
526
+ tim_write_ticr(td, count);
527
+ tim_write_tcsr(td, CEN | MODE_ONESHOT | PRESCALE(ps));
528
+
529
+ clock_step(tim_calculate_step(count, ps) - 1);
530
+
531
+ tim_write_tcsr(td, MODE_ONESHOT | PRESCALE(ps));
532
+ tim_write_tcsr(td, CEN | MODE_ONESHOT | PRESCALE(ps));
533
+ clock_step(1);
534
+ tim_write_tcsr(td, MODE_ONESHOT | PRESCALE(ps));
535
+ g_assert_cmphex(tim_read(td, TISR), ==, tim_timer_bit(td));
536
+}
537
+
538
+/*
539
+ * Constructs a name that includes the timer block, timer and testcase name,
540
+ * and adds the test to the test suite.
541
+ */
542
+static void tim_add_test(const char *name, const TestData *td, GTestDataFunc fn)
543
+{
544
+ g_autofree char *full_name;
545
+
546
+ full_name = g_strdup_printf("npcm7xx_timer/tim[%d]/timer[%d]/%s",
547
+ tim_index(td->tim), timer_index(td->timer),
548
+ name);
549
+ qtest_add_data_func(full_name, td, fn);
550
+}
551
+
552
+/* Convenience macro for adding a test with a predictable function name. */
553
+#define add_test(name, td) tim_add_test(#name, td, test_##name)
554
+
555
+int main(int argc, char **argv)
556
+{
557
+ TestData testdata[ARRAY_SIZE(timer_block) * ARRAY_SIZE(timer)];
558
+ int ret;
559
+ int i, j;
560
+
561
+ g_test_init(&argc, &argv, NULL);
562
+ g_test_set_nonfatal_assertions();
563
+
564
+ for (i = 0; i < ARRAY_SIZE(timer_block); i++) {
565
+ for (j = 0; j < ARRAY_SIZE(timer); j++) {
566
+ TestData *td = &testdata[i * ARRAY_SIZE(timer) + j];
567
+ td->tim = &timer_block[i];
568
+ td->timer = &timer[j];
569
+
570
+ add_test(reset, td);
571
+ add_test(reset_overrides_enable, td);
572
+ add_test(oneshot_enable_then_disable, td);
573
+ add_test(oneshot_ps5, td);
574
+ add_test(oneshot_ps0, td);
575
+ add_test(oneshot_ps255, td);
576
+ add_test(oneshot_interrupt, td);
577
+ add_test(pause_resume, td);
578
+ add_test(prescaler_change, td);
579
+ add_test(periodic_no_interrupt, td);
580
+ add_test(periodic_interrupt, td);
581
+ add_test(disable_on_expiration, td);
582
+ }
583
+ }
584
+
585
+ qtest_start("-machine npcm750-evb");
586
+ qtest_irq_intercept_in(global_qtest, "/machine/soc/a9mpcore/gic");
587
+ ret = g_test_run();
588
+ qtest_end();
589
+
590
+ return ret;
591
+}
592
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
593
index XXXXXXX..XXXXXXX 100644
325
index XXXXXXX..XXXXXXX 100644
594
--- a/tests/qtest/meson.build
326
--- a/hw/ssi/meson.build
595
+++ b/tests/qtest/meson.build
327
+++ b/hw/ssi/meson.build
596
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
328
@@ -XXX,XX +XXX,XX @@
597
['arm-cpu-features',
329
softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_smc.c'))
598
'microbit-test',
330
softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('mss-spi.c'))
599
'm25p80-test',
331
-softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c'))
600
+ 'npcm7xx_timer-test',
332
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_fiu.c', 'npcm_pspi.c'))
601
'test-arm-mptimer',
333
softmmu_ss.add(when: 'CONFIG_PL022', if_true: files('pl022.c'))
602
'boot-serial-test',
334
softmmu_ss.add(when: 'CONFIG_SIFIVE_SPI', if_true: files('sifive_spi.c'))
603
'hexloader-test']
335
softmmu_ss.add(when: 'CONFIG_SSI', if_true: files('ssi.c'))
336
diff --git a/hw/ssi/trace-events b/hw/ssi/trace-events
337
index XXXXXXX..XXXXXXX 100644
338
--- a/hw/ssi/trace-events
339
+++ b/hw/ssi/trace-events
340
@@ -XXX,XX +XXX,XX @@ npcm7xx_fiu_ctrl_write(const char *id, uint64_t addr, uint32_t data) "%s offset:
341
npcm7xx_fiu_flash_read(const char *id, int cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
342
npcm7xx_fiu_flash_write(const char *id, unsigned cs, uint64_t addr, unsigned int size, uint64_t value) "%s[%d] offset: 0x%08" PRIx64 " size: %u value: 0x%" PRIx64
343
344
+# npcm_pspi.c
345
+npcm_pspi_enter_reset(const char *id, int reset_type) "%s reset type: %d"
346
+npcm_pspi_ctrl_read(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
347
+npcm_pspi_ctrl_write(const char *id, uint64_t addr, uint16_t data) "%s offset: 0x%03" PRIx64 " value: 0x%04" PRIx16
348
+
349
# ibex_spi_host.c
350
351
ibex_spi_host_reset(const char *msg) "%s"
604
--
352
--
605
2.20.1
353
2.34.1
606
607
diff view generated by jsdifflib
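As an aside on the timer qtest above: the add_test() convenience macro builds both the test-case name string and the test function name from a single token, using the preprocessor's stringizing (#name) and token-pasting (test_##name) operators. A minimal standalone sketch of the idiom follows; the stub tim_add_test() and test_reset() here are simplified stand-ins for the real qtest helpers, shown only to make the expansion visible.

    #include <stdio.h>

    typedef void (*TestFunc)(const void *data);

    static void test_reset(const void *data)
    {
        (void)data;
        printf("running test_reset\n");
    }

    /* Stand-in for the real helper, which calls qtest_add_data_func(). */
    static void tim_add_test(const char *name, const void *td, TestFunc fn)
    {
        printf("registering %s\n", name);
        fn(td);
    }

    #define add_test(name, td) tim_add_test(#name, td, test_##name)

    int main(void)
    {
        /* expands to: tim_add_test("reset", NULL, test_reset); */
        add_test(reset, NULL);
        return 0;
    }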
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Hao Wu <wuhaotsh@google.com>
2
2
3
This is generic support, with the code disabled for all targets.
3
Signed-off-by: Hao Wu <wuhaotsh@google.com>
4
4
Reviewed-by: Titus Rwantare <titusr@google.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Philippe Mathieu-Daude <philmd@linaro.org>
6
Message-id: 20201016184207.786698-11-richard.henderson@linaro.org
6
Message-id: 20230208235433.3989937-4-wuhaotsh@google.com
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
8
---
10
linux-user/qemu.h | 4 ++
9
docs/system/arm/nuvoton.rst | 2 +-
11
linux-user/elfload.c | 157 +++++++++++++++++++++++++++++++++++++++++++
10
include/hw/arm/npcm7xx.h | 2 ++
12
2 files changed, 161 insertions(+)
11
hw/arm/npcm7xx.c | 25 +++++++++++++++++++++++--
12
3 files changed, 26 insertions(+), 3 deletions(-)
13
13
14
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
14
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/qemu.h
16
--- a/docs/system/arm/nuvoton.rst
17
+++ b/linux-user/qemu.h
17
+++ b/docs/system/arm/nuvoton.rst
18
@@ -XXX,XX +XXX,XX @@ struct image_info {
18
@@ -XXX,XX +XXX,XX @@ Supported devices
19
abi_ulong interpreter_loadmap_addr;
19
* SMBus controller (SMBF)
20
abi_ulong interpreter_pt_dynamic_addr;
20
* Ethernet controller (EMC)
21
struct image_info *other_info;
21
* Tachometer
22
+
22
+ * Peripheral SPI controller (PSPI)
23
+ /* For target-specific processing of NT_GNU_PROPERTY_TYPE_0. */
23
24
+ uint32_t note_flags;
24
Missing devices
25
+
25
---------------
26
#ifdef TARGET_MIPS
26
@@ -XXX,XX +XXX,XX @@ Missing devices
27
int fp_abi;
27
28
int interp_fp_abi;
28
* Ethernet controller (GMAC)
29
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
29
* USB device (USBD)
30
- * Peripheral SPI controller (PSPI)
31
* SD/MMC host
32
* PECI interface
33
* PCI and PCIe root complex and bridges
34
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
30
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
31
--- a/linux-user/elfload.c
36
--- a/include/hw/arm/npcm7xx.h
32
+++ b/linux-user/elfload.c
37
+++ b/include/hw/arm/npcm7xx.h
33
@@ -XXX,XX +XXX,XX @@ static void elf_core_copy_regs(target_elf_gregset_t *regs,
38
@@ -XXX,XX +XXX,XX @@
34
39
#include "hw/nvram/npcm7xx_otp.h"
35
#include "elf.h"
40
#include "hw/timer/npcm7xx_timer.h"
36
41
#include "hw/ssi/npcm7xx_fiu.h"
37
+static bool arch_parse_elf_property(uint32_t pr_type, uint32_t pr_datasz,
42
+#include "hw/ssi/npcm_pspi.h"
38
+ const uint32_t *data,
43
#include "hw/usb/hcd-ehci.h"
39
+ struct image_info *info,
44
#include "hw/usb/hcd-ohci.h"
40
+ Error **errp)
45
#include "target/arm/cpu.h"
41
+{
46
@@ -XXX,XX +XXX,XX @@ struct NPCM7xxState {
42
+ g_assert_not_reached();
47
NPCM7xxFIUState fiu[2];
43
+}
48
NPCM7xxEMCState emc[2];
44
+#define ARCH_USE_GNU_PROPERTY 0
49
NPCM7xxSDHCIState mmc;
45
+
50
+ NPCMPSPIState pspi[2];
46
struct exec
51
};
47
{
52
48
unsigned int a_info; /* Use macros N_MAGIC, etc for access */
53
#define TYPE_NPCM7XX "npcm7xx"
49
@@ -XXX,XX +XXX,XX @@ void probe_guest_base(const char *image_name, abi_ulong guest_loaddr,
54
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
50
"@ 0x%" PRIx64 "\n", (uint64_t)guest_base);
55
index XXXXXXX..XXXXXXX 100644
51
}
56
--- a/hw/arm/npcm7xx.c
52
57
+++ b/hw/arm/npcm7xx.c
53
+enum {
58
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
54
+ /* The string "GNU\0" as a magic number. */
59
NPCM7XX_EMC1RX_IRQ = 15,
55
+ GNU0_MAGIC = const_le32('G' | 'N' << 8 | 'U' << 16),
60
NPCM7XX_EMC1TX_IRQ,
56
+ NOTE_DATA_SZ = 1 * KiB,
61
NPCM7XX_MMC_IRQ = 26,
57
+ NOTE_NAME_SZ = 4,
62
+ NPCM7XX_PSPI2_IRQ = 28,
58
+ ELF_GNU_PROPERTY_ALIGN = ELF_CLASS == ELFCLASS32 ? 4 : 8,
63
+ NPCM7XX_PSPI1_IRQ = 31,
64
NPCM7XX_TIMER0_IRQ = 32, /* Timer Module 0 */
65
NPCM7XX_TIMER1_IRQ,
66
NPCM7XX_TIMER2_IRQ,
67
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_emc_addr[] = {
68
0xf0826000,
69
};
70
71
+/* Register base address for each PSPI Module */
72
+static const hwaddr npcm7xx_pspi_addr[] = {
73
+ 0xf0200000,
74
+ 0xf0201000,
59
+};
75
+};
60
+
76
+
61
+/*
77
static const struct {
62
+ * Process a single gnu_property entry.
78
hwaddr regs_addr;
63
+ * Return false for error.
79
uint32_t unconnected_pins;
64
+ */
80
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
65
+static bool parse_elf_property(const uint32_t *data, int *off, int datasz,
81
object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
66
+ struct image_info *info, bool have_prev_type,
82
}
67
+ uint32_t *prev_type, Error **errp)
83
68
+{
84
+ for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
69
+ uint32_t pr_type, pr_datasz, step;
85
+ object_initialize_child(obj, "pspi[*]", &s->pspi[i], TYPE_NPCM_PSPI);
70
+
71
+ if (*off > datasz || !QEMU_IS_ALIGNED(*off, ELF_GNU_PROPERTY_ALIGN)) {
72
+ goto error_data;
73
+ }
74
+ datasz -= *off;
75
+ data += *off / sizeof(uint32_t);
76
+
77
+ if (datasz < 2 * sizeof(uint32_t)) {
78
+ goto error_data;
79
+ }
80
+ pr_type = data[0];
81
+ pr_datasz = data[1];
82
+ data += 2;
83
+ datasz -= 2 * sizeof(uint32_t);
84
+ step = ROUND_UP(pr_datasz, ELF_GNU_PROPERTY_ALIGN);
85
+ if (step > datasz) {
86
+ goto error_data;
87
+ }
86
+ }
88
+
87
+
89
+ /* Properties are supposed to be unique and sorted on pr_type. */
88
object_initialize_child(obj, "mmc", &s->mmc, TYPE_NPCM7XX_SDHCI);
90
+ if (have_prev_type && pr_type <= *prev_type) {
89
}
91
+ if (pr_type == *prev_type) {
90
92
+ error_setg(errp, "Duplicate property in PT_GNU_PROPERTY");
91
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
93
+ } else {
92
sysbus_connect_irq(SYS_BUS_DEVICE(&s->mmc), 0,
94
+ error_setg(errp, "Unsorted property in PT_GNU_PROPERTY");
93
npcm7xx_irq(s, NPCM7XX_MMC_IRQ));
95
+ }
94
96
+ return false;
95
+ /* PSPI */
97
+ }
96
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_pspi_addr) != ARRAY_SIZE(s->pspi));
98
+ *prev_type = pr_type;
97
+ for (i = 0; i < ARRAY_SIZE(s->pspi); i++) {
98
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->pspi[i]);
99
+ int irq = (i == 0) ? NPCM7XX_PSPI1_IRQ : NPCM7XX_PSPI2_IRQ;
99
+
100
+
100
+ if (!arch_parse_elf_property(pr_type, pr_datasz, data, info, errp)) {
101
+ sysbus_realize(sbd, &error_abort);
101
+ return false;
102
+ sysbus_mmio_map(sbd, 0, npcm7xx_pspi_addr[i]);
103
+ sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, irq));
102
+ }
104
+ }
103
+
105
+
104
+ *off += 2 * sizeof(uint32_t) + step;
106
create_unimplemented_device("npcm7xx.shm", 0xc0001000, 4 * KiB);
105
+ return true;
107
create_unimplemented_device("npcm7xx.vdmx", 0xe0800000, 4 * KiB);
106
+
108
create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
107
+ error_data:
109
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
108
+ error_setg(errp, "Ill-formed property in PT_GNU_PROPERTY");
110
create_unimplemented_device("npcm7xx.peci", 0xf0100000, 4 * KiB);
109
+ return false;
111
create_unimplemented_device("npcm7xx.siox[1]", 0xf0101000, 4 * KiB);
110
+}
112
create_unimplemented_device("npcm7xx.siox[2]", 0xf0102000, 4 * KiB);
111
+
113
- create_unimplemented_device("npcm7xx.pspi1", 0xf0200000, 4 * KiB);
112
+/* Process NT_GNU_PROPERTY_TYPE_0. */
114
- create_unimplemented_device("npcm7xx.pspi2", 0xf0201000, 4 * KiB);
113
+static bool parse_elf_properties(int image_fd,
115
create_unimplemented_device("npcm7xx.ahbpci", 0xf0400000, 1 * MiB);
114
+ struct image_info *info,
116
create_unimplemented_device("npcm7xx.mcphy", 0xf05f0000, 64 * KiB);
115
+ const struct elf_phdr *phdr,
117
create_unimplemented_device("npcm7xx.gmac1", 0xf0802000, 8 * KiB);
116
+ char bprm_buf[BPRM_BUF_SIZE],
117
+ Error **errp)
118
+{
119
+ union {
120
+ struct elf_note nhdr;
121
+ uint32_t data[NOTE_DATA_SZ / sizeof(uint32_t)];
122
+ } note;
123
+
124
+ int n, off, datasz;
125
+ bool have_prev_type;
126
+ uint32_t prev_type;
127
+
128
+ /* Unless the arch requires properties, ignore them. */
129
+ if (!ARCH_USE_GNU_PROPERTY) {
130
+ return true;
131
+ }
132
+
133
+ /* If the properties are crazy large, that's too bad. */
134
+ n = phdr->p_filesz;
135
+ if (n > sizeof(note)) {
136
+ error_setg(errp, "PT_GNU_PROPERTY too large");
137
+ return false;
138
+ }
139
+ if (n < sizeof(note.nhdr)) {
140
+ error_setg(errp, "PT_GNU_PROPERTY too small");
141
+ return false;
142
+ }
143
+
144
+ if (phdr->p_offset + n <= BPRM_BUF_SIZE) {
145
+ memcpy(&note, bprm_buf + phdr->p_offset, n);
146
+ } else {
147
+ ssize_t len = pread(image_fd, &note, n, phdr->p_offset);
148
+ if (len != n) {
149
+ error_setg_errno(errp, errno, "Error reading file header");
150
+ return false;
151
+ }
152
+ }
153
+
154
+ /*
155
+ * The contents of a valid PT_GNU_PROPERTY is a sequence
156
+ * of uint32_t -- swap them all now.
157
+ */
158
+#ifdef BSWAP_NEEDED
159
+ for (int i = 0; i < n / 4; i++) {
160
+ bswap32s(note.data + i);
161
+ }
162
+#endif
163
+
164
+ /*
165
+ * Note that nhdr is 3 words, and that the "name" described by namesz
166
+ * immediately follows nhdr and is thus at the 4th word. Further, all
167
+ * of the inputs to the kernel's round_up are multiples of 4.
168
+ */
169
+ if (note.nhdr.n_type != NT_GNU_PROPERTY_TYPE_0 ||
170
+ note.nhdr.n_namesz != NOTE_NAME_SZ ||
171
+ note.data[3] != GNU0_MAGIC) {
172
+ error_setg(errp, "Invalid note in PT_GNU_PROPERTY");
173
+ return false;
174
+ }
175
+ off = sizeof(note.nhdr) + NOTE_NAME_SZ;
176
+
177
+ datasz = note.nhdr.n_descsz + off;
178
+ if (datasz > n) {
179
+ error_setg(errp, "Invalid note size in PT_GNU_PROPERTY");
180
+ return false;
181
+ }
182
+
183
+ have_prev_type = false;
184
+ prev_type = 0;
185
+ while (1) {
186
+ if (off == datasz) {
187
+ return true; /* end, exit ok */
188
+ }
189
+ if (!parse_elf_property(note.data, &off, datasz, info,
190
+ have_prev_type, &prev_type, errp)) {
191
+ return false;
192
+ }
193
+ have_prev_type = true;
194
+ }
195
+}
196
+
197
/* Load an ELF image into the address space.
198
199
IMAGE_NAME is the filename of the image, to use in error messages.
200
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
201
goto exit_errmsg;
202
}
203
*pinterp_name = g_steal_pointer(&interp_name);
204
+ } else if (eppnt->p_type == PT_GNU_PROPERTY) {
205
+ if (!parse_elf_properties(image_fd, info, eppnt, bprm_buf, &err)) {
206
+ goto exit_errmsg;
207
+ }
208
}
209
}
210
211
--
118
--
212
2.20.1
119
2.34.1
213
214
diff view generated by jsdifflib
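Some background on the parse_elf_properties() helper added to linux-user/elfload.c above: a PT_GNU_PROPERTY segment holds a single NT_GNU_PROPERTY_TYPE_0 note whose descriptor is a sequence of (pr_type, pr_datasz, data) records, each padded to the ELF-class alignment. The struct below is only an illustrative C view of the bytes the parser walks for the common single-property AArch64 case, assuming ELFCLASS64 padding; it is not a definition used by the patch.

    #include <stdint.h>

    struct gnu_property_note_example {
        /* ELF note header (struct elf_note) */
        uint32_t n_namesz;   /* 4 */
        uint32_t n_descsz;   /* size of the property records below */
        uint32_t n_type;     /* NT_GNU_PROPERTY_TYPE_0 */
        char     n_name[4];  /* "GNU\0" -- the GNU0_MAGIC word checked above */
        /* one property record, padded to ELF_GNU_PROPERTY_ALIGN (8 here) */
        uint32_t pr_type;    /* e.g. GNU_PROPERTY_AARCH64_FEATURE_1_AND */
        uint32_t pr_datasz;  /* 4 */
        uint32_t pr_data;    /* e.g. GNU_PROPERTY_AARCH64_FEATURE_1_BTI */
        uint32_t pr_pad;     /* padding up to the 8-byte alignment */
    };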
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
2
2
3
Fix an unlikely memory leak in load_elf_image().
3
Addresses targeting the second translation table (TTB1) in the SMMU have
4
all upper bits set. Ensure the IOMMU region covers all 64 bits.
4
5
5
Fixes: bf858897b7 ("linux-user: Re-use load_elf_image for the main binary.")
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
8
Message-id: 20201016184207.786698-5-richard.henderson@linaro.org
9
Message-id: 20230214171921.1917916-2-jean-philippe@linaro.org
9
Message-Id: <20201003174944.1972444-1-f4bug@amsat.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
linux-user/elfload.c | 8 ++++----
12
include/hw/arm/smmu-common.h | 2 --
15
1 file changed, 4 insertions(+), 4 deletions(-)
13
hw/arm/smmu-common.c | 2 +-
14
2 files changed, 1 insertion(+), 3 deletions(-)
16
15
17
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
16
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/linux-user/elfload.c
18
--- a/include/hw/arm/smmu-common.h
20
+++ b/linux-user/elfload.c
19
+++ b/include/hw/arm/smmu-common.h
21
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
20
@@ -XXX,XX +XXX,XX @@
22
info->brk = vaddr_em;
21
#define SMMU_PCI_DEVFN_MAX 256
23
}
22
#define SMMU_PCI_DEVFN(sid) (sid & 0xFF)
24
} else if (eppnt->p_type == PT_INTERP && pinterp_name) {
23
25
- char *interp_name;
24
-#define SMMU_MAX_VA_BITS 48
26
+ g_autofree char *interp_name = NULL;
25
-
27
26
/*
28
if (*pinterp_name) {
27
* Page table walk error types
29
errmsg = "Multiple PT_INTERP entries";
28
*/
30
goto exit_errmsg;
29
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
31
}
30
index XXXXXXX..XXXXXXX 100644
32
- interp_name = malloc(eppnt->p_filesz);
31
--- a/hw/arm/smmu-common.c
33
+ interp_name = g_malloc(eppnt->p_filesz);
32
+++ b/hw/arm/smmu-common.c
34
if (!interp_name) {
33
@@ -XXX,XX +XXX,XX @@ static AddressSpace *smmu_find_add_as(PCIBus *bus, void *opaque, int devfn)
35
goto exit_perror;
34
36
}
35
memory_region_init_iommu(&sdev->iommu, sizeof(sdev->iommu),
37
@@ -XXX,XX +XXX,XX @@ static void load_elf_image(const char *image_name, int image_fd,
36
s->mrtypename,
38
errmsg = "Invalid PT_INTERP entry";
37
- OBJECT(s), name, 1ULL << SMMU_MAX_VA_BITS);
39
goto exit_errmsg;
38
+ OBJECT(s), name, UINT64_MAX);
40
}
39
address_space_init(&sdev->as,
41
- *pinterp_name = interp_name;
40
MEMORY_REGION(&sdev->iommu), name);
42
+ *pinterp_name = g_steal_pointer(&interp_name);
41
trace_smmu_add_mr(name);
43
#ifdef TARGET_MIPS
44
} else if (eppnt->p_type == PT_MIPS_ABIFLAGS) {
45
Mips_elf_abiflags_v0 abiflags;
46
@@ -XXX,XX +XXX,XX @@ int load_elf_binary(struct linux_binprm *bprm, struct image_info *info)
47
if (elf_interpreter) {
48
info->load_bias = interp_info.load_bias;
49
info->entry = interp_info.entry;
50
- free(elf_interpreter);
51
+ g_free(elf_interpreter);
52
}
53
54
#ifdef USE_ELF_CORE_DUMP
55
--
42
--
56
2.20.1
43
2.34.1
57
58
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
2
2
3
The time to transmit a char is expressed in nanoseconds, not in ticks.
3
Addresses targeting the second translation table (TTB1) in the SMMU have
4
all upper bits set (except for the top byte when TBI is enabled). Fix
5
the TTB1 check.
4
6
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reported-by: Ola Hugosson <ola.hugosson@arm.com>
6
Message-id: 20201014213601.205222-1-f4bug@amsat.org
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
11
Message-id: 20230214171921.1917916-3-jean-philippe@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
13
---
10
hw/arm/strongarm.c | 2 +-
14
hw/arm/smmu-common.c | 2 +-
11
1 file changed, 1 insertion(+), 1 deletion(-)
15
1 file changed, 1 insertion(+), 1 deletion(-)
12
16
13
diff --git a/hw/arm/strongarm.c b/hw/arm/strongarm.c
17
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/strongarm.c
19
--- a/hw/arm/smmu-common.c
16
+++ b/hw/arm/strongarm.c
20
+++ b/hw/arm/smmu-common.c
17
@@ -XXX,XX +XXX,XX @@ struct StrongARMUARTState {
21
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova)
18
uint8_t rx_start;
22
/* there is a ttbr0 region and we are in it (high bits all zero) */
19
uint8_t rx_len;
23
return &cfg->tt[0];
20
24
} else if (cfg->tt[1].tsz &&
21
- uint64_t char_transmit_time; /* time to transmit a char in ticks*/
25
- !extract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte)) {
22
+ uint64_t char_transmit_time; /* time to transmit a char in nanoseconds */
26
+ sextract64(iova, 64 - cfg->tt[1].tsz, cfg->tt[1].tsz - tbi_byte) == -1) {
23
bool wait_break_end;
27
/* there is a ttbr1 region and we are in it (high bits all one) */
24
QEMUTimer *rx_timeout_timer;
28
return &cfg->tt[1];
25
QEMUTimer *tx_timer;
29
} else if (!cfg->tt[0].tsz) {
26
--
30
--
27
2.20.1
31
2.34.1
28
29
diff view generated by jsdifflib
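A quick illustration of why the select_tt() fix above switches from extract64() to a signed extract compared against -1: a TTB1 virtual address has every bit above the region size set to one, so sign-extending those bits yields -1, while the old zero test could only ever match the all-zeros TTB0 case. The standalone sketch below re-implements the extract locally so it compiles outside QEMU; sext() is an approximation of QEMU's sextract64().

    #include <assert.h>
    #include <stdint.h>

    /* Rough stand-in for QEMU's sextract64(value, start, length). */
    static int64_t sext(uint64_t value, int start, int length)
    {
        return ((int64_t)(value << (64 - length - start))) >> (64 - length);
    }

    int main(void)
    {
        unsigned tsz = 16, tbi = 0;  /* 16 unused top bits (48-bit VA), no TBI */
        uint64_t ttb1_va = 0xffffabcdef012345ULL;  /* upper bits all ones  */
        uint64_t ttb0_va = 0x0000abcdef012345ULL;  /* upper bits all zeros */

        /* mirrors: sextract64(iova, 64 - tsz, tsz - tbi) == -1 */
        assert(sext(ttb1_va, 64 - tsz, tsz - tbi) == -1);
        assert(sext(ttb0_va, 64 - tsz, tsz - tbi) == 0);
        return 0;
    }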
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
These are all of the defines required to parse
3
make it clearer from the name that this is a tcg-only function.
4
GNU_PROPERTY_AARCH64_FEATURE_1_AND, copied from binutils.
5
Other missing defines related to other GNU program headers
6
and notes are elided for now.
7
4
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Claudio Fontana <cfontana@suse.de>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Message-id: 20201016184207.786698-4-richard.henderson@linaro.org
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
include/elf.h | 22 ++++++++++++++++++++++
12
target/arm/helper.c | 4 ++--
14
1 file changed, 22 insertions(+)
13
1 file changed, 2 insertions(+), 2 deletions(-)
15
14
16
diff --git a/include/elf.h b/include/elf.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/include/elf.h
17
--- a/target/arm/helper.c
19
+++ b/include/elf.h
18
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ typedef int64_t Elf64_Sxword;
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
21
#define PT_NOTE 4
20
* trapped to the hypervisor in KVM.
22
#define PT_SHLIB 5
21
*/
23
#define PT_PHDR 6
22
#ifdef CONFIG_TCG
24
+#define PT_LOOS 0x60000000
23
-static void handle_semihosting(CPUState *cs)
25
+#define PT_HIOS 0x6fffffff
24
+static void tcg_handle_semihosting(CPUState *cs)
26
#define PT_LOPROC 0x70000000
25
{
27
#define PT_HIPROC 0x7fffffff
26
ARMCPU *cpu = ARM_CPU(cs);
28
27
CPUARMState *env = &cpu->env;
29
+#define PT_GNU_PROPERTY (PT_LOOS + 0x474e553)
28
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
30
+
29
*/
31
#define PT_MIPS_REGINFO 0x70000000
30
#ifdef CONFIG_TCG
32
#define PT_MIPS_RTPROC 0x70000001
31
if (cs->exception_index == EXCP_SEMIHOST) {
33
#define PT_MIPS_OPTIONS 0x70000002
32
- handle_semihosting(cs);
34
@@ -XXX,XX +XXX,XX @@ typedef struct elf64_shdr {
33
+ tcg_handle_semihosting(cs);
35
#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
34
return;
36
#define NT_ARM_SVE 0x405 /* ARM Scalable Vector Extension regs */
35
}
37
36
#endif
38
+/* Defined note types for GNU systems. */
39
+
40
+#define NT_GNU_PROPERTY_TYPE_0 5 /* Program property */
41
+
42
+/* Values used in GNU .note.gnu.property notes (NT_GNU_PROPERTY_TYPE_0). */
43
+
44
+#define GNU_PROPERTY_STACK_SIZE 1
45
+#define GNU_PROPERTY_NO_COPY_ON_PROTECTED 2
46
+
47
+#define GNU_PROPERTY_LOPROC 0xc0000000
48
+#define GNU_PROPERTY_HIPROC 0xdfffffff
49
+#define GNU_PROPERTY_LOUSER 0xe0000000
50
+#define GNU_PROPERTY_HIUSER 0xffffffff
51
+
52
+#define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000
53
+#define GNU_PROPERTY_AARCH64_FEATURE_1_BTI (1u << 0)
54
+#define GNU_PROPERTY_AARCH64_FEATURE_1_PAC (1u << 1)
55
+
56
/*
57
* Physical entry point into the kernel.
58
*
59
--
37
--
60
2.20.1
38
2.34.1
61
39
62
40
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
The reporting in AArch64.TagCheckFail only depends on PSTATE.EL,
3
for "all" builds (tcg + kvm), we want to avoid doing
4
and not the AccType of the operation. There are two guest
4
the psci check if tcg is built-in, but not enabled.
5
visible problems that affect LDTR and STTR because of this:
6
5
7
(1) Selecting TCF0 vs TCF1 to decide on reporting,
6
Signed-off-by: Claudio Fontana <cfontana@suse.de>
8
(2) Report "data abort same el" not "data abort lower el".
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
8
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
9
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
13
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
14
Message-id: 20201008162155.161886-3-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
11
---
17
target/arm/mte_helper.c | 10 +++-------
12
target/arm/helper.c | 3 ++-
18
1 file changed, 3 insertions(+), 7 deletions(-)
13
1 file changed, 2 insertions(+), 1 deletion(-)
19
14
20
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/mte_helper.c
17
--- a/target/arm/helper.c
23
+++ b/target/arm/mte_helper.c
18
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
19
@@ -XXX,XX +XXX,XX @@
25
reg_el = regime_el(env, arm_mmu_idx);
20
#include "hw/irq.h"
26
sctlr = env->cp15.sctlr_el[reg_el];
21
#include "sysemu/cpu-timers.h"
27
22
#include "sysemu/kvm.h"
28
- switch (arm_mmu_idx) {
23
+#include "sysemu/tcg.h"
29
- case ARMMMUIdx_E10_0:
24
#include "qapi/qapi-commands-machine-target.h"
30
- case ARMMMUIdx_E20_0:
25
#include "qapi/error.h"
31
- el = 0;
26
#include "qemu/guest-random.h"
32
+ el = arm_current_el(env);
27
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
33
+ if (el == 0) {
28
env->exception.syndrome);
34
tcf = extract64(sctlr, 38, 2);
35
- break;
36
- default:
37
- el = reg_el;
38
+ } else {
39
tcf = extract64(sctlr, 40, 2);
40
}
29
}
41
30
31
- if (arm_is_psci_call(cpu, cs->exception_index)) {
32
+ if (tcg_enabled() && arm_is_psci_call(cpu, cs->exception_index)) {
33
arm_handle_psci_call(cpu);
34
qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
35
return;
42
--
36
--
43
2.20.1
37
2.34.1
44
38
45
39
diff view generated by jsdifflib
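For context on the mte_check_fail() hunk above: the two extract64() calls read different SCTLR_ELx fields, TCF0 in bits [39:38] (which governs EL0) and TCF in bits [41:40] (which governs the higher exception levels), and the fix keys that choice purely off arm_current_el() rather than the MMU index of the access. A small sketch of the selection follows; the helper names are invented for illustration and the bit positions simply mirror the extract64(sctlr, 38, 2) / extract64(sctlr, 40, 2) calls in the patch.

    #include <stdint.h>

    /* Same bit manipulation as QEMU's extract64(value, start, length). */
    static uint64_t extract_bits(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    /* TCF0 (bits [39:38]) applies at EL0, TCF (bits [41:40]) elsewhere. */
    static uint64_t select_tcf(uint64_t sctlr, int current_el)
    {
        return current_el == 0 ? extract_bits(sctlr, 38, 2)
                               : extract_bits(sctlr, 40, 2);
    }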
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Claudio Fontana <cfontana@suse.de>
2
2
3
Unlike many other bits in HCR_EL2, the description for this
3
Signed-off-by: Claudio Fontana <cfontana@suse.de>
4
bit does not contain the phrase "if ... this field behaves
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
as 0 for all purposes other than", so do not squash the bit
5
Signed-off-by: Fabiano Rosas <farosas@suse.de>
6
in arm_hcr_el2_eff.
6
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
8
Instead, replicate the E2H+TGE test in the two places that
9
require it.
10
11
Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
14
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
15
Message-id: 20201008162155.161886-4-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
8
---
18
target/arm/internals.h | 9 +++++----
9
target/arm/helper.c | 12 +++++++-----
19
target/arm/helper.c | 9 +++++----
10
1 file changed, 7 insertions(+), 5 deletions(-)
20
2 files changed, 10 insertions(+), 8 deletions(-)
21
11
22
diff --git a/target/arm/internals.h b/target/arm/internals.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/internals.h
25
+++ b/target/arm/internals.h
26
@@ -XXX,XX +XXX,XX @@ static inline bool allocation_tag_access_enabled(CPUARMState *env, int el,
27
&& !(env->cp15.scr_el3 & SCR_ATA)) {
28
return false;
29
}
30
- if (el < 2
31
- && arm_feature(env, ARM_FEATURE_EL2)
32
- && !(arm_hcr_el2_eff(env) & HCR_ATA)) {
33
- return false;
34
+ if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
35
+ uint64_t hcr = arm_hcr_el2_eff(env);
36
+ if (!(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE))) {
37
+ return false;
38
+ }
39
}
40
sctlr &= (el == 0 ? SCTLR_ATA0 : SCTLR_ATA);
41
return sctlr != 0;
42
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
43
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/helper.c
14
--- a/target/arm/helper.c
45
+++ b/target/arm/helper.c
15
+++ b/target/arm/helper.c
46
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_mte(CPUARMState *env, const ARMCPRegInfo *ri,
16
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
47
{
17
unsigned int cur_el = arm_current_el(env);
48
int el = arm_current_el(env);
18
int rt;
49
19
50
- if (el < 2 &&
20
- /*
51
- arm_feature(env, ARM_FEATURE_EL2) &&
21
- * Note that new_el can never be 0. If cur_el is 0, then
52
- !(arm_hcr_el2_eff(env) & HCR_ATA)) {
22
- * el0_a64 is is_a64(), else el0_a64 is ignored.
53
- return CP_ACCESS_TRAP_EL2;
23
- */
54
+ if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
24
- aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
55
+ uint64_t hcr = arm_hcr_el2_eff(env);
25
+ if (tcg_enabled()) {
56
+ if (!(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE))) {
26
+ /*
57
+ return CP_ACCESS_TRAP_EL2;
27
+ * Note that new_el can never be 0. If cur_el is 0, then
58
+ }
28
+ * el0_a64 is is_a64(), else el0_a64 is ignored.
59
}
29
+ */
60
if (el < 3 &&
30
+ aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
61
arm_feature(env, ARM_FEATURE_EL3) &&
31
+ }
32
33
if (cur_el < new_el) {
34
/*
62
--
35
--
63
2.20.1
36
2.34.1
64
37
65
38
diff view generated by jsdifflib
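Spelling out the condition that the access_mte() and allocation_tag_access_enabled() hunks above now both carry: HCR_EL2.ATA only disables or traps EL0/EL1 tag access when E2H and TGE are not both set, because with {E2H,TGE} == 11 those ELs run under the EL2&0 regime and the bit is ignored. The sketch below is illustrative only; the helper name is invented and the bit positions are written out from the architecture so the example is self-contained rather than relying on QEMU's HCR_* defines.

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_TGE  (1ULL << 27)
    #define HCR_E2H  (1ULL << 34)
    #define HCR_ATA  (1ULL << 56)

    /*
     * True when an EL0/EL1 MTE access should be blocked because of
     * HCR_EL2.ATA; the E2H/TGE clause is why the check cannot simply be
     * folded into arm_hcr_el2_eff().
     */
    static bool mte_blocked_by_hcr(uint64_t hcr)
    {
        return !(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE));
    }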
1
The BLX immediate insn in the Thumb encoding always performs
1
From: Fabiano Rosas <farosas@suse.de>
2
a switch from Thumb to Arm state. This would be totally useless
3
in M-profile which has no Arm decoder, and so the instruction
4
does not exist at all there. Make the encoding UNDEF for M-profile.
5
2
6
(This part of the encoding space is used for the branch-future
3
Move this earlier to make the next patch diff cleaner. While here
7
and low-overhead-loop insns in v8.1M.)
4
update the comment slightly to not give the impression that the
5
misalignment affects only TCG.
8
6
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20201019151301.2046-6-peter.maydell@linaro.org
12
---
12
---
13
target/arm/translate.c | 8 ++++++++
13
target/arm/machine.c | 18 +++++++++---------
14
1 file changed, 8 insertions(+)
14
1 file changed, 9 insertions(+), 9 deletions(-)
15
15
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/target/arm/machine.c b/target/arm/machine.c
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate.c
18
--- a/target/arm/machine.c
19
+++ b/target/arm/translate.c
19
+++ b/target/arm/machine.c
20
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
20
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
21
{
21
}
22
TCGv_i32 tmp;
22
}
23
23
24
+ /*
24
+ /*
25
+ * BLX <imm> would be useless on M-profile; the encoding space
25
+ * Misaligned thumb pc is architecturally impossible. Fail the
26
+ * is used for other insns from v8.1M onward, and UNDEFs before that.
26
+ * incoming migration. For TCG it would trigger the assert in
27
+ * thumb_tr_translate_insn().
27
+ */
28
+ */
28
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
29
+ if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
29
+ return false;
30
+ return -1;
30
+ }
31
+ }
31
+
32
+
32
/* For A32, ARM_FEATURE_V5 is checked near the start of the uncond block. */
33
hw_breakpoint_update_all(cpu);
33
if (s->thumb && (a->imm & 2)) {
34
hw_watchpoint_update_all(cpu);
34
return false;
35
36
@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
37
}
38
}
39
40
- /*
41
- * Misaligned thumb pc is architecturally impossible.
42
- * We have an assert in thumb_tr_translate_insn to verify this.
43
- * Fail an incoming migrate to avoid this assert.
44
- */
45
- if (!is_a64(env) && env->thumb && (env->regs[15] & 1)) {
46
- return -1;
47
- }
48
-
49
if (!kvm_enabled()) {
50
pmu_op_finish(&cpu->env);
51
}
35
--
52
--
36
2.20.1
53
2.34.1
37
54
38
55
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Use the BCM2835_SYSTIMER_COUNT definition instead of the
3
Since commit cf7c6d1004 ("target/arm: Split out cpregs.h") we now have
4
magic '4' value.
4
a cpregs.h header which is more suitable for this code.
5
5
6
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
6
Code moved verbatim.
7
8
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Message-id: 20201010203709.3116542-2-f4bug@amsat.org
11
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
13
---
12
include/hw/timer/bcm2835_systmr.h | 4 +++-
14
target/arm/cpregs.h | 98 +++++++++++++++++++++++++++++++++++++++++++++
13
hw/timer/bcm2835_systmr.c | 3 ++-
15
target/arm/cpu.h | 91 -----------------------------------------
14
2 files changed, 5 insertions(+), 2 deletions(-)
16
2 files changed, 98 insertions(+), 91 deletions(-)
15
17
16
diff --git a/include/hw/timer/bcm2835_systmr.h b/include/hw/timer/bcm2835_systmr.h
18
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
17
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/timer/bcm2835_systmr.h
20
--- a/target/arm/cpregs.h
19
+++ b/include/hw/timer/bcm2835_systmr.h
21
+++ b/target/arm/cpregs.h
20
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ enum {
21
#define TYPE_BCM2835_SYSTIMER "bcm2835-sys-timer"
23
ARM_CP_SME = 1 << 19,
22
OBJECT_DECLARE_SIMPLE_TYPE(BCM2835SystemTimerState, BCM2835_SYSTIMER)
23
24
+#define BCM2835_SYSTIMER_COUNT 4
25
+
26
struct BCM2835SystemTimerState {
27
/*< private >*/
28
SysBusDevice parent_obj;
29
@@ -XXX,XX +XXX,XX @@ struct BCM2835SystemTimerState {
30
31
struct {
32
uint32_t status;
33
- uint32_t compare[4];
34
+ uint32_t compare[BCM2835_SYSTIMER_COUNT];
35
} reg;
36
};
24
};
37
25
38
diff --git a/hw/timer/bcm2835_systmr.c b/hw/timer/bcm2835_systmr.c
26
+/*
27
+ * Interface for defining coprocessor registers.
28
+ * Registers are defined in tables of arm_cp_reginfo structs
29
+ * which are passed to define_arm_cp_regs().
30
+ */
31
+
32
+/*
33
+ * When looking up a coprocessor register we look for it
34
+ * via an integer which encodes all of:
35
+ * coprocessor number
36
+ * Crn, Crm, opc1, opc2 fields
37
+ * 32 or 64 bit register (ie is it accessed via MRC/MCR
38
+ * or via MRRC/MCRR?)
39
+ * non-secure/secure bank (AArch32 only)
40
+ * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
41
+ * (In this case crn and opc2 should be zero.)
42
+ * For AArch64, there is no 32/64 bit size distinction;
43
+ * instead all registers have a 2 bit op0, 3 bit op1 and op2,
44
+ * and 4 bit CRn and CRm. The encoding patterns are chosen
45
+ * to be easy to convert to and from the KVM encodings, and also
46
+ * so that the hashtable can contain both AArch32 and AArch64
47
+ * registers (to allow for interprocessing where we might run
48
+ * 32 bit code on a 64 bit core).
49
+ */
50
+/*
51
+ * This bit is private to our hashtable cpreg; in KVM register
52
+ * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
53
+ * in the upper bits of the 64 bit ID.
54
+ */
55
+#define CP_REG_AA64_SHIFT 28
56
+#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
57
+
58
+/*
59
+ * To enable banking of coprocessor registers depending on ns-bit we
60
+ * add a bit to distinguish between secure and non-secure cpregs in the
61
+ * hashtable.
62
+ */
63
+#define CP_REG_NS_SHIFT 29
64
+#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
65
+
66
+#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \
67
+ ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \
68
+ ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
69
+
70
+#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
71
+ (CP_REG_AA64_MASK | \
72
+ ((cp) << CP_REG_ARM_COPROC_SHIFT) | \
73
+ ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \
74
+ ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \
75
+ ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \
76
+ ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \
77
+ ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
78
+
79
+/*
80
+ * Convert a full 64 bit KVM register ID to the truncated 32 bit
81
+ * version used as a key for the coprocessor register hashtable
82
+ */
83
+static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
84
+{
85
+ uint32_t cpregid = kvmid;
86
+ if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
87
+ cpregid |= CP_REG_AA64_MASK;
88
+ } else {
89
+ if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
90
+ cpregid |= (1 << 15);
91
+ }
92
+
93
+ /*
94
+ * KVM is always non-secure so add the NS flag on AArch32 register
95
+ * entries.
96
+ */
97
+ cpregid |= 1 << CP_REG_NS_SHIFT;
98
+ }
99
+ return cpregid;
100
+}
101
+
102
+/*
103
+ * Convert a truncated 32 bit hashtable key into the full
104
+ * 64 bit KVM register ID.
105
+ */
106
+static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
107
+{
108
+ uint64_t kvmid;
109
+
110
+ if (cpregid & CP_REG_AA64_MASK) {
111
+ kvmid = cpregid & ~CP_REG_AA64_MASK;
112
+ kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
113
+ } else {
114
+ kvmid = cpregid & ~(1 << 15);
115
+ if (cpregid & (1 << 15)) {
116
+ kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
117
+ } else {
118
+ kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
119
+ }
120
+ }
121
+ return kvmid;
122
+}
123
+
124
/*
125
* Valid values for ARMCPRegInfo state field, indicating which of
126
* the AArch32 and AArch64 execution states this register is visible in.
127
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
39
index XXXXXXX..XXXXXXX 100644
128
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/timer/bcm2835_systmr.c
129
--- a/target/arm/cpu.h
41
+++ b/hw/timer/bcm2835_systmr.c
130
+++ b/target/arm/cpu.h
42
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription bcm2835_systmr_vmstate = {
131
@@ -XXX,XX +XXX,XX @@ void arm_cpu_list(void);
43
.minimum_version_id = 1,
132
uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
44
.fields = (VMStateField[]) {
133
uint32_t cur_el, bool secure);
45
VMSTATE_UINT32(reg.status, BCM2835SystemTimerState),
134
46
- VMSTATE_UINT32_ARRAY(reg.compare, BCM2835SystemTimerState, 4),
135
-/* Interface for defining coprocessor registers.
47
+ VMSTATE_UINT32_ARRAY(reg.compare, BCM2835SystemTimerState,
136
- * Registers are defined in tables of arm_cp_reginfo structs
48
+ BCM2835_SYSTIMER_COUNT),
137
- * which are passed to define_arm_cp_regs().
49
VMSTATE_END_OF_LIST()
138
- */
50
}
139
-
51
};
140
-/* When looking up a coprocessor register we look for it
141
- * via an integer which encodes all of:
142
- * coprocessor number
143
- * Crn, Crm, opc1, opc2 fields
144
- * 32 or 64 bit register (ie is it accessed via MRC/MCR
145
- * or via MRRC/MCRR?)
146
- * non-secure/secure bank (AArch32 only)
147
- * We allow 4 bits for opc1 because MRRC/MCRR have a 4 bit field.
148
- * (In this case crn and opc2 should be zero.)
149
- * For AArch64, there is no 32/64 bit size distinction;
150
- * instead all registers have a 2 bit op0, 3 bit op1 and op2,
151
- * and 4 bit CRn and CRm. The encoding patterns are chosen
152
- * to be easy to convert to and from the KVM encodings, and also
153
- * so that the hashtable can contain both AArch32 and AArch64
154
- * registers (to allow for interprocessing where we might run
155
- * 32 bit code on a 64 bit core).
156
- */
157
-/* This bit is private to our hashtable cpreg; in KVM register
158
- * IDs the AArch64/32 distinction is the KVM_REG_ARM/ARM64
159
- * in the upper bits of the 64 bit ID.
160
- */
161
-#define CP_REG_AA64_SHIFT 28
162
-#define CP_REG_AA64_MASK (1 << CP_REG_AA64_SHIFT)
163
-
164
-/* To enable banking of coprocessor registers depending on ns-bit we
165
- * add a bit to distinguish between secure and non-secure cpregs in the
166
- * hashtable.
167
- */
168
-#define CP_REG_NS_SHIFT 29
169
-#define CP_REG_NS_MASK (1 << CP_REG_NS_SHIFT)
170
-
171
-#define ENCODE_CP_REG(cp, is64, ns, crn, crm, opc1, opc2) \
172
- ((ns) << CP_REG_NS_SHIFT | ((cp) << 16) | ((is64) << 15) | \
173
- ((crn) << 11) | ((crm) << 7) | ((opc1) << 3) | (opc2))
174
-
175
-#define ENCODE_AA64_CP_REG(cp, crn, crm, op0, op1, op2) \
176
- (CP_REG_AA64_MASK | \
177
- ((cp) << CP_REG_ARM_COPROC_SHIFT) | \
178
- ((op0) << CP_REG_ARM64_SYSREG_OP0_SHIFT) | \
179
- ((op1) << CP_REG_ARM64_SYSREG_OP1_SHIFT) | \
180
- ((crn) << CP_REG_ARM64_SYSREG_CRN_SHIFT) | \
181
- ((crm) << CP_REG_ARM64_SYSREG_CRM_SHIFT) | \
182
- ((op2) << CP_REG_ARM64_SYSREG_OP2_SHIFT))
183
-
184
-/* Convert a full 64 bit KVM register ID to the truncated 32 bit
185
- * version used as a key for the coprocessor register hashtable
186
- */
187
-static inline uint32_t kvm_to_cpreg_id(uint64_t kvmid)
188
-{
189
- uint32_t cpregid = kvmid;
190
- if ((kvmid & CP_REG_ARCH_MASK) == CP_REG_ARM64) {
191
- cpregid |= CP_REG_AA64_MASK;
192
- } else {
193
- if ((kvmid & CP_REG_SIZE_MASK) == CP_REG_SIZE_U64) {
194
- cpregid |= (1 << 15);
195
- }
196
-
197
- /* KVM is always non-secure so add the NS flag on AArch32 register
198
- * entries.
199
- */
200
- cpregid |= 1 << CP_REG_NS_SHIFT;
201
- }
202
- return cpregid;
203
-}
204
-
205
-/* Convert a truncated 32 bit hashtable key into the full
206
- * 64 bit KVM register ID.
207
- */
208
-static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
209
-{
210
- uint64_t kvmid;
211
-
212
- if (cpregid & CP_REG_AA64_MASK) {
213
- kvmid = cpregid & ~CP_REG_AA64_MASK;
214
- kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM64;
215
- } else {
216
- kvmid = cpregid & ~(1 << 15);
217
- if (cpregid & (1 << 15)) {
218
- kvmid |= CP_REG_SIZE_U64 | CP_REG_ARM;
219
- } else {
220
- kvmid |= CP_REG_SIZE_U32 | CP_REG_ARM;
221
- }
222
- }
223
- return kvmid;
224
-}
225
-
226
/* Return the highest implemented Exception Level */
227
static inline int arm_highest_el(CPUARMState *env)
228
{
52
--
229
--
53
2.20.1
230
2.34.1
54
231
55
232
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
The variable holding the CTRL_STATUS register is misnamed
4
'status'. Rename it 'ctrl_status' to make it more obvious
5
this register is also used to control the peripheral.
6
7
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20201010203709.3116542-3-f4bug@amsat.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
include/hw/timer/bcm2835_systmr.h | 2 +-
14
hw/timer/bcm2835_systmr.c | 8 ++++----
15
2 files changed, 5 insertions(+), 5 deletions(-)
16
17
diff --git a/include/hw/timer/bcm2835_systmr.h b/include/hw/timer/bcm2835_systmr.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/timer/bcm2835_systmr.h
20
+++ b/include/hw/timer/bcm2835_systmr.h
21
@@ -XXX,XX +XXX,XX @@ struct BCM2835SystemTimerState {
22
qemu_irq irq;
23
24
struct {
25
- uint32_t status;
26
+ uint32_t ctrl_status;
27
uint32_t compare[BCM2835_SYSTIMER_COUNT];
28
} reg;
29
};
30
diff --git a/hw/timer/bcm2835_systmr.c b/hw/timer/bcm2835_systmr.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/timer/bcm2835_systmr.c
33
+++ b/hw/timer/bcm2835_systmr.c
34
@@ -XXX,XX +XXX,XX @@ REG32(COMPARE3, 0x18)
35
36
static void bcm2835_systmr_update_irq(BCM2835SystemTimerState *s)
37
{
38
- bool enable = !!s->reg.status;
39
+ bool enable = !!s->reg.ctrl_status;
40
41
trace_bcm2835_systmr_irq(enable);
42
qemu_set_irq(s->irq, enable);
43
@@ -XXX,XX +XXX,XX @@ static uint64_t bcm2835_systmr_read(void *opaque, hwaddr offset,
44
45
switch (offset) {
46
case A_CTRL_STATUS:
47
- r = s->reg.status;
48
+ r = s->reg.ctrl_status;
49
break;
50
case A_COMPARE0 ... A_COMPARE3:
51
r = s->reg.compare[(offset - A_COMPARE0) >> 2];
52
@@ -XXX,XX +XXX,XX @@ static void bcm2835_systmr_write(void *opaque, hwaddr offset,
53
trace_bcm2835_systmr_write(offset, value);
54
switch (offset) {
55
case A_CTRL_STATUS:
56
- s->reg.status &= ~value; /* Ack */
57
+ s->reg.ctrl_status &= ~value; /* Ack */
58
bcm2835_systmr_update_irq(s);
59
break;
60
case A_COMPARE0 ... A_COMPARE3:
61
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription bcm2835_systmr_vmstate = {
62
.version_id = 1,
63
.minimum_version_id = 1,
64
.fields = (VMStateField[]) {
65
- VMSTATE_UINT32(reg.status, BCM2835SystemTimerState),
66
+ VMSTATE_UINT32(reg.ctrl_status, BCM2835SystemTimerState),
67
VMSTATE_UINT32_ARRAY(reg.compare, BCM2835SystemTimerState,
68
BCM2835_SYSTIMER_COUNT),
69
VMSTATE_END_OF_LIST()
70
--
71
2.20.1
72
73
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
The SYS_timer is not directly wired to the ARM core, but to the
4
SoC (peripheral) interrupt controller.
5
6
Fixes: 0e5bbd74064 ("hw/arm/bcm2835_peripherals: Use the SYS_timer")
7
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20201010203709.3116542-5-f4bug@amsat.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/arm/bcm2835_peripherals.c | 13 +++++++++++--
14
1 file changed, 11 insertions(+), 2 deletions(-)
15
16
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/bcm2835_peripherals.c
19
+++ b/hw/arm/bcm2835_peripherals.c
20
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
21
memory_region_add_subregion(&s->peri_mr, ST_OFFSET,
22
sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systmr), 0));
23
sysbus_connect_irq(SYS_BUS_DEVICE(&s->systmr), 0,
24
- qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_ARM_IRQ,
25
- INTERRUPT_ARM_TIMER));
26
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
27
+ INTERRUPT_TIMER0));
28
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systmr), 1,
29
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
30
+ INTERRUPT_TIMER1));
31
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systmr), 2,
32
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
33
+ INTERRUPT_TIMER2));
34
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->systmr), 3,
35
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
36
+ INTERRUPT_TIMER3));
37
38
/* UART0 */
39
qdev_prop_set_chr(DEVICE(&s->uart0), "chardev", serial_hd(0));
40
--
41
2.20.1
42
43
diff view generated by jsdifflib
Deleted patch
1
From: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
2
1
3
Current documentation is not too clear on the GETPC usage.
4
In particular, when used outside the top level helper function
5
it causes unexpected behavior.
6
7
Signed-off-by: Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
8
Message-id: 20201015095147.1691-1-e.emanuelegiuseppe@gmail.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
docs/devel/loads-stores.rst | 8 +++++++-
13
1 file changed, 7 insertions(+), 1 deletion(-)
14
15
diff --git a/docs/devel/loads-stores.rst b/docs/devel/loads-stores.rst
16
index XXXXXXX..XXXXXXX 100644
17
--- a/docs/devel/loads-stores.rst
18
+++ b/docs/devel/loads-stores.rst
19
@@ -XXX,XX +XXX,XX @@ guest CPU state in case of a guest CPU exception. This is passed
20
to ``cpu_restore_state()``. Therefore the value should either be 0,
21
to indicate that the guest CPU state is already synchronized, or
22
the result of ``GETPC()`` from the top level ``HELPER(foo)``
23
-function, which is a return address into the generated code.
24
+function, which is a return address into the generated code [#gpc]_.
25
+
26
+.. [#gpc] Note that ``GETPC()`` should be used with great care: calling
27
+ it in other functions that are *not* the top level
28
+ ``HELPER(foo)`` will cause unexpected behavior. Instead, the
29
+ value of ``GETPC()`` should be read from the helper and passed
30
+ if needed to the functions that the helper calls.
31
32
Function names follow the pattern:
33
34
--
35
2.20.1
36
37
diff view generated by jsdifflib
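The loads-stores.rst footnote above documents the usual QEMU convention: read GETPC() exactly once, in the outermost HELPER(foo) frame, and thread the value down as a return-address parameter. The sketch below is schematic only; GETPC() and HELPER() are stubbed with placeholder definitions, and CPUFooState/do_access() are invented names, so nothing here is a real QEMU API beyond the pattern itself.

    #include <stdint.h>

    /* Placeholder stubs so the pattern compiles outside QEMU. */
    #define GETPC() ((uintptr_t)__builtin_return_address(0))
    #define HELPER(name) helper_##name

    typedef struct CPUFooState CPUFooState;

    static void do_access(CPUFooState *env, uint64_t addr, uintptr_t ra)
    {
        /* Deeper code keeps passing "ra" along; it never calls GETPC() itself. */
        (void)env; (void)addr; (void)ra;
    }

    void HELPER(foo)(CPUFooState *env, uint64_t addr)
    {
        uintptr_t ra = GETPC();    /* read once, at the top level */

        do_access(env, addr, ra);
    }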
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
Add trace events for GPU and CPU IRQs.
4
5
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20201017180731.1165871-2-f4bug@amsat.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/intc/bcm2835_ic.c | 4 +++-
11
hw/intc/trace-events | 4 ++++
12
2 files changed, 7 insertions(+), 1 deletion(-)
13
14
diff --git a/hw/intc/bcm2835_ic.c b/hw/intc/bcm2835_ic.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/bcm2835_ic.c
17
+++ b/hw/intc/bcm2835_ic.c
18
@@ -XXX,XX +XXX,XX @@
19
#include "migration/vmstate.h"
20
#include "qemu/log.h"
21
#include "qemu/module.h"
22
+#include "trace.h"
23
24
#define GPU_IRQS 64
25
#define ARM_IRQS 8
26
@@ -XXX,XX +XXX,XX @@ static void bcm2835_ic_update(BCM2835ICState *s)
27
set = (s->gpu_irq_level & s->gpu_irq_enable)
28
|| (s->arm_irq_level & s->arm_irq_enable);
29
qemu_set_irq(s->irq, set);
30
-
31
}
32
33
static void bcm2835_ic_set_gpu_irq(void *opaque, int irq, int level)
34
@@ -XXX,XX +XXX,XX @@ static void bcm2835_ic_set_gpu_irq(void *opaque, int irq, int level)
35
BCM2835ICState *s = opaque;
36
37
assert(irq >= 0 && irq < 64);
38
+ trace_bcm2835_ic_set_gpu_irq(irq, level);
39
s->gpu_irq_level = deposit64(s->gpu_irq_level, irq, 1, level != 0);
40
bcm2835_ic_update(s);
41
}
42
@@ -XXX,XX +XXX,XX @@ static void bcm2835_ic_set_arm_irq(void *opaque, int irq, int level)
43
BCM2835ICState *s = opaque;
44
45
assert(irq >= 0 && irq < 8);
46
+ trace_bcm2835_ic_set_cpu_irq(irq, level);
47
s->arm_irq_level = deposit32(s->arm_irq_level, irq, 1, level != 0);
48
bcm2835_ic_update(s);
49
}
50
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
51
index XXXXXXX..XXXXXXX 100644
52
--- a/hw/intc/trace-events
53
+++ b/hw/intc/trace-events
54
@@ -XXX,XX +XXX,XX @@ nvic_sysreg_write(uint64_t addr, uint32_t value, unsigned size) "NVIC sysreg wri
55
heathrow_write(uint64_t addr, unsigned int n, uint64_t value) "0x%"PRIx64" %u: 0x%"PRIx64
56
heathrow_read(uint64_t addr, unsigned int n, uint64_t value) "0x%"PRIx64" %u: 0x%"PRIx64
57
heathrow_set_irq(int num, int level) "set_irq: num=0x%02x level=%d"
58
+
59
+# bcm2835_ic.c
60
+bcm2835_ic_set_gpu_irq(int irq, int level) "GPU irq #%d level %d"
61
+bcm2835_ic_set_cpu_irq(int irq, int level) "CPU irq #%d level %d"
62
--
63
2.20.1
64
65
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
The IRQ values are defined few lines earlier, use them instead of
4
the magic numbers.
5
6
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Message-id: 20201017180731.1165871-3-f4bug@amsat.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/intc/bcm2836_control.c | 8 ++++----
12
1 file changed, 4 insertions(+), 4 deletions(-)
13
14
diff --git a/hw/intc/bcm2836_control.c b/hw/intc/bcm2836_control.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/bcm2836_control.c
17
+++ b/hw/intc/bcm2836_control.c
18
@@ -XXX,XX +XXX,XX @@ static void bcm2836_control_set_local_irq(void *opaque, int core, int local_irq,
19
20
static void bcm2836_control_set_local_irq0(void *opaque, int core, int level)
21
{
22
- bcm2836_control_set_local_irq(opaque, core, 0, level);
23
+ bcm2836_control_set_local_irq(opaque, core, IRQ_CNTPSIRQ, level);
24
}
25
26
static void bcm2836_control_set_local_irq1(void *opaque, int core, int level)
27
{
28
- bcm2836_control_set_local_irq(opaque, core, 1, level);
29
+ bcm2836_control_set_local_irq(opaque, core, IRQ_CNTPNSIRQ, level);
30
}
31
32
static void bcm2836_control_set_local_irq2(void *opaque, int core, int level)
33
{
34
- bcm2836_control_set_local_irq(opaque, core, 2, level);
35
+ bcm2836_control_set_local_irq(opaque, core, IRQ_CNTHPIRQ, level);
36
}
37
38
static void bcm2836_control_set_local_irq3(void *opaque, int core, int level)
39
{
40
- bcm2836_control_set_local_irq(opaque, core, 3, level);
41
+ bcm2836_control_set_local_irq(opaque, core, IRQ_CNTVIRQ, level);
42
}
43
44
static void bcm2836_control_set_gpu_irq(void *opaque, int irq, int level)
45
--
46
2.20.1
47
48
diff view generated by jsdifflib
1
From: Peng Liang <liangpeng10@huawei.com>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
VMStateDescription.fields should end with VMSTATE_END_OF_LIST().
3
If a test was tagged with the "accel" tag and the specified
4
However, microbit_i2c_vmstate doesn't follow it. Let's change it.
4
accelerator is not present in the qemu binary, cancel the test.
5
5
6
Fixes: 9d68bf564e ("arm: Stub out NRF51 TWI magnetometer/accelerometer detection")
6
We can now write tests without explicit calls to require_accelerator;
7
Reported-by: Euler Robot <euler.robot@huawei.com>
7
just the tag is enough.
8
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
8
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Message-id: 20201019093401.2993833-1-liangpeng10@huawei.com
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
hw/i2c/microbit_i2c.c | 1 +
14
tests/avocado/avocado_qemu/__init__.py | 4 ++++
14
1 file changed, 1 insertion(+)
15
1 file changed, 4 insertions(+)
15
16
16
diff --git a/hw/i2c/microbit_i2c.c b/hw/i2c/microbit_i2c.c
17
diff --git a/tests/avocado/avocado_qemu/__init__.py b/tests/avocado/avocado_qemu/__init__.py
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/i2c/microbit_i2c.c
19
--- a/tests/avocado/avocado_qemu/__init__.py
19
+++ b/hw/i2c/microbit_i2c.c
20
+++ b/tests/avocado/avocado_qemu/__init__.py
20
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription microbit_i2c_vmstate = {
21
@@ -XXX,XX +XXX,XX @@ def setUp(self):
21
.fields = (VMStateField[]) {
22
22
VMSTATE_UINT32_ARRAY(regs, MicrobitI2CState, MICROBIT_I2C_NREGS),
23
super().setUp('qemu-system-')
23
VMSTATE_UINT32(read_idx, MicrobitI2CState),
24
24
+ VMSTATE_END_OF_LIST()
25
+ accel_required = self._get_unique_tag_val('accel')
25
},
26
+ if accel_required:
26
};
27
+ self.require_accelerator(accel_required)
28
+
29
self.machine = self.params.get('machine',
30
default=self._get_unique_tag_val('machine'))
27
31
28
--
32
--
29
2.20.1
33
2.34.1
30
34
31
35
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
The kernel sets btype for the signal handler as if for a call.
3
This allows the test to be skipped when TCG is not present in the QEMU
4
binary.
4
5
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20201016184207.786698-2-richard.henderson@linaro.org
8
Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
linux-user/aarch64/signal.c | 10 ++++++++--
11
tests/avocado/boot_linux_console.py | 1 +
11
1 file changed, 8 insertions(+), 2 deletions(-)
12
tests/avocado/reverse_debugging.py | 8 ++++++++
13
2 files changed, 9 insertions(+)
12
14
13
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
15
diff --git a/tests/avocado/boot_linux_console.py b/tests/avocado/boot_linux_console.py
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/linux-user/aarch64/signal.c
17
--- a/tests/avocado/boot_linux_console.py
16
+++ b/linux-user/aarch64/signal.c
18
+++ b/tests/avocado/boot_linux_console.py
17
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
19
@@ -XXX,XX +XXX,XX @@ def test_arm_orangepi_uboot_netbsd9(self):
18
+ offsetof(struct target_rt_frame_record, tramp);
20
19
}
21
def test_aarch64_raspi3_atf(self):
20
env->xregs[0] = usig;
22
"""
21
- env->xregs[31] = frame_addr;
23
+ :avocado: tags=accel:tcg
22
env->xregs[29] = frame_addr + fr_ofs;
24
:avocado: tags=arch:aarch64
23
- env->pc = ka->_sa_handler;
25
:avocado: tags=machine:raspi3b
24
env->xregs[30] = return_addr;
26
:avocado: tags=cpu:cortex-a53
25
+ env->xregs[31] = frame_addr;
27
diff --git a/tests/avocado/reverse_debugging.py b/tests/avocado/reverse_debugging.py
26
+ env->pc = ka->_sa_handler;
28
index XXXXXXX..XXXXXXX 100644
29
--- a/tests/avocado/reverse_debugging.py
30
+++ b/tests/avocado/reverse_debugging.py
31
@@ -XXX,XX +XXX,XX @@ def reverse_debugging(self, shift=7, args=None):
32
vm.shutdown()
33
34
class ReverseDebugging_X86_64(ReverseDebugging):
35
+ """
36
+ :avocado: tags=accel:tcg
37
+ """
27
+
38
+
28
+ /* Invoke the signal handler as if by indirect call. */
39
REG_PC = 0x10
29
+ if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
40
REG_CS = 0x12
30
+ env->btype = 2;
41
def get_pc(self, g):
31
+ }
42
@@ -XXX,XX +XXX,XX @@ def test_x86_64_pc(self):
43
self.reverse_debugging()
44
45
class ReverseDebugging_AArch64(ReverseDebugging):
46
+ """
47
+ :avocado: tags=accel:tcg
48
+ """
32
+
49
+
33
if (info) {
50
REG_PC = 32
34
tswap_siginfo(&frame->info, info);
51
35
env->xregs[1] = frame_addr + offsetof(struct target_rt_sigframe, info);
52
# unidentified gitlab timeout problem
36
--
53
--
37
2.20.1
54
2.34.1
38
55
39
56
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
Commit 7998beb9c2e removed the ram_size initialization in the
3
Now that the cortex-a15 is under CONFIG_TCG, use the 'max' CPU as the
4
arm_boot_info structure; however, it is used by arm_load_kernel().
4
default for a KVM-only build.
5
5
6
Initialize the field to fix:
6
Note that we cannot use 'host' here because the qtests can run without
7
any accelerator other than qtest, and 'host' depends on KVM being
8
enabled.
7
9
8
$ qemu-system-arm -M n800 -append 'console=ttyS1' \
10
Signed-off-by: Fabiano Rosas <farosas@suse.de>
9
-kernel meego-arm-n8x0-1.0.80.20100712.1431-vmlinuz-2.6.35~rc4-129.1-n8x0
11
Acked-by: Richard Henderson <richard.henderson@linaro.org>
10
qemu-system-arm: kernel 'meego-arm-n8x0-1.0.80.20100712.1431-vmlinuz-2.6.35~rc4-129.1-n8x0' is too large to fit in RAM (kernel size 1964608, RAM size 0)
12
Reviewed-by: Thomas Huth <thuth@redhat.com>
11
12
Noticed while running the test introduced in commit 050a82f0c5b
13
("tests/acceptance: Add a test for the N800 and N810 arm machines").
14
15
Fixes: 7998beb9c2e ("arm/nseries: use memdev for RAM")
16
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Tested-by: Thomas Huth <thuth@redhat.com>
19
Message-id: 20201019095148.1602119-1-f4bug@amsat.org
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
---
14
---
22
hw/arm/nseries.c | 1 +
15
hw/arm/virt.c | 4 ++++
23
1 file changed, 1 insertion(+)
16
1 file changed, 4 insertions(+)
24
17
25
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
18
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
26
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
27
--- a/hw/arm/nseries.c
20
--- a/hw/arm/virt.c
28
+++ b/hw/arm/nseries.c
21
+++ b/hw/arm/virt.c
29
@@ -XXX,XX +XXX,XX @@ static void n8x0_init(MachineState *machine,
22
@@ -XXX,XX +XXX,XX @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
30
g_free(sz);
23
mc->minimum_page_bits = 12;
31
exit(EXIT_FAILURE);
24
mc->possible_cpu_arch_ids = virt_possible_cpu_arch_ids;
32
}
25
mc->cpu_index_to_instance_props = virt_cpu_index_to_props;
33
+ binfo->ram_size = machine->ram_size;
26
+#ifdef CONFIG_TCG
34
27
mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a15");
35
memory_region_add_subregion(get_system_memory(), OMAP2_Q2_BASE,
28
+#else
36
machine->ram);
29
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("max");
30
+#endif
31
mc->get_default_cpu_node_id = virt_get_default_cpu_node_id;
32
mc->kvm_type = virt_kvm_type;
33
assert(!mc->get_hotplug_handler);
37
--
34
--
38
2.20.1
35
2.34.1
39
40
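A note on the nseries fix above: arm_load_kernel() compares the kernel image against the RAM
size recorded in the arm_boot_info structure, so a board that now gets its RAM from a memdev
must still copy machine->ram_size into that structure before loading the kernel. A rough
sketch of the calling pattern (hypothetical, simplified names; not the exact nseries code):

    static struct arm_boot_info binfo;

    static void board_init(MachineState *machine, ARMCPU *cpu)
    {
        binfo.ram_size = machine->ram_size;    /* the field the fix initializes */
        arm_load_kernel(cpu, machine, &binfo); /* rejects kernels larger than binfo.ram_size */
    }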
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
The note test requires gcc 10 for -mbranch-protection=standard.
3
Signed-off-by: Fabiano Rosas <farosas@suse.de>
4
The mmap test uses PROT_BTI and does not require special compiler support.
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
5
Acked-by: Thomas Huth <thuth@redhat.com>
6
Acked-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20201016184207.786698-13-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
7
---
12
tests/tcg/aarch64/bti-1.c | 62 +++++++++++++++++
8
tests/qtest/arm-cpu-features.c | 28 ++++++++++++++++++----------
13
tests/tcg/aarch64/bti-2.c | 108 ++++++++++++++++++++++++++++++
9
1 file changed, 18 insertions(+), 10 deletions(-)
14
tests/tcg/aarch64/bti-crt.inc.c | 51 ++++++++++++++
15
tests/tcg/aarch64/Makefile.target | 10 +++
16
tests/tcg/configure.sh | 4 ++
17
5 files changed, 235 insertions(+)
18
create mode 100644 tests/tcg/aarch64/bti-1.c
19
create mode 100644 tests/tcg/aarch64/bti-2.c
20
create mode 100644 tests/tcg/aarch64/bti-crt.inc.c
21
10
22
diff --git a/tests/tcg/aarch64/bti-1.c b/tests/tcg/aarch64/bti-1.c
11
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
23
new file mode 100644
12
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX
13
--- a/tests/qtest/arm-cpu-features.c
25
--- /dev/null
14
+++ b/tests/qtest/arm-cpu-features.c
26
+++ b/tests/tcg/aarch64/bti-1.c
27
@@ -XXX,XX +XXX,XX @@
15
@@ -XXX,XX +XXX,XX @@
28
+/*
16
#define SVE_MAX_VQ 16
29
+ * Branch target identification, basic notskip cases.
17
30
+ */
18
#define MACHINE "-machine virt,gic-version=max -accel tcg "
31
+
19
-#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm -accel tcg "
32
+#include "bti-crt.inc.c"
20
+#define MACHINE_KVM "-machine virt,gic-version=max -accel kvm "
33
+
21
#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
34
+static void skip2_sigill(int sig, siginfo_t *info, ucontext_t *uc)
22
" 'arguments': { 'type': 'full', "
35
+{
23
#define QUERY_TAIL "}}"
36
+ uc->uc_mcontext.pc += 8;
24
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
37
+ uc->uc_mcontext.pstate = 1;
25
{
38
+}
26
g_test_init(&argc, &argv, NULL);
39
+
27
40
+#define NOP "nop"
28
- qtest_add_data_func("/arm/query-cpu-model-expansion",
41
+#define BTI_N "hint #32"
29
- NULL, test_query_cpu_model_expansion);
42
+#define BTI_C "hint #34"
30
+ if (qtest_has_accel("tcg")) {
43
+#define BTI_J "hint #36"
31
+ qtest_add_data_func("/arm/query-cpu-model-expansion",
44
+#define BTI_JC "hint #38"
32
+ NULL, test_query_cpu_model_expansion);
45
+
46
+#define BTYPE_1(DEST) \
47
+ asm("mov %0,#1; adr x16, 1f; br x16; 1: " DEST "; mov %0,#0" \
48
+ : "=r"(skipped) : : "x16")
49
+
50
+#define BTYPE_2(DEST) \
51
+ asm("mov %0,#1; adr x16, 1f; blr x16; 1: " DEST "; mov %0,#0" \
52
+ : "=r"(skipped) : : "x16", "x30")
53
+
54
+#define BTYPE_3(DEST) \
55
+ asm("mov %0,#1; adr x15, 1f; br x15; 1: " DEST "; mov %0,#0" \
56
+ : "=r"(skipped) : : "x15")
57
+
58
+#define TEST(WHICH, DEST, EXPECT) \
59
+ do { WHICH(DEST); fail += skipped ^ EXPECT; } while (0)
60
+
61
+
62
+int main()
63
+{
64
+ int fail = 0;
65
+ int skipped;
66
+
67
+ /* Signal-like with SA_SIGINFO. */
68
+ signal_info(SIGILL, skip2_sigill);
69
+
70
+ TEST(BTYPE_1, NOP, 1);
71
+ TEST(BTYPE_1, BTI_N, 1);
72
+ TEST(BTYPE_1, BTI_C, 0);
73
+ TEST(BTYPE_1, BTI_J, 0);
74
+ TEST(BTYPE_1, BTI_JC, 0);
75
+
76
+ TEST(BTYPE_2, NOP, 1);
77
+ TEST(BTYPE_2, BTI_N, 1);
78
+ TEST(BTYPE_2, BTI_C, 0);
79
+ TEST(BTYPE_2, BTI_J, 1);
80
+ TEST(BTYPE_2, BTI_JC, 0);
81
+
82
+ TEST(BTYPE_3, NOP, 1);
83
+ TEST(BTYPE_3, BTI_N, 1);
84
+ TEST(BTYPE_3, BTI_C, 1);
85
+ TEST(BTYPE_3, BTI_J, 0);
86
+ TEST(BTYPE_3, BTI_JC, 0);
87
+
88
+ return fail;
89
+}
90
diff --git a/tests/tcg/aarch64/bti-2.c b/tests/tcg/aarch64/bti-2.c
91
new file mode 100644
92
index XXXXXXX..XXXXXXX
93
--- /dev/null
94
+++ b/tests/tcg/aarch64/bti-2.c
95
@@ -XXX,XX +XXX,XX @@
96
+/*
97
+ * Branch target identification, basic notskip cases.
98
+ */
99
+
100
+#include <stdio.h>
101
+#include <signal.h>
102
+#include <string.h>
103
+#include <unistd.h>
104
+#include <sys/mman.h>
105
+
106
+#ifndef PROT_BTI
107
+#define PROT_BTI 0x10
108
+#endif
109
+
110
+static void skip2_sigill(int sig, siginfo_t *info, void *vuc)
111
+{
112
+ ucontext_t *uc = vuc;
113
+ uc->uc_mcontext.pc += 8;
114
+ uc->uc_mcontext.pstate = 1;
115
+}
116
+
117
+#define NOP "nop"
118
+#define BTI_N "hint #32"
119
+#define BTI_C "hint #34"
120
+#define BTI_J "hint #36"
121
+#define BTI_JC "hint #38"
122
+
123
+#define BTYPE_1(DEST) \
124
+ "mov x1, #1\n\t" \
125
+ "adr x16, 1f\n\t" \
126
+ "br x16\n" \
127
+"1: " DEST "\n\t" \
128
+ "mov x1, #0"
129
+
130
+#define BTYPE_2(DEST) \
131
+ "mov x1, #1\n\t" \
132
+ "adr x16, 1f\n\t" \
133
+ "blr x16\n" \
134
+"1: " DEST "\n\t" \
135
+ "mov x1, #0"
136
+
137
+#define BTYPE_3(DEST) \
138
+ "mov x1, #1\n\t" \
139
+ "adr x15, 1f\n\t" \
140
+ "br x15\n" \
141
+"1: " DEST "\n\t" \
142
+ "mov x1, #0"
143
+
144
+#define TEST(WHICH, DEST, EXPECT) \
145
+ WHICH(DEST) "\n" \
146
+ ".if " #EXPECT "\n\t" \
147
+ "eor x1, x1," #EXPECT "\n" \
148
+ ".endif\n\t" \
149
+ "add x0, x0, x1\n\t"
150
+
151
+extern char test_begin[], test_end[];
152
+
153
+asm("\n"
154
+"test_begin:\n\t"
155
+ BTI_C "\n\t"
156
+ "mov x2, x30\n\t"
157
+ "mov x0, #0\n\t"
158
+
159
+ TEST(BTYPE_1, NOP, 1)
160
+ TEST(BTYPE_1, BTI_N, 1)
161
+ TEST(BTYPE_1, BTI_C, 0)
162
+ TEST(BTYPE_1, BTI_J, 0)
163
+ TEST(BTYPE_1, BTI_JC, 0)
164
+
165
+ TEST(BTYPE_2, NOP, 1)
166
+ TEST(BTYPE_2, BTI_N, 1)
167
+ TEST(BTYPE_2, BTI_C, 0)
168
+ TEST(BTYPE_2, BTI_J, 1)
169
+ TEST(BTYPE_2, BTI_JC, 0)
170
+
171
+ TEST(BTYPE_3, NOP, 1)
172
+ TEST(BTYPE_3, BTI_N, 1)
173
+ TEST(BTYPE_3, BTI_C, 1)
174
+ TEST(BTYPE_3, BTI_J, 0)
175
+ TEST(BTYPE_3, BTI_JC, 0)
176
+
177
+ "ret x2\n"
178
+"test_end:"
179
+);
180
+
181
+int main()
182
+{
183
+ struct sigaction sa;
184
+
185
+ void *p = mmap(0, getpagesize(),
186
+ PROT_EXEC | PROT_READ | PROT_WRITE | PROT_BTI,
187
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
188
+ if (p == MAP_FAILED) {
189
+ perror("mmap");
190
+ return 1;
191
+ }
33
+ }
192
+
34
+
193
+ memset(&sa, 0, sizeof(sa));
35
+ if (!g_str_equal(qtest_get_arch(), "aarch64")) {
194
+ sa.sa_sigaction = skip2_sigill;
36
+ goto out;
195
+ sa.sa_flags = SA_SIGINFO;
37
+ }
196
+ if (sigaction(SIGILL, &sa, NULL) < 0) {
38
197
+ perror("sigaction");
39
/*
198
+ return 1;
40
* For now we only run KVM specific tests with AArch64 QEMU in
41
* order avoid attempting to run an AArch32 QEMU with KVM on
42
* AArch64 hosts. That won't work and isn't easy to detect.
43
*/
44
- if (g_str_equal(qtest_get_arch(), "aarch64") && qtest_has_accel("kvm")) {
45
+ if (qtest_has_accel("kvm")) {
46
/*
47
* This tests target the 'host' CPU type, so register it only if
48
* KVM is available.
49
*/
50
qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
51
NULL, test_query_cpu_model_expansion_kvm);
52
- }
53
54
- if (g_str_equal(qtest_get_arch(), "aarch64")) {
55
- qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
56
- NULL, sve_tests_sve_max_vq_8);
57
- qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
58
- NULL, sve_tests_sve_off);
59
qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
60
NULL, sve_tests_sve_off_kvm);
61
}
62
63
+ if (qtest_has_accel("tcg")) {
64
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
65
+ NULL, sve_tests_sve_max_vq_8);
66
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
67
+ NULL, sve_tests_sve_off);
199
+ }
68
+ }
200
+
69
+
201
+ memcpy(p, test_begin, test_end - test_begin);
70
+out:
202
+ return ((int (*)(void))p)();
71
return g_test_run();
203
+}
72
}
204
diff --git a/tests/tcg/aarch64/bti-crt.inc.c b/tests/tcg/aarch64/bti-crt.inc.c
205
new file mode 100644
206
index XXXXXXX..XXXXXXX
207
--- /dev/null
208
+++ b/tests/tcg/aarch64/bti-crt.inc.c
209
@@ -XXX,XX +XXX,XX @@
210
+/*
211
+ * Minimal user-environment for testing BTI.
212
+ *
213
+ * Normal libc is not (yet) built with BTI support enabled,
214
+ * and so could generate a BTI TRAP before ever reaching main.
215
+ */
216
+
217
+#include <stdlib.h>
218
+#include <signal.h>
219
+#include <ucontext.h>
220
+#include <asm/unistd.h>
221
+
222
+int main(void);
223
+
224
+void _start(void)
225
+{
226
+ exit(main());
227
+}
228
+
229
+void exit(int ret)
230
+{
231
+ register int x0 __asm__("x0") = ret;
232
+ register int x8 __asm__("x8") = __NR_exit;
233
+
234
+ asm volatile("svc #0" : : "r"(x0), "r"(x8));
235
+ __builtin_unreachable();
236
+}
237
+
238
+/*
239
+ * Irritatingly, the user API struct sigaction does not match the
240
+ * kernel API struct sigaction. So for simplicity, isolate the
241
+ * kernel ABI here, and make this act like signal.
242
+ */
243
+void signal_info(int sig, void (*fn)(int, siginfo_t *, ucontext_t *))
244
+{
245
+ struct kernel_sigaction {
246
+ void (*handler)(int, siginfo_t *, ucontext_t *);
247
+ unsigned long flags;
248
+ unsigned long restorer;
249
+ unsigned long mask;
250
+ } sa = { fn, SA_SIGINFO, 0, 0 };
251
+
252
+ register int x0 __asm__("x0") = sig;
253
+ register void *x1 __asm__("x1") = &sa;
254
+ register void *x2 __asm__("x2") = 0;
255
+ register int x3 __asm__("x3") = sizeof(unsigned long);
256
+ register int x8 __asm__("x8") = __NR_rt_sigaction;
257
+
258
+ asm volatile("svc #0"
259
+ : : "r"(x0), "r"(x1), "r"(x2), "r"(x3), "r"(x8) : "memory");
260
+}
261
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
262
index XXXXXXX..XXXXXXX 100644
263
--- a/tests/tcg/aarch64/Makefile.target
264
+++ b/tests/tcg/aarch64/Makefile.target
265
@@ -XXX,XX +XXX,XX @@ run-pauth-%: QEMU_OPTS += -cpu max
266
run-plugin-pauth-%: QEMU_OPTS += -cpu max
267
endif
268
269
+# BTI Tests
270
+# bti-1 tests the elf notes, so we require special compiler support.
271
+ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_BTI),)
272
+AARCH64_TESTS += bti-1
273
+bti-1: CFLAGS += -mbranch-protection=standard
274
+bti-1: LDFLAGS += -nostdlib
275
+endif
276
+# bti-2 tests PROT_BTI, so no special compiler support required.
277
+AARCH64_TESTS += bti-2
278
+
279
# Semihosting smoke test for linux-user
280
AARCH64_TESTS += semihosting
281
run-semihosting: semihosting
282
diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh
283
index XXXXXXX..XXXXXXX 100755
284
--- a/tests/tcg/configure.sh
285
+++ b/tests/tcg/configure.sh
286
@@ -XXX,XX +XXX,XX @@ for target in $target_list; do
287
-march=armv8.3-a -o $TMPE $TMPC; then
288
echo "CROSS_CC_HAS_ARMV8_3=y" >> $config_target_mak
289
fi
290
+ if do_compiler "$target_compiler" $target_compiler_cflags \
291
+ -mbranch-protection=standard -o $TMPE $TMPC; then
292
+ echo "CROSS_CC_HAS_ARMV8_BTI=y" >> $config_target_mak
293
+ fi
294
;;
295
esac
296
297
--
73
--
298
2.20.1
74
2.34.1
299
300
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Fabiano Rosas <farosas@suse.de>
2
2
3
We already have the full ARMMMUIdx as computed from the
3
These tests set -accel tcg, so restrict them to when TCG is present.
4
function parameter.
5
4
6
For the purpose of regime_has_2_ranges, we can ignore any
5
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
difference between AccType_Normal and AccType_Unpriv, which
6
Acked-by: Richard Henderson <richard.henderson@linaro.org>
8
would be the only difference between the passed mmu_idx
7
Reviewed-by: Thomas Huth <thuth@redhat.com>
9
and arm_mmu_idx_el.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
13
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
14
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
15
Message-id: 20201008162155.161886-2-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
9
---
18
target/arm/mte_helper.c | 3 +--
10
tests/qtest/meson.build | 4 ++--
19
1 file changed, 1 insertion(+), 2 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
20
12
21
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
13
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
22
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/mte_helper.c
15
--- a/tests/qtest/meson.build
24
+++ b/target/arm/mte_helper.c
16
+++ b/tests/qtest/meson.build
25
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
17
@@ -XXX,XX +XXX,XX @@ qtests_arm = \
26
18
# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make bios-tables-test unconditional
27
case 2:
19
qtests_aarch64 = \
28
/* Tag check fail causes asynchronous flag set. */
20
(cpu != 'arm' and unpack_edk2_blobs ? ['bios-tables-test'] : []) + \
29
- mmu_idx = arm_mmu_idx_el(env, el);
21
- (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-test'] : []) + \
30
- if (regime_has_2_ranges(mmu_idx)) {
22
- (config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? ['tpm-tis-device-swtpm-test'] : []) + \
31
+ if (regime_has_2_ranges(arm_mmu_idx)) {
23
+ (config_all.has_key('CONFIG_TCG') and config_all_devices.has_key('CONFIG_TPM_TIS_SYSBUS') ? \
32
select = extract64(dirty_ptr, 55, 1);
24
+ ['tpm-tis-device-test', 'tpm-tis-device-swtpm-test'] : []) + \
33
} else {
25
(config_all_devices.has_key('CONFIG_XLNX_ZYNQMP_ARM') ? ['xlnx-can-test', 'fuzz-xlnx-dp-test'] : []) + \
34
select = 0;
26
(config_all_devices.has_key('CONFIG_RASPI') ? ['bcm2835-dma-test'] : []) + \
27
['arm-cpu-features',
35
--
28
--
36
2.20.1
29
2.34.1
37
38
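Relating to the mte_check_fail change above: regime_has_2_ranges() is what decides whether
pointer bit 55 selects between the two halves of the address space. A one-line sketch of the
idea (illustrative; variable names as in the hunk):

    /* Two-range regimes (e.g. EL1&0) pick the range from bit 55 of the
     * pointer; single-range regimes always use range 0. */
    select = regime_has_2_ranges(arm_mmu_idx) ? extract64(dirty_ptr, 55, 1) : 0;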
Deleted patch
For nested groups like:

  {
    [
      pattern 1
      pattern 2
    ]
    pattern 3
  }

the intended behaviour is that patterns 1 and 2 must not
overlap with each other; if the insn matches neither then
we fall through to pattern 3 as the next thing in the
outer overlapping group.

Currently we generate incorrect code for this situation,
because in the code path for a failed match inside the
inner non-overlapping group we generate a "return" statement,
which causes decode to stop entirely rather than continuing
to the next thing in the outer group.

Generate a "break" instead, so that decode flow behaves
as required for this nested group case.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-2-peter.maydell@linaro.org
---
 scripts/decodetree.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

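To see why the generated "return" is wrong here, it helps to picture the shape of the C code
decodetree emits for the nesting above. This is only an illustrative sketch with made-up
names, not real decodetree output:

    static bool decode_example(DisasContext *ctx, uint32_t insn)
    {
        /* inner non-overlapping group: at most one pattern can match */
        switch (inner_group_bits(insn)) {
        case PATTERN_1_BITS:
            if (trans_pattern_1(ctx, insn)) { return true; }
            break;
        case PATTERN_2_BITS:
            if (trans_pattern_2(ctx, insn)) { return true; }
            break;
        default:
            break;   /* previously "return false;", which stopped decode here */
        }
        /* fall through to the next member of the outer overlapping group */
        if (trans_pattern_3(ctx, insn)) { return true; }
        return false;
    }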
diff --git a/scripts/decodetree.py b/scripts/decodetree.py
34
index XXXXXXX..XXXXXXX 100644
35
--- a/scripts/decodetree.py
36
+++ b/scripts/decodetree.py
37
@@ -XXX,XX +XXX,XX @@ class Tree:
38
output(ind, ' /* ',
39
str_match_bits(innerbits, innermask), ' */\n')
40
s.output_code(i + 4, extracted, innerbits, innermask)
41
- output(ind, ' return false;\n')
42
+ output(ind, ' break;\n')
43
output(ind, '}\n')
44
# end Tree
45
46
--
47
2.20.1
48
49
Deleted patch
The t32 decode has a group which represents a set of insns
which overlap with B_cond_thumb because they have [25:23]=111
(which is an invalid condition code field for the branch insn).
This group is currently defined using the {} overlap-OK syntax,
but it is almost entirely non-overlapping patterns. Switch
it over to use a non-overlapping group.

For this to be valid syntactically, CPS must move into the same
overlapping-group as the hint insns (CPS vs hints was the
only actual use of the overlap facility for the group).

The non-overlapping subgroup for CLREX/DSB/DMB/ISB/SB is no longer
necessary and so we can remove it (promoting those insns to
be members of the parent group).

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20201019151301.2046-5-peter.maydell@linaro.org
---
 target/arm/t32.decode | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/target/arm/t32.decode b/target/arm/t32.decode
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/t32.decode
26
+++ b/target/arm/t32.decode
27
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
28
{
29
# Group insn[25:23] = 111, which is cond=111x for the branch below,
30
# or unconditional, which would be illegal for the branch.
31
- {
32
- # Hints
33
+ [
34
+ # Hints, and CPS
35
{
36
YIELD 1111 0011 1010 1111 1000 0000 0000 0001
37
WFE 1111 0011 1010 1111 1000 0000 0000 0010
38
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
39
# The canonical nop ends in 0000 0000, but the whole rest
40
# of the space is "reserved hint, behaves as nop".
41
NOP 1111 0011 1010 1111 1000 0000 ---- ----
42
+
43
+ # If imod == '00' && M == '0' then SEE "Hint instructions", above.
44
+ CPS 1111 0011 1010 1111 1000 0 imod:2 M:1 A:1 I:1 F:1 mode:5 \
45
+ &cps
46
}
47
48
- # If imod == '00' && M == '0' then SEE "Hint instructions", above.
49
- CPS 1111 0011 1010 1111 1000 0 imod:2 M:1 A:1 I:1 F:1 mode:5 \
50
- &cps
51
-
52
# Miscellaneous control
53
- [
54
- CLREX 1111 0011 1011 1111 1000 1111 0010 1111
55
- DSB 1111 0011 1011 1111 1000 1111 0100 ----
56
- DMB 1111 0011 1011 1111 1000 1111 0101 ----
57
- ISB 1111 0011 1011 1111 1000 1111 0110 ----
58
- SB 1111 0011 1011 1111 1000 1111 0111 0000
59
- ]
60
+ CLREX 1111 0011 1011 1111 1000 1111 0010 1111
61
+ DSB 1111 0011 1011 1111 1000 1111 0100 ----
62
+ DMB 1111 0011 1011 1111 1000 1111 0101 ----
63
+ ISB 1111 0011 1011 1111 1000 1111 0110 ----
64
+ SB 1111 0011 1011 1111 1000 1111 0111 0000
65
66
# Note that the v7m insn overlaps both the normal and banked insn.
67
{
68
@@ -XXX,XX +XXX,XX @@ CLZ 1111 1010 1011 ---- 1111 .... 1000 .... @rdm
69
HVC 1111 0111 1110 .... 1000 .... .... .... \
70
&i imm=%imm16_16_0
71
UDF 1111 0111 1111 ---- 1010 ---- ---- ----
72
- }
73
+ ]
74
B_cond_thumb 1111 0. cond:4 ...... 10.0 ............ &ci imm=%imm21
75
}
76
77
--
78
2.20.1
79
80
Deleted patch
v8.1M's "low-overhead-loop" extension has three instructions
2
for looping:
3
* DLS (start of a do-loop)
4
* WLS (start of a while-loop)
5
* LE (end of a loop)
6
1
7
The loop-start instructions are both simple operations to start a
8
loop whose iteration count (if any) is in LR. The loop-end
9
instruction handles "decrement iteration count and jump back to loop
10
start"; it also caches the information about the branch back to the
11
start of the loop to improve performance of the branch on subsequent
12
iterations.
13
14
As with the branch-future instructions, the architecture permits an
15
implementation to discard the LO_BRANCH_INFO cache at any time, and
16
QEMU takes the IMPDEF option to never set it in the first place
17
(equivalent to discarding it immediately), because for us a "real"
18
implementation would be unnecessary complexity.
19
20
(This implementation only provides the simple looping constructs; the
21
vector extension MVE (Helium) adds some extra variants to handle
22
looping across vectors. We'll add those later when we implement
23
MVE.)
24
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
27
Message-id: 20201019151301.2046-8-peter.maydell@linaro.org
28
---
29
target/arm/t32.decode | 8 ++++
30
target/arm/translate.c | 93 +++++++++++++++++++++++++++++++++++++++++-
31
2 files changed, 99 insertions(+), 2 deletions(-)
32
33
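A rough C model of the loop semantics described above; this is a sketch of the architectural
behaviour with the LO_BRANCH_INFO cache ignored, not QEMU's implementation, and all names
are invented:

    static uint32_t lr;                 /* iteration count lives in LR */

    static void dls(uint32_t rn)        /* do-loop start: just load the count */
    {
        lr = rn;
    }

    static bool wls(uint32_t rn)        /* while-loop start: false means skip the loop body */
    {
        lr = rn;
        return rn != 0;
    }

    static bool le(bool loop_forever)   /* loop end: true means branch back to the start */
    {
        if (loop_forever) {
            return true;
        }
        if (lr <= 1) {
            return false;               /* last iteration: fall out of the loop */
        }
        lr--;
        return true;
    }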
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/t32.decode
36
+++ b/target/arm/t32.decode
37
@@ -XXX,XX +XXX,XX @@ BL 1111 0. .......... 11.1 ............ @branch24
38
BF 1111 0 boff:4 10 ----- 1110 - ---------- 1 # BF
39
BF 1111 0 boff:4 11 ----- 1110 0 0000000000 1 # BFX, BFLX
40
]
41
+ [
42
+ # LE and WLS immediate
43
+ %lob_imm 1:10 11:1 !function=times_2
44
+
45
+ DLS 1111 0 0000 100 rn:4 1110 0000 0000 0001
46
+ WLS 1111 0 0000 100 rn:4 1100 . .......... 1 imm=%lob_imm
47
+ LE 1111 0 0000 0 f:1 0 1111 1100 . .......... 1 imm=%lob_imm
48
+ ]
49
}
50
diff --git a/target/arm/translate.c b/target/arm/translate.c
51
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/translate.c
53
+++ b/target/arm/translate.c
54
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
55
s->base.is_jmp = DISAS_NORETURN;
56
}
57
58
-static inline void gen_jmp (DisasContext *s, uint32_t dest)
59
+/* Jump, specifying which TB number to use if we gen_goto_tb() */
60
+static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
61
{
62
if (unlikely(is_singlestepping(s))) {
63
/* An indirect jump so that we still trigger the debug exception. */
64
gen_set_pc_im(s, dest);
65
s->base.is_jmp = DISAS_JUMP;
66
} else {
67
- gen_goto_tb(s, 0, dest);
68
+ gen_goto_tb(s, tbno, dest);
69
}
70
}
71
72
+static inline void gen_jmp(DisasContext *s, uint32_t dest)
73
+{
74
+ gen_jmp_tb(s, dest, 0);
75
+}
76
+
77
static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
78
{
79
if (x)
80
@@ -XXX,XX +XXX,XX @@ static bool trans_BF(DisasContext *s, arg_BF *a)
81
return true;
82
}
83
84
+static bool trans_DLS(DisasContext *s, arg_DLS *a)
85
+{
86
+ /* M-profile low-overhead loop start */
87
+ TCGv_i32 tmp;
88
+
89
+ if (!dc_isar_feature(aa32_lob, s)) {
90
+ return false;
91
+ }
92
+ if (a->rn == 13 || a->rn == 15) {
93
+ /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
94
+ return false;
95
+ }
96
+
97
+ /* Not a while loop, no tail predication: just set LR to the count */
98
+ tmp = load_reg(s, a->rn);
99
+ store_reg(s, 14, tmp);
100
+ return true;
101
+}
102
+
103
+static bool trans_WLS(DisasContext *s, arg_WLS *a)
104
+{
105
+ /* M-profile low-overhead while-loop start */
106
+ TCGv_i32 tmp;
107
+ TCGLabel *nextlabel;
108
+
109
+ if (!dc_isar_feature(aa32_lob, s)) {
110
+ return false;
111
+ }
112
+ if (a->rn == 13 || a->rn == 15) {
113
+ /* CONSTRAINED UNPREDICTABLE: we choose to UNDEF */
114
+ return false;
115
+ }
116
+ if (s->condexec_mask) {
117
+ /*
118
+ * WLS in an IT block is CONSTRAINED UNPREDICTABLE;
119
+ * we choose to UNDEF, because otherwise our use of
120
+ * gen_goto_tb(1) would clash with the use of TB exit 1
121
+ * in the dc->condjmp condition-failed codepath in
122
+ * arm_tr_tb_stop() and we'd get an assertion.
123
+ */
124
+ return false;
125
+ }
126
+ nextlabel = gen_new_label();
127
+ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel);
128
+ tmp = load_reg(s, a->rn);
129
+ store_reg(s, 14, tmp);
130
+ gen_jmp_tb(s, s->base.pc_next, 1);
131
+
132
+ gen_set_label(nextlabel);
133
+ gen_jmp(s, read_pc(s) + a->imm);
134
+ return true;
135
+}
136
+
137
+static bool trans_LE(DisasContext *s, arg_LE *a)
138
+{
139
+ /*
140
+ * M-profile low-overhead loop end. The architecture permits an
141
+ * implementation to discard the LO_BRANCH_INFO cache at any time,
142
+ * and we take the IMPDEF option to never set it in the first place
143
+ * (equivalent to always discarding it immediately), because for QEMU
144
+ * a "real" implementation would be complicated and wouldn't execute
145
+ * any faster.
146
+ */
147
+ TCGv_i32 tmp;
148
+
149
+ if (!dc_isar_feature(aa32_lob, s)) {
150
+ return false;
151
+ }
152
+
153
+ if (!a->f) {
154
+ /* Not loop-forever. If LR <= 1 this is the last loop: do nothing. */
155
+ arm_gen_condlabel(s);
156
+ tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, s->condlabel);
157
+ /* Decrement LR */
158
+ tmp = load_reg(s, 14);
159
+ tcg_gen_addi_i32(tmp, tmp, -1);
160
+ store_reg(s, 14, tmp);
161
+ }
162
+ /* Jump back to the loop start */
163
+ gen_jmp(s, read_pc(s) - a->imm);
164
+ return true;
165
+}
166
+
167
static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
168
{
169
TCGv_i32 addr, tmp;
170
--
171
2.20.1
172
173
Deleted patch
M-profile CPUs with half-precision floating point support should
be able to write to FPSCR.FZ16, but an M-profile specific masking
of the value at the top of vfp_set_fpscr() currently prevents that.
This is not yet an active bug because we have no M-profile
FP16 CPUs, but needs to be fixed before we can add any.

The bits that the masking is effectively preventing from being
set are the A-profile only short-vector Len and Stride fields,
plus the Neon QC bit. Rearrange the order of the function so
that those fields are handled earlier and only under a suitable
guard; this allows us to drop the M-profile specific masking,
making FZ16 writeable.

This change also makes the QC bit correctly RAZ/WI for older
no-Neon A-profile cores.

This refactoring also paves the way for the low-overhead-branch
LTPSIZE field, which uses some of the bits that are used for
A-profile Stride and Len.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201019151301.2046-10-peter.maydell@linaro.org
---
 target/arm/vfp_helper.c | 47 ++++++++++++++++++++++++-----------------
 1 file changed, 28 insertions(+), 19 deletions(-)

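For orientation, the FPSCR fields the message refers to sit roughly here (assumed standard
Arm bit positions, written as a sketch rather than taken from this patch):

    #define FPSCR_LEN_MASK    (7u << 16)   /* A-profile short-vector length */
    #define FPSCR_FZ16_BIT    (1u << 19)   /* flush-to-zero for half precision */
    #define FPSCR_STRIDE_MASK (3u << 20)   /* A-profile short-vector stride */
    #define FPSCR_QC_BIT      (1u << 27)   /* cumulative saturation flag (Neon) */

The old M-profile mask value 0xf7c0009f clears bit 19 along with Len, Stride and QC, which
is why FZ16 could not be written before this rearrangement.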
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/vfp_helper.c
31
+++ b/target/arm/vfp_helper.c
32
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
33
val &= ~FPCR_FZ16;
34
}
35
36
- if (arm_feature(env, ARM_FEATURE_M)) {
37
+ vfp_set_fpscr_to_host(env, val);
38
+
39
+ if (!arm_feature(env, ARM_FEATURE_M)) {
40
/*
41
- * M profile FPSCR is RES0 for the QC, STRIDE, FZ16, LEN bits
42
- * and also for the trapped-exception-handling bits IxE.
43
+ * Short-vector length and stride; on M-profile these bits
44
+ * are used for different purposes.
45
+ * We can't make this conditional be "if MVFR0.FPShVec != 0",
46
+ * because in v7A no-short-vector-support cores still had to
47
+ * allow Stride/Len to be written with the only effect that
48
+ * some insns are required to UNDEF if the guest sets them.
49
+ *
50
+ * TODO: if M-profile MVE implemented, set LTPSIZE.
51
*/
52
- val &= 0xf7c0009f;
53
+ env->vfp.vec_len = extract32(val, 16, 3);
54
+ env->vfp.vec_stride = extract32(val, 20, 2);
55
}
56
57
- vfp_set_fpscr_to_host(env, val);
58
+ if (arm_feature(env, ARM_FEATURE_NEON)) {
59
+ /*
60
+ * The bit we set within fpscr_q is arbitrary; the register as a
61
+ * whole being zero/non-zero is what counts.
62
+ * TODO: M-profile MVE also has a QC bit.
63
+ */
64
+ env->vfp.qc[0] = val & FPCR_QC;
65
+ env->vfp.qc[1] = 0;
66
+ env->vfp.qc[2] = 0;
67
+ env->vfp.qc[3] = 0;
68
+ }
69
70
/*
71
* We don't implement trapped exception handling, so the
72
* trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!)
73
*
74
- * If we exclude the exception flags, IOC|DZC|OFC|UFC|IXC|IDC
75
- * (which are stored in fp_status), and the other RES0 bits
76
- * in between, then we clear all of the low 16 bits.
77
+ * The exception flags IOC|DZC|OFC|UFC|IXC|IDC are stored in
78
+ * fp_status; QC, Len and Stride are stored separately earlier.
79
+ * Clear out all of those and the RES0 bits: only NZCV, AHP, DN,
80
+ * FZ, RMode and FZ16 are kept in vfp.xregs[FPSCR].
81
*/
82
env->vfp.xregs[ARM_VFP_FPSCR] = val & 0xf7c80000;
83
- env->vfp.vec_len = (val >> 16) & 7;
84
- env->vfp.vec_stride = (val >> 20) & 3;
85
-
86
- /*
87
- * The bit we set within fpscr_q is arbitrary; the register as a
88
- * whole being zero/non-zero is what counts.
89
- */
90
- env->vfp.qc[0] = val & FPCR_QC;
91
- env->vfp.qc[1] = 0;
92
- env->vfp.qc[2] = 0;
93
- env->vfp.qc[3] = 0;
94
}
95
96
void vfp_set_fpscr(CPUARMState *env, uint32_t val)
97
--
98
2.20.1
99
100