First pullreq for arm of the 4.1 series, since I'm back from
holiday now. This is mostly my M-profile FPU series and Philippe's
devices.h cleanup. I have a pile of other patchsets to work through
in my to-review folder, but 42 patches is definitely quite
big enough to send now...

thanks
-- PMM

The following changes since commit 413a99a92c13ec408dcf2adaa87918dc81e890c8:

  Add Nios II semihosting support. (2019-04-29 16:09:51 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190429

for you to fetch changes up to 437cc27ddfded3bbab6afd5ac1761e0e195edba7:

  hw/devices: Move SMSC 91C111 declaration into a new header (2019-04-29 17:57:21 +0100)

----------------------------------------------------------------
target-arm queue:
 * remove "bag of random stuff" hw/devices.h header
 * implement FPU for Cortex-M and enable it for Cortex-M4 and -M33
 * hw/dma: Compile the bcm2835_dma device as common object
 * configure: Remove --source-path option
 * hw/ssi/xilinx_spips: Avoid variable length array
 * hw/arm/smmuv3: Remove SMMUNotifierNode

----------------------------------------------------------------
Eric Auger (1):
      hw/arm/smmuv3: Remove SMMUNotifierNode

Peter Maydell (28):
      hw/ssi/xilinx_spips: Avoid variable length array
      configure: Remove --source-path option
      target/arm: Make sure M-profile FPSCR RES0 bits are not settable
      hw/intc/armv7m_nvic: Allow reading of M-profile MVFR* registers
      target/arm: Implement dummy versions of M-profile FP-related registers
      target/arm: Disable most VFP sysregs for M-profile
      target/arm: Honour M-profile FP enable bits
      target/arm: Decode FP instructions for M profile
      target/arm: Clear CONTROL_S.SFPA in SG insn if FPU present
      target/arm: Handle SFPA and FPCA bits in reads and writes of CONTROL
      target/arm/helper: don't return early for STKOF faults during stacking
      target/arm: Handle floating point registers in exception entry
      target/arm: Implement v7m_update_fpccr()
      target/arm: Clear CONTROL.SFPA in BXNS and BLXNS
      target/arm: Clean excReturn bits when tail chaining
      target/arm: Allow for floating point in callee stack integrity check
      target/arm: Handle floating point registers in exception return
      target/arm: Move NS TBFLAG from bit 19 to bit 6
      target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
      target/arm: Set FPCCR.S when executing M-profile floating point insns
      target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
      target/arm: New helper function arm_v7m_mmu_idx_all()
      target/arm: New function armv7m_nvic_set_pending_lazyfp()
      target/arm: Add lazy-FP-stacking support to v7m_stack_write()
      target/arm: Implement M-profile lazy FP state preservation
      target/arm: Implement VLSTM for v7M CPUs with an FPU
      target/arm: Implement VLLDM for v7M CPUs with an FPU
      target/arm: Enable FPU for Cortex-M4 and Cortex-M33

Philippe Mathieu-Daudé (13):
      hw/dma: Compile the bcm2835_dma device as common object
      hw/arm/aspeed: Use TYPE_TMP105/TYPE_PCA9552 instead of hardcoded string
      hw/arm/nseries: Use TYPE_TMP105 instead of hardcoded string
      hw/display/tc6393xb: Remove unused functions
      hw/devices: Move TC6393XB declarations into a new header
      hw/devices: Move Blizzard declarations into a new header
      hw/devices: Move CBus declarations into a new header
      hw/devices: Move Gamepad declarations into a new header
      hw/devices: Move TI touchscreen declarations into a new header
      hw/devices: Move LAN9118 declarations into a new header
      hw/net/ne2000-isa: Add guards to the header
      hw/net/lan9118: Export TYPE_LAN9118 and use it instead of hardcoded string
      hw/devices: Move SMSC 91C111 declaration into a new header

 configure | 10 +-
 hw/dma/Makefile.objs | 2 +-
 include/hw/arm/omap.h | 6 +-
 include/hw/arm/smmu-common.h | 8 +-
 include/hw/devices.h | 62 ---
 include/hw/display/blizzard.h | 22 ++
 include/hw/display/tc6393xb.h | 24 ++
 include/hw/input/gamepad.h | 19 +
 include/hw/input/tsc2xxx.h | 36 ++
 include/hw/misc/cbus.h | 32 ++
 include/hw/net/lan9118.h | 21 +
 include/hw/net/ne2000-isa.h | 6 +
 include/hw/net/smc91c111.h | 19 +
 include/qemu/typedefs.h | 1 -
 target/arm/cpu.h | 95 ++++-
 target/arm/helper.h | 5 +
 target/arm/translate.h | 3 +
 hw/arm/aspeed.c | 13 +-
 hw/arm/exynos4_boards.c | 3 +-
 hw/arm/gumstix.c | 2 +-
 hw/arm/integratorcp.c | 2 +-
 hw/arm/kzm.c | 2 +-
 hw/arm/mainstone.c | 2 +-
 hw/arm/mps2-tz.c | 3 +-
 hw/arm/mps2.c | 2 +-
 hw/arm/nseries.c | 7 +-
 hw/arm/palm.c | 2 +-
 hw/arm/realview.c | 3 +-
 hw/arm/smmu-common.c | 6 +-
 hw/arm/smmuv3.c | 28 +-
 hw/arm/stellaris.c | 2 +-
 hw/arm/tosa.c | 2 +-
 hw/arm/versatilepb.c | 2 +-
 hw/arm/vexpress.c | 2 +-
 hw/display/blizzard.c | 2 +-
 hw/display/tc6393xb.c | 18 +-
 hw/input/stellaris_input.c | 2 +-
 hw/input/tsc2005.c | 2 +-
 hw/input/tsc210x.c | 4 +-
 hw/intc/armv7m_nvic.c | 261 +++++++++++++
 hw/misc/cbus.c | 2 +-
 hw/net/lan9118.c | 3 +-
 hw/net/smc91c111.c | 2 +-
 hw/ssi/xilinx_spips.c | 6 +-
 target/arm/cpu.c | 20 +
 target/arm/helper.c | 873 +++++++++++++++++++++++++++++++++++++++---
 target/arm/machine.c | 16 +
 target/arm/translate.c | 150 +++++++-
 target/arm/vfp_helper.c | 8 +
 MAINTAINERS | 7 +
 50 files changed, 1595 insertions(+), 235 deletions(-)
 delete mode 100644 include/hw/devices.h
 create mode 100644 include/hw/display/blizzard.h
 create mode 100644 include/hw/display/tc6393xb.h
 create mode 100644 include/hw/input/gamepad.h
 create mode 100644 include/hw/input/tsc2xxx.h
 create mode 100644 include/hw/misc/cbus.h
 create mode 100644 include/hw/net/lan9118.h
 create mode 100644 include/hw/net/smc91c111.h


First arm pullreq of 4.2...

thanks
-- PMM

The following changes since commit 27608c7c66bd923eb5e5faab80e795408cbe2b51:

  Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20190814a' into staging (2019-08-16 12:00:18 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190816

for you to fetch changes up to 664b7e3b97d6376f3329986c465b3782458b0f8b:

  target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word (2019-08-16 14:02:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/arm: generate a custom MIDR for -cpu max
 * hw/misc/zynq_slcr: refactor to use standard register definition
 * Set ENET_BD_BDU in I.MX FEC controller
 * target/arm: Fix routing of singlestep exceptions
 * refactor a32/t32 decoder handling of PC
 * minor optimisations/cleanups of some a32/t32 codegen
 * target/arm/cpu64: Ensure kvm really supports aarch64=off
 * target/arm/cpu: Ensure we can use the pmu with kvm
 * target/arm: Minor cleanups preparatory to KVM SVE support

----------------------------------------------------------------
Aaron Hill (1):
      Set ENET_BD_BDU in I.MX FEC controller

Alex Bennée (1):
      target/arm: generate a custom MIDR for -cpu max

Andrew Jones (6):
      target/arm/cpu64: Ensure kvm really supports aarch64=off
      target/arm/cpu: Ensure we can use the pmu with kvm
      target/arm/helper: zcr: Add build bug next to value range assumption
      target/arm/cpu: Use div-round-up to determine predicate register array size
      target/arm/kvm64: Fix error returns
      target/arm/kvm64: Move the get/put of fpsimd registers out

Damien Hedde (1):
      hw/misc/zynq_slcr: use standard register definition

Peter Maydell (2):
      target/arm: Factor out 'generate singlestep exception' function
      target/arm: Fix routing of singlestep exceptions

Richard Henderson (18):
      target/arm: Pass in pc to thumb_insn_is_16bit
      target/arm: Introduce pc_curr
      target/arm: Introduce read_pc
      target/arm: Introduce add_reg_for_lit
      target/arm: Remove redundant s->pc & ~1
      target/arm: Replace s->pc with s->base.pc_next
      target/arm: Replace offset with pc in gen_exception_insn
      target/arm: Replace offset with pc in gen_exception_internal_insn
      target/arm: Remove offset argument to gen_exception_bkpt_insn
      target/arm: Use unallocated_encoding for aarch32
      target/arm: Remove helper_double_saturate
      target/arm: Use tcg_gen_extract_i32 for shifter_out_im
      target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
      target/arm: Remove redundant shift tests
      target/arm: Use ror32 instead of open-coding the operation
      target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
      target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
      target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word

 target/arm/cpu.h | 13 +-
 target/arm/helper.h | 1 -
 target/arm/kvm_arm.h | 28 ++
 target/arm/translate-a64.h | 4 +-
 target/arm/translate.h | 39 ++-
 hw/misc/zynq_slcr.c | 450 ++++++++++++++++----------------
 hw/net/imx_fec.c | 4 +
 target/arm/cpu.c | 30 ++-
 target/arm/cpu64.c | 31 ++-
 target/arm/helper.c | 7 +
 target/arm/kvm.c | 7 +
 target/arm/kvm64.c | 161 +++++++-----
 target/arm/op_helper.c | 15 --
 target/arm/translate-a64.c | 130 ++++------
 target/arm/translate-vfp.inc.c | 45 +---
 target/arm/translate.c | 572 +++++++++++++++++------------------
 16 files changed, 771 insertions(+), 766 deletions(-)
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.

The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.

Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
 * the M profile CPACR is banked between security states
 * it preserves the invariant that M profile uses no state
   inside the cp15 substruct

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
---
 target/arm/cpu.h | 34 ++++++++++++
 hw/intc/armv7m_nvic.c | 125 ++++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c | 5 ++
 target/arm/machine.c | 16 ++++
 4 files changed, 180 insertions(+)


From: Alex Bennée <alex.bennee@linaro.org>

While most features are now detected by probing the ID_* registers,
kernels can (and do) use MIDR_EL1 for working out if they have to
apply errata. This can trip up warnings in the kernel as it tries to
work out if it should apply workarounds to features that don't
actually exist in the reported CPU type.

Avoid this problem by synthesising our own MIDR value.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190726113950.7499-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 6 ++++++
 target/arm/cpu64.c | 19 +++++++++++++++++++
 2 files changed, 25 insertions(+)
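
(Not part of the patch: to make the arithmetic concrete, here is a minimal
standalone C sketch of the synthesised MIDR value. The shifts simply restate
the MIDR_EL1 field positions added in the diff below; with IMPLEMENTER=0,
ARCHITECTURE=0xf, PARTNUM='Q' (0x51) and VARIANT=REVISION=0 the value comes
out as 0x000f0510.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t midr = 0;

    midr |= 0u   << 24; /* IMPLEMENTER: 0 = "reserved for software use" */
    midr |= 0u   << 20; /* VARIANT: implementation defined, we pick 0 */
    midr |= 0xfu << 16; /* ARCHITECTURE: 0xf = "check the ID registers" */
    midr |= 'Q'  << 4;  /* PARTNUM: impdef, 'Q' marks a QEMU software CPU */
    midr |= 0u   << 0;  /* REVISION */

    printf("MIDR_EL1 = 0x%08x\n", (unsigned)midr); /* prints 0x000f0510 */
    return 0;
}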
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
28
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
23
--- a/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
25
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_FPCCR, ASPEN, 31, 1)
32
uint32_t scr[M_REG_NUM_BANKS];
33
uint32_t msplim[M_REG_NUM_BANKS];
34
uint32_t psplim[M_REG_NUM_BANKS];
35
+ uint32_t fpcar[M_REG_NUM_BANKS];
36
+ uint32_t fpccr[M_REG_NUM_BANKS];
37
+ uint32_t fpdscr[M_REG_NUM_BANKS];
38
+ uint32_t cpacr[M_REG_NUM_BANKS];
39
+ uint32_t nsacr;
40
} v7m;
41
42
/* Information associated with an exception about to be taken:
43
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CSSELR, LEVEL, 1, 3)
44
*/
45
FIELD(V7M_CSSELR, INDEX, 0, 4)
46
47
+/* v7M FPCCR bits */
48
+FIELD(V7M_FPCCR, LSPACT, 0, 1)
49
+FIELD(V7M_FPCCR, USER, 1, 1)
50
+FIELD(V7M_FPCCR, S, 2, 1)
51
+FIELD(V7M_FPCCR, THREAD, 3, 1)
52
+FIELD(V7M_FPCCR, HFRDY, 4, 1)
53
+FIELD(V7M_FPCCR, MMRDY, 5, 1)
54
+FIELD(V7M_FPCCR, BFRDY, 6, 1)
55
+FIELD(V7M_FPCCR, SFRDY, 7, 1)
56
+FIELD(V7M_FPCCR, MONRDY, 8, 1)
57
+FIELD(V7M_FPCCR, SPLIMVIOL, 9, 1)
58
+FIELD(V7M_FPCCR, UFRDY, 10, 1)
59
+FIELD(V7M_FPCCR, RES0, 11, 15)
60
+FIELD(V7M_FPCCR, TS, 26, 1)
61
+FIELD(V7M_FPCCR, CLRONRETS, 27, 1)
62
+FIELD(V7M_FPCCR, CLRONRET, 28, 1)
63
+FIELD(V7M_FPCCR, LSPENS, 29, 1)
64
+FIELD(V7M_FPCCR, LSPEN, 30, 1)
65
+FIELD(V7M_FPCCR, ASPEN, 31, 1)
66
+/* These bits are banked. Others are non-banked and live in the M_REG_S bank */
67
+#define R_V7M_FPCCR_BANKED_MASK \
68
+ (R_V7M_FPCCR_LSPACT_MASK | \
69
+ R_V7M_FPCCR_USER_MASK | \
70
+ R_V7M_FPCCR_THREAD_MASK | \
71
+ R_V7M_FPCCR_MMRDY_MASK | \
72
+ R_V7M_FPCCR_SPLIMVIOL_MASK | \
73
+ R_V7M_FPCCR_UFRDY_MASK | \
74
+ R_V7M_FPCCR_ASPEN_MASK)
75
+
76
/*
26
/*
77
* System register ID fields.
27
* System register ID fields.
78
*/
28
*/
79
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
29
+FIELD(MIDR_EL1, REVISION, 0, 4)
30
+FIELD(MIDR_EL1, PARTNUM, 4, 12)
31
+FIELD(MIDR_EL1, ARCHITECTURE, 16, 4)
32
+FIELD(MIDR_EL1, VARIANT, 20, 4)
33
+FIELD(MIDR_EL1, IMPLEMENTER, 24, 8)
34
+
35
FIELD(ID_ISAR0, SWAP, 0, 4)
36
FIELD(ID_ISAR0, BITCOUNT, 4, 4)
37
FIELD(ID_ISAR0, BITFIELD, 8, 4)
38
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
80
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/intc/armv7m_nvic.c
40
--- a/target/arm/cpu64.c
82
+++ b/hw/intc/armv7m_nvic.c
41
+++ b/target/arm/cpu64.c
83
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
42
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
84
}
43
uint32_t u;
85
case 0xd84: /* CSSELR */
44
aarch64_a57_initfn(obj);
86
return cpu->env.v7m.csselr[attrs.secure];
45
87
+ case 0xd88: /* CPACR */
46
+ /*
88
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
47
+ * Reset MIDR so the guest doesn't mistake our 'max' CPU type for a real
89
+ return 0;
48
+ * one and try to apply errata workarounds or use impdef features we
90
+ }
49
+ * don't provide.
91
+ return cpu->env.v7m.cpacr[attrs.secure];
50
+ * An IMPLEMENTER field of 0 means "reserved for software use";
92
+ case 0xd8c: /* NSACR */
51
+ * ARCHITECTURE must be 0xf indicating "v7 or later, check ID registers
93
+ if (!attrs.secure || !arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
52
+ * to see which features are present";
94
+ return 0;
53
+ * the VARIANT, PARTNUM and REVISION fields are all implementation
95
+ }
54
+ * defined and we choose to define PARTNUM just in case guest
96
+ return cpu->env.v7m.nsacr;
55
+ * code needs to distinguish this QEMU CPU from other software
97
/* TODO: Implement debug registers. */
56
+ * implementations, though this shouldn't be needed.
98
case 0xd90: /* MPU_TYPE */
57
+ */
99
/* Unified MPU; if the MPU is not present this value is zero */
58
+ t = FIELD_DP64(0, MIDR_EL1, IMPLEMENTER, 0);
100
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
59
+ t = FIELD_DP64(t, MIDR_EL1, ARCHITECTURE, 0xf);
101
return 0;
60
+ t = FIELD_DP64(t, MIDR_EL1, PARTNUM, 'Q');
102
}
61
+ t = FIELD_DP64(t, MIDR_EL1, VARIANT, 0);
103
return cpu->env.v7m.sfar;
62
+ t = FIELD_DP64(t, MIDR_EL1, REVISION, 0);
104
+ case 0xf34: /* FPCCR */
63
+ cpu->midr = t;
105
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
106
+ return 0;
107
+ }
108
+ if (attrs.secure) {
109
+ return cpu->env.v7m.fpccr[M_REG_S];
110
+ } else {
111
+ /*
112
+ * NS can read LSPEN, CLRONRET and MONRDY. It can read
113
+ * BFRDY and HFRDY if AIRCR.BFHFNMINS != 0;
114
+ * other non-banked bits RAZ.
115
+ * TODO: MONRDY should RAZ/WI if DEMCR.SDME is set.
116
+ */
117
+ uint32_t value = cpu->env.v7m.fpccr[M_REG_S];
118
+ uint32_t mask = R_V7M_FPCCR_LSPEN_MASK |
119
+ R_V7M_FPCCR_CLRONRET_MASK |
120
+ R_V7M_FPCCR_MONRDY_MASK;
121
+
64
+
122
+ if (s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
65
t = cpu->isar.id_aa64isar0;
123
+ mask |= R_V7M_FPCCR_BFRDY_MASK | R_V7M_FPCCR_HFRDY_MASK;
66
t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
124
+ }
67
t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
125
+
126
+ value &= mask;
127
+
128
+ value |= cpu->env.v7m.fpccr[M_REG_NS];
129
+ return value;
130
+ }
131
+ case 0xf38: /* FPCAR */
132
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
133
+ return 0;
134
+ }
135
+ return cpu->env.v7m.fpcar[attrs.secure];
136
+ case 0xf3c: /* FPDSCR */
137
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
138
+ return 0;
139
+ }
140
+ return cpu->env.v7m.fpdscr[attrs.secure];
141
case 0xf40: /* MVFR0 */
142
return cpu->isar.mvfr0;
143
case 0xf44: /* MVFR1 */
144
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
145
cpu->env.v7m.csselr[attrs.secure] = value & R_V7M_CSSELR_INDEX_MASK;
146
}
147
break;
148
+ case 0xd88: /* CPACR */
149
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
150
+ /* We implement only the Floating Point extension's CP10/CP11 */
151
+ cpu->env.v7m.cpacr[attrs.secure] = value & (0xf << 20);
152
+ }
153
+ break;
154
+ case 0xd8c: /* NSACR */
155
+ if (attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
156
+ /* We implement only the Floating Point extension's CP10/CP11 */
157
+ cpu->env.v7m.nsacr = value & (3 << 10);
158
+ }
159
+ break;
160
case 0xd90: /* MPU_TYPE */
161
return; /* RO */
162
case 0xd94: /* MPU_CTRL */
163
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
164
}
165
break;
166
}
167
+ case 0xf34: /* FPCCR */
168
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
169
+ /* Not all bits here are banked. */
170
+ uint32_t fpccr_s;
171
+
172
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
173
+ /* Don't allow setting of bits not present in v7M */
174
+ value &= (R_V7M_FPCCR_LSPACT_MASK |
175
+ R_V7M_FPCCR_USER_MASK |
176
+ R_V7M_FPCCR_THREAD_MASK |
177
+ R_V7M_FPCCR_HFRDY_MASK |
178
+ R_V7M_FPCCR_MMRDY_MASK |
179
+ R_V7M_FPCCR_BFRDY_MASK |
180
+ R_V7M_FPCCR_MONRDY_MASK |
181
+ R_V7M_FPCCR_LSPEN_MASK |
182
+ R_V7M_FPCCR_ASPEN_MASK);
183
+ }
184
+ value &= ~R_V7M_FPCCR_RES0_MASK;
185
+
186
+ if (!attrs.secure) {
187
+ /* Some non-banked bits are configurably writable by NS */
188
+ fpccr_s = cpu->env.v7m.fpccr[M_REG_S];
189
+ if (!(fpccr_s & R_V7M_FPCCR_LSPENS_MASK)) {
190
+ uint32_t lspen = FIELD_EX32(value, V7M_FPCCR, LSPEN);
191
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, LSPEN, lspen);
192
+ }
193
+ if (!(fpccr_s & R_V7M_FPCCR_CLRONRETS_MASK)) {
194
+ uint32_t cor = FIELD_EX32(value, V7M_FPCCR, CLRONRET);
195
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, CLRONRET, cor);
196
+ }
197
+ if ((s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
198
+ uint32_t hfrdy = FIELD_EX32(value, V7M_FPCCR, HFRDY);
199
+ uint32_t bfrdy = FIELD_EX32(value, V7M_FPCCR, BFRDY);
200
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
201
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
202
+ }
203
+ /* TODO MONRDY should RAZ/WI if DEMCR.SDME is set */
204
+ {
205
+ uint32_t monrdy = FIELD_EX32(value, V7M_FPCCR, MONRDY);
206
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, MONRDY, monrdy);
207
+ }
208
+
209
+ /*
210
+ * All other non-banked bits are RAZ/WI from NS; write
211
+ * just the banked bits to fpccr[M_REG_NS].
212
+ */
213
+ value &= R_V7M_FPCCR_BANKED_MASK;
214
+ cpu->env.v7m.fpccr[M_REG_NS] = value;
215
+ } else {
216
+ fpccr_s = value;
217
+ }
218
+ cpu->env.v7m.fpccr[M_REG_S] = fpccr_s;
219
+ }
220
+ break;
221
+ case 0xf38: /* FPCAR */
222
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
223
+ value &= ~7;
224
+ cpu->env.v7m.fpcar[attrs.secure] = value;
225
+ }
226
+ break;
227
+ case 0xf3c: /* FPDSCR */
228
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
229
+ value &= 0x07c00000;
230
+ cpu->env.v7m.fpdscr[attrs.secure] = value;
231
+ }
232
+ break;
233
case 0xf50: /* ICIALLU */
234
case 0xf58: /* ICIMVAU */
235
case 0xf5c: /* DCIMVAC */
236
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
237
index XXXXXXX..XXXXXXX 100644
238
--- a/target/arm/cpu.c
239
+++ b/target/arm/cpu.c
240
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
241
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
242
}
243
244
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
245
+ env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
246
+ env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
247
+ R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
248
+ }
249
/* Unlike A/R profile, M profile defines the reset LR value */
250
env->regs[14] = 0xffffffff;
251
252
diff --git a/target/arm/machine.c b/target/arm/machine.c
253
index XXXXXXX..XXXXXXX 100644
254
--- a/target/arm/machine.c
255
+++ b/target/arm/machine.c
256
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_v8m = {
257
}
258
};
259
260
+static const VMStateDescription vmstate_m_fp = {
261
+ .name = "cpu/m/fp",
262
+ .version_id = 1,
263
+ .minimum_version_id = 1,
264
+ .needed = vfp_needed,
265
+ .fields = (VMStateField[]) {
266
+ VMSTATE_UINT32_ARRAY(env.v7m.fpcar, ARMCPU, M_REG_NUM_BANKS),
267
+ VMSTATE_UINT32_ARRAY(env.v7m.fpccr, ARMCPU, M_REG_NUM_BANKS),
268
+ VMSTATE_UINT32_ARRAY(env.v7m.fpdscr, ARMCPU, M_REG_NUM_BANKS),
269
+ VMSTATE_UINT32_ARRAY(env.v7m.cpacr, ARMCPU, M_REG_NUM_BANKS),
270
+ VMSTATE_UINT32(env.v7m.nsacr, ARMCPU),
271
+ VMSTATE_END_OF_LIST()
272
+ }
273
+};
274
+
275
static const VMStateDescription vmstate_m = {
276
.name = "cpu/m",
277
.version_id = 4,
278
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
279
&vmstate_m_scr,
280
&vmstate_m_other_sp,
281
&vmstate_m_v8m,
282
+ &vmstate_m_fp,
283
NULL
284
}
285
};
286
--
68
--
287
2.20.1
69
2.20.1
288
70
289
71
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-8-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 3 ---
 include/hw/input/gamepad.h | 19 +++++++++++++++++++
 hw/arm/stellaris.c | 2 +-
 hw/input/stellaris_input.c | 2 +-
 MAINTAINERS | 1 +
 5 files changed, 22 insertions(+), 5 deletions(-)
 create mode 100644 include/hw/input/gamepad.h


From: Damien Hedde <damien.hedde@greensocs.com>

Replace the zynq_slcr registers enum and macros using the
hw/registerfields.h macros.

Signed-off-by: Damien Hedde <damien.hedde@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190729145654.14644-30-damien.hedde@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/zynq_slcr.c | 450 ++++++++++++++++----------------
 1 file changed, 225 insertions(+), 225 deletions(-)
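
(Not part of the patch: for readers who haven't used hw/registerfields.h,
the self-contained sketch below uses simplified stand-ins for the REG32(),
FIELD() and FIELD_EX32() macros -- the real header differs in detail -- to
show why the conversion below turns the old enum indices such as LOCKSTA
into R_LOCKSTA and the open-coded mask test into FIELD_EX32().)

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the hw/registerfields.h macros */
#define REG32(reg, addr) \
    enum { A_##reg = (addr), R_##reg = (addr) / 4 };
#define FIELD(reg, fld, shift, len) \
    enum { R_##reg##_##fld##_SHIFT = (shift), \
           R_##reg##_##fld##_MASK  = ((1u << (len)) - 1) << (shift) };
#define FIELD_EX32(val, reg, fld) \
    (((val) & R_##reg##_##fld##_MASK) >> R_##reg##_##fld##_SHIFT)

REG32(LOCKSTA, 0x00c)               /* A_LOCKSTA = byte offset, R_LOCKSTA = word index */
REG32(PSS_RST_CTRL, 0x200)
FIELD(PSS_RST_CTRL, SOFT_RST, 0, 1) /* bit 0 of PSS_RST_CTRL */

int main(void)
{
    uint32_t regs[0x1000 / 4] = { 0 };

    regs[R_LOCKSTA] = 1;            /* index into regs[], not a byte offset */
    printf("LOCKSTA: offset 0x%x, index %d, value %u\n",
           (unsigned)A_LOCKSTA, (int)R_LOCKSTA, (unsigned)regs[R_LOCKSTA]);
    printf("SOFT_RST field of 0x3 is %u\n",
           (unsigned)FIELD_EX32(0x3u, PSS_RST_CTRL, SOFT_RST));
    return 0;
}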
16
diff --git a/include/hw/devices.h b/include/hw/devices.h
15
diff --git a/hw/misc/zynq_slcr.c b/hw/misc/zynq_slcr.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/devices.h
17
--- a/hw/misc/zynq_slcr.c
19
+++ b/include/hw/devices.h
18
+++ b/hw/misc/zynq_slcr.c
20
@@ -XXX,XX +XXX,XX @@ void *tsc2005_init(qemu_irq pintdav);
21
uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
22
void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
23
24
-/* stellaris_input.c */
25
-void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
26
-
27
#endif
28
diff --git a/include/hw/input/gamepad.h b/include/hw/input/gamepad.h
29
new file mode 100644
30
index XXXXXXX..XXXXXXX
31
--- /dev/null
32
+++ b/include/hw/input/gamepad.h
33
@@ -XXX,XX +XXX,XX @@
34
+/*
35
+ * Gamepad style buttons connected to IRQ/GPIO lines
36
+ *
37
+ * Copyright (c) 2007 CodeSourcery.
38
+ * Written by Paul Brook
39
+ *
40
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
41
+ * See the COPYING file in the top-level directory.
42
+ */
43
+
44
+#ifndef HW_INPUT_GAMEPAD_H
45
+#define HW_INPUT_GAMEPAD_H
46
+
47
+#include "hw/irq.h"
48
+
49
+/* stellaris_input.c */
50
+void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
51
+
52
+#endif
53
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/stellaris.c
56
+++ b/hw/arm/stellaris.c
57
@@ -XXX,XX +XXX,XX @@
58
#include "hw/sysbus.h"
59
#include "hw/ssi/ssi.h"
60
#include "hw/arm/arm.h"
61
-#include "hw/devices.h"
62
#include "qemu/timer.h"
63
#include "hw/i2c/i2c.h"
64
#include "net/net.h"
65
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
66
#include "sysemu/sysemu.h"
20
#include "sysemu/sysemu.h"
67
#include "hw/arm/armv7m.h"
21
#include "qemu/log.h"
68
#include "hw/char/pl011.h"
22
#include "qemu/module.h"
69
+#include "hw/input/gamepad.h"
23
+#include "hw/registerfields.h"
70
#include "hw/watchdog/cmsdk-apb-watchdog.h"
24
71
#include "hw/misc/unimp.h"
25
#ifndef ZYNQ_SLCR_ERR_DEBUG
72
#include "cpu.h"
26
#define ZYNQ_SLCR_ERR_DEBUG 0
73
diff --git a/hw/input/stellaris_input.c b/hw/input/stellaris_input.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/hw/input/stellaris_input.c
76
+++ b/hw/input/stellaris_input.c
77
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@
78
*/
28
#define XILINX_LOCK_KEY 0x767b
79
#include "qemu/osdep.h"
29
#define XILINX_UNLOCK_KEY 0xdf0d
80
#include "hw/hw.h"
30
81
-#include "hw/devices.h"
31
-#define R_PSS_RST_CTRL_SOFT_RST 0x1
82
+#include "hw/input/gamepad.h"
32
+REG32(SCL, 0x000)
83
#include "ui/console.h"
33
+REG32(LOCK, 0x004)
84
34
+REG32(UNLOCK, 0x008)
85
typedef struct {
35
+REG32(LOCKSTA, 0x00c)
86
diff --git a/MAINTAINERS b/MAINTAINERS
36
87
index XXXXXXX..XXXXXXX 100644
37
-enum {
88
--- a/MAINTAINERS
38
- SCL = 0x000 / 4,
89
+++ b/MAINTAINERS
39
- LOCK,
90
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
40
- UNLOCK,
91
L: qemu-arm@nongnu.org
41
- LOCKSTA,
92
S: Maintained
42
+REG32(ARM_PLL_CTRL, 0x100)
93
F: hw/*/stellaris*
43
+REG32(DDR_PLL_CTRL, 0x104)
94
+F: include/hw/input/gamepad.h
44
+REG32(IO_PLL_CTRL, 0x108)
95
45
+REG32(PLL_STATUS, 0x10c)
96
Versatile Express
46
+REG32(ARM_PLL_CFG, 0x110)
97
M: Peter Maydell <peter.maydell@linaro.org>
47
+REG32(DDR_PLL_CFG, 0x114)
48
+REG32(IO_PLL_CFG, 0x118)
49
50
- ARM_PLL_CTRL = 0x100 / 4,
51
- DDR_PLL_CTRL,
52
- IO_PLL_CTRL,
53
- PLL_STATUS,
54
- ARM_PLL_CFG,
55
- DDR_PLL_CFG,
56
- IO_PLL_CFG,
57
-
58
- ARM_CLK_CTRL = 0x120 / 4,
59
- DDR_CLK_CTRL,
60
- DCI_CLK_CTRL,
61
- APER_CLK_CTRL,
62
- USB0_CLK_CTRL,
63
- USB1_CLK_CTRL,
64
- GEM0_RCLK_CTRL,
65
- GEM1_RCLK_CTRL,
66
- GEM0_CLK_CTRL,
67
- GEM1_CLK_CTRL,
68
- SMC_CLK_CTRL,
69
- LQSPI_CLK_CTRL,
70
- SDIO_CLK_CTRL,
71
- UART_CLK_CTRL,
72
- SPI_CLK_CTRL,
73
- CAN_CLK_CTRL,
74
- CAN_MIOCLK_CTRL,
75
- DBG_CLK_CTRL,
76
- PCAP_CLK_CTRL,
77
- TOPSW_CLK_CTRL,
78
+REG32(ARM_CLK_CTRL, 0x120)
79
+REG32(DDR_CLK_CTRL, 0x124)
80
+REG32(DCI_CLK_CTRL, 0x128)
81
+REG32(APER_CLK_CTRL, 0x12c)
82
+REG32(USB0_CLK_CTRL, 0x130)
83
+REG32(USB1_CLK_CTRL, 0x134)
84
+REG32(GEM0_RCLK_CTRL, 0x138)
85
+REG32(GEM1_RCLK_CTRL, 0x13c)
86
+REG32(GEM0_CLK_CTRL, 0x140)
87
+REG32(GEM1_CLK_CTRL, 0x144)
88
+REG32(SMC_CLK_CTRL, 0x148)
89
+REG32(LQSPI_CLK_CTRL, 0x14c)
90
+REG32(SDIO_CLK_CTRL, 0x150)
91
+REG32(UART_CLK_CTRL, 0x154)
92
+REG32(SPI_CLK_CTRL, 0x158)
93
+REG32(CAN_CLK_CTRL, 0x15c)
94
+REG32(CAN_MIOCLK_CTRL, 0x160)
95
+REG32(DBG_CLK_CTRL, 0x164)
96
+REG32(PCAP_CLK_CTRL, 0x168)
97
+REG32(TOPSW_CLK_CTRL, 0x16c)
98
99
#define FPGA_CTRL_REGS(n, start) \
100
- FPGA ## n ## _CLK_CTRL = (start) / 4, \
101
- FPGA ## n ## _THR_CTRL, \
102
- FPGA ## n ## _THR_CNT, \
103
- FPGA ## n ## _THR_STA,
104
- FPGA_CTRL_REGS(0, 0x170)
105
- FPGA_CTRL_REGS(1, 0x180)
106
- FPGA_CTRL_REGS(2, 0x190)
107
- FPGA_CTRL_REGS(3, 0x1a0)
108
+ REG32(FPGA ## n ## _CLK_CTRL, (start)) \
109
+ REG32(FPGA ## n ## _THR_CTRL, (start) + 0x4)\
110
+ REG32(FPGA ## n ## _THR_CNT, (start) + 0x8)\
111
+ REG32(FPGA ## n ## _THR_STA, (start) + 0xc)
112
+FPGA_CTRL_REGS(0, 0x170)
113
+FPGA_CTRL_REGS(1, 0x180)
114
+FPGA_CTRL_REGS(2, 0x190)
115
+FPGA_CTRL_REGS(3, 0x1a0)
116
117
- BANDGAP_TRIP = 0x1b8 / 4,
118
- PLL_PREDIVISOR = 0x1c0 / 4,
119
- CLK_621_TRUE,
120
+REG32(BANDGAP_TRIP, 0x1b8)
121
+REG32(PLL_PREDIVISOR, 0x1c0)
122
+REG32(CLK_621_TRUE, 0x1c4)
123
124
- PSS_RST_CTRL = 0x200 / 4,
125
- DDR_RST_CTRL,
126
- TOPSW_RESET_CTRL,
127
- DMAC_RST_CTRL,
128
- USB_RST_CTRL,
129
- GEM_RST_CTRL,
130
- SDIO_RST_CTRL,
131
- SPI_RST_CTRL,
132
- CAN_RST_CTRL,
133
- I2C_RST_CTRL,
134
- UART_RST_CTRL,
135
- GPIO_RST_CTRL,
136
- LQSPI_RST_CTRL,
137
- SMC_RST_CTRL,
138
- OCM_RST_CTRL,
139
- FPGA_RST_CTRL = 0x240 / 4,
140
- A9_CPU_RST_CTRL,
141
+REG32(PSS_RST_CTRL, 0x200)
142
+ FIELD(PSS_RST_CTRL, SOFT_RST, 0, 1)
143
+REG32(DDR_RST_CTRL, 0x204)
144
+REG32(TOPSW_RESET_CTRL, 0x208)
145
+REG32(DMAC_RST_CTRL, 0x20c)
146
+REG32(USB_RST_CTRL, 0x210)
147
+REG32(GEM_RST_CTRL, 0x214)
148
+REG32(SDIO_RST_CTRL, 0x218)
149
+REG32(SPI_RST_CTRL, 0x21c)
150
+REG32(CAN_RST_CTRL, 0x220)
151
+REG32(I2C_RST_CTRL, 0x224)
152
+REG32(UART_RST_CTRL, 0x228)
153
+REG32(GPIO_RST_CTRL, 0x22c)
154
+REG32(LQSPI_RST_CTRL, 0x230)
155
+REG32(SMC_RST_CTRL, 0x234)
156
+REG32(OCM_RST_CTRL, 0x238)
157
+REG32(FPGA_RST_CTRL, 0x240)
158
+REG32(A9_CPU_RST_CTRL, 0x244)
159
160
- RS_AWDT_CTRL = 0x24c / 4,
161
- RST_REASON,
162
+REG32(RS_AWDT_CTRL, 0x24c)
163
+REG32(RST_REASON, 0x250)
164
165
- REBOOT_STATUS = 0x258 / 4,
166
- BOOT_MODE,
167
+REG32(REBOOT_STATUS, 0x258)
168
+REG32(BOOT_MODE, 0x25c)
169
170
- APU_CTRL = 0x300 / 4,
171
- WDT_CLK_SEL,
172
+REG32(APU_CTRL, 0x300)
173
+REG32(WDT_CLK_SEL, 0x304)
174
175
- TZ_DMA_NS = 0x440 / 4,
176
- TZ_DMA_IRQ_NS,
177
- TZ_DMA_PERIPH_NS,
178
+REG32(TZ_DMA_NS, 0x440)
179
+REG32(TZ_DMA_IRQ_NS, 0x444)
180
+REG32(TZ_DMA_PERIPH_NS, 0x448)
181
182
- PSS_IDCODE = 0x530 / 4,
183
+REG32(PSS_IDCODE, 0x530)
184
185
- DDR_URGENT = 0x600 / 4,
186
- DDR_CAL_START = 0x60c / 4,
187
- DDR_REF_START = 0x614 / 4,
188
- DDR_CMD_STA,
189
- DDR_URGENT_SEL,
190
- DDR_DFI_STATUS,
191
+REG32(DDR_URGENT, 0x600)
192
+REG32(DDR_CAL_START, 0x60c)
193
+REG32(DDR_REF_START, 0x614)
194
+REG32(DDR_CMD_STA, 0x618)
195
+REG32(DDR_URGENT_SEL, 0x61c)
196
+REG32(DDR_DFI_STATUS, 0x620)
197
198
- MIO = 0x700 / 4,
199
+REG32(MIO, 0x700)
200
#define MIO_LENGTH 54
201
202
- MIO_LOOPBACK = 0x804 / 4,
203
- MIO_MST_TRI0,
204
- MIO_MST_TRI1,
205
+REG32(MIO_LOOPBACK, 0x804)
206
+REG32(MIO_MST_TRI0, 0x808)
207
+REG32(MIO_MST_TRI1, 0x80c)
208
209
- SD0_WP_CD_SEL = 0x830 / 4,
210
- SD1_WP_CD_SEL,
211
+REG32(SD0_WP_CD_SEL, 0x830)
212
+REG32(SD1_WP_CD_SEL, 0x834)
213
214
- LVL_SHFTR_EN = 0x900 / 4,
215
- OCM_CFG = 0x910 / 4,
216
+REG32(LVL_SHFTR_EN, 0x900)
217
+REG32(OCM_CFG, 0x910)
218
219
- CPU_RAM = 0xa00 / 4,
220
+REG32(CPU_RAM, 0xa00)
221
222
- IOU = 0xa30 / 4,
223
+REG32(IOU, 0xa30)
224
225
- DMAC_RAM = 0xa50 / 4,
226
+REG32(DMAC_RAM, 0xa50)
227
228
- AFI0 = 0xa60 / 4,
229
- AFI1 = AFI0 + 3,
230
- AFI2 = AFI1 + 3,
231
- AFI3 = AFI2 + 3,
232
+REG32(AFI0, 0xa60)
233
+REG32(AFI1, 0xa6c)
234
+REG32(AFI2, 0xa78)
235
+REG32(AFI3, 0xa84)
236
#define AFI_LENGTH 3
237
238
- OCM = 0xa90 / 4,
239
+REG32(OCM, 0xa90)
240
241
- DEVCI_RAM = 0xaa0 / 4,
242
+REG32(DEVCI_RAM, 0xaa0)
243
244
- CSG_RAM = 0xab0 / 4,
245
+REG32(CSG_RAM, 0xab0)
246
247
- GPIOB_CTRL = 0xb00 / 4,
248
- GPIOB_CFG_CMOS18,
249
- GPIOB_CFG_CMOS25,
250
- GPIOB_CFG_CMOS33,
251
- GPIOB_CFG_HSTL = 0xb14 / 4,
252
- GPIOB_DRVR_BIAS_CTRL,
253
+REG32(GPIOB_CTRL, 0xb00)
254
+REG32(GPIOB_CFG_CMOS18, 0xb04)
255
+REG32(GPIOB_CFG_CMOS25, 0xb08)
256
+REG32(GPIOB_CFG_CMOS33, 0xb0c)
257
+REG32(GPIOB_CFG_HSTL, 0xb14)
258
+REG32(GPIOB_DRVR_BIAS_CTRL, 0xb18)
259
260
- DDRIOB = 0xb40 / 4,
261
+REG32(DDRIOB, 0xb40)
262
#define DDRIOB_LENGTH 14
263
-};
264
265
#define ZYNQ_SLCR_MMIO_SIZE 0x1000
266
#define ZYNQ_SLCR_NUM_REGS (ZYNQ_SLCR_MMIO_SIZE / 4)
267
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset(DeviceState *d)
268
269
DB_PRINT("RESET\n");
270
271
- s->regs[LOCKSTA] = 1;
272
+ s->regs[R_LOCKSTA] = 1;
273
/* 0x100 - 0x11C */
274
- s->regs[ARM_PLL_CTRL] = 0x0001A008;
275
- s->regs[DDR_PLL_CTRL] = 0x0001A008;
276
- s->regs[IO_PLL_CTRL] = 0x0001A008;
277
- s->regs[PLL_STATUS] = 0x0000003F;
278
- s->regs[ARM_PLL_CFG] = 0x00014000;
279
- s->regs[DDR_PLL_CFG] = 0x00014000;
280
- s->regs[IO_PLL_CFG] = 0x00014000;
281
+ s->regs[R_ARM_PLL_CTRL] = 0x0001A008;
282
+ s->regs[R_DDR_PLL_CTRL] = 0x0001A008;
283
+ s->regs[R_IO_PLL_CTRL] = 0x0001A008;
284
+ s->regs[R_PLL_STATUS] = 0x0000003F;
285
+ s->regs[R_ARM_PLL_CFG] = 0x00014000;
286
+ s->regs[R_DDR_PLL_CFG] = 0x00014000;
287
+ s->regs[R_IO_PLL_CFG] = 0x00014000;
288
289
/* 0x120 - 0x16C */
290
- s->regs[ARM_CLK_CTRL] = 0x1F000400;
291
- s->regs[DDR_CLK_CTRL] = 0x18400003;
292
- s->regs[DCI_CLK_CTRL] = 0x01E03201;
293
- s->regs[APER_CLK_CTRL] = 0x01FFCCCD;
294
- s->regs[USB0_CLK_CTRL] = s->regs[USB1_CLK_CTRL] = 0x00101941;
295
- s->regs[GEM0_RCLK_CTRL] = s->regs[GEM1_RCLK_CTRL] = 0x00000001;
296
- s->regs[GEM0_CLK_CTRL] = s->regs[GEM1_CLK_CTRL] = 0x00003C01;
297
- s->regs[SMC_CLK_CTRL] = 0x00003C01;
298
- s->regs[LQSPI_CLK_CTRL] = 0x00002821;
299
- s->regs[SDIO_CLK_CTRL] = 0x00001E03;
300
- s->regs[UART_CLK_CTRL] = 0x00003F03;
301
- s->regs[SPI_CLK_CTRL] = 0x00003F03;
302
- s->regs[CAN_CLK_CTRL] = 0x00501903;
303
- s->regs[DBG_CLK_CTRL] = 0x00000F03;
304
- s->regs[PCAP_CLK_CTRL] = 0x00000F01;
305
+ s->regs[R_ARM_CLK_CTRL] = 0x1F000400;
306
+ s->regs[R_DDR_CLK_CTRL] = 0x18400003;
307
+ s->regs[R_DCI_CLK_CTRL] = 0x01E03201;
308
+ s->regs[R_APER_CLK_CTRL] = 0x01FFCCCD;
309
+ s->regs[R_USB0_CLK_CTRL] = s->regs[R_USB1_CLK_CTRL] = 0x00101941;
310
+ s->regs[R_GEM0_RCLK_CTRL] = s->regs[R_GEM1_RCLK_CTRL] = 0x00000001;
311
+ s->regs[R_GEM0_CLK_CTRL] = s->regs[R_GEM1_CLK_CTRL] = 0x00003C01;
312
+ s->regs[R_SMC_CLK_CTRL] = 0x00003C01;
313
+ s->regs[R_LQSPI_CLK_CTRL] = 0x00002821;
314
+ s->regs[R_SDIO_CLK_CTRL] = 0x00001E03;
315
+ s->regs[R_UART_CLK_CTRL] = 0x00003F03;
316
+ s->regs[R_SPI_CLK_CTRL] = 0x00003F03;
317
+ s->regs[R_CAN_CLK_CTRL] = 0x00501903;
318
+ s->regs[R_DBG_CLK_CTRL] = 0x00000F03;
319
+ s->regs[R_PCAP_CLK_CTRL] = 0x00000F01;
320
321
/* 0x170 - 0x1AC */
322
- s->regs[FPGA0_CLK_CTRL] = s->regs[FPGA1_CLK_CTRL] = s->regs[FPGA2_CLK_CTRL]
323
- = s->regs[FPGA3_CLK_CTRL] = 0x00101800;
324
- s->regs[FPGA0_THR_STA] = s->regs[FPGA1_THR_STA] = s->regs[FPGA2_THR_STA]
325
- = s->regs[FPGA3_THR_STA] = 0x00010000;
326
+ s->regs[R_FPGA0_CLK_CTRL] = s->regs[R_FPGA1_CLK_CTRL]
327
+ = s->regs[R_FPGA2_CLK_CTRL]
328
+ = s->regs[R_FPGA3_CLK_CTRL] = 0x00101800;
329
+ s->regs[R_FPGA0_THR_STA] = s->regs[R_FPGA1_THR_STA]
330
+ = s->regs[R_FPGA2_THR_STA]
331
+ = s->regs[R_FPGA3_THR_STA] = 0x00010000;
332
333
/* 0x1B0 - 0x1D8 */
334
- s->regs[BANDGAP_TRIP] = 0x0000001F;
335
- s->regs[PLL_PREDIVISOR] = 0x00000001;
336
- s->regs[CLK_621_TRUE] = 0x00000001;
337
+ s->regs[R_BANDGAP_TRIP] = 0x0000001F;
338
+ s->regs[R_PLL_PREDIVISOR] = 0x00000001;
339
+ s->regs[R_CLK_621_TRUE] = 0x00000001;
340
341
/* 0x200 - 0x25C */
342
- s->regs[FPGA_RST_CTRL] = 0x01F33F0F;
343
- s->regs[RST_REASON] = 0x00000040;
344
+ s->regs[R_FPGA_RST_CTRL] = 0x01F33F0F;
345
+ s->regs[R_RST_REASON] = 0x00000040;
346
347
- s->regs[BOOT_MODE] = 0x00000001;
348
+ s->regs[R_BOOT_MODE] = 0x00000001;
349
350
/* 0x700 - 0x7D4 */
351
for (i = 0; i < 54; i++) {
352
- s->regs[MIO + i] = 0x00001601;
353
+ s->regs[R_MIO + i] = 0x00001601;
354
}
355
for (i = 2; i <= 8; i++) {
356
- s->regs[MIO + i] = 0x00000601;
357
+ s->regs[R_MIO + i] = 0x00000601;
358
}
359
360
- s->regs[MIO_MST_TRI0] = s->regs[MIO_MST_TRI1] = 0xFFFFFFFF;
361
+ s->regs[R_MIO_MST_TRI0] = s->regs[R_MIO_MST_TRI1] = 0xFFFFFFFF;
362
363
- s->regs[CPU_RAM + 0] = s->regs[CPU_RAM + 1] = s->regs[CPU_RAM + 3]
364
- = s->regs[CPU_RAM + 4] = s->regs[CPU_RAM + 7]
365
- = 0x00010101;
366
- s->regs[CPU_RAM + 2] = s->regs[CPU_RAM + 5] = 0x01010101;
367
- s->regs[CPU_RAM + 6] = 0x00000001;
368
+ s->regs[R_CPU_RAM + 0] = s->regs[R_CPU_RAM + 1] = s->regs[R_CPU_RAM + 3]
369
+ = s->regs[R_CPU_RAM + 4] = s->regs[R_CPU_RAM + 7]
370
+ = 0x00010101;
371
+ s->regs[R_CPU_RAM + 2] = s->regs[R_CPU_RAM + 5] = 0x01010101;
372
+ s->regs[R_CPU_RAM + 6] = 0x00000001;
373
374
- s->regs[IOU + 0] = s->regs[IOU + 1] = s->regs[IOU + 2] = s->regs[IOU + 3]
375
- = 0x09090909;
376
- s->regs[IOU + 4] = s->regs[IOU + 5] = 0x00090909;
377
- s->regs[IOU + 6] = 0x00000909;
378
+ s->regs[R_IOU + 0] = s->regs[R_IOU + 1] = s->regs[R_IOU + 2]
379
+ = s->regs[R_IOU + 3] = 0x09090909;
380
+ s->regs[R_IOU + 4] = s->regs[R_IOU + 5] = 0x00090909;
381
+ s->regs[R_IOU + 6] = 0x00000909;
382
383
- s->regs[DMAC_RAM] = 0x00000009;
384
+ s->regs[R_DMAC_RAM] = 0x00000009;
385
386
- s->regs[AFI0 + 0] = s->regs[AFI0 + 1] = 0x09090909;
387
- s->regs[AFI1 + 0] = s->regs[AFI1 + 1] = 0x09090909;
388
- s->regs[AFI2 + 0] = s->regs[AFI2 + 1] = 0x09090909;
389
- s->regs[AFI3 + 0] = s->regs[AFI3 + 1] = 0x09090909;
390
- s->regs[AFI0 + 2] = s->regs[AFI1 + 2] = s->regs[AFI2 + 2]
391
- = s->regs[AFI3 + 2] = 0x00000909;
392
+ s->regs[R_AFI0 + 0] = s->regs[R_AFI0 + 1] = 0x09090909;
393
+ s->regs[R_AFI1 + 0] = s->regs[R_AFI1 + 1] = 0x09090909;
394
+ s->regs[R_AFI2 + 0] = s->regs[R_AFI2 + 1] = 0x09090909;
395
+ s->regs[R_AFI3 + 0] = s->regs[R_AFI3 + 1] = 0x09090909;
396
+ s->regs[R_AFI0 + 2] = s->regs[R_AFI1 + 2] = s->regs[R_AFI2 + 2]
397
+ = s->regs[R_AFI3 + 2] = 0x00000909;
398
399
- s->regs[OCM + 0] = 0x01010101;
400
- s->regs[OCM + 1] = s->regs[OCM + 2] = 0x09090909;
401
+ s->regs[R_OCM + 0] = 0x01010101;
402
+ s->regs[R_OCM + 1] = s->regs[R_OCM + 2] = 0x09090909;
403
404
- s->regs[DEVCI_RAM] = 0x00000909;
405
- s->regs[CSG_RAM] = 0x00000001;
406
+ s->regs[R_DEVCI_RAM] = 0x00000909;
407
+ s->regs[R_CSG_RAM] = 0x00000001;
408
409
- s->regs[DDRIOB + 0] = s->regs[DDRIOB + 1] = s->regs[DDRIOB + 2]
410
- = s->regs[DDRIOB + 3] = 0x00000e00;
411
- s->regs[DDRIOB + 4] = s->regs[DDRIOB + 5] = s->regs[DDRIOB + 6]
412
- = 0x00000e00;
413
- s->regs[DDRIOB + 12] = 0x00000021;
414
+ s->regs[R_DDRIOB + 0] = s->regs[R_DDRIOB + 1] = s->regs[R_DDRIOB + 2]
415
+ = s->regs[R_DDRIOB + 3] = 0x00000e00;
416
+ s->regs[R_DDRIOB + 4] = s->regs[R_DDRIOB + 5] = s->regs[R_DDRIOB + 6]
417
+ = 0x00000e00;
418
+ s->regs[R_DDRIOB + 12] = 0x00000021;
419
}
420
421
422
static bool zynq_slcr_check_offset(hwaddr offset, bool rnw)
423
{
424
switch (offset) {
425
- case LOCK:
426
- case UNLOCK:
427
- case DDR_CAL_START:
428
- case DDR_REF_START:
429
+ case R_LOCK:
430
+ case R_UNLOCK:
431
+ case R_DDR_CAL_START:
432
+ case R_DDR_REF_START:
433
return !rnw; /* Write only */
434
- case LOCKSTA:
435
- case FPGA0_THR_STA:
436
- case FPGA1_THR_STA:
437
- case FPGA2_THR_STA:
438
- case FPGA3_THR_STA:
439
- case BOOT_MODE:
440
- case PSS_IDCODE:
441
- case DDR_CMD_STA:
442
- case DDR_DFI_STATUS:
443
- case PLL_STATUS:
444
+ case R_LOCKSTA:
445
+ case R_FPGA0_THR_STA:
446
+ case R_FPGA1_THR_STA:
447
+ case R_FPGA2_THR_STA:
448
+ case R_FPGA3_THR_STA:
449
+ case R_BOOT_MODE:
450
+ case R_PSS_IDCODE:
451
+ case R_DDR_CMD_STA:
452
+ case R_DDR_DFI_STATUS:
453
+ case R_PLL_STATUS:
454
return rnw;/* read only */
455
- case SCL:
456
- case ARM_PLL_CTRL ... IO_PLL_CTRL:
457
- case ARM_PLL_CFG ... IO_PLL_CFG:
458
- case ARM_CLK_CTRL ... TOPSW_CLK_CTRL:
459
- case FPGA0_CLK_CTRL ... FPGA0_THR_CNT:
460
- case FPGA1_CLK_CTRL ... FPGA1_THR_CNT:
461
- case FPGA2_CLK_CTRL ... FPGA2_THR_CNT:
462
- case FPGA3_CLK_CTRL ... FPGA3_THR_CNT:
463
- case BANDGAP_TRIP:
464
- case PLL_PREDIVISOR:
465
- case CLK_621_TRUE:
466
- case PSS_RST_CTRL ... A9_CPU_RST_CTRL:
467
- case RS_AWDT_CTRL:
468
- case RST_REASON:
469
- case REBOOT_STATUS:
470
- case APU_CTRL:
471
- case WDT_CLK_SEL:
472
- case TZ_DMA_NS ... TZ_DMA_PERIPH_NS:
473
- case DDR_URGENT:
474
- case DDR_URGENT_SEL:
475
- case MIO ... MIO + MIO_LENGTH - 1:
476
- case MIO_LOOPBACK ... MIO_MST_TRI1:
477
- case SD0_WP_CD_SEL:
478
- case SD1_WP_CD_SEL:
479
- case LVL_SHFTR_EN:
480
- case OCM_CFG:
481
- case CPU_RAM:
482
- case IOU:
483
- case DMAC_RAM:
484
- case AFI0 ... AFI3 + AFI_LENGTH - 1:
485
- case OCM:
486
- case DEVCI_RAM:
487
- case CSG_RAM:
488
- case GPIOB_CTRL ... GPIOB_CFG_CMOS33:
489
- case GPIOB_CFG_HSTL:
490
- case GPIOB_DRVR_BIAS_CTRL:
491
- case DDRIOB ... DDRIOB + DDRIOB_LENGTH - 1:
492
+ case R_SCL:
493
+ case R_ARM_PLL_CTRL ... R_IO_PLL_CTRL:
494
+ case R_ARM_PLL_CFG ... R_IO_PLL_CFG:
495
+ case R_ARM_CLK_CTRL ... R_TOPSW_CLK_CTRL:
496
+ case R_FPGA0_CLK_CTRL ... R_FPGA0_THR_CNT:
497
+ case R_FPGA1_CLK_CTRL ... R_FPGA1_THR_CNT:
498
+ case R_FPGA2_CLK_CTRL ... R_FPGA2_THR_CNT:
499
+ case R_FPGA3_CLK_CTRL ... R_FPGA3_THR_CNT:
500
+ case R_BANDGAP_TRIP:
501
+ case R_PLL_PREDIVISOR:
502
+ case R_CLK_621_TRUE:
503
+ case R_PSS_RST_CTRL ... R_A9_CPU_RST_CTRL:
504
+ case R_RS_AWDT_CTRL:
505
+ case R_RST_REASON:
506
+ case R_REBOOT_STATUS:
507
+ case R_APU_CTRL:
508
+ case R_WDT_CLK_SEL:
509
+ case R_TZ_DMA_NS ... R_TZ_DMA_PERIPH_NS:
510
+ case R_DDR_URGENT:
511
+ case R_DDR_URGENT_SEL:
512
+ case R_MIO ... R_MIO + MIO_LENGTH - 1:
513
+ case R_MIO_LOOPBACK ... R_MIO_MST_TRI1:
514
+ case R_SD0_WP_CD_SEL:
515
+ case R_SD1_WP_CD_SEL:
516
+ case R_LVL_SHFTR_EN:
517
+ case R_OCM_CFG:
518
+ case R_CPU_RAM:
519
+ case R_IOU:
520
+ case R_DMAC_RAM:
521
+ case R_AFI0 ... R_AFI3 + AFI_LENGTH - 1:
522
+ case R_OCM:
523
+ case R_DEVCI_RAM:
524
+ case R_CSG_RAM:
525
+ case R_GPIOB_CTRL ... R_GPIOB_CFG_CMOS33:
526
+ case R_GPIOB_CFG_HSTL:
527
+ case R_GPIOB_DRVR_BIAS_CTRL:
528
+ case R_DDRIOB ... R_DDRIOB + DDRIOB_LENGTH - 1:
529
return true;
530
default:
531
return false;
532
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_write(void *opaque, hwaddr offset,
533
}
534
535
switch (offset) {
536
- case SCL:
537
- s->regs[SCL] = val & 0x1;
538
+ case R_SCL:
539
+ s->regs[R_SCL] = val & 0x1;
540
return;
541
- case LOCK:
542
+ case R_LOCK:
543
if ((val & 0xFFFF) == XILINX_LOCK_KEY) {
544
DB_PRINT("XILINX LOCK 0xF8000000 + 0x%x <= 0x%x\n", (int)offset,
545
(unsigned)val & 0xFFFF);
546
- s->regs[LOCKSTA] = 1;
547
+ s->regs[R_LOCKSTA] = 1;
548
} else {
549
DB_PRINT("WRONG XILINX LOCK KEY 0xF8000000 + 0x%x <= 0x%x\n",
550
(int)offset, (unsigned)val & 0xFFFF);
551
}
552
return;
553
- case UNLOCK:
554
+ case R_UNLOCK:
555
if ((val & 0xFFFF) == XILINX_UNLOCK_KEY) {
556
DB_PRINT("XILINX UNLOCK 0xF8000000 + 0x%x <= 0x%x\n", (int)offset,
557
(unsigned)val & 0xFFFF);
558
- s->regs[LOCKSTA] = 0;
559
+ s->regs[R_LOCKSTA] = 0;
560
} else {
561
DB_PRINT("WRONG XILINX UNLOCK KEY 0xF8000000 + 0x%x <= 0x%x\n",
562
(int)offset, (unsigned)val & 0xFFFF);
563
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_write(void *opaque, hwaddr offset,
564
return;
565
}
566
567
- if (s->regs[LOCKSTA]) {
568
+ if (s->regs[R_LOCKSTA]) {
569
qemu_log_mask(LOG_GUEST_ERROR,
570
"SCLR registers are locked. Unlock them first\n");
571
return;
572
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_write(void *opaque, hwaddr offset,
573
s->regs[offset] = val;
574
575
switch (offset) {
576
- case PSS_RST_CTRL:
577
- if (val & R_PSS_RST_CTRL_SOFT_RST) {
578
+ case R_PSS_RST_CTRL:
579
+ if (FIELD_EX32(val, PSS_RST_CTRL, SOFT_RST)) {
580
qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
581
}
582
break;
98
--
583
--
99
2.20.1
584
2.20.1
100
585
101
586
From: Philippe Mathieu-Daudé <philmd@redhat.com>

This commit finally deletes "hw/devices.h".

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-13-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 11 -----------
 include/hw/net/smc91c111.h | 19 +++++++++++++++
 hw/arm/gumstix.c | 2 +-
 hw/arm/integratorcp.c | 2 +-
 hw/arm/mainstone.c | 2 +-
 hw/arm/realview.c | 2 +-
 hw/arm/versatilepb.c | 2 +-
 hw/net/smc91c111.c | 2 +-
 8 files changed, 25 insertions(+), 17 deletions(-)
 delete mode 100644 include/hw/devices.h
 create mode 100644 include/hw/net/smc91c111.h


From: Aaron Hill <aa1ronham@gmail.com>

This commit properly sets the ENET_BD_BDU flag once the emulated FEC controller
has finished processing the last descriptor. This is done for both transmit
and receive descriptors.

This allows the QNX 7.0.0 BSP for the Sabrelite board (which can be
found at http://blackberry.qnx.com/en/developers/bsp) to properly
control the FEC. Without this patch, the BSP ethernet driver will never
re-use FEC descriptors, as the unset ENET_BD_BDU flag will cause
it to believe that the descriptors are still in use by the NIC.

Note that Linux does not appear to use this field at all, and is
unaffected by this patch.

Without this patch, QNX will think that the NIC is still processing its
transaction descriptors, and won't send any more data over the network.

For reference:

On page 1192 of the I.MX 6DQ reference manual revision (Rev. 5, 06/2018),
which can be found at https://www.nxp.com/products/processors-and-microcontrollers/arm-based-processors-and-mcus/i.mx-applications-processors/i.mx-6-processors/i.mx-6quad-processors-high-performance-3d-graphics-hd-video-arm-cortex-a9-core:i.MX6Q?&tab=Documentation_Tab&linkline=Application-Note

the 'BDU' field is described as follows for the 'Enhanced transmit
buffer descriptor':

'Last buffer descriptor update done. Indicates that the last BD data has been updated by
uDMA. This field is written by the user (=0) and uDMA (=1).'

The same description is used for the receive buffer descriptor.

Signed-off-by: Aaron Hill <aa1ronham@gmail.com>
Message-id: 20190805142417.10433-1-aaron.hill@alertinnovation.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/imx_fec.c | 4 ++++
 1 file changed, 4 insertions(+)
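
(Not part of the patch: the fragment below is a made-up driver-side sketch --
field names and bit positions are illustrative only, not QNX or QEMU code --
showing the ownership handshake the commit message describes. A driver that
requires BDU to be set before it recycles a descriptor stalls forever against
an emulation that never sets it.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BD_READY (1u << 15) /* descriptor currently owned by the controller */
#define BD_BDU   (1u << 31) /* "buffer descriptor updated", written back by uDMA */

struct enet_bd {
    uint16_t flags;  /* BD_READY lives here */
    uint32_t option; /* enhanced-descriptor word holding BD_BDU */
    /* buffer pointer, length, ... omitted */
};

/* Driver-side test before returning a TX descriptor to the free pool. */
static bool tx_bd_reclaimable(const struct enet_bd *bd)
{
    /*
     * The controller clears READY once the frame is out and sets BDU when it
     * has written the descriptor back; if the emulation forgets BDU, this
     * never becomes true and the ring eventually fills up.
     */
    return !(bd->flags & BD_READY) && (bd->option & BD_BDU);
}

int main(void)
{
    struct enet_bd bd = { .flags = 0, .option = 0 };                  /* sent, no BDU */
    printf("reclaimable without BDU: %d\n", tx_bd_reclaimable(&bd));  /* 0: stuck */
    bd.option |= BD_BDU;                                              /* what this patch adds */
    printf("reclaimable with BDU:    %d\n", tx_bd_reclaimable(&bd));  /* 1 */
    return 0;
}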
22
diff --git a/include/hw/devices.h b/include/hw/devices.h
40
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
23
deleted file mode 100644
24
index XXXXXXX..XXXXXXX
25
--- a/include/hw/devices.h
26
+++ /dev/null
27
@@ -XXX,XX +XXX,XX @@
28
-#ifndef QEMU_DEVICES_H
29
-#define QEMU_DEVICES_H
30
-
31
-/* Devices that have nowhere better to go. */
32
-
33
-#include "hw/hw.h"
34
-
35
-/* smc91c111.c */
36
-void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
37
-
38
-#endif
39
diff --git a/include/hw/net/smc91c111.h b/include/hw/net/smc91c111.h
40
new file mode 100644
41
index XXXXXXX..XXXXXXX
42
--- /dev/null
43
+++ b/include/hw/net/smc91c111.h
44
@@ -XXX,XX +XXX,XX @@
45
+/*
46
+ * SMSC 91C111 Ethernet interface emulation
47
+ *
48
+ * Copyright (c) 2005 CodeSourcery, LLC.
49
+ * Written by Paul Brook
50
+ *
51
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
52
+ * See the COPYING file in the top-level directory.
53
+ */
54
+
55
+#ifndef HW_NET_SMC91C111_H
56
+#define HW_NET_SMC91C111_H
57
+
58
+#include "hw/irq.h"
59
+#include "net/net.h"
60
+
61
+void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
62
+
63
+#endif
64
diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
65
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
66
--- a/hw/arm/gumstix.c
42
--- a/hw/net/imx_fec.c
67
+++ b/hw/arm/gumstix.c
43
+++ b/hw/net/imx_fec.c
68
@@ -XXX,XX +XXX,XX @@
44
@@ -XXX,XX +XXX,XX @@ static void imx_enet_do_tx(IMXFECState *s, uint32_t index)
69
#include "hw/arm/pxa.h"
45
if (bd.option & ENET_BD_TX_INT) {
70
#include "net/net.h"
46
s->regs[ENET_EIR] |= int_txf;
71
#include "hw/block/flash.h"
47
}
72
-#include "hw/devices.h"
48
+ /* Indicate that we've updated the last buffer descriptor. */
73
+#include "hw/net/smc91c111.h"
49
+ bd.last_buffer = ENET_BD_BDU;
74
#include "hw/boards.h"
50
}
75
#include "exec/address-spaces.h"
51
if (bd.option & ENET_BD_TX_INT) {
76
#include "sysemu/qtest.h"
52
s->regs[ENET_EIR] |= int_txb;
77
diff --git a/hw/arm/integratorcp.c b/hw/arm/integratorcp.c
53
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
78
index XXXXXXX..XXXXXXX 100644
54
/* Last buffer in frame. */
79
--- a/hw/arm/integratorcp.c
55
bd.flags |= flags | ENET_BD_L;
80
+++ b/hw/arm/integratorcp.c
56
FEC_PRINTF("rx frame flags %04x\n", bd.flags);
81
@@ -XXX,XX +XXX,XX @@
57
+ /* Indicate that we've updated the last buffer descriptor. */
82
#include "qemu-common.h"
58
+ bd.last_buffer = ENET_BD_BDU;
83
#include "cpu.h"
59
if (bd.option & ENET_BD_RX_INT) {
84
#include "hw/sysbus.h"
60
s->regs[ENET_EIR] |= ENET_INT_RXF;
85
-#include "hw/devices.h"
61
}
86
#include "hw/boards.h"
87
#include "hw/arm/arm.h"
88
#include "hw/misc/arm_integrator_debug.h"
89
+#include "hw/net/smc91c111.h"
90
#include "net/net.h"
91
#include "exec/address-spaces.h"
92
#include "sysemu/sysemu.h"
93
diff --git a/hw/arm/mainstone.c b/hw/arm/mainstone.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/hw/arm/mainstone.c
96
+++ b/hw/arm/mainstone.c
97
@@ -XXX,XX +XXX,XX @@
98
#include "hw/arm/pxa.h"
99
#include "hw/arm/arm.h"
100
#include "net/net.h"
101
-#include "hw/devices.h"
102
+#include "hw/net/smc91c111.h"
103
#include "hw/boards.h"
104
#include "hw/block/flash.h"
105
#include "hw/sysbus.h"
106
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/hw/arm/realview.c
109
+++ b/hw/arm/realview.c
110
@@ -XXX,XX +XXX,XX @@
111
#include "hw/sysbus.h"
112
#include "hw/arm/arm.h"
113
#include "hw/arm/primecell.h"
114
-#include "hw/devices.h"
115
#include "hw/net/lan9118.h"
116
+#include "hw/net/smc91c111.h"
117
#include "hw/pci/pci.h"
118
#include "net/net.h"
119
#include "sysemu/sysemu.h"
120
diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/hw/arm/versatilepb.c
123
+++ b/hw/arm/versatilepb.c
124
@@ -XXX,XX +XXX,XX @@
125
#include "cpu.h"
126
#include "hw/sysbus.h"
127
#include "hw/arm/arm.h"
128
-#include "hw/devices.h"
129
+#include "hw/net/smc91c111.h"
130
#include "net/net.h"
131
#include "sysemu/sysemu.h"
132
#include "hw/pci/pci.h"
133
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/hw/net/smc91c111.c
136
+++ b/hw/net/smc91c111.c
137
@@ -XXX,XX +XXX,XX @@
138
#include "qemu/osdep.h"
139
#include "hw/sysbus.h"
140
#include "net/net.h"
141
-#include "hw/devices.h"
142
+#include "hw/net/smc91c111.h"
143
#include "qemu/log.h"
144
/* For crc32 */
145
#include <zlib.h>
146
--
62
--
147
2.20.1
63
2.20.1
148
64
149
65
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
---
 target/arm/cpu.h | 3 ++
 target/arm/helper.h | 2 +
 target/arm/translate.h | 1 +
 target/arm/helper.c | 112 +++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c | 22 ++++++++
 5 files changed, 140 insertions(+)


Factor out code to 'generate a singlestep exception', which is
currently repeated in four places.

To do this we need to also pull the identical copies of the
gen-exception() function out of translate-a64.c and translate.c
into translate.h.

(There is a bug in the code: we're taking the exception to the wrong
target EL. This will be simpler to fix if there's only one place to
do it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-2-peter.maydell@linaro.org
---
 target/arm/translate.h | 23 +++++++++++++++++++++++
 target/arm/translate-a64.c | 19 ++-----------------
 target/arm/translate.c | 20 ++------------------
 3 files changed, 27 insertions(+), 35 deletions(-)
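
(Not part of the patch: a toy, self-contained model of the "lazy" handshake
the first commit message describes -- an illustrative sketch only, not QEMU
code or the architectural pseudocode. Exception entry just reserves the FP
frame and sets FPCCR.LSPACT; the registers are written out only when the
handler executes its first FP instruction, which is the ExecuteFPCheck()
step the patch implements.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t fp_regs[32]; /* s0..s31 of the interrupted context */
static uint32_t frame[34];   /* space reserved for them (plus FPSCR) in the stack frame */
static bool lspact;          /* models FPCCR.LSPACT */

static void exception_entry(void)
{
    /* Lazy case: don't copy fp_regs now, just note that a save is pending. */
    lspact = true;
    printf("entry: FP frame reserved, LSPACT=1, nothing copied yet\n");
}

static void execute_fp_check(void)
{
    if (lspact) {
        /* First FP insn inside the handler: do the deferred stacking now. */
        for (int i = 0; i < 32; i++) {
            frame[i] = fp_regs[i];
        }
        lspact = false;
        printf("first FP insn: s0..s31 stacked, LSPACT cleared\n");
    }
}

int main(void)
{
    exception_entry();
    execute_fp_check(); /* handler touches the FPU for the first time */
    execute_fp_check(); /* later FP insns: nothing left to do */
    return 0;
}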
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@
24
#define EXCP_NOCP 17 /* v7M NOCP UsageFault */
25
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
26
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
27
+#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
28
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
29
30
#define ARMV7M_EXCP_RESET 1
31
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
32
FIELD(TBFLAG_A32, VFPEN, 7, 1)
33
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
34
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
35
+/* For M profile only, set if FPCCR.LSPACT is set */
36
+FIELD(TBFLAG_A32, LSPACT, 18, 1)
37
/* For M profile only, set if we must create a new FP context */
38
FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
39
/* For M profile only, set if FPCCR.S does not match current security state */
40
diff --git a/target/arm/helper.h b/target/arm/helper.h
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/helper.h
43
+++ b/target/arm/helper.h
44
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(v7m_blxns, void, env, i32)
45
46
DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
47
48
+DEF_HELPER_1(v7m_preserve_fp_state, void, env)
49
+
50
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
51
52
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
53
diff --git a/target/arm/translate.h b/target/arm/translate.h
22
diff --git a/target/arm/translate.h b/target/arm/translate.h
54
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/translate.h
24
--- a/target/arm/translate.h
56
+++ b/target/arm/translate.h
25
+++ b/target/arm/translate.h
57
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
26
@@ -XXX,XX +XXX,XX @@
58
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
27
#define TARGET_ARM_TRANSLATE_H
59
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
28
60
bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
29
#include "exec/translator.h"
61
+ bool v7m_lspact; /* FPCCR.LSPACT set */
30
+#include "internals.h"
62
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
31
63
* so that top level loop can generate correct syndrome information.
32
64
*/
33
/* internal defines */
65
diff --git a/target/arm/helper.c b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ static inline void gen_ss_advance(DisasContext *s)
66
index XXXXXXX..XXXXXXX 100644
35
}
67
--- a/target/arm/helper.c
68
+++ b/target/arm/helper.c
69
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
70
g_assert_not_reached();
71
}
36
}
72
37
73
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
38
+static inline void gen_exception(int excp, uint32_t syndrome,
39
+ uint32_t target_el)
74
+{
40
+{
75
+ /* translate.c should never generate calls here in user-only mode */
41
+ TCGv_i32 tcg_excp = tcg_const_i32(excp);
76
+ g_assert_not_reached();
42
+ TCGv_i32 tcg_syn = tcg_const_i32(syndrome);
43
+ TCGv_i32 tcg_el = tcg_const_i32(target_el);
44
+
45
+ gen_helper_exception_with_syndrome(cpu_env, tcg_excp,
46
+ tcg_syn, tcg_el);
47
+
48
+ tcg_temp_free_i32(tcg_el);
49
+ tcg_temp_free_i32(tcg_syn);
50
+ tcg_temp_free_i32(tcg_excp);
77
+}
51
+}
78
+
52
+
79
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
53
+/* Generate an architectural singlestep exception */
80
{
54
+static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
81
/* The TT instructions can be used by unprivileged code, but in
82
@@ -XXX,XX +XXX,XX @@ pend_fault:
83
return false;
84
}
85
86
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
87
+{
55
+{
88
+ /*
56
+ gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, isv, ex),
89
+ * Preserve FP state (because LSPACT was set and we are about
57
+ default_exception_el(s));
90
+ * to execute an FP instruction). This corresponds to the
91
+ * PreserveFPState() pseudocode.
92
+ * We may throw an exception if the stacking fails.
93
+ */
94
+ ARMCPU *cpu = arm_env_get_cpu(env);
95
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
96
+ bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
97
+ bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
98
+ bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
99
+ uint32_t fpcar = env->v7m.fpcar[is_secure];
100
+ bool stacked_ok = true;
101
+ bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
102
+ bool take_exception;
103
+
104
+ /* Take the iothread lock as we are going to touch the NVIC */
105
+ qemu_mutex_lock_iothread();
106
+
107
+ /* Check the background context had access to the FPU */
108
+ if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
109
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
110
+ env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
111
+ stacked_ok = false;
112
+ } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
113
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
114
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
115
+ stacked_ok = false;
116
+ }
117
+
118
+ if (!splimviol && stacked_ok) {
119
+ /* We only stack if the stack limit wasn't violated */
120
+ int i;
121
+ ARMMMUIdx mmu_idx;
122
+
123
+ mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
124
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
125
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
126
+ uint32_t faddr = fpcar + 4 * i;
127
+ uint32_t slo = extract64(dn, 0, 32);
128
+ uint32_t shi = extract64(dn, 32, 32);
129
+
130
+ if (i >= 16) {
131
+ faddr += 8; /* skip the slot for the FPSCR */
132
+ }
133
+ stacked_ok = stacked_ok &&
134
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
135
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
136
+ }
137
+
138
+ stacked_ok = stacked_ok &&
139
+ v7m_stack_write(cpu, fpcar + 0x40,
140
+ vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
141
+ }
142
+
143
+ /*
144
+ * We definitely pended an exception, but it's possible that it
145
+ * might not be able to be taken now. If its priority permits us
146
+ * to take it now, then we must not update the LSPACT or FP regs,
147
+ * but instead jump out to take the exception immediately.
148
+ * If it's just pending and won't be taken until the current
149
+ * handler exits, then we do update LSPACT and the FP regs.
150
+ */
151
+ take_exception = !stacked_ok &&
152
+ armv7m_nvic_can_take_pending_exception(env->nvic);
153
+
154
+ qemu_mutex_unlock_iothread();
155
+
156
+ if (take_exception) {
157
+ raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
158
+ }
159
+
160
+ env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
161
+
162
+ if (ts) {
163
+ /* Clear s0 to s31 and the FPSCR */
164
+ int i;
165
+
166
+ for (i = 0; i < 32; i += 2) {
167
+ *aa32_vfp_dreg(env, i / 2) = 0;
168
+ }
169
+ vfp_set_fpscr(env, 0);
170
+ }
171
+ /*
172
+ * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
173
+ * unchanged.
174
+ */
175
+}
58
+}
176
+
59
+
177
/* Write to v7M CONTROL.SPSEL bit for the specified security bank.
60
/*
178
* This may change the current stack pointer between Main and Process
61
* Given a VFP floating point constant encoded into an 8 bit immediate in an
179
* stack pointers if it is done for the CONTROL register for the current
62
* instruction, expand it to the actual constant value of the specified
180
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
63
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
181
[EXCP_NOCP] = "v7M NOCP UsageFault",
64
index XXXXXXX..XXXXXXX 100644
182
[EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
65
--- a/target/arm/translate-a64.c
183
[EXCP_STKOF] = "v8M STKOF UsageFault",
66
+++ b/target/arm/translate-a64.c
184
+ [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
67
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
185
};
68
tcg_temp_free_i32(tcg_excp);
186
187
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
188
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
189
return;
190
}
191
break;
192
+ case EXCP_LAZYFP:
193
+ /*
194
+ * We already pended the specific exception in the NVIC in the
195
+ * v7m_preserve_fp_state() helper function.
196
+ */
197
+ break;
198
default:
199
cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
200
return; /* Never happens. Keep compiler happy. */
201
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
202
flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
203
}
204
205
+ if (arm_feature(env, ARM_FEATURE_M)) {
206
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
207
+
208
+ if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
209
+ flags = FIELD_DP32(flags, TBFLAG_A32, LSPACT, 1);
210
+ }
211
+ }
212
+
213
*pflags = flags;
214
*cs_base = 0;
215
}
69
}
70
71
-static void gen_exception(int excp, uint32_t syndrome, uint32_t target_el)
72
-{
73
- TCGv_i32 tcg_excp = tcg_const_i32(excp);
74
- TCGv_i32 tcg_syn = tcg_const_i32(syndrome);
75
- TCGv_i32 tcg_el = tcg_const_i32(target_el);
76
-
77
- gen_helper_exception_with_syndrome(cpu_env, tcg_excp,
78
- tcg_syn, tcg_el);
79
- tcg_temp_free_i32(tcg_el);
80
- tcg_temp_free_i32(tcg_syn);
81
- tcg_temp_free_i32(tcg_excp);
82
-}
83
-
84
static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
85
{
86
gen_a64_set_pc_im(s->pc - offset);
87
@@ -XXX,XX +XXX,XX @@ static void gen_step_complete_exception(DisasContext *s)
88
* of the exception, and our syndrome information is always correct.
89
*/
90
gen_ss_advance(s);
91
- gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, 1, s->is_ldex),
92
- default_exception_el(s));
93
+ gen_swstep_exception(s, 1, s->is_ldex);
94
s->base.is_jmp = DISAS_NORETURN;
95
}
96
97
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
98
* bits should be zero.
99
*/
100
assert(dc->base.num_insns == 1);
101
- gen_exception(EXCP_UDEF, syn_swstep(dc->ss_same_el, 0, 0),
102
- default_exception_el(dc));
103
+ gen_swstep_exception(dc, 0, 0);
104
dc->base.is_jmp = DISAS_NORETURN;
105
} else {
106
disas_a64_insn(env, dc);
216
diff --git a/target/arm/translate.c b/target/arm/translate.c
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
217
index XXXXXXX..XXXXXXX 100644
108
index XXXXXXX..XXXXXXX 100644
218
--- a/target/arm/translate.c
109
--- a/target/arm/translate.c
219
+++ b/target/arm/translate.c
110
+++ b/target/arm/translate.c
220
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
111
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
221
if (arm_dc_feature(s, ARM_FEATURE_M)) {
112
tcg_temp_free_i32(tcg_excp);
222
/* Handle M-profile lazy FP state mechanics */
113
}
223
114
224
+ /* Trigger lazy-state preservation if necessary */
115
-static void gen_exception(int excp, uint32_t syndrome, uint32_t target_el)
225
+ if (s->v7m_lspact) {
116
-{
226
+ /*
117
- TCGv_i32 tcg_excp = tcg_const_i32(excp);
227
+ * Lazy state saving affects external memory and also the NVIC,
118
- TCGv_i32 tcg_syn = tcg_const_i32(syndrome);
228
+ * so we must mark it as an IO operation for icount.
119
- TCGv_i32 tcg_el = tcg_const_i32(target_el);
229
+ */
120
-
230
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
121
- gen_helper_exception_with_syndrome(cpu_env, tcg_excp,
231
+ gen_io_start();
122
- tcg_syn, tcg_el);
232
+ }
123
-
233
+ gen_helper_v7m_preserve_fp_state(cpu_env);
124
- tcg_temp_free_i32(tcg_el);
234
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
125
- tcg_temp_free_i32(tcg_syn);
235
+ gen_io_end();
126
- tcg_temp_free_i32(tcg_excp);
236
+ }
127
-}
237
+ /*
128
-
238
+ * If the preserve_fp_state helper doesn't throw an exception
129
static void gen_step_complete_exception(DisasContext *s)
239
+ * then it will clear LSPACT; we don't need to repeat this for
130
{
240
+ * any further FP insns in this TB.
131
/* We just completed step of an insn. Move from Active-not-pending
241
+ */
132
@@ -XXX,XX +XXX,XX @@ static void gen_step_complete_exception(DisasContext *s)
242
+ s->v7m_lspact = false;
133
* of the exception, and our syndrome information is always correct.
243
+ }
134
*/
244
+
135
gen_ss_advance(s);
245
/* Update ownership of FP context: set FPCCR.S to match current state */
136
- gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, 1, s->is_ldex),
246
if (s->v8m_fpccr_s_wrong) {
137
- default_exception_el(s));
247
TCGv_i32 tmp;
138
+ gen_swstep_exception(s, 1, s->is_ldex);
248
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
139
s->base.is_jmp = DISAS_NORETURN;
249
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
140
}
250
dc->v7m_new_fp_ctxt_needed =
141
251
FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
142
@@ -XXX,XX +XXX,XX @@ static bool arm_pre_translate_insn(DisasContext *dc)
252
+ dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_A32, LSPACT);
143
* bits should be zero.
253
dc->cp_regs = cpu->cp_regs;
144
*/
254
dc->features = env->features;
145
assert(dc->base.num_insns == 1);
255
146
- gen_exception(EXCP_UDEF, syn_swstep(dc->ss_same_el, 0, 0),
147
- default_exception_el(dc));
148
+ gen_swstep_exception(dc, 0, 0);
149
dc->base.is_jmp = DISAS_NORETURN;
150
return true;
151
}
256
--
152
--
257
2.20.1
153
2.20.1
258
154
259
155
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
---
target/arm/cpu.h | 2 ++
target/arm/translate.h | 1 +
target/arm/helper.c | 13 +++++++++++++
target/arm/translate.c | 29 +++++++++++++++++++++++++++++
4 files changed, 45 insertions(+)

When generating an architectural single-step exception we were
routing it to the "default exception level", which is to say
the same exception level we execute at except that EL0 exceptions
go to EL1. This is incorrect because the debug exception level
can be configured by the guest for situations such as single
stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB
flags, because it is dependent on CPU state like HCR_EL2.TGE
and MDCR_EL2.TDE. (That we were previously calling the
arm_debug_target_el() function to determine dc->ss_same_el
is itself a bug, though one that would only have manifested
as incorrect syndrome information.) Since we are out of TB
flag bits unless we want to expand into the cs_base field,
we share some bits with the M-profile only HANDLER and
STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-3-peter.maydell@linaro.org
---
target/arm/cpu.h | 5 +++++
target/arm/translate.h | 15 +++++++++++----
target/arm/helper.c | 6 ++++++
target/arm/translate-a64.c | 2 +-
target/arm/translate.c | 4 +++-
5 files changed, 26 insertions(+), 6 deletions(-)
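As a rough, standalone restatement of the ExecuteFPCheck() condition that the
first patch above folds into the NEW_FP_CTXT_NEEDED TB flag (the function name,
parameters and bit masks here are invented for this sketch and are not QEMU
identifiers):

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch only: "do we need to create a new FP context before this FP insn?"
 * FPCCR.ASPEN is bit 31 of the current security state's FPCCR;
 * CONTROL.FPCA is bit 2 and CONTROL.SFPA is bit 3 of the Secure CONTROL.
 */
static bool new_fp_ctxt_needed(uint32_t fpccr, uint32_t control_s, bool secure)
{
    bool aspen = fpccr & (1u << 31);   /* automatic state preservation enabled */
    bool fpca = control_s & (1u << 2); /* some FP context is active */
    bool sfpa = control_s & (1u << 3); /* the active FP context is Secure */

    /* A new context is needed if none is active, or if we are Secure
     * but the active context is not the Secure one. */
    return aspen && (!fpca || (secure && !sfpa));
}

The real code computes this condition in cpu_get_tb_cpu_state() and then, on
the first FP instruction in the TB, initializes FPSCR from FPDSCR and sets
FPCA (and SFPA if Secure), as the translate.c hunk below shows.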
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpu.h
33
--- a/target/arm/cpu.h
25
+++ b/target/arm/cpu.h
34
+++ b/target/arm/cpu.h
26
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
35
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1)
27
FIELD(TBFLAG_A32, VFPEN, 7, 1)
36
/* Target EL if we take a floating-point-disabled exception */
28
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
37
FIELD(TBFLAG_ANY, FPEXC_EL, 24, 2)
29
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
38
FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
30
+/* For M profile only, set if we must create a new FP context */
39
+/*
31
+FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
40
+ * For A-profile only, target EL for debug exceptions.
32
/* For M profile only, set if FPCCR.S does not match current security state */
41
+ * Note that this overlaps with the M-profile-only HANDLER and STACKCHECK bits.
33
FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
42
+ */
34
/* For M profile only, Handler (ie not Thread) mode */
43
+FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 21, 2)
44
45
/* Bit usage when in AArch32 state: */
46
FIELD(TBFLAG_A32, THUMB, 0, 1)
35
diff --git a/target/arm/translate.h b/target/arm/translate.h
47
diff --git a/target/arm/translate.h b/target/arm/translate.h
36
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/translate.h
49
--- a/target/arm/translate.h
38
+++ b/target/arm/translate.h
50
+++ b/target/arm/translate.h
39
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
51
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
40
bool v8m_secure; /* true if v8M and we're in Secure mode */
52
uint32_t svc_imm;
41
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
53
int aarch64;
42
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
54
int current_el;
43
+ bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
55
+ /* Debug target exception level for single-step exceptions */
44
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
56
+ int debug_target_el;
45
* so that top level loop can generate correct syndrome information.
57
GHashTable *cp_regs;
58
uint64_t features; /* CPU features bits */
59
/* Because unallocated encodings generate different exception syndrome
60
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
61
* ie A64 LDX*, LDAX*, A32/T32 LDREX*, LDAEX*.
46
*/
62
*/
63
bool is_ldex;
64
- /* True if a single-step exception will be taken to the current EL */
65
- bool ss_same_el;
66
/* True if v8.3-PAuth is active. */
67
bool pauth_active;
68
/* True with v8.5-BTI and SCTLR_ELx.BT* set. */
69
@@ -XXX,XX +XXX,XX @@ static inline void gen_exception(int excp, uint32_t syndrome,
70
/* Generate an architectural singlestep exception */
71
static inline void gen_swstep_exception(DisasContext *s, int isv, int ex)
72
{
73
- gen_exception(EXCP_UDEF, syn_swstep(s->ss_same_el, isv, ex),
74
- default_exception_el(s));
75
+ bool same_el = (s->debug_target_el == s->current_el);
76
+
77
+ /*
78
+ * If singlestep is targeting a lower EL than the current one,
79
+ * then s->ss_active must be false and we can never get here.
80
+ */
81
+ assert(s->debug_target_el >= s->current_el);
82
+
83
+ gen_exception(EXCP_UDEF, syn_swstep(same_el, isv, ex), s->debug_target_el);
84
}
85
86
/*
47
diff --git a/target/arm/helper.c b/target/arm/helper.c
87
diff --git a/target/arm/helper.c b/target/arm/helper.c
48
index XXXXXXX..XXXXXXX 100644
88
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/helper.c
89
--- a/target/arm/helper.c
50
+++ b/target/arm/helper.c
90
+++ b/target/arm/helper.c
51
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
91
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
52
flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
92
}
53
}
93
}
54
94
55
+ if (arm_feature(env, ARM_FEATURE_M) &&
95
+ if (!arm_feature(env, ARM_FEATURE_M)) {
56
+ (env->v7m.fpccr[env->v7m.secure] & R_V7M_FPCCR_ASPEN_MASK) &&
96
+ int target_el = arm_debug_target_el(env);
57
+ (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) ||
97
+
58
+ (env->v7m.secure &&
98
+ flags = FIELD_DP32(flags, TBFLAG_ANY, DEBUG_TARGET_EL, target_el);
59
+ !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)))) {
60
+ /*
61
+ * ASPEN is set, but FPCA/SFPA indicate that there is no active
62
+ * FP context; we must create a new FP context before executing
63
+ * any FP insn.
64
+ */
65
+ flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
66
+ }
99
+ }
67
+
100
+
68
*pflags = flags;
101
*pflags = flags;
69
*cs_base = 0;
102
*cs_base = 0;
70
}
103
}
104
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
105
index XXXXXXX..XXXXXXX 100644
106
--- a/target/arm/translate-a64.c
107
+++ b/target/arm/translate-a64.c
108
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
109
dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
110
dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE_SS);
111
dc->is_ldex = false;
112
- dc->ss_same_el = (arm_debug_target_el(env) == dc->current_el);
113
+ dc->debug_target_el = FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
114
115
/* Bound the number of insns to execute to those left on the page. */
116
bound = -(dc->base.pc_first | TARGET_PAGE_MASK) / 4;
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
117
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
118
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate.c
119
--- a/target/arm/translate.c
74
+++ b/target/arm/translate.c
120
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
76
/* Don't need to do this for any further FP insns in this TB */
77
s->v8m_fpccr_s_wrong = false;
78
}
79
+
80
+ if (s->v7m_new_fp_ctxt_needed) {
81
+ /*
82
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA
83
+ * and the FPSCR.
84
+ */
85
+ TCGv_i32 control, fpscr;
86
+ uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
87
+
88
+ fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
89
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
90
+ tcg_temp_free_i32(fpscr);
91
+ /*
92
+ * We don't need to arrange to end the TB, because the only
93
+ * parts of FPSCR which we cache in the TB flags are the VECLEN
94
+ * and VECSTRIDE, and those don't exist for M-profile.
95
+ */
96
+
97
+ if (s->v8m_secure) {
98
+ bits |= R_V7M_CONTROL_SFPA_MASK;
99
+ }
100
+ control = load_cpu_field(v7m.control[M_REG_S]);
101
+ tcg_gen_ori_i32(control, control, bits);
102
+ store_cpu_field(control, v7m.control[M_REG_S]);
103
+ /* Don't need to do this for any further FP insns in this TB */
104
+ s->v7m_new_fp_ctxt_needed = false;
105
+ }
106
}
107
108
if (extract32(insn, 28, 4) == 0xf) {
109
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
121
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
110
regime_is_secure(env, dc->mmu_idx);
122
dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
111
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
123
dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE_SS);
112
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
124
dc->is_ldex = false;
113
+ dc->v7m_new_fp_ctxt_needed =
125
- dc->ss_same_el = false; /* Can't be true since EL_d must be AArch64 */
114
+ FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
126
+ if (!arm_feature(env, ARM_FEATURE_M)) {
115
dc->cp_regs = cpu->cp_regs;
127
+ dc->debug_target_el = FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
116
dc->features = env->features;
128
+ }
129
130
dc->page_start = dc->base.pc_first & TARGET_PAGE_MASK;
117
131
118
--
132
--
119
2.20.1
133
2.20.1
120
134
121
135
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.

We are going to need this for the lazy-FP-stacking code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-21-peter.maydell@linaro.org
---
target/arm/cpu.h | 7 +++++++
target/arm/helper.c | 14 +++++++++++---
2 files changed, 18 insertions(+), 3 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

This function is used in two different contexts, and it will be
clearer if the function is given the address to which it applies.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
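For reference, the first-halfword test that thumb_insn_is_16bit() is built
around is unchanged by the second patch above, which only makes the PC it
operates on an explicit argument. A standalone sketch, with an invented name
and ignoring the BL/BLX page-crossing case the real function also handles:

#include <stdbool.h>
#include <stdint.h>

/* A Thumb halfword whose top five bits are 0b11101, 0b11110 or 0b11111
 * (0x1d..0x1f) is the first half of a 32-bit T32 instruction; anything
 * below that range is a complete 16-bit instruction. */
static bool thumb_halfword_is_16bit(uint16_t insn)
{
    return (insn >> 11) < 0x1d;
}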
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
17
--- a/target/arm/translate.c
20
+++ b/target/arm/cpu.h
18
+++ b/target/arm/translate.c
21
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
19
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
22
}
20
}
23
}
21
}
24
22
25
+/*
23
-static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
26
+ * Return the MMU index for a v7M CPU with all relevant information
24
+static bool thumb_insn_is_16bit(DisasContext *s, uint32_t pc, uint32_t insn)
27
+ * manually specified.
25
{
28
+ */
26
- /* Return true if this is a 16 bit instruction. We must be precise
29
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
27
- * about this (matching the decode). We assume that s->pc still
30
+ bool secstate, bool priv, bool negpri);
28
- * points to the first 16 bits of the insn.
31
+
29
+ /*
32
/* Return the MMU index for a v7M CPU in the specified security and
30
+ * Return true if this is a 16 bit instruction. We must be precise
33
* privilege state.
31
+ * about this (matching the decode).
34
*/
32
*/
35
diff --git a/target/arm/helper.c b/target/arm/helper.c
33
if ((insn >> 11) < 0x1d) {
36
index XXXXXXX..XXXXXXX 100644
34
/* Definitely a 16-bit instruction */
37
--- a/target/arm/helper.c
35
@@ -XXX,XX +XXX,XX @@ static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
38
+++ b/target/arm/helper.c
36
return false;
39
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
37
}
40
return 0;
38
39
- if ((insn >> 11) == 0x1e && s->pc - s->page_start < TARGET_PAGE_SIZE - 3) {
40
+ if ((insn >> 11) == 0x1e && pc - s->page_start < TARGET_PAGE_SIZE - 3) {
41
/* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix, and the suffix
42
* is not on the next page; we merge this into a 32-bit
43
* insn.
44
@@ -XXX,XX +XXX,XX @@ static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
45
*/
46
uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);
47
48
- return !thumb_insn_is_16bit(s, insn);
49
+ return !thumb_insn_is_16bit(s, s->pc, insn);
41
}
50
}
42
51
43
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
52
static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
44
- bool secstate, bool priv)
53
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
45
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
46
+ bool secstate, bool priv, bool negpri)
47
{
48
ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
49
50
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
51
mmu_idx |= ARM_MMU_IDX_M_PRIV;
52
}
54
}
53
55
54
- if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
56
insn = arm_lduw_code(env, dc->pc, dc->sctlr_b);
55
+ if (negpri) {
57
- is_16bit = thumb_insn_is_16bit(dc, insn);
56
mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
58
+ is_16bit = thumb_insn_is_16bit(dc, dc->pc, insn);
57
}
59
dc->pc += 2;
58
60
if (!is_16bit) {
59
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
61
uint32_t insn2 = arm_lduw_code(env, dc->pc, dc->sctlr_b);
60
return mmu_idx;
61
}
62
63
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
64
+ bool secstate, bool priv)
65
+{
66
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
67
+
68
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
69
+}
70
+
71
/* Return the MMU index for a v7M CPU in the specified security state */
72
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
73
{
74
--
62
--
75
2.20.1
63
2.20.1
76
64
77
65
diff view generated by jsdifflib
1
The M-profile FPCCR.S bit indicates the security status of
1
From: Richard Henderson <richard.henderson@linaro.org>
2
the floating point context. In the pseudocode ExecuteFPCheck()
2
3
function it is unconditionally set to match the current
3
Add a new field to retain the address of the instruction currently
4
security state whenever a floating point instruction is
4
being translated. The 32-bit uses are all within subroutines used
5
executed.
5
by a32 and t32. This will become less obvious when t16 support is
6
6
merged with a32+t32, and having a clear definition will help.
7
Implement this by adding a new TB flag which tracks whether
7
8
FPCCR.S is different from the current security state, so
8
Convert aarch64 as well for consistency. Note that there is one
9
that we only need to emit the code to update it in the
9
instance of a pre-assert fprintf that used the wrong value for the
10
less-common case when it is not already set correctly.
10
address of the current instruction.
11
11
12
Note that we will add the handling for the other work done
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
by ExecuteFPCheck() in later commits.
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
14
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
15
Message-id: 20190807045335.1361-3-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
18
---
17
---
19
target/arm/cpu.h | 2 ++
18
target/arm/translate-a64.h | 2 +-
20
target/arm/translate.h | 1 +
19
target/arm/translate.h | 2 ++
21
target/arm/helper.c | 5 +++++
20
target/arm/translate-a64.c | 21 +++++++++++----------
22
target/arm/translate.c | 20 ++++++++++++++++++++
21
target/arm/translate.c | 14 ++++++++------
23
4 files changed, 28 insertions(+)
22
4 files changed, 22 insertions(+), 17 deletions(-)
24
23
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
24
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
26
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
26
--- a/target/arm/translate-a64.h
28
+++ b/target/arm/cpu.h
27
+++ b/target/arm/translate-a64.h
29
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
28
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s);
30
FIELD(TBFLAG_A32, VFPEN, 7, 1)
29
qemu_log_mask(LOG_UNIMP, \
31
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
30
"%s:%d: unsupported instruction encoding 0x%08x " \
32
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
31
"at pc=%016" PRIx64 "\n", \
33
+/* For M profile only, set if FPCCR.S does not match current security state */
32
- __FILE__, __LINE__, insn, s->pc - 4); \
34
+FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
33
+ __FILE__, __LINE__, insn, s->pc_curr); \
35
/* For M profile only, Handler (ie not Thread) mode */
34
unallocated_encoding(s); \
36
FIELD(TBFLAG_A32, HANDLER, 21, 1)
35
} while (0)
37
/* For M profile only, whether we should generate stack-limit checks */
36
38
diff --git a/target/arm/translate.h b/target/arm/translate.h
37
diff --git a/target/arm/translate.h b/target/arm/translate.h
39
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/translate.h
39
--- a/target/arm/translate.h
41
+++ b/target/arm/translate.h
40
+++ b/target/arm/translate.h
42
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
41
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
43
bool v7m_handler_mode;
42
const ARMISARegisters *isar;
44
bool v8m_secure; /* true if v8M and we're in Secure mode */
43
45
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
44
target_ulong pc;
46
+ bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
45
+ /* The address of the current instruction being translated. */
47
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
46
+ target_ulong pc_curr;
48
* so that top level loop can generate correct syndrome information.
47
target_ulong page_start;
49
*/
48
uint32_t insn;
50
diff --git a/target/arm/helper.c b/target/arm/helper.c
49
/* Nonzero if this instruction has been conditionally skipped. */
51
index XXXXXXX..XXXXXXX 100644
50
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
52
--- a/target/arm/helper.c
51
index XXXXXXX..XXXXXXX 100644
53
+++ b/target/arm/helper.c
52
--- a/target/arm/translate-a64.c
54
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
53
+++ b/target/arm/translate-a64.c
55
flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
54
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
55
*/
56
static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
57
{
58
- uint64_t addr = s->pc + sextract32(insn, 0, 26) * 4 - 4;
59
+ uint64_t addr = s->pc_curr + sextract32(insn, 0, 26) * 4;
60
61
if (insn & (1U << 31)) {
62
/* BL Branch with link */
63
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
64
sf = extract32(insn, 31, 1);
65
op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
66
rt = extract32(insn, 0, 5);
67
- addr = s->pc + sextract32(insn, 5, 19) * 4 - 4;
68
+ addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
69
70
tcg_cmp = read_cpu_reg(s, rt, sf);
71
label_match = gen_new_label();
72
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
73
74
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
75
op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
76
- addr = s->pc + sextract32(insn, 5, 14) * 4 - 4;
77
+ addr = s->pc_curr + sextract32(insn, 5, 14) * 4;
78
rt = extract32(insn, 0, 5);
79
80
tcg_cmp = tcg_temp_new_i64();
81
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
82
unallocated_encoding(s);
83
return;
56
}
84
}
57
85
- addr = s->pc + sextract32(insn, 5, 19) * 4 - 4;
58
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
86
+ addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
59
+ FIELD_EX32(env->v7m.fpccr[M_REG_S], V7M_FPCCR, S) != env->v7m.secure) {
87
cond = extract32(insn, 0, 4);
60
+ flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
88
61
+ }
89
reset_btype(s);
62
+
90
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
63
*pflags = flags;
91
TCGv_i32 tcg_syn, tcg_isread;
64
*cs_base = 0;
92
uint32_t syndrome;
65
}
93
94
- gen_a64_set_pc_im(s->pc - 4);
95
+ gen_a64_set_pc_im(s->pc_curr);
96
tmpptr = tcg_const_ptr(ri);
97
syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
98
tcg_syn = tcg_const_i32(syndrome);
99
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
100
/* The pre HVC helper handles cases when HVC gets trapped
101
* as an undefined insn by runtime configuration.
102
*/
103
- gen_a64_set_pc_im(s->pc - 4);
104
+ gen_a64_set_pc_im(s->pc_curr);
105
gen_helper_pre_hvc(cpu_env);
106
gen_ss_advance(s);
107
gen_exception_insn(s, 0, EXCP_HVC, syn_aa64_hvc(imm16), 2);
108
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
109
unallocated_encoding(s);
110
break;
111
}
112
- gen_a64_set_pc_im(s->pc - 4);
113
+ gen_a64_set_pc_im(s->pc_curr);
114
tmp = tcg_const_i32(syn_aa64_smc(imm16));
115
gen_helper_pre_smc(cpu_env, tmp);
116
tcg_temp_free_i32(tmp);
117
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
118
119
tcg_rt = cpu_reg(s, rt);
120
121
- clean_addr = tcg_const_i64((s->pc - 4) + imm);
122
+ clean_addr = tcg_const_i64(s->pc_curr + imm);
123
if (is_vector) {
124
do_fp_ld(s, rt, clean_addr, size);
125
} else {
126
@@ -XXX,XX +XXX,XX @@ static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
127
offset = sextract64(insn, 5, 19);
128
offset = offset << 2 | extract32(insn, 29, 2);
129
rd = extract32(insn, 0, 5);
130
- base = s->pc - 4;
131
+ base = s->pc_curr;
132
133
if (page) {
134
/* ADRP (page based) */
135
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
136
break;
137
default:
138
fprintf(stderr, "%s: insn %#04x, fpop %#2x @ %#" PRIx64 "\n",
139
- __func__, insn, fpopcode, s->pc);
140
+ __func__, insn, fpopcode, s->pc_curr);
141
g_assert_not_reached();
142
}
143
144
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
145
{
146
uint32_t insn;
147
148
+ s->pc_curr = s->pc;
149
insn = arm_ldl_code(env, s->pc, s->sctlr_b);
150
s->insn = insn;
151
s->pc += 4;
66
diff --git a/target/arm/translate.c b/target/arm/translate.c
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
67
index XXXXXXX..XXXXXXX 100644
153
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/translate.c
154
--- a/target/arm/translate.c
69
+++ b/target/arm/translate.c
155
+++ b/target/arm/translate.c
70
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
156
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
71
}
157
* as an undefined insn by runtime configuration (ie before
158
* the insn really executes).
159
*/
160
- gen_set_pc_im(s, s->pc - 4);
161
+ gen_set_pc_im(s, s->pc_curr);
162
gen_helper_pre_hvc(cpu_env);
163
/* Otherwise we will treat this as a real exception which
164
* happens after execution of the insn. (The distinction matters
165
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
166
*/
167
TCGv_i32 tmp;
168
169
- gen_set_pc_im(s, s->pc - 4);
170
+ gen_set_pc_im(s, s->pc_curr);
171
tmp = tcg_const_i32(syn_aa32_smc());
172
gen_helper_pre_smc(cpu_env, tmp);
173
tcg_temp_free_i32(tmp);
174
@@ -XXX,XX +XXX,XX @@ static void gen_msr_banked(DisasContext *s, int r, int sysm, int rn)
175
176
/* Sync state because msr_banked() can raise exceptions */
177
gen_set_condexec(s);
178
- gen_set_pc_im(s, s->pc - 4);
179
+ gen_set_pc_im(s, s->pc_curr);
180
tcg_reg = load_reg(s, rn);
181
tcg_tgtmode = tcg_const_i32(tgtmode);
182
tcg_regno = tcg_const_i32(regno);
183
@@ -XXX,XX +XXX,XX @@ static void gen_mrs_banked(DisasContext *s, int r, int sysm, int rn)
184
185
/* Sync state because mrs_banked() can raise exceptions */
186
gen_set_condexec(s);
187
- gen_set_pc_im(s, s->pc - 4);
188
+ gen_set_pc_im(s, s->pc_curr);
189
tcg_reg = tcg_temp_new_i32();
190
tcg_tgtmode = tcg_const_i32(tgtmode);
191
tcg_regno = tcg_const_i32(regno);
192
@@ -XXX,XX +XXX,XX @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
193
}
194
195
gen_set_condexec(s);
196
- gen_set_pc_im(s, s->pc - 4);
197
+ gen_set_pc_im(s, s->pc_curr);
198
tmpptr = tcg_const_ptr(ri);
199
tcg_syn = tcg_const_i32(syndrome);
200
tcg_isread = tcg_const_i32(isread);
201
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
202
tmp = tcg_const_i32(mode);
203
/* get_r13_banked() will raise an exception if called from System mode */
204
gen_set_condexec(s);
205
- gen_set_pc_im(s, s->pc - 4);
206
+ gen_set_pc_im(s, s->pc_curr);
207
gen_helper_get_r13_banked(addr, cpu_env, tmp);
208
tcg_temp_free_i32(tmp);
209
switch (amode) {
210
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
211
return;
72
}
212
}
73
213
74
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
214
+ dc->pc_curr = dc->pc;
75
+ /* Handle M-profile lazy FP state mechanics */
215
insn = arm_ldl_code(env, dc->pc, dc->sctlr_b);
76
+
216
dc->insn = insn;
77
+ /* Update ownership of FP context: set FPCCR.S to match current state */
217
dc->pc += 4;
78
+ if (s->v8m_fpccr_s_wrong) {
218
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
79
+ TCGv_i32 tmp;
219
return;
80
+
220
}
81
+ tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
221
82
+ if (s->v8m_secure) {
222
+ dc->pc_curr = dc->pc;
83
+ tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
223
insn = arm_lduw_code(env, dc->pc, dc->sctlr_b);
84
+ } else {
224
is_16bit = thumb_insn_is_16bit(dc, dc->pc, insn);
85
+ tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
225
dc->pc += 2;
86
+ }
87
+ store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
88
+ /* Don't need to do this for any further FP insns in this TB */
89
+ s->v8m_fpccr_s_wrong = false;
90
+ }
91
+ }
92
+
93
if (extract32(insn, 28, 4) == 0xf) {
94
/*
95
* Encodings with T=1 (Thumb) or unconditional (ARM):
96
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
97
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
98
regime_is_secure(env, dc->mmu_idx);
99
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
100
+ dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
101
dc->cp_regs = cpu->cp_regs;
102
dc->features = env->features;
103
104
--
226
--
105
2.20.1
227
2.20.1
106
228
107
229
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Markus Armbruster <armbru@redhat.com>
3
We currently have 3 different ways of computing the architectural
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
value of "PC" as seen in the ARM ARM.
5
Message-id: 20190412165416.7977-12-philmd@redhat.com
5
6
The value of s->pc has been incremented past the current insn,
7
but that is all. Thus for a32, PC = s->pc + 4; for t32, PC = s->pc;
8
for t16, PC = s->pc + 2. These differing computations make it
9
impossible at present to unify the various code paths.
10
11
With the newly introduced s->pc_curr, we can compute the correct
12
value for all cases, using the formula given in the ARM ARM.
13
14
This changes the behaviour for load_reg() and load_reg_var()
15
when called with reg==15 from a 32-bit Thumb instruction:
16
previously they would have returned the incorrect value
17
of pc_curr + 6, and now they will return the architecturally
18
correct value of PC, which is pc_curr + 4. This will not
19
affect well-behaved guest software, because all of the places
20
we call these functions from T32 code are instructions where
21
using r15 is UNPREDICTABLE. Using the architectural PC value
22
here is more consistent with the T16 and A32 behaviour.
23
24
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
25
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
26
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
27
Message-id: 20190807045335.1361-4-richard.henderson@linaro.org
28
[PMM: added commit message note about UNPREDICTABLE T32 cases]
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
30
---
8
include/hw/net/lan9118.h | 2 ++
31
target/arm/translate.c | 59 ++++++++++++++++--------------------------
9
hw/arm/exynos4_boards.c | 3 ++-
32
1 file changed, 23 insertions(+), 36 deletions(-)
10
hw/arm/mps2-tz.c | 3 ++-
33
11
hw/net/lan9118.c | 1 -
34
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
4 files changed, 6 insertions(+), 3 deletions(-)
13
14
diff --git a/include/hw/net/lan9118.h b/include/hw/net/lan9118.h
15
index XXXXXXX..XXXXXXX 100644
35
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/net/lan9118.h
36
--- a/target/arm/translate.c
17
+++ b/include/hw/net/lan9118.h
37
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@
38
@@ -XXX,XX +XXX,XX @@ static inline void store_cpu_offset(TCGv_i32 var, int offset)
19
#include "hw/irq.h"
39
#define store_cpu_field(var, name) \
20
#include "net/net.h"
40
store_cpu_offset(var, offsetof(CPUARMState, name))
21
41
22
+#define TYPE_LAN9118 "lan9118"
42
+/* The architectural value of PC. */
43
+static uint32_t read_pc(DisasContext *s)
44
+{
45
+ return s->pc_curr + (s->thumb ? 4 : 8);
46
+}
23
+
47
+
24
void lan9118_init(NICInfo *, uint32_t, qemu_irq);
48
/* Set a variable to the value of a CPU register. */
25
49
static void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
26
#endif
50
{
27
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
51
if (reg == 15) {
28
index XXXXXXX..XXXXXXX 100644
52
- uint32_t addr;
29
--- a/hw/arm/exynos4_boards.c
53
- /* normally, since we updated PC, we need only to add one insn */
30
+++ b/hw/arm/exynos4_boards.c
54
- if (s->thumb)
31
@@ -XXX,XX +XXX,XX @@
55
- addr = (long)s->pc + 2;
32
#include "hw/arm/arm.h"
56
- else
33
#include "exec/address-spaces.h"
57
- addr = (long)s->pc + 4;
34
#include "hw/arm/exynos4210.h"
58
- tcg_gen_movi_i32(var, addr);
35
+#include "hw/net/lan9118.h"
59
+ tcg_gen_movi_i32(var, read_pc(s));
36
#include "hw/boards.h"
60
} else {
37
61
tcg_gen_mov_i32(var, cpu_R[reg]);
38
#undef DEBUG
39
@@ -XXX,XX +XXX,XX @@ static void lan9215_init(uint32_t base, qemu_irq irq)
40
/* This should be a 9215 but the 9118 is close enough */
41
if (nd_table[0].used) {
42
qemu_check_nic_model(&nd_table[0], "lan9118");
43
- dev = qdev_create(NULL, "lan9118");
44
+ dev = qdev_create(NULL, TYPE_LAN9118);
45
qdev_set_nic_properties(dev, &nd_table[0]);
46
qdev_prop_set_uint32(dev, "mode_16bit", 1);
47
qdev_init_nofail(dev);
48
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/hw/arm/mps2-tz.c
51
+++ b/hw/arm/mps2-tz.c
52
@@ -XXX,XX +XXX,XX @@
53
#include "hw/arm/armsse.h"
54
#include "hw/dma/pl080.h"
55
#include "hw/ssi/pl022.h"
56
+#include "hw/net/lan9118.h"
57
#include "net/net.h"
58
#include "hw/core/split-irq.h"
59
60
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
61
* except that it doesn't support the checksum-offload feature.
62
*/
63
qemu_check_nic_model(nd, "lan9118");
64
- mms->lan9118 = qdev_create(NULL, "lan9118");
65
+ mms->lan9118 = qdev_create(NULL, TYPE_LAN9118);
66
qdev_set_nic_properties(mms->lan9118, nd);
67
qdev_init_nofail(mms->lan9118);
68
69
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/hw/net/lan9118.c
72
+++ b/hw/net/lan9118.c
73
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_packet = {
74
}
62
}
75
};
63
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
76
64
/* branch link and change to thumb (blx <offset>) */
77
-#define TYPE_LAN9118 "lan9118"
65
int32_t offset;
78
#define LAN9118(obj) OBJECT_CHECK(lan9118_state, (obj), TYPE_LAN9118)
66
79
67
- val = (uint32_t)s->pc;
80
typedef struct {
68
tmp = tcg_temp_new_i32();
69
- tcg_gen_movi_i32(tmp, val);
70
+ tcg_gen_movi_i32(tmp, s->pc);
71
store_reg(s, 14, tmp);
72
/* Sign-extend the 24-bit offset */
73
offset = (((int32_t)insn) << 8) >> 8;
74
+ val = read_pc(s);
75
/* offset * 4 + bit24 * 2 + (thumb bit) */
76
val += (offset << 2) | ((insn >> 23) & 2) | 1;
77
- /* pipeline offset */
78
- val += 4;
79
/* protected by ARCH(5); above, near the start of uncond block */
80
gen_bx_im(s, val);
81
return;
82
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
83
} else {
84
/* store */
85
if (i == 15) {
86
- /* special case: r15 = PC + 8 */
87
- val = (long)s->pc + 4;
88
tmp = tcg_temp_new_i32();
89
- tcg_gen_movi_i32(tmp, val);
90
+ tcg_gen_movi_i32(tmp, read_pc(s));
91
} else if (user) {
92
tmp = tcg_temp_new_i32();
93
tmp2 = tcg_const_i32(i);
94
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
95
int32_t offset;
96
97
/* branch (and link) */
98
- val = (int32_t)s->pc;
99
if (insn & (1 << 24)) {
100
tmp = tcg_temp_new_i32();
101
- tcg_gen_movi_i32(tmp, val);
102
+ tcg_gen_movi_i32(tmp, s->pc);
103
store_reg(s, 14, tmp);
104
}
105
offset = sextract32(insn << 2, 0, 26);
106
- val += offset + 4;
107
- gen_jmp(s, val);
108
+ gen_jmp(s, read_pc(s) + offset);
109
}
110
break;
111
case 0xc:
112
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
113
tcg_temp_free_i32(addr);
114
} else if ((insn & (7 << 5)) == 0) {
115
/* Table Branch. */
116
- if (rn == 15) {
117
- addr = tcg_temp_new_i32();
118
- tcg_gen_movi_i32(addr, s->pc);
119
- } else {
120
- addr = load_reg(s, rn);
121
- }
122
+ addr = load_reg(s, rn);
123
tmp = load_reg(s, rm);
124
tcg_gen_add_i32(addr, addr, tmp);
125
if (insn & (1 << 4)) {
126
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
127
}
128
tcg_temp_free_i32(addr);
129
tcg_gen_shli_i32(tmp, tmp, 1);
130
- tcg_gen_addi_i32(tmp, tmp, s->pc);
131
+ tcg_gen_addi_i32(tmp, tmp, read_pc(s));
132
store_reg(s, 15, tmp);
133
} else {
134
bool is_lasr = false;
135
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
136
tcg_gen_movi_i32(cpu_R[14], s->pc | 1);
137
}
138
139
- offset += s->pc;
140
+ offset += read_pc(s);
141
if (insn & (1 << 12)) {
142
/* b/bl */
143
gen_jmp(s, offset);
144
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
145
offset |= (insn & (1 << 11)) << 8;
146
147
/* jump to the offset */
148
- gen_jmp(s, s->pc + offset);
149
+ gen_jmp(s, read_pc(s) + offset);
150
}
151
} else {
152
/*
153
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
154
if (insn & (1 << 11)) {
155
rd = (insn >> 8) & 7;
156
/* load pc-relative. Bit 1 of PC is ignored. */
157
- val = s->pc + 2 + ((insn & 0xff) * 4);
158
+ val = read_pc(s) + ((insn & 0xff) * 4);
159
val &= ~(uint32_t)2;
160
addr = tcg_temp_new_i32();
161
tcg_gen_movi_i32(addr, val);
162
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
163
} else {
164
/* PC. bit 1 is ignored. */
165
tmp = tcg_temp_new_i32();
166
- tcg_gen_movi_i32(tmp, (s->pc + 2) & ~(uint32_t)2);
167
+ tcg_gen_movi_i32(tmp, read_pc(s) & ~(uint32_t)2);
168
}
169
val = (insn & 0xff) * 4;
170
tcg_gen_addi_i32(tmp, tmp, val);
171
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
172
tcg_gen_brcondi_i32(TCG_COND_NE, tmp, 0, s->condlabel);
173
tcg_temp_free_i32(tmp);
174
offset = ((insn & 0xf8) >> 2) | (insn & 0x200) >> 3;
175
- val = (uint32_t)s->pc + 2;
176
- val += offset;
177
- gen_jmp(s, val);
178
+ gen_jmp(s, read_pc(s) + offset);
179
break;
180
181
case 15: /* IT, nop-hint. */
182
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
183
arm_skip_unless(s, cond);
184
185
/* jump to the offset */
186
- val = (uint32_t)s->pc + 2;
187
+ val = read_pc(s);
188
offset = ((int32_t)insn << 24) >> 24;
189
val += offset << 1;
190
gen_jmp(s, val);
191
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
192
break;
193
}
194
/* unconditional branch */
195
- val = (uint32_t)s->pc;
196
+ val = read_pc(s);
197
offset = ((int32_t)insn << 21) >> 21;
198
- val += (offset << 1) + 2;
199
+ val += offset << 1;
200
gen_jmp(s, val);
201
break;
202
203
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
204
/* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix */
205
uint32_t uoffset = ((int32_t)insn << 21) >> 9;
206
207
- tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + uoffset);
208
+ tcg_gen_movi_i32(cpu_R[14], read_pc(s) + uoffset);
209
}
210
break;
211
}
81
--
212
--
82
2.20.1
213
2.20.1
83
214
84
215
diff view generated by jsdifflib
1
We are close to running out of TB flags for AArch32; we could
1
From: Richard Henderson <richard.henderson@linaro.org>
2
start using the cs_base word, but before we do that we can
2
3
economise on our usage by sharing the same bits for the VFP
3
Provide a common routine for the places that require ALIGN(PC, 4)
4
VECSTRIDE field and the XScale XSCALE_CPAR field. This
4
as the base address as opposed to plain PC. The two are always
5
works because no XScale CPU ever had VFP.
5
the same for A32, but the difference is meaningful for thumb mode.
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Message-id: 20190807045335.1361-5-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
10
---
12
---
11
target/arm/cpu.h | 10 ++++++----
13
target/arm/translate-vfp.inc.c | 38 ++------
12
target/arm/cpu.c | 7 +++++++
14
target/arm/translate.c | 166 +++++++++++++++------------------
13
target/arm/helper.c | 6 +++++-
15
2 files changed, 82 insertions(+), 122 deletions(-)
14
target/arm/translate.c | 9 +++++++--
16
15
4 files changed, 25 insertions(+), 7 deletions(-)
17
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
16
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
19
--- a/target/arm/translate-vfp.inc.c
20
+++ b/target/arm/cpu.h
20
+++ b/target/arm/translate-vfp.inc.c
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
21
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
22
FIELD(TBFLAG_A32, THUMB, 0, 1)
22
offset = -offset;
23
FIELD(TBFLAG_A32, VECLEN, 1, 3)
24
FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
25
+/*
26
+ * We store the bottom two bits of the CPAR as TB flags and handle
27
+ * checks on the other bits at runtime. This shares the same bits as
28
+ * VECSTRIDE, which is OK as no XScale CPU has VFP.
29
+ */
30
+FIELD(TBFLAG_A32, XSCALE_CPAR, 4, 2)
31
/*
32
* Indicates whether cp register reads and writes by guest code should access
33
* the secure or nonsecure bank of banked registers; note that this is not
34
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
35
FIELD(TBFLAG_A32, VFPEN, 7, 1)
36
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
37
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
38
-/* We store the bottom two bits of the CPAR as TB flags and handle
39
- * checks on the other bits at runtime
40
- */
41
-FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
42
/* For M profile only, Handler (ie not Thread) mode */
43
FIELD(TBFLAG_A32, HANDLER, 21, 1)
44
/* For M profile only, whether we should generate stack-limit checks */
45
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/cpu.c
48
+++ b/target/arm/cpu.c
49
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
50
set_feature(env, ARM_FEATURE_THUMB_DSP);
51
}
23
}
52
24
53
+ /*
25
- if (s->thumb && a->rn == 15) {
54
+ * We rely on no XScale CPU having VFP so we can use the same bits in the
26
- /* This is actually UNPREDICTABLE */
55
+ * TB flags field for VECSTRIDE and XSCALE_CPAR.
27
- addr = tcg_temp_new_i32();
56
+ */
28
- tcg_gen_movi_i32(addr, s->pc & ~2);
57
+ assert(!(arm_feature(env, ARM_FEATURE_VFP) &&
29
- } else {
58
+ arm_feature(env, ARM_FEATURE_XSCALE)));
30
- addr = load_reg(s, a->rn);
59
+
31
- }
60
if (arm_feature(env, ARM_FEATURE_V7) &&
32
- tcg_gen_addi_i32(addr, addr, offset);
61
!arm_feature(env, ARM_FEATURE_M) &&
33
+ /* For thumb, use of PC is UNPREDICTABLE. */
62
!arm_feature(env, ARM_FEATURE_PMSA)) {
34
+ addr = add_reg_for_lit(s, a->rn, offset);
63
diff --git a/target/arm/helper.c b/target/arm/helper.c
35
tmp = tcg_temp_new_i32();
64
index XXXXXXX..XXXXXXX 100644
36
if (a->l) {
65
--- a/target/arm/helper.c
37
gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
66
+++ b/target/arm/helper.c
38
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
67
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
39
offset = -offset;
68
|| arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
69
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
70
}
71
- flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
72
+ /* Note that XSCALE_CPAR shares bits with VECSTRIDE */
73
+ if (arm_feature(env, ARM_FEATURE_XSCALE)) {
74
+ flags = FIELD_DP32(flags, TBFLAG_A32,
75
+ XSCALE_CPAR, env->cp15.c15_cpar);
76
+ }
77
}
40
}
78
41
79
flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
42
- if (s->thumb && a->rn == 15) {
43
- /* This is actually UNPREDICTABLE */
44
- addr = tcg_temp_new_i32();
45
- tcg_gen_movi_i32(addr, s->pc & ~2);
46
- } else {
47
- addr = load_reg(s, a->rn);
48
- }
49
- tcg_gen_addi_i32(addr, addr, offset);
50
+ /* For thumb, use of PC is UNPREDICTABLE. */
51
+ addr = add_reg_for_lit(s, a->rn, offset);
52
tmp = tcg_temp_new_i64();
53
if (a->l) {
54
gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
55
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
56
return true;
57
}
58
59
- if (s->thumb && a->rn == 15) {
60
- /* This is actually UNPREDICTABLE */
61
- addr = tcg_temp_new_i32();
62
- tcg_gen_movi_i32(addr, s->pc & ~2);
63
- } else {
64
- addr = load_reg(s, a->rn);
65
- }
66
+ /* For thumb, use of PC is UNPREDICTABLE. */
67
+ addr = add_reg_for_lit(s, a->rn, 0);
68
if (a->p) {
69
/* pre-decrement */
70
tcg_gen_addi_i32(addr, addr, -(a->imm << 2));
71
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
72
return true;
73
}
74
75
- if (s->thumb && a->rn == 15) {
76
- /* This is actually UNPREDICTABLE */
77
- addr = tcg_temp_new_i32();
78
- tcg_gen_movi_i32(addr, s->pc & ~2);
79
- } else {
80
- addr = load_reg(s, a->rn);
81
- }
82
+ /* For thumb, use of PC is UNPREDICTABLE. */
83
+ addr = add_reg_for_lit(s, a->rn, 0);
84
if (a->p) {
85
/* pre-decrement */
86
tcg_gen_addi_i32(addr, addr, -(a->imm << 2));
80
diff --git a/target/arm/translate.c b/target/arm/translate.c
87
diff --git a/target/arm/translate.c b/target/arm/translate.c
81
index XXXXXXX..XXXXXXX 100644
88
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/translate.c
89
--- a/target/arm/translate.c
83
+++ b/target/arm/translate.c
90
+++ b/target/arm/translate.c
84
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
91
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 load_reg(DisasContext *s, int reg)
85
dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
92
return tmp;
86
dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
93
}
87
dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
94
88
- dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
95
+/*
89
- dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
96
+ * Create a new temp, REG + OFS, except PC is ALIGN(PC, 4).
90
+ if (arm_feature(env, ARM_FEATURE_XSCALE)) {
97
+ * This is used for load/store for which use of PC implies (literal),
91
+ dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
98
+ * or ADD that implies ADR.
92
+ dc->vec_stride = 0;
99
+ */
100
+static TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
101
+{
102
+ TCGv_i32 tmp = tcg_temp_new_i32();
103
+
104
+ if (reg == 15) {
105
+ tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
93
+ } else {
106
+ } else {
94
+ dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
107
+ tcg_gen_addi_i32(tmp, cpu_R[reg], ofs);
95
+ dc->c15_cpar = 0;
96
+ }
108
+ }
97
dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_A32, HANDLER);
109
+ return tmp;
98
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
110
+}
99
regime_is_secure(env, dc->mmu_idx);
111
+
112
/* Set a CPU register. The source must be a temporary and will be
113
marked as dead. */
114
static void store_reg(DisasContext *s, int reg, TCGv_i32 var)
115
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
116
*/
117
bool wback = extract32(insn, 21, 1);
118
119
- if (rn == 15) {
120
- if (insn & (1 << 21)) {
121
- /* UNPREDICTABLE */
122
- goto illegal_op;
123
- }
124
- addr = tcg_temp_new_i32();
125
- tcg_gen_movi_i32(addr, s->pc & ~3);
126
- } else {
127
- addr = load_reg(s, rn);
128
+ if (rn == 15 && (insn & (1 << 21))) {
129
+ /* UNPREDICTABLE */
130
+ goto illegal_op;
131
}
132
+
133
+ addr = add_reg_for_lit(s, rn, 0);
134
offset = (insn & 0xff) * 4;
135
if ((insn & (1 << 23)) == 0) {
136
offset = -offset;
137
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
138
store_reg(s, rd, tmp);
139
} else {
140
/* Add/sub 12-bit immediate. */
141
- if (rn == 15) {
142
- offset = s->pc & ~(uint32_t)3;
143
- if (insn & (1 << 23))
144
- offset -= imm;
145
- else
146
- offset += imm;
147
- tmp = tcg_temp_new_i32();
148
- tcg_gen_movi_i32(tmp, offset);
149
- store_reg(s, rd, tmp);
150
+ if (insn & (1 << 23)) {
151
+ imm = -imm;
152
+ }
153
+ tmp = add_reg_for_lit(s, rn, imm);
154
+ if (rn == 13 && rd == 13) {
155
+ /* ADD SP, SP, imm or SUB SP, SP, imm */
156
+ store_sp_checked(s, tmp);
157
} else {
158
- tmp = load_reg(s, rn);
159
- if (insn & (1 << 23))
160
- tcg_gen_subi_i32(tmp, tmp, imm);
161
- else
162
- tcg_gen_addi_i32(tmp, tmp, imm);
163
- if (rn == 13 && rd == 13) {
164
- /* ADD SP, SP, imm or SUB SP, SP, imm */
165
- store_sp_checked(s, tmp);
166
- } else {
167
- store_reg(s, rd, tmp);
168
- }
169
+ store_reg(s, rd, tmp);
170
}
171
}
172
}
173
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
174
}
175
}
176
memidx = get_mem_index(s);
177
- if (rn == 15) {
178
- addr = tcg_temp_new_i32();
179
- /* PC relative. */
180
- /* s->pc has already been incremented by 4. */
181
- imm = s->pc & 0xfffffffc;
182
- if (insn & (1 << 23))
183
- imm += insn & 0xfff;
184
- else
185
- imm -= insn & 0xfff;
186
- tcg_gen_movi_i32(addr, imm);
187
+ imm = insn & 0xfff;
188
+ if (insn & (1 << 23)) {
189
+ /* PC relative or Positive offset. */
190
+ addr = add_reg_for_lit(s, rn, imm);
191
+ } else if (rn == 15) {
192
+ /* PC relative with negative offset. */
193
+ addr = add_reg_for_lit(s, rn, -imm);
194
} else {
195
addr = load_reg(s, rn);
196
- if (insn & (1 << 23)) {
197
- /* Positive offset. */
198
- imm = insn & 0xfff;
199
- tcg_gen_addi_i32(addr, addr, imm);
200
- } else {
201
- imm = insn & 0xff;
202
- switch ((insn >> 8) & 0xf) {
203
- case 0x0: /* Shifted Register. */
204
- shift = (insn >> 4) & 0xf;
205
- if (shift > 3) {
206
- tcg_temp_free_i32(addr);
207
- goto illegal_op;
208
- }
209
- tmp = load_reg(s, rm);
210
- if (shift)
211
- tcg_gen_shli_i32(tmp, tmp, shift);
212
- tcg_gen_add_i32(addr, addr, tmp);
213
- tcg_temp_free_i32(tmp);
214
- break;
215
- case 0xc: /* Negative offset. */
216
- tcg_gen_addi_i32(addr, addr, -imm);
217
- break;
218
- case 0xe: /* User privilege. */
219
- tcg_gen_addi_i32(addr, addr, imm);
220
- memidx = get_a32_user_mem_index(s);
221
- break;
222
- case 0x9: /* Post-decrement. */
223
- imm = -imm;
224
- /* Fall through. */
225
- case 0xb: /* Post-increment. */
226
- postinc = 1;
227
- writeback = 1;
228
- break;
229
- case 0xd: /* Pre-decrement. */
230
- imm = -imm;
231
- /* Fall through. */
232
- case 0xf: /* Pre-increment. */
233
- writeback = 1;
234
- break;
235
- default:
236
+ imm = insn & 0xff;
237
+ switch ((insn >> 8) & 0xf) {
238
+ case 0x0: /* Shifted Register. */
239
+ shift = (insn >> 4) & 0xf;
240
+ if (shift > 3) {
241
tcg_temp_free_i32(addr);
242
goto illegal_op;
243
}
244
+ tmp = load_reg(s, rm);
245
+ if (shift) {
246
+ tcg_gen_shli_i32(tmp, tmp, shift);
247
+ }
248
+ tcg_gen_add_i32(addr, addr, tmp);
249
+ tcg_temp_free_i32(tmp);
250
+ break;
251
+ case 0xc: /* Negative offset. */
252
+ tcg_gen_addi_i32(addr, addr, -imm);
253
+ break;
254
+ case 0xe: /* User privilege. */
255
+ tcg_gen_addi_i32(addr, addr, imm);
256
+ memidx = get_a32_user_mem_index(s);
257
+ break;
258
+ case 0x9: /* Post-decrement. */
259
+ imm = -imm;
260
+ /* Fall through. */
261
+ case 0xb: /* Post-increment. */
262
+ postinc = 1;
263
+ writeback = 1;
264
+ break;
265
+ case 0xd: /* Pre-decrement. */
266
+ imm = -imm;
267
+ /* Fall through. */
268
+ case 0xf: /* Pre-increment. */
269
+ writeback = 1;
270
+ break;
271
+ default:
272
+ tcg_temp_free_i32(addr);
273
+ goto illegal_op;
274
}
275
}
276
277
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
278
if (insn & (1 << 11)) {
279
rd = (insn >> 8) & 7;
280
/* load pc-relative. Bit 1 of PC is ignored. */
281
- val = read_pc(s) + ((insn & 0xff) * 4);
282
- val &= ~(uint32_t)2;
283
- addr = tcg_temp_new_i32();
284
- tcg_gen_movi_i32(addr, val);
285
+ addr = add_reg_for_lit(s, 15, (insn & 0xff) * 4);
286
tmp = tcg_temp_new_i32();
287
gen_aa32_ld32u_iss(s, tmp, addr, get_mem_index(s),
288
rd | ISSIs16Bit);
289
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
290
* - Add PC/SP (immediate)
291
*/
292
rd = (insn >> 8) & 7;
293
- if (insn & (1 << 11)) {
294
- /* SP */
295
- tmp = load_reg(s, 13);
296
- } else {
297
- /* PC. bit 1 is ignored. */
298
- tmp = tcg_temp_new_i32();
299
- tcg_gen_movi_i32(tmp, read_pc(s) & ~(uint32_t)2);
300
- }
301
val = (insn & 0xff) * 4;
302
- tcg_gen_addi_i32(tmp, tmp, val);
303
+ tmp = add_reg_for_lit(s, insn & (1 << 11) ? 13 : 15, val);
304
store_reg(s, rd, tmp);
305
break;
306
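
The new helper is easiest to see with concrete numbers. Below is a standalone illustration (plain C, not QEMU code; the addresses are invented) of the rule that add_reg_for_lit() centralises: a PC base is first aligned down to a word boundary, any other register is used as-is. Folding both cases into one helper is what lets the decoder drop the rn == 15 special cases seen in the hunks above.

    /*
     * Standalone illustration (not QEMU code) of the literal-address rule:
     * when the base register is the PC, the base is ALIGN(PC, 4) + ofs;
     * otherwise it is Rn + ofs.  "read_pc" stands in for the architectural
     * PC (instruction address + 4 or + 8, depending on the instruction set).
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t lit_base(uint32_t regval, int reg, uint32_t read_pc, int ofs)
    {
        if (reg == 15) {
            return (read_pc & ~3u) + ofs;   /* PC is force-aligned to 4 */
        }
        return regval + ofs;
    }

    int main(void)
    {
        /* Thumb LDR (literal) at 0x8002: architectural PC = 0x8006 */
        printf("0x%08x\n", lit_base(0, 15, 0x8006, 0x40));      /* 0x8044 */
        /* same offset applied to an ordinary register */
        printf("0x%08x\n", lit_base(0x2000, 3, 0x8006, 0x40));  /* 0x2040 */
        return 0;
    }
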
--
2.20.1

1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
3
The thumb bit has already been removed from s->pc, and is always even.
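
This relies only on the fact that the translator's s->pc starts at an even address and advances in steps of 2 or 4, so the "& ~1" masks were dead. A tiny standalone check (illustrative C only, not part of the patch; the addresses are arbitrary) of that invariant:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t pc = 0x00008000;           /* even start address */
        const int steps[] = { 4, 2, 2, 4 }; /* ARM/Thumb insn sizes */

        for (int i = 0; i < 4; i++) {
            pc += steps[i];
            assert((pc & ~1u) == pc);       /* so "& ~1" can be dropped */
        }
        return 0;
    }
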
4
Reviewed-by: Markus Armbruster <armbru@redhat.com>
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20190412165416.7977-11-philmd@redhat.com
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20190807045335.1361-6-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
include/hw/net/ne2000-isa.h | 6 ++++++
11
target/arm/translate.c | 10 +++++-----
10
1 file changed, 6 insertions(+)
12
1 file changed, 5 insertions(+), 5 deletions(-)
11
13
12
diff --git a/include/hw/net/ne2000-isa.h b/include/hw/net/ne2000-isa.h
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/net/ne2000-isa.h
16
--- a/target/arm/translate.c
15
+++ b/include/hw/net/ne2000-isa.h
17
+++ b/target/arm/translate.c
16
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, int offset, uint32_t syn)
17
* This work is licensed under the terms of the GNU GPL, version 2 or later.
19
/* Force a TB lookup after an instruction that changes the CPU state. */
18
* See the COPYING file in the top-level directory.
20
static inline void gen_lookup_tb(DisasContext *s)
19
*/
21
{
20
+
22
- tcg_gen_movi_i32(cpu_R[15], s->pc & ~1);
21
+#ifndef HW_NET_NE2K_ISA_H
23
+ tcg_gen_movi_i32(cpu_R[15], s->pc);
22
+#define HW_NET_NE2K_ISA_H
24
s->base.is_jmp = DISAS_EXIT;
23
+
24
#include "hw/hw.h"
25
#include "hw/qdev.h"
26
#include "hw/isa/isa.h"
27
@@ -XXX,XX +XXX,XX @@ static inline ISADevice *isa_ne2000_init(ISABus *bus, int base, int irq,
28
}
29
return d;
30
}
25
}
31
+
26
32
+#endif
27
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
28
* self-modifying code correctly and also to take
29
* any pending interrupts immediately.
30
*/
31
- gen_goto_tb(s, 0, s->pc & ~1);
32
+ gen_goto_tb(s, 0, s->pc);
33
return;
34
case 7: /* sb */
35
if ((insn & 0xf) || !dc_isar_feature(aa32_sb, s)) {
36
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
37
* for TCG; MB and end the TB instead.
38
*/
39
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
40
- gen_goto_tb(s, 0, s->pc & ~1);
41
+ gen_goto_tb(s, 0, s->pc);
42
return;
43
default:
44
goto illegal_op;
45
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
46
* and also to take any pending interrupts
47
* immediately.
48
*/
49
- gen_goto_tb(s, 0, s->pc & ~1);
50
+ gen_goto_tb(s, 0, s->pc);
51
break;
52
case 7: /* sb */
53
if ((insn & 0xf) || !dc_isar_feature(aa32_sb, s)) {
54
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
55
* for TCG; MB and end the TB instead.
56
*/
57
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
58
- gen_goto_tb(s, 0, s->pc & ~1);
59
+ gen_goto_tb(s, 0, s->pc);
60
break;
61
default:
62
goto illegal_op;
--
2.20.1

1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Markus Armbruster <armbru@redhat.com>
3
We must update s->base.pc_next when we return from the translate_insn
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
hook to the main translator loop. By incrementing s->base.pc_next
5
Message-id: 20190412165416.7977-10-philmd@redhat.com
5
immediately after reading the insn word, "pc_next" contains the address
6
of the next instruction throughout translation.
7
8
All remaining uses of s->pc are referencing the address of the next insn,
9
so this is now a simple global replacement. Remove the "s->pc" field.
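
Schematically, the ordering this establishes looks like the sketch below (toy C, not QEMU code: the context struct, the byte-array "fetch" and the fixed 4-byte step are placeholders for arm_ldl_code() and the Thumb handling):

    #include <stdint.h>
    #include <stdio.h>

    struct toy_ctx { uint32_t pc_curr, pc_next; };

    static void translate_one(struct toy_ctx *s, const uint8_t *code)
    {
        s->pc_curr = s->pc_next;           /* address of the insn being decoded */
        uint8_t insn = code[s->pc_next];   /* stand-in for arm_ldl_code() */
        s->pc_next += 4;                   /* bumped immediately after the read */

        /* from here on, pc_next already names the *next* instruction */
        printf("insn %02x at %u, next at %u\n",
               insn, (unsigned)s->pc_curr, (unsigned)s->pc_next);
    }

    int main(void)
    {
        uint8_t code[8] = { 0x01, 0, 0, 0, 0x02, 0, 0, 0 };
        struct toy_ctx s = { 0, 0 };

        translate_one(&s, code);
        translate_one(&s, code);
        return 0;
    }
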
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
14
Message-id: 20190807045335.1361-7-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
16
---
8
include/hw/devices.h | 3 ---
17
target/arm/translate.h | 1 -
9
include/hw/net/lan9118.h | 19 +++++++++++++++++++
18
target/arm/translate-a64.c | 51 +++++++++---------
10
hw/arm/kzm.c | 2 +-
19
target/arm/translate.c | 103 ++++++++++++++++++-------------------
11
hw/arm/mps2.c | 2 +-
20
3 files changed, 72 insertions(+), 83 deletions(-)
12
hw/arm/realview.c | 1 +
13
hw/arm/vexpress.c | 2 +-
14
hw/net/lan9118.c | 2 +-
15
7 files changed, 24 insertions(+), 7 deletions(-)
16
create mode 100644 include/hw/net/lan9118.h
17
21
18
diff --git a/include/hw/devices.h b/include/hw/devices.h
22
diff --git a/target/arm/translate.h b/target/arm/translate.h
19
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/devices.h
24
--- a/target/arm/translate.h
21
+++ b/include/hw/devices.h
25
+++ b/target/arm/translate.h
22
@@ -XXX,XX +XXX,XX @@
26
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
23
/* smc91c111.c */
27
DisasContextBase base;
24
void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
28
const ARMISARegisters *isar;
25
29
26
-/* lan9118.c */
30
- target_ulong pc;
27
-void lan9118_init(NICInfo *, uint32_t, qemu_irq);
31
/* The address of the current instruction being translated. */
32
target_ulong pc_curr;
33
target_ulong page_start;
34
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/translate-a64.c
37
+++ b/target/arm/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
39
40
static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
41
{
42
- gen_a64_set_pc_im(s->pc - offset);
43
+ gen_a64_set_pc_im(s->base.pc_next - offset);
44
gen_exception_internal(excp);
45
s->base.is_jmp = DISAS_NORETURN;
46
}
47
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
48
static void gen_exception_insn(DisasContext *s, int offset, int excp,
49
uint32_t syndrome, uint32_t target_el)
50
{
51
- gen_a64_set_pc_im(s->pc - offset);
52
+ gen_a64_set_pc_im(s->base.pc_next - offset);
53
gen_exception(excp, syndrome, target_el);
54
s->base.is_jmp = DISAS_NORETURN;
55
}
56
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, int offset,
57
{
58
TCGv_i32 tcg_syn;
59
60
- gen_a64_set_pc_im(s->pc - offset);
61
+ gen_a64_set_pc_im(s->base.pc_next - offset);
62
tcg_syn = tcg_const_i32(syndrome);
63
gen_helper_exception_bkpt_insn(cpu_env, tcg_syn);
64
tcg_temp_free_i32(tcg_syn);
65
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
66
67
if (insn & (1U << 31)) {
68
/* BL Branch with link */
69
- tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
70
+ tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
71
}
72
73
/* B Branch / BL Branch with link */
74
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
75
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
76
tcg_cmp, 0, label_match);
77
78
- gen_goto_tb(s, 0, s->pc);
79
+ gen_goto_tb(s, 0, s->base.pc_next);
80
gen_set_label(label_match);
81
gen_goto_tb(s, 1, addr);
82
}
83
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
84
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
85
tcg_cmp, 0, label_match);
86
tcg_temp_free_i64(tcg_cmp);
87
- gen_goto_tb(s, 0, s->pc);
88
+ gen_goto_tb(s, 0, s->base.pc_next);
89
gen_set_label(label_match);
90
gen_goto_tb(s, 1, addr);
91
}
92
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
93
/* genuinely conditional branches */
94
TCGLabel *label_match = gen_new_label();
95
arm_gen_test_cc(cond, label_match);
96
- gen_goto_tb(s, 0, s->pc);
97
+ gen_goto_tb(s, 0, s->base.pc_next);
98
gen_set_label(label_match);
99
gen_goto_tb(s, 1, addr);
100
} else {
101
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
102
* any pending interrupts immediately.
103
*/
104
reset_btype(s);
105
- gen_goto_tb(s, 0, s->pc);
106
+ gen_goto_tb(s, 0, s->base.pc_next);
107
return;
108
109
case 7: /* SB */
110
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
111
* MB and end the TB instead.
112
*/
113
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
114
- gen_goto_tb(s, 0, s->pc);
115
+ gen_goto_tb(s, 0, s->base.pc_next);
116
return;
117
118
default:
119
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
120
gen_a64_set_pc(s, dst);
121
/* BLR also needs to load return address */
122
if (opc == 1) {
123
- tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
124
+ tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
125
}
126
break;
127
128
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
129
gen_a64_set_pc(s, dst);
130
/* BLRAA also needs to load return address */
131
if (opc == 9) {
132
- tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
133
+ tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
134
}
135
break;
136
137
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
138
{
139
uint32_t insn;
140
141
- s->pc_curr = s->pc;
142
- insn = arm_ldl_code(env, s->pc, s->sctlr_b);
143
+ s->pc_curr = s->base.pc_next;
144
+ insn = arm_ldl_code(env, s->base.pc_next, s->sctlr_b);
145
s->insn = insn;
146
- s->pc += 4;
147
+ s->base.pc_next += 4;
148
149
s->fp_access_checked = false;
150
151
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
152
int bound, core_mmu_idx;
153
154
dc->isar = &arm_cpu->isar;
155
- dc->pc = dc->base.pc_first;
156
dc->condjmp = 0;
157
158
dc->aarch64 = 1;
159
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
160
{
161
DisasContext *dc = container_of(dcbase, DisasContext, base);
162
163
- tcg_gen_insn_start(dc->pc, 0, 0);
164
+ tcg_gen_insn_start(dc->base.pc_next, 0, 0);
165
dc->insn_start = tcg_last_op();
166
}
167
168
@@ -XXX,XX +XXX,XX @@ static bool aarch64_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cpu,
169
DisasContext *dc = container_of(dcbase, DisasContext, base);
170
171
if (bp->flags & BP_CPU) {
172
- gen_a64_set_pc_im(dc->pc);
173
+ gen_a64_set_pc_im(dc->base.pc_next);
174
gen_helper_check_breakpoints(cpu_env);
175
/* End the TB early; it likely won't be executed */
176
dc->base.is_jmp = DISAS_TOO_MANY;
177
@@ -XXX,XX +XXX,XX @@ static bool aarch64_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cpu,
178
to for it to be properly cleared -- thus we
179
increment the PC here so that the logic setting
180
tb->size below does the right thing. */
181
- dc->pc += 4;
182
+ dc->base.pc_next += 4;
183
dc->base.is_jmp = DISAS_NORETURN;
184
}
185
186
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
187
disas_a64_insn(env, dc);
188
}
189
190
- dc->base.pc_next = dc->pc;
191
translator_loop_temp_check(&dc->base);
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
195
*/
196
switch (dc->base.is_jmp) {
197
default:
198
- gen_a64_set_pc_im(dc->pc);
199
+ gen_a64_set_pc_im(dc->base.pc_next);
200
/* fall through */
201
case DISAS_EXIT:
202
case DISAS_JUMP:
203
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
204
switch (dc->base.is_jmp) {
205
case DISAS_NEXT:
206
case DISAS_TOO_MANY:
207
- gen_goto_tb(dc, 1, dc->pc);
208
+ gen_goto_tb(dc, 1, dc->base.pc_next);
209
break;
210
default:
211
case DISAS_UPDATE:
212
- gen_a64_set_pc_im(dc->pc);
213
+ gen_a64_set_pc_im(dc->base.pc_next);
214
/* fall through */
215
case DISAS_EXIT:
216
tcg_gen_exit_tb(NULL, 0);
217
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
218
case DISAS_SWI:
219
break;
220
case DISAS_WFE:
221
- gen_a64_set_pc_im(dc->pc);
222
+ gen_a64_set_pc_im(dc->base.pc_next);
223
gen_helper_wfe(cpu_env);
224
break;
225
case DISAS_YIELD:
226
- gen_a64_set_pc_im(dc->pc);
227
+ gen_a64_set_pc_im(dc->base.pc_next);
228
gen_helper_yield(cpu_env);
229
break;
230
case DISAS_WFI:
231
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
232
*/
233
TCGv_i32 tmp = tcg_const_i32(4);
234
235
- gen_a64_set_pc_im(dc->pc);
236
+ gen_a64_set_pc_im(dc->base.pc_next);
237
gen_helper_wfi(cpu_env, tmp);
238
tcg_temp_free_i32(tmp);
239
/* The helper doesn't necessarily throw an exception, but we
240
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
241
}
242
}
243
}
28
-
244
-
245
- /* Functions above can change dc->pc, so re-align db->pc_next */
246
- dc->base.pc_next = dc->pc;
247
}
248
249
static void aarch64_tr_disas_log(const DisasContextBase *dcbase,
250
diff --git a/target/arm/translate.c b/target/arm/translate.c
251
index XXXXXXX..XXXXXXX 100644
252
--- a/target/arm/translate.c
253
+++ b/target/arm/translate.c
254
@@ -XXX,XX +XXX,XX @@ static inline void gen_blxns(DisasContext *s, int rm)
255
* We do however need to set the PC, because the blxns helper reads it.
256
* The blxns helper may throw an exception.
257
*/
258
- gen_set_pc_im(s, s->pc);
259
+ gen_set_pc_im(s, s->base.pc_next);
260
gen_helper_v7m_blxns(cpu_env, var);
261
tcg_temp_free_i32(var);
262
s->base.is_jmp = DISAS_EXIT;
263
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
264
* for single stepping.)
265
*/
266
s->svc_imm = imm16;
267
- gen_set_pc_im(s, s->pc);
268
+ gen_set_pc_im(s, s->base.pc_next);
269
s->base.is_jmp = DISAS_HVC;
270
}
271
272
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
273
tmp = tcg_const_i32(syn_aa32_smc());
274
gen_helper_pre_smc(cpu_env, tmp);
275
tcg_temp_free_i32(tmp);
276
- gen_set_pc_im(s, s->pc);
277
+ gen_set_pc_im(s, s->base.pc_next);
278
s->base.is_jmp = DISAS_SMC;
279
}
280
281
static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
282
{
283
gen_set_condexec(s);
284
- gen_set_pc_im(s, s->pc - offset);
285
+ gen_set_pc_im(s, s->base.pc_next - offset);
286
gen_exception_internal(excp);
287
s->base.is_jmp = DISAS_NORETURN;
288
}
289
@@ -XXX,XX +XXX,XX @@ static void gen_exception_insn(DisasContext *s, int offset, int excp,
290
int syn, uint32_t target_el)
291
{
292
gen_set_condexec(s);
293
- gen_set_pc_im(s, s->pc - offset);
294
+ gen_set_pc_im(s, s->base.pc_next - offset);
295
gen_exception(excp, syn, target_el);
296
s->base.is_jmp = DISAS_NORETURN;
297
}
298
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, int offset, uint32_t syn)
299
TCGv_i32 tcg_syn;
300
301
gen_set_condexec(s);
302
- gen_set_pc_im(s, s->pc - offset);
303
+ gen_set_pc_im(s, s->base.pc_next - offset);
304
tcg_syn = tcg_const_i32(syn);
305
gen_helper_exception_bkpt_insn(cpu_env, tcg_syn);
306
tcg_temp_free_i32(tcg_syn);
307
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, int offset, uint32_t syn)
308
/* Force a TB lookup after an instruction that changes the CPU state. */
309
static inline void gen_lookup_tb(DisasContext *s)
310
{
311
- tcg_gen_movi_i32(cpu_R[15], s->pc);
312
+ tcg_gen_movi_i32(cpu_R[15], s->base.pc_next);
313
s->base.is_jmp = DISAS_EXIT;
314
}
315
316
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
317
{
318
#ifndef CONFIG_USER_ONLY
319
return (s->base.tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
320
- ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
321
+ ((s->base.pc_next - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
322
#else
323
return true;
29
#endif
324
#endif
30
diff --git a/include/hw/net/lan9118.h b/include/hw/net/lan9118.h
325
@@ -XXX,XX +XXX,XX @@ static void gen_nop_hint(DisasContext *s, int val)
31
new file mode 100644
326
*/
32
index XXXXXXX..XXXXXXX
327
case 1: /* yield */
33
--- /dev/null
328
if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
34
+++ b/include/hw/net/lan9118.h
329
- gen_set_pc_im(s, s->pc);
35
@@ -XXX,XX +XXX,XX @@
330
+ gen_set_pc_im(s, s->base.pc_next);
36
+/*
331
s->base.is_jmp = DISAS_YIELD;
37
+ * SMSC LAN9118 Ethernet interface emulation
332
}
38
+ *
333
break;
39
+ * Copyright (c) 2009 CodeSourcery, LLC.
334
case 3: /* wfi */
40
+ * Written by Paul Brook
335
- gen_set_pc_im(s, s->pc);
41
+ *
336
+ gen_set_pc_im(s, s->base.pc_next);
42
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
337
s->base.is_jmp = DISAS_WFI;
43
+ * See the COPYING file in the top-level directory.
338
break;
44
+ */
339
case 2: /* wfe */
45
+
340
if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
46
+#ifndef HW_NET_LAN9118_H
341
- gen_set_pc_im(s, s->pc);
47
+#define HW_NET_LAN9118_H
342
+ gen_set_pc_im(s, s->base.pc_next);
48
+
343
s->base.is_jmp = DISAS_WFE;
49
+#include "hw/irq.h"
344
}
50
+#include "net/net.h"
345
break;
51
+
346
@@ -XXX,XX +XXX,XX @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
52
+void lan9118_init(NICInfo *, uint32_t, qemu_irq);
347
if (isread) {
53
+
348
return 1;
54
+#endif
349
}
55
diff --git a/hw/arm/kzm.c b/hw/arm/kzm.c
350
- gen_set_pc_im(s, s->pc);
56
index XXXXXXX..XXXXXXX 100644
351
+ gen_set_pc_im(s, s->base.pc_next);
57
--- a/hw/arm/kzm.c
352
s->base.is_jmp = DISAS_WFI;
58
+++ b/hw/arm/kzm.c
353
return 0;
59
@@ -XXX,XX +XXX,XX @@
354
default:
60
#include "qemu/error-report.h"
355
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
61
#include "exec/address-spaces.h"
356
* self-modifying code correctly and also to take
62
#include "net/net.h"
357
* any pending interrupts immediately.
63
-#include "hw/devices.h"
358
*/
64
+#include "hw/net/lan9118.h"
359
- gen_goto_tb(s, 0, s->pc);
65
#include "hw/char/serial.h"
360
+ gen_goto_tb(s, 0, s->base.pc_next);
66
#include "sysemu/qtest.h"
361
return;
67
362
case 7: /* sb */
68
diff --git a/hw/arm/mps2.c b/hw/arm/mps2.c
363
if ((insn & 0xf) || !dc_isar_feature(aa32_sb, s)) {
69
index XXXXXXX..XXXXXXX 100644
364
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
70
--- a/hw/arm/mps2.c
365
* for TCG; MB and end the TB instead.
71
+++ b/hw/arm/mps2.c
366
*/
72
@@ -XXX,XX +XXX,XX @@
367
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
73
#include "hw/timer/cmsdk-apb-timer.h"
368
- gen_goto_tb(s, 0, s->pc);
74
#include "hw/timer/cmsdk-apb-dualtimer.h"
369
+ gen_goto_tb(s, 0, s->base.pc_next);
75
#include "hw/misc/mps2-scc.h"
370
return;
76
-#include "hw/devices.h"
371
default:
77
+#include "hw/net/lan9118.h"
372
goto illegal_op;
78
#include "net/net.h"
373
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
79
374
int32_t offset;
80
typedef enum MPS2FPGAType {
375
81
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
376
tmp = tcg_temp_new_i32();
82
index XXXXXXX..XXXXXXX 100644
377
- tcg_gen_movi_i32(tmp, s->pc);
83
--- a/hw/arm/realview.c
378
+ tcg_gen_movi_i32(tmp, s->base.pc_next);
84
+++ b/hw/arm/realview.c
379
store_reg(s, 14, tmp);
85
@@ -XXX,XX +XXX,XX @@
380
/* Sign-extend the 24-bit offset */
86
#include "hw/arm/arm.h"
381
offset = (((int32_t)insn) << 8) >> 8;
87
#include "hw/arm/primecell.h"
382
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
88
#include "hw/devices.h"
383
/* branch link/exchange thumb (blx) */
89
+#include "hw/net/lan9118.h"
384
tmp = load_reg(s, rm);
90
#include "hw/pci/pci.h"
385
tmp2 = tcg_temp_new_i32();
91
#include "net/net.h"
386
- tcg_gen_movi_i32(tmp2, s->pc);
92
#include "sysemu/sysemu.h"
387
+ tcg_gen_movi_i32(tmp2, s->base.pc_next);
93
diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
388
store_reg(s, 14, tmp2);
94
index XXXXXXX..XXXXXXX 100644
389
gen_bx(s, tmp);
95
--- a/hw/arm/vexpress.c
390
break;
96
+++ b/hw/arm/vexpress.c
391
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
97
@@ -XXX,XX +XXX,XX @@
392
/* branch (and link) */
98
#include "hw/sysbus.h"
393
if (insn & (1 << 24)) {
99
#include "hw/arm/arm.h"
394
tmp = tcg_temp_new_i32();
100
#include "hw/arm/primecell.h"
395
- tcg_gen_movi_i32(tmp, s->pc);
101
-#include "hw/devices.h"
396
+ tcg_gen_movi_i32(tmp, s->base.pc_next);
102
+#include "hw/net/lan9118.h"
397
store_reg(s, 14, tmp);
103
#include "hw/i2c/i2c.h"
398
}
104
#include "net/net.h"
399
offset = sextract32(insn << 2, 0, 26);
105
#include "sysemu/sysemu.h"
400
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
106
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
401
break;
107
index XXXXXXX..XXXXXXX 100644
402
case 0xf:
108
--- a/hw/net/lan9118.c
403
/* swi */
109
+++ b/hw/net/lan9118.c
404
- gen_set_pc_im(s, s->pc);
110
@@ -XXX,XX +XXX,XX @@
405
+ gen_set_pc_im(s, s->base.pc_next);
111
#include "hw/sysbus.h"
406
s->svc_imm = extract32(insn, 0, 24);
112
#include "net/net.h"
407
s->base.is_jmp = DISAS_SWI;
113
#include "net/eth.h"
408
break;
114
-#include "hw/devices.h"
409
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
115
+#include "hw/net/lan9118.h"
410
116
#include "sysemu/sysemu.h"
411
if (insn & (1 << 14)) {
117
#include "hw/ptimer.h"
412
/* Branch and link. */
118
#include "qemu/log.h"
413
- tcg_gen_movi_i32(cpu_R[14], s->pc | 1);
414
+ tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
415
}
416
417
offset += read_pc(s);
418
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
419
* and also to take any pending interrupts
420
* immediately.
421
*/
422
- gen_goto_tb(s, 0, s->pc);
423
+ gen_goto_tb(s, 0, s->base.pc_next);
424
break;
425
case 7: /* sb */
426
if ((insn & 0xf) || !dc_isar_feature(aa32_sb, s)) {
427
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
428
* for TCG; MB and end the TB instead.
429
*/
430
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
431
- gen_goto_tb(s, 0, s->pc);
432
+ gen_goto_tb(s, 0, s->base.pc_next);
433
break;
434
default:
435
goto illegal_op;
436
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
437
/* BLX/BX */
438
tmp = load_reg(s, rm);
439
if (link) {
440
- val = (uint32_t)s->pc | 1;
441
+ val = (uint32_t)s->base.pc_next | 1;
442
tmp2 = tcg_temp_new_i32();
443
tcg_gen_movi_i32(tmp2, val);
444
store_reg(s, 14, tmp2);
445
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
446
447
if (cond == 0xf) {
448
/* swi */
449
- gen_set_pc_im(s, s->pc);
450
+ gen_set_pc_im(s, s->base.pc_next);
451
s->svc_imm = extract32(insn, 0, 8);
452
s->base.is_jmp = DISAS_SWI;
453
break;
454
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
455
tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
456
457
tmp2 = tcg_temp_new_i32();
458
- tcg_gen_movi_i32(tmp2, s->pc | 1);
459
+ tcg_gen_movi_i32(tmp2, s->base.pc_next | 1);
460
store_reg(s, 14, tmp2);
461
gen_bx(s, tmp);
462
break;
463
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
464
tcg_gen_addi_i32(tmp, tmp, offset);
465
466
tmp2 = tcg_temp_new_i32();
467
- tcg_gen_movi_i32(tmp2, s->pc | 1);
468
+ tcg_gen_movi_i32(tmp2, s->base.pc_next | 1);
469
store_reg(s, 14, tmp2);
470
gen_bx(s, tmp);
471
} else {
472
@@ -XXX,XX +XXX,XX @@ undef:
473
474
static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
475
{
476
- /* Return true if the insn at dc->pc might cross a page boundary.
477
+ /* Return true if the insn at dc->base.pc_next might cross a page boundary.
478
* (False positives are OK, false negatives are not.)
479
* We know this is a Thumb insn, and our caller ensures we are
480
- * only called if dc->pc is less than 4 bytes from the page
481
+ * only called if dc->base.pc_next is less than 4 bytes from the page
482
* boundary, so we cross the page if the first 16 bits indicate
483
* that this is a 32 bit insn.
484
*/
485
- uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);
486
+ uint16_t insn = arm_lduw_code(env, s->base.pc_next, s->sctlr_b);
487
488
- return !thumb_insn_is_16bit(s, s->pc, insn);
489
+ return !thumb_insn_is_16bit(s, s->base.pc_next, insn);
490
}
491
492
static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
493
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
494
uint32_t condexec, core_mmu_idx;
495
496
dc->isar = &cpu->isar;
497
- dc->pc = dc->base.pc_first;
498
dc->condjmp = 0;
499
500
dc->aarch64 = 0;
501
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
502
{
503
DisasContext *dc = container_of(dcbase, DisasContext, base);
504
505
- tcg_gen_insn_start(dc->pc,
506
+ tcg_gen_insn_start(dc->base.pc_next,
507
(dc->condexec_cond << 4) | (dc->condexec_mask >> 1),
508
0);
509
dc->insn_start = tcg_last_op();
510
@@ -XXX,XX +XXX,XX @@ static bool arm_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cpu,
511
512
if (bp->flags & BP_CPU) {
513
gen_set_condexec(dc);
514
- gen_set_pc_im(dc, dc->pc);
515
+ gen_set_pc_im(dc, dc->base.pc_next);
516
gen_helper_check_breakpoints(cpu_env);
517
/* End the TB early; it's likely not going to be executed */
518
dc->base.is_jmp = DISAS_TOO_MANY;
519
@@ -XXX,XX +XXX,XX @@ static bool arm_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cpu,
520
tb->size below does the right thing. */
521
/* TODO: Advance PC by correct instruction length to
522
* avoid disassembler error messages */
523
- dc->pc += 2;
524
+ dc->base.pc_next += 2;
525
dc->base.is_jmp = DISAS_NORETURN;
526
}
527
528
@@ -XXX,XX +XXX,XX @@ static bool arm_pre_translate_insn(DisasContext *dc)
529
{
530
#ifdef CONFIG_USER_ONLY
531
/* Intercept jump to the magic kernel page. */
532
- if (dc->pc >= 0xffff0000) {
533
+ if (dc->base.pc_next >= 0xffff0000) {
534
/* We always get here via a jump, so know we are not in a
535
conditional execution block. */
536
gen_exception_internal(EXCP_KERNEL_TRAP);
537
@@ -XXX,XX +XXX,XX @@ static void arm_post_translate_insn(DisasContext *dc)
538
gen_set_label(dc->condlabel);
539
dc->condjmp = 0;
540
}
541
- dc->base.pc_next = dc->pc;
542
translator_loop_temp_check(&dc->base);
543
}
544
545
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
546
return;
547
}
548
549
- dc->pc_curr = dc->pc;
550
- insn = arm_ldl_code(env, dc->pc, dc->sctlr_b);
551
+ dc->pc_curr = dc->base.pc_next;
552
+ insn = arm_ldl_code(env, dc->base.pc_next, dc->sctlr_b);
553
dc->insn = insn;
554
- dc->pc += 4;
555
+ dc->base.pc_next += 4;
556
disas_arm_insn(dc, insn);
557
558
arm_post_translate_insn(dc);
559
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
560
return;
561
}
562
563
- dc->pc_curr = dc->pc;
564
- insn = arm_lduw_code(env, dc->pc, dc->sctlr_b);
565
- is_16bit = thumb_insn_is_16bit(dc, dc->pc, insn);
566
- dc->pc += 2;
567
+ dc->pc_curr = dc->base.pc_next;
568
+ insn = arm_lduw_code(env, dc->base.pc_next, dc->sctlr_b);
569
+ is_16bit = thumb_insn_is_16bit(dc, dc->base.pc_next, insn);
570
+ dc->base.pc_next += 2;
571
if (!is_16bit) {
572
- uint32_t insn2 = arm_lduw_code(env, dc->pc, dc->sctlr_b);
573
+ uint32_t insn2 = arm_lduw_code(env, dc->base.pc_next, dc->sctlr_b);
574
575
insn = insn << 16 | insn2;
576
- dc->pc += 2;
577
+ dc->base.pc_next += 2;
578
}
579
dc->insn = insn;
580
581
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
582
* but isn't very efficient).
583
*/
584
if (dc->base.is_jmp == DISAS_NEXT
585
- && (dc->pc - dc->page_start >= TARGET_PAGE_SIZE
586
- || (dc->pc - dc->page_start >= TARGET_PAGE_SIZE - 3
587
+ && (dc->base.pc_next - dc->page_start >= TARGET_PAGE_SIZE
588
+ || (dc->base.pc_next - dc->page_start >= TARGET_PAGE_SIZE - 3
589
&& insn_crosses_page(env, dc)))) {
590
dc->base.is_jmp = DISAS_TOO_MANY;
591
}
592
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
593
case DISAS_NEXT:
594
case DISAS_TOO_MANY:
595
case DISAS_UPDATE:
596
- gen_set_pc_im(dc, dc->pc);
597
+ gen_set_pc_im(dc, dc->base.pc_next);
598
/* fall through */
599
default:
600
/* FIXME: Single stepping a WFI insn will not halt the CPU. */
601
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
602
switch(dc->base.is_jmp) {
603
case DISAS_NEXT:
604
case DISAS_TOO_MANY:
605
- gen_goto_tb(dc, 1, dc->pc);
606
+ gen_goto_tb(dc, 1, dc->base.pc_next);
607
break;
608
case DISAS_JUMP:
609
gen_goto_ptr();
610
break;
611
case DISAS_UPDATE:
612
- gen_set_pc_im(dc, dc->pc);
613
+ gen_set_pc_im(dc, dc->base.pc_next);
614
/* fall through */
615
default:
616
/* indicate that the hash table must be used to find the next TB */
617
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
618
gen_set_label(dc->condlabel);
619
gen_set_condexec(dc);
620
if (unlikely(is_singlestepping(dc))) {
621
- gen_set_pc_im(dc, dc->pc);
622
+ gen_set_pc_im(dc, dc->base.pc_next);
623
gen_singlestep_exception(dc);
624
} else {
625
- gen_goto_tb(dc, 1, dc->pc);
626
+ gen_goto_tb(dc, 1, dc->base.pc_next);
627
}
628
}
629
-
630
- /* Functions above can change dc->pc, so re-align db->pc_next */
631
- dc->base.pc_next = dc->pc;
632
}
633
634
static void arm_tr_disas_log(const DisasContextBase *dcbase, CPUState *cpu)
--
2.20.1

1
Like AArch64, M-profile floating point has no FPEXC enable
1
From: Richard Henderson <richard.henderson@linaro.org>
2
bit to gate floating point; so always set the VFPEN TB flag.
2
3
3
The offset is variable depending on the instruction set, whereas
4
M-profile also has CPACR and NSACR similar to A-profile;
4
we have stored values for the current pc and the next pc. Passing
5
they behave slightly differently:
5
in the actual value is clearer in intent.
6
* the CPACR is banked between Secure and Non-Secure
6
7
* if the NSACR forces a trap then this is taken to
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
the Secure state, not the Non-Secure state
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Honour the CPACR and NSACR settings. The NSACR handling
10
Message-id: 20190807045335.1361-8-richard.henderson@linaro.org
11
requires us to borrow the exception.target_el field
12
(usually meaningless for M profile) to distinguish the
13
NOCP UsageFault taken to Secure state from the more
14
usual fault taken to the current security state.
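
Condensed into a single standalone decision function (an illustration, not the patch's code: the parameter names are invented, the banked CPACR field is assumed to be pre-selected, and the Security Extension is assumed present), the routing described above is:

    /*
     * Sketch of the M-profile FP trap routing.  cpacr_cp10 models the
     * 2-bit CP10 field of the banked CPACR, nsacr_cp10 the NSACR.CP10 bit.
     * The return value mimics what fp_exception_el() reports below:
     * 0 = no trap, 1 = NOCP UsageFault in the current security state,
     * 3 = NOCP UsageFault routed to Secure state.
     */
    int m_fp_trap_target(int cpacr_cp10, int is_priv,
                         int is_secure, int nsacr_cp10)
    {
        int cpacr_pass;

        switch (cpacr_cp10) {
        case 0:
        case 2:                    /* UNPREDICTABLE, treated like 0 */
            cpacr_pass = 0;
            break;
        case 1:
            cpacr_pass = is_priv;  /* privileged access only */
            break;
        default:
            cpacr_pass = 1;        /* full access */
            break;
        }

        if (!cpacr_pass) {
            return 1;              /* fault in the current security state */
        }
        if (!is_secure && !nsacr_cp10) {
            return 3;              /* NSACR forces the fault to Secure */
        }
        return 0;                  /* FP access permitted */
    }
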
15
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
19
---
12
---
20
target/arm/helper.c | 55 +++++++++++++++++++++++++++++++++++++++---
13
target/arm/translate-a64.c | 25 ++++++++++++++-----------
21
target/arm/translate.c | 10 ++++++--
14
target/arm/translate-vfp.inc.c | 6 +++---
22
2 files changed, 60 insertions(+), 5 deletions(-)
15
target/arm/translate.c | 31 ++++++++++++++++---------------
23
16
3 files changed, 33 insertions(+), 29 deletions(-)
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
18
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
25
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.c
20
--- a/target/arm/translate-a64.c
27
+++ b/target/arm/helper.c
21
+++ b/target/arm/translate-a64.c
28
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
22
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
29
return target_el;
23
s->base.is_jmp = DISAS_NORETURN;
30
}
24
}
31
25
32
+/*
26
-static void gen_exception_insn(DisasContext *s, int offset, int excp,
33
+ * Return true if the v7M CPACR permits access to the FPU for the specified
27
+static void gen_exception_insn(DisasContext *s, uint64_t pc, int excp,
34
+ * security state and privilege level.
28
uint32_t syndrome, uint32_t target_el)
35
+ */
29
{
36
+static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
30
- gen_a64_set_pc_im(s->base.pc_next - offset);
37
+{
31
+ gen_a64_set_pc_im(pc);
38
+ switch (extract32(env->v7m.cpacr[is_secure], 20, 2)) {
32
gen_exception(excp, syndrome, target_el);
39
+ case 0:
33
s->base.is_jmp = DISAS_NORETURN;
40
+ case 2: /* UNPREDICTABLE: we treat like 0 */
34
}
41
+ return false;
35
@@ -XXX,XX +XXX,XX @@ static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
42
+ case 1:
36
void unallocated_encoding(DisasContext *s)
43
+ return is_priv;
37
{
44
+ case 3:
38
/* Unallocated and reserved encodings are uncategorized */
45
+ return true;
39
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
46
+ default:
40
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
47
+ g_assert_not_reached();
41
default_exception_el(s));
48
+ }
42
}
49
+}
43
50
+
44
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
51
static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
45
return true;
52
ARMMMUIdx mmu_idx, bool ignfault)
46
}
53
{
47
54
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
48
- gen_exception_insn(s, 4, EXCP_UDEF, syn_fp_access_trap(1, 0xe, false),
55
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
49
- s->fp_excp_el);
56
break;
50
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
57
case EXCP_NOCP:
51
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
58
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
52
return false;
59
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
53
}
60
+ {
54
61
+ /*
55
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
62
+ * NOCP might be directed to something other than the current
56
bool sve_access_check(DisasContext *s)
63
+ * security state if this fault is because of NSACR; we indicate
57
{
64
+ * the target security state using exception.target_el.
58
if (s->sve_excp_el) {
65
+ */
59
- gen_exception_insn(s, 4, EXCP_UDEF, syn_sve_access_trap(),
66
+ int target_secstate;
60
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_sve_access_trap(),
67
+
61
s->sve_excp_el);
68
+ if (env->exception.target_el == 3) {
62
return false;
69
+ target_secstate = M_REG_S;
63
}
70
+ } else {
64
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
71
+ target_secstate = env->v7m.secure;
65
switch (op2_ll) {
72
+ }
66
case 1: /* SVC */
73
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
67
gen_ss_advance(s);
74
+ env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
68
- gen_exception_insn(s, 0, EXCP_SWI, syn_aa64_svc(imm16),
75
break;
69
- default_exception_el(s));
76
+ }
70
+ gen_exception_insn(s, s->base.pc_next, EXCP_SWI,
77
case EXCP_INVSTATE:
71
+ syn_aa64_svc(imm16), default_exception_el(s));
78
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
72
break;
79
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
73
case 2: /* HVC */
80
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
74
if (s->current_el == 0) {
81
return 0;
75
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
82
}
76
gen_a64_set_pc_im(s->pc_curr);
83
77
gen_helper_pre_hvc(cpu_env);
84
+ if (arm_feature(env, ARM_FEATURE_M)) {
78
gen_ss_advance(s);
85
+ /* CPACR can cause a NOCP UsageFault taken to current security state */
79
- gen_exception_insn(s, 0, EXCP_HVC, syn_aa64_hvc(imm16), 2);
86
+ if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) {
80
+ gen_exception_insn(s, s->base.pc_next, EXCP_HVC,
87
+ return 1;
81
+ syn_aa64_hvc(imm16), 2);
88
+ }
82
break;
89
+
83
case 3: /* SMC */
90
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure) {
84
if (s->current_el == 0) {
91
+ if (!extract32(env->v7m.nsacr, 10, 1)) {
85
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
92
+ /* FP insns cause a NOCP UsageFault taken to Secure */
86
gen_helper_pre_smc(cpu_env, tmp);
93
+ return 3;
87
tcg_temp_free_i32(tmp);
94
+ }
88
gen_ss_advance(s);
95
+ }
89
- gen_exception_insn(s, 0, EXCP_SMC, syn_aa64_smc(imm16), 3);
96
+
90
+ gen_exception_insn(s, s->base.pc_next, EXCP_SMC,
97
+ return 0;
91
+ syn_aa64_smc(imm16), 3);
98
+ }
92
break;
99
+
93
default:
100
/* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
94
unallocated_encoding(s);
101
* 0, 2 : trap EL0 and EL1/PL1 accesses
95
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
102
* 1 : trap only EL0 accesses
96
if (s->btype != 0
103
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
97
&& s->guarded_page
104
flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
98
&& !btype_destination_ok(insn, s->bt, s->btype)) {
105
flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
99
- gen_exception_insn(s, 4, EXCP_UDEF, syn_btitrap(s->btype),
106
if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)
100
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
107
- || arm_el_is_aa64(env, 1)) {
101
+ syn_btitrap(s->btype),
108
+ || arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
102
default_exception_el(s));
109
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
103
return;
104
}
105
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/translate-vfp.inc.c
108
+++ b/target/arm/translate-vfp.inc.c
109
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
110
{
111
if (s->fp_excp_el) {
112
if (arm_dc_feature(s, ARM_FEATURE_M)) {
113
- gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
114
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized(),
115
s->fp_excp_el);
116
} else {
117
- gen_exception_insn(s, 4, EXCP_UDEF,
118
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
119
syn_fp_access_trap(1, 0xe, false),
120
s->fp_excp_el);
110
}
121
}
111
flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
122
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
123
124
if (!s->vfp_enabled && !ignore_vfp_enabled) {
125
assert(!arm_dc_feature(s, ARM_FEATURE_M));
126
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
127
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
128
default_exception_el(s));
129
return false;
130
}
112
diff --git a/target/arm/translate.c b/target/arm/translate.c
131
diff --git a/target/arm/translate.c b/target/arm/translate.c
113
index XXXXXXX..XXXXXXX 100644
132
index XXXXXXX..XXXXXXX 100644
114
--- a/target/arm/translate.c
133
--- a/target/arm/translate.c
115
+++ b/target/arm/translate.c
134
+++ b/target/arm/translate.c
116
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
135
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
136
s->base.is_jmp = DISAS_NORETURN;
137
}
138
139
-static void gen_exception_insn(DisasContext *s, int offset, int excp,
140
+static void gen_exception_insn(DisasContext *s, uint32_t pc, int excp,
141
int syn, uint32_t target_el)
142
{
143
gen_set_condexec(s);
144
- gen_set_pc_im(s, s->base.pc_next - offset);
145
+ gen_set_pc_im(s, pc);
146
gen_exception(excp, syn, target_el);
147
s->base.is_jmp = DISAS_NORETURN;
148
}
149
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
150
return;
151
}
152
153
- gen_exception_insn(s, s->thumb ? 2 : 4, EXCP_UDEF, syn_uncategorized(),
154
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
155
default_exception_el(s));
156
}
157
158
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
159
160
undef:
161
/* If we get here then some access check did not pass */
162
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(), exc_target);
163
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
164
+ syn_uncategorized(), exc_target);
165
return false;
166
}
167
168
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
117
* for attempts to execute invalid vfp/neon encodings with FP disabled.
169
* for attempts to execute invalid vfp/neon encodings with FP disabled.
118
*/
170
*/
119
if (s->fp_excp_el) {
171
if (s->fp_excp_el) {
120
- gen_exception_insn(s, 4, EXCP_UDEF,
172
- gen_exception_insn(s, 4, EXCP_UDEF,
121
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
173
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
122
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
174
syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
123
+ gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
175
return 0;
124
+ s->fp_excp_el);
176
}
125
+ } else {
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
126
+ gen_exception_insn(s, 4, EXCP_UDEF,
178
* for attempts to execute invalid vfp/neon encodings with FP disabled.
127
+ syn_fp_access_trap(1, 0xe, false),
179
*/
128
+ s->fp_excp_el);
180
if (s->fp_excp_el) {
129
+ }
181
- gen_exception_insn(s, 4, EXCP_UDEF,
130
return 0;
182
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
131
}
183
syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
184
return 0;
185
}
186
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
187
}
188
189
if (s->fp_excp_el) {
190
- gen_exception_insn(s, 4, EXCP_UDEF,
191
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
192
syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
193
return 0;
194
}
195
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
196
off_rm = vfp_reg_offset(0, rm);
197
}
198
if (s->fp_excp_el) {
199
- gen_exception_insn(s, 4, EXCP_UDEF,
200
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
201
syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
202
return 0;
203
}
204
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
205
* For the UNPREDICTABLE cases we choose to UNDEF.
206
*/
207
if (s->current_el == 1 && !s->ns && mode == ARM_CPU_MODE_MON) {
208
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(), 3);
209
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(), 3);
210
return;
211
}
212
213
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
214
}
215
216
if (undef) {
217
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
218
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
219
default_exception_el(s));
220
return;
221
}
222
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
223
* UsageFault exception.
224
*/
225
if (arm_dc_feature(s, ARM_FEATURE_M)) {
226
- gen_exception_insn(s, 4, EXCP_INVSTATE, syn_uncategorized(),
227
+ gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized(),
228
default_exception_el(s));
229
return;
230
}
231
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
232
break;
233
default:
234
illegal_op:
235
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
236
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
237
default_exception_el(s));
238
break;
239
}
240
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
241
}
242
243
/* All other insns: NOCP */
244
- gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
245
+ gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized(),
246
default_exception_el(s));
247
break;
248
}
249
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
250
}
251
return;
252
illegal_op:
253
- gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
254
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
255
default_exception_el(s));
256
}
257
258
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
259
return;
260
illegal_op:
261
undef:
262
- gen_exception_insn(s, 2, EXCP_UDEF, syn_uncategorized(),
263
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
264
default_exception_el(s));
265
}
132
266
--
2.20.1

1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Since uWireSlave is only used in this new header, there is no
3
The offset is variable depending on the instruction set.
4
need to expose it via "qemu/typedefs.h".
4
Passing in the actual value is clearer in intent.
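
Condensed, the interface change amounts to the following (a digest of the hunks below, not new code):

    /* before: the helper works back from pc_next using a byte offset */
    gen_exception_internal_insn(s, 0, EXCP_SEMIHOST);
    gen_exception_internal_insn(dc, 0, EXCP_DEBUG);

    /* after: the caller passes the pc value it actually means */
    gen_exception_internal_insn(s, s->base.pc_next, EXCP_SEMIHOST);
    gen_exception_internal_insn(dc, dc->base.pc_next, EXCP_DEBUG);
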
5
5
6
Reviewed-by: Markus Armbruster <armbru@redhat.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20190412165416.7977-9-philmd@redhat.com
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190807045335.1361-9-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
include/hw/arm/omap.h | 6 +-----
12
target/arm/translate-a64.c | 8 ++++----
12
include/hw/devices.h | 15 ---------------
13
target/arm/translate.c | 8 ++++----
13
include/hw/input/tsc2xxx.h | 36 ++++++++++++++++++++++++++++++++++++
14
2 files changed, 8 insertions(+), 8 deletions(-)
14
include/qemu/typedefs.h | 1 -
15
hw/arm/nseries.c | 2 +-
16
hw/arm/palm.c | 2 +-
17
hw/input/tsc2005.c | 2 +-
18
hw/input/tsc210x.c | 4 ++--
19
MAINTAINERS | 2 ++
20
9 files changed, 44 insertions(+), 26 deletions(-)
21
create mode 100644 include/hw/input/tsc2xxx.h
22
15
23
diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
24
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/omap.h
18
--- a/target/arm/translate-a64.c
26
+++ b/include/hw/arm/omap.h
19
+++ b/target/arm/translate-a64.c
27
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
28
#include "exec/memory.h"
21
tcg_temp_free_i32(tcg_excp);
29
# define hw_omap_h        "omap.h"
22
}
30
#include "hw/irq.h"
23
31
+#include "hw/input/tsc2xxx.h"
24
-static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
32
#include "target/arm/cpu-qom.h"
25
+static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
33
#include "qemu/log.h"
26
{
34
27
- gen_a64_set_pc_im(s->base.pc_next - offset);
35
@@ -XXX,XX +XXX,XX @@ qemu_irq *omap_mpuio_in_get(struct omap_mpuio_s *s);
28
+ gen_a64_set_pc_im(pc);
36
void omap_mpuio_out_set(struct omap_mpuio_s *s, int line, qemu_irq handler);
29
gen_exception_internal(excp);
37
void omap_mpuio_key(struct omap_mpuio_s *s, int row, int col, int down);
30
s->base.is_jmp = DISAS_NORETURN;
38
31
}
39
-struct uWireSlave {
32
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
40
- uint16_t (*receive)(void *opaque);
33
break;
41
- void (*send)(void *opaque, uint16_t data);
34
}
42
- void *opaque;
35
#endif
43
-};
36
- gen_exception_internal_insn(s, 0, EXCP_SEMIHOST);
44
struct omap_uwire_s;
37
+ gen_exception_internal_insn(s, s->base.pc_next, EXCP_SEMIHOST);
45
void omap_uwire_attach(struct omap_uwire_s *s,
38
} else {
46
uWireSlave *slave, int chipselect);
39
unsupported_encoding(s, insn);
47
diff --git a/include/hw/devices.h b/include/hw/devices.h
40
}
41
@@ -XXX,XX +XXX,XX @@ static bool aarch64_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cpu,
42
/* End the TB early; it likely won't be executed */
43
dc->base.is_jmp = DISAS_TOO_MANY;
44
} else {
45
- gen_exception_internal_insn(dc, 0, EXCP_DEBUG);
46
+ gen_exception_internal_insn(dc, dc->base.pc_next, EXCP_DEBUG);
47
/* The address covered by the breakpoint must be
48
included in [tb->pc, tb->pc + tb->size) in order
49
to for it to be properly cleared -- thus we
50
diff --git a/target/arm/translate.c b/target/arm/translate.c
48
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
49
--- a/include/hw/devices.h
52
--- a/target/arm/translate.c
50
+++ b/include/hw/devices.h
53
+++ b/target/arm/translate.c
51
@@ -XXX,XX +XXX,XX @@
54
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
52
/* Devices that have nowhere better to go. */
55
s->base.is_jmp = DISAS_SMC;
53
56
}
54
#include "hw/hw.h"
57
55
-#include "ui/console.h"
58
-static void gen_exception_internal_insn(DisasContext *s, int offset, int excp)
56
59
+static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
57
/* smc91c111.c */
60
{
58
void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
61
gen_set_condexec(s);
59
@@ -XXX,XX +XXX,XX @@ void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
62
- gen_set_pc_im(s, s->base.pc_next - offset);
60
/* lan9118.c */
63
+ gen_set_pc_im(s, pc);
61
void lan9118_init(NICInfo *, uint32_t, qemu_irq);
64
gen_exception_internal(excp);
62
65
s->base.is_jmp = DISAS_NORETURN;
63
-/* tsc210x.c */
66
}
64
-uWireSlave *tsc2102_init(qemu_irq pint);
67
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
65
-uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
68
s->current_el != 0 &&
66
-I2SCodec *tsc210x_codec(uWireSlave *chip);
67
-uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
68
-void tsc210x_set_transform(uWireSlave *chip,
69
- MouseTransformInfo *info);
70
-void tsc210x_key_event(uWireSlave *chip, int key, int down);
71
-
72
-/* tsc2005.c */
73
-void *tsc2005_init(qemu_irq pintdav);
74
-uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
75
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
76
-
77
#endif
69
#endif
78
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
70
(imm == (s->thumb ? 0x3c : 0xf000))) {
79
new file mode 100644
71
- gen_exception_internal_insn(s, 0, EXCP_SEMIHOST);
80
index XXXXXXX..XXXXXXX
72
+ gen_exception_internal_insn(s, s->base.pc_next, EXCP_SEMIHOST);
81
--- /dev/null
73
return;
82
+++ b/include/hw/input/tsc2xxx.h
74
}
83
@@ -XXX,XX +XXX,XX @@
75
84
+/*
76
@@ -XXX,XX +XXX,XX @@ static bool arm_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cpu,
85
+ * TI touchscreen controller
77
/* End the TB early; it's likely not going to be executed */
86
+ *
78
dc->base.is_jmp = DISAS_TOO_MANY;
87
+ * Copyright (c) 2006 Andrzej Zaborowski
79
} else {
88
+ * Copyright (C) 2008 Nokia Corporation
80
- gen_exception_internal_insn(dc, 0, EXCP_DEBUG);
89
+ *
81
+ gen_exception_internal_insn(dc, dc->base.pc_next, EXCP_DEBUG);
90
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
82
/* The address covered by the breakpoint must be
91
+ * See the COPYING file in the top-level directory.
83
included in [tb->pc, tb->pc + tb->size) in order
92
+ */
84
to for it to be properly cleared -- thus we
93
+
94
+#ifndef HW_INPUT_TSC2XXX_H
95
+#define HW_INPUT_TSC2XXX_H
96
+
97
+#include "hw/irq.h"
98
+#include "ui/console.h"
99
+
100
+typedef struct uWireSlave {
101
+ uint16_t (*receive)(void *opaque);
102
+ void (*send)(void *opaque, uint16_t data);
103
+ void *opaque;
104
+} uWireSlave;
105
+
106
+/* tsc210x.c */
107
+uWireSlave *tsc2102_init(qemu_irq pint);
108
+uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
109
+I2SCodec *tsc210x_codec(uWireSlave *chip);
110
+uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
111
+void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
112
+void tsc210x_key_event(uWireSlave *chip, int key, int down);
113
+
114
+/* tsc2005.c */
115
+void *tsc2005_init(qemu_irq pintdav);
116
+uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
117
+void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
118
+
119
+#endif
120
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
121
index XXXXXXX..XXXXXXX 100644
122
--- a/include/qemu/typedefs.h
123
+++ b/include/qemu/typedefs.h
124
@@ -XXX,XX +XXX,XX @@ typedef struct RAMBlock RAMBlock;
125
typedef struct Range Range;
126
typedef struct SHPCDevice SHPCDevice;
127
typedef struct SSIBus SSIBus;
128
-typedef struct uWireSlave uWireSlave;
129
typedef struct VirtIODevice VirtIODevice;
130
typedef struct Visitor Visitor;
131
typedef void SaveStateHandler(QEMUFile *f, void *opaque);
132
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/hw/arm/nseries.c
135
+++ b/hw/arm/nseries.c
136
@@ -XXX,XX +XXX,XX @@
137
#include "ui/console.h"
138
#include "hw/boards.h"
139
#include "hw/i2c/i2c.h"
140
-#include "hw/devices.h"
141
#include "hw/display/blizzard.h"
142
+#include "hw/input/tsc2xxx.h"
143
#include "hw/misc/cbus.h"
144
#include "hw/misc/tmp105.h"
145
#include "hw/block/flash.h"
146
diff --git a/hw/arm/palm.c b/hw/arm/palm.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/hw/arm/palm.c
149
+++ b/hw/arm/palm.c
150
@@ -XXX,XX +XXX,XX @@
151
#include "hw/arm/omap.h"
152
#include "hw/boards.h"
153
#include "hw/arm/arm.h"
154
-#include "hw/devices.h"
155
+#include "hw/input/tsc2xxx.h"
156
#include "hw/loader.h"
157
#include "exec/address-spaces.h"
158
#include "cpu.h"
159
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/hw/input/tsc2005.c
162
+++ b/hw/input/tsc2005.c
163
@@ -XXX,XX +XXX,XX @@
164
#include "hw/hw.h"
165
#include "qemu/timer.h"
166
#include "ui/console.h"
167
-#include "hw/devices.h"
168
+#include "hw/input/tsc2xxx.h"
169
#include "trace.h"
170
171
#define TSC_CUT_RESOLUTION(value, p)    ((value) >> (16 - (p ? 12 : 10)))
172
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
173
index XXXXXXX..XXXXXXX 100644
174
--- a/hw/input/tsc210x.c
175
+++ b/hw/input/tsc210x.c
176
@@ -XXX,XX +XXX,XX @@
177
#include "audio/audio.h"
178
#include "qemu/timer.h"
179
#include "ui/console.h"
180
-#include "hw/arm/omap.h"    /* For I2SCodec and uWireSlave */
181
-#include "hw/devices.h"
182
+#include "hw/arm/omap.h" /* For I2SCodec */
183
+#include "hw/input/tsc2xxx.h"
184
185
#define TSC_DATA_REGISTERS_PAGE        0x0
186
#define TSC_CONTROL_REGISTERS_PAGE    0x1
187
diff --git a/MAINTAINERS b/MAINTAINERS
188
index XXXXXXX..XXXXXXX 100644
189
--- a/MAINTAINERS
190
+++ b/MAINTAINERS
191
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
192
F: hw/misc/cbus.c
193
F: hw/timer/twl92230.c
194
F: include/hw/display/blizzard.h
195
+F: include/hw/input/tsc2xxx.h
196
F: include/hw/misc/cbus.h
197
198
Palm
199
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
200
S: Odd Fixes
201
F: hw/arm/palm.c
202
F: hw/input/tsc210x.c
203
+F: include/hw/input/tsc2xxx.h
204
205
Raspberry Pi
206
M: Peter Maydell <peter.maydell@linaro.org>
207
--
85
--
208
2.20.1
86
2.20.1
209
87
210
88
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
3
Unlike the other more generic gen_exception{,_internal}_insn
4
Reviewed-by: Markus Armbruster <armbru@redhat.com>
4
interfaces, breakpoints always refer to the current instruction.
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
6
Message-id: 20190412165416.7977-7-philmd@redhat.com
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190807045335.1361-10-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
11
---
9
include/hw/devices.h | 14 --------------
12
target/arm/translate-a64.c | 7 +++----
10
include/hw/misc/cbus.h | 32 ++++++++++++++++++++++++++++++++
13
target/arm/translate.c | 8 ++++----
11
hw/arm/nseries.c | 1 +
14
2 files changed, 7 insertions(+), 8 deletions(-)
12
hw/misc/cbus.c | 2 +-
13
MAINTAINERS | 1 +
14
5 files changed, 35 insertions(+), 15 deletions(-)
15
create mode 100644 include/hw/misc/cbus.h
16
15
17
diff --git a/include/hw/devices.h b/include/hw/devices.h
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/devices.h
18
--- a/target/arm/translate-a64.c
20
+++ b/include/hw/devices.h
19
+++ b/target/arm/translate-a64.c
21
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
20
@@ -XXX,XX +XXX,XX @@ static void gen_exception_insn(DisasContext *s, uint64_t pc, int excp,
22
/* stellaris_input.c */
21
s->base.is_jmp = DISAS_NORETURN;
23
void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
22
}
24
23
25
-/* cbus.c */
24
-static void gen_exception_bkpt_insn(DisasContext *s, int offset,
26
-typedef struct {
25
- uint32_t syndrome)
27
- qemu_irq clk;
26
+static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syndrome)
28
- qemu_irq dat;
27
{
29
- qemu_irq sel;
28
TCGv_i32 tcg_syn;
30
-} CBus;
29
31
-CBus *cbus_init(qemu_irq dat_out);
30
- gen_a64_set_pc_im(s->base.pc_next - offset);
32
-void cbus_attach(CBus *bus, void *slave_opaque);
31
+ gen_a64_set_pc_im(s->pc_curr);
33
-
32
tcg_syn = tcg_const_i32(syndrome);
34
-void *retu_init(qemu_irq irq, int vilma);
33
gen_helper_exception_bkpt_insn(cpu_env, tcg_syn);
35
-void *tahvo_init(qemu_irq irq, int betty);
34
tcg_temp_free_i32(tcg_syn);
36
-
35
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
37
-void retu_key_event(void *retu, int state);
36
break;
38
-
37
}
39
#endif
38
/* BRK */
40
diff --git a/include/hw/misc/cbus.h b/include/hw/misc/cbus.h
39
- gen_exception_bkpt_insn(s, 4, syn_aa64_bkpt(imm16));
41
new file mode 100644
40
+ gen_exception_bkpt_insn(s, syn_aa64_bkpt(imm16));
42
index XXXXXXX..XXXXXXX
41
break;
43
--- /dev/null
42
case 2:
44
+++ b/include/hw/misc/cbus.h
43
if (op2_ll != 0) {
45
@@ -XXX,XX +XXX,XX @@
44
diff --git a/target/arm/translate.c b/target/arm/translate.c
46
+/*
47
+ * CBUS three-pin bus and the Retu / Betty / Tahvo / Vilma / Avilma /
48
+ * Hinku / Vinku / Ahne / Pihi chips used in various Nokia platforms.
49
+ * Based on reverse-engineering of a linux driver.
50
+ *
51
+ * Copyright (C) 2008 Nokia Corporation
52
+ * Written by Andrzej Zaborowski
53
+ *
54
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
55
+ * See the COPYING file in the top-level directory.
56
+ */
57
+
58
+#ifndef HW_MISC_CBUS_H
59
+#define HW_MISC_CBUS_H
60
+
61
+#include "hw/irq.h"
62
+
63
+typedef struct {
64
+ qemu_irq clk;
65
+ qemu_irq dat;
66
+ qemu_irq sel;
67
+} CBus;
68
+
69
+CBus *cbus_init(qemu_irq dat_out);
70
+void cbus_attach(CBus *bus, void *slave_opaque);
71
+
72
+void *retu_init(qemu_irq irq, int vilma);
73
+void *tahvo_init(qemu_irq irq, int betty);
74
+
75
+void retu_key_event(void *retu, int state);
76
+
77
+#endif
78
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
79
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
80
--- a/hw/arm/nseries.c
46
--- a/target/arm/translate.c
81
+++ b/hw/arm/nseries.c
47
+++ b/target/arm/translate.c
82
@@ -XXX,XX +XXX,XX @@
48
@@ -XXX,XX +XXX,XX @@ static void gen_exception_insn(DisasContext *s, uint32_t pc, int excp,
83
#include "hw/i2c/i2c.h"
49
s->base.is_jmp = DISAS_NORETURN;
84
#include "hw/devices.h"
50
}
85
#include "hw/display/blizzard.h"
51
86
+#include "hw/misc/cbus.h"
52
-static void gen_exception_bkpt_insn(DisasContext *s, int offset, uint32_t syn)
87
#include "hw/misc/tmp105.h"
53
+static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
88
#include "hw/block/flash.h"
54
{
89
#include "hw/hw.h"
55
TCGv_i32 tcg_syn;
90
diff --git a/hw/misc/cbus.c b/hw/misc/cbus.c
56
91
index XXXXXXX..XXXXXXX 100644
57
gen_set_condexec(s);
92
--- a/hw/misc/cbus.c
58
- gen_set_pc_im(s, s->base.pc_next - offset);
93
+++ b/hw/misc/cbus.c
59
+ gen_set_pc_im(s, s->pc_curr);
94
@@ -XXX,XX +XXX,XX @@
60
tcg_syn = tcg_const_i32(syn);
95
#include "qemu/osdep.h"
61
gen_helper_exception_bkpt_insn(cpu_env, tcg_syn);
96
#include "hw/hw.h"
62
tcg_temp_free_i32(tcg_syn);
97
#include "hw/irq.h"
63
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
98
-#include "hw/devices.h"
64
case 1:
99
+#include "hw/misc/cbus.h"
65
/* bkpt */
100
#include "sysemu/sysemu.h"
66
ARCH(5);
101
67
- gen_exception_bkpt_insn(s, 4, syn_aa32_bkpt(imm16, false));
102
//#define DEBUG
68
+ gen_exception_bkpt_insn(s, syn_aa32_bkpt(imm16, false));
103
diff --git a/MAINTAINERS b/MAINTAINERS
69
break;
104
index XXXXXXX..XXXXXXX 100644
70
case 2:
105
--- a/MAINTAINERS
71
/* Hypervisor call (v7) */
106
+++ b/MAINTAINERS
72
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
107
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
73
{
108
F: hw/misc/cbus.c
74
int imm8 = extract32(insn, 0, 8);
109
F: hw/timer/twl92230.c
75
ARCH(5);
110
F: include/hw/display/blizzard.h
76
- gen_exception_bkpt_insn(s, 2, syn_aa32_bkpt(imm8, true));
111
+F: include/hw/misc/cbus.h
77
+ gen_exception_bkpt_insn(s, syn_aa32_bkpt(imm8, true));
112
78
break;
113
Palm
79
}
114
M: Andrzej Zaborowski <balrogg@gmail.com>
80
115
--
81
--
116
2.20.1
82
2.20.1
117
83
118
84
diff view generated by jsdifflib
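A side note on the pc handling in the breakpoint hunks above: the point of
passing s->pc_curr is that the address of the instruction being translated is
recorded once at decode time and handed to the exception helper as an absolute
value, instead of every call site re-deriving it as "pc_next minus instruction
length". A minimal standalone sketch of that shape (toy names, not QEMU code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Toy translation context: pc_curr is the instruction being decoded,
 * pc_next the address of the one after it. */
typedef struct {
    uint32_t pc_curr;
    uint32_t pc_next;
} Ctx;

/* The helper takes the absolute pc to report, so callers no longer
 * need to know how long the current instruction happened to be. */
static void raise_at(const char *what, uint32_t pc)
{
    printf("%s at 0x%08" PRIx32 "\n", what, pc);
}

int main(void)
{
    Ctx c = { .pc_curr = 0x1000, .pc_next = 0x1004 };

    /* The old style was raise_at("bkpt", c.pc_next - 4): every caller had
     * to subtract the right instruction length.  The new style: */
    raise_at("bkpt", c.pc_curr);
    return 0;
}
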
1
Implement the code which updates the FPCCR register on an
1
From: Richard Henderson <richard.henderson@linaro.org>
2
exception entry where we are going to use lazy FP stacking.
3
We have to defer to the NVIC to determine whether the
4
various exceptions are currently ready or not.
5
2
3
Promote this function from aarch64 to fully general use.
4
Use it to unify the code sequences for generating illegal
5
opcode exceptions.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Message-id: 20190807045335.1361-11-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
8
---
12
---
9
target/arm/cpu.h | 14 +++++++++
13
target/arm/translate-a64.h | 2 --
10
hw/intc/armv7m_nvic.c | 34 ++++++++++++++++++++++
14
target/arm/translate.h | 2 ++
11
target/arm/helper.c | 67 ++++++++++++++++++++++++++++++++++++++++++-
15
target/arm/translate-a64.c | 7 -------
12
3 files changed, 114 insertions(+), 1 deletion(-)
16
target/arm/translate-vfp.inc.c | 3 +--
17
target/arm/translate.c | 22 ++++++++++++----------
18
5 files changed, 15 insertions(+), 21 deletions(-)
13
19
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
15
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
22
--- a/target/arm/translate-a64.h
17
+++ b/target/arm/cpu.h
23
+++ b/target/arm/translate-a64.h
18
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
24
@@ -XXX,XX +XXX,XX @@
19
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
25
#ifndef TARGET_ARM_TRANSLATE_A64_H
20
*/
26
#define TARGET_ARM_TRANSLATE_A64_H
21
int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
27
22
+/**
28
-void unallocated_encoding(DisasContext *s);
23
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
29
-
24
+ * @opaque: the NVIC
30
#define unsupported_encoding(s, insn) \
25
+ * @irq: the exception number to mark pending
31
do { \
26
+ * @secure: false for non-banked exceptions or for the nonsecure
32
qemu_log_mask(LOG_UNIMP, \
27
+ * version of a banked exception, true for the secure version of a banked
33
diff --git a/target/arm/translate.h b/target/arm/translate.h
28
+ * exception.
29
+ *
30
+ * Return whether an exception is "ready", i.e. whether the exception is
31
+ * enabled and is configured at a priority which would allow it to
32
+ * interrupt the current execution priority. This controls whether the
33
+ * RDY bit for it in the FPCCR is set.
34
+ */
35
+bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
36
/**
37
* armv7m_nvic_raw_execution_priority: return the raw execution priority
38
* @opaque: the NVIC
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
40
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/intc/armv7m_nvic.c
35
--- a/target/arm/translate.h
42
+++ b/hw/intc/armv7m_nvic.c
36
+++ b/target/arm/translate.h
43
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
37
@@ -XXX,XX +XXX,XX @@ typedef struct DisasCompare {
44
return ret;
38
bool value_global;
39
} DisasCompare;
40
41
+void unallocated_encoding(DisasContext *s);
42
+
43
/* Share the TCG temporaries common between 32 and 64 bit modes. */
44
extern TCGv_i32 cpu_NF, cpu_ZF, cpu_CF, cpu_VF;
45
extern TCGv_i64 cpu_exclusive_addr;
46
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-a64.c
49
+++ b/target/arm/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@ static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
51
}
45
}
52
}
46
53
47
+bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
54
-void unallocated_encoding(DisasContext *s)
55
-{
56
- /* Unallocated and reserved encodings are uncategorized */
57
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
58
- default_exception_el(s));
59
-}
60
-
61
static void init_tmp_a64_array(DisasContext *s)
62
{
63
#ifdef CONFIG_DEBUG_TCG
64
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/target/arm/translate-vfp.inc.c
67
+++ b/target/arm/translate-vfp.inc.c
68
@@ -XXX,XX +XXX,XX @@ static bool full_vfp_access_check(DisasContext *s, bool ignore_vfp_enabled)
69
70
if (!s->vfp_enabled && !ignore_vfp_enabled) {
71
assert(!arm_dc_feature(s, ARM_FEATURE_M));
72
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
73
- default_exception_el(s));
74
+ unallocated_encoding(s);
75
return false;
76
}
77
78
diff --git a/target/arm/translate.c b/target/arm/translate.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate.c
81
+++ b/target/arm/translate.c
82
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
83
s->base.is_jmp = DISAS_NORETURN;
84
}
85
86
+void unallocated_encoding(DisasContext *s)
48
+{
87
+{
49
+ /*
88
+ /* Unallocated and reserved encodings are uncategorized */
50
+ * Return whether an exception is "ready", i.e. it is enabled and is
89
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
51
+ * configured at a priority which would allow it to interrupt the
90
+ default_exception_el(s));
52
+ * current execution priority.
53
+ *
54
+ * irq and secure have the same semantics as for armv7m_nvic_set_pending():
55
+ * for non-banked exceptions secure is always false; for banked exceptions
56
+ * it indicates which of the exceptions is required.
57
+ */
58
+ NVICState *s = (NVICState *)opaque;
59
+ bool banked = exc_is_banked(irq);
60
+ VecInfo *vec;
61
+ int running = nvic_exec_prio(s);
62
+
63
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
64
+ assert(!secure || banked);
65
+
66
+ /*
67
+ * HardFault is an odd special case: we always check against -1,
68
+ * even if we're secure and HardFault has priority -3; we never
69
+ * need to check for enabled state.
70
+ */
71
+ if (irq == ARMV7M_EXCP_HARD) {
72
+ return running > -1;
73
+ }
74
+
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
76
+
77
+ return vec->enabled &&
78
+ exc_group_prio(s, vec->prio, secure) < running;
79
+}
91
+}
80
+
92
+
81
/* callback when external interrupt line is changed */
93
/* Force a TB lookup after an instruction that changes the CPU state. */
82
static void set_irq_level(void *opaque, int n, int level)
94
static inline void gen_lookup_tb(DisasContext *s)
83
{
95
{
84
diff --git a/target/arm/helper.c b/target/arm/helper.c
96
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
85
index XXXXXXX..XXXXXXX 100644
97
return;
86
--- a/target/arm/helper.c
98
}
87
+++ b/target/arm/helper.c
99
88
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
100
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
89
env->thumb = addr & 1;
101
- default_exception_el(s));
102
+ unallocated_encoding(s);
90
}
103
}
91
104
92
+static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
105
static inline void gen_add_data_offset(DisasContext *s, unsigned int insn,
93
+ bool apply_splim)
106
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
94
+{
107
}
95
+ /*
108
96
+ * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
109
if (undef) {
97
+ * that we will need later in order to do lazy FP reg stacking.
110
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
98
+ */
111
- default_exception_el(s));
99
+ bool is_secure = env->v7m.secure;
112
+ unallocated_encoding(s);
100
+ void *nvic = env->nvic;
113
return;
101
+ /*
114
}
102
+ * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
115
103
+ * are banked and we want to update the bit in the bank for the
116
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
104
+ * current security state; and in one case we want to specifically
117
break;
105
+ * update the NS banked version of a bit even if we are secure.
118
default:
106
+ */
119
illegal_op:
107
+ uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
120
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
108
+ uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
121
- default_exception_el(s));
109
+ uint32_t *fpccr = &env->v7m.fpccr[is_secure];
122
+ unallocated_encoding(s);
110
+ bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
123
break;
111
+
112
+ env->v7m.fpcar[is_secure] = frameptr & ~0x7;
113
+
114
+ if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
115
+ bool splimviol;
116
+ uint32_t splim = v7m_sp_limit(env);
117
+ bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
118
+ (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
119
+
120
+ splimviol = !ign && frameptr < splim;
121
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
122
+ }
123
+
124
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
125
+
126
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
127
+
128
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
129
+
130
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
131
+ !arm_v7m_is_handler_mode(env));
132
+
133
+ hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
134
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
135
+
136
+ bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
137
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
138
+
139
+ mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
140
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
141
+
142
+ ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
143
+ *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
144
+
145
+ monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
146
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
147
+
148
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
149
+ s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
150
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
151
+
152
+ sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
153
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
154
+ }
155
+}
156
+
157
static bool v7m_push_stack(ARMCPU *cpu)
158
{
159
/* Do the "set up stack frame" part of exception entry,
160
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
161
}
162
} else {
163
/* Lazy stacking enabled, save necessary info to stack later */
164
- /* TODO : equivalent of UpdateFPCCR() pseudocode */
165
+ v7m_update_fpccr(env, frameptr + 0x20, true);
166
}
167
}
124
}
168
}
125
}
126
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
127
}
128
return;
129
illegal_op:
130
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
131
- default_exception_el(s));
132
+ unallocated_encoding(s);
133
}
134
135
static void disas_thumb_insn(DisasContext *s, uint32_t insn)
136
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(DisasContext *s, uint32_t insn)
137
return;
138
illegal_op:
139
undef:
140
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized(),
141
- default_exception_el(s));
142
+ unallocated_encoding(s);
143
}
144
145
static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
169
--
146
--
170
2.20.1
147
2.20.1
171
148
172
149
diff view generated by jsdifflib
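For readers not familiar with FIELD_DP32(), which the v7m_update_fpccr() hunks
above use to update the banked FPCCR words one named bit at a time: it is an
ordinary masked read-modify-write. A self-contained approximation (the bit
positions below are invented for illustration, not the real FPCCR layout):

#include <assert.h>
#include <stdint.h>

/* Deposit 'val' into the 'len'-bit field starting at bit 'start',
 * leaving all other bits of 'reg' untouched (what FIELD_DP32 does). */
static uint32_t deposit32(uint32_t reg, int start, int len, uint32_t val)
{
    uint32_t mask = ((len < 32 ? (1u << len) : 0) - 1) << start;
    return (reg & ~mask) | ((val << start) & mask);
}

int main(void)
{
    uint32_t fpccr = 0;

    /* Pretend LSPACT is bit 0 and MONRDY is bit 8 (illustrative only). */
    fpccr = deposit32(fpccr, 0, 1, 1);   /* LSPACT = 1 */
    fpccr = deposit32(fpccr, 8, 1, 1);   /* MONRDY = 1 */
    fpccr = deposit32(fpccr, 8, 1, 0);   /* MONRDY = 0 again */

    assert(fpccr == 0x1);
    return 0;
}
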
1
Implement the VLLDM instruction for v7M for the FPU present case.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Replace x = double_saturate(y) with x = add_saturate(y, y).
4
There is no need for a separate more specialized helper.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190807045335.1361-12-richard.henderson@linaro.org
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190416125744.27770-26-peter.maydell@linaro.org
6
---
11
---
7
target/arm/helper.h | 1 +
12
target/arm/helper.h | 1 -
8
target/arm/helper.c | 54 ++++++++++++++++++++++++++++++++++++++++++
13
target/arm/op_helper.c | 15 ---------------
9
target/arm/translate.c | 2 +-
14
target/arm/translate.c | 4 ++--
10
3 files changed, 56 insertions(+), 1 deletion(-)
15
3 files changed, 2 insertions(+), 18 deletions(-)
11
16
12
diff --git a/target/arm/helper.h b/target/arm/helper.h
17
diff --git a/target/arm/helper.h b/target/arm/helper.h
13
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.h
19
--- a/target/arm/helper.h
15
+++ b/target/arm/helper.h
20
+++ b/target/arm/helper.h
16
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
21
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(add_saturate, i32, env, i32, i32)
17
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
22
DEF_HELPER_3(sub_saturate, i32, env, i32, i32)
18
23
DEF_HELPER_3(add_usaturate, i32, env, i32, i32)
19
DEF_HELPER_2(v7m_vlstm, void, env, i32)
24
DEF_HELPER_3(sub_usaturate, i32, env, i32, i32)
20
+DEF_HELPER_2(v7m_vlldm, void, env, i32)
25
-DEF_HELPER_2(double_saturate, i32, env, s32)
21
26
DEF_HELPER_FLAGS_2(sdiv, TCG_CALL_NO_RWG_SE, s32, s32, s32)
22
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
27
DEF_HELPER_FLAGS_2(udiv, TCG_CALL_NO_RWG_SE, i32, i32, i32)
23
28
DEF_HELPER_FLAGS_1(rbit, TCG_CALL_NO_RWG_SE, i32, i32)
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
29
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
25
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.c
31
--- a/target/arm/op_helper.c
27
+++ b/target/arm/helper.c
32
+++ b/target/arm/op_helper.c
28
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
33
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sub_saturate)(CPUARMState *env, uint32_t a, uint32_t b)
29
g_assert_not_reached();
34
return res;
30
}
35
}
31
36
32
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
37
-uint32_t HELPER(double_saturate)(CPUARMState *env, int32_t val)
33
+{
38
-{
34
+ /* translate.c should never generate calls here in user-only mode */
39
- uint32_t res;
35
+ g_assert_not_reached();
40
- if (val >= 0x40000000) {
36
+}
41
- res = ~SIGNBIT;
37
+
42
- env->QF = 1;
38
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
43
- } else if (val <= (int32_t)0xc0000000) {
44
- res = SIGNBIT;
45
- env->QF = 1;
46
- } else {
47
- res = val << 1;
48
- }
49
- return res;
50
-}
51
-
52
uint32_t HELPER(add_usaturate)(CPUARMState *env, uint32_t a, uint32_t b)
39
{
53
{
40
/* The TT instructions can be used by unprivileged code, but in
54
uint32_t res = a + b;
41
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
42
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
43
}
44
45
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
46
+{
47
+ /* fptr is the value of Rn, the frame pointer we load the FP regs from */
48
+ assert(env->v7m.secure);
49
+
50
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
51
+ return;
52
+ }
53
+
54
+ /* Check access to the coprocessor is permitted */
55
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
56
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
57
+ }
58
+
59
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
60
+ /* State in FP is still valid */
61
+ env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
62
+ } else {
63
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
64
+ int i;
65
+ uint32_t fpscr;
66
+
67
+ if (fptr & 7) {
68
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
69
+ }
70
+
71
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
72
+ uint32_t slo, shi;
73
+ uint64_t dn;
74
+ uint32_t faddr = fptr + 4 * i;
75
+
76
+ if (i >= 16) {
77
+ faddr += 8; /* skip the slot for the FPSCR */
78
+ }
79
+
80
+ slo = cpu_ldl_data(env, faddr);
81
+ shi = cpu_ldl_data(env, faddr + 4);
82
+
83
+ dn = (uint64_t) shi << 32 | slo;
84
+ *aa32_vfp_dreg(env, i / 2) = dn;
85
+ }
86
+ fpscr = cpu_ldl_data(env, fptr + 0x40);
87
+ vfp_set_fpscr(env, fpscr);
88
+ }
89
+
90
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
91
+}
92
+
93
static bool v7m_push_stack(ARMCPU *cpu)
94
{
95
/* Do the "set up stack frame" part of exception entry,
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
55
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
57
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
58
+++ b/target/arm/translate.c
59
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
60
tmp = load_reg(s, rm);
61
tmp2 = load_reg(s, rn);
62
if (op1 & 2)
63
- gen_helper_double_saturate(tmp2, cpu_env, tmp2);
64
+ gen_helper_add_saturate(tmp2, cpu_env, tmp2, tmp2);
65
if (op1 & 1)
66
gen_helper_sub_saturate(tmp, cpu_env, tmp, tmp2);
67
else
100
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
68
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
101
TCGv_i32 fptr = load_reg(s, rn);
69
tmp = load_reg(s, rn);
102
70
tmp2 = load_reg(s, rm);
103
if (extract32(insn, 20, 1)) {
71
if (op & 1)
104
- /* VLLDM */
72
- gen_helper_double_saturate(tmp, cpu_env, tmp);
105
+ gen_helper_v7m_vlldm(cpu_env, fptr);
73
+ gen_helper_add_saturate(tmp, cpu_env, tmp, tmp);
106
} else {
74
if (op & 2)
107
gen_helper_v7m_vlstm(cpu_env, fptr);
75
gen_helper_sub_saturate(tmp, cpu_env, tmp2, tmp);
108
}
76
else
109
--
77
--
110
2.20.1
78
2.20.1
111
79
112
80
diff view generated by jsdifflib
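The double_saturate() removal above rests on the identity that saturated
doubling is just a saturated add of a value with itself. A standalone sketch
of a signed 32-bit saturating add (not the QEMU helper; the q flag here
stands in for env->QF) showing the equivalence:

#include <assert.h>
#include <stdint.h>

/* Signed 32-bit saturating add: clamp to INT32_MIN/INT32_MAX on overflow. */
static int32_t add_sat32(int32_t a, int32_t b, int *q)
{
    int64_t sum = (int64_t)a + b;
    if (sum > INT32_MAX) {
        *q = 1;
        return INT32_MAX;
    }
    if (sum < INT32_MIN) {
        *q = 1;
        return INT32_MIN;
    }
    return (int32_t)sum;
}

int main(void)
{
    int q = 0;

    /* Doubling without overflow: same result as x + x, no saturation. */
    assert(add_sat32(0x1000, 0x1000, &q) == 0x2000 && q == 0);

    /* Doubling a value >= 0x40000000 saturates and sets the flag,
     * exactly what the old double_saturate() special-cased. */
    assert(add_sat32(0x40000000, 0x40000000, &q) == INT32_MAX && q == 1);

    q = 0;
    assert(add_sat32(INT32_MIN / 2 - 1, INT32_MIN / 2 - 1, &q) == INT32_MIN
           && q == 1);
    return 0;
}
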
1
From: Eric Auger <eric.auger@redhat.com>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
The SMMUNotifierNode struct is not necessary and brings extra
3
If -cpu <cpu>,aarch64=off is used then KVM must also be used, and it
4
complexity so let's remove it. We now directly track the SMMUDevices
4
and the host must both support running the vcpu in 32-bit mode. Also, if
5
which have registered IOMMU MR notifiers.
5
-cpu <cpu>,aarch64=on is used, then it doesn't matter if kvm is
6
enabled or not.
6
7
7
This is inspired from the same transformation on intel-iommu
8
Signed-off-by: Andrew Jones <drjones@redhat.com>
8
done in commit b4a4ba0d68f50f218ee3957b6638dbee32a5eeef
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
("intel-iommu: remove IntelIOMMUNotifierNode")
10
11
Signed-off-by: Eric Auger <eric.auger@redhat.com>
12
Reviewed-by: Peter Xu <peterx@redhat.com>
13
Message-id: 20190409160219.19026-1-eric.auger@redhat.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
11
---
16
include/hw/arm/smmu-common.h | 8 ++------
12
target/arm/kvm_arm.h | 14 ++++++++++++++
17
hw/arm/smmu-common.c | 6 +++---
13
target/arm/cpu64.c | 12 ++++++------
18
hw/arm/smmuv3.c | 28 +++++++---------------------
14
target/arm/kvm64.c | 9 +++++++++
19
3 files changed, 12 insertions(+), 30 deletions(-)
15
3 files changed, 29 insertions(+), 6 deletions(-)
20
16
21
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
17
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
22
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/arm/smmu-common.h
19
--- a/target/arm/kvm_arm.h
24
+++ b/include/hw/arm/smmu-common.h
20
+++ b/target/arm/kvm_arm.h
25
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUDevice {
21
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
26
AddressSpace as;
22
*/
27
uint32_t cfg_cache_hits;
23
void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
28
uint32_t cfg_cache_misses;
24
29
+ QLIST_ENTRY(SMMUDevice) next;
25
+/**
30
} SMMUDevice;
26
+ * kvm_arm_aarch32_supported:
31
27
+ * @cs: CPUState
32
-typedef struct SMMUNotifierNode {
28
+ *
33
- SMMUDevice *sdev;
29
+ * Returns: true if the KVM VCPU can enable AArch32 mode
34
- QLIST_ENTRY(SMMUNotifierNode) next;
30
+ * and false otherwise.
35
-} SMMUNotifierNode;
31
+ */
36
-
32
+bool kvm_arm_aarch32_supported(CPUState *cs);
37
typedef struct SMMUPciBus {
33
+
38
PCIBus *bus;
34
/**
39
SMMUDevice *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
35
* kvm_arm_get_max_vm_ipa_size - Returns the number of bits in the
40
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUState {
36
* IPA address space supported by KVM
41
GHashTable *iotlb;
37
@@ -XXX,XX +XXX,XX @@ static inline void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
42
SMMUPciBus *smmu_pcibus_by_bus_num[SMMU_PCI_BUS_MAX];
38
cpu->host_cpu_probe_failed = true;
43
PCIBus *pci_bus;
39
}
44
- QLIST_HEAD(, SMMUNotifierNode) notifiers_list;
40
45
+ QLIST_HEAD(, SMMUDevice) devices_with_notifiers;
41
+static inline bool kvm_arm_aarch32_supported(CPUState *cs)
46
uint8_t bus_num;
42
+{
47
PCIBus *primary_bus;
43
+ return false;
48
} SMMUState;
44
+}
49
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
45
+
46
static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
47
{
48
return -ENOENT;
49
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
50
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
51
--- a/hw/arm/smmu-common.c
51
--- a/target/arm/cpu64.c
52
+++ b/hw/arm/smmu-common.c
52
+++ b/target/arm/cpu64.c
53
@@ -XXX,XX +XXX,XX @@ inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
53
@@ -XXX,XX +XXX,XX @@ static void aarch64_cpu_set_aarch64(Object *obj, bool value, Error **errp)
54
/* Unmap all notifiers of all mr's */
54
* restriction allows us to avoid fixing up functionality that assumes a
55
void smmu_inv_notifiers_all(SMMUState *s)
55
* uniform execution state like do_interrupt.
56
{
56
*/
57
- SMMUNotifierNode *node;
57
- if (!kvm_enabled()) {
58
+ SMMUDevice *sdev;
58
- error_setg(errp, "'aarch64' feature cannot be disabled "
59
59
- "unless KVM is enabled");
60
- QLIST_FOREACH(node, &s->notifiers_list, next) {
61
- smmu_inv_notifiers_mr(&node->sdev->iommu);
62
+ QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
63
+ smmu_inv_notifiers_mr(&sdev->iommu);
64
}
65
}
66
67
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/hw/arm/smmuv3.c
70
+++ b/hw/arm/smmuv3.c
71
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
72
/* invalidate an asid/iova tuple in all mr's */
73
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
74
{
75
- SMMUNotifierNode *node;
76
+ SMMUDevice *sdev;
77
78
- QLIST_FOREACH(node, &s->notifiers_list, next) {
79
- IOMMUMemoryRegion *mr = &node->sdev->iommu;
80
+ QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
81
+ IOMMUMemoryRegion *mr = &sdev->iommu;
82
IOMMUNotifier *n;
83
84
trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
85
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
86
SMMUDevice *sdev = container_of(iommu, SMMUDevice, iommu);
87
SMMUv3State *s3 = sdev->smmu;
88
SMMUState *s = &(s3->smmu_state);
89
- SMMUNotifierNode *node = NULL;
90
- SMMUNotifierNode *next_node = NULL;
91
92
if (new & IOMMU_NOTIFIER_MAP) {
93
int bus_num = pci_bus_num(sdev->bus);
94
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
95
96
if (old == IOMMU_NOTIFIER_NONE) {
97
trace_smmuv3_notify_flag_add(iommu->parent_obj.name);
98
- node = g_malloc0(sizeof(*node));
99
- node->sdev = sdev;
100
- QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
101
- return;
60
- return;
102
- }
61
- }
103
-
62
-
104
- /* update notifier node with new flags */
63
if (value == false) {
105
- QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
64
+ if (!kvm_enabled() || !kvm_arm_aarch32_supported(CPU(cpu))) {
106
- if (node->sdev == sdev) {
65
+ error_setg(errp, "'aarch64' feature cannot be disabled "
107
- if (new == IOMMU_NOTIFIER_NONE) {
66
+ "unless KVM is enabled and 32-bit EL1 "
108
- trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
67
+ "is supported");
109
- QLIST_REMOVE(node, next);
68
+ return;
110
- g_free(node);
69
+ }
111
- }
70
unset_feature(&cpu->env, ARM_FEATURE_AARCH64);
112
- return;
71
} else {
113
- }
72
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
114
+ QLIST_INSERT_HEAD(&s->devices_with_notifiers, sdev, next);
73
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
115
+ } else if (new == IOMMU_NOTIFIER_NONE) {
74
index XXXXXXX..XXXXXXX 100644
116
+ trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
75
--- a/target/arm/kvm64.c
117
+ QLIST_REMOVE(sdev, next);
76
+++ b/target/arm/kvm64.c
118
}
77
@@ -XXX,XX +XXX,XX @@
78
#include "exec/gdbstub.h"
79
#include "sysemu/sysemu.h"
80
#include "sysemu/kvm.h"
81
+#include "sysemu/kvm_int.h"
82
#include "kvm_arm.h"
83
+#include "hw/boards.h"
84
#include "internals.h"
85
86
static bool have_guest_debug;
87
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
88
return true;
119
}
89
}
120
90
91
+bool kvm_arm_aarch32_supported(CPUState *cpu)
92
+{
93
+ KVMState *s = KVM_STATE(current_machine->accelerator);
94
+
95
+ return kvm_check_extension(s, KVM_CAP_ARM_EL1_32BIT);
96
+}
97
+
98
#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
99
100
int kvm_arch_init_vcpu(CPUState *cs)
121
--
101
--
122
2.20.1
102
2.20.1
123
103
124
104
diff view generated by jsdifflib
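The SMMU change above is the usual intrusive-list refactor: the list linkage
is embedded in SMMUDevice itself, so registering a notifier links the device
directly and no wrapper node has to be allocated, freed, or searched for. A
stripped-down illustration of the idea (singly linked here for brevity; the
real QLIST also keeps a back-pointer so removal needs no list walk):

#include <stdio.h>

/* Intrusive list: the linkage lives inside the element itself. */
typedef struct SMMUDevice {
    int devfn;                    /* stand-in for the real fields */
    struct SMMUDevice *next;      /* embedded linkage, like QLIST_ENTRY */
} SMMUDevice;

static SMMUDevice *devices_with_notifiers;

/* "Notifier added": link the device itself, no wrapper node needed. */
static void notifier_added(SMMUDevice *sdev)
{
    sdev->next = devices_with_notifiers;
    devices_with_notifiers = sdev;
}

int main(void)
{
    SMMUDevice a = { .devfn = 8 }, b = { .devfn = 16 };
    SMMUDevice *sdev;

    notifier_added(&a);
    notifier_added(&b);

    /* Invalidation walks the devices directly, as in smmu_inv_notifiers_all */
    for (sdev = devices_with_notifiers; sdev; sdev = sdev->next) {
        printf("invalidate notifiers for devfn %d\n", sdev->devfn);
    }
    return 0;
}
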
Deleted patch
1
In the stripe8() function we use a variable length array; however
2
we know that the maximum length required is MAX_NUM_BUSSES. Use
3
a fixed-length array and an assert instead.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
9
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
10
Message-id: 20190328152635.2794-1-peter.maydell@linaro.org
11
---
12
hw/ssi/xilinx_spips.c | 6 ++++--
13
1 file changed, 4 insertions(+), 2 deletions(-)
14
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
18
+++ b/hw/ssi/xilinx_spips.c
19
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
20
21
static inline void stripe8(uint8_t *x, int num, bool dir)
22
{
23
- uint8_t r[num];
24
- memset(r, 0, sizeof(uint8_t) * num);
25
+ uint8_t r[MAX_NUM_BUSSES];
26
int idx[2] = {0, 0};
27
int bit[2] = {0, 7};
28
int d = dir;
29
30
+ assert(num <= MAX_NUM_BUSSES);
31
+ memset(r, 0, sizeof(uint8_t) * num);
32
+
33
for (idx[0] = 0; idx[0] < num; ++idx[0]) {
34
for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
35
r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
36
--
37
2.20.1
38
39
diff view generated by jsdifflib
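The stripe8() patch above is the standard recipe for retiring a variable-length
array: size the buffer for the known worst case and assert the runtime bound.
Roughly this shape (the bound value is arbitrary for the sketch):

#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_NUM_BUSSES 2    /* arbitrary bound for the sketch */

static void stripe_clear(uint8_t *out, int num)
{
    uint8_t r[MAX_NUM_BUSSES];          /* fixed size instead of a VLA */

    assert(num <= MAX_NUM_BUSSES);      /* make the implicit bound explicit */
    memset(r, 0, sizeof(uint8_t) * num);
    memcpy(out, r, num);
}

int main(void)
{
    uint8_t buf[MAX_NUM_BUSSES] = { 0xff, 0xff };

    stripe_clear(buf, MAX_NUM_BUSSES);
    assert(buf[0] == 0 && buf[1] == 0);
    return 0;
}
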
Deleted patch
1
Normally configure identifies the source path by looking
2
at the location where the configure script itself exists.
3
We also provide a --source-path option which lets the user
4
manually override this.
5
1
6
There isn't really an obvious use case for the --source-path
7
option, and in commit 927128222b0a91f56c13a in 2017 we
8
accidentally added some logic that looks at $source_path
9
before the command line option that overrides it has been
10
processed.
11
12
The fact that nobody complained suggests that there isn't
13
any use of this option and we aren't testing it either;
14
remove it. This allows us to move the "make $source_path
15
absolute" logic up so that there is no window in the script
16
where $source_path is set but not yet absolute.
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
20
Message-id: 20190318134019.23729-1-peter.maydell@linaro.org
21
---
22
configure | 10 ++--------
23
1 file changed, 2 insertions(+), 8 deletions(-)
24
25
diff --git a/configure b/configure
26
index XXXXXXX..XXXXXXX 100755
27
--- a/configure
28
+++ b/configure
29
@@ -XXX,XX +XXX,XX @@ ld_has() {
30
31
# default parameters
32
source_path=$(dirname "$0")
33
+# make source path absolute
34
+source_path=$(cd "$source_path"; pwd)
35
cpu=""
36
iasl="iasl"
37
interp_prefix="/usr/gnemul/qemu-%M"
38
@@ -XXX,XX +XXX,XX @@ for opt do
39
;;
40
--cxx=*) CXX="$optarg"
41
;;
42
- --source-path=*) source_path="$optarg"
43
- ;;
44
--cpu=*) cpu="$optarg"
45
;;
46
--extra-cflags=*) QEMU_CFLAGS="$QEMU_CFLAGS $optarg"
47
@@ -XXX,XX +XXX,XX @@ if test "$debug_info" = "yes"; then
48
LDFLAGS="-g $LDFLAGS"
49
fi
50
51
-# make source path absolute
52
-source_path=$(cd "$source_path"; pwd)
53
-
54
# running configure in the source tree?
55
# we know that's the case if configure is there.
56
if test -f "./configure"; then
57
@@ -XXX,XX +XXX,XX @@ for opt do
58
;;
59
--interp-prefix=*) interp_prefix="$optarg"
60
;;
61
- --source-path=*)
62
- ;;
63
--cross-prefix=*)
64
;;
65
--cc=*)
66
@@ -XXX,XX +XXX,XX @@ $(echo Available targets: $default_target_list | \
67
--target-list-exclude=LIST exclude a set of targets from the default target-list
68
69
Advanced options (experts only):
70
- --source-path=PATH path of source code [$source_path]
71
--cross-prefix=PREFIX use PREFIX for compile tools [$cross_prefix]
72
--cc=CC use C compiler CC [$cc]
73
--iasl=IASL use ACPI compiler IASL [$iasl]
74
--
75
2.20.1
76
77
diff view generated by jsdifflib
Deleted patch
1
Enforce that for M-profile various FPSCR bits which are RES0 there
2
but have defined meanings on A-profile are never settable. This
3
ensures that M-profile code can't enable the A-profile behaviour
4
(notably vector length/stride handling) by accident.
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190416125744.27770-2-peter.maydell@linaro.org
9
---
10
target/arm/vfp_helper.c | 8 ++++++++
11
1 file changed, 8 insertions(+)
12
13
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/vfp_helper.c
16
+++ b/target/arm/vfp_helper.c
17
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
18
val &= ~FPCR_FZ16;
19
}
20
21
+ if (arm_feature(env, ARM_FEATURE_M)) {
22
+ /*
23
+ * M profile FPSCR is RES0 for the QC, STRIDE, FZ16, LEN bits
24
+ * and also for the trapped-exception-handling bits IxE.
25
+ */
26
+ val &= 0xf7c0009f;
27
+ }
28
+
29
/*
30
* We don't implement trapped exception handling, so the
31
* trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!)
32
--
33
2.20.1
34
35
diff view generated by jsdifflib
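On the FPSCR patch above: the 0xf7c0009f constant is just the complement of
the bits the commit message lists. A quick standalone check, using the
architectural FPSCR bit positions:

#include <assert.h>

int main(void)
{
    /* Architectural FPSCR bit positions: QC=27, STRIDE=[21:20], FZ16=19,
     * LEN=[18:16], trap enables IDE=15 and IXE/UFE/OFE/DZE/IOE=[12:8]. */
    unsigned int res0_on_m = (1u << 27) | (3u << 20) | (1u << 19) |
                             (7u << 16) | (1u << 15) | (0x1fu << 8);

    /* The 0xf7c0009f mask in the patch clears all of those bits... */
    assert((0xf7c0009fu & res0_on_m) == 0);

    /* ...while keeping NZCV ([31:28]) and the cumulative exception
     * flags ([7], [4:0]) writable. */
    assert((0xf7c0009fu & 0xf000009fu) == 0xf000009fu);
    return 0;
}
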
Deleted patch
1
For M-profile the MVFR* ID registers are memory mapped, in the
2
range we implement via the NVIC. Allow them to be read.
3
(If the CPU has no FPU, these registers are defined to be RAZ.)
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190416125744.27770-3-peter.maydell@linaro.org
8
---
9
hw/intc/armv7m_nvic.c | 6 ++++++
10
1 file changed, 6 insertions(+)
11
12
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/intc/armv7m_nvic.c
15
+++ b/hw/intc/armv7m_nvic.c
16
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
17
return 0;
18
}
19
return cpu->env.v7m.sfar;
20
+ case 0xf40: /* MVFR0 */
21
+ return cpu->isar.mvfr0;
22
+ case 0xf44: /* MVFR1 */
23
+ return cpu->isar.mvfr1;
24
+ case 0xf48: /* MVFR2 */
25
+ return cpu->isar.mvfr2;
26
default:
27
bad_offset:
28
qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
29
--
30
2.20.1
31
32
diff view generated by jsdifflib
1
Enable the FPU by default for the Cortex-M4 and Cortex-M33.
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
We first convert the pmu property from a static property to one with
4
its own accessors. Then we use the set accessor to check if the PMU is
5
supported when using KVM. Indeed a 32-bit KVM host does not support
6
the PMU, so this check will catch an attempt to use it at property-set
7
time.
8
9
Signed-off-by: Andrew Jones <drjones@redhat.com>
10
Reviewed-by: Eric Auger <eric.auger@redhat.com>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190416125744.27770-27-peter.maydell@linaro.org
6
---
12
---
7
target/arm/cpu.c | 8 ++++++++
13
target/arm/kvm_arm.h | 14 ++++++++++++++
8
1 file changed, 8 insertions(+)
14
target/arm/cpu.c | 30 +++++++++++++++++++++++++-----
15
target/arm/kvm.c | 7 +++++++
16
3 files changed, 46 insertions(+), 5 deletions(-)
9
17
18
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/kvm_arm.h
21
+++ b/target/arm/kvm_arm.h
22
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu);
23
*/
24
bool kvm_arm_aarch32_supported(CPUState *cs);
25
26
+/**
27
+ * bool kvm_arm_pmu_supported:
28
+ * @cs: CPUState
29
+ *
30
+ * Returns: true if the KVM VCPU can enable its PMU
31
+ * and false otherwise.
32
+ */
33
+bool kvm_arm_pmu_supported(CPUState *cs);
34
+
35
/**
36
* kvm_arm_get_max_vm_ipa_size - Returns the number of bits in the
37
* IPA address space supported by KVM
38
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_aarch32_supported(CPUState *cs)
39
return false;
40
}
41
42
+static inline bool kvm_arm_pmu_supported(CPUState *cs)
43
+{
44
+ return false;
45
+}
46
+
47
static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
48
{
49
return -ENOENT;
10
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
50
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
11
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/cpu.c
52
--- a/target/arm/cpu.c
13
+++ b/target/arm/cpu.c
53
+++ b/target/arm/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
54
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_has_el3_property =
15
set_feature(&cpu->env, ARM_FEATURE_M);
55
static Property arm_cpu_cfgend_property =
16
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
56
DEFINE_PROP_BOOL("cfgend", ARMCPU, cfgend, false);
17
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
57
18
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
58
-/* use property name "pmu" to match other archs and virt tools */
19
cpu->midr = 0x410fc240; /* r0p0 */
59
-static Property arm_cpu_has_pmu_property =
20
cpu->pmsav7_dregion = 8;
60
- DEFINE_PROP_BOOL("pmu", ARMCPU, has_pmu, true);
21
+ cpu->isar.mvfr0 = 0x10110021;
61
-
22
+ cpu->isar.mvfr1 = 0x11000011;
62
static Property arm_cpu_has_vfp_property =
23
+ cpu->isar.mvfr2 = 0x00000000;
63
DEFINE_PROP_BOOL("vfp", ARMCPU, has_vfp, true);
24
cpu->id_pfr0 = 0x00000030;
64
25
cpu->id_pfr1 = 0x00000200;
65
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_pmsav7_dregion_property =
26
cpu->id_dfr0 = 0x00100000;
66
pmsav7_dregion,
27
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
67
qdev_prop_uint32, uint32_t);
28
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
68
29
set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
69
+static bool arm_get_pmu(Object *obj, Error **errp)
30
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
70
+{
31
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
71
+ ARMCPU *cpu = ARM_CPU(obj);
32
cpu->midr = 0x410fd213; /* r0p3 */
72
+
33
cpu->pmsav7_dregion = 16;
73
+ return cpu->has_pmu;
34
cpu->sau_sregion = 8;
74
+}
35
+ cpu->isar.mvfr0 = 0x10110021;
75
+
36
+ cpu->isar.mvfr1 = 0x11000011;
76
+static void arm_set_pmu(Object *obj, bool value, Error **errp)
37
+ cpu->isar.mvfr2 = 0x00000040;
77
+{
38
cpu->id_pfr0 = 0x00000030;
78
+ ARMCPU *cpu = ARM_CPU(obj);
39
cpu->id_pfr1 = 0x00000210;
79
+
40
cpu->id_dfr0 = 0x00200000;
80
+ if (value) {
81
+ if (kvm_enabled() && !kvm_arm_pmu_supported(CPU(cpu))) {
82
+ error_setg(errp, "'pmu' feature not supported by KVM on this host");
83
+ return;
84
+ }
85
+ set_feature(&cpu->env, ARM_FEATURE_PMU);
86
+ } else {
87
+ unset_feature(&cpu->env, ARM_FEATURE_PMU);
88
+ }
89
+ cpu->has_pmu = value;
90
+}
91
+
92
static void arm_get_init_svtor(Object *obj, Visitor *v, const char *name,
93
void *opaque, Error **errp)
94
{
95
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
96
}
97
98
if (arm_feature(&cpu->env, ARM_FEATURE_PMU)) {
99
- qdev_property_add_static(DEVICE(obj), &arm_cpu_has_pmu_property,
100
+ cpu->has_pmu = true;
101
+ object_property_add_bool(obj, "pmu", arm_get_pmu, arm_set_pmu,
102
&error_abort);
103
}
104
105
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/kvm.c
108
+++ b/target/arm/kvm.c
109
@@ -XXX,XX +XXX,XX @@ void kvm_arm_set_cpu_features_from_host(ARMCPU *cpu)
110
env->features = arm_host_cpu_features.features;
111
}
112
113
+bool kvm_arm_pmu_supported(CPUState *cpu)
114
+{
115
+ KVMState *s = KVM_STATE(current_machine->accelerator);
116
+
117
+ return kvm_check_extension(s, KVM_CAP_ARM_PMU_V3);
118
+}
119
+
120
int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
121
{
122
KVMState *s = KVM_STATE(ms->accelerator);
41
--
123
--
42
2.20.1
124
2.20.1
43
125
44
126
diff view generated by jsdifflib
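The pmu property conversion above follows the usual "validate in the setter"
pattern: a static boolean property accepts any value, whereas a set accessor
can refuse a request the backend cannot honour. Sketched here without the QOM
machinery (the capability check is only a stand-in for kvm_arm_pmu_supported()):

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool kvm_enabled;
    bool has_pmu;
} Cpu;

/* Stand-in for kvm_arm_pmu_supported(): pretend this host lacks it. */
static bool kvm_pmu_supported(void)
{
    return false;
}

/* The setter can reject the request instead of silently accepting it. */
static bool cpu_set_pmu(Cpu *cpu, bool value)
{
    if (value && cpu->kvm_enabled && !kvm_pmu_supported()) {
        fprintf(stderr, "'pmu' feature not supported by KVM on this host\n");
        return false;
    }
    cpu->has_pmu = value;
    return true;
}

int main(void)
{
    Cpu cpu = { .kvm_enabled = true, .has_pmu = true };

    /* Property-set time is where the error gets reported. */
    if (!cpu_set_pmu(&cpu, true)) {
        return 1;
    }
    return 0;
}
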
1
If the floating point extension is present, then the SG instruction
1
From: Andrew Jones <drjones@redhat.com>
2
must clear the CONTROL_S.SFPA bit. Implement this.
3
2
4
(On a no-FPU system the bit will always be zero, so we don't need
3
The current implementation of ZCR_ELx matches the architecture, only
5
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)
4
implementing the lower four bits, with the rest RAZ/WI. This puts
5
a strict limit on ARM_MAX_VQ of 16. Make sure we don't let ARM_MAX_VQ
6
grow without a corresponding update here.
6
7
8
Suggested-by: Dave Martin <Dave.Martin@arm.com>
9
Signed-off-by: Andrew Jones <drjones@redhat.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
10
---
13
---
11
target/arm/helper.c | 1 +
14
target/arm/helper.c | 1 +
12
1 file changed, 1 insertion(+)
15
1 file changed, 1 insertion(+)
13
16
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
19
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
20
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
21
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
19
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
22
int new_len;
20
", executing it\n", env->regs[15]);
23
21
env->regs[14] &= ~1;
24
/* Bits other than [3:0] are RAZ/WI. */
22
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
25
+ QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16);
23
switch_v7m_security_state(env, true);
26
raw_write(env, ri, value & 0xf);
24
xpsr_write(env, 0, XPSR_IT);
27
25
env->regs[15] += 4;
28
/*
26
--
29
--
27
2.20.1
30
2.20.1
28
31
29
32
diff view generated by jsdifflib
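QEMU_BUILD_BUG_ON() in the ZCR_ELx hunk above is a compile-time tripwire: the
0xf mask silently caps the vector length field at 16 quadwords, so the
assertion makes a future ARM_MAX_VQ bump fail to build rather than misbehave.
The same guard expressed with C11 _Static_assert (constants picked for the
sketch):

#include <assert.h>
#include <stdint.h>

#define ARM_MAX_VQ 16

/* If ARM_MAX_VQ ever grows past what the 4-bit field can express,
 * stop the build instead of silently truncating the value. */
_Static_assert(ARM_MAX_VQ <= 16, "ZCR length field only holds 4 bits");

static uint64_t zcr_write(uint64_t value)
{
    return value & 0xf;     /* bits other than [3:0] are RAZ/WI */
}

int main(void)
{
    assert(zcr_write(0x23) == 0x3);
    return 0;
}
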
1
In the v7M architecture, if an exception is generated in the process
1
From: Andrew Jones <drjones@redhat.com>
2
of doing the lazy stacking of FP registers, the handling of
3
possible escalation to HardFault is treated differently to the normal
4
approach: it works based on the saved information about exception
5
readiness that was stored in the FPCCR when the stack frame was
6
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
7
which pends exceptions during lazy stacking, and implements
8
this logic.
9
2
10
This corresponds to the pseudocode TakePreserveFPException().
3
Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of
4
four, we should use DIV_ROUND_UP to ensure we get an appropriate
5
array size.
11
6
7
Signed-off-by: Andrew Jones <drjones@redhat.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20190416125744.27770-22-peter.maydell@linaro.org
15
---
10
---
16
target/arm/cpu.h | 12 ++++++
11
target/arm/cpu.h | 2 +-
17
hw/intc/armv7m_nvic.c | 96 +++++++++++++++++++++++++++++++++++++++++++
12
1 file changed, 1 insertion(+), 1 deletion(-)
18
2 files changed, 108 insertions(+)
19
13
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
16
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
18
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVectorReg {
25
* a different exception).
19
#ifdef TARGET_AARCH64
26
*/
20
/* In AArch32 mode, predicate registers do not exist at all. */
27
void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
21
typedef struct ARMPredicateReg {
28
+/**
22
- uint64_t p[2 * ARM_MAX_VQ / 8] QEMU_ALIGNED(16);
29
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
23
+ uint64_t p[DIV_ROUND_UP(2 * ARM_MAX_VQ, 8)] QEMU_ALIGNED(16);
30
+ * @opaque: the NVIC
24
} ARMPredicateReg;
31
+ * @irq: the exception number to mark pending
25
32
+ * @secure: false for non-banked exceptions or for the nonsecure
26
/* In AArch32 mode, PAC keys do not exist at all. */
33
+ * version of a banked exception, true for the secure version of a banked
34
+ * exception.
35
+ *
36
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
37
+ * generated in the course of lazy stacking of FP registers.
38
+ */
39
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
40
/**
41
* armv7m_nvic_get_pending_irq_info: return highest priority pending
42
* exception, and whether it targets Secure state
43
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/intc/armv7m_nvic.c
46
+++ b/hw/intc/armv7m_nvic.c
47
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
48
do_armv7m_nvic_set_pending(opaque, irq, secure, true);
49
}
50
51
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
52
+{
53
+ /*
54
+ * Pend an exception during lazy FP stacking. This differs
55
+ * from the usual exception pending because the logic for
56
+ * whether we should escalate depends on the saved context
57
+ * in the FPCCR register, not on the current state of the CPU/NVIC.
58
+ */
59
+ NVICState *s = (NVICState *)opaque;
60
+ bool banked = exc_is_banked(irq);
61
+ VecInfo *vec;
62
+ bool targets_secure;
63
+ bool escalate = false;
64
+ /*
65
+ * We will only look at bits in fpccr if this is a banked exception
66
+ * (in which case 'secure' tells us whether it is the S or NS version).
67
+ * All the bits for the non-banked exceptions are in fpccr_s.
68
+ */
69
+ uint32_t fpccr_s = s->cpu->env.v7m.fpccr[M_REG_S];
70
+ uint32_t fpccr = s->cpu->env.v7m.fpccr[secure];
71
+
72
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
73
+ assert(!secure || banked);
74
+
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
76
+
77
+ targets_secure = banked ? secure : exc_targets_secure(s, irq);
78
+
79
+ switch (irq) {
80
+ case ARMV7M_EXCP_DEBUG:
81
+ if (!(fpccr_s & R_V7M_FPCCR_MONRDY_MASK)) {
82
+ /* Ignore DebugMonitor exception */
83
+ return;
84
+ }
85
+ break;
86
+ case ARMV7M_EXCP_MEM:
87
+ escalate = !(fpccr & R_V7M_FPCCR_MMRDY_MASK);
88
+ break;
89
+ case ARMV7M_EXCP_USAGE:
90
+ escalate = !(fpccr & R_V7M_FPCCR_UFRDY_MASK);
91
+ break;
92
+ case ARMV7M_EXCP_BUS:
93
+ escalate = !(fpccr_s & R_V7M_FPCCR_BFRDY_MASK);
94
+ break;
95
+ case ARMV7M_EXCP_SECURE:
96
+ escalate = !(fpccr_s & R_V7M_FPCCR_SFRDY_MASK);
97
+ break;
98
+ default:
99
+ g_assert_not_reached();
100
+ }
101
+
102
+ if (escalate) {
103
+ /*
104
+ * Escalate to HardFault: faults that initially targeted Secure
105
+ * continue to do so, even if HF normally targets NonSecure.
106
+ */
107
+ irq = ARMV7M_EXCP_HARD;
108
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
109
+ (targets_secure ||
110
+ !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))) {
111
+ vec = &s->sec_vectors[irq];
112
+ } else {
113
+ vec = &s->vectors[irq];
114
+ }
115
+ }
116
+
117
+ if (!vec->enabled ||
118
+ nvic_exec_prio(s) <= exc_group_prio(s, vec->prio, secure)) {
119
+ if (!(fpccr_s & R_V7M_FPCCR_HFRDY_MASK)) {
120
+ /*
121
+ * We want to escalate to HardFault but the context the
122
+ * FP state belongs to prevents the exception pre-empting.
123
+ */
124
+ cpu_abort(&s->cpu->parent_obj,
125
+ "Lockup: can't escalate to HardFault during "
126
+ "lazy FP register stacking\n");
127
+ }
128
+ }
129
+
130
+ if (escalate) {
131
+ s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
132
+ }
133
+ if (!vec->pending) {
134
+ vec->pending = 1;
135
+ /*
136
+ * We do not call nvic_irq_update(), because we know our caller
137
+ * is going to handle causing us to take the exception by
138
+ * raising EXCP_LAZYFP, so raising the IRQ line would be
139
+ * pointless extra work. We just need to recompute the
140
+ * priorities so that armv7m_nvic_can_take_pending_exception()
141
+ * returns the right answer.
142
+ */
143
+ nvic_recompute_state(s);
144
+ }
145
+}
146
+
147
/* Make pending IRQ active. */
148
void armv7m_nvic_acknowledge_irq(void *opaque)
149
{
150
--
27
--
151
2.20.1
28
2.20.1
152
29
153
30
diff view generated by jsdifflib
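On the DIV_ROUND_UP change above: 2 * ARM_MAX_VQ / 8 only sizes the predicate
array correctly while ARM_MAX_VQ is a multiple of four; rounding up is correct
for any value. A quick check of the arithmetic (the macro matches the usual
definition):

#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    /* ARM_MAX_VQ = 16: both expressions agree (4 uint64_t words). */
    assert(2 * 16 / 8 == 4 && DIV_ROUND_UP(2 * 16, 8) == 4);

    /* A hypothetical ARM_MAX_VQ = 17: plain division would under-allocate. */
    assert(2 * 17 / 8 == 4);                 /* truncates: one word short */
    assert(DIV_ROUND_UP(2 * 17, 8) == 5);    /* rounds up as required */
    return 0;
}
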
1
The M-profile CONTROL register has two bits -- SFPA and FPCA --
1
From: Andrew Jones <drjones@redhat.com>
2
which relate to floating-point support, and should be RES0 otherwise.
3
Handle them correctly in the MSR/MRS register access code.
4
Neither is banked between security states, so they are stored
5
in v7m.control[M_REG_S] regardless of current security state.
6
2
3
A couple return -EINVAL's forgot their '-'s.
4
5
Signed-off-by: Andrew Jones <drjones@redhat.com>
6
Reviewed-by: Eric Auger <eric.auger@redhat.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-9-peter.maydell@linaro.org
10
---
9
---
11
target/arm/helper.c | 57 ++++++++++++++++++++++++++++++++++++++-------
10
target/arm/kvm64.c | 4 ++--
12
1 file changed, 49 insertions(+), 8 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
13
12
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
15
--- a/target/arm/kvm64.c
17
+++ b/target/arm/helper.c
16
+++ b/target/arm/kvm64.c
18
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
17
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
19
return xpsr_read(env) & mask;
18
write_cpustate_to_list(cpu, true);
20
break;
19
21
case 20: /* CONTROL */
20
if (!write_list_to_kvmstate(cpu, level)) {
22
- return env->v7m.control[env->v7m.secure];
21
- return EINVAL;
23
+ {
22
+ return -EINVAL;
24
+ uint32_t value = env->v7m.control[env->v7m.secure];
25
+ if (!env->v7m.secure) {
26
+ /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
27
+ value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
28
+ }
29
+ return value;
30
+ }
31
case 0x94: /* CONTROL_NS */
32
/* We have to handle this here because unprivileged Secure code
33
* can read the NS CONTROL register.
34
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
35
if (!env->v7m.secure) {
36
return 0;
37
}
38
- return env->v7m.control[M_REG_NS];
39
+ return env->v7m.control[M_REG_NS] |
40
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
41
}
23
}
42
24
43
if (el == 0) {
25
kvm_arm_sync_mpstate_to_kvm(cpu);
44
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
26
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
45
*/
46
uint32_t mask = extract32(maskreg, 8, 4);
47
uint32_t reg = extract32(maskreg, 0, 8);
48
+ int cur_el = arm_current_el(env);
49
50
- if (arm_current_el(env) == 0 && reg > 7) {
51
- /* only xPSR sub-fields may be written by unprivileged */
52
+ if (cur_el == 0 && reg > 7 && reg != 20) {
53
+ /*
54
+ * only xPSR sub-fields and CONTROL.SFPA may be written by
55
+ * unprivileged code
56
+ */
57
return;
58
}
27
}
59
28
60
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
29
if (!write_kvmstate_to_list(cpu)) {
61
env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
30
- return EINVAL;
62
env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
31
+ return -EINVAL;
63
}
32
}
64
+ /*
33
/* Note that it's OK to have registers which aren't in CPUState,
65
+ * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
34
* so we can ignore a failure return here.
66
+ * RES0 if the FPU is not present, and is stored in the S bank
67
+ */
68
+ if (arm_feature(env, ARM_FEATURE_VFP) &&
69
+ extract32(env->v7m.nsacr, 10, 1)) {
70
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
71
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
72
+ }
73
return;
74
case 0x98: /* SP_NS */
75
{
76
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
77
env->v7m.faultmask[env->v7m.secure] = val & 1;
78
break;
79
case 20: /* CONTROL */
80
- /* Writing to the SPSEL bit only has an effect if we are in
81
+ /*
82
+ * Writing to the SPSEL bit only has an effect if we are in
83
* thread mode; other bits can be updated by any privileged code.
84
* write_v7m_control_spsel() deals with updating the SPSEL bit in
85
* env->v7m.control, so we only need update the others.
86
* For v7M, we must just ignore explicit writes to SPSEL in handler
87
* mode; for v8M the write is permitted but will have no effect.
88
+ * All these bits are writes-ignored from non-privileged code,
89
+ * except for SFPA.
90
*/
91
- if (arm_feature(env, ARM_FEATURE_V8) ||
92
- !arm_v7m_is_handler_mode(env)) {
93
+ if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
94
+ !arm_v7m_is_handler_mode(env))) {
95
write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
96
}
97
- if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
98
+ if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
99
env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
100
env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
101
}
102
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
103
+ /*
104
+ * SFPA is RAZ/WI from NS or if no FPU.
105
+ * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
106
+ * Both are stored in the S bank.
107
+ */
108
+ if (env->v7m.secure) {
109
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
110
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
111
+ }
112
+ if (cur_el > 0 &&
113
+ (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
114
+ extract32(env->v7m.nsacr, 10, 1))) {
115
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
116
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
117
+ }
118
+ }
119
break;
120
default:
121
bad_reg:
122
--
2.20.1
1
Handle floating point registers in exception return.
1
From: Andrew Jones <drjones@redhat.com>
2
This corresponds to pseudocode functions ValidateExceptionReturn(),
2
3
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().
3
Move the getting/putting of the fpsimd registers out of
4
4
kvm_arch_get/put_registers() into their own helper functions
5
to prepare for alternatively getting/putting SVE registers.
6
7
No functional change.
8
9
Signed-off-by: Andrew Jones <drjones@redhat.com>
10
Reviewed-by: Eric Auger <eric.auger@redhat.com>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190416125744.27770-16-peter.maydell@linaro.org
8
---
13
---
9
target/arm/helper.c | 142 +++++++++++++++++++++++++++++++++++++++++++-
14
target/arm/kvm64.c | 148 +++++++++++++++++++++++++++------------------
10
1 file changed, 141 insertions(+), 1 deletion(-)
15
1 file changed, 88 insertions(+), 60 deletions(-)
11
16
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
13
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
19
--- a/target/arm/kvm64.c
15
+++ b/target/arm/helper.c
20
+++ b/target/arm/kvm64.c
16
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
21
@@ -XXX,XX +XXX,XX @@ int kvm_arm_cpreg_level(uint64_t regidx)
17
bool rettobase = false;
22
#define AARCH64_SIMD_CTRL_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U32 | \
18
bool exc_secure = false;
23
KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
19
bool return_to_secure;
24
20
+ bool ftype;
25
+static int kvm_arch_put_fpsimd(CPUState *cs)
21
+ bool restore_s16_s31;
26
+{
22
27
+ ARMCPU *cpu = ARM_CPU(cs);
23
/* If we're not in Handler mode then jumps to magic exception-exit
28
+ CPUARMState *env = &cpu->env;
24
* addresses don't have magic behaviour. However for the v8M
29
+ struct kvm_one_reg reg;
25
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
30
+ uint32_t fpr;
26
excret);
31
+ int i, ret;
27
}
32
+
28
33
+ for (i = 0; i < 32; i++) {
29
+ ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
34
+ uint64_t *q = aa64_vfp_qreg(env, i);
30
+
35
+#ifdef HOST_WORDS_BIGENDIAN
31
+ if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
36
+ uint64_t fp_val[2] = { q[1], q[0] };
32
+ qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
37
+ reg.addr = (uintptr_t)fp_val;
33
+ "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
38
+#else
34
+ "if FPU not present\n",
39
+ reg.addr = (uintptr_t)q;
35
+ excret);
40
+#endif
36
+ ftype = true;
41
+ reg.id = AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]);
37
+ }
42
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
38
+
43
+ if (ret) {
39
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
44
+ return ret;
40
/* EXC_RETURN.ES validation check (R_SMFL). We must do this before
45
+ }
41
* we pick which FAULTMASK to clear.
46
+ }
42
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
47
+
43
*/
48
+ reg.addr = (uintptr_t)(&fpr);
44
write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
49
+ fpr = vfp_get_fpsr(env);
45
50
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
46
+ /*
51
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
47
+ * Clear scratch FP values left in caller saved registers; this
52
+ if (ret) {
48
+ * must happen before any kind of tail chaining.
53
+ return ret;
49
+ */
54
+ }
50
+ if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
55
+
51
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
56
+ reg.addr = (uintptr_t)(&fpr);
52
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
57
+ fpr = vfp_get_fpcr(env);
53
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
58
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
54
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
59
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
55
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
60
+ if (ret) {
56
+ "stackframe: error during lazy state deactivation\n");
61
+ return ret;
57
+ v7m_exception_taken(cpu, excret, true, false);
62
+ }
58
+ return;
63
+
64
+ return 0;
65
+}
66
+
67
int kvm_arch_put_registers(CPUState *cs, int level)
68
{
69
struct kvm_one_reg reg;
70
- uint32_t fpr;
71
uint64_t val;
72
- int i;
73
- int ret;
74
+ int i, ret;
75
unsigned int el;
76
77
ARMCPU *cpu = ARM_CPU(cs);
78
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
79
}
80
}
81
82
- /* Advanced SIMD and FP registers. */
83
- for (i = 0; i < 32; i++) {
84
- uint64_t *q = aa64_vfp_qreg(env, i);
85
-#ifdef HOST_WORDS_BIGENDIAN
86
- uint64_t fp_val[2] = { q[1], q[0] };
87
- reg.addr = (uintptr_t)fp_val;
88
-#else
89
- reg.addr = (uintptr_t)q;
90
-#endif
91
- reg.id = AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]);
92
- ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
93
- if (ret) {
94
- return ret;
95
- }
96
- }
97
-
98
- reg.addr = (uintptr_t)(&fpr);
99
- fpr = vfp_get_fpsr(env);
100
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
101
- ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
102
- if (ret) {
103
- return ret;
104
- }
105
-
106
- fpr = vfp_get_fpcr(env);
107
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
108
- ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
109
+ ret = kvm_arch_put_fpsimd(cs);
110
if (ret) {
111
return ret;
112
}
113
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
114
return ret;
115
}
116
117
+static int kvm_arch_get_fpsimd(CPUState *cs)
118
+{
119
+ ARMCPU *cpu = ARM_CPU(cs);
120
+ CPUARMState *env = &cpu->env;
121
+ struct kvm_one_reg reg;
122
+ uint32_t fpr;
123
+ int i, ret;
124
+
125
+ for (i = 0; i < 32; i++) {
126
+ uint64_t *q = aa64_vfp_qreg(env, i);
127
+ reg.id = AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]);
128
+ reg.addr = (uintptr_t)q;
129
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
130
+ if (ret) {
131
+ return ret;
59
+ } else {
132
+ } else {
60
+ /* Clear s0..s15 and FPSCR */
133
+#ifdef HOST_WORDS_BIGENDIAN
61
+ int i;
134
+ uint64_t t;
62
+
135
+ t = q[0], q[0] = q[1], q[1] = t;
63
+ for (i = 0; i < 16; i += 2) {
136
+#endif
64
+ *aa32_vfp_dreg(env, i / 2) = 0;
65
+ }
66
+ vfp_set_fpscr(env, 0);
67
+ }
137
+ }
68
+ }
138
+ }
69
+
139
+
70
if (sfault) {
140
+ reg.addr = (uintptr_t)(&fpr);
71
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
141
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
72
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
142
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
73
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
143
+ if (ret) {
74
}
144
+ return ret;
75
}
145
+ }
76
146
+ vfp_set_fpsr(env, fpr);
77
+ if (!ftype) {
147
+
78
+ /* FP present and we need to handle it */
148
+ reg.addr = (uintptr_t)(&fpr);
79
+ if (!return_to_secure &&
149
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
80
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
150
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
81
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
151
+ if (ret) {
82
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
152
+ return ret;
83
+ qemu_log_mask(CPU_LOG_INT,
153
+ }
84
+ "...taking SecureFault on existing stackframe: "
154
+ vfp_set_fpcr(env, fpr);
85
+ "Secure LSPACT set but exception return is "
155
+
86
+ "not to secure state\n");
156
+ return 0;
87
+ v7m_exception_taken(cpu, excret, true, false);
157
+}
88
+ return;
158
+
89
+ }
159
int kvm_arch_get_registers(CPUState *cs)
90
+
160
{
91
+ restore_s16_s31 = return_to_secure &&
161
struct kvm_one_reg reg;
92
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
162
uint64_t val;
93
+
163
- uint32_t fpr;
94
+ if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
164
unsigned int el;
95
+ /* State in FPU is still valid, just clear LSPACT */
165
- int i;
96
+ env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
166
- int ret;
97
+ } else {
167
+ int i, ret;
98
+ int i;
168
99
+ uint32_t fpscr;
169
ARMCPU *cpu = ARM_CPU(cs);
100
+ bool cpacr_pass, nsacr_pass;
170
CPUARMState *env = &cpu->env;
101
+
171
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
102
+ cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
172
env->spsr = env->banked_spsr[i];
103
+ return_to_priv);
173
}
104
+ nsacr_pass = return_to_secure ||
174
105
+ extract32(env->v7m.nsacr, 10, 1);
175
- /* Advanced SIMD and FP registers */
106
+
176
- for (i = 0; i < 32; i++) {
107
+ if (!cpacr_pass) {
177
- uint64_t *q = aa64_vfp_qreg(env, i);
108
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
178
- reg.id = AARCH64_SIMD_CORE_REG(fp_regs.vregs[i]);
109
+ return_to_secure);
179
- reg.addr = (uintptr_t)q;
110
+ env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
180
- ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
111
+ qemu_log_mask(CPU_LOG_INT,
181
- if (ret) {
112
+ "...taking UsageFault on existing "
182
- return ret;
113
+ "stackframe: CPACR.CP10 prevents unstacking "
183
- } else {
114
+ "FP regs\n");
184
-#ifdef HOST_WORDS_BIGENDIAN
115
+ v7m_exception_taken(cpu, excret, true, false);
185
- uint64_t t;
116
+ return;
186
- t = q[0], q[0] = q[1], q[1] = t;
117
+ } else if (!nsacr_pass) {
187
-#endif
118
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
188
- }
119
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
189
- }
120
+ qemu_log_mask(CPU_LOG_INT,
190
-
121
+ "...taking Secure UsageFault on existing "
191
- reg.addr = (uintptr_t)(&fpr);
122
+ "stackframe: NSACR.CP10 prevents unstacking "
192
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
123
+ "FP regs\n");
193
- ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
124
+ v7m_exception_taken(cpu, excret, true, false);
194
+ ret = kvm_arch_get_fpsimd(cs);
125
+ return;
195
if (ret) {
126
+ }
196
return ret;
127
+
197
}
128
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
198
- vfp_set_fpsr(env, fpr);
129
+ uint32_t slo, shi;
199
-
130
+ uint64_t dn;
200
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
131
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
201
- ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
132
+
202
- if (ret) {
133
+ if (i >= 16) {
203
- return ret;
134
+ faddr += 8; /* Skip the slot for the FPSCR */
204
- }
135
+ }
205
- vfp_set_fpcr(env, fpr);
136
+
206
137
+ pop_ok = pop_ok &&
207
ret = kvm_get_vcpu_events(cpu);
138
+ v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
208
if (ret) {
139
+ v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
140
+
141
+ if (!pop_ok) {
142
+ break;
143
+ }
144
+
145
+ dn = (uint64_t)shi << 32 | slo;
146
+ *aa32_vfp_dreg(env, i / 2) = dn;
147
+ }
148
+ pop_ok = pop_ok &&
149
+ v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
150
+ if (pop_ok) {
151
+ vfp_set_fpscr(env, fpscr);
152
+ }
153
+ if (!pop_ok) {
154
+ /*
155
+ * These regs are 0 if security extension present;
156
+ * otherwise merely UNKNOWN. We zero always.
157
+ */
158
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
159
+ *aa32_vfp_dreg(env, i / 2) = 0;
160
+ }
161
+ vfp_set_fpscr(env, 0);
162
+ }
163
+ }
164
+ }
165
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
166
+ V7M_CONTROL, FPCA, !ftype);
167
+
168
/* Commit to consuming the stack frame */
169
frameptr += 0x20;
170
+ if (!ftype) {
171
+ frameptr += 0x48;
172
+ if (restore_s16_s31) {
173
+ frameptr += 0x40;
174
+ }
175
+ }
176
/* Undo stack alignment (the SPREALIGN bit indicates that the original
177
* pre-exception SP was not 8-aligned and we added a padding word to
178
* align it, so we undo this by ORing in the bit that increases it
179
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
180
*frame_sp_p = frameptr;
181
}
182
/* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
183
- xpsr_write(env, xpsr, ~XPSR_SPREALIGN);
184
+ xpsr_write(env, xpsr, ~(XPSR_SPREALIGN | XPSR_SFPA));
185
+
186
+ if (env->v7m.secure) {
187
+ bool sfpa = xpsr & XPSR_SFPA;
188
+
189
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
190
+ V7M_CONTROL, SFPA, sfpa);
191
+ }
192
193
/* The restored xPSR exception field will be zero if we're
194
* resuming in Thread mode. If that doesn't match what the
195
--
2.20.1
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This device is used by both ARM (BCM2836, for raspi2) and AArch64
3
Extract is a compact combination of shift + and.
4
(BCM2837, for raspi3) targets, and is not CPU-specific.
5
Move it to common object, so we build it once for all targets.
6
4
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190427133028.12874-1-philmd@redhat.com
6
Message-id: 20190808202616.13782-2-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
hw/dma/Makefile.objs | 2 +-
10
target/arm/translate.c | 9 +--------
13
1 file changed, 1 insertion(+), 1 deletion(-)
11
1 file changed, 1 insertion(+), 8 deletions(-)
14
12
15
diff --git a/hw/dma/Makefile.objs b/hw/dma/Makefile.objs
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/dma/Makefile.objs
15
--- a/target/arm/translate.c
18
+++ b/hw/dma/Makefile.objs
16
+++ b/target/arm/translate.c
19
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx-zdma.o
17
@@ -XXX,XX +XXX,XX @@ static void gen_sar(TCGv_i32 dest, TCGv_i32 t0, TCGv_i32 t1)
20
18
21
obj-$(CONFIG_OMAP) += omap_dma.o soc_dma.o
19
static void shifter_out_im(TCGv_i32 var, int shift)
22
obj-$(CONFIG_PXA2XX) += pxa2xx_dma.o
20
{
23
-obj-$(CONFIG_RASPI) += bcm2835_dma.o
21
- if (shift == 0) {
24
+common-obj-$(CONFIG_RASPI) += bcm2835_dma.o
22
- tcg_gen_andi_i32(cpu_CF, var, 1);
23
- } else {
24
- tcg_gen_shri_i32(cpu_CF, var, shift);
25
- if (shift != 31) {
26
- tcg_gen_andi_i32(cpu_CF, cpu_CF, 1);
27
- }
28
- }
29
+ tcg_gen_extract_i32(cpu_CF, var, shift, 1);
30
}
31
32
/* Shift by immediate. Includes special handling for shift == 0. */
25
--
2.20.1
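For reference, a rough C model of the extract operation used above -- it behaves
like QEMU's extract32() helper; the function name below is only for illustration
and this sketch is not part of the patch:

    static inline uint32_t extract32_model(uint32_t value, int start, int length)
    {
        /* take 'length' bits of 'value' starting at bit 'start' */
        return (value >> start) & (~0U >> (32 - length));
    }

So tcg_gen_extract_i32(cpu_CF, var, shift, 1) picks out the single bit that the
old shri + andi sequence computed, in one op, including the shift == 0 case.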
1
The only "system register" that M-profile floating point exposes
1
From: Richard Henderson <richard.henderson@linaro.org>
2
via the VMRS/VMSR instructions is FPSCR, and it does not have
3
the odd special case for rd==15. Add a check to ensure we only
4
expose FPSCR.
5
2
3
Use deposit as the composite operation to merge the
4
bits from the two inputs.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190808202616.13782-3-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190416125744.27770-5-peter.maydell@linaro.org
9
---
10
---
10
target/arm/translate.c | 19 +++++++++++++++++--
11
target/arm/translate.c | 26 ++++++++++----------------
11
1 file changed, 17 insertions(+), 2 deletions(-)
12
1 file changed, 10 insertions(+), 16 deletions(-)
12
13
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
16
--- a/target/arm/translate.c
16
+++ b/target/arm/translate.c
17
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
18
}
19
shift = (insn >> 7) & 0x1f;
19
}
20
if (insn & (1 << 6)) {
20
} else { /* !dp */
21
/* pkhtb */
21
+ bool is_sysreg;
22
- if (shift == 0)
22
+
23
+ if (shift == 0) {
23
if ((insn & 0x6f) != 0x00)
24
shift = 31;
24
return 1;
25
+ }
25
rn = VFP_SREG_N(insn);
26
tcg_gen_sari_i32(tmp2, tmp2, shift);
26
+
27
- tcg_gen_andi_i32(tmp, tmp, 0xffff0000);
27
+ is_sysreg = extract32(insn, 21, 1);
28
- tcg_gen_ext16u_i32(tmp2, tmp2);
28
+
29
+ tcg_gen_deposit_i32(tmp, tmp, tmp2, 0, 16);
29
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
30
} else {
30
+ /*
31
/* pkhbt */
31
+ * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
32
- if (shift)
32
+ * Writes to R15 are UNPREDICTABLE; we choose to undef.
33
- tcg_gen_shli_i32(tmp2, tmp2, shift);
33
+ */
34
- tcg_gen_ext16u_i32(tmp, tmp);
34
+ if (is_sysreg && (rd == 15 || (rn >> 1) != ARM_VFP_FPSCR)) {
35
- tcg_gen_andi_i32(tmp2, tmp2, 0xffff0000);
35
+ return 1;
36
+ tcg_gen_shli_i32(tmp2, tmp2, shift);
36
+ }
37
+ tcg_gen_deposit_i32(tmp, tmp2, tmp, 0, 16);
38
}
39
- tcg_gen_or_i32(tmp, tmp, tmp2);
40
tcg_temp_free_i32(tmp2);
41
store_reg(s, rd, tmp);
42
} else if ((insn & 0x00200020) == 0x00200000) {
43
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
44
shift = ((insn >> 10) & 0x1c) | ((insn >> 6) & 0x3);
45
if (insn & (1 << 5)) {
46
/* pkhtb */
47
- if (shift == 0)
48
+ if (shift == 0) {
49
shift = 31;
37
+ }
50
+ }
38
+
51
tcg_gen_sari_i32(tmp2, tmp2, shift);
39
if (insn & ARM_CP_RW_BIT) {
52
- tcg_gen_andi_i32(tmp, tmp, 0xffff0000);
40
/* vfp->arm */
53
- tcg_gen_ext16u_i32(tmp2, tmp2);
41
- if (insn & (1 << 21)) {
54
+ tcg_gen_deposit_i32(tmp, tmp, tmp2, 0, 16);
42
+ if (is_sysreg) {
55
} else {
43
/* system register */
56
/* pkhbt */
44
rn >>= 1;
57
- if (shift)
45
58
- tcg_gen_shli_i32(tmp2, tmp2, shift);
46
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
59
- tcg_gen_ext16u_i32(tmp, tmp);
47
}
60
- tcg_gen_andi_i32(tmp2, tmp2, 0xffff0000);
48
} else {
61
+ tcg_gen_shli_i32(tmp2, tmp2, shift);
49
/* arm->vfp */
62
+ tcg_gen_deposit_i32(tmp, tmp2, tmp, 0, 16);
50
- if (insn & (1 << 21)) {
63
}
51
+ if (is_sysreg) {
64
- tcg_gen_or_i32(tmp, tmp, tmp2);
52
rn >>= 1;
65
tcg_temp_free_i32(tmp2);
53
/* system register */
66
store_reg(s, rd, tmp);
54
switch (rn) {
67
} else {
55
--
68
--
56
2.20.1
69
2.20.1
57
70
58
71
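For reference, a rough C model of the deposit operation used above -- it behaves
like QEMU's deposit32() helper; the name below is only for illustration and this
sketch is not part of the patch:

    static inline uint32_t deposit32_model(uint32_t value, int start, int length,
                                           uint32_t fieldval)
    {
        /* replace 'length' bits of 'value' at bit 'start' with bits of 'fieldval' */
        uint32_t mask = (~0U >> (32 - length)) << start;
        return (value & ~mask) | ((fieldval << start) & mask);
    }

A deposit at offset 0 with length 16 therefore takes the low halfword from one
operand and the high halfword from the other, which is the PKHBT/PKHTB result
that the old ext16u/andi/or sequence built up piecewise.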
1
Implement the VLSTM instruction for v7M for the FPU present case.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The immediate shift generator functions already test for,
4
and eliminate, the case of a shift by zero.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190808202616.13782-4-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
6
---
10
---
7
target/arm/cpu.h | 2 +
11
target/arm/translate.c | 19 +++++++------------
8
target/arm/helper.h | 2 +
12
1 file changed, 7 insertions(+), 12 deletions(-)
9
target/arm/helper.c | 84 ++++++++++++++++++++++++++++++++++++++++++
10
target/arm/translate.c | 15 +++++++-
11
4 files changed, 102 insertions(+), 1 deletion(-)
12
13
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@
18
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
19
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
20
#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
21
+#define EXCP_LSERR 21 /* v8M LSERR SecureFault */
22
+#define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
23
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
24
25
#define ARMV7M_EXCP_RESET 1
26
diff --git a/target/arm/helper.h b/target/arm/helper.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.h
29
+++ b/target/arm/helper.h
30
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
31
32
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
33
34
+DEF_HELPER_2(v7m_vlstm, void, env, i32)
35
+
36
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
37
38
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
44
g_assert_not_reached();
45
}
46
47
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
48
+{
49
+ /* translate.c should never generate calls here in user-only mode */
50
+ g_assert_not_reached();
51
+}
52
+
53
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
54
{
55
/* The TT instructions can be used by unprivileged code, but in
56
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
57
}
58
}
59
60
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
61
+{
62
+ /* fptr is the value of Rn, the frame pointer we store the FP regs to */
63
+ bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
64
+ bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
65
+
66
+ assert(env->v7m.secure);
67
+
68
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
69
+ return;
70
+ }
71
+
72
+ /* Check access to the coprocessor is permitted */
73
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
74
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
75
+ }
76
+
77
+ if (lspact) {
78
+ /* LSPACT should not be active when there is active FP state */
79
+ raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
80
+ }
81
+
82
+ if (fptr & 7) {
83
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
84
+ }
85
+
86
+ /*
87
+ * Note that we do not use v7m_stack_write() here, because the
88
+ * accesses should not set the FSR bits for stacking errors if they
89
+ * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
90
+ * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
91
+ * and longjmp out.
92
+ */
93
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
94
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
95
+ int i;
96
+
97
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
98
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
99
+ uint32_t faddr = fptr + 4 * i;
100
+ uint32_t slo = extract64(dn, 0, 32);
101
+ uint32_t shi = extract64(dn, 32, 32);
102
+
103
+ if (i >= 16) {
104
+ faddr += 8; /* skip the slot for the FPSCR */
105
+ }
106
+ cpu_stl_data(env, faddr, slo);
107
+ cpu_stl_data(env, faddr + 4, shi);
108
+ }
109
+ cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
110
+
111
+ /*
112
+ * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
113
+ * leave them unchanged, matching our choice in v7m_preserve_fp_state.
114
+ */
115
+ if (ts) {
116
+ for (i = 0; i < 32; i += 2) {
117
+ *aa32_vfp_dreg(env, i / 2) = 0;
118
+ }
119
+ vfp_set_fpscr(env, 0);
120
+ }
121
+ } else {
122
+ v7m_update_fpccr(env, fptr, false);
123
+ }
124
+
125
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
126
+}
127
+
128
static bool v7m_push_stack(ARMCPU *cpu)
129
{
130
/* Do the "set up stack frame" part of exception entry,
131
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
132
[EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
133
[EXCP_STKOF] = "v8M STKOF UsageFault",
134
[EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
135
+ [EXCP_LSERR] = "v8M LSERR UsageFault",
136
+ [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
137
};
138
139
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
140
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
141
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
142
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
143
break;
144
+ case EXCP_LSERR:
145
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
146
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
147
+ break;
148
+ case EXCP_UNALIGNED:
149
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
150
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
151
+ break;
152
case EXCP_SWI:
153
/* The PC already points to the next instruction. */
154
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
155
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
156
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
157
--- a/target/arm/translate.c
16
--- a/target/arm/translate.c
158
+++ b/target/arm/translate.c
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
19
shift = (insn >> 10) & 3;
20
/* ??? In many cases it's not necessary to do a
21
rotate, a shift is sufficient. */
22
- if (shift != 0)
23
- tcg_gen_rotri_i32(tmp, tmp, shift * 8);
24
+ tcg_gen_rotri_i32(tmp, tmp, shift * 8);
25
op1 = (insn >> 20) & 7;
26
switch (op1) {
27
case 0: gen_sxtb16(tmp); break;
159
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
28
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
160
if (!s->v8m_secure || (insn & 0x0040f0ff)) {
29
shift = (insn >> 4) & 3;
30
/* ??? In many cases it's not necessary to do a
31
rotate, a shift is sufficient. */
32
- if (shift != 0)
33
- tcg_gen_rotri_i32(tmp, tmp, shift * 8);
34
+ tcg_gen_rotri_i32(tmp, tmp, shift * 8);
35
op = (insn >> 20) & 7;
36
switch (op) {
37
case 0: gen_sxth(tmp); break;
38
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
39
case 7:
40
goto illegal_op;
41
default: /* Saturate. */
42
- if (shift) {
43
- if (op & 1)
44
- tcg_gen_sari_i32(tmp, tmp, shift);
45
- else
46
- tcg_gen_shli_i32(tmp, tmp, shift);
47
+ if (op & 1) {
48
+ tcg_gen_sari_i32(tmp, tmp, shift);
49
+ } else {
50
+ tcg_gen_shli_i32(tmp, tmp, shift);
51
}
52
tmp2 = tcg_const_i32(imm);
53
if (op & 4) {
54
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
161
goto illegal_op;
55
goto illegal_op;
162
}
56
}
163
- /* Just NOP since FP support is not implemented */
57
tmp = load_reg(s, rm);
164
+
58
- if (shift) {
165
+ if (arm_dc_feature(s, ARM_FEATURE_VFP)) {
59
- tcg_gen_shli_i32(tmp, tmp, shift);
166
+ TCGv_i32 fptr = load_reg(s, rn);
60
- }
167
+
61
+ tcg_gen_shli_i32(tmp, tmp, shift);
168
+ if (extract32(insn, 20, 1)) {
62
tcg_gen_add_i32(addr, addr, tmp);
169
+ /* VLLDM */
63
tcg_temp_free_i32(tmp);
170
+ } else {
171
+ gen_helper_v7m_vlstm(cpu_env, fptr);
172
+ }
173
+ tcg_temp_free_i32(fptr);
174
+
175
+ /* End the TB, because we have updated FP control bits */
176
+ s->base.is_jmp = DISAS_UPDATE;
177
+ }
178
break;
64
break;
179
}
180
if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
181
--
65
--
182
2.20.1
66
2.20.1
183
67
184
68
1
Correct the decode of the M-profile "coprocessor and
1
From: Richard Henderson <richard.henderson@linaro.org>
2
floating-point instructions" space:
3
* op0 == 0b11 is always unallocated
4
* if the CPU has an FPU then all insns with op1 == 0b101
5
are floating point and go to disas_vfp_insn()
6
2
7
For the moment we leave VLLDM and VLSTM as NOPs; in
3
The helper function is more documentary, and also already
8
a later commit we will fill in the proper implementation
4
handles the case of rotate by zero.
9
for the case where an FPU is present.
10
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190808202616.13782-5-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20190416125744.27770-7-peter.maydell@linaro.org
14
---
10
---
15
target/arm/translate.c | 26 ++++++++++++++++++++++----
11
target/arm/translate.c | 7 ++-----
16
1 file changed, 22 insertions(+), 4 deletions(-)
12
1 file changed, 2 insertions(+), 5 deletions(-)
17
13
18
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.c
16
--- a/target/arm/translate.c
21
+++ b/target/arm/translate.c
17
+++ b/target/arm/translate.c
22
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
23
case 6: case 7: case 14: case 15:
19
/* CPSR = immediate */
24
/* Coprocessor. */
20
val = insn & 0xff;
25
if (arm_dc_feature(s, ARM_FEATURE_M)) {
21
shift = ((insn >> 8) & 0xf) * 2;
26
- /* We don't currently implement M profile FP support,
22
- if (shift)
27
- * so this entire space should give a NOCP fault, with
23
- val = (val >> shift) | (val << (32 - shift));
28
- * the exception of the v8M VLLDM and VLSTM insns, which
24
+ val = ror32(val, shift);
29
- * must be NOPs in Secure state and UNDEF in Nonsecure state.
25
i = ((insn & (1 << 22)) != 0);
30
+ /* 0b111x_11xx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx */
26
if (gen_set_psr_im(s, msr_mask(s, (insn >> 16) & 0xf, i),
31
+ if (extract32(insn, 24, 2) == 3) {
27
i, val)) {
32
+ goto illegal_op; /* op0 = 0b11 : unallocated */
28
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
33
+ }
29
/* immediate operand */
34
+
30
val = insn & 0xff;
35
+ /*
31
shift = ((insn >> 8) & 0xf) * 2;
36
+ * Decode VLLDM and VLSTM first: these are nonstandard because:
32
- if (shift) {
37
+ * * if there is no FPU then these insns must NOP in
33
- val = (val >> shift) | (val << (32 - shift));
38
+ * Secure state and UNDEF in Nonsecure state
34
- }
39
+ * * if there is an FPU then these insns do not have
35
+ val = ror32(val, shift);
40
+ * the usual behaviour that disas_vfp_insn() provides of
36
tmp2 = tcg_temp_new_i32();
41
+ * being controlled by CPACR/NSACR enable bits or the
37
tcg_gen_movi_i32(tmp2, val);
42
+ * lazy-stacking logic.
38
if (logic_cc && shift) {
43
*/
44
if (arm_dc_feature(s, ARM_FEATURE_V8) &&
45
(insn & 0xffa00f00) == 0xec200a00) {
46
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
47
/* Just NOP since FP support is not implemented */
48
break;
49
}
50
+ if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
51
+ ((insn >> 8) & 0xe) == 10) {
52
+ /* FP, and the CPU supports it */
53
+ if (disas_vfp_insn(s, insn)) {
54
+ goto illegal_op;
55
+ }
56
+ break;
57
+ }
58
+
59
/* All other insns: NOCP */
60
gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
61
default_exception_el(s));
62
--
39
--
63
2.20.1
40
2.20.1
64
41
65
42
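For reference, a rough C model of the rotate helper mentioned above -- QEMU
provides ror32() in include/qemu/bitops.h; the exact upstream source may differ
slightly from this sketch, which is only for illustration:

    static inline uint32_t ror32_model(uint32_t word, unsigned int shift)
    {
        /* rotate right by 'shift' bits; shift == 0 returns 'word' unchanged */
        return (word >> shift) | (word << ((32 - shift) & 31));
    }

Because a rotate by zero is already a no-op, the explicit "if (shift)" tests
around the open-coded rotate that the patch removes were not needed for
correctness.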
Deleted patch
1
Currently the code in v7m_push_stack() which detects a violation
2
of the v8M stack limit simply returns early if it does so. This
3
is OK for the current integer-only code, but won't work for the
4
floating point handling we're about to add. We need to continue
5
executing the rest of the function so that we check for other
6
exceptions like not having permission to use the FPU and so
7
that we correctly set the FPCCR state if we are doing lazy
8
stacking. Refactor to avoid the early return.
9
1
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
13
---
14
target/arm/helper.c | 23 ++++++++++++++++++-----
15
1 file changed, 18 insertions(+), 5 deletions(-)
16
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
20
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
22
* should ignore further stack faults trying to process
23
* that derived exception.)
24
*/
25
- bool stacked_ok;
26
+ bool stacked_ok = true, limitviol = false;
27
CPUARMState *env = &cpu->env;
28
uint32_t xpsr = xpsr_read(env);
29
uint32_t frameptr = env->regs[13];
30
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
31
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
32
env->v7m.secure);
33
env->regs[13] = limit;
34
- return true;
35
+ /*
36
+ * We won't try to perform any further memory accesses but
37
+ * we must continue through the following code to check for
38
+ * permission faults during FPU state preservation, and we
39
+ * must update FPCCR if lazy stacking is enabled.
40
+ */
41
+ limitviol = true;
42
+ stacked_ok = false;
43
}
44
}
45
46
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
47
* (which may be taken in preference to the one we started with
48
* if it has higher priority).
49
*/
50
- stacked_ok =
51
+ stacked_ok = stacked_ok &&
52
v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
53
v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
54
v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
55
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
56
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
57
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
58
59
- /* Update SP regardless of whether any of the stack accesses failed. */
60
- env->regs[13] = frameptr;
61
+ /*
62
+ * If we broke a stack limit then SP was already updated earlier;
63
+ * otherwise we update SP regardless of whether any of the stack
64
+ * accesses failed or we took some other kind of fault.
65
+ */
66
+ if (!limitviol) {
67
+ env->regs[13] = frameptr;
68
+ }
69
70
return !stacked_ok;
71
}
72
--
73
2.20.1
74
75
diff view generated by jsdifflib
Deleted patch
1
Handle floating point registers in exception entry.
2
This corresponds to the FP-specific parts of the pseudocode
3
functions ActivateException() and PushStack().
4
1
5
We defer the code corresponding to UpdateFPCCR() to a later patch.
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
10
---
11
target/arm/helper.c | 98 +++++++++++++++++++++++++++++++++++++++++++--
12
1 file changed, 95 insertions(+), 3 deletions(-)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
19
switch_v7m_security_state(env, targets_secure);
20
write_v7m_control_spsel(env, 0);
21
arm_clear_exclusive(env);
22
+ /* Clear SFPA and FPCA (has no effect if no FPU) */
23
+ env->v7m.control[M_REG_S] &=
24
+ ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
25
/* Clear IT bits */
26
env->condexec_bits = 0;
27
env->regs[14] = lr;
28
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
29
uint32_t xpsr = xpsr_read(env);
30
uint32_t frameptr = env->regs[13];
31
ARMMMUIdx mmu_idx = arm_mmu_idx(env);
32
+ uint32_t framesize;
33
+ bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
34
+
35
+ if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
36
+ (env->v7m.secure || nsacr_cp10)) {
37
+ if (env->v7m.secure &&
38
+ env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
39
+ framesize = 0xa8;
40
+ } else {
41
+ framesize = 0x68;
42
+ }
43
+ } else {
44
+ framesize = 0x20;
45
+ }
46
47
/* Align stack pointer if the guest wants that */
48
if ((frameptr & 4) &&
49
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
50
xpsr |= XPSR_SPREALIGN;
51
}
52
53
- frameptr -= 0x20;
54
+ xpsr &= ~XPSR_SFPA;
55
+ if (env->v7m.secure &&
56
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
57
+ xpsr |= XPSR_SFPA;
58
+ }
59
+
60
+ frameptr -= framesize;
61
62
if (arm_feature(env, ARM_FEATURE_V8)) {
63
uint32_t limit = v7m_sp_limit(env);
64
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
65
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
66
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
67
68
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
69
+ /* FPU is active, try to save its registers */
70
+ bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
71
+ bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
72
+
73
+ if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
74
+ qemu_log_mask(CPU_LOG_INT,
75
+ "...SecureFault because LSPACT and FPCA both set\n");
76
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
77
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
78
+ } else if (!env->v7m.secure && !nsacr_cp10) {
79
+ qemu_log_mask(CPU_LOG_INT,
80
+ "...Secure UsageFault with CFSR.NOCP because "
81
+ "NSACR.CP10 prevents stacking FP regs\n");
82
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
83
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
84
+ } else {
85
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
86
+ /* Lazy stacking disabled, save registers now */
87
+ int i;
88
+ bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
89
+ arm_current_el(env) != 0);
90
+
91
+ if (stacked_ok && !cpacr_pass) {
92
+ /*
93
+ * Take UsageFault if CPACR forbids access. The pseudocode
94
+ * here does a full CheckCPEnabled() but we know the NSACR
95
+ * check can never fail as we have already handled that.
96
+ */
97
+ qemu_log_mask(CPU_LOG_INT,
98
+ "...UsageFault with CFSR.NOCP because "
99
+ "CPACR.CP10 prevents stacking FP regs\n");
100
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
101
+ env->v7m.secure);
102
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
103
+ stacked_ok = false;
104
+ }
105
+
106
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
107
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
108
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
109
+ uint32_t slo = extract64(dn, 0, 32);
110
+ uint32_t shi = extract64(dn, 32, 32);
111
+
112
+ if (i >= 16) {
113
+ faddr += 8; /* skip the slot for the FPSCR */
114
+ }
115
+ stacked_ok = stacked_ok &&
116
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
117
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
118
+ }
119
+ stacked_ok = stacked_ok &&
120
+ v7m_stack_write(cpu, frameptr + 0x60,
121
+ vfp_get_fpscr(env), mmu_idx, false);
122
+ if (cpacr_pass) {
123
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
124
+ *aa32_vfp_dreg(env, i / 2) = 0;
125
+ }
126
+ vfp_set_fpscr(env, 0);
127
+ }
128
+ } else {
129
+ /* Lazy stacking enabled, save necessary info to stack later */
130
+ /* TODO : equivalent of UpdateFPCCR() pseudocode */
131
+ }
132
+ }
133
+ }
134
+
135
/*
136
* If we broke a stack limit then SP was already updated earlier;
137
* otherwise we update SP regardless of whether any of the stack
138
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
139
140
if (arm_feature(env, ARM_FEATURE_V8)) {
141
lr = R_V7M_EXCRET_RES1_MASK |
142
- R_V7M_EXCRET_DCRS_MASK |
143
- R_V7M_EXCRET_FTYPE_MASK;
144
+ R_V7M_EXCRET_DCRS_MASK;
145
/* The S bit indicates whether we should return to Secure
146
* or NonSecure (ie our current state).
147
* The ES bit indicates whether we're taking this exception
148
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
149
if (env->v7m.secure) {
150
lr |= R_V7M_EXCRET_S_MASK;
151
}
152
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
153
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
154
+ }
155
} else {
156
lr = R_V7M_EXCRET_RES1_MASK |
157
R_V7M_EXCRET_S_MASK |
158
--
159
2.20.1
160
161
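As a quick cross-check of the frame sizes used in the exception entry patch
above (illustrative arithmetic only, following the v8-M stack frame layout):

    basic integer frame:                8 words          = 0x20 bytes
    + s0..s15, FPSCR, reserved word:    16 + 1 + 1 words = 0x48 more -> 0x68
    + s16..s31 (when FPCCR.TS is set):  16 words         = 0x40 more -> 0xa8

which matches the 0x20, 0x68 and 0xa8 values assigned to framesize.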
Deleted patch
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
---
target/arm/helper.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
/* translate.c should have made BXNS UNDEF unless we're secure */
assert(env->v7m.secure);

+ if (!(dest & 1)) {
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+ }
switch_v7m_security_state(env, dest & 1);
env->thumb = 1;
env->regs[15] = dest & ~1;
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
*/
write_v7m_exception(env, 1);
}
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
switch_v7m_security_state(env, 0);
env->thumb = 1;
env->regs[15] = dest;
--
2.20.1
Deleted patch
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
---
target/arm/helper.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
targets_secure ? "secure" : "nonsecure", exc);

+ if (dotailchain) {
+ /* Sanitize LR FType and PREFIX bits */
+ if (!arm_feature(env, ARM_FEATURE_VFP)) {
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
+ }
+ lr = deposit32(lr, 24, 8, 0xff);
+ }
+
if (arm_feature(env, ARM_FEATURE_V8)) {
if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
(lr & R_V7M_EXCRET_S_MASK)) {
--
2.20.1
Deleted patch
1
The magic value pushed onto the callee stack as an integrity
2
check is different if floating point is present.
3
1
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20190416125744.27770-15-peter.maydell@linaro.org
7
---
8
target/arm/helper.c | 22 +++++++++++++++++++---
9
1 file changed, 19 insertions(+), 3 deletions(-)
10
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/helper.c
14
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ load_fail:
16
return false;
17
}
18
19
+static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
20
+{
21
+ /*
22
+ * Return the integrity signature value for the callee-saves
23
+ * stack frame section. @lr is the exception return payload/LR value
24
+ * whose FType bit forms bit 0 of the signature if FP is present.
25
+ */
26
+ uint32_t sig = 0xfefa125a;
27
+
28
+ if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
29
+ sig |= 1;
30
+ }
31
+ return sig;
32
+}
33
+
34
static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
35
bool ignore_faults)
36
{
37
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
38
bool stacked_ok;
39
uint32_t limit;
40
bool want_psp;
41
+ uint32_t sig;
42
43
if (dotailchain) {
44
bool mode = lr & R_V7M_EXCRET_MODE_MASK;
45
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
46
/* Write as much of the stack frame as we can. A write failure may
47
* cause us to pend a derived exception.
48
*/
49
+ sig = v7m_integrity_sig(env, lr);
50
stacked_ok =
51
- v7m_stack_write(cpu, frameptr, 0xfefa125b, mmu_idx, ignore_faults) &&
52
+ v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
53
v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
54
ignore_faults) &&
55
v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
56
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
57
if (return_to_secure &&
58
((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
59
(excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
60
- uint32_t expected_sig = 0xfefa125b;
61
uint32_t actual_sig;
62
63
pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
64
65
- if (pop_ok && expected_sig != actual_sig) {
66
+ if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
67
/* Take a SecureFault on the current stack */
68
env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
69
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
70
--
71
2.20.1
72
73
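A quick illustration of the two signature values in the integrity-check patch
above (illustrative arithmetic only): the base value is 0xfefa125a and bit 0 is
set from FType, so

    0xfefa125a                      FP state was stacked (FType == 0)
    0xfefa125a | 1 == 0xfefa125b    no FPU, or FType == 1 (the old fixed value)

which is why CPUs without VFP continue to see the pre-existing 0xfefa125b
signature.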
Deleted patch
1
Move the NS TBFLAG down from bit 19 to bit 6, which has not
2
been used since commit c1e3781090b9d36c60 in 2015, when we
3
started passing the entire MMU index in the TB flags rather
4
than just a 'privilege level' bit.
5
1
6
This rearrangement is not strictly necessary, but means that
7
we can put M-profile-only bits next to each other rather
8
than scattered across the flag word.
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
13
---
14
target/arm/cpu.h | 11 ++++++-----
15
1 file changed, 6 insertions(+), 5 deletions(-)
16
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
22
FIELD(TBFLAG_A32, THUMB, 0, 1)
23
FIELD(TBFLAG_A32, VECLEN, 1, 3)
24
FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
25
+/*
26
+ * Indicates whether cp register reads and writes by guest code should access
27
+ * the secure or nonsecure bank of banked registers; note that this is not
28
+ * the same thing as the current security state of the processor!
29
+ */
30
+FIELD(TBFLAG_A32, NS, 6, 1)
31
FIELD(TBFLAG_A32, VFPEN, 7, 1)
32
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
33
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
34
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
35
* checks on the other bits at runtime
36
*/
37
FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
38
-/* Indicates whether cp register reads and writes by guest code should access
39
- * the secure or nonsecure bank of banked registers; note that this is not
40
- * the same thing as the current security state of the processor!
41
- */
42
-FIELD(TBFLAG_A32, NS, 19, 1)
43
/* For M profile only, Handler (ie not Thread) mode */
44
FIELD(TBFLAG_A32, HANDLER, 21, 1)
45
/* For M profile only, whether we should generate stack-limit checks */
46
--
47
2.20.1
48
49
Deleted patch
1
Pushing registers to the stack for v7M needs to handle three cases:
2
* the "normal" case where we pend exceptions
3
* an "ignore faults" case where we set FSR bits but
4
do not pend exceptions (this is used when we are
5
handling some kinds of derived exception on exception entry)
6
* a "lazy FP stacking" case, where different FSR bits
7
are set and the exception is pended differently
8
1
9
Implement this by changing the existing flag argument that
10
tells us whether to ignore faults or not into an enum that
11
specifies which of the 3 modes we should handle.
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20190416125744.27770-23-peter.maydell@linaro.org
16
---
17
target/arm/helper.c | 118 +++++++++++++++++++++++++++++---------------
18
1 file changed, 79 insertions(+), 39 deletions(-)
19
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
25
}
26
}
27
28
+/*
29
+ * What kind of stack write are we doing? This affects how exceptions
30
+ * generated during the stacking are treated.
31
+ */
32
+typedef enum StackingMode {
33
+ STACK_NORMAL,
34
+ STACK_IGNFAULTS,
35
+ STACK_LAZYFP,
36
+} StackingMode;
37
+
38
static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
39
- ARMMMUIdx mmu_idx, bool ignfault)
40
+ ARMMMUIdx mmu_idx, StackingMode mode)
41
{
42
CPUState *cs = CPU(cpu);
43
CPUARMState *env = &cpu->env;
44
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
45
&attrs, &prot, &page_size, &fi, NULL)) {
46
/* MPU/SAU lookup failed */
47
if (fi.type == ARMFault_QEMU_SFault) {
48
- qemu_log_mask(CPU_LOG_INT,
49
- "...SecureFault with SFSR.AUVIOL during stacking\n");
50
- env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
51
+ if (mode == STACK_LAZYFP) {
52
+ qemu_log_mask(CPU_LOG_INT,
53
+ "...SecureFault with SFSR.LSPERR "
54
+ "during lazy stacking\n");
55
+ env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
56
+ } else {
57
+ qemu_log_mask(CPU_LOG_INT,
58
+ "...SecureFault with SFSR.AUVIOL "
59
+ "during stacking\n");
60
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
61
+ }
62
+ env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
63
env->v7m.sfar = addr;
64
exc = ARMV7M_EXCP_SECURE;
65
exc_secure = false;
66
} else {
67
- qemu_log_mask(CPU_LOG_INT, "...MemManageFault with CFSR.MSTKERR\n");
68
- env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
69
+ if (mode == STACK_LAZYFP) {
70
+ qemu_log_mask(CPU_LOG_INT,
71
+ "...MemManageFault with CFSR.MLSPERR\n");
72
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
73
+ } else {
74
+ qemu_log_mask(CPU_LOG_INT,
75
+ "...MemManageFault with CFSR.MSTKERR\n");
76
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
77
+ }
78
exc = ARMV7M_EXCP_MEM;
79
exc_secure = secure;
80
}
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
82
attrs, &txres);
83
if (txres != MEMTX_OK) {
84
/* BusFault trying to write the data */
85
- qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
86
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
87
+ if (mode == STACK_LAZYFP) {
88
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
89
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
90
+ } else {
91
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
92
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
93
+ }
94
exc = ARMV7M_EXCP_BUS;
95
exc_secure = false;
96
goto pend_fault;
97
@@ -XXX,XX +XXX,XX @@ pend_fault:
98
* later if we have two derived exceptions.
99
* The only case when we must not pend the exception but instead
100
* throw it away is if we are doing the push of the callee registers
101
- * and we've already generated a derived exception. Even in this
102
- * case we will still update the fault status registers.
103
+ * and we've already generated a derived exception (this is indicated
104
+ * by the caller passing STACK_IGNFAULTS). Even in this case we will
105
+ * still update the fault status registers.
106
*/
107
- if (!ignfault) {
108
+ switch (mode) {
109
+ case STACK_NORMAL:
110
armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
111
+ break;
112
+ case STACK_LAZYFP:
113
+ armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
114
+ break;
115
+ case STACK_IGNFAULTS:
116
+ break;
117
}
118
return false;
119
}
120
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
121
uint32_t limit;
122
bool want_psp;
123
uint32_t sig;
124
+ StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
125
126
if (dotailchain) {
127
bool mode = lr & R_V7M_EXCRET_MODE_MASK;
128
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
129
*/
130
sig = v7m_integrity_sig(env, lr);
131
stacked_ok =
132
- v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
133
- v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
134
- ignore_faults) &&
135
- v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
136
- ignore_faults) &&
137
- v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx,
138
- ignore_faults) &&
139
- v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx,
140
- ignore_faults) &&
141
- v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx,
142
- ignore_faults) &&
143
- v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx,
144
- ignore_faults) &&
145
- v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx,
146
- ignore_faults) &&
147
- v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx,
148
- ignore_faults);
149
+ v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
150
+ v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
151
+ v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
152
+ v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
153
+ v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
154
+ v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
155
+ v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
156
+ v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
157
+ v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
158
159
/* Update SP regardless of whether any of the stack accesses failed. */
160
*frame_sp_p = frameptr;
161
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
162
* if it has higher priority).
163
*/
164
stacked_ok = stacked_ok &&
165
- v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
166
- v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
167
- v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
168
- v7m_stack_write(cpu, frameptr + 12, env->regs[3], mmu_idx, false) &&
169
- v7m_stack_write(cpu, frameptr + 16, env->regs[12], mmu_idx, false) &&
170
- v7m_stack_write(cpu, frameptr + 20, env->regs[14], mmu_idx, false) &&
171
- v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
172
- v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
173
+ v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
174
+ v7m_stack_write(cpu, frameptr + 4, env->regs[1],
175
+ mmu_idx, STACK_NORMAL) &&
176
+ v7m_stack_write(cpu, frameptr + 8, env->regs[2],
177
+ mmu_idx, STACK_NORMAL) &&
178
+ v7m_stack_write(cpu, frameptr + 12, env->regs[3],
179
+ mmu_idx, STACK_NORMAL) &&
180
+ v7m_stack_write(cpu, frameptr + 16, env->regs[12],
181
+ mmu_idx, STACK_NORMAL) &&
182
+ v7m_stack_write(cpu, frameptr + 20, env->regs[14],
183
+ mmu_idx, STACK_NORMAL) &&
184
+ v7m_stack_write(cpu, frameptr + 24, env->regs[15],
185
+ mmu_idx, STACK_NORMAL) &&
186
+ v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
187
188
if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
189
/* FPU is active, try to save its registers */
190
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
191
faddr += 8; /* skip the slot for the FPSCR */
192
}
193
stacked_ok = stacked_ok &&
194
- v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
195
- v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
196
+ v7m_stack_write(cpu, faddr, slo,
197
+ mmu_idx, STACK_NORMAL) &&
198
+ v7m_stack_write(cpu, faddr + 4, shi,
199
+ mmu_idx, STACK_NORMAL);
200
}
201
stacked_ok = stacked_ok &&
202
v7m_stack_write(cpu, frameptr + 0x60,
203
- vfp_get_fpscr(env), mmu_idx, false);
204
+ vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
205
if (cpacr_pass) {
206
for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
207
*aa32_vfp_dreg(env, i / 2) = 0;
208
--
209
2.20.1
210
211
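As a standalone sketch of the dispatch pattern this patch introduces (plain C with illustrative helper names, not the actual QEMU functions): the old bool "ignfault" argument becomes a three-valued StackingMode, so lazy floating-point state preservation can report the LSPERR/MLSPERR fault bits and pend through the lazy-FP path, while STACK_IGNFAULTS still updates the fault status but pends nothing.

#include <stdio.h>

typedef enum StackingMode {
    STACK_NORMAL,
    STACK_IGNFAULTS,
    STACK_LAZYFP,
} StackingMode;

/* Which MemManage fault bit a failed stacking write should set. */
static const char *memmanage_bit(StackingMode mode)
{
    return mode == STACK_LAZYFP ? "CFSR.MLSPERR" : "CFSR.MSTKERR";
}

/* Mirror of the pend_fault switch added by the patch. */
static void pend_fault(StackingMode mode)
{
    switch (mode) {
    case STACK_NORMAL:
        puts("pend derived exception");
        break;
    case STACK_LAZYFP:
        puts("pend lazy-FP exception");
        break;
    case STACK_IGNFAULTS:
        /* fault status already updated; do not pend a second exception */
        break;
    }
}

int main(void)
{
    printf("MemManage bit for lazy stacking: %s\n", memmanage_bit(STACK_LAZYFP));
    pend_fault(STACK_NORMAL);
    pend_fault(STACK_IGNFAULTS);
    return 0;
}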
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Suggested-by: Markus Armbruster <armbru@redhat.com>
3
Rotate is the more compact and obvious way to swap 16-bit
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
elements of a 32-bit word.
5
Message-id: 20190412165416.7977-3-philmd@redhat.com
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190808202616.13782-6-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
hw/arm/nseries.c | 3 ++-
11
target/arm/translate.c | 6 +-----
10
1 file changed, 2 insertions(+), 1 deletion(-)
12
1 file changed, 1 insertion(+), 5 deletions(-)
11
13
12
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/arm/nseries.c
16
--- a/target/arm/translate.c
15
+++ b/hw/arm/nseries.c
17
+++ b/target/arm/translate.c
16
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_muls_i64_i32(TCGv_i32 a, TCGv_i32 b)
17
#include "hw/boards.h"
19
/* Swap low and high halfwords. */
18
#include "hw/i2c/i2c.h"
20
static void gen_swap_half(TCGv_i32 var)
19
#include "hw/devices.h"
21
{
20
+#include "hw/misc/tmp105.h"
22
- TCGv_i32 tmp = tcg_temp_new_i32();
21
#include "hw/block/flash.h"
23
- tcg_gen_shri_i32(tmp, var, 16);
22
#include "hw/hw.h"
24
- tcg_gen_shli_i32(var, var, 16);
23
#include "hw/bt.h"
25
- tcg_gen_or_i32(var, var, tmp);
24
@@ -XXX,XX +XXX,XX @@ static void n8x0_i2c_setup(struct n800_s *s)
26
- tcg_temp_free_i32(tmp);
25
qemu_register_powerdown_notifier(&n8x0_system_powerdown_notifier);
27
+ tcg_gen_rotri_i32(var, var, 16);
26
27
/* Attach a TMP105 PM chip (A0 wired to ground) */
28
- dev = i2c_create_slave(i2c, "tmp105", N8X0_TMP105_ADDR);
29
+ dev = i2c_create_slave(i2c, TYPE_TMP105, N8X0_TMP105_ADDR);
30
qdev_connect_gpio_out(dev, 0, tmp_irq);
31
}
28
}
32
29
30
/* Dual 16-bit add. Result placed in t0 and t1 is marked as dead.
33
--
31
--
34
2.20.1
32
2.20.1
35
33
36
34
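The equivalence behind that one-line change can be checked in plain C (a standalone sketch, not TCG code): rotating a 32-bit word right by 16 gives exactly the same result as the shri/shli/or sequence it replaces, in a single operation.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Old sequence: tmp = var >> 16; var <<= 16; var |= tmp (three ops). */
static uint32_t swap_half_shifts(uint32_t var)
{
    uint32_t tmp = var >> 16;
    var <<= 16;
    return var | tmp;
}

/* New sequence: a single rotate right by 16. */
static uint32_t swap_half_rotate(uint32_t var)
{
    return (var >> 16) | (var << 16);
}

int main(void)
{
    uint32_t x = 0x1234abcdu;
    assert(swap_half_shifts(x) == swap_half_rotate(x));
    printf("0x%08x -> 0x%08x\n", x, swap_half_rotate(x));
    return 0;
}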
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
No code has used the tc6393xb_gpio_in_get() and tc6393xb_gpio_out_set()
3
All of the inputs to these instructions are 32 bits. Rather than
4
functions since their introduction in commit 88d2c950b002. Time to
4
extend each input to 64 bits and then extract the high 32 bits of
5
remove them.
5
the output, use tcg_gen_muls2_i32 and other 32-bit generator functions.
6
6
7
Suggested-by: Markus Armbruster <armbru@redhat.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20190808202616.13782-7-richard.henderson@linaro.org
9
Message-id: 20190412165416.7977-4-philmd@redhat.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
include/hw/devices.h | 3 ---
12
target/arm/translate.c | 72 +++++++++++++++---------------------------
14
hw/display/tc6393xb.c | 16 ----------------
13
1 file changed, 26 insertions(+), 46 deletions(-)
15
2 files changed, 19 deletions(-)
16
14
17
diff --git a/include/hw/devices.h b/include/hw/devices.h
15
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/devices.h
17
--- a/target/arm/translate.c
20
+++ b/include/hw/devices.h
18
+++ b/target/arm/translate.c
21
@@ -XXX,XX +XXX,XX @@ void retu_key_event(void *retu, int state);
19
@@ -XXX,XX +XXX,XX @@ static void gen_revsh(TCGv_i32 var)
22
typedef struct TC6393xbState TC6393xbState;
20
tcg_gen_ext16s_i32(var, var);
23
TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
21
}
24
uint32_t base, qemu_irq irq);
22
25
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
23
-/* Return (b << 32) + a. Mark inputs as dead */
26
- qemu_irq handler);
24
-static TCGv_i64 gen_addq_msw(TCGv_i64 a, TCGv_i32 b)
27
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s);
28
qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
29
30
#endif
31
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/display/tc6393xb.c
34
+++ b/hw/display/tc6393xb.c
35
@@ -XXX,XX +XXX,XX @@ struct TC6393xbState {
36
blanked : 1;
37
};
38
39
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s)
40
-{
25
-{
41
- return s->gpio_in;
26
- TCGv_i64 tmp64 = tcg_temp_new_i64();
27
-
28
- tcg_gen_extu_i32_i64(tmp64, b);
29
- tcg_temp_free_i32(b);
30
- tcg_gen_shli_i64(tmp64, tmp64, 32);
31
- tcg_gen_add_i64(a, tmp64, a);
32
-
33
- tcg_temp_free_i64(tmp64);
34
- return a;
42
-}
35
-}
43
-
36
-
44
static void tc6393xb_gpio_set(void *opaque, int line, int level)
37
-/* Return (b << 32) - a. Mark inputs as dead. */
45
{
38
-static TCGv_i64 gen_subq_msw(TCGv_i64 a, TCGv_i32 b)
46
// TC6393xbState *s = opaque;
47
@@ -XXX,XX +XXX,XX @@ static void tc6393xb_gpio_set(void *opaque, int line, int level)
48
// FIXME: how does the chip reflect the GPIO input level change?
49
}
50
51
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
52
- qemu_irq handler)
53
-{
39
-{
54
- if (line >= TC6393XB_GPIOS) {
40
- TCGv_i64 tmp64 = tcg_temp_new_i64();
55
- fprintf(stderr, "TC6393xb: no GPIO pin %d\n", line);
56
- return;
57
- }
58
-
41
-
59
- s->handler[line] = handler;
42
- tcg_gen_extu_i32_i64(tmp64, b);
43
- tcg_temp_free_i32(b);
44
- tcg_gen_shli_i64(tmp64, tmp64, 32);
45
- tcg_gen_sub_i64(a, tmp64, a);
46
-
47
- tcg_temp_free_i64(tmp64);
48
- return a;
60
-}
49
-}
61
-
50
-
62
static void tc6393xb_gpio_handler_update(TC6393xbState *s)
51
/* 32x32->64 multiply. Marks inputs as dead. */
52
static TCGv_i64 gen_mulu_i64_i32(TCGv_i32 a, TCGv_i32 b)
63
{
53
{
64
uint32_t level, diff;
54
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
55
(SMMUL, SMMLA, SMMLS) */
56
tmp = load_reg(s, rm);
57
tmp2 = load_reg(s, rs);
58
- tmp64 = gen_muls_i64_i32(tmp, tmp2);
59
+ tcg_gen_muls2_i32(tmp2, tmp, tmp, tmp2);
60
61
if (rd != 15) {
62
- tmp = load_reg(s, rd);
63
+ tmp3 = load_reg(s, rd);
64
if (insn & (1 << 6)) {
65
- tmp64 = gen_subq_msw(tmp64, tmp);
66
+ tcg_gen_sub_i32(tmp, tmp, tmp3);
67
} else {
68
- tmp64 = gen_addq_msw(tmp64, tmp);
69
+ tcg_gen_add_i32(tmp, tmp, tmp3);
70
}
71
+ tcg_temp_free_i32(tmp3);
72
}
73
if (insn & (1 << 5)) {
74
- tcg_gen_addi_i64(tmp64, tmp64, 0x80000000u);
75
+ /*
76
+ * Adding 0x80000000 to the 64-bit quantity
77
+ * means that we have carry in to the high
78
+ * word when the low word has the high bit set.
79
+ */
80
+ tcg_gen_shri_i32(tmp2, tmp2, 31);
81
+ tcg_gen_add_i32(tmp, tmp, tmp2);
82
}
83
- tcg_gen_shri_i64(tmp64, tmp64, 32);
84
- tmp = tcg_temp_new_i32();
85
- tcg_gen_extrl_i64_i32(tmp, tmp64);
86
- tcg_temp_free_i64(tmp64);
87
+ tcg_temp_free_i32(tmp2);
88
store_reg(s, rn, tmp);
89
break;
90
case 0:
91
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
92
}
93
break;
94
case 5: case 6: /* 32 * 32 -> 32msb (SMMUL, SMMLA, SMMLS) */
95
- tmp64 = gen_muls_i64_i32(tmp, tmp2);
96
+ tcg_gen_muls2_i32(tmp2, tmp, tmp, tmp2);
97
if (rs != 15) {
98
- tmp = load_reg(s, rs);
99
+ tmp3 = load_reg(s, rs);
100
if (insn & (1 << 20)) {
101
- tmp64 = gen_addq_msw(tmp64, tmp);
102
+ tcg_gen_add_i32(tmp, tmp, tmp3);
103
} else {
104
- tmp64 = gen_subq_msw(tmp64, tmp);
105
+ tcg_gen_sub_i32(tmp, tmp, tmp3);
106
}
107
+ tcg_temp_free_i32(tmp3);
108
}
109
if (insn & (1 << 4)) {
110
- tcg_gen_addi_i64(tmp64, tmp64, 0x80000000u);
111
+ /*
112
+ * Adding 0x80000000 to the 64-bit quantity
113
+ * means that we have carry in to the high
114
+ * word when the low word has the high bit set.
115
+ */
116
+ tcg_gen_shri_i32(tmp2, tmp2, 31);
117
+ tcg_gen_add_i32(tmp, tmp, tmp2);
118
}
119
- tcg_gen_shri_i64(tmp64, tmp64, 32);
120
- tmp = tcg_temp_new_i32();
121
- tcg_gen_extrl_i64_i32(tmp, tmp64);
122
- tcg_temp_free_i64(tmp64);
123
+ tcg_temp_free_i32(tmp2);
124
break;
125
case 7: /* Unsigned sum of absolute differences. */
126
gen_helper_usad8(tmp, tmp, tmp2);
65
--
127
--
66
2.20.1
128
2.20.1
67
129
68
130
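The rounding comment added in the diff ("adding 0x80000000 ... carry in to the high word when the low word has the high bit set") can be checked with a small standalone C program (illustrative only, not TCG code): the most-significant word of the rounded 64-bit product equals the high word of the product plus bit 31 of the low word.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Reference: round the 64-bit signed product, then take the high word. */
static int32_t msw_round_64bit(int32_t a, int32_t b)
{
    int64_t p = (int64_t)a * b + 0x80000000ll;
    return (int32_t)(p >> 32);
}

/* 32-bit-only version, as in the new codegen: hi plus bit 31 of lo. */
static int32_t msw_round_32bit(int32_t a, int32_t b)
{
    int64_t p = (int64_t)a * b;        /* stands in for tcg_gen_muls2_i32 */
    int32_t hi = (int32_t)(p >> 32);
    uint32_t lo = (uint32_t)p;
    return hi + (int32_t)(lo >> 31);
}

int main(void)
{
    int32_t v[] = { 0, 1, -1, 12345, -99999, 0x7fffffff, (int32_t)0x80000000 };
    for (unsigned i = 0; i < sizeof(v) / sizeof(v[0]); i++) {
        for (unsigned j = 0; j < sizeof(v) / sizeof(v[0]); j++) {
            assert(msw_round_64bit(v[i], v[j]) == msw_round_32bit(v[i], v[j]));
        }
    }
    puts("rounding identity holds for the sampled inputs");
    return 0;
}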
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
3
Separate shift + extract low will result in one extra insn
4
Reviewed-by: Cédric Le Goater <clg@kaod.org>
4
for hosts like RISC-V, MIPS, and Sparc.
5
Reviewed-by: Markus Armbruster <armbru@redhat.com>
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190412165416.7977-2-philmd@redhat.com
7
Message-id: 20190808202616.13782-8-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
hw/arm/aspeed.c | 13 +++++++++----
11
target/arm/translate.c | 18 ++++++------------
11
1 file changed, 9 insertions(+), 4 deletions(-)
12
1 file changed, 6 insertions(+), 12 deletions(-)
12
13
13
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/aspeed.c
16
--- a/target/arm/translate.c
16
+++ b/hw/arm/aspeed.c
17
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static int disas_iwmmxt_insn(DisasContext *s, uint32_t insn)
18
#include "hw/arm/aspeed_soc.h"
19
if (insn & ARM_CP_RW_BIT) { /* TMRRC */
19
#include "hw/boards.h"
20
iwmmxt_load_reg(cpu_V0, wrd);
20
#include "hw/i2c/smbus_eeprom.h"
21
tcg_gen_extrl_i64_i32(cpu_R[rdlo], cpu_V0);
21
+#include "hw/misc/pca9552.h"
22
- tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
22
+#include "hw/misc/tmp105.h"
23
- tcg_gen_extrl_i64_i32(cpu_R[rdhi], cpu_V0);
23
#include "qemu/log.h"
24
+ tcg_gen_extrh_i64_i32(cpu_R[rdhi], cpu_V0);
24
#include "sysemu/block-backend.h"
25
} else { /* TMCRR */
25
#include "hw/loader.h"
26
tcg_gen_concat_i32_i64(cpu_V0, cpu_R[rdlo], cpu_R[rdhi]);
26
@@ -XXX,XX +XXX,XX @@ static void ast2500_evb_i2c_init(AspeedBoardState *bmc)
27
iwmmxt_store_reg(cpu_V0, wrd);
27
eeprom_buf);
28
@@ -XXX,XX +XXX,XX @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
28
29
if (insn & ARM_CP_RW_BIT) { /* MRA */
29
/* The AST2500 EVB expects a LM75 but a TMP105 is compatible */
30
iwmmxt_load_reg(cpu_V0, acc);
30
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d);
31
tcg_gen_extrl_i64_i32(cpu_R[rdlo], cpu_V0);
31
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7),
32
- tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
32
+ TYPE_TMP105, 0x4d);
33
- tcg_gen_extrl_i64_i32(cpu_R[rdhi], cpu_V0);
33
34
+ tcg_gen_extrh_i64_i32(cpu_R[rdhi], cpu_V0);
34
/* The AST2500 EVB does not have an RTC. Let's pretend that one is
35
tcg_gen_andi_i32(cpu_R[rdhi], cpu_R[rdhi], (1 << (40 - 32)) - 1);
35
* plugged on the I2C bus header */
36
} else { /* MAR */
36
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
37
tcg_gen_concat_i32_i64(cpu_V0, cpu_R[rdlo], cpu_R[rdhi]);
37
AspeedSoCState *soc = &bmc->soc;
38
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
38
uint8_t *eeprom_buf = g_malloc0(8 * 1024);
39
gen_helper_neon_narrow_high_u16(tmp, cpu_V0);
39
40
break;
40
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), "pca9552", 0x60);
41
case 2:
41
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), TYPE_PCA9552,
42
- tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
42
+ 0x60);
43
- tcg_gen_extrl_i64_i32(tmp, cpu_V0);
43
44
+ tcg_gen_extrh_i64_i32(tmp, cpu_V0);
44
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), "tmp423", 0x4c);
45
break;
45
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 5), "tmp423", 0x4c);
46
default: abort();
46
47
}
47
/* The Witherspoon expects a TMP275 but a TMP105 is compatible */
48
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
48
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), "tmp105", 0x4a);
49
break;
49
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), TYPE_TMP105,
50
case 2:
50
+ 0x4a);
51
tcg_gen_addi_i64(cpu_V0, cpu_V0, 1u << 31);
51
52
- tcg_gen_shri_i64(cpu_V0, cpu_V0, 32);
52
/* The witherspoon board expects Epson RX8900 I2C RTC but a ds1338 is
53
- tcg_gen_extrl_i64_i32(tmp, cpu_V0);
53
* good enough */
54
+ tcg_gen_extrh_i64_i32(tmp, cpu_V0);
54
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
55
break;
55
56
default: abort();
56
smbus_eeprom_init_one(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), 0x51,
57
}
57
eeprom_buf);
58
@@ -XXX,XX +XXX,XX @@ static int disas_coproc_insn(DisasContext *s, uint32_t insn)
58
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), "pca9552",
59
tmp = tcg_temp_new_i32();
59
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), TYPE_PCA9552,
60
tcg_gen_extrl_i64_i32(tmp, tmp64);
60
0x60);
61
store_reg(s, rt, tmp);
62
- tcg_gen_shri_i64(tmp64, tmp64, 32);
63
tmp = tcg_temp_new_i32();
64
- tcg_gen_extrl_i64_i32(tmp, tmp64);
65
+ tcg_gen_extrh_i64_i32(tmp, tmp64);
66
tcg_temp_free_i64(tmp64);
67
store_reg(s, rt2, tmp);
68
} else {
69
@@ -XXX,XX +XXX,XX @@ static void gen_storeq_reg(DisasContext *s, int rlow, int rhigh, TCGv_i64 val)
70
tcg_gen_extrl_i64_i32(tmp, val);
71
store_reg(s, rlow, tmp);
72
tmp = tcg_temp_new_i32();
73
- tcg_gen_shri_i64(val, val, 32);
74
- tcg_gen_extrl_i64_i32(tmp, val);
75
+ tcg_gen_extrh_i64_i32(tmp, val);
76
store_reg(s, rhigh, tmp);
61
}
77
}
62
78
63
--
79
--
64
2.20.1
80
2.20.1
65
81
66
82
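In plain C terms (a sketch, not TCG code), tcg_gen_extrh_i64_i32 takes the high 32 bits of a 64-bit value in one step, where the old code first shifted right by 32 and then truncated with extrl; the function names below are only stand-ins for the TCG ops.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t extrl_i64_i32(uint64_t v) { return (uint32_t)v; }         /* low word */
static uint32_t extrh_i64_i32(uint64_t v) { return (uint32_t)(v >> 32); } /* high word */

int main(void)
{
    uint64_t val = 0x0123456789abcdefULL;
    uint32_t lo = extrl_i64_i32(val);   /* 0x89abcdef */
    uint32_t hi = extrh_i64_i32(val);   /* 0x01234567 */
    assert((((uint64_t)hi << 32) | lo) == val);
    printf("lo=0x%08x hi=0x%08x\n", lo, hi);
    return 0;
}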
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
Reviewed-by: Markus Armbruster <armbru@redhat.com>
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Message-id: 20190412165416.7977-5-philmd@redhat.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
include/hw/devices.h | 6 ------
9
include/hw/display/tc6393xb.h | 24 ++++++++++++++++++++++++
10
hw/arm/tosa.c | 2 +-
11
hw/display/tc6393xb.c | 2 +-
12
MAINTAINERS | 1 +
13
5 files changed, 27 insertions(+), 8 deletions(-)
14
create mode 100644 include/hw/display/tc6393xb.h
15
16
diff --git a/include/hw/devices.h b/include/hw/devices.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/devices.h
19
+++ b/include/hw/devices.h
20
@@ -XXX,XX +XXX,XX @@ void *tahvo_init(qemu_irq irq, int betty);
21
22
void retu_key_event(void *retu, int state);
23
24
-/* tc6393xb.c */
25
-typedef struct TC6393xbState TC6393xbState;
26
-TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
27
- uint32_t base, qemu_irq irq);
28
-qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
29
-
30
#endif
31
diff --git a/include/hw/display/tc6393xb.h b/include/hw/display/tc6393xb.h
32
new file mode 100644
33
index XXXXXXX..XXXXXXX
34
--- /dev/null
35
+++ b/include/hw/display/tc6393xb.h
36
@@ -XXX,XX +XXX,XX @@
37
+/*
38
+ * Toshiba TC6393XB I/O Controller.
39
+ * Found in Sharp Zaurus SL-6000 (tosa) or some
40
+ * Toshiba e-Series PDAs.
41
+ *
42
+ * Copyright (c) 2007 Hervé Poussineau
43
+ *
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
45
+ * See the COPYING file in the top-level directory.
46
+ */
47
+
48
+#ifndef HW_DISPLAY_TC6393XB_H
49
+#define HW_DISPLAY_TC6393XB_H
50
+
51
+#include "exec/memory.h"
52
+#include "hw/irq.h"
53
+
54
+typedef struct TC6393xbState TC6393xbState;
55
+
56
+TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
57
+ uint32_t base, qemu_irq irq);
58
+qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
59
+
60
+#endif
61
diff --git a/hw/arm/tosa.c b/hw/arm/tosa.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/hw/arm/tosa.c
64
+++ b/hw/arm/tosa.c
65
@@ -XXX,XX +XXX,XX @@
66
#include "hw/hw.h"
67
#include "hw/arm/pxa.h"
68
#include "hw/arm/arm.h"
69
-#include "hw/devices.h"
70
#include "hw/arm/sharpsl.h"
71
#include "hw/pcmcia.h"
72
#include "hw/boards.h"
73
+#include "hw/display/tc6393xb.h"
74
#include "hw/i2c/i2c.h"
75
#include "hw/ssi/ssi.h"
76
#include "hw/sysbus.h"
77
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/hw/display/tc6393xb.c
80
+++ b/hw/display/tc6393xb.c
81
@@ -XXX,XX +XXX,XX @@
82
#include "qapi/error.h"
83
#include "qemu/host-utils.h"
84
#include "hw/hw.h"
85
-#include "hw/devices.h"
86
+#include "hw/display/tc6393xb.h"
87
#include "hw/block/flash.h"
88
#include "ui/console.h"
89
#include "ui/pixel_ops.h"
90
diff --git a/MAINTAINERS b/MAINTAINERS
91
index XXXXXXX..XXXXXXX 100644
92
--- a/MAINTAINERS
93
+++ b/MAINTAINERS
94
@@ -XXX,XX +XXX,XX @@ F: hw/misc/mst_fpga.c
95
F: hw/misc/max111x.c
96
F: include/hw/arm/pxa.h
97
F: include/hw/arm/sharpsl.h
98
+F: include/hw/display/tc6393xb.h
99
100
SABRELITE / i.MX6
101
M: Peter Maydell <peter.maydell@linaro.org>
102
--
103
2.20.1
104
105
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
1
3
Add entries for the Blizzard device in MAINTAINERS.
4
5
Reviewed-by: Thomas Huth <thuth@redhat.com>
6
Reviewed-by: Markus Armbruster <armbru@redhat.com>
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20190412165416.7977-6-philmd@redhat.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/hw/devices.h | 7 -------
12
include/hw/display/blizzard.h | 22 ++++++++++++++++++++++
13
hw/arm/nseries.c | 1 +
14
hw/display/blizzard.c | 2 +-
15
MAINTAINERS | 2 ++
16
5 files changed, 26 insertions(+), 8 deletions(-)
17
create mode 100644 include/hw/display/blizzard.h
18
19
diff --git a/include/hw/devices.h b/include/hw/devices.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/hw/devices.h
22
+++ b/include/hw/devices.h
23
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
24
/* stellaris_input.c */
25
void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
26
27
-/* blizzard.c */
28
-void *s1d13745_init(qemu_irq gpio_int);
29
-void s1d13745_write(void *opaque, int dc, uint16_t value);
30
-void s1d13745_write_block(void *opaque, int dc,
31
- void *buf, size_t len, int pitch);
32
-uint16_t s1d13745_read(void *opaque, int dc);
33
-
34
/* cbus.c */
35
typedef struct {
36
qemu_irq clk;
37
diff --git a/include/hw/display/blizzard.h b/include/hw/display/blizzard.h
38
new file mode 100644
39
index XXXXXXX..XXXXXXX
40
--- /dev/null
41
+++ b/include/hw/display/blizzard.h
42
@@ -XXX,XX +XXX,XX @@
43
+/*
44
+ * Epson S1D13744/S1D13745 (Blizzard/Hailstorm/Tornado) LCD/TV controller.
45
+ *
46
+ * Copyright (C) 2008 Nokia Corporation
47
+ * Written by Andrzej Zaborowski
48
+ *
49
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
50
+ * See the COPYING file in the top-level directory.
51
+ */
52
+
53
+#ifndef HW_DISPLAY_BLIZZARD_H
54
+#define HW_DISPLAY_BLIZZARD_H
55
+
56
+#include "hw/irq.h"
57
+
58
+void *s1d13745_init(qemu_irq gpio_int);
59
+void s1d13745_write(void *opaque, int dc, uint16_t value);
60
+void s1d13745_write_block(void *opaque, int dc,
61
+ void *buf, size_t len, int pitch);
62
+uint16_t s1d13745_read(void *opaque, int dc);
63
+
64
+#endif
65
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/hw/arm/nseries.c
68
+++ b/hw/arm/nseries.c
69
@@ -XXX,XX +XXX,XX @@
70
#include "hw/boards.h"
71
#include "hw/i2c/i2c.h"
72
#include "hw/devices.h"
73
+#include "hw/display/blizzard.h"
74
#include "hw/misc/tmp105.h"
75
#include "hw/block/flash.h"
76
#include "hw/hw.h"
77
diff --git a/hw/display/blizzard.c b/hw/display/blizzard.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/hw/display/blizzard.c
80
+++ b/hw/display/blizzard.c
81
@@ -XXX,XX +XXX,XX @@
82
#include "qemu/osdep.h"
83
#include "qemu-common.h"
84
#include "ui/console.h"
85
-#include "hw/devices.h"
86
+#include "hw/display/blizzard.h"
87
#include "ui/pixel_ops.h"
88
89
typedef void (*blizzard_fn_t)(uint8_t *, const uint8_t *, unsigned int);
90
diff --git a/MAINTAINERS b/MAINTAINERS
91
index XXXXXXX..XXXXXXX 100644
92
--- a/MAINTAINERS
93
+++ b/MAINTAINERS
94
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
95
L: qemu-arm@nongnu.org
96
S: Odd Fixes
97
F: hw/arm/nseries.c
98
+F: hw/display/blizzard.c
99
F: hw/input/lm832x.c
100
F: hw/input/tsc2005.c
101
F: hw/misc/cbus.c
102
F: hw/timer/twl92230.c
103
+F: include/hw/display/blizzard.h
104
105
Palm
106
M: Andrzej Zaborowski <balrogg@gmail.com>
107
--
108
2.20.1
109
110
diff view generated by jsdifflib