arm pullreq for rc1. All minor bugfixes, except for the sve-default-vector-length
patches, which are somewhere between a bugfix and a new feature.

thanks
-- PMM

The following changes since commit c08ccd1b53f488ac86c1f65cf7623dc91acc249a:

  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210726' into staging (2021-07-27 08:35:01 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210727

for you to fetch changes up to e229a179a503f2aee43a76888cf12fbdfe8a3749:

  hw: aspeed_gpio: Fix memory size (2021-07-27 11:00:00 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Check 31st bit to see if CD is valid
 * qemu-options.hx: Fix formatting of -machine memory-backend option
 * hw: aspeed_gpio: Fix memory size
 * hw/arm/nseries: Display hexadecimal value with '0x' prefix
 * Add sve-default-vector-length cpu property
 * docs: Update path that mentions deprecated.rst
 * hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS
 * hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
 * hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
 * target/arm: Report M-profile alignment faults correctly to the guest
 * target/arm: Add missing 'return's after calling v7m_exception_taken()
 * target/arm: Enforce that M-profile SP low 2 bits are always zero

----------------------------------------------------------------
Joe Komlodi (1):
      hw/arm/smmuv3: Check 31st bit to see if CD is valid

Joel Stanley (1):
      hw: aspeed_gpio: Fix memory size

Mao Zhongyi (1):
      docs: Update path that mentions deprecated.rst

Peter Maydell (7):
      qemu-options.hx: Fix formatting of -machine memory-backend option
      target/arm: Enforce that M-profile SP low 2 bits are always zero
      target/arm: Add missing 'return's after calling v7m_exception_taken()
      target/arm: Report M-profile alignment faults correctly to the guest
      hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
      hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
      hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS

Philippe Mathieu-Daudé (1):
      hw/arm/nseries: Display hexadecimal value with '0x' prefix

Richard Henderson (3):
      target/arm: Correctly bound length in sve_zcr_get_valid_len
      target/arm: Export aarch64_sve_zcr_get_valid_len
      target/arm: Add sve-default-vector-length cpu property

 docs/system/arm/cpu-features.rst | 15 ++++++++++
 configure                        |  2 +-
 hw/arm/smmuv3-internal.h         |  2 +-
 target/arm/cpu.h                 |  5 ++++
 target/arm/internals.h           | 10 +++++++
 hw/arm/nseries.c                 |  2 +-
 hw/gpio/aspeed_gpio.c            |  3 +-
 hw/intc/armv7m_nvic.c            | 40 +++++++++++++++++++--------
 target/arm/cpu.c                 | 14 ++++++++--
 target/arm/cpu64.c               | 60 ++++++++++++++++++++++++++++++++++++++++
 target/arm/gdbstub.c             |  4 +++
 target/arm/helper.c              |  8 ++++--
 target/arm/m_helper.c            | 24 ++++++++++++----
 target/arm/translate.c           |  3 ++
 target/i386/cpu.c                |  2 +-
 MAINTAINERS                      |  2 +-
 qemu-options.hx                  | 30 +++++++++++---------
 17 files changed, 183 insertions(+), 43 deletions(-)

From: Joe Komlodi <joe.komlodi@xilinx.com>

The bit to see if a CD is valid is the last bit of the first word of the CD.

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Message-id: 1626728232-134665-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline int pa_range(STE *ste)

/* CD fields */

-#define CD_VALID(x) extract32((x)->word[0], 30, 1)
+#define CD_VALID(x) extract32((x)->word[0], 31, 1)
#define CD_ASID(x) extract32((x)->word[1], 16, 16)
#define CD_TTB(x, sel) \
({ \
--
2.20.1

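As a quick illustration of the one-bit change above, using QEMU's
extract32(value, start, length) helper from qemu/bitops.h and a
hypothetical CD whose first word has only the V bit set:

    uint32_t word0 = 0x80000000;   /* V bit (bit 31) set, bit 30 clear */
    extract32(word0, 30, 1);       /* old check: 0, CD wrongly treated as invalid */
    extract32(word0, 31, 1);       /* new check: 1, CD correctly reported as valid */
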
The documentation of the -machine memory-backend has some minor
formatting errors:
 * Misindentation of the initial line meant that the whole option
   section is incorrectly indented in the HTML output compared to
   the other -machine options
 * The examples weren't indented, which meant that they were formatted
   as plain run-on text including outputting the "::" as text.
 * The a) b) list has no rst-format markup so it is rendered as
   a single run-on paragraph

Fix the formatting.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20210719105257.3599-1-peter.maydell@linaro.org
---
 qemu-options.hx | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index XXXXXXX..XXXXXXX 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -XXX,XX +XXX,XX @@ SRST
Enables or disables ACPI Heterogeneous Memory Attribute Table
(HMAT) support. The default is off.

- ``memory-backend='id'``
+ ``memory-backend='id'``
An alternative to legacy ``-mem-path`` and ``mem-prealloc`` options.
Allows to use a memory backend as main RAM.

For example:
::
- -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
- -machine memory-backend=pc.ram
- -m 512M
+
+ -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
+ -machine memory-backend=pc.ram
+ -m 512M

Migration compatibility note:
- a) as backend id one shall use value of 'default-ram-id', advertised by
- machine type (available via ``query-machines`` QMP command), if migration
- to/from old QEMU (<5.0) is expected.
- b) for machine types 4.0 and older, user shall
- use ``x-use-canonical-path-for-ramblock-id=off`` backend option
- if migration to/from old QEMU (<5.0) is expected.
+
+ * as backend id one shall use value of 'default-ram-id', advertised by
+ machine type (available via ``query-machines`` QMP command), if migration
+ to/from old QEMU (<5.0) is expected.
+ * for machine types 4.0 and older, user shall
+ use ``x-use-canonical-path-for-ramblock-id=off`` backend option
+ if migration to/from old QEMU (<5.0) is expected.
+
For example:
::
- -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
- -machine memory-backend=pc.ram
- -m 512M
+
+ -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
+ -machine memory-backend=pc.ram
+ -m 512M
ERST

HXCOMM Deprecated by -machine
--
2.20.1

For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.

Implement this behaviour by masking out the low bits:
 * for writes to r13 by the gdbstub
 * for writes to any of the various flavours of SP via MSR
 * for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).

All the other writes to regs[13] in C code are either:
 * A-profile only code
 * writes of values we can guarantee to be aligned, such as
   - writes of previous-SP-value plus or minus a 4-aligned constant
   - writes of the value in an SP limit register (which we already
     enforce to be aligned)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
---
 target/arm/gdbstub.c | 4 ++++
 target/arm/m_helper.c | 14 ++++++------
 target/arm/translate.c | 3 +++
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)

if (n < 16) {
/* Core integer register. */
+ if (n == 13 && arm_feature(env, ARM_FEATURE_M)) {
+ /* M profile SP low bits are always 0 */
+ tmp &= ~3;
+ }
env->regs[n] = tmp;
return 4;
}
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
if (!env->v7m.secure) {
return;
}
- env->v7m.other_ss_msp = val;
+ env->v7m.other_ss_msp = val & ~3;
return;
case 0x89: /* PSP_NS */
if (!env->v7m.secure) {
return;
}
- env->v7m.other_ss_psp = val;
+ env->v7m.other_ss_psp = val & ~3;
return;
case 0x8a: /* MSPLIM_NS */
if (!env->v7m.secure) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)

limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];

+ val &= ~0x3;
+
if (val < limit) {
raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
}
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
break;
case 8: /* MSP */
if (v7m_using_psp(env)) {
- env->v7m.other_sp = val;
+ env->v7m.other_sp = val & ~3;
} else {
- env->regs[13] = val;
+ env->regs[13] = val & ~3;
}
break;
case 9: /* PSP */
if (v7m_using_psp(env)) {
- env->regs[13] = val;
+ env->regs[13] = val & ~3;
} else {
- env->v7m.other_sp = val;
+ env->v7m.other_sp = val & ~3;
}
break;
case 10: /* MSPLIM */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
*/
tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
s->base.is_jmp = DISAS_JUMP;
+ } else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
+ /* For M-profile SP bits [1:0] are always zero */
+ tcg_gen_andi_i32(var, var, ~3);
}
tcg_gen_mov_i32(cpu_R[reg], var);
tcg_temp_free_i32(var);
--
2.20.1

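As a concrete illustration of the new behaviour (value chosen only for the
example): an MSR write of 0x20001006 to MSP now stores 0x20001004, and a
gdbstub write of the same value to r13 is masked identically, since SP
bits [1:0] read as zero on M-profile.
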
In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return. If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe. We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.

In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.

Add the missing return statements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
"stackframe: NSACR prevents clearing FPU registers\n");
v7m_exception_taken(cpu, excret, true, false);
+ return;
} else if (!cpacr_pass) {
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
exc_secure);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
"stackframe: CPACR prevents clearing FPU registers\n");
v7m_exception_taken(cpu, excret, true, false);
+ return;
}
}
/* Clear s0..s15, FPSCR and VPR */
--
2.20.1

For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest. These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1. We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.

Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
break;
case EXCP_UNALIGNED:
+ /* Unaligned faults reported by M-profile aware code */
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
break;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
}
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
break;
+ case 0x1: /* Alignment fault reported by generic code */
+ qemu_log_mask(CPU_LOG_INT,
+ "...really UsageFault with UFSR.UNALIGNED\n");
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ env->v7m.secure);
+ break;
default:
/*
* All other FSR values are either MPU faults or "can't happen
--
2.20.1

The ISCR.ISRPENDING bit is set when an external interrupt is pending.
This is true whether that external interrupt is enabled or not.
This means that we can't use 's->vectpending == 0' as a shortcut to
"ISRPENDING is zero", because s->vectpending indicates only the
highest priority pending enabled interrupt.

Remove the incorrect optimization so that if there is no pending
enabled interrupt we fall through to scanning through the whole
interrupt array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-5-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
{
int irq;

- /* We can shortcut if the highest priority pending interrupt
- * happens to be external or if there is nothing pending.
+ /*
+ * We can shortcut if the highest priority pending interrupt
+ * happens to be external; if not we need to check the whole
+ * vectors[] array.
*/
if (s->vectpending > NVIC_FIRST_IRQ) {
return true;
}
- if (s->vectpending == 0) {
- return false;
- }

for (irq = NVIC_FIRST_IRQ; irq < s->num_irq; irq++) {
if (s->vectors[irq].pending) {
--
2.20.1

The VECTPENDING field in the ICSR is 9 bits wide, in bits [20:12] of
the register. We were incorrectly masking it to 8 bits, so it would
report the wrong value if the pending exception was greater than 256.
Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-6-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
/* VECTACTIVE */
val = cpu->env.v7m.exception;
/* VECTPENDING */
- val |= (s->vectpending & 0xff) << 12;
+ val |= (s->vectpending & 0x1ff) << 12;
/* ISRPENDING - set if any external IRQ is pending */
if (nvic_isrpending(s)) {
val |= (1 << 22);
--
2.20.1

In Arm v8.1M the VECTPENDING field in the ICSR has new behaviour: if
the register is accessed NonSecure and the highest priority pending
enabled exception (that would be returned in the VECTPENDING field)
targets Secure, then the VECTPENDING field must read 1 rather than
the exception number of the pending exception. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-7-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
nvic_irq_update(s);
}

+static bool vectpending_targets_secure(NVICState *s)
+{
+ /* Return true if s->vectpending targets Secure state */
+ if (s->vectpending_is_s_banked) {
+ return true;
+ }
+ return !exc_is_banked(s->vectpending) &&
+ exc_targets_secure(s, s->vectpending);
+}
+
void armv7m_nvic_get_pending_irq_info(void *opaque,
int *pirq, bool *ptargets_secure)
{
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,

assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);

- if (s->vectpending_is_s_banked) {
- targets_secure = true;
- } else {
- targets_secure = !exc_is_banked(pending) &&
- exc_targets_secure(s, pending);
- }
+ targets_secure = vectpending_targets_secure(s);

trace_nvic_get_pending_irq_info(pending, targets_secure);

@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
/* VECTACTIVE */
val = cpu->env.v7m.exception;
/* VECTPENDING */
- val |= (s->vectpending & 0x1ff) << 12;
+ if (s->vectpending) {
+ /*
+ * From v8.1M VECTPENDING must read as 1 if accessed as
+ * NonSecure and the highest priority pending and enabled
+ * exception targets Secure.
+ */
+ int vp = s->vectpending;
+ if (!attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_V8_1M) &&
+ vectpending_targets_secure(s)) {
+ vp = 1;
+ }
+ val |= (vp & 0x1ff) << 12;
+ }
/* ISRPENDING - set if any external IRQ is pending */
if (nvic_isrpending(s)) {
val |= (1 << 22);
--
2.20.1

From: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>

Missed in commit f3478392 "docs: Move deprecation, build
and license info out of system/"

Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 configure | 2 +-
 target/i386/cpu.c | 2 +-
 MAINTAINERS | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ fi

if test -n "${deprecated_features}"; then
echo "Warning, deprecated features enabled."
- echo "Please see docs/system/deprecated.rst"
+ echo "Please see docs/about/deprecated.rst"
echo " features: ${deprecated_features}"
fi

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -XXX,XX +XXX,XX @@ static const X86CPUDefinition builtin_x86_defs[] = {
* none", but this is just for compatibility while libvirt isn't
* adapted to resolve CPU model versions before creating VMs.
* See "Runnability guarantee of CPU models" at
- * docs/system/deprecated.rst.
+ * docs/about/deprecated.rst.
*/
X86CPUVersion default_cpu_version = 1;

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: contrib/gitdm/*

Incompatible changes
R: libvir-list@redhat.com
-F: docs/system/deprecated.rst
+F: docs/about/deprecated.rst

Build System
------------
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Currently, our only caller is sve_zcr_len_for_el, which has
already masked the length extracted from ZCR_ELx, so the
masking done here is a nop. But we will shortly have uses
from other locations, where the length will be unmasked.

Saturate the length to ARM_MAX_VQ instead of truncating to
the low 4 bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
{
uint32_t end_len;

- end_len = start_len &= 0xf;
+ start_len = MIN(start_len, ARM_MAX_VQ - 1);
+ end_len = start_len;
+
if (!test_bit(start_len, cpu->sve_vq_map)) {
end_len = find_last_bit(cpu->sve_vq_map, start_len);
assert(end_len < start_len);
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Rename from sve_zcr_get_valid_len and make accessible
from outside of helper.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 10 ++++++++++
 target/arm/helper.c | 4 ++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void);
void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
#endif /* CONFIG_TCG */

+/**
+ * aarch64_sve_zcr_get_valid_len:
+ * @cpu: cpu context
+ * @start_len: maximum len to consider
+ *
+ * Return the maximum supported sve vector length <= @start_len.
+ * Note that both @start_len and the return value are in units
+ * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
+ */
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);

enum arm_fprounding {
FPROUNDING_TIEEVEN,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
return 0;
}

-static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
{
uint32_t end_len;

@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
}

- return sve_zcr_get_valid_len(cpu, zcr_len);
+ return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
}

static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
under the real linux kernel. We have no way of passing along
a real default across exec like the kernel can, but this is a
decent way of adjusting the startup vector length of a process.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case,
 added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/cpu-features.rst | 15 ++++++++
 target/arm/cpu.h | 5 +++
 target/arm/cpu.c | 14 ++++++--
 target/arm/cpu64.c | 60 ++++++++++++++++++++++++++++++++
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ verbose command lines. However, the recommended way to select vector
lengths is to explicitly enable each desired length. Therefore only
example's (1), (4), and (6) exhibit recommended uses of the properties.

+SVE User-mode Default Vector Length Property
+--------------------------------------------
+
+For qemu-aarch64, the cpu property ``sve-default-vector-length=N`` is
+defined to mirror the Linux kernel parameter file
+``/proc/sys/abi/sve_default_vector_length``. The default length, ``N``,
+is in units of bytes and must be between 16 and 8192.
+If not specified, the default vector length is 64.
+
+If the default length is larger than the maximum vector length enabled,
+the actual vector length will be reduced. Note that the maximum vector
+length supported by QEMU is 256.
+
+If this property is set to ``-1`` then the default vector length
+is set to the maximum possible length.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
/* Used to set the maximum vector length the cpu will support. */
uint32_t sve_max_vq;

+#ifdef CONFIG_USER_ONLY
+ /* Used to set the default vector length at process start. */
+ uint32_t sve_default_vq;
+#endif
+
/*
* In sve_vq_map each set bit is a supported vector length of
* (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
/* with reasonable vector length */
if (cpu_isar_feature(aa64_sve, cpu)) {
- env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
+ env->vfp.zcr_el[1] =
+ aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
}
/*
* Enable TBI0 but not TBI1.
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
QLIST_INIT(&cpu->pre_el_change_hooks);
QLIST_INIT(&cpu->el_change_hooks);

-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_USER_ONLY
+# ifdef TARGET_AARCH64
+ /*
+ * The linux kernel defaults to 512-bit vectors, when sve is supported.
+ * See documentation for /proc/sys/abi/sve_default_vector_length, and
+ * our corresponding sve-default-vector-length cpu property.
+ */
+ cpu->sve_default_vq = 4;
+# endif
+#else
/* Our inbound IRQ and FIQ lines */
if (kvm_enabled()) {
/* VIRQ and VFIQ are unused with KVM but we add them to maintain
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
cpu->isar.id_aa64pfr0 = t;
}

+#ifdef CONFIG_USER_ONLY
+/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ int32_t default_len, default_vq, remainder;
+
+ if (!visit_type_int32(v, name, &default_len, errp)) {
+ return;
+ }
+
+ /* Undocumented, but the kernel allows -1 to indicate "maximum". */
+ if (default_len == -1) {
+ cpu->sve_default_vq = ARM_MAX_VQ;
+ return;
+ }
+
+ default_vq = default_len / 16;
+ remainder = default_len % 16;
+
+ /*
+ * Note that the 512 max comes from include/uapi/asm/sve_context.h
+ * and is the maximum architectural width of ZCR_ELx.LEN.
+ */
+ if (remainder || default_vq < 1 || default_vq > 512) {
+ error_setg(errp, "cannot set sve-default-vector-length");
+ if (remainder) {
+ error_append_hint(errp, "Vector length not a multiple of 16\n");
+ } else if (default_vq < 1) {
+ error_append_hint(errp, "Vector length smaller than 16\n");
+ } else {
+ error_append_hint(errp, "Vector length larger than %d\n",
+ 512 * 16);
+ }
+ return;
+ }
+
+ cpu->sve_default_vq = default_vq;
+}
+
+static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ int32_t value = cpu->sve_default_vq * 16;
+
+ visit_type_int32(v, name, &value, errp);
+}
+#endif
+
void aarch64_add_sve_properties(Object *obj)
{
uint32_t vq;
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
cpu_arm_set_sve_vq, NULL, NULL);
}
+
+#ifdef CONFIG_USER_ONLY
+ /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+ object_property_add(obj, "sve-default-vector-length", "int32",
+ cpu_arm_get_sve_default_vec_len,
+ cpu_arm_set_sve_default_vec_len, NULL, NULL);
+#endif
}

void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
--
2.20.1

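A usage sketch for the new property (the guest binary name here is just a
placeholder):

    # start a linux-user guest with a 32-byte (256-bit) startup vector length
    qemu-aarch64 -cpu max,sve-default-vector-length=32 ./sve-test

    # -1 picks the largest vector length the cpu configuration allows
    qemu-aarch64 -cpu max,sve-default-vector-length=-1 ./sve-test

As with the kernel setting it mirrors, this only affects the vector length
a process starts with; the guest can still change it later with
prctl(PR_SVE_SET_VL).
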
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210726150953.1218690-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/nseries.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@ static uint32_t mipid_txrx(void *opaque, uint32_t cmd, int len)
default:
bad_cmd:
qemu_log_mask(LOG_GUEST_ERROR,
- "%s: unknown command %02x\n", __func__, s->cmd);
+ "%s: unknown command 0x%02x\n", __func__, s->cmd);
break;
}

--
2.20.1

From: Joel Stanley <joel@jms.id.au>

The macro used to calculate the maximum memory size of the MMIO region
had a mistake, causing all GPIO models to create a mapping of 0x9D8.
The intent was to have it be 0x9D8 - 0x800.

This extra size doesn't matter on ast2400 and ast2500, which have a 4KB
region set aside for the GPIO controller.

On the ast2600 the 3.3V and 1.8V GPIO controllers are 2KB apart, so the
regions would overlap. Worse was the 1.8V controller would map over the
top of the following peripheral, which happens to be the RTC.

The mmio region used by each device is a maximum of 2KB, so avoid the
calculations and hard code this as the maximum.

Fixes: 36d737ee82b2 ("hw/gpio: Add in AST2600 specific implementation")
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20210713065854.134634-2-joel@jms.id.au
[PMM: fix autocorrect error in commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/gpio/aspeed_gpio.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/gpio/aspeed_gpio.c b/hw/gpio/aspeed_gpio.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/gpio/aspeed_gpio.c
+++ b/hw/gpio/aspeed_gpio.c
@@ -XXX,XX +XXX,XX @@
#define GPIO_1_8V_MEM_SIZE 0x9D8
#define GPIO_1_8V_REG_ARRAY_SIZE ((GPIO_1_8V_MEM_SIZE - \
GPIO_1_8V_REG_OFFSET) >> 2)
-#define GPIO_MAX_MEM_SIZE MAX(GPIO_3_6V_MEM_SIZE, GPIO_1_8V_MEM_SIZE)

static int aspeed_evaluate_irq(GPIOSets *regs, int gpio_prev_high, int gpio)
{
@@ -XXX,XX +XXX,XX @@ static void aspeed_gpio_realize(DeviceState *dev, Error **errp)
}

memory_region_init_io(&s->iomem, OBJECT(s), &aspeed_gpio_ops, s,
- TYPE_ASPEED_GPIO, GPIO_MAX_MEM_SIZE);
+ TYPE_ASPEED_GPIO, 0x800);

sysbus_init_mmio(sbd, &s->iomem);
}
--
2.20.1