arm pullreq for rc1. All minor bugfixes, except for the sve-default-vector-length
patches, which are somewhere between a bugfix and a new feature.

thanks
-- PMM

The following changes since commit c3e1d838cfa5aac1a6210c8ddf182d0ef7d95dd8:

  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210726' into staging (2021-07-27 08:35:01 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210727

for you to fetch changes up to e229a179a503f2aee43a76888cf12fbdfe8a3749:

  hw: aspeed_gpio: Fix memory size (2021-07-27 11:00:00 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Check 31st bit to see if CD is valid
 * qemu-options.hx: Fix formatting of -machine memory-backend option
 * hw: aspeed_gpio: Fix memory size
 * hw/arm/nseries: Display hexadecimal value with '0x' prefix
 * Add sve-default-vector-length cpu property
 * docs: Update path that mentions deprecated.rst
 * hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS
 * hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
 * hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
 * target/arm: Report M-profile alignment faults correctly to the guest
 * target/arm: Add missing 'return's after calling v7m_exception_taken()
 * target/arm: Enforce that M-profile SP low 2 bits are always zero

----------------------------------------------------------------
Joe Komlodi (1):
      hw/arm/smmuv3: Check 31st bit to see if CD is valid

Joel Stanley (1):
      hw: aspeed_gpio: Fix memory size

Mao Zhongyi (1):
      docs: Update path that mentions deprecated.rst

Peter Maydell (7):
      qemu-options.hx: Fix formatting of -machine memory-backend option
      target/arm: Enforce that M-profile SP low 2 bits are always zero
      target/arm: Add missing 'return's after calling v7m_exception_taken()
      target/arm: Report M-profile alignment faults correctly to the guest
      hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
      hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
      hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS

Philippe Mathieu-Daudé (1):
      hw/arm/nseries: Display hexadecimal value with '0x' prefix

Richard Henderson (3):
      target/arm: Correctly bound length in sve_zcr_get_valid_len
      target/arm: Export aarch64_sve_zcr_get_valid_len
      target/arm: Add sve-default-vector-length cpu property

 docs/system/arm/cpu-features.rst | 15 ++++++++++
 configure | 2 +-
 hw/arm/smmuv3-internal.h | 2 +-
 target/arm/cpu.h | 5 ++++
 target/arm/internals.h | 10 +++++++
 hw/arm/nseries.c | 2 +-
 hw/gpio/aspeed_gpio.c | 3 +-
 hw/intc/armv7m_nvic.c | 40 +++++++++++++++++++--------
 target/arm/cpu.c | 14 ++++++++--
 target/arm/cpu64.c | 60 ++++++++++++++++++++++++++++++++++++++++
 target/arm/gdbstub.c | 4 +++
 target/arm/helper.c | 8 ++++--
 target/arm/m_helper.c | 24 ++++++++++++----
 target/arm/translate.c | 3 ++
 target/i386/cpu.c | 2 +-
 MAINTAINERS | 2 +-
 qemu-options.hx | 30 +++++++++++---------
 17 files changed, 183 insertions(+), 43 deletions(-)

From: Joe Komlodi <joe.komlodi@xilinx.com>

The bit to see if a CD is valid is the last bit of the first word of the CD.

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Message-id: 1626728232-134665-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline int pa_range(STE *ste)

/* CD fields */

-#define CD_VALID(x) extract32((x)->word[0], 30, 1)
+#define CD_VALID(x) extract32((x)->word[0], 31, 1)
#define CD_ASID(x) extract32((x)->word[1], 16, 16)
#define CD_TTB(x, sel) \
({ \
--
2.20.1

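To make the off-by-one concrete: bit 31 of word[0] is the CD's V (valid) bit, and the old macro was looking one bit too low. A minimal standalone sketch (not QEMU code; the local helper only mirrors the semantics of QEMU's extract32(value, start, length)):

    #include <assert.h>
    #include <stdint.h>

    /* Same semantics as QEMU's extract32(value, start, length). */
    static uint32_t extract_bits(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    int main(void)
    {
        /* A CD whose V bit (bit 31 of word 0) is set and nothing else. */
        uint32_t word0 = 0x80000000u;

        assert(extract_bits(word0, 30, 1) == 0);  /* old macro: CD wrongly seen as invalid */
        assert(extract_bits(word0, 31, 1) == 1);  /* fixed macro: CD correctly seen as valid */
        return 0;
    }
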
The documentation of the -machine memory-backend option has some minor
formatting errors:
 * Misindentation of the initial line meant that the whole option
   section is incorrectly indented in the HTML output compared to
   the other -machine options
 * The examples weren't indented, which meant that they were formatted
   as plain run-on text including outputting the "::" as text.
 * The a) b) list has no rst-format markup so it is rendered as
   a single run-on paragraph

Fix the formatting.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20210719105257.3599-1-peter.maydell@linaro.org
---
 qemu-options.hx | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index XXXXXXX..XXXXXXX 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -XXX,XX +XXX,XX @@ SRST
Enables or disables ACPI Heterogeneous Memory Attribute Table
(HMAT) support. The default is off.

- ``memory-backend='id'``
+ ``memory-backend='id'``
An alternative to legacy ``-mem-path`` and ``mem-prealloc`` options.
Allows to use a memory backend as main RAM.

For example:
::
- -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
- -machine memory-backend=pc.ram
- -m 512M
+
+ -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
+ -machine memory-backend=pc.ram
+ -m 512M

Migration compatibility note:
- a) as backend id one shall use value of 'default-ram-id', advertised by
- machine type (available via ``query-machines`` QMP command), if migration
- to/from old QEMU (<5.0) is expected.
- b) for machine types 4.0 and older, user shall
- use ``x-use-canonical-path-for-ramblock-id=off`` backend option
- if migration to/from old QEMU (<5.0) is expected.
+
+ * as backend id one shall use value of 'default-ram-id', advertised by
+ machine type (available via ``query-machines`` QMP command), if migration
+ to/from old QEMU (<5.0) is expected.
+ * for machine types 4.0 and older, user shall
+ use ``x-use-canonical-path-for-ramblock-id=off`` backend option
+ if migration to/from old QEMU (<5.0) is expected.
+
For example:
::
- -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
- -machine memory-backend=pc.ram
- -m 512M
+
+ -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
+ -machine memory-backend=pc.ram
+ -m 512M
ERST

HXCOMM Deprecated by -machine
--
2.20.1

For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.

Implement this behaviour by masking out the low bits:
 * for writes to r13 by the gdbstub
 * for writes to any of the various flavours of SP via MSR
 * for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).

All the other writes to regs[13] in C code are either:
 * A-profile only code
 * writes of values we can guarantee to be aligned, such as
   - writes of previous-SP-value plus or minus a 4-aligned constant
   - writes of the value in an SP limit register (which we already
     enforce to be aligned)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
---
 target/arm/gdbstub.c | 4 ++++
 target/arm/m_helper.c | 14 ++++++++------
 target/arm/translate.c | 3 +++
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)

if (n < 16) {
/* Core integer register. */
+ if (n == 13 && arm_feature(env, ARM_FEATURE_M)) {
+ /* M profile SP low bits are always 0 */
+ tmp &= ~3;
+ }
env->regs[n] = tmp;
return 4;
}
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
if (!env->v7m.secure) {
return;
}
- env->v7m.other_ss_msp = val;
+ env->v7m.other_ss_msp = val & ~3;
return;
case 0x89: /* PSP_NS */
if (!env->v7m.secure) {
return;
}
- env->v7m.other_ss_psp = val;
+ env->v7m.other_ss_psp = val & ~3;
return;
case 0x8a: /* MSPLIM_NS */
if (!env->v7m.secure) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)

limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];

+ val &= ~0x3;
+
if (val < limit) {
raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
}
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
break;
case 8: /* MSP */
if (v7m_using_psp(env)) {
- env->v7m.other_sp = val;
+ env->v7m.other_sp = val & ~3;
} else {
- env->regs[13] = val;
+ env->regs[13] = val & ~3;
}
break;
case 9: /* PSP */
if (v7m_using_psp(env)) {
- env->regs[13] = val;
+ env->regs[13] = val & ~3;
} else {
- env->v7m.other_sp = val;
+ env->v7m.other_sp = val & ~3;
}
break;
case 10: /* MSPLIM */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
*/
tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
s->base.is_jmp = DISAS_JUMP;
+ } else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
+ /* For M-profile SP bits [1:0] are always zero */
+ tcg_gen_andi_i32(var, var, ~3);
+ }
tcg_gen_mov_i32(cpu_R[reg], var);
tcg_temp_free_i32(var);
--
2.20.1

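As a rough illustration of the masking being added (a standalone sketch of the RES0 rule described above, not the actual QEMU helpers):

    #include <assert.h>
    #include <stdint.h>

    /* M-profile SP bits [1:0] are hardwired to zero, so any write simply
     * drops the low two bits. */
    static uint32_t msp_write(uint32_t val)
    {
        return val & ~3u;
    }

    int main(void)
    {
        assert(msp_write(0x20001003u) == 0x20001000u);  /* misaligned value is silently aligned */
        assert(msp_write(0x20001000u) == 0x20001000u);  /* aligned values are unchanged */
        return 0;
    }
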
In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return. If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe. We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.

In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.

Add the missing return statements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
"stackframe: NSACR prevents clearing FPU registers\n");
v7m_exception_taken(cpu, excret, true, false);
+ return;
} else if (!cpacr_pass) {
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
exc_secure);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
"stackframe: CPACR prevents clearing FPU registers\n");
v7m_exception_taken(cpu, excret, true, false);
+ return;
}
}
/* Clear s0..s15, FPSCR and VPR */
--
2.20.1

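A standalone sketch of the control-flow bug (the function names here are made up for illustration, not the real QEMU ones):

    #include <stdbool.h>
    #include <stdio.h>

    static void take_usage_fault_on_existing_frame(void)
    {
        puts("...taking UsageFault on existing stackframe");
    }

    static void unstack_and_exception_return(void)
    {
        puts("...unstacking and completing the exception return");
    }

    static void exception_exit(bool check_failed)
    {
        if (check_failed) {
            take_usage_fault_on_existing_frame();
            return;   /* the statement this patch adds; without it we fall through */
        }
        unstack_and_exception_return();
    }

    int main(void)
    {
        exception_exit(true);   /* prints only the UsageFault message */
        exception_exit(false);  /* normal exception return path */
        return 0;
    }
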
For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest. These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1. We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.

Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
break;
case EXCP_UNALIGNED:
+ /* Unaligned faults reported by M-profile aware code */
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
break;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
}
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
break;
+ case 0x1: /* Alignment fault reported by generic code */
+ qemu_log_mask(CPU_LOG_INT,
+ "...really UsageFault with UFSR.UNALIGNED\n");
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ env->v7m.secure);
+ break;
default:
/*
* All other FSR values are either MPU faults or "can't happen
--
2.20.1

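A rough restatement of the dispatch the patch adds, with illustrative names rather than the real QEMU types (the 0xd example is just an arbitrary non-alignment FSR value):

    #include <assert.h>

    enum m_fault { FAULT_USAGE_UNALIGNED, FAULT_MEM_MANAGE };

    static enum m_fault classify_data_abort_fsr(unsigned fsr)
    {
        switch (fsr) {
        case 0x1:   /* alignment fault reported by the generic TCG code */
            return FAULT_USAGE_UNALIGNED;
        default:    /* everything else still treated as an MPU fault */
            return FAULT_MEM_MANAGE;
        }
    }

    int main(void)
    {
        assert(classify_data_abort_fsr(0x1) == FAULT_USAGE_UNALIGNED);
        assert(classify_data_abort_fsr(0xd) == FAULT_MEM_MANAGE);
        return 0;
    }
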
The ISCR.ISRPENDING bit is set when an external interrupt is pending.
This is true whether that external interrupt is enabled or not.
This means that we can't use 's->vectpending == 0' as a shortcut to
"ISRPENDING is zero", because s->vectpending indicates only the
highest priority pending enabled interrupt.

Remove the incorrect optimization so that if there is no pending
enabled interrupt we fall through to scanning through the whole
interrupt array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-5-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
{
int irq;

- /* We can shortcut if the highest priority pending interrupt
- * happens to be external or if there is nothing pending.
+ /*
+ * We can shortcut if the highest priority pending interrupt
+ * happens to be external; if not we need to check the whole
+ * vectors[] array.
*/
if (s->vectpending > NVIC_FIRST_IRQ) {
return true;
}
- if (s->vectpending == 0) {
- return false;
- }

for (irq = NVIC_FIRST_IRQ; irq < s->num_irq; irq++) {
if (s->vectors[irq].pending) {
--
2.20.1

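A standalone sketch of why the shortcut was wrong: a pending-but-disabled external interrupt never shows up in vectpending, yet ISRPENDING must still read as 1 (the array layout and names here are illustrative, not the real NVICState):

    #include <assert.h>
    #include <stdbool.h>

    static bool any_external_irq_pending(const bool *pending, int first_irq, int num_irq)
    {
        for (int irq = first_irq; irq < num_irq; irq++) {
            if (pending[irq]) {
                return true;   /* pending is enough; the interrupt need not be enabled */
            }
        }
        return false;
    }

    int main(void)
    {
        /* One disabled-but-pending external interrupt: vectpending would be 0. */
        bool pending[20] = { [18] = true };

        assert(any_external_irq_pending(pending, 16, 20));
        return 0;
    }
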
The VECTPENDING field in the ICSR is 9 bits wide, in bits [20:12] of
the register. We were incorrectly masking it to 8 bits, so it would
report the wrong value if the pending exception was greater than 256.
Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-6-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
/* VECTACTIVE */
val = cpu->env.v7m.exception;
/* VECTPENDING */
- val |= (s->vectpending & 0xff) << 12;
+ val |= (s->vectpending & 0x1ff) << 12;
/* ISRPENDING - set if any external IRQ is pending */
if (nvic_isrpending(s)) {
val |= (1 << 22);
--
2.20.1

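A worked example of the masking bug as a small standalone program (external IRQ 244 is exception number 260, i.e. one that does not fit in 8 bits):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t vectpending = 260;   /* pending exception number above 256 */

        uint32_t old_field = (vectpending & 0xff) << 12;    /* 8-bit mask: reads back as 4 */
        uint32_t new_field = (vectpending & 0x1ff) << 12;   /* 9-bit mask: reads back as 260 */

        assert((old_field >> 12) == 4);
        assert((new_field >> 12) == 260);
        return 0;
    }
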
In Arm v8.1M the VECTPENDING field in the ICSR has new behaviour: if
the register is accessed NonSecure and the highest priority pending
enabled exception (that would be returned in the VECTPENDING field)
targets Secure, then the VECTPENDING field must read 1 rather than
the exception number of the pending exception. Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-7-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
nvic_irq_update(s);
}

+static bool vectpending_targets_secure(NVICState *s)
+{
+ /* Return true if s->vectpending targets Secure state */
+ if (s->vectpending_is_s_banked) {
+ return true;
+ }
+ return !exc_is_banked(s->vectpending) &&
+ exc_targets_secure(s, s->vectpending);
+}
+
void armv7m_nvic_get_pending_irq_info(void *opaque,
int *pirq, bool *ptargets_secure)
{
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,

assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);

- if (s->vectpending_is_s_banked) {
- targets_secure = true;
- } else {
- targets_secure = !exc_is_banked(pending) &&
- exc_targets_secure(s, pending);
- }
+ targets_secure = vectpending_targets_secure(s);

trace_nvic_get_pending_irq_info(pending, targets_secure);

@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
/* VECTACTIVE */
val = cpu->env.v7m.exception;
/* VECTPENDING */
- val |= (s->vectpending & 0x1ff) << 12;
+ if (s->vectpending) {
+ /*
+ * From v8.1M VECTPENDING must read as 1 if accessed as
+ * NonSecure and the highest priority pending and enabled
+ * exception targets Secure.
+ */
+ int vp = s->vectpending;
+ if (!attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_V8_1M) &&
+ vectpending_targets_secure(s)) {
+ vp = 1;
+ }
+ val |= (vp & 0x1ff) << 12;
+ }
/* ISRPENDING - set if any external IRQ is pending */
if (nvic_isrpending(s)) {
val |= (1 << 22);
--
2.20.1

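A standalone restatement of the new read behaviour (an illustrative helper, not the QEMU code; exception number 24 is just an example of a Secure-targeting exception):

    #include <assert.h>
    #include <stdbool.h>

    static unsigned vectpending_field(unsigned vectpending, bool access_is_secure,
                                      bool pending_targets_secure, bool is_v8_1m)
    {
        if (vectpending && is_v8_1m && !access_is_secure && pending_targets_secure) {
            return 1;   /* don't reveal which Secure exception is pending */
        }
        return vectpending & 0x1ff;
    }

    int main(void)
    {
        /* NonSecure read on v8.1M while a Secure-targeting exception is pending: */
        assert(vectpending_field(24, false, true, true) == 1);
        /* The same read performed from Secure state sees the real number: */
        assert(vectpending_field(24, true, true, true) == 24);
        return 0;
    }
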
From: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>

Missed in commit f3478392 "docs: Move deprecation, build
and license info out of system/"

Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 configure | 2 +-
 target/i386/cpu.c | 2 +-
 MAINTAINERS | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ fi

if test -n "${deprecated_features}"; then
echo "Warning, deprecated features enabled."
- echo "Please see docs/system/deprecated.rst"
+ echo "Please see docs/about/deprecated.rst"
echo " features: ${deprecated_features}"
fi

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -XXX,XX +XXX,XX @@ static const X86CPUDefinition builtin_x86_defs[] = {
* none", but this is just for compatibility while libvirt isn't
* adapted to resolve CPU model versions before creating VMs.
* See "Runnability guarantee of CPU models" at
- * docs/system/deprecated.rst.
+ * docs/about/deprecated.rst.
*/
X86CPUVersion default_cpu_version = 1;

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: contrib/gitdm/*

Incompatible changes
R: libvir-list@redhat.com
-F: docs/system/deprecated.rst
+F: docs/about/deprecated.rst

Build System
------------
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Currently, our only caller is sve_zcr_len_for_el, which has
already masked the length extracted from ZCR_ELx, so the
masking done here is a nop. But we will shortly have uses
from other locations, where the length will be unmasked.

Saturate the length to ARM_MAX_VQ instead of truncating to
the low 4 bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
{
uint32_t end_len;

- end_len = start_len &= 0xf;
+ start_len = MIN(start_len, ARM_MAX_VQ - 1);
+ end_len = start_len;
+
if (!test_bit(start_len, cpu->sve_vq_map)) {
end_len = find_last_bit(cpu->sve_vq_map, start_len);
assert(end_len < start_len);
--
2.20.1

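A small standalone example of why saturating differs from truncating (taking ARM_MAX_VQ to be 16, QEMU's current maximum of 2048-bit vectors):

    #include <assert.h>
    #include <stdint.h>

    #define ARM_MAX_VQ 16
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        uint32_t start_len = 16;   /* an unmasked length beyond what QEMU supports */

        uint32_t truncated = start_len & 0xf;                 /* old behaviour: wraps to 0 */
        uint32_t saturated = MIN(start_len, ARM_MAX_VQ - 1);  /* new behaviour: clamps to 15 */

        assert(truncated == 0);
        assert(saturated == 15);
        return 0;
    }
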
From: Richard Henderson <richard.henderson@linaro.org>

Rename from sve_zcr_get_valid_len and make accessible
from outside of helper.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 10 ++++++++++
 target/arm/helper.c | 4 ++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void);
void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
#endif /* CONFIG_TCG */

+/**
+ * aarch64_sve_zcr_get_valid_len:
+ * @cpu: cpu context
+ * @start_len: maximum len to consider
+ *
+ * Return the maximum supported sve vector length <= @start_len.
+ * Note that both @start_len and the return value are in units
+ * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
+ */
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);

enum arm_fprounding {
FPROUNDING_TIEEVEN,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
return 0;
}

-static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
{
uint32_t end_len;

@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
}

- return sve_zcr_get_valid_len(cpu, zcr_len);
+ return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
}

static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
under the real linux kernel. We have no way of passing along
a real default across exec like the kernel can, but this is a
decent way of adjusting the startup vector length of a process.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case,
 added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/cpu-features.rst | 15 ++++++++
 target/arm/cpu.h | 5 +++
 target/arm/cpu.c | 14 ++++++--
 target/arm/cpu64.c | 60 ++++++++++++++++++++++++++++++++
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ verbose command lines. However, the recommended way to select vector
lengths is to explicitly enable each desired length. Therefore only
example's (1), (4), and (6) exhibit recommended uses of the properties.

+SVE User-mode Default Vector Length Property
+--------------------------------------------
+
+For qemu-aarch64, the cpu property ``sve-default-vector-length=N`` is
+defined to mirror the Linux kernel parameter file
+``/proc/sys/abi/sve_default_vector_length``. The default length, ``N``,
+is in units of bytes and must be between 16 and 8192.
+If not specified, the default vector length is 64.
+
+If the default length is larger than the maximum vector length enabled,
+the actual vector length will be reduced. Note that the maximum vector
+length supported by QEMU is 256.
+
+If this property is set to ``-1`` then the default vector length
+is set to the maximum possible length.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
/* Used to set the maximum vector length the cpu will support. */
uint32_t sve_max_vq;

+#ifdef CONFIG_USER_ONLY
+ /* Used to set the default vector length at process start. */
+ uint32_t sve_default_vq;
+#endif
+
/*
* In sve_vq_map each set bit is a supported vector length of
* (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
/* with reasonable vector length */
if (cpu_isar_feature(aa64_sve, cpu)) {
- env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
+ env->vfp.zcr_el[1] =
+ aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
}
/*
* Enable TBI0 but not TBI1.
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
QLIST_INIT(&cpu->pre_el_change_hooks);
QLIST_INIT(&cpu->el_change_hooks);

-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_USER_ONLY
+# ifdef TARGET_AARCH64
+ /*
+ * The linux kernel defaults to 512-bit vectors, when sve is supported.
+ * See documentation for /proc/sys/abi/sve_default_vector_length, and
+ * our corresponding sve-default-vector-length cpu property.
+ */
+ cpu->sve_default_vq = 4;
+# endif
+#else
/* Our inbound IRQ and FIQ lines */
if (kvm_enabled()) {
/* VIRQ and VFIQ are unused with KVM but we add them to maintain
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
cpu->isar.id_aa64pfr0 = t;
}

+#ifdef CONFIG_USER_ONLY
+/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ int32_t default_len, default_vq, remainder;
+
+ if (!visit_type_int32(v, name, &default_len, errp)) {
+ return;
+ }
+
+ /* Undocumented, but the kernel allows -1 to indicate "maximum". */
+ if (default_len == -1) {
+ cpu->sve_default_vq = ARM_MAX_VQ;
+ return;
+ }
+
+ default_vq = default_len / 16;
+ remainder = default_len % 16;
+
+ /*
+ * Note that the 512 max comes from include/uapi/asm/sve_context.h
+ * and is the maximum architectural width of ZCR_ELx.LEN.
+ */
+ if (remainder || default_vq < 1 || default_vq > 512) {
+ error_setg(errp, "cannot set sve-default-vector-length");
+ if (remainder) {
+ error_append_hint(errp, "Vector length not a multiple of 16\n");
+ } else if (default_vq < 1) {
+ error_append_hint(errp, "Vector length smaller than 16\n");
+ } else {
+ error_append_hint(errp, "Vector length larger than %d\n",
+ 512 * 16);
+ }
+ return;
+ }
+
+ cpu->sve_default_vq = default_vq;
+}
+
+static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
+ const char *name, void *opaque,
+ Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ int32_t value = cpu->sve_default_vq * 16;
+
+ visit_type_int32(v, name, &value, errp);
+}
+#endif
+
void aarch64_add_sve_properties(Object *obj)
{
uint32_t vq;
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
cpu_arm_set_sve_vq, NULL, NULL);
}
+
+#ifdef CONFIG_USER_ONLY
+ /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+ object_property_add(obj, "sve-default-vector-length", "int32",
+ cpu_arm_get_sve_default_vec_len,
+ cpu_arm_set_sve_default_vec_len, NULL, NULL);
+#endif
}

void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
--
2.20.1

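For user-mode emulation this would be used as something like "qemu-aarch64 -cpu max,sve-default-vector-length=64 ./prog" (the 'max' cpu model here is only an assumption; any SVE-capable model works the same way) to start the process with 512-bit vectors. A standalone sketch of the length-to-vq conversion the property performs (illustrative only; the real parsing is in cpu_arm_set_sve_default_vec_len() above):

    #include <assert.h>
    #include <stdint.h>

    #define ARM_MAX_VQ 16   /* assumed QEMU maximum, i.e. a 256-byte vector */

    static int32_t default_vq_from_len(int32_t default_len)
    {
        if (default_len == -1) {
            return ARM_MAX_VQ;   /* "maximum", as the kernel also allows */
        }
        if (default_len % 16 || default_len < 16 || default_len / 16 > 512) {
            return -1;           /* rejected with an error by QEMU */
        }
        return default_len / 16;
    }

    int main(void)
    {
        assert(default_vq_from_len(64) == 4);    /* 64 bytes = 512 bits, the kernel default */
        assert(default_vq_from_len(-1) == 16);   /* clamp to the build-time maximum */
        assert(default_vq_from_len(24) == -1);   /* not a multiple of 16: rejected */
        return 0;
    }
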
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210726150953.1218690-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/nseries.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@ static uint32_t mipid_txrx(void *opaque, uint32_t cmd, int len)
default:
bad_cmd:
qemu_log_mask(LOG_GUEST_ERROR,
- "%s: unknown command %02x\n", __func__, s->cmd);
+ "%s: unknown command 0x%02x\n", __func__, s->cmd);
break;
}

--
2.20.1

From: Joel Stanley <joel@jms.id.au>

The macro used to calculate the maximum memory size of the MMIO region
had a mistake, causing all GPIO models to create a mapping of 0x9D8.
The intent was to have it be 0x9D8 - 0x800.

This extra size doesn't matter on ast2400 and ast2500, which have a 4KB
region set aside for the GPIO controller.

On the ast2600 the 3.3V and 1.8V GPIO controllers are 2KB apart, so the
regions would overlap. Worse was the 1.8V controller would map over the
top of the following peripheral, which happens to be the RTC.

The mmio region used by each device is a maximum of 2KB, so avoid the
calculations and hard code this as the maximum.

Fixes: 36d737ee82b2 ("hw/gpio: Add in AST2600 specific implementation")
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20210713065854.134634-2-joel@jms.id.au
[PMM: fix autocorrect error in commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/gpio/aspeed_gpio.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/gpio/aspeed_gpio.c b/hw/gpio/aspeed_gpio.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/gpio/aspeed_gpio.c
+++ b/hw/gpio/aspeed_gpio.c
@@ -XXX,XX +XXX,XX @@
#define GPIO_1_8V_MEM_SIZE 0x9D8
#define GPIO_1_8V_REG_ARRAY_SIZE ((GPIO_1_8V_MEM_SIZE - \
GPIO_1_8V_REG_OFFSET) >> 2)
-#define GPIO_MAX_MEM_SIZE MAX(GPIO_3_6V_MEM_SIZE, GPIO_1_8V_MEM_SIZE)

static int aspeed_evaluate_irq(GPIOSets *regs, int gpio_prev_high, int gpio)
{
@@ -XXX,XX +XXX,XX @@ static void aspeed_gpio_realize(DeviceState *dev, Error **errp)
}

memory_region_init_io(&s->iomem, OBJECT(s), &aspeed_gpio_ops, s,
- TYPE_ASPEED_GPIO, GPIO_MAX_MEM_SIZE);
+ TYPE_ASPEED_GPIO, 0x800);

sysbus_init_mmio(sbd, &s->iomem);
}
--
2.20.1

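The arithmetic behind the overlap, as a tiny standalone check (values taken from the commit message above):

    #include <assert.h>

    int main(void)
    {
        unsigned bank_span   = 0x9D8;   /* end of the 1.8V register range */
        unsigned bank_offset = 0x800;   /* where the 1.8V registers start */

        unsigned old_mapping = bank_span;                /* what MAX(...) produced */
        unsigned intended    = bank_span - bank_offset;  /* 0x1D8, what was meant */

        assert(intended == 0x1D8);
        /* On the AST2600 the banks are only 0x800 apart, so an 0x9D8 mapping
         * spills onto the neighbouring bank (and, for the 1.8V bank, the RTC);
         * hence the hard-coded 0x800 maximum per device. */
        assert(old_mapping > 0x800);
        return 0;
    }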