arm pullreq for rc1. All minor bugfixes, except for the sve-default-vector-length
patches, which are somewhere between a bugfix and a new feature.

thanks
-- PMM

The following changes since commit c08ccd1b53f488ac86c1f65cf7623dc91acc249a:

  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210726' into staging (2021-07-27 08:35:01 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210727

for you to fetch changes up to e229a179a503f2aee43a76888cf12fbdfe8a3749:

  hw: aspeed_gpio: Fix memory size (2021-07-27 11:00:00 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Check 31st bit to see if CD is valid
 * qemu-options.hx: Fix formatting of -machine memory-backend option
 * hw: aspeed_gpio: Fix memory size
 * hw/arm/nseries: Display hexadecimal value with '0x' prefix
 * Add sve-default-vector-length cpu property
 * docs: Update path that mentions deprecated.rst
 * hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS
 * hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
 * hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
 * target/arm: Report M-profile alignment faults correctly to the guest
 * target/arm: Add missing 'return's after calling v7m_exception_taken()
 * target/arm: Enforce that M-profile SP low 2 bits are always zero

----------------------------------------------------------------
Joe Komlodi (1):
      hw/arm/smmuv3: Check 31st bit to see if CD is valid

Joel Stanley (1):
      hw: aspeed_gpio: Fix memory size

Mao Zhongyi (1):
      docs: Update path that mentions deprecated.rst

Peter Maydell (7):
      qemu-options.hx: Fix formatting of -machine memory-backend option
      target/arm: Enforce that M-profile SP low 2 bits are always zero
      target/arm: Add missing 'return's after calling v7m_exception_taken()
      target/arm: Report M-profile alignment faults correctly to the guest
      hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
      hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
      hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS

Philippe Mathieu-Daudé (1):
      hw/arm/nseries: Display hexadecimal value with '0x' prefix

Richard Henderson (3):
      target/arm: Correctly bound length in sve_zcr_get_valid_len
      target/arm: Export aarch64_sve_zcr_get_valid_len
      target/arm: Add sve-default-vector-length cpu property

 docs/system/arm/cpu-features.rst | 15 ++++++++++
 configure                        |  2 +-
 hw/arm/smmuv3-internal.h         |  2 +-
 target/arm/cpu.h                 |  5 ++++
 target/arm/internals.h           | 10 +++++++
 hw/arm/nseries.c                 |  2 +-
 hw/gpio/aspeed_gpio.c            |  3 +-
 hw/intc/armv7m_nvic.c            | 40 +++++++++++++++++++--------
 target/arm/cpu.c                 | 14 ++++++++--
 target/arm/cpu64.c               | 60 ++++++++++++++++++++++++++++++++++++++++
 target/arm/gdbstub.c             |  4 +++
 target/arm/helper.c              |  8 ++++--
 target/arm/m_helper.c            | 24 ++++++++++++----
 target/arm/translate.c           |  3 ++
 target/i386/cpu.c                |  2 +-
 MAINTAINERS                      |  2 +-
 qemu-options.hx                  | 30 +++++++++++---------
 17 files changed, 183 insertions(+), 43 deletions(-)

Some small arm bug fixes for rc3.

-- PMM

The following changes since commit 9b617b1bb4056e60b39be4c33be20c10928a6a5c:

  Merge tag 'trivial-branch-for-7.0-pull-request' of https://gitlab.com/laurent_vivier/qemu into staging (2022-04-01 10:23:27 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220401

for you to fetch changes up to a5b1e1ab662aa6dc42d5a913080fccbb8bf82e9b:

  target/arm: Don't use DISAS_NORETURN in STXP !HAVE_CMPXCHG128 codegen (2022-04-01 15:35:49 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Fix some bugs in secure EL2 handling
 * target/arm: Fix assert when !HAVE_CMPXCHG128
 * MAINTAINERS: change Fred Konrad's email address

----------------------------------------------------------------
Frederic Konrad (1):
      MAINTAINERS: change Fred Konrad's email address

Idan Horowitz (4):
      target/arm: Fix MTE access checks for disabled SEL2
      target/arm: Check VSTCR.SW when assigning the stage 2 output PA space
      target/arm: Take VSTCR.SW, VTCR.NSW into account in final stage 2 walk
      target/arm: Determine final stage 2 output PA space based on original IPA

Peter Maydell (1):
      target/arm: Don't use DISAS_NORETURN in STXP !HAVE_CMPXCHG128 codegen

 target/arm/internals.h     |  2 +-
 target/arm/helper.c        | 18 +++++++++++++++---
 target/arm/translate-a64.c |  7 ++++++-
 .mailmap                   |  3 ++-
 MAINTAINERS                |  2 +-
 5 files changed, 25 insertions(+), 7 deletions(-)
From: Richard Henderson <richard.henderson@linaro.org>

Rename from sve_zcr_get_valid_len and make accessible
from outside of helper.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 10 ++++++++++
 target/arm/helper.c    |  4 ++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void);
 void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
 #endif /* CONFIG_TCG */
 
+/**
+ * aarch64_sve_zcr_get_valid_len:
+ * @cpu: cpu context
+ * @start_len: maximum len to consider
+ *
+ * Return the maximum supported sve vector length <= @start_len.
+ * Note that both @start_len and the return value are in units
+ * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
+ */
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);
 
 enum arm_fprounding {
     FPROUNDING_TIEEVEN,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
     return 0;
 }
 
-static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
 {
     uint32_t end_len;
 
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
         zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
     }
 
-    return sve_zcr_get_valid_len(cpu, zcr_len);
+    return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
 }
 
 static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
-- 
2.20.1

From: Idan Horowitz <idan.horowitz@gmail.com>

While not mentioned anywhere in the actual specification text, the
HCR_EL2.ATA bit is treated as '1' when EL2 is disabled at the current
security state. This can be observed in the pseudo-code implementation
of AArch64.AllocationTagAccessIsEnabled().

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220328173107.311267-1-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 2 +-
 target/arm/helper.c    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool allocation_tag_access_enabled(CPUARMState *env, int el,
         && !(env->cp15.scr_el3 & SCR_ATA)) {
         return false;
     }
-    if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
+    if (el < 2 && arm_is_el2_enabled(env)) {
         uint64_t hcr = arm_hcr_el2_eff(env);
         if (!(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE))) {
             return false;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_mte(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     int el = arm_current_el(env);
 
-    if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
+    if (el < 2 && arm_is_el2_enabled(env)) {
         uint64_t hcr = arm_hcr_el2_eff(env);
         if (!(hcr & HCR_ATA) && (!(hcr & HCR_E2H) || !(hcr & HCR_TGE))) {
             return CP_ACCESS_TRAP_EL2;
-- 
2.25.1
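(Note on the change above: arm_feature() only asks whether the CPU
implements EL2 at all, while arm_is_el2_enabled() also requires EL2 to
be enabled for the current security state. A rough sketch of the
enabled-check, paraphrased from target/arm/cpu.h rather than quoted
verbatim, with a hypothetical function name:

    static bool arm_is_el2_enabled_sketch(CPUARMState *env)
    {
        if (!arm_feature(env, ARM_FEATURE_EL2)) {
            return false;                 /* CPU has no EL2 at all */
        }
        if (arm_is_secure_below_el3(env)) {
            /* Secure EL2 must be enabled explicitly via SCR_EL3.EEL2 */
            return (env->cp15.scr_el3 & SCR_EEL2) != 0;
        }
        return true;
    }

With Secure EL2 disabled, the old arm_feature() test still consulted
HCR_EL2.ATA and could wrongly deny allocation tag access.)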
From: Joe Komlodi <joe.komlodi@xilinx.com>

The bit to see if a CD is valid is the last bit of the first word of the CD.

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Message-id: 1626728232-134665-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline int pa_range(STE *ste)
 
 /* CD fields */
 
-#define CD_VALID(x)   extract32((x)->word[0], 30, 1)
+#define CD_VALID(x)   extract32((x)->word[0], 31, 1)
 #define CD_ASID(x)    extract32((x)->word[1], 16, 16)
 #define CD_TTB(x, sel)                                      \
     ({                                                      \
-- 
2.20.1

From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.SS2OutputPASpace() pseudo-code in the ARMv8 ARM when the
PA space of the IPA is non secure, the output PA space is secure if and only
if all of the bits VTCR.<NSW, NSA>, VSTCR.<SW, SA> are not set.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-2-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             } else {
                 attrs->secure =
                     !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
-                      || (env->cp15.vstcr_el2.raw_tcr & VSTCR_SA));
+                      || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
             }
         }
         return 0;
-- 
2.25.1
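(Note on the CD_VALID change in the smmuv3 patch above: extract32(),
from include/qemu/bitops.h, computes (value >> start) & ((1 << length) - 1),
so the fix simply moves the extracted field up by one bit. A minimal
illustration with a hypothetical descriptor word:

    uint32_t word0 = 0x80000000;   /* hypothetical CD word[0]: only V set */
    extract32(word0, 30, 1);       /* old: returns 0, misses the valid bit */
    extract32(word0, 31, 1);       /* new: returns 1, reads CD.V */

The old code was reading a bit one position below the V flag.)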
The documentation of the -machine memory-backend has some minor
formatting errors:
 * Misindentation of the initial line meant that the whole option
   section is incorrectly indented in the HTML output compared to
   the other -machine options
 * The examples weren't indented, which meant that they were formatted
   as plain run-on text including outputting the "::" as text.
 * The a) b) list has no rst-format markup so it is rendered as
   a single run-on paragraph

Fix the formatting.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20210719105257.3599-1-peter.maydell@linaro.org
---
 qemu-options.hx | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index XXXXXXX..XXXXXXX 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -XXX,XX +XXX,XX @@ SRST
         Enables or disables ACPI Heterogeneous Memory Attribute Table
         (HMAT) support. The default is off.
 
-     ``memory-backend='id'``
+    ``memory-backend='id'``
        An alternative to legacy ``-mem-path`` and ``mem-prealloc`` options.
        Allows to use a memory backend as main RAM.
 
        For example:
        ::
-        -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
-        -machine memory-backend=pc.ram
-        -m 512M
+
+            -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
+            -machine memory-backend=pc.ram
+            -m 512M
 
        Migration compatibility note:
-        a) as backend id one shall use value of 'default-ram-id', advertised by
-        machine type (available via ``query-machines`` QMP command), if migration
-        to/from old QEMU (<5.0) is expected.
-        b) for machine types 4.0 and older, user shall
-        use ``x-use-canonical-path-for-ramblock-id=off`` backend option
-        if migration to/from old QEMU (<5.0) is expected.
+
+       * as backend id one shall use value of 'default-ram-id', advertised by
+         machine type (available via ``query-machines`` QMP command), if migration
+         to/from old QEMU (<5.0) is expected.
+       * for machine types 4.0 and older, user shall
+         use ``x-use-canonical-path-for-ramblock-id=off`` backend option
+         if migration to/from old QEMU (<5.0) is expected.
+
        For example:
        ::
-        -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
-        -machine memory-backend=pc.ram
-        -m 512M
+
+            -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
+            -machine memory-backend=pc.ram
+            -m 512M
 ERST
 
 HXCOMM Deprecated by -machine
-- 
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210726150953.1218690-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/nseries.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@ static uint32_t mipid_txrx(void *opaque, uint32_t cmd, int len)
     default:
     bad_cmd:
         qemu_log_mask(LOG_GUEST_ERROR,
-                      "%s: unknown command %02x\n", __func__, s->cmd);
+                      "%s: unknown command 0x%02x\n", __func__, s->cmd);
         break;
     }
 
-- 
2.20.1

From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.SS2InitialTTWState() pseudo-code in the ARMv8 ARM the
initial PA space used for stage 2 table walks is assigned based on the SW
and NSW bits of the VSTCR and VTCR registers.
This was already implemented for the recursive stage 2 page table walks
in S1_ptw_translate(), but was missing for the final stage 2 walk.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-3-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             return ret;
         }
 
+        if (arm_is_secure_below_el3(env)) {
+            if (attrs->secure) {
+                attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
+            } else {
+                attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
+            }
+        } else {
+            assert(!attrs->secure);
+        }
+
         s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
         is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
 
-- 
2.25.1
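(The rule the new hunk implements can be condensed into a short sketch;
the helper name here is hypothetical, while the VSTCR_SW/VTCR_NSW masks
are the same ones used in the hunk:

    static bool stage2_walk_starts_secure(bool ipa_secure, CPUARMState *env)
    {
        if (ipa_secure) {
            /* Secure IPA: VSTCR.SW == 1 moves the walk to the NS PA space */
            return !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
        } else {
            /* Non-secure IPA: VTCR.NSW == 1 does likewise */
            return !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
        }
    }

This mirrors AArch64.SS2InitialTTWState() as described in the commit
message.)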
From: Richard Henderson <richard.henderson@linaro.org>

Currently, our only caller is sve_zcr_len_for_el, which has
already masked the length extracted from ZCR_ELx, so the
masking done here is a nop.  But we will shortly have uses
from other locations, where the length will be unmasked.

Saturate the length to ARM_MAX_VQ instead of truncating to
the low 4 bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
 {
     uint32_t end_len;
 
-    end_len = start_len &= 0xf;
+    start_len = MIN(start_len, ARM_MAX_VQ - 1);
+    end_len = start_len;
+
     if (!test_bit(start_len, cpu->sve_vq_map)) {
         end_len = find_last_bit(cpu->sve_vq_map, start_len);
         assert(end_len < start_len);
-- 
2.20.1

From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.S2Walk() pseudo-code in the ARMv8 ARM, the final
decision as to the output address's PA space based on the SA/SW/NSA/NSW
bits needs to take the input IPA's PA space into account, and not the
PA space of the result of the stage 2 walk itself.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-4-idan.horowitz@gmail.com
[PMM: fixed commit message typo]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s2_prot;
         int ret;
+        bool ipa_secure;
         ARMCacheAttrs cacheattrs2 = {};
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             return ret;
         }
 
+        ipa_secure = attrs->secure;
         if (arm_is_secure_below_el3(env)) {
-            if (attrs->secure) {
+            if (ipa_secure) {
                 attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
             } else {
                 attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
             }
         } else {
-            assert(!attrs->secure);
+            assert(!ipa_secure);
         }
 
         s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
         /* Check if IPA translates to secure or non-secure PA space. */
         if (arm_is_secure_below_el3(env)) {
-            if (attrs->secure) {
+            if (ipa_secure) {
                 attrs->secure =
                     !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
             } else {
-- 
2.25.1
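(A concrete illustration of the truncate-vs-saturate difference in the
sve_zcr_get_valid_len patch above, assuming QEMU's ARM_MAX_VQ of 16, so
valid ZCR_ELx.LEN values are 0..15; the input value is hypothetical:

    uint32_t start_len = 0x10;                 /* unmasked input */
    uint32_t old_len = start_len & 0xf;        /* truncates: wraps to 0x0 */
    uint32_t new_len = MIN(start_len, 16 - 1); /* saturates: clamps to 0xf */

Truncation would silently pick the smallest vector length where the
largest supported one was meant.)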
From: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>

Missed in commit f3478392 "docs: Move deprecation, build
and license info out of system/"

Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 configure         | 2 +-
 target/i386/cpu.c | 2 +-
 MAINTAINERS       | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ fi
 
 if test -n "${deprecated_features}"; then
     echo "Warning, deprecated features enabled."
-    echo "Please see docs/system/deprecated.rst"
+    echo "Please see docs/about/deprecated.rst"
     echo "  features: ${deprecated_features}"
 fi
 
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -XXX,XX +XXX,XX @@ static const X86CPUDefinition builtin_x86_defs[] = {
  * none", but this is just for compatibility while libvirt isn't
  * adapted to resolve CPU model versions before creating VMs.
  * See "Runnability guarantee of CPU models" at
- * docs/system/deprecated.rst.
+ * docs/about/deprecated.rst.
  */
 X86CPUVersion default_cpu_version = 1;
 
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: contrib/gitdm/*
 
 Incompatible changes
 R: libvir-list@redhat.com
-F: docs/system/deprecated.rst
+F: docs/about/deprecated.rst
 
 Build System
 ------------
-- 
2.20.1

From: Frederic Konrad <konrad@adacore.com>

frederic.konrad@adacore.com and konrad@adacore.com will stop working starting
2022-04-01.

Use my personal email instead.

Signed-off-by: Frederic Konrad <frederic.konrad@adacore.com>
Reviewed-by: Fabien Chouteau <chouteau@adacore.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1648643217-15811-1-git-send-email-frederic.konrad@adacore.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 .mailmap    | 3 ++-
 MAINTAINERS | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Alexander Graf <agraf@csgraf.de> <agraf@suse.de>
 Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
 Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
 Filip Bozuta <filip.bozuta@syrmia.com> <filip.bozuta@rt-rk.com.com>
-Frederic Konrad <konrad@adacore.com> <fred.konrad@greensocs.com>
+Frederic Konrad <konrad.frederic@yahoo.fr> <fred.konrad@greensocs.com>
+Frederic Konrad <konrad.frederic@yahoo.fr> <konrad@adacore.com>
 Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/rtc/sun4v-rtc.h
 
 Leon3
 M: Fabien Chouteau <chouteau@adacore.com>
-M: KONRAD Frederic <frederic.konrad@adacore.com>
+M: Frederic Konrad <konrad.frederic@yahoo.fr>
 S: Maintained
 F: hw/sparc/leon3.c
 F: hw/*/grlib*
-- 
2.25.1
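(For reference, the .mailmap lines above use git's documented mapping
form, canonical identity first and the old address second:

    Proper Name <canonical@example.org> <old@example.org>

so both retired adacore.com addresses now resolve to the same canonical
identity.)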
For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.

Implement this behaviour by masking out the low bits:
 * for writes to r13 by the gdbstub
 * for writes to any of the various flavours of SP via MSR
 * for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).

All the other writes to regs[13] in C code are either:
 * A-profile only code
 * writes of values we can guarantee to be aligned, such as
   - writes of previous-SP-value plus or minus a 4-aligned constant
   - writes of the value in an SP limit register (which we already
     enforce to be aligned)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
---
 target/arm/gdbstub.c   |  4 ++++
 target/arm/m_helper.c  | 14 ++++++------
 target/arm/translate.c |  3 +++
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)
 
     if (n < 16) {
         /* Core integer register.  */
+        if (n == 13 && arm_feature(env, ARM_FEATURE_M)) {
+            /* M profile SP low bits are always 0 */
+            tmp &= ~3;
+        }
         env->regs[n] = tmp;
         return 4;
     }
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             if (!env->v7m.secure) {
                 return;
             }
-            env->v7m.other_ss_msp = val;
+            env->v7m.other_ss_msp = val & ~3;
             return;
         case 0x89: /* PSP_NS */
             if (!env->v7m.secure) {
                 return;
             }
-            env->v7m.other_ss_psp = val;
+            env->v7m.other_ss_psp = val & ~3;
             return;
         case 0x8a: /* MSPLIM_NS */
             if (!env->v7m.secure) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
 
             limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
 
+            val &= ~0x3;
+
             if (val < limit) {
                 raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
             }
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         break;
     case 8: /* MSP */
         if (v7m_using_psp(env)) {
-            env->v7m.other_sp = val;
+            env->v7m.other_sp = val & ~3;
         } else {
-            env->regs[13] = val;
+            env->regs[13] = val & ~3;
         }
         break;
     case 9: /* PSP */
         if (v7m_using_psp(env)) {
-            env->regs[13] = val;
+            env->regs[13] = val & ~3;
         } else {
-            env->v7m.other_sp = val;
+            env->v7m.other_sp = val & ~3;
         }
         break;
     case 10: /* MSPLIM */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
          */
         tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
         s->base.is_jmp = DISAS_JUMP;
+    } else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
+        /* For M-profile SP bits [1:0] are always zero */
+        tcg_gen_andi_i32(var, var, ~3);
     }
     tcg_gen_mov_i32(cpu_R[reg], var);
     tcg_temp_free_i32(var);
-- 
2.20.1

In gen_store_exclusive(), if the host does not have a cmpxchg128
primitive then we generate bad code for STXP for storing two 64-bit
values.  We generate a call to the exit_atomic helper, which never
returns, and set is_jmp to DISAS_NORETURN.  However, this is
forgetting that we have already emitted a brcond that jumps over this
call for the case where we don't hold the exclusive.  The effect is
that we don't generate any code to end the TB for the
exclusive-not-held execution path, which falls into the "exit with
TB_EXIT_REQUESTED" code that gen_tb_end() emits.  This then causes an
assert at runtime when cpu_loop_exec_tb() sees an EXIT_REQUESTED TB
return that wasn't for an interrupt or icount.

In particular, you can hit this case when using the clang sanitizers
and trying to run the xlnx-versal-virt acceptance test in 'make
check-acceptance'.  This bug was masked until commit 848126d11e93ff
("meson: move int128 checks from configure") because we used to set
CONFIG_CMPXCHG128=1 and avoid the buggy codepath, but after that we
do not.

Fix the bug by not setting is_jmp.  The code after the exit_atomic
call up to the fail_label is dead, but TCG is smart enough to
eliminate it.  We do need to set 'tmp' to some valid value, though
(in the same way the exit_atomic-using code in tcg/tcg-op.c does).

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/953
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220331150858.96348-1-peter.maydell@linaro.org
---
 target/arm/translate-a64.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
         } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
             if (!HAVE_CMPXCHG128) {
                 gen_helper_exit_atomic(cpu_env);
-                s->base.is_jmp = DISAS_NORETURN;
+                /*
+                 * Produce a result so we have a well-formed opcode
+                 * stream when the following (dead) code uses 'tmp'.
+                 * TCG will remove the dead ops for us.
+                 */
+                tcg_gen_movi_i64(tmp, 0);
             } else if (s->be_data == MO_LE) {
                 gen_helper_paired_cmpxchg64_le_parallel(tmp, cpu_env,
                                                         cpu_exclusive_addr,
-- 
2.25.1
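(The guest-visible effect of the M-profile SP patch above is the same
mask on all three write paths, gdbstub, MSR and store_reg; a minimal
example with a hypothetical value:

    uint32_t val = 0x20001006;   /* guest writes a misaligned SP */
    uint32_t sp  = val & ~3;     /* stored as 0x20001004 */

Bits [1:0] are simply discarded, as RES0H requires.)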
In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return.  If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe.  We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.

In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.

Add the missing return statements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                 qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                               "stackframe: NSACR prevents clearing FPU registers\n");
                 v7m_exception_taken(cpu, excret, true, false);
+                return;
             } else if (!cpacr_pass) {
                 armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                         exc_secure);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                 qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                               "stackframe: CPACR prevents clearing FPU registers\n");
                 v7m_exception_taken(cpu, excret, true, false);
+                return;
             }
         }
         /* Clear s0..s15, FPSCR and VPR */
-- 
2.20.1
For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest.  These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1.  We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.

Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
         break;
     case EXCP_UNALIGNED:
+        /* Unaligned faults reported by M-profile aware code */
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
         break;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         }
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
         break;
+    case 0x1: /* Alignment fault reported by generic code */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...really UsageFault with UFSR.UNALIGNED\n");
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                env->v7m.secure);
+        break;
     default:
         /*
          * All other FSR values are either MPU faults or "can't happen
-- 
2.20.1
The ISCR.ISRPENDING bit is set when an external interrupt is pending.
This is true whether that external interrupt is enabled or not.
This means that we can't use 's->vectpending == 0' as a shortcut to
"ISRPENDING is zero", because s->vectpending indicates only the
highest priority pending enabled interrupt.

Remove the incorrect optimization so that if there is no pending
enabled interrupt we fall through to scanning through the whole
interrupt array.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-5-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
 {
     int irq;
 
-    /* We can shortcut if the highest priority pending interrupt
-     * happens to be external or if there is nothing pending.
+    /*
+     * We can shortcut if the highest priority pending interrupt
+     * happens to be external; if not we need to check the whole
+     * vectors[] array.
      */
     if (s->vectpending > NVIC_FIRST_IRQ) {
         return true;
     }
-    if (s->vectpending == 0) {
-        return false;
-    }
 
     for (irq = NVIC_FIRST_IRQ; irq < s->num_irq; irq++) {
         if (s->vectors[irq].pending) {
-- 
2.20.1
The VECTPENDING field in the ICSR is 9 bits wide, in bits [20:12] of
the register.  We were incorrectly masking it to 8 bits, so it would
report the wrong value if the pending exception was greater than 256.
Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-6-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         /* VECTACTIVE */
         val = cpu->env.v7m.exception;
         /* VECTPENDING */
-        val |= (s->vectpending & 0xff) << 12;
+        val |= (s->vectpending & 0x1ff) << 12;
         /* ISRPENDING - set if any external IRQ is pending */
         if (nvic_isrpending(s)) {
             val |= (1 << 22);
-- 
2.20.1
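(A worked example with a hypothetical pending-exception number: with
exception 260 (0x104) pending, the old 8-bit mask reported
0x104 & 0xff = 0x04 in VECTPENDING, while the 9-bit mask reports
0x104 & 0x1ff = 0x104, as the architecture requires.)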
In Arm v8.1M the VECTPENDING field in the ICSR has new behaviour: if
the register is accessed NonSecure and the highest priority pending
enabled exception (that would be returned in the VECTPENDING field)
targets Secure, then the VECTPENDING field must read 1 rather than
the exception number of the pending exception.  Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-7-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
     nvic_irq_update(s);
 }
 
+static bool vectpending_targets_secure(NVICState *s)
+{
+    /* Return true if s->vectpending targets Secure state */
+    if (s->vectpending_is_s_banked) {
+        return true;
+    }
+    return !exc_is_banked(s->vectpending) &&
+        exc_targets_secure(s, s->vectpending);
+}
+
 void armv7m_nvic_get_pending_irq_info(void *opaque,
                                       int *pirq, bool *ptargets_secure)
 {
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
 
     assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);
 
-    if (s->vectpending_is_s_banked) {
-        targets_secure = true;
-    } else {
-        targets_secure = !exc_is_banked(pending) &&
-            exc_targets_secure(s, pending);
-    }
+    targets_secure = vectpending_targets_secure(s);
 
     trace_nvic_get_pending_irq_info(pending, targets_secure);
 
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         /* VECTACTIVE */
         val = cpu->env.v7m.exception;
         /* VECTPENDING */
-        val |= (s->vectpending & 0x1ff) << 12;
+        if (s->vectpending) {
+            /*
+             * From v8.1M VECTPENDING must read as 1 if accessed as
+             * NonSecure and the highest priority pending and enabled
+             * exception targets Secure.
+             */
+            int vp = s->vectpending;
+            if (!attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_V8_1M) &&
+                vectpending_targets_secure(s)) {
+                vp = 1;
+            }
+            val |= (vp & 0x1ff) << 12;
+        }
         /* ISRPENDING - set if any external IRQ is pending */
         if (nvic_isrpending(s)) {
             val |= (1 << 22);
-- 
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
under the real linux kernel.  We have no way of passing along
a real default across exec like the kernel can, but this is a
decent way of adjusting the startup vector length of a process.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
[PMM: tweaked docs formatting, document -1 special-case,
 added fixup patch from RTH mentioning QEMU's maximum veclen.]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/cpu-features.rst | 15 ++++++++++
 target/arm/cpu.h                 |  5 ++++
 target/arm/cpu.c                 | 14 ++++++--
 target/arm/cpu64.c               | 60 ++++++++++++++++++++++++++++++++
 4 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ verbose command lines.  However, the recommended way to select vector
 lengths is to explicitly enable each desired length.  Therefore only
 example's (1), (4), and (6) exhibit recommended uses of the properties.
 
+SVE User-mode Default Vector Length Property
+--------------------------------------------
+
+For qemu-aarch64, the cpu property ``sve-default-vector-length=N`` is
+defined to mirror the Linux kernel parameter file
+``/proc/sys/abi/sve_default_vector_length``.  The default length, ``N``,
+is in units of bytes and must be between 16 and 8192.
+If not specified, the default vector length is 64.
+
+If the default length is larger than the maximum vector length enabled,
+the actual vector length will be reduced.  Note that the maximum vector
+length supported by QEMU is 256.
+
+If this property is set to ``-1`` then the default vector length
+is set to the maximum possible length.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     /* Used to set the maximum vector length the cpu will support. */
     uint32_t sve_max_vq;
 
+#ifdef CONFIG_USER_ONLY
+    /* Used to set the default vector length at process start. */
+    uint32_t sve_default_vq;
+#endif
+
     /*
      * In sve_vq_map each set bit is a supported vector length of
      * (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
         /* with reasonable vector length */
         if (cpu_isar_feature(aa64_sve, cpu)) {
-            env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
+            env->vfp.zcr_el[1] =
+                aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
         }
         /*
          * Enable TBI0 but not TBI1.
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
     QLIST_INIT(&cpu->pre_el_change_hooks);
     QLIST_INIT(&cpu->el_change_hooks);
 
-#ifndef CONFIG_USER_ONLY
+#ifdef CONFIG_USER_ONLY
+# ifdef TARGET_AARCH64
+    /*
+     * The linux kernel defaults to 512-bit vectors, when sve is supported.
+     * See documentation for /proc/sys/abi/sve_default_vector_length, and
+     * our corresponding sve-default-vector-length cpu property.
+     */
+    cpu->sve_default_vq = 4;
+# endif
+#else
     /* Our inbound IRQ and FIQ lines */
     if (kvm_enabled()) {
         /* VIRQ and VFIQ are unused with KVM but we add them to maintain
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
     cpu->isar.id_aa64pfr0 = t;
 }
 
+#ifdef CONFIG_USER_ONLY
+/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
+                                            const char *name, void *opaque,
+                                            Error **errp)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    int32_t default_len, default_vq, remainder;
+
+    if (!visit_type_int32(v, name, &default_len, errp)) {
+        return;
+    }
+
+    /* Undocumented, but the kernel allows -1 to indicate "maximum". */
+    if (default_len == -1) {
+        cpu->sve_default_vq = ARM_MAX_VQ;
+        return;
+    }
+
+    default_vq = default_len / 16;
+    remainder = default_len % 16;
+
+    /*
+     * Note that the 512 max comes from include/uapi/asm/sve_context.h
+     * and is the maximum architectural width of ZCR_ELx.LEN.
+     */
+    if (remainder || default_vq < 1 || default_vq > 512) {
+        error_setg(errp, "cannot set sve-default-vector-length");
+        if (remainder) {
+            error_append_hint(errp, "Vector length not a multiple of 16\n");
+        } else if (default_vq < 1) {
+            error_append_hint(errp, "Vector length smaller than 16\n");
+        } else {
+            error_append_hint(errp, "Vector length larger than %d\n",
+                              512 * 16);
+        }
+        return;
+    }
+
+    cpu->sve_default_vq = default_vq;
+}
+
+static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
+                                            const char *name, void *opaque,
+                                            Error **errp)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    int32_t value = cpu->sve_default_vq * 16;
+
+    visit_type_int32(v, name, &value, errp);
+}
+#endif
+
 void aarch64_add_sve_properties(Object *obj)
 {
     uint32_t vq;
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
         object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
                             cpu_arm_set_sve_vq, NULL, NULL);
     }
+
+#ifdef CONFIG_USER_ONLY
+    /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+    object_property_add(obj, "sve-default-vector-length", "int32",
+                        cpu_arm_get_sve_default_vec_len,
+                        cpu_arm_set_sve_default_vec_len, NULL, NULL);
+#endif
 }
 
 void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
-- 
2.20.1
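(As a usage illustration, a hypothetical qemu-aarch64 user-mode
invocation of the new property:

    qemu-aarch64 -cpu max,sve-default-vector-length=256 ./sve-app

would start the process with a 256-byte (2048-bit) starting vector
length, subject to the clamping described in the docs hunk above.)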
From: Joel Stanley <joel@jms.id.au>

The macro used to calculate the maximum memory size of the MMIO region
had a mistake, causing all GPIO models to create a mapping of 0x9D8.
The intent was to have it be 0x9D8 - 0x800.

This extra size doesn't matter on ast2400 and ast2500, which have a 4KB
region set aside for the GPIO controller.

On the ast2600 the 3.3V and 1.8V GPIO controllers are 2KB apart, so the
regions would overlap.  Worse was the 1.8V controller would map over the
top of the following peripheral, which happens to be the RTC.

The mmio region used by each device is a maximum of 2KB, so avoid the
calculations and hard code this as the maximum.

Fixes: 36d737ee82b2 ("hw/gpio: Add in AST2600 specific implementation")
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20210713065854.134634-2-joel@jms.id.au
[PMM: fix autocorrect error in commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/gpio/aspeed_gpio.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/gpio/aspeed_gpio.c b/hw/gpio/aspeed_gpio.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/gpio/aspeed_gpio.c
+++ b/hw/gpio/aspeed_gpio.c
@@ -XXX,XX +XXX,XX @@
 #define GPIO_1_8V_MEM_SIZE        0x9D8
 #define GPIO_1_8V_REG_ARRAY_SIZE  ((GPIO_1_8V_MEM_SIZE - \
                                    GPIO_1_8V_REG_OFFSET) >> 2)
-#define GPIO_MAX_MEM_SIZE         MAX(GPIO_3_6V_MEM_SIZE, GPIO_1_8V_MEM_SIZE)
 
 static int aspeed_evaluate_irq(GPIOSets *regs, int gpio_prev_high, int gpio)
 {
@@ -XXX,XX +XXX,XX @@ static void aspeed_gpio_realize(DeviceState *dev, Error **errp)
     }
 
     memory_region_init_io(&s->iomem, OBJECT(s), &aspeed_gpio_ops, s,
-                          TYPE_ASPEED_GPIO, GPIO_MAX_MEM_SIZE);
+                          TYPE_ASPEED_GPIO, 0x800);
 
     sysbus_init_mmio(sbd, &s->iomem);
 }
-- 
2.20.1