This bug seemed worth fixing for 8.0 since we need an rc4 anyway:
we were using uninitialized data for the guarded bit when
combining stage 1 and stage 2 attrs.

thanks
-- PMM

The following changes since commit 08dede07030973c1053868bc64de7e10bfa02ad6:

  Merge tag 'pull-ppc-20230409' of https://github.com/legoater/qemu into staging (2023-04-10 11:47:52 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230410

for you to fetch changes up to 8539dc00552e8ea60420856fc1262c8299bc6308:

  target/arm: Copy guarded bit in combine_cacheattrs (2023-04-10 14:31:40 +0100)

----------------------------------------------------------------
target-arm: Fix bug where we weren't initializing
 guarded bit state when combining S1/S2 attrs

----------------------------------------------------------------
Richard Henderson (2):
      target/arm: PTE bit GP only applies to stage1
      target/arm: Copy guarded bit in combine_cacheattrs

 target/arm/ptw.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)
From: Richard Henderson <richard.henderson@linaro.org>

Only perform the extract of GP during the stage1 walk.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230407185149.3253946-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
         result->f.attrs.secure = false;
     }
 
-    /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
-    if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
-        result->f.guarded = extract64(attrs, 50, 1); /* GP */
-    }
-
     if (regime_is_stage2(mmu_idx)) {
         result->cacheattrs.is_s2_format = true;
         result->cacheattrs.attrs = extract32(attrs, 2, 4);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
         assert(attrindx <= 7);
         result->cacheattrs.is_s2_format = false;
         result->cacheattrs.attrs = extract64(mair, attrindx * 8, 8);
+
+        /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
+        if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
+            result->f.guarded = extract64(attrs, 50, 1); /* GP */
+        }
     }
 
     /*
--
2.34.1
Deleted patch

From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.SS2OutputPASpace() pseudo-code in the ARMv8 ARM when the
PA space of the IPA is non secure, the output PA space is secure if and only
if all of the bits VTCR.<NSW, NSA>, VSTCR.<SW, SA> are not set.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-2-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         } else {
             attrs->secure =
                 !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
-                  || (env->cp15.vstcr_el2.raw_tcr & VSTCR_SA));
+                  || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
         }
     }
     return 0;
--
2.25.1
Deleted patch

From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.SS2InitialTTWState() pseudo-code in the ARMv8 ARM the
initial PA space used for stage 2 table walks is assigned based on the SW
and NSW bits of the VSTCR and VTCR registers.
This was already implemented for the recursive stage 2 page table walks
in S1_ptw_translate(), but was missing for the final stage 2 walk.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-3-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             return ret;
         }
 
+        if (arm_is_secure_below_el3(env)) {
+            if (attrs->secure) {
+                attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
+            } else {
+                attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
+            }
+        } else {
+            assert(!attrs->secure);
+        }
+
        s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
        is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
 
--
2.25.1
Deleted patch

From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.S2Walk() pseudo-code in the ARMv8 ARM, the final
decision as to the output address's PA space based on the SA/SW/NSA/NSW
bits needs to take the input IPA's PA space into account, and not the
PA space of the result of the stage 2 walk itself.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-4-idan.horowitz@gmail.com
[PMM: fixed commit message typo]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s2_prot;
         int ret;
+        bool ipa_secure;
         ARMCacheAttrs cacheattrs2 = {};
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             return ret;
         }
 
+        ipa_secure = attrs->secure;
         if (arm_is_secure_below_el3(env)) {
-            if (attrs->secure) {
+            if (ipa_secure) {
                 attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
             } else {
                 attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
             }
         } else {
-            assert(!attrs->secure);
+            assert(!ipa_secure);
         }
 
         s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
         /* Check if IPA translates to secure or non-secure PA space. */
         if (arm_is_secure_below_el3(env)) {
-            if (attrs->secure) {
+            if (ipa_secure) {
                 attrs->secure =
                     !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
             } else {
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The guarded bit comes from the stage1 walk.

Fixes: Coverity CID 1507929
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230407185149.3253946-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
 
     assert(!s1.is_s2_format);
     ret.is_s2_format = false;
+    ret.guarded = s1.guarded;
 
     if (s1.attrs == 0xf0) {
         tagged = true;
--
2.34.1
Deleted patch

In gen_store_exclusive(), if the host does not have a cmpxchg128
primitive then we generate bad code for STXP for storing two 64-bit
values.  We generate a call to the exit_atomic helper, which never
returns, and set is_jmp to DISAS_NORETURN.  However, this is
forgetting that we have already emitted a brcond that jumps over this
call for the case where we don't hold the exclusive.  The effect is
that we don't generate any code to end the TB for the
exclusive-not-held execution path, which falls into the "exit with
TB_EXIT_REQUESTED" code that gen_tb_end() emits.  This then causes an
assert at runtime when cpu_loop_exec_tb() sees an EXIT_REQUESTED TB
return that wasn't for an interrupt or icount.

In particular, you can hit this case when using the clang sanitizers
and trying to run the xlnx-versal-virt acceptance test in 'make
check-acceptance'.  This bug was masked until commit 848126d11e93ff
("meson: move int128 checks from configure") because we used to set
CONFIG_CMPXCHG128=1 and avoid the buggy codepath, but after that we
do not.

Fix the bug by not setting is_jmp.  The code after the exit_atomic
call up to the fail_label is dead, but TCG is smart enough to
eliminate it.  We do need to set 'tmp' to some valid value, though
(in the same way the exit_atomic-using code in tcg/tcg-op.c does).

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/953
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220331150858.96348-1-peter.maydell@linaro.org
---
 target/arm/translate-a64.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
         } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
             if (!HAVE_CMPXCHG128) {
                 gen_helper_exit_atomic(cpu_env);
-                s->base.is_jmp = DISAS_NORETURN;
+                /*
+                 * Produce a result so we have a well-formed opcode
+                 * stream when the following (dead) code uses 'tmp'.
+                 * TCG will remove the dead ops for us.
+                 */
+                tcg_gen_movi_i64(tmp, 0);
             } else if (s->be_data == MO_LE) {
                 gen_helper_paired_cmpxchg64_le_parallel(tmp, cpu_env,
                                                         cpu_exclusive_addr,
--
2.25.1