Only thing for Arm for rc1 is RTH's fix for the KVM SVE probe code.

-- PMM

The following changes since commit 4e06b3fc1b5e1ec03f22190eabe56891dc9c2236:

  Merge tag 'pull-hex-20220731' of https://github.com/quic/qemu into staging (2022-07-31 21:38:54 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220801

for you to fetch changes up to 5265d24c981dfdda8d29b44f7e84a514da75eedc:

  target/arm: Move sve probe inside kvm >= 4.15 branch (2022-08-01 16:21:18 +0100)

----------------------------------------------------------------
target-arm queue:
 * Fix KVM SVE ID register probe code

----------------------------------------------------------------
Richard Henderson (3):
      target/arm: Use kvm_arm_sve_supported in kvm_arm_get_host_cpu_features
      target/arm: Set KVM_ARM_VCPU_SVE while probing the host
      target/arm: Move sve probe inside kvm >= 4.15 branch

 target/arm/kvm64.c | 45 ++++++++++++++++++++++-----------------------
 1 file changed, 22 insertions(+), 23 deletions(-)
From: Richard Henderson <richard.henderson@linaro.org>

Indication for support for SVE will not depend on whether we
perform the query on the main kvm_state or the temp vcpu.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
         }
     }
 
-    sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
+    sve_supported = kvm_arm_sve_supported();
 
     /* Add feature bits that can't appear until after VCPU init. */
     if (sve_supported) {
-- 
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

Because we weren't setting this flag, our probe of ID_AA64ZFR0
was always returning zero. This also obviates the adjustment
of ID_AA64PFR0, which had sanitized the SVE field.

The effects of the bug are not visible, because the only thing that
ID_AA64ZFR0 is used for within qemu at present is tcg translation.
The other tests for SVE within KVM are via ID_AA64PFR0.SVE.

Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
     bool sve_supported;
     bool pmu_supported = false;
     uint64_t features = 0;
-    uint64_t t;
     int err;
 
     /* Old kernels may not know about the PREFERRED_TARGET ioctl: however
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
         struct kvm_vcpu_init init = { .target = -1, };
 
         /*
-         * Ask for Pointer Authentication if supported. We can't play the
-         * SVE trick of synthesising the ID reg as KVM won't tell us
-         * whether we have the architected or IMPDEF version of PAuth, so
-         * we have to use the actual ID regs.
+         * Ask for SVE if supported, so that we can query ID_AA64ZFR0,
+         * which is otherwise RAZ.
+         */
+        sve_supported = kvm_arm_sve_supported();
+        if (sve_supported) {
+            init.features[0] |= 1 << KVM_ARM_VCPU_SVE;
+        }
+
+        /*
+         * Ask for Pointer Authentication if supported, so that we get
+         * the unsanitized field values for AA64ISAR1_EL1.
          */
         if (kvm_arm_pauth_supported()) {
             init.features[0] |= (1 << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
         }
     }
 
-    sve_supported = kvm_arm_sve_supported();
-
-    /* Add feature bits that can't appear until after VCPU init. */
     if (sve_supported) {
-        t = ahcf->isar.id_aa64pfr0;
-        t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
-        ahcf->isar.id_aa64pfr0 = t;
-
         /*
          * There is a range of kernels between kernel commit 73433762fcae
          * and f81cb2c3ad41 which have a bug where the kernel doesn't expose
          * SYS_ID_AA64ZFR0_EL1 via the ONE_REG API unless the VM has enabled
-         * SVE support, so we only read it here, rather than together with all
-         * the other ID registers earlier.
+         * SVE support, which resulted in an error rather than RAZ.
+         * So only read the register if we set KVM_ARM_VCPU_SVE above.
          */
         err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
                               ARM64_SYS_REG(3, 0, 0, 4, 4));
-- 
2.25.1
Deleted patch
From: Idan Horowitz <idan.horowitz@gmail.com>

As per the AArch64.SS2InitialTTWState() pseudo-code in the ARMv8 ARM the
initial PA space used for stage 2 table walks is assigned based on the SW
and NSW bits of the VSTCR and VTCR registers.
This was already implemented for the recursive stage 2 page table walks
in S1_ptw_translate(), but was missing for the final stage 2 walk.

Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220327093427.1548629-3-idan.horowitz@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                 return ret;
             }
 
+            if (arm_is_secure_below_el3(env)) {
+                if (attrs->secure) {
+                    attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
+                } else {
+                    attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
+                }
+            } else {
+                assert(!attrs->secure);
+            }
+
             s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
             is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
 
-- 
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The test for the IF block indicates no ID registers are exposed, much
less host support for SVE. Move the SVE probe into the ELSE block.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220726045828.53697-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/kvm64.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
             err |= read_sys_reg64(fdarray[2], &ahcf->isar.reset_pmcr_el0,
                                   ARM64_SYS_REG(3, 3, 9, 12, 0));
         }
-    }
 
-    if (sve_supported) {
-        /*
-         * There is a range of kernels between kernel commit 73433762fcae
-         * and f81cb2c3ad41 which have a bug where the kernel doesn't expose
-         * SYS_ID_AA64ZFR0_EL1 via the ONE_REG API unless the VM has enabled
-         * SVE support, which resulted in an error rather than RAZ.
-         * So only read the register if we set KVM_ARM_VCPU_SVE above.
-         */
-        err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
-                              ARM64_SYS_REG(3, 0, 0, 4, 4));
+        if (sve_supported) {
+            /*
+             * There is a range of kernels between kernel commit 73433762fcae
+             * and f81cb2c3ad41 which have a bug where the kernel doesn't
+             * expose SYS_ID_AA64ZFR0_EL1 via the ONE_REG API unless the VM has
+             * enabled SVE support, which resulted in an error rather than RAZ.
+             * So only read the register if we set KVM_ARM_VCPU_SVE above.
+             */
+            err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
+                                  ARM64_SYS_REG(3, 0, 0, 4, 4));
+        }
     }
 
     kvm_arm_destroy_scratch_host_vcpu(fdarray);
-- 
2.25.1
Deleted patch
From: Frederic Konrad <konrad@adacore.com>

frederic.konrad@adacore.com and konrad@adacore.com will stop working starting
2022-04-01.

Use my personal email instead.

Signed-off-by: Frederic Konrad <frederic.konrad@adacore.com>
Reviewed-by: Fabien Chouteau <chouteau@adacore.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1648643217-15811-1-git-send-email-frederic.konrad@adacore.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 .mailmap    | 3 ++-
 MAINTAINERS | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/.mailmap b/.mailmap
index XXXXXXX..XXXXXXX 100644
--- a/.mailmap
+++ b/.mailmap
@@ -XXX,XX +XXX,XX @@ Alexander Graf <agraf@csgraf.de> <agraf@suse.de>
 Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
 Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
 Filip Bozuta <filip.bozuta@syrmia.com> <filip.bozuta@rt-rk.com.com>
-Frederic Konrad <konrad@adacore.com> <fred.konrad@greensocs.com>
+Frederic Konrad <konrad.frederic@yahoo.fr> <fred.konrad@greensocs.com>
+Frederic Konrad <konrad.frederic@yahoo.fr> <konrad@adacore.com>
 Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/rtc/sun4v-rtc.h
 
 Leon3
 M: Fabien Chouteau <chouteau@adacore.com>
-M: KONRAD Frederic <frederic.konrad@adacore.com>
+M: Frederic Konrad <konrad.frederic@yahoo.fr>
 S: Maintained
 F: hw/sparc/leon3.c
 F: hw/*/grlib*
-- 
2.25.1
Deleted patch
In gen_store_exclusive(), if the host does not have a cmpxchg128
primitive then we generate bad code for STXP for storing two 64-bit
values. We generate a call to the exit_atomic helper, which never
returns, and set is_jmp to DISAS_NORETURN. However, this is
forgetting that we have already emitted a brcond that jumps over this
call for the case where we don't hold the exclusive. The effect is
that we don't generate any code to end the TB for the
exclusive-not-held execution path, which falls into the "exit with
TB_EXIT_REQUESTED" code that gen_tb_end() emits. This then causes an
assert at runtime when cpu_loop_exec_tb() sees an EXIT_REQUESTED TB
return that wasn't for an interrupt or icount.

In particular, you can hit this case when using the clang sanitizers
and trying to run the xlnx-versal-virt acceptance test in 'make
check-acceptance'. This bug was masked until commit 848126d11e93ff
("meson: move int128 checks from configure") because we used to set
CONFIG_CMPXCHG128=1 and avoid the buggy codepath, but after that we
do not.

Fix the bug by not setting is_jmp. The code after the exit_atomic
call up to the fail_label is dead, but TCG is smart enough to
eliminate it. We do need to set 'tmp' to some valid value, though
(in the same way the exit_atomic-using code in tcg/tcg-op.c does).

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/953
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220331150858.96348-1-peter.maydell@linaro.org
---
 target/arm/translate-a64.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
         } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
             if (!HAVE_CMPXCHG128) {
                 gen_helper_exit_atomic(cpu_env);
-                s->base.is_jmp = DISAS_NORETURN;
+                /*
+                 * Produce a result so we have a well-formed opcode
+                 * stream when the following (dead) code uses 'tmp'.
+                 * TCG will remove the dead ops for us.
+                 */
+                tcg_gen_movi_i64(tmp, 0);
             } else if (s->be_data == MO_LE) {
                 gen_helper_paired_cmpxchg64_le_parallel(tmp, cpu_env,
                                                         cpu_exclusive_addr,
-- 
2.25.1