target-arm queue. This has the "plumb txattrs through various
bits of exec.c" patches, and a collection of bug fixes from
various people.

thanks
-- PMM

The following changes since commit a3ac12fba028df90f7b3dbec924995c126c41022:

  Merge remote-tracking branch 'remotes/ehabkost/tags/numa-next-pull-request' into staging (2018-05-31 11:12:36 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180531

for you to fetch changes up to 49d1dca0520ea71bc21867fab6647f474fcf857b:

  KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice (2018-05-31 14:52:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * target/arm: Honour FPCR.FZ in FRECPX
 * MAINTAINERS: Add entries for newer MPS2 boards and devices
 * hw/intc/arm_gicv3: Fix APxR<n> register dispatching
 * arm_gicv3_kvm: fix bug in writing zero bits back to the in-kernel
   GIC state
 * tcg: Fix helper function vs host abi for float16
 * arm: fix qemu crash on startup with -bios option
 * arm: fix malloc type mismatch
 * xlnx-zdma: Correct mem leaks and memset to zero on desc unaligned errors
 * Correct CPACR reset value for v7 cores
 * memory.h: Improve IOMMU related documentation
 * exec: Plumb transaction attributes through various functions in
   preparation for allowing IOMMUs to see them
 * vmstate.h: Provide VMSTATE_BOOL_SUB_ARRAY
 * ARM: ACPI: Fix use-after-free due to memory realloc
 * KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice

----------------------------------------------------------------
Francisco Iglesias (1):
      xlnx-zdma: Correct mem leaks and memset to zero on desc unaligned errors

Igor Mammedov (1):
      arm: fix qemu crash on startup with -bios option

Jan Kiszka (1):
      hw/intc/arm_gicv3: Fix APxR<n> register dispatching

Paolo Bonzini (1):
      arm: fix malloc type mismatch

Peter Maydell (17):
      target/arm: Honour FPCR.FZ in FRECPX
      MAINTAINERS: Add entries for newer MPS2 boards and devices
      Correct CPACR reset value for v7 cores
      memory.h: Improve IOMMU related documentation
      Make tb_invalidate_phys_addr() take a MemTxAttrs argument
      Make address_space_translate{, _cached}() take a MemTxAttrs argument
      Make address_space_map() take a MemTxAttrs argument
      Make address_space_access_valid() take a MemTxAttrs argument
      Make flatview_extend_translation() take a MemTxAttrs argument
      Make memory_region_access_valid() take a MemTxAttrs argument
      Make MemoryRegion valid.accepts callback take a MemTxAttrs argument
      Make flatview_access_valid() take a MemTxAttrs argument
      Make flatview_translate() take a MemTxAttrs argument
      Make address_space_get_iotlb_entry() take a MemTxAttrs argument
      Make flatview_do_translate() take a MemTxAttrs argument
      Make address_space_translate_iommu take a MemTxAttrs argument
      vmstate.h: Provide VMSTATE_BOOL_SUB_ARRAY

Richard Henderson (1):
      tcg: Fix helper function vs host abi for float16

Shannon Zhao (3):
      arm_gicv3_kvm: increase clroffset accordingly
      ARM: ACPI: Fix use-after-free due to memory realloc
      KVM: GIC: Fix memory leak due to calling kvm_init_irq_routing twice

 include/exec/exec-all.h        |   5 +-
 include/exec/helper-head.h     |   2 +-
 include/exec/memory-internal.h |   3 +-
 include/exec/memory.h          | 128 +++++++++++++++++++++++++++++++++++------
 include/migration/vmstate.h    |   3 +
 include/sysemu/dma.h           |   6 +-
 accel/tcg/translate-all.c      |   4 +-
 exec.c                         |  95 ++++++++++++++++++------------
 hw/arm/boot.c                  |  18 +++---
 hw/arm/virt-acpi-build.c       |  20 +++++--
 hw/dma/xlnx-zdma.c             |  10 +++-
 hw/hppa/dino.c                 |   3 +-
 hw/intc/arm_gic_kvm.c          |   1 -
 hw/intc/arm_gicv3_cpuif.c      |  12 ++--
 hw/intc/arm_gicv3_kvm.c        |   2 +-
 hw/nvram/fw_cfg.c              |  12 ++--
 hw/s390x/s390-pci-inst.c       |   3 +-
 hw/scsi/esp.c                  |   3 +-
 hw/vfio/common.c               |   3 +-
 hw/virtio/vhost.c              |   3 +-
 hw/xen/xen_pt_msi.c            |   3 +-
 memory.c                       |  12 ++--
 memory_ldst.inc.c              |  18 +++---
 target/arm/gdbstub.c           |   3 +-
 target/arm/helper-a64.c        |  41 +++++++------
 target/arm/helper.c            |  90 ++++++++++++++++-------------
 target/ppc/mmu-hash64.c        |   3 +-
 target/riscv/helper.c          |   2 +-
 target/s390x/diag.c            |   6 +-
 target/s390x/excp_helper.c     |   3 +-
 target/s390x/mmu_helper.c      |   3 +-
 target/s390x/sigp.c            |   3 +-
 target/xtensa/op_helper.c      |   3 +-
 MAINTAINERS                    |   9 ++-
 34 files changed, 353 insertions(+), 182 deletions(-)
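One pattern worth spelling out before the individual patches: the txattrs
plumbing is mechanical and identical across all of the "Make ... take a
MemTxAttrs argument" commits. Sketched below in abbreviated form (the real
prototypes live in include/exec/; this only shows the shape of the change),
each affected function gains a MemTxAttrs parameter, and callers with no
meaningful attributes to hand pass MEMTXATTRS_UNSPECIFIED:

    /* Before: the IOMMU layer cannot see who is making the access. */
    void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);

    /* After: attributes are threaded through from the caller. */
    void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
                                 MemTxAttrs attrs);

    /* Call site that has no attributes of its own: */
    tb_invalidate_phys_addr(as, addr, MEMTXATTRS_UNSPECIFIED);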
The FRECPX instructions should (like most other floating point operations)
honour the FPCR.FZ bit which specifies whether input denormals should
be flushed to zero (or FZ16 for the half-precision version).

We forgot to implement this, which doesn't affect the results (since
the calculation doesn't actually care about the mantissa bits) but did
mean we were failing to set the FPSR.IDC bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
---
 target/arm/helper-a64.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
         return nan;
     }
 
+    a = float16_squash_input_denormal(a, fpst);
+
     val16 = float16_val(a);
     sbit = 0x8000 & val16;
     exp = extract32(val16, 10, 5);
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
         return nan;
     }
 
+    a = float32_squash_input_denormal(a, fpst);
+
     val32 = float32_val(a);
     sbit = 0x80000000ULL & val32;
     exp = extract32(val32, 23, 8);
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
         return nan;
     }
 
+    a = float64_squash_input_denormal(a, fpst);
+
     val64 = float64_val(a);
     sbit = 0x8000000000000000ULL & val64;
     exp = extract64(float64_val(a), 52, 11);
-- 
2.17.1
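For readers unfamiliar with the flush-to-zero behaviour relied on above, a
minimal standalone sketch follows (illustrative only, not QEMU's softfloat
code; all names are made up). Squashing an fp16 input denormal means: a
value with a zero exponent field and non-zero fraction is replaced by a
signed zero, and the input-denormal (IDC) status is recorded.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    static bool idc_flag;  /* stand-in for the FPSR.IDC status bit */

    static uint16_t squash_input_denormal_f16(uint16_t f16)
    {
        uint16_t exp = (f16 >> 10) & 0x1f;   /* fp16 exponent: bits 14:10 */
        uint16_t frac = f16 & 0x3ff;         /* fp16 fraction: bits 9:0 */

        if (exp == 0 && frac != 0) {
            idc_flag = true;                 /* input denormal seen */
            return f16 & 0x8000;             /* flush to signed zero */
        }
        return f16;
    }

    int main(void)
    {
        uint16_t d = 0x0001;  /* smallest positive fp16 denormal */
        printf("0x%04x -> 0x%04x, idc=%d\n",
               d, squash_input_denormal_f16(d), idc_flag);
        return 0;
    }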
Add entries to MAINTAINERS to cover the newer MPS2 boards and
the new devices they use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
---
 MAINTAINERS | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
 F: include/hw/timer/cmsdk-apb-timer.h
 F: hw/char/cmsdk-apb-uart.c
 F: include/hw/char/cmsdk-apb-uart.h
+F: hw/misc/tz-ppc.c
+F: include/hw/misc/tz-ppc.h
 
 ARM cores
 M: Peter Maydell <peter.maydell@linaro.org>
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/mps2.c
-F: hw/misc/mps2-scc.c
-F: include/hw/misc/mps2-scc.h
+F: hw/arm/mps2-tz.c
+F: hw/misc/mps2-*.c
+F: include/hw/misc/mps2-*.h
+F: hw/arm/iotkit.c
+F: include/hw/arm/iotkit.h
 
 Musicpal
 M: Jan Kiszka <jan.kiszka@web.de>
-- 
2.17.1
From: Jan Kiszka <jan.kiszka@siemens.com>

There was a nasty flip in identifying which register group an access is
targeting. The issue caused spuriously raised priorities of the guest
when handing CPUs over in the Jailhouse hypervisor.

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_cpuif.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
     uint64_t value = cs->ich_apr[grp][regno];
 
     trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
 
     trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
 
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
     uint64_t value;
 
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
+    int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
 
     if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
         return icv_ap_read(env, ri);
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
     GICv3CPUState *cs = icc_cs_from_env(env);
 
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
+    int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
 
     if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
         icv_ap_write(env, ri, value);
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
     uint64_t value;
 
     value = cs->ich_apr[grp][regno];
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     GICv3CPUState *cs = icc_cs_from_env(env);
     int regno = ri->opc2 & 3;
-    int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+    int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
 
     trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
 
-- 
2.17.1
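Background for the flip (stated from the GICv3 architecture register
encodings, not from the patch itself): ICC_AP0R<n>_EL1 uses CRm == 8 and
ICC_AP1R<n>_EL1 uses CRm == 9, so an odd CRm value indicates a Group 1
register. A tiny standalone sketch of the corrected mapping:

    #include <stdio.h>

    enum { GICV3_G0, GICV3_G1 };

    /* ICC_AP0R<n>_EL1 has CRm == 8, ICC_AP1R<n>_EL1 has CRm == 9,
     * so CRm bit 0 set means the Group 1 active-priority bank. */
    static int ap_group(int crm)
    {
        return (crm & 1) ? GICV3_G1 : GICV3_G0;
    }

    int main(void)
    {
        printf("CRm 8 -> group %d (expect 0 = G0)\n", ap_group(8));
        printf("CRm 9 -> group %d (expect 1 = G1)\n", ap_group(9));
        return 0;
    }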
From: Shannon Zhao <zhaoshenglong@huawei.com>

kvm_dist_putbmp() forgot to increase clroffset during the loop, so it
only cleared the first 4 bytes.

Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_kvm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
         if (clroffset != 0) {
             reg = 0;
             kvm_gicd_access(s, clroffset, &reg, true);
+            clroffset += 4;
         }
         reg = *gic_bmp_ptr32(bmp, irq);
         kvm_gicd_access(s, offset, &reg, true);
-- 
2.17.1
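The bug class is a general one: a loop that advances one cursor but not the
other. A trivial standalone illustration (not the QEMU code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Clear one 32-bit word per iteration. Without "clroffset += 4",
     * every pass would rewrite word 0 and leave the rest untouched. */
    static void clear_words(uint32_t *dst, size_t nwords)
    {
        size_t clroffset = 0;

        for (size_t i = 0; i < nwords; i++) {
            memset((char *)dst + clroffset, 0, 4);
            clroffset += 4;   /* the line the fix adds */
        }
    }

    int main(void)
    {
        uint32_t buf[4] = { 1, 2, 3, 4 };
        clear_words(buf, 4);
        printf("%u %u %u %u\n", buf[0], buf[1], buf[2], buf[3]);
        return 0;
    }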
From: Richard Henderson <richard.henderson@linaro.org>

Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.

The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16-bits already clear.

Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t. This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/exec/helper-head.h |  2 +-
 target/arm/helper-a64.c    | 35 +++++++++--------
 target/arm/helper.c        | 80 +++++++++++++++++++-------------------
 3 files changed, 59 insertions(+), 58 deletions(-)

diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/helper-head.h
+++ b/include/exec/helper-head.h
@@ -XXX,XX +XXX,XX @@
 #define dh_ctype_int int
 #define dh_ctype_i64 uint64_t
 #define dh_ctype_s64 int64_t
-#define dh_ctype_f16 float16
+#define dh_ctype_f16 uint32_t
 #define dh_ctype_f32 float32
 #define dh_ctype_f64 float64
 #define dh_ctype_ptr void *
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
     return flags;
 }
 
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
 {
     return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
 }
 
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
 {
     return float_rel_to_flags(float16_compare(x, y, fp_status));
 }
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
 #define float64_three make_float64(0x4008000000000000ULL)
 #define float64_one_point_five make_float64(0x3FF8000000000000ULL)
 
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
     return float64_muladd(a, b, float64_two, 0, fpst);
 }
 
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
 }
 
 /* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
     uint16_t val16, sbit;
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
 #define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
 
 #define ADVSIMD_HALFOP(name) \
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
 { \
     float_status *fpst = fpstp; \
     return float16_ ## name(a, b, fpst); \
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
 ADVSIMD_TWOHALFOP(mulx)
 
 /* fused multiply-accumulate */
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
+                                 void *fpstp)
 {
     float_status *fpst = fpstp;
     return float16_muladd(a, b, c, 0, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
 
 #define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
 
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare_quiet(a, b, fpst);
     return ADVSIMD_CMPRES(compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare(a, b, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
                            compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare(a, b, fpst);
     return ADVSIMD_CMPRES(compare == float_relation_greater);
 }
 
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f0 = float16_abs(a);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
                            compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f0 = float16_abs(a);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
 }
 
 /* round to integral */
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
 {
     return float16_round_to_int(x, fp_status);
 }
 
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
 {
     int old_flags = get_float_exception_flags(fp_status), new_flags;
     float16 ret;
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
  * setting the mode appropriately before calling the helper.
  */
 
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
     return float16_to_int16(a, fpst);
 }
 
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
  * Square Root and Reciprocal square root
  */
 
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
 {
     float_status *s = fpstp;
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
 
 /* Integer to float and float to integer conversions */
 
-#define CONV_ITOF(name, fsz, sign) \
-float##fsz HELPER(name)(uint32_t x, void *fpstp) \
-{ \
-    float_status *fpst = fpstp; \
-    return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
+#define CONV_ITOF(name, ftype, fsz, sign) \
+ftype HELPER(name)(uint32_t x, void *fpstp) \
+{ \
+    float_status *fpst = fpstp; \
+    return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
 }
 
-#define CONV_FTOI(name, fsz, sign, round) \
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
-{ \
-    float_status *fpst = fpstp; \
-    if (float##fsz##_is_any_nan(x)) { \
-        float_raise(float_flag_invalid, fpst); \
-        return 0; \
-    } \
-    return float##fsz##_to_##sign##int32##round(x, fpst); \
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
+uint32_t HELPER(name)(ftype x, void *fpstp) \
+{ \
+    float_status *fpst = fpstp; \
+    if (float##fsz##_is_any_nan(x)) { \
+        float_raise(float_flag_invalid, fpst); \
+        return 0; \
+    } \
+    return float##fsz##_to_##sign##int32##round(x, fpst); \
 }
 
-#define FLOAT_CONVS(name, p, fsz, sign) \
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
+    CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
+    CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
+    CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
 
-FLOAT_CONVS(si, h, 16, )
-FLOAT_CONVS(si, s, 32, )
-FLOAT_CONVS(si, d, 64, )
-FLOAT_CONVS(ui, h, 16, u)
-FLOAT_CONVS(ui, s, 32, u)
-FLOAT_CONVS(ui, d, 64, u)
+FLOAT_CONVS(si, h, uint32_t, 16, )
+FLOAT_CONVS(si, s, float32, 32, )
+FLOAT_CONVS(si, d, float64, 64, )
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
+FLOAT_CONVS(ui, s, float32, 32, u)
+FLOAT_CONVS(ui, d, float64, 64, u)
 
 #undef CONV_ITOF
 #undef CONV_FTOI
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
     return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
 }
 
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
 }
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
     }
 }
 
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
 }
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
 }
 
 /* Half precision conversions. */
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion. In this case,
      * it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
     g_assert_not_reached();
 }
 
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f16 = float16_squash_input_denormal(input, fpst);
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
     return extract64(estimate, 0, 8) << 44;
 }
 
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
 {
     float_status *s = fpstp;
     float16 f16 = float16_squash_input_denormal(input, s);
-- 
2.17.1
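As background (standalone sketch, not QEMU code): C guarantees the value of
a uint16_t return, but at the ABI level the callee may leave the upper bits
of the return register undefined, so code that treats the raw register as a
32-bit quantity must not assume zeros there. Declaring the interface as
uint32_t makes the zero-extension explicit at both ends:

    #include <stdint.h>
    #include <stdio.h>

    /* If helpers trafficked in uint16_t, whether the top 16 bits of the
     * return register are zero would depend on the host ABI. Declaring
     * the interface as uint32_t makes the compiler materialize the
     * zero-extension on return, and forces callers to assume nothing
     * about the top bits of their uint32_t arguments. */
    static uint32_t f16_helper(uint32_t a)
    {
        uint16_t val = (uint16_t)a;  /* discard possible junk above bit 15 */
        return val;                  /* zero-extended uint32_t result */
    }

    int main(void)
    {
        uint32_t dirty = 0xdead0001u; /* stand-in for a register with junk */
        printf("0x%08x -> 0x%08x\n", dirty, f16_helper(dirty));
        return 0;
    }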
From: Igor Mammedov <imammedo@redhat.com>

When QEMU is started with the following CLI
  -machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
it crashes with an abort at
  accel/kvm/kvm-all.c:2164:
  KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument

This is caused by an implicit dependency of kvm_arm_gicv3_reset() on
arm_gicv3_icc_reset(), where the latter is called by the CPU reset
callback.

However commit:
  3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
broke CPU reset callback registration in the case where the

  arm_load_kernel()
  ...
  if (!info->kernel_filename || info->firmware_loaded)

branch is taken, i.e. it is sufficient to provide a firmware image
or to not provide a kernel on the CLI to skip CPU reset callback
registration, whereas before the offending commit the callback
was registered unconditionally.

Fix it by registering the callback right at the beginning of
arm_load_kernel() unconditionally, instead of doing it at the end.

NOTE:
  we probably should eliminate that dependency anyway, and also
  separate the arch CPU reset parts from arm_load_kernel() into the
  CPU itself, but that is refactoring I would probably have to do
  later anyway for CPU hotplug to work.

Reported-by: Auger Eric <eric.auger@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
     static const ARMInsnFixup *primary_loader;
     AddressSpace *as = arm_boot_address_space(cpu, info);
 
+    /* CPU objects (unlike devices) are not automatically reset on system
+     * reset, so we must always register a handler to do so. If we're
+     * actually loading a kernel, the handler is also responsible for
+     * arranging that we start it correctly.
+     */
+    for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
+        qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
+    }
+
     /* The board code is not supposed to set secure_board_setup unless
      * running its code in secure mode is actually possible, and KVM
      * doesn't support secure.
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
         ARM_CPU(cs)->env.boot_info = info;
     }
 
-    /* CPU objects (unlike devices) are not automatically reset on system
-     * reset, so we must always register a handler to do so. If we're
-     * actually loading a kernel, the handler is also responsible for
-     * arranging that we start it correctly.
-     */
-    for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
-        qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
-    }
-
     if (!info->skip_dtb_autoload && have_dtb(info)) {
         if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
             exit(1);
-- 
2.17.1
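The shape of the fix generalizes: unconditional side effects belong before
any early-exit branch. A trivial standalone illustration of the bug class
(names invented, not the QEMU code):

    #include <stdio.h>

    static void register_reset_handler(void)
    {
        puts("reset handler registered");
    }

    static void load_kernel_sketch(int firmware_loaded)
    {
        /* Fixed shape: do the unconditional work first ... */
        register_reset_handler();

        /* ... so taking this branch can no longer skip it. */
        if (firmware_loaded) {
            return;
        }
        puts("loading kernel");
    }

    int main(void)
    {
        load_kernel_sketch(1);   /* -bios style boot: firmware only */
        return 0;
    }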
From: Paolo Bonzini <pbonzini@redhat.com>

cpregs_keys is a uint32_t * so the allocation should use uint32_t.
g_new is even better because it is type-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/gdbstub.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
     RegisterSysregXmlParam param = {cs, s};
 
     cpu->dyn_xml.num_cpregs = 0;
-    cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
-                                        g_hash_table_size(cpu->cp_regs));
+    cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
     g_string_printf(s, "<?xml version=\"1.0\"?>");
     g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
     g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
-- 
2.17.1
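For context, a minimal GLib sketch of why g_new() is preferable here: the
original code computed the element size as sizeof(uint32_t *), which is the
pointer size (8 bytes on LP64 hosts) rather than sizeof(uint32_t), so it
over-allocated. g_new(Type, n) computes n * sizeof(Type) itself and returns
Type *, so a mismatched element type shows up as a compiler warning instead
of a silently wrong size:

    #include <glib.h>

    int main(void)
    {
        guint n = 8;

        /* Over-allocates on LP64 hosts: sizeof(guint32 *) is 8, not 4. */
        guint32 *bad = g_malloc(sizeof(guint32 *) * n);

        /* Allocates exactly n * sizeof(guint32) and is typed. */
        guint32 *good = g_new(guint32, n);

        g_free(bad);
        g_free(good);
        return 0;
    }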
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Coverity found that the string returned by 'object_get_canonical_path' was
not being freed at two locations in the model (CID 1391294 and CID 1391293),
and also that a memset was being called with a value greater than the
maximum of a byte as the second argument (CID 1391286). This patch corrects
this by freeing the strings and by changing the memset to zero instead on
descriptor unaligned errors.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/xlnx-zdma.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/xlnx-zdma.c
+++ b/hw/dma/xlnx-zdma.c
@@ -XXX,XX +XXX,XX @@ static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
         qemu_log_mask(LOG_GUEST_ERROR,
                       "zdma: unaligned descriptor at %" PRIx64,
                       addr);
-        memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
+        memset(buf, 0x0, sizeof(XlnxZDMADescr));
         s->error = true;
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
     RegisterInfo *r = &s->regs_info[addr / 4];
 
     if (!r->data) {
+        gchar *path = object_get_canonical_path(OBJECT(s));
         qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
-                 object_get_canonical_path(OBJECT(s)),
+                 path,
                  addr);
+        g_free(path);
         ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
         zdma_ch_imr_update_irq(s);
         return 0;
@@ -XXX,XX +XXX,XX @@ static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
     RegisterInfo *r = &s->regs_info[addr / 4];
 
     if (!r->data) {
+        gchar *path = object_get_canonical_path(OBJECT(s));
         qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
-                 object_get_canonical_path(OBJECT(s)),
+                 path,
                  addr, value);
+        g_free(path);
        ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
         zdma_ch_imr_update_irq(s);
         return;
-- 
2.17.1
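One detail worth spelling out (standard C behaviour, not specific to this
patch): memset() converts its second argument to unsigned char, so
memset(buf, 0xdeadbeef, len) silently fills with 0xef rather than any
0xdeadbeef pattern. A standalone demonstration:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char buf[4];

        /* The int argument is converted to unsigned char: only the low
         * byte (0xef) is used, so this never wrote 0xdeadbeef. */
        memset(buf, 0xdeadbeef, sizeof(buf));
        printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
        return 0;
    }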
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
we forgot to also update the register's reset value. The effect
was that (a) a guest that read CPACR on reset would not see ones in
the RAO bits, and (b) if you did a migration before the guest did
a write to the CPACR then the migration would fail because the
destination would enforce the RAO bits and then complain that they
didn't match the zero value from the source.

Implement reset for the CPACR using a custom reset function
that just calls cpacr_write(), to avoid having to duplicate
the logic for which bits are RAO.

This bug would affect migration for TCG CPUs which are ARMv7
with VFP but without one of Neon or VFPv3.

Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
---
 target/arm/helper.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->cp15.cpacr_el1 = value;
 }
 
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* Call cpacr_write() so that we reset with the correct RAO bits set
+     * for our CPU features.
+     */
+    cpacr_write(env, ri, 0);
+}
+
 static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
     { .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
       .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
       .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
-      .resetvalue = 0, .writefn = cpacr_write },
+      .resetfn = cpacr_reset, .writefn = cpacr_write },
     REGINFO_SENTINEL
 };
 
-- 
2.17.1
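The design choice here generalizes: when a register's write function
enforces RAO/WI or RAZ/WI bits, routing reset through that same write
function keeps a single source of truth for the forced bits. A standalone
sketch (masks invented for illustration, not the real CPACR layout):

    #include <stdint.h>
    #include <stdio.h>

    #define RAO_MASK 0x00f00000u  /* bits that read-as-one, writes ignored */
    #define RAZ_MASK 0x0000000fu  /* bits that read-as-zero, writes ignored */

    static uint32_t cpacr;

    static void cpacr_write_sketch(uint32_t value)
    {
        value |= RAO_MASK;        /* force RAO bits on */
        value &= ~RAZ_MASK;       /* force RAZ bits off */
        cpacr = value;
    }

    /* Reset goes through the write function, so the RAO/RAZ policy is
     * never duplicated; a bare "cpacr = 0" would miss the RAO bits. */
    static void cpacr_reset_sketch(void)
    {
        cpacr_write_sketch(0);
    }

    int main(void)
    {
        cpacr_reset_sketch();
        printf("reset value: 0x%08x\n", cpacr);  /* RAO bits are set */
        return 0;
    }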
1
The common situation of the SG instruction is that it is
1
Add more detail to the documentation for memory_region_init_iommu()
2
executed from S&NSC memory by a CPU in NS state. That case
2
and other IOMMU-related functions and data structures.
3
is handled by v7m_handle_execute_nsc(). However the instruction
4
also has defined behaviour in a couple of other cases:
5
* SG instruction in NS memory (behaves as a NOP)
6
* SG in S memory but CPU already secure (clears IT bits and
7
does nothing else)
8
* SG instruction in v8M without Security Extension (NOP)
9
10
These can be implemented in translate.c.
11
3
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Eric Auger <eric.auger@redhat.com>
8
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
15
---
9
---
16
target/arm/translate.c | 23 ++++++++++++++++++++++-
10
include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++++----
17
1 file changed, 22 insertions(+), 1 deletion(-)
11
1 file changed, 95 insertions(+), 10 deletions(-)
18
12
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/include/exec/memory.h b/include/exec/memory.h
20
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
15
--- a/include/exec/memory.h
22
+++ b/target/arm/translate.c
16
+++ b/include/exec/memory.h
23
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
17
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
24
* - load/store doubleword, load/store exclusive, ldacq/strel,
18
IOMMU_ATTR_SPAPR_TCE_FD
25
* table branch.
19
};
26
*/
20
27
- if (insn & 0x01200000) {
21
+/**
28
+ if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_M) &&
22
+ * IOMMUMemoryRegionClass:
29
+ arm_dc_feature(s, ARM_FEATURE_V8)) {
23
+ *
30
+ /* 0b1110_1001_0111_1111_1110_1001_0111_111
24
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
31
+ * - SG (v8M only)
25
+ * and provide an implementation of at least the @translate method here
32
+ * The bulk of the behaviour for this instruction is implemented
26
+ * to handle requests to the memory region. Other methods are optional.
33
+ * in v7m_handle_execute_nsc(), which deals with the insn when
27
+ *
34
+ * it is executed by a CPU in non-secure state from memory
28
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
35
+ * which is Secure & NonSecure-Callable.
29
+ * to report whenever mappings are changed, by calling
36
+ * Here we only need to handle the remaining cases:
30
+ * memory_region_notify_iommu() (or, if necessary, by calling
37
+ * * in NS memory (including the "security extension not
31
+ * memory_region_notify_one() for each registered notifier).
38
+ * implemented" case) : NOP
32
+ */
39
+ * * in S memory but CPU already secure (clear IT bits)
33
typedef struct IOMMUMemoryRegionClass {
40
+ * We know that the attribute for the memory this insn is
34
/* private */
41
+ * in must match the current CPU state, because otherwise
35
struct DeviceClass parent_class;
42
+ * get_phys_addr_pmsav8 would have generated an exception.
36
43
+ */
37
/*
44
+ if (s->v8m_secure) {
38
- * Return a TLB entry that contains a given address. Flag should
45
+ /* Like the IT insn, we don't need to generate any code */
39
- * be the access permission of this translation operation. We can
46
+ s->condexec_cond = 0;
40
- * set flag to IOMMU_NONE to mean that we don't need any
47
+ s->condexec_mask = 0;
41
- * read/write permission checks, like, when for region replay.
48
+ }
42
+ * Return a TLB entry that contains a given address.
49
+ } else if (insn & 0x01200000) {
43
+ *
50
/* 0b1110_1000_x11x_xxxx_xxxx_xxxx_xxxx_xxxx
44
+ * The IOMMUAccessFlags indicated via @flag are optional and may
51
* - load/store dual (post-indexed)
45
+ * be specified as IOMMU_NONE to indicate that the caller needs
52
* 0b1111_1001_x10x_xxxx_xxxx_xxxx_xxxx_xxxx
46
+ * the full translation information for both reads and writes. If
47
+ * the access flags are specified then the IOMMU implementation
48
+ * may use this as an optimization, to stop doing a page table
49
+ * walk as soon as it knows that the requested permissions are not
50
+ * allowed. If IOMMU_NONE is passed then the IOMMU must do the
51
+ * full page table walk and report the permissions in the returned
52
+ * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
53
+ * return different mappings for reads and writes.)
54
+ *
55
+ * The returned information remains valid while the caller is
56
+ * holding the big QEMU lock or is inside an RCU critical section;
57
+ * if the caller wishes to cache the mapping beyond that it must
58
+ * register an IOMMU notifier so it can invalidate its cached
59
+ * information when the IOMMU mapping changes.
60
+ *
61
+ * @iommu: the IOMMUMemoryRegion
62
+ * @hwaddr: address to be translated within the memory region
63
+ * @flag: requested access permissions
64
*/
65
IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
66
IOMMUAccessFlags flag);
67
- /* Returns minimum supported page size */
68
+ /* Returns minimum supported page size in bytes.
69
+ * If this method is not provided then the minimum is assumed to
70
+ * be TARGET_PAGE_SIZE.
71
+ *
72
+ * @iommu: the IOMMUMemoryRegion
73
+ */
74
uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
75
- /* Called when IOMMU Notifier flag changed */
76
+ /* Called when IOMMU Notifier flag changes (ie when the set of
77
+ * events which IOMMU users are requesting notification for changes).
78
+ * Optional method -- need not be provided if the IOMMU does not
79
+ * need to know exactly which events must be notified.
80
+ *
81
+ * @iommu: the IOMMUMemoryRegion
82
+ * @old_flags: events which previously needed to be notified
83
+ * @new_flags: events which now need to be notified
84
+ */
85
void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
86
IOMMUNotifierFlag old_flags,
87
IOMMUNotifierFlag new_flags);
88
- /* Set this up to provide customized IOMMU replay function */
89
+ /* Called to handle memory_region_iommu_replay().
90
+ *
91
+ * The default implementation of memory_region_iommu_replay() is to
92
+ * call the IOMMU translate method for every page in the address space
93
+ * with flag == IOMMU_NONE and then call the notifier if translate
94
+ * returns a valid mapping. If this method is implemented then it
95
+ * overrides the default behaviour, and must provide the full semantics
+ * of memory_region_iommu_replay(), by calling @notifier for every
+ * translation present in the IOMMU.
+ *
+ * Optional method -- an IOMMU only needs to provide this method
+ * if the default is inefficient or produces undesirable side effects.
+ *
+ * Note: this is not related to record-and-replay functionality.
+ */
     void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);

-    /* Get IOMMU misc attributes */
-    int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
+    /* Get IOMMU misc attributes. This is an optional method that
+     * can be used to allow users of the IOMMU to get implementation-specific
+     * information. The IOMMU implements this method to handle calls
+     * by IOMMU users to memory_region_iommu_get_attr() by filling in
+     * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
+     * the IOMMU supports. If the method is unimplemented then
+     * memory_region_iommu_get_attr() will always return -EINVAL.
+     *
+     * @iommu: the IOMMUMemoryRegion
+     * @attr: attribute being queried
+     * @data: memory to fill in with the attribute data
+     *
+     * Returns 0 on success, or a negative errno; in particular
+     * returns -EINVAL for unrecognized or unimplemented attribute types.
+     */
+    int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
                     void *data);
 } IOMMUMemoryRegionClass;

@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
  * An IOMMU region translates addresses and forwards accesses to a target
  * memory region.
  *
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
+ * @_iommu_mr should be a pointer to enough memory for an instance of
+ * that subclass, @instance_size is the size of that subclass, and
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
+ * instance of the subclass, and its methods will then be called to handle
+ * accesses to the memory region. See the documentation of
+ * #IOMMUMemoryRegionClass for further details.
+ *
  * @_iommu_mr: the #IOMMUMemoryRegion to be initialized
  * @instance_size: the IOMMUMemoryRegion subclass instance size
  * @mrtypename: the type name of the #IOMMUMemoryRegion
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
  * a notifier with the minimum page granularity returned by
  * mr->iommu_ops->get_page_size().
  *
+ * Note: this is not related to record-and-replay functionality.
+ *
  * @iommu_mr: the memory region to observe
  * @n: the notifier to which to replay iommu mappings
  */
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
  * memory_region_iommu_replay_all: replay existing IOMMU translations
  * to all the notifiers registered.
  *
+ * Note: this is not related to record-and-replay functionality.
+ *
  * @iommu_mr: the memory region to observe
  */
 void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
  * memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
  * defined on the IOMMU.
  *
- * Returns 0 if succeded, error code otherwise.
+ * Returns 0 on success, or a negative errno otherwise. In particular,
+ * -EINVAL indicates that the IOMMU does not support the requested
+ * attribute.
  *
  * @iommu_mr: the memory region
  * @attr: the requested attribute
--
2.17.1

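For illustration, a minimal sketch of how an IOMMU subclass might provide
the get_attr hook described above. The "my_iommu" names and the MY_IOMMU
cast macro are hypothetical, not from this patch; only the switch/-EINVAL
shape is what the documentation requires:

    static int my_iommu_get_attr(IOMMUMemoryRegion *iommu,
                                 enum IOMMUMemoryRegionAttr attr,
                                 void *data)
    {
        MyIOMMUMemoryRegion *s = MY_IOMMU(iommu);   /* hypothetical cast */

        switch (attr) {
        case IOMMU_ATTR_SPAPR_TCE_FD:
            /* Fill in the caller-provided data pointer for a
             * supported attribute...
             */
            *(int *)data = s->fd;
            return 0;
        default:
            /* Unrecognized or unsupported attributes must fail */
            return -EINVAL;
        }
    }

The hook would be wired up in the subclass's class_init by assigning it to
IOMMUMemoryRegionClass::get_attr, so memory_region_iommu_get_attr() can
dispatch to it.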
Refactor the Thumb decode to do the loads of the instruction words at
the top level rather than only loading the second half of a 32-bit
Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the
BL/BLX prefix and suffix instructions live in what in Thumb2 is the
32-bit insn space. To handle these we decode enough to identify
whether we're looking at a prefix/suffix that we handle as a 16 bit
insn, or a prefix that we're going to merge with the following suffix
to consider as a 32 bit insn. The translation of the 16 bit cases
then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the
CPUARMState* down into the decoder code any more, but the major
reason for doing this is that some Thumb instructions must be always
unconditional regardless of the IT state bits, so we need to know the
whole insn before we emit the "skip this insn if the IT bits and cond
state tell us to" code. (The always unconditional insns are BKPT,
HLT and SG; the last of these is 32 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
---
target/arm/translate.c | 178 ++++++++++++++++++++++++++++++-------------------
1 file changed, 108 insertions(+), 70 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
     }
 }

+static bool thumb_insn_is_16bit(DisasContext *s, uint32_t insn)
+{
+    /* Return true if this is a 16 bit instruction. We must be precise
+     * about this (matching the decode). We assume that s->pc still
+     * points to the first 16 bits of the insn.
+     */
+    if ((insn >> 11) < 0x1d) {
+        /* Definitely a 16-bit instruction */
+        return true;
+    }
+
+    /* Top five bits 0b11101 / 0b11110 / 0b11111 : this is the
+     * first half of a 32-bit Thumb insn. Thumb-1 cores might
+     * end up actually treating this as two 16-bit insns, though,
+     * if it's half of a bl/blx pair that might span a page boundary.
+     */
+    if (arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
+        /* Thumb2 cores (including all M profile ones) always treat
+         * 32-bit insns as 32-bit.
+         */
+        return false;
+    }
+
+    if ((insn >> 11) == 0x1e && (s->pc < s->next_page_start - 3)) {
+        /* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix, and the suffix
+         * is not on the next page; we merge this into a 32-bit
+         * insn.
+         */
+        return false;
+    }
+    /* 0b1110_1xxx_xxxx_xxxx : BLX suffix (or UNDEF);
+     * 0b1111_1xxx_xxxx_xxxx : BL suffix;
+     * 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix on the end of a page
+     * -- handle as single 16 bit insn
+     */
+    return true;
+}
+
 /* Return true if this is a Thumb-2 logical op. */
 static int
 thumb2_logic_op(int op)
@@ -XXX,XX +XXX,XX @@ gen_thumb2_data_op(DisasContext *s, int op, int conds, uint32_t shifter_out,

 /* Translate a 32-bit thumb instruction. Returns nonzero if the instruction
    is not legal. */
-static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw1)
+static int disas_thumb2_insn(DisasContext *s, uint32_t insn)
 {
-    uint32_t insn, imm, shift, offset;
+    uint32_t imm, shift, offset;
     uint32_t rd, rn, rm, rs;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw
     int conds;
     int logic_cc;

-    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
-        /* Thumb-1 cores may need to treat bl and blx as a pair of
-           16-bit instructions to get correct prefetch abort behavior. */
-        insn = insn_hw1;
-        if ((insn & (1 << 12)) == 0) {
-            ARCH(5);
-            /* Second half of blx. */
-            offset = ((insn & 0x7ff) << 1);
-            tmp = load_reg(s, 14);
-            tcg_gen_addi_i32(tmp, tmp, offset);
-            tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
-
-            tmp2 = tcg_temp_new_i32();
-            tcg_gen_movi_i32(tmp2, s->pc | 1);
-            store_reg(s, 14, tmp2);
-            gen_bx(s, tmp);
-            return 0;
-        }
-        if (insn & (1 << 11)) {
-            /* Second half of bl. */
-            offset = ((insn & 0x7ff) << 1) | 1;
-            tmp = load_reg(s, 14);
-            tcg_gen_addi_i32(tmp, tmp, offset);
-
-            tmp2 = tcg_temp_new_i32();
-            tcg_gen_movi_i32(tmp2, s->pc | 1);
-            store_reg(s, 14, tmp2);
-            gen_bx(s, tmp);
-            return 0;
-        }
-        if ((s->pc & ~TARGET_PAGE_MASK) == 0) {
-            /* Instruction spans a page boundary.  Implement it as two
-               16-bit instructions in case the second half causes an
-               prefetch abort. */
-            offset = ((int32_t)insn << 21) >> 9;
-            tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + offset);
-            return 0;
-        }
-        /* Fall through to 32-bit decode. */
-    }
-
-    insn = arm_lduw_code(env, s->pc, s->sctlr_b);
-    s->pc += 2;
-    insn |= (uint32_t)insn_hw1 << 16;
-
+    /* The only 32 bit insn that's allowed for Thumb1 is the combined
+     * BL/BLX prefix and suffix.
+     */
     if ((insn & 0xf800e800) != 0xf000e800) {
         ARCH(6T2);
     }
@@ -XXX,XX +XXX,XX @@ illegal_op:
     return 1;
 }

-static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
+static void disas_thumb_insn(DisasContext *s, uint32_t insn)
 {
-    uint32_t val, insn, op, rm, rn, rd, shift, cond;
+    uint32_t val, op, rm, rn, rd, shift, cond;
     int32_t offset;
     int i;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
     TCGv_i32 addr;

-    if (s->condexec_mask) {
-        cond = s->condexec_cond;
-        if (cond != 0x0e) {     /* Skip conditional when condition is AL. */
-            s->condlabel = gen_new_label();
-            arm_gen_test_cc(cond ^ 1, s->condlabel);
-            s->condjmp = 1;
-        }
-    }
-
-    insn = arm_lduw_code(env, s->pc, s->sctlr_b);
-    s->pc += 2;
-
     switch (insn >> 12) {
     case 0: case 1:

@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)

     case 14:
         if (insn & (1 << 11)) {
-            if (disas_thumb2_insn(env, s, insn))
-                goto undef32;
+            /* thumb_insn_is_16bit() ensures we can't get here for
+             * a Thumb2 CPU, so this must be a thumb1 split BL/BLX:
+             * 0b1110_1xxx_xxxx_xxxx : BLX suffix (or UNDEF)
+             */
+            assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
+            ARCH(5);
+            offset = ((insn & 0x7ff) << 1);
+            tmp = load_reg(s, 14);
+            tcg_gen_addi_i32(tmp, tmp, offset);
+            tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
+
+            tmp2 = tcg_temp_new_i32();
+            tcg_gen_movi_i32(tmp2, s->pc | 1);
+            store_reg(s, 14, tmp2);
+            gen_bx(s, tmp);
             break;
         }
         /* unconditional branch */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
         break;

     case 15:
-        if (disas_thumb2_insn(env, s, insn))
-            goto undef32;
+        /* thumb_insn_is_16bit() ensures we can't get here for
+         * a Thumb2 CPU, so this must be a thumb1 split BL/BLX.
+         */
+        assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
+
+        if (insn & (1 << 11)) {
+            /* 0b1111_1xxx_xxxx_xxxx : BL suffix */
+            offset = ((insn & 0x7ff) << 1) | 1;
+            tmp = load_reg(s, 14);
+            tcg_gen_addi_i32(tmp, tmp, offset);
+
+            tmp2 = tcg_temp_new_i32();
+            tcg_gen_movi_i32(tmp2, s->pc | 1);
+            store_reg(s, 14, tmp2);
+            gen_bx(s, tmp);
+        } else {
+            /* 0b1111_0xxx_xxxx_xxxx : BL/BLX prefix */
+            uint32_t uoffset = ((int32_t)insn << 21) >> 9;
+
+            tcg_gen_movi_i32(cpu_R[14], s->pc + 2 + uoffset);
+        }
         break;
     }
     return;
-undef32:
-    gen_exception_insn(s, 4, EXCP_UDEF, syn_uncategorized(),
-                       default_exception_el(s));
-    return;
illegal_op:
undef:
     gen_exception_insn(s, 2, EXCP_UDEF, syn_uncategorized(),
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
 {
     DisasContext *dc = container_of(dcbase, DisasContext, base);
     CPUARMState *env = cpu->env_ptr;
+    uint32_t insn;
+    bool is_16bit;

     if (arm_pre_translate_insn(dc)) {
         return;
     }

-    disas_thumb_insn(env, dc);
+    insn = arm_lduw_code(env, dc->pc, dc->sctlr_b);
+    is_16bit = thumb_insn_is_16bit(dc, insn);
+    dc->pc += 2;
+    if (!is_16bit) {
+        uint32_t insn2 = arm_lduw_code(env, dc->pc, dc->sctlr_b);
+
+        insn = insn << 16 | insn2;
+        dc->pc += 2;
+    }
+
+    if (dc->condexec_mask) {
+        uint32_t cond = dc->condexec_cond;
+
+        if (cond != 0x0e) {     /* Skip conditional when condition is AL. */
+            dc->condlabel = gen_new_label();
+            arm_gen_test_cc(cond ^ 1, dc->condlabel);
+            dc->condjmp = 1;
+        }
+    }
+
+    if (is_16bit) {
+        disas_thumb_insn(dc, insn);
+    } else {
+        disas_thumb2_insn(dc, insn);
+    }

     /* Advance the Thumb condexec condition.  */
     if (dc->condexec_mask) {
--
2.7.4


As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
---
include/exec/exec-all.h | 5 +++--
accel/tcg/translate-all.c | 2 +-
exec.c | 2 +-
target/xtensa/op_helper.c | 3 ++-
4 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                   hwaddr paddr, int prot,
                   int mmu_idx, target_ulong size);
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
 void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
                  uintptr_t retaddr);
 #else
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
                                                        uint16_t idxmap)
 {
 }
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
+                                           MemTxAttrs attrs)
 {
 }
 #endif
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
 }

 #if !defined(CONFIG_USER_ONLY)
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
 {
     ram_addr_t ram_addr;
     MemoryRegion *mr;
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
     if (phys != -1) {
         /* Locks grabbed by tb_invalidate_phys_addr */
         tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
-                                phys | (pc & ~TARGET_PAGE_MASK));
+                                phys | (pc & ~TARGET_PAGE_MASK), attrs);
     }
 }
 #endif
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/xtensa/op_helper.c
+++ b/target/xtensa/op_helper.c
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
     int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
                                        &paddr, &page_size, &access);
     if (ret == 0) {
-        tb_invalidate_phys_addr(&address_space_memory, paddr);
+        tb_invalidate_phys_addr(&address_space_memory, paddr,
+                                MEMTXATTRS_UNSPECIFIED);
     }
 }

--
2.17.1

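To make the decode rule in thumb_insn_is_16bit() concrete: a leading
halfword begins a 32-bit Thumb encoding only when its top five bits are
0b11101, 0b11110 or 0b11111, i.e. when (insn >> 11) >= 0x1d. A standalone
restatement of just that check (it deliberately omits the Thumb-1 BL/BLX
and page-boundary special cases the real function also handles):

    #include <stdbool.h>
    #include <stdint.h>

    static bool halfword_starts_32bit_insn(uint16_t hw)
    {
        /* 0b11101/0b11110/0b11111 in the top five bits marks the
         * first half of a 32-bit encoding; everything below 0x1d
         * is a complete 16-bit instruction.
         */
        return (hw >> 11) >= 0x1d;
    }

So, for example, 0xe92d (PUSH.W prefix, top bits 0b11101) starts a 32-bit
insn, while 0xb580 (Thumb-1 PUSH, top bits 0b10110) does not.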
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate()
and address_space_translate_cached(). Callers either have an
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
---
include/exec/memory.h | 4 +++-
accel/tcg/translate-all.c | 2 +-
exec.c | 14 +++++++++-----
hw/vfio/common.c | 3 ++-
memory_ldst.inc.c | 18 +++++++++---------
target/riscv/helper.c | 2 +-
6 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
  * #MemoryRegion.
  * @len: pointer to length
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
 MemoryRegion *flatview_translate(FlatView *fv,
                                  hwaddr addr, hwaddr *xlat,
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,

 static inline MemoryRegion *address_space_translate(AddressSpace *as,
                                                     hwaddr addr, hwaddr *xlat,
-                                                    hwaddr *len, bool is_write)
+                                                    hwaddr *len, bool is_write,
+                                                    MemTxAttrs attrs)
 {
     return flatview_translate(address_space_to_flatview(as),
                               addr, xlat, len, is_write);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
     hwaddr l = 1;

     rcu_read_lock();
-    mr = address_space_translate(as, addr, &addr, &l, false);
+    mr = address_space_translate(as, addr, &addr, &l, false, attrs);
     if (!(memory_region_is_ram(mr)
           || memory_region_is_romd(mr))) {
         rcu_read_unlock();
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
     rcu_read_lock();
     while (len > 0) {
         l = len;
-        mr = address_space_translate(as, addr, &addr1, &l, true);
+        mr = address_space_translate(as, addr, &addr1, &l, true,
+                                     MEMTXATTRS_UNSPECIFIED);

         if (!(memory_region_is_ram(mr) ||
               memory_region_is_romd(mr))) {
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
  */
 static inline MemoryRegion *address_space_translate_cached(
     MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
-    hwaddr *plen, bool is_write)
+    hwaddr *plen, bool is_write, MemTxAttrs attrs)
 {
     MemoryRegionSection section;
     MemoryRegion *mr;
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
     MemoryRegion *mr;

     l = len;
-    mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
+    mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
+                                        MEMTXATTRS_UNSPECIFIED);
     flatview_read_continue(cache->fv,
                            addr, MEMTXATTRS_UNSPECIFIED, buf, len,
                            addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
     MemoryRegion *mr;

     l = len;
-    mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
+    mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
+                                        MEMTXATTRS_UNSPECIFIED);
     flatview_write_continue(cache->fv,
                             addr, MEMTXATTRS_UNSPECIFIED, buf, len,
                             addr1, l, mr);
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)

     rcu_read_lock();
     mr = address_space_translate(&address_space_memory,
-                                 phys_addr, &phys_addr, &l, false);
+                                 phys_addr, &phys_addr, &l, false,
+                                 MEMTXATTRS_UNSPECIFIED);

     res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
     rcu_read_unlock();
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
      */
     mr = address_space_translate(&address_space_memory,
                                  iotlb->translated_addr,
-                                 &xlat, &len, writable);
+                                 &xlat, &len, writable,
+                                 MEMTXATTRS_UNSPECIFIED);
     if (!memory_region_is_ram(mr)) {
         error_report("iommu map to non memory area %"HWADDR_PRIx"",
                      xlat);
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 4 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 8 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (!IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, false);
+    mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 2 || !IS_DIRECT(mr, false)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
         r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 2 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
     bool release_lock = false;

     RCU_READ_LOCK();
-    mr = TRANSLATE(addr, &addr1, &l, true);
+    mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 8 || !IS_DIRECT(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

diff --git a/target/riscv/helper.c b/target/riscv/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/riscv/helper.c
+++ b/target/riscv/helper.c
@@ -XXX,XX +XXX,XX @@ restart:
             MemoryRegion *mr;
             hwaddr l = sizeof(target_ulong), addr1;
             mr = address_space_translate(cs->as, pte_addr,
-                &addr1, &l, false);
+                &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
             if (memory_access_is_direct(mr, true)) {
                 target_ulong *pte_pa =
                     qemu_map_ram_ptr(mr->ram_block, addr1);
--
2.17.1

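To make the "have an attrs value to hand" distinction concrete, here is a
hedged sketch of the two kinds of caller after this change. The surrounding
variables (as, addr, xlat, len, is_write) are assumed for illustration and
are not from the patch:

    /* Caller with real attributes, e.g. derived from the CPU's
     * security state: construct a MemTxAttrs and pass it through.
     */
    MemTxAttrs attrs = { .secure = 1 };
    mr = address_space_translate(as, addr, &xlat, &len, is_write, attrs);

    /* Caller with no meaningful attributes to supply */
    mr = address_space_translate(as, addr, &xlat, &len, is_write,
                                 MEMTXATTRS_UNSPECIFIED);

At this point in the series the attrs argument is still ignored by
flatview_translate(); the later patches push it the rest of the way down
so an IOMMU can eventually see it.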
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_map().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
---
include/exec/memory.h | 3 ++-
include/sysemu/dma.h | 3 ++-
exec.c | 6 ++++--
target/ppc/mmu-hash64.c | 3 ++-
4 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
  * @addr: address within that address space
  * @plen: pointer to length of buffer; updated on return
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
 void *address_space_map(AddressSpace *as, hwaddr addr,
-                        hwaddr *plen, bool is_write);
+                        hwaddr *plen, bool is_write, MemTxAttrs attrs);

 /* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
  *
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
     hwaddr xlen = *len;
     void *p;

-    p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
+    p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
+                          MEMTXATTRS_UNSPECIFIED);
     *len = xlen;
     return p;
 }
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
 void *address_space_map(AddressSpace *as,
                         hwaddr addr,
                         hwaddr *plen,
-                        bool is_write)
+                        bool is_write,
+                        MemTxAttrs attrs)
 {
     hwaddr len = *plen;
     hwaddr l, xlat;
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
                               hwaddr *plen,
                               int is_write)
 {
-    return address_space_map(&address_space_memory, addr, plen, is_write);
+    return address_space_map(&address_space_memory, addr, plen, is_write,
+                             MEMTXATTRS_UNSPECIFIED);
 }

 void cpu_physical_memory_unmap(void *buffer, hwaddr len,
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
         return NULL;
     }

-    hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
+    hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
+                              MEMTXATTRS_UNSPECIFIED);
     if (plen < (n * HASH_PTE_SIZE_64)) {
         hw_error("%s: Unable to map all requested HPTEs\n", __func__);
     }
--
2.17.1

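Device code normally goes through the dma_memory_map() wrapper shown
above, which now hides the new argument. A rough usage sketch, with the
buffer-handling details purely illustrative:

    dma_addr_t maplen = size;
    void *buf = dma_memory_map(as, addr, &maplen,
                               DMA_DIRECTION_FROM_DEVICE);
    if (!buf || maplen < size) {
        /* mapping failed or was truncated: fall back to a bounce
         * buffer or report a DMA error, as appropriate
         */
    }
    /* ... fill buf ... */
    dma_memory_unmap(as, buf, maplen, DMA_DIRECTION_FROM_DEVICE, maplen);

Because the wrapper passes MEMTXATTRS_UNSPECIFIED, existing devices are
unaffected by this patch; only callers that care about attributes need to
use address_space_map() directly.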
Coverity points out that we forgot the 'break' for
the SAU_CTRL write case (CID1381683). This has
no actual visible consequences because it happens
that the following case is effectively a no-op.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1507742676-9908-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
hw/intc/armv7m_nvic.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
             return;
         }
         cpu->env.sau.ctrl = value & 3;
+        break;
     case 0xdd4: /* SAU_TYPE */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
             goto bad_offset;
--
2.7.4


As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
---
include/exec/memory.h | 4 +++-
include/sysemu/dma.h | 3 ++-
exec.c | 3 ++-
target/s390x/diag.c | 6 ++++--
target/s390x/excp_helper.c | 3 ++-
target/s390x/mmu_helper.c | 3 ++-
target/s390x/sigp.c | 3 ++-
7 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
  * @addr: address within that address space
  * @len: length of the area to be checked
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
+                                bool is_write, MemTxAttrs attrs);

 /* address_space_map: map a physical memory region into a host virtual address
  *
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
                                     DMADirection dir)
 {
     return address_space_access_valid(as, addr, len,
-                                      dir == DMA_DIRECTION_FROM_DEVICE);
+                                      dir == DMA_DIRECTION_FROM_DEVICE,
+                                      MEMTXATTRS_UNSPECIFIED);
 }

 static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
 }

 bool address_space_access_valid(AddressSpace *as, hwaddr addr,
-                                int len, bool is_write)
+                                int len, bool is_write,
+                                MemTxAttrs attrs)
 {
     FlatView *fv;
     bool result;
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/diag.c
+++ b/target/s390x/diag.c
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
             return;
         }
         if (!address_space_access_valid(&address_space_memory, addr,
-                                        sizeof(IplParameterBlock), false)) {
+                                        sizeof(IplParameterBlock), false,
+                                        MEMTXATTRS_UNSPECIFIED)) {
             s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
             return;
         }
@@ -XXX,XX +XXX,XX @@ out:
             return;
         }
         if (!address_space_access_valid(&address_space_memory, addr,
-                                        sizeof(IplParameterBlock), true)) {
+                                        sizeof(IplParameterBlock), true,
+                                        MEMTXATTRS_UNSPECIFIED)) {
             s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
             return;
         }
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,

     /* check out of RAM access */
     if (!address_space_access_valid(&address_space_memory, raddr,
-                                    TARGET_PAGE_SIZE, rw)) {
+                                    TARGET_PAGE_SIZE, rw,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
                 (uint64_t)raddr, (uint64_t)ram_size);
         trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/mmu_helper.c
+++ b/target/s390x/mmu_helper.c
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
             return ret;
         }
         if (!address_space_access_valid(&address_space_memory, pages[i],
-                                        TARGET_PAGE_SIZE, is_write)) {
+                                        TARGET_PAGE_SIZE, is_write,
+                                        MEMTXATTRS_UNSPECIFIED)) {
             trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
             return -EFAULT;
         }
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
     cpu_synchronize_state(cs);

     if (!address_space_access_valid(&address_space_memory, addr,
-                                    sizeof(struct LowCore), false)) {
+                                    sizeof(struct LowCore), false,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
         return;
     }
--
2.17.1

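The pattern at all of the s390x call sites above is the same: validate
before touching guest memory, and raise the architectural addressing
exception rather than performing a bad access. Schematically (variables
assumed for illustration):

    if (!address_space_access_valid(&address_space_memory, addr, size,
                                    is_write, MEMTXATTRS_UNSPECIFIED)) {
        /* raise the architecture's addressing exception here
         * instead of performing the access
         */
    }

Once later patches plumb attrs through, such checks can in principle give
attribute-dependent answers, which is the point of the series.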
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_extend_translation().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
---
exec.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,

 static hwaddr
 flatview_extend_translation(FlatView *fv, hwaddr addr,
-                            hwaddr target_len,
-                            MemoryRegion *mr, hwaddr base, hwaddr len,
-                            bool is_write)
+                            hwaddr target_len,
+                            MemoryRegion *mr, hwaddr base, hwaddr len,
+                            bool is_write, MemTxAttrs attrs)
 {
     hwaddr done = 0;
     hwaddr xlat;
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,

     memory_region_ref(mr);
     *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
-                                        l, is_write);
+                                        l, is_write, attrs);
     ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
     rcu_read_unlock();

@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
     mr = cache->mrs.mr;
     memory_region_ref(mr);
     if (memory_access_is_direct(mr, is_write)) {
+        /* We don't care about the memory attributes here as we're only
+         * doing this if we found actual RAM, which behaves the same
+         * regardless of attributes; so UNSPECIFIED is fine.
+         */
         l = flatview_extend_translation(cache->fv, addr, len, mr,
-                                        cache->xlat, l, is_write);
+                                        cache->xlat, l, is_write,
+                                        MEMTXATTRS_UNSPECIFIED);
        cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
     } else {
         cache->ptr = NULL;
--
2.17.1

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to memory_region_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

The callsite in flatview_access_valid() is part of a recursive
loop flatview_access_valid() -> memory_region_access_valid() ->
subpage_accepts() -> flatview_access_valid(); we make it pass
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
have plumbed an attrs parameter through the rest of the loop
and we can add an attrs parameter to flatview_access_valid().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
---
include/exec/memory-internal.h | 3 ++-
exec.c | 4 +++-
hw/s390x/s390-pci-inst.c | 3 ++-
memory.c | 7 ++++---
4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
 extern const MemoryRegionOps unassigned_mem_ops;

 bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
-                                unsigned size, bool is_write);
+                                unsigned size, bool is_write,
+                                MemTxAttrs attrs);

 void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
 AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
         mr = flatview_translate(fv, addr, &xlat, &l, is_write);
         if (!memory_access_is_direct(mr, is_write)) {
             l = memory_access_size(mr, l, addr);
-            if (!memory_region_access_valid(mr, xlat, l, is_write)) {
+            /* When our callers all have attrs we'll pass them through here */
+            if (!memory_region_access_valid(mr, xlat, l, is_write,
+                                            MEMTXATTRS_UNSPECIFIED)) {
                 return false;
             }
         }
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;

-    if (!memory_region_access_valid(mr, offset, len, true)) {
+    if (!memory_region_access_valid(mr, offset, len, true,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         s390_program_interrupt(env, PGM_OPERAND, 6, ra);
         return 0;
     }
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
 bool memory_region_access_valid(MemoryRegion *mr,
                                 hwaddr addr,
                                 unsigned size,
-                                bool is_write)
+                                bool is_write,
+                                MemTxAttrs attrs)
 {
     int access_size_min, access_size_max;
     int access_size, i;
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
 {
     MemTxResult r;

-    if (!memory_region_access_valid(mr, addr, size, false)) {
+    if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
         return MEMTX_DECODE_ERROR;
     }
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          unsigned size,
                                          MemTxAttrs attrs)
 {
-    if (!memory_region_access_valid(mr, addr, size, true)) {
+    if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;
     }
--
2.17.1

Implement the BLXNS instruction, which allows secure code to
call non-secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
---
target/arm/helper.h | 1 +
target/arm/internals.h | 1 +
target/arm/helper.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++++
target/arm/translate.c | 17 +++++++++++++--
4 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_msr, void, env, i32, i32)
 DEF_HELPER_2(v7m_mrs, i32, env, i32)

 DEF_HELPER_2(v7m_bxns, void, env, i32)
+DEF_HELPER_2(v7m_blxns, void, env, i32)

 DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
 DEF_HELPER_3(set_cp_reg, void, env, ptr, i32)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool excp_is_internal(int excp)
 FIELD(V7M_CONTROL, NPRIV, 0, 1)
 FIELD(V7M_CONTROL, SPSEL, 1, 1)
 FIELD(V7M_CONTROL, FPCA, 2, 1)
+FIELD(V7M_CONTROL, SFPA, 3, 1)

 /* Bit definitions for v7M exception return payload */
 FIELD(V7M_EXCRET, ES, 0, 1)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
     g_assert_not_reached();
 }

+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /* translate.c should never generate calls here in user-only mode */
+    g_assert_not_reached();
+}
+
 void switch_mode(CPUARMState *env, int mode)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
     env->regs[15] = dest & ~1;
 }

+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /* Handle v7M BLXNS:
+     *  - bit 0 of the destination address is the target security state
+     */
+
+    /* At this point regs[15] is the address just after the BLXNS */
+    uint32_t nextinst = env->regs[15] | 1;
+    uint32_t sp = env->regs[13] - 8;
+    uint32_t saved_psr;
+
+    /* translate.c will have made BLXNS UNDEF unless we're secure */
+    assert(env->v7m.secure);
+
+    if (dest & 1) {
+        /* target is Secure, so this is just a normal BLX,
+         * except that the low bit doesn't indicate Thumb/not.
+         */
+        env->regs[14] = nextinst;
+        env->thumb = 1;
+        env->regs[15] = dest & ~1;
+        return;
+    }
+
+    /* Target is non-secure: first push a stack frame */
+    if (!QEMU_IS_ALIGNED(sp, 8)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "BLXNS with misaligned SP is UNPREDICTABLE\n");
+    }
+
+    saved_psr = env->v7m.exception;
+    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
+        saved_psr |= XPSR_SFPA;
+    }
+
+    /* Note that these stores can throw exceptions on MPU faults */
+    cpu_stl_data(env, sp, nextinst);
+    cpu_stl_data(env, sp + 4, saved_psr);
+
+    env->regs[13] = sp;
+    env->regs[14] = 0xfeffffff;
+    if (arm_v7m_is_handler_mode(env)) {
+        /* Write a dummy value to IPSR, to avoid leaking the current secure
+         * exception number to non-secure code. This is guaranteed not
+         * to cause write_v7m_exception() to actually change stacks.
+         */
+        write_v7m_exception(env, 1);
+    }
+    switch_v7m_security_state(env, 0);
+    env->thumb = 1;
+    env->regs[15] = dest;
+}
+
 static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
                                 bool spsel)
 {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_bxns(DisasContext *s, int rm)
     s->base.is_jmp = DISAS_EXIT;
 }

+static inline void gen_blxns(DisasContext *s, int rm)
+{
+    TCGv_i32 var = load_reg(s, rm);
+
+    /* We don't need to sync condexec state, for the same reason as bxns.
+     * We do however need to set the PC, because the blxns helper reads it.
+     * The blxns helper may throw an exception.
+     */
+    gen_set_pc_im(s, s->pc);
+    gen_helper_v7m_blxns(cpu_env, var);
+    tcg_temp_free_i32(var);
+    s->base.is_jmp = DISAS_EXIT;
+}
+
 /* Variant of store_reg which uses branch&exchange logic when storing
    to r15 in ARM architecture v7 and above. The source must be a temporary
    and will be marked as dead. */
@@ -XXX,XX +XXX,XX @@ static void disas_thumb_insn(CPUARMState *env, DisasContext *s)
                 goto undef;
             }
             if (link) {
-                /* BLXNS: not yet implemented */
-                goto undef;
+                gen_blxns(s, rm);
             } else {
                 gen_bxns(s, rm);
             }
--
2.7.4


As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
callback. We'll need this for subpage_accepts().

We could take the approach we used with the read and write
callbacks and add a new _with_attrs version, but since there
are so few implementations of the accepts hook we just change
them all.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
---
include/exec/memory.h | 3 ++-
exec.c | 9 ++++++---
hw/hppa/dino.c | 3 ++-
hw/nvram/fw_cfg.c | 12 ++++++++----
hw/scsi/esp.c | 3 ++-
hw/xen/xen_pt_msi.c | 3 ++-
memory.c | 5 +++--
7 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
          * as a machine check exception).
          */
         bool (*accepts)(void *opaque, hwaddr addr,
-                        unsigned size, bool is_write);
+                        unsigned size, bool is_write,
+                        MemTxAttrs attrs);
     } valid;
     /* Internal implementation constraints: */
     struct {
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
 }

 static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write;
 }
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
 }

 static bool subpage_accepts(void *opaque, hwaddr addr,
-                            unsigned len, bool is_write)
+                            unsigned len, bool is_write,
+                            MemTxAttrs attrs)
 {
     subpage_t *subpage = opaque;
 #if defined(DEBUG_SUBPAGE)
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
 }

 static bool readonly_mem_accepts(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write;
 }
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/hppa/dino.c
+++ b/hw/hppa/dino.c
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
 }

 static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
-                                unsigned size, bool is_write)
+                                unsigned size, bool is_write,
+                                MemTxAttrs attrs)
 {
     switch (addr) {
     case DINO_IAR0:
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/nvram/fw_cfg.c
+++ b/hw/nvram/fw_cfg.c
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
 }

 static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
                          (size == 8 && addr == 0));
 }

 static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
-                                  unsigned size, bool is_write)
+                                  unsigned size, bool is_write,
+                                  MemTxAttrs attrs)
 {
     return addr == 0;
 }
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
 }

 static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write && size == 2;
 }
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
 }

 static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
-                              unsigned size, bool is_write)
+                              unsigned size, bool is_write,
+                              MemTxAttrs attrs)
 {
     return (size == 1) || (is_write && size == 2);
 }
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/scsi/esp.c
+++ b/hw/scsi/esp.c
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
 }

 static bool esp_mem_accepts(void *opaque, hwaddr addr,
-                            unsigned size, bool is_write)
+                            unsigned size, bool is_write,
+                            MemTxAttrs attrs)
 {
     return (size == 1) || (is_write && size == 4);
 }
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/xen/xen_pt_msi.c
+++ b/hw/xen/xen_pt_msi.c
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
 }

 static bool pci_msix_accepts(void *opaque, hwaddr addr,
-                             unsigned size, bool is_write)
+                             unsigned size, bool is_write,
+                             MemTxAttrs attrs)
 {
     return !(addr & (size - 1));
 }
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
 }

 static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
-                                   unsigned size, bool is_write)
+                                   unsigned size, bool is_write,
+                                   MemTxAttrs attrs)
 {
     return false;
 }
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
     access_size = MAX(MIN(size, access_size_max), access_size_min);
     for (i = 0; i < size; i += access_size) {
         if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
-                                    is_write)) {
+                                    is_write, attrs)) {
             return false;
         }
     }
--
2.7.4

2.17.1

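The payoff of giving the accepts hook the transaction attributes is that
a device can refuse accesses based on who issued them, not just on size
and direction. A hypothetical accepts implementation (not a device in
this series) might look like:

    static bool my_dev_mem_accepts(void *opaque, hwaddr addr,
                                   unsigned size, bool is_write,
                                   MemTxAttrs attrs)
    {
        /* Only accept 4-byte accesses, and only from Secure
         * transactions; anything else gets the bus-error behaviour
         * selected by the region's MemoryRegionOps.
         */
        return size == 4 && attrs.secure;
    }

None of the implementations converted by this patch look at attrs yet;
the immediate consumer is subpage_accepts(), which the next patch teaches
to forward the attributes on recursion.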
Secure function return happens when a non-secure function has been
called using BLXNS and so has a particular magic LR value (either
0xfefffffe or 0xfeffffff). The function return via BX behaves
specially when the new PC value is this magic value, in the same
way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function
return magic number as well, and perform the function-return
unstacking in do_v7m_exception_exit().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
---
target/arm/internals.h | 7 +++
target/arm/helper.c | 115 +++++++++++++++++++++++++++++++++++++++++++++----
target/arm/translate.c | 14 +++++-
3 files changed, 126 insertions(+), 10 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, DCRS, 5, 1)
 FIELD(V7M_EXCRET, S, 6, 1)
 FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */

+/* Minimum value which is a magic number for exception return */
+#define EXC_RETURN_MIN_MAGIC 0xff000000
+/* Minimum number which is a magic number for function or exception return
+ * when using v8M security extension
+ */
+#define FNC_RETURN_MIN_MAGIC 0xfefffffe
+
 /* We use a few fake FSR values for internal purposes in M profile.
  * M profile cores don't have A/R format FSRs, but currently our
  * get_phys_addr() code assumes A/R profile and reports failures via
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
      * - if the return value is a magic value, do exception return (like BX)
      * - otherwise bit 0 of the return value is the target security state
      */
-    if (dest >= 0xff000000) {
+    uint32_t min_magic;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* Covers FNC_RETURN and EXC_RETURN magic */
+        min_magic = FNC_RETURN_MIN_MAGIC;
+    } else {
+        /* EXC_RETURN magic only */
+        min_magic = EXC_RETURN_MIN_MAGIC;
+    }
+
+    if (dest >= min_magic) {
         /* This is an exception return magic value; put it where
          * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
          * Note that if we ever add gen_ss_advance() singlestep support to
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     bool exc_secure = false;
     bool return_to_secure;

-    /* We can only get here from an EXCP_EXCEPTION_EXIT, and
-     * gen_bx_excret() enforces the architectural rule
-     * that jumps to magic addresses don't have magic behaviour unless
-     * we're in Handler mode (compare pseudocode BXWritePC()).
+    /* If we're not in Handler mode then jumps to magic exception-exit
+     * addresses don't have magic behaviour. However for the v8M
+     * security extensions the magic secure-function-return has to
+     * work in thread mode too, so to avoid doing an extra check in
+     * the generated code we allow exception-exit magic to also cause the
+     * internal exception and bring us here in thread mode. Correct code
+     * will never try to do this (the following insn fetch will always
+     * fault) so we the overhead of having taken an unnecessary exception
+     * doesn't matter.
      */
-    assert(arm_v7m_is_handler_mode(env));
+    if (!arm_v7m_is_handler_mode(env)) {
+        return;
+    }

     /* In the spec pseudocode ExceptionReturn() is called directly
      * from BXWritePC() and gets the full target PC value including
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
 }

+static bool do_v7m_function_return(ARMCPU *cpu)
+{
+    /* v8M security extensions magic function return.
+     * We may either:
+     *  (1) throw an exception (longjump)
+     *  (2) return true if we successfully handled the function return
+     *  (3) return false if we failed a consistency check and have
+     *      pended a UsageFault that needs to be taken now
+     *
+     * At this point the magic return value is split between env->regs[15]
+     * and env->thumb. We don't bother to reconstitute it because we don't
+     * need it (all values are handled the same way).
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t newpc, newpsr, newpsr_exc;
+
+    qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
+
+    {
+        bool threadmode, spsel;
+        TCGMemOpIdx oi;
+        ARMMMUIdx mmu_idx;
+        uint32_t *frame_sp_p;
+        uint32_t frameptr;
+
+        /* Pull the return address and IPSR from the Secure stack */
+        threadmode = !arm_v7m_is_handler_mode(env);
+        spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
+
+        frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
+        frameptr = *frame_sp_p;
+
+        /* These loads may throw an exception (for MPU faults). We want to
+         * do them as secure, so work out what MMU index that is.
+         */
+        mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
+        oi = make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx));
+        newpc = helper_le_ldul_mmu(env, frameptr, oi, 0);
+        newpsr = helper_le_ldul_mmu(env, frameptr + 4, oi, 0);
+
+        /* Consistency checks on new IPSR */
+        newpsr_exc = newpsr & XPSR_EXCP;
+        if (!((env->v7m.exception == 0 && newpsr_exc == 0) ||
+              (env->v7m.exception == 1 && newpsr_exc != 0))) {
+            /* Pend the fault and tell our caller to take it */
+            env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                    env->v7m.secure);
+            qemu_log_mask(CPU_LOG_INT,
+                          "...taking INVPC UsageFault: "
+                          "IPSR consistency check failed\n");
+            return false;
+        }
+
+        *frame_sp_p = frameptr + 8;
+    }
+
+    /* This invalidates frame_sp_p */
+    switch_v7m_security_state(env, true);
+    env->v7m.exception = newpsr_exc;
+    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+    if (newpsr & XPSR_SFPA) {
+        env->v7m.control[M_REG_S] |= R_V7M_CONTROL_SFPA_MASK;
+    }
+    xpsr_write(env, 0, XPSR_IT);
+    env->thumb = newpc & 1;
+    env->regs[15] = newpc & ~1;
+
+    qemu_log_mask(CPU_LOG_INT, "...function return successful\n");
+    return true;
+}
+
 static void arm_log_exception(int idx)
 {
     if (qemu_loglevel_mask(CPU_LOG_INT)) {
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
     case EXCP_IRQ:
         break;
     case EXCP_EXCEPTION_EXIT:
-        do_v7m_exception_exit(cpu);
-        return;
+        if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
+            /* Must be v8M security extension function return */
+            assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
+            assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
+            if (do_v7m_function_return(cpu)) {
+                return;
+            }
+        } else {
+            do_v7m_exception_exit(cpu);
+            return;
+        }
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
         return; /* Never happens.  Keep compiler happy.  */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
      * s->base.is_jmp that we need to do the rest of the work later.
      */
     gen_bx(s, var);
-    if (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M)) {
+    if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) ||
+        (s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) {
         s->base.is_jmp = DISAS_BX_EXCRET;
     }
 }
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret_final_code(DisasContext *s)
 {
     /* Generate the code to finish possible exception return and end the TB */
     TCGLabel *excret_label = gen_new_label();
+    uint32_t min_magic;
+
+    if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
+        /* Covers FNC_RETURN and EXC_RETURN magic */
+        min_magic = FNC_RETURN_MIN_MAGIC;
+    } else {
+        /* EXC_RETURN magic only */
+        min_magic = EXC_RETURN_MIN_MAGIC;
+    }

     /* Is the new PC value in the magic range indicating exception return? */
-    tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], 0xff000000, excret_label);
+    tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label);
     /* No: end the TB as we would for a DISAS_JMP */
     if (is_singlestepping(s)) {
         gen_singlestep_exception(s);
--
2.7.4


As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_access_valid().
Its callers now all have an attrs value to hand, so we can
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
---
exec.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
 static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
                                   const uint8_t *buf, int len);
 static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
-                                  bool is_write);
+                                  bool is_write, MemTxAttrs attrs);

 static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
                                 unsigned len, MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
 #endif

     return flatview_access_valid(subpage->fv, addr + subpage->base,
-                                 len, is_write);
+                                 len, is_write, attrs);
 }

 static const MemoryRegionOps subpage_ops = {
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
 }

 static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
-                                  bool is_write)
+                                  bool is_write, MemTxAttrs attrs)
 {
     MemoryRegion *mr;
     hwaddr l, xlat;
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
         mr = flatview_translate(fv, addr, &xlat, &l, is_write);
         if (!memory_access_is_direct(mr, is_write)) {
             l = memory_access_size(mr, l, addr);
-            /* When our callers all have attrs we'll pass them through here */
-            if (!memory_region_access_valid(mr, xlat, l, is_write,
-                                            MEMTXATTRS_UNSPECIFIED)) {
+            if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
                 return false;
             }
         }
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,

     rcu_read_lock();
     fv = address_space_to_flatview(as);
-    result = flatview_access_valid(fv, addr, len, is_write);
+    result = flatview_access_valid(fv, addr, len, is_write, attrs);
     rcu_read_unlock();
     return result;
--
2.7.4

2.17.1

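A note on the magic PC ranges tested above: EXC_RETURN values occupy 0xff000000 and up, and the v8M FNC_RETURN values sit immediately below that range, so a single unsigned greater-or-equal comparison against the lower bound catches both kinds of magic return. A minimal standalone sketch of the classification, assuming the constant values from QEMU's target/arm/internals.h (FNC_RETURN_MIN_MAGIC 0xfefffffe, EXC_RETURN_MIN_MAGIC 0xff000000):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed values, per target/arm/internals.h */
    #define FNC_RETURN_MIN_MAGIC 0xfefffffeu
    #define EXC_RETURN_MIN_MAGIC 0xff000000u

    static const char *classify(uint32_t pc)
    {
        if (pc >= EXC_RETURN_MIN_MAGIC) {
            return "exception return";
        }
        if (pc >= FNC_RETURN_MIN_MAGIC) {
            return "v8M secure function return";
        }
        return "ordinary branch target";
    }

    int main(void)
    {
        printf("%s\n", classify(0xfffffff9u)); /* a typical EXC_RETURN */
        printf("%s\n", classify(0xfeffffffu)); /* FNC_RETURN with bit 0 set */
        printf("%s\n", classify(0x08000400u)); /* a normal code address */
        return 0;
    }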
A few Thumb instructions are always unconditional even inside an
IT block (as opposed to being UNPREDICTABLE if used inside an
IT block): BKPT, the v8M SG instruction, and the A profile
HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail
code generation (though the IT state still advances as usual and
subsequent insns in the IT block may be conditional).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
        in init_disas_context by adjusting max_insns.  */
 }
 
+static bool thumb_insn_is_unconditional(DisasContext *s, uint32_t insn)
+{
+    /* Return true if this Thumb insn is always unconditional,
+     * even inside an IT block. This is true of only a very few
+     * instructions: BKPT, HLT, and SG.
+     *
+     * A larger class of instructions are UNPREDICTABLE if used
+     * inside an IT block; we do not need to detect those here, because
+     * what we do by default (perform the cc check and update the IT
+     * bits state machine) is a permitted CONSTRAINED UNPREDICTABLE
+     * choice for those situations.
+     *
+     * insn is either a 16-bit or a 32-bit instruction; the two are
+     * distinguishable because for the 16-bit case the top 16 bits
+     * are zeroes, and that isn't a valid 32-bit encoding.
+     */
+    if ((insn & 0xffffff00) == 0xbe00) {
+        /* BKPT */
+        return true;
+    }
+
+    if ((insn & 0xffffffc0) == 0xba80 && arm_dc_feature(s, ARM_FEATURE_V8) &&
+        !arm_dc_feature(s, ARM_FEATURE_M)) {
+        /* HLT: v8A only. This is unconditional even when it is going to
+         * UNDEF; see the v8A ARM ARM DDI0487B.a H3.3.
+         * For v7 cores this was a plain old undefined encoding and so
+         * honours its cc check. (We might be using the encoding as
+         * a semihosting trap, but we don't change the cc check behaviour
+         * on that account, because a debugger connected to a real v7A
+         * core and emulating semihosting traps by catching the UNDEF
+         * exception would also only see cases where the cc check passed.
+         * No guest code should be trying to do a HLT semihosting trap
+         * in an IT block anyway.
+         */
+        return true;
+    }
+
+    if (insn == 0xe97fe97f && arm_dc_feature(s, ARM_FEATURE_V8) &&
+        arm_dc_feature(s, ARM_FEATURE_M)) {
+        /* SG: v8M only */
+        return true;
+    }
+
+    return false;
+}
+
 static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
 {
     DisasContext *dc = container_of(dcbase, DisasContext, base);
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         dc->pc += 2;
     }
 
-    if (dc->condexec_mask) {
+    if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
         uint32_t cond = dc->condexec_cond;
 
         if (cond != 0x0e) { /* Skip conditional when condition is AL. */
--
2.7.4

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_translate(); all its
callers now have attrs available.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
---
 include/exec/memory.h |  7 ++++---
 exec.c                | 17 +++++++++--------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
  */
 MemoryRegion *flatview_translate(FlatView *fv,
                                  hwaddr addr, hwaddr *xlat,
-                                 hwaddr *len, bool is_write);
+                                 hwaddr *len, bool is_write,
+                                 MemTxAttrs attrs);
 
 static inline MemoryRegion *address_space_translate(AddressSpace *as,
                                                     hwaddr addr, hwaddr *xlat,
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
                                                     MemTxAttrs attrs)
 {
     return flatview_translate(address_space_to_flatview(as),
-                              addr, xlat, len, is_write);
+                              addr, xlat, len, is_write, attrs);
 }
 
 /* address_space_access_valid: check for validity of accessing an address
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
         rcu_read_lock();
         fv = address_space_to_flatview(as);
         l = len;
-        mr = flatview_translate(fv, addr, &addr1, &l, false);
+        mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
         if (len == l && memory_access_is_direct(mr, false)) {
             ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
             memcpy(buf, ptr, len);
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
 
 /* Called from RCU critical section */
 MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
-                                 hwaddr *plen, bool is_write)
+                                 hwaddr *plen, bool is_write,
+                                 MemTxAttrs attrs)
 {
     MemoryRegion *mr;
     MemoryRegionSection section;
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
         }
 
         l = len;
-        mr = flatview_translate(fv, addr, &addr1, &l, true);
+        mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
     }
 
     return result;
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
     MemTxResult result = MEMTX_OK;
 
     l = len;
-    mr = flatview_translate(fv, addr, &addr1, &l, true);
+    mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
     result = flatview_write_continue(fv, addr, attrs, buf, len,
                                      addr1, l, mr);
 
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
         }
 
         l = len;
-        mr = flatview_translate(fv, addr, &addr1, &l, false);
+        mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
     }
 
     return result;
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
     MemoryRegion *mr;
 
     l = len;
-    mr = flatview_translate(fv, addr, &addr1, &l, false);
+    mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
     return flatview_read_continue(fv, addr, attrs, buf, len,
                                   addr1, l, mr);
 }
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
 
     while (len > 0) {
         l = len;
-        mr = flatview_translate(fv, addr, &xlat, &l, is_write);
+        mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
         if (!memory_access_is_direct(mr, is_write)) {
             l = memory_access_size(mr, l, addr);
             if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
 
         len = target_len;
         this_mr = flatview_translate(fv, addr, &xlat,
-                                     &len, is_write);
+                                     &len, is_write, attrs);
         if (this_mr != mr || xlat != base + done) {
             return done;
         }
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
     l = len;
     rcu_read_lock();
     fv = address_space_to_flatview(as);
-    mr = flatview_translate(fv, addr, &xlat, &l, is_write);
+    mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
 
     if (!memory_access_is_direct(mr, is_write)) {
         if (atomic_xchg(&bounce.in_use, true)) {
--
2.17.1
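The masks in thumb_insn_is_unconditional() above can be sanity-checked in isolation. A small standalone sketch applying the same encodings (the mask values come straight from the patch; the ARM_FEATURE_V8 and M-profile gating is deliberately omitted, so this only demonstrates the pattern matching):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool matches_bkpt_hlt_or_sg(uint32_t insn)
    {
        if ((insn & 0xffffff00) == 0xbe00) {
            return true;                 /* BKPT #imm8, 16-bit */
        }
        if ((insn & 0xffffffc0) == 0xba80) {
            return true;                 /* HLT #imm6, 16-bit, v8A */
        }
        if (insn == 0xe97fe97f) {
            return true;                 /* SG, 32-bit, v8M */
        }
        return false;
    }

    int main(void)
    {
        printf("BKPT #0 : %d\n", matches_bkpt_hlt_or_sg(0xbe00));     /* 1 */
        printf("HLT #5  : %d\n", matches_bkpt_hlt_or_sg(0xba85));     /* 1 */
        printf("SG      : %d\n", matches_bkpt_hlt_or_sg(0xe97fe97f)); /* 1 */
        printf("ADDS r0 : %d\n", matches_bkpt_hlt_or_sg(0x3001));     /* 0 */
        return 0;
    }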
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_get_iotlb_entry().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-12-peter.maydell@linaro.org
---
 include/exec/memory.h | 2 +-
 exec.c                | 2 +-
 hw/virtio/vhost.c     | 3 ++-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache);
  * entry. Should be called from an RCU critical section.
  */
 IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
-                                            bool is_write);
+                                            bool is_write, MemTxAttrs attrs);
 
 /* address_space_translate: translate an address range into an address space
  * into a MemoryRegion and an address range into that section.  Should be
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
 
 /* Called from RCU critical section */
 IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
-                                            bool is_write)
+                                            bool is_write, MemTxAttrs attrs)
 {
     MemoryRegionSection section;
     hwaddr xlat, page_mask;
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -XXX,XX +XXX,XX @@ int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
     trace_vhost_iotlb_miss(dev, 1);
 
     iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
-                                          iova, write);
+                                          iova, write,
+                                          MEMTXATTRS_UNSPECIFIED);
     if (iotlb.target_as != NULL) {
         ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
                                          &uaddr, &len);
--
2.17.1
The code which implements the Thumb1 split BL/BLX instructions
is guarded by a check on "not M or THUMB2". All we really need
to check here is "not THUMB2" (and we assume that elsewhere too,
eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores
have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2.
(v6M implements a very restricted subset of Thumb2, but we
can cross that bridge when we get to it with appropriate
feature bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_thumb2_insn(CPUARMState *env, DisasContext *s, uint16_t insn_hw
     int conds;
     int logic_cc;
 
-    if (!(arm_dc_feature(s, ARM_FEATURE_THUMB2)
-          || arm_dc_feature(s, ARM_FEATURE_M))) {
+    if (!arm_dc_feature(s, ARM_FEATURE_THUMB2)) {
         /* Thumb-1 cores may need to treat bl and blx as a pair of
            16-bit instructions to get correct prefetch abort behavior.  */
         insn = insn_hw1;
--
2.7.4

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_do_translate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-13-peter.maydell@linaro.org
---
 exec.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ unassigned:
  * @is_write: whether the translation operation is for write
  * @is_mmio: whether this can be MMIO, set true if it can
  * @target_as: the address space targeted by the IOMMU
+ * @attrs: memory transaction attributes
  *
  * This function is called from RCU critical section
  */
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
                                                  hwaddr *page_mask_out,
                                                  bool is_write,
                                                  bool is_mmio,
-                                                 AddressSpace **target_as)
+                                                 AddressSpace **target_as,
+                                                 MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     IOMMUMemoryRegion *iommu_mr;
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
      * but page mask.
      */
     section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
-                                    NULL, &page_mask, is_write, false, &as);
+                                    NULL, &page_mask, is_write, false, &as,
+                                    attrs);
 
     /* Illegal translation */
     if (section.mr == &io_mem_unassigned) {
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
 
     /* This can be MMIO, so setup MMIO bit. */
     section = flatview_do_translate(fv, addr, xlat, plen, NULL,
-                                    is_write, true, &as);
+                                    is_write, true, &as, attrs);
     mr = section.mr;
 
     if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
--
2.17.1
Recent changes have left insn_crosses_page() more complicated
than it needed to be:
 * it's only called from thumb_tr_translate_insn() so we know
   for certain that we're looking at a Thumb insn
 * the caller's check for dc->pc >= dc->next_page_start - 3
   means that dc->pc can't possibly be 4 aligned, so there's
   no need to check that (the check was partly there to ensure
   that we didn't treat an ARM insn as Thumb, I think)
 * we now have thumb_insn_is_16bit() which lets us do a precise
   check of the length of the next insn, rather than opencoding
   an inaccurate check

Simplify it down to just loading the first half of the insn
and calling thumb_insn_is_16bit() on it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
 {
     /* Return true if the insn at dc->pc might cross a page boundary.
      * (False positives are OK, false negatives are not.)
+     * We know this is a Thumb insn, and our caller ensures we are
+     * only called if dc->pc is less than 4 bytes from the page
+     * boundary, so we cross the page if the first 16 bits indicate
+     * that this is a 32 bit insn.
      */
-    uint16_t insn;
+    uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);
 
-    if ((s->pc & 3) == 0) {
-        /* At a 4-aligned address we can't be crossing a page */
-        return false;
-    }
-
-    /* This must be a Thumb insn */
-    insn = arm_lduw_code(env, s->pc, s->sctlr_b);
-
-    if ((insn >> 11) >= 0x1d) {
-        /* Top five bits 0b11101 / 0b11110 / 0b11111 : this is the
-         * First half of a 32-bit Thumb insn. Thumb-1 cores might
-         * end up actually treating this as two 16-bit insns (see the
-         * code at the start of disas_thumb2_insn()) but we don't bother
-         * to check for that as it is unlikely, and false positives here
-         * are harmless.
-         */
-        return true;
-    }
-    /* Definitely a 16-bit insn, can't be crossing a page. */
-    return false;
+    return !thumb_insn_is_16bit(s, insn);
 }
 
 static int arm_tr_init_disas_context(DisasContextBase *dcbase,
--
2.7.4

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate_iommu().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
---
 exec.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
  * @is_write: whether the translation operation is for write
  * @is_mmio: whether this can be MMIO, set true if it can
  * @target_as: the address space targeted by the IOMMU
+ * @attrs: transaction attributes
  *
  * This function is called from RCU critical section. It is the common
  * part of flatview_do_translate and address_space_translate_cached.
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
                                                          hwaddr *page_mask_out,
                                                          bool is_write,
                                                          bool is_mmio,
-                                                         AddressSpace **target_as)
+                                                         AddressSpace **target_as,
+                                                         MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     hwaddr page_mask = (hwaddr)-1;
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
         return address_space_translate_iommu(iommu_mr, xlat,
                                              plen_out, page_mask_out,
                                              is_write, is_mmio,
-                                             target_as);
+                                             target_as, attrs);
     }
     if (page_mask_out) {
         /* Not behind an IOMMU, use default page size. */
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(
 
     section = address_space_translate_iommu(iommu_mr, xlat, plen,
                                             NULL, is_write, true,
-                                            &target_as);
+                                            &target_as, attrs);
     return section.mr;
 }
 
--
2.17.1
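The simplified insn_crosses_page() above leans on one encoding property: a Thumb halfword whose top five bits are 0b11101, 0b11110 or 0b11111 is the first half of a 32-bit insn, and anything else is a complete 16-bit insn. A reduced sketch of that rule (the real thumb_insn_is_16bit() also copes with Thumb-1 cores treating BL/BLX as a pair of 16-bit insns, which is omitted here):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* True if hw is the first halfword of a 32-bit Thumb insn. */
    static bool first_half_of_32bit(uint16_t hw)
    {
        return (hw >> 11) >= 0x1d;  /* 0b11101, 0b11110, 0b11111 */
    }

    int main(void)
    {
        printf("0xf7ff (BL prefix): %d\n", first_half_of_32bit(0xf7ff)); /* 1 */
        printf("0x4770 (BX lr)    : %d\n", first_half_of_32bit(0x4770)); /* 0 */
        return 0;
    }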
Add the M profile secure MMU index values to the switch in
get_a32_user_mem_index() so that LDRT/STRT work correctly
rather than asserting at translate time.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org
---
 target/arm/translate.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
     case ARMMMUIdx_MPriv:
     case ARMMMUIdx_MNegPri:
         return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
+    case ARMMMUIdx_MSUser:
+    case ARMMMUIdx_MSPriv:
+    case ARMMMUIdx_MSNegPri:
+        return arm_to_core_mmu_idx(ARMMMUIdx_MSUser);
     case ARMMMUIdx_S2NS:
     default:
         g_assert_not_reached();
--
2.7.4

Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
and friends.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
---
 include/migration/vmstate.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index XXXXXXX..XXXXXXX 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
 #define VMSTATE_BOOL_ARRAY(_f, _s, _n)                                \
     VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)
 
+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num)                  \
+    VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
+
 #define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v)                        \
     VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)
 
--
2.17.1
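For readers who have not used the SUB_ARRAY family: the new macro migrates a contiguous slice of a bool array rather than the whole thing. A hedged usage sketch (DemoState and vmstate_demo are invented for illustration; only the macro itself comes from the patch above, and the QEMU migration headers are assumed):

    #include "migration/vmstate.h"

    /* Hypothetical device state: only flags[4]..flags[11] are migrated. */
    typedef struct DemoState {
        bool flags[16];
    } DemoState;

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_BOOL_SUB_ARRAY(flags, DemoState, 4, 8),
            VMSTATE_END_OF_LIST()
        }
    };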
From: Cédric Le Goater <clg@kaod.org>

Initially from "Anton D. Kachalov" <mouse@yandex-team.ru>, but the SoB
was missing.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20170920064915.30027-1-clg@kaod.org
[clg: change commit log and subject
      replace UL suffix by ULL ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/watchdog/wdt_aspeed.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/watchdog/wdt_aspeed.c b/hw/watchdog/wdt_aspeed.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/watchdog/wdt_aspeed.c
+++ b/hw/watchdog/wdt_aspeed.c
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_wdt_read(void *opaque, hwaddr offset, unsigned size)
 
 static void aspeed_wdt_reload(AspeedWDTState *s, bool pclk)
 {
-    uint32_t reload;
+    uint64_t reload;
 
     if (pclk) {
         reload = muldiv64(s->regs[WDT_RELOAD_VALUE], NANOSECONDS_PER_SECOND,
                           s->pclk_freq);
     } else {
-        reload = s->regs[WDT_RELOAD_VALUE] * 1000;
+        reload = s->regs[WDT_RELOAD_VALUE] * 1000ULL;
     }
 
     if (aspeed_wdt_is_enabled(s)) {
--
2.7.4

From: Shannon Zhao <zhaoshenglong@huawei.com>

acpi_data_push uses g_array_set_size to resize the memory. If there is
not enough contiguous memory, the array is reallocated and its address
changes, so a previously fetched pointer can no longer be used. The
pointer must be updated after the push and the new one used.

Also, the previous code wrongly used the le32-converted value of
iort->node_offset for subsequent computations, which produces an
incorrect value if the host is not little endian. Use the unconverted
value instead.

Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     AcpiIortItsGroup *its;
     AcpiIortTable *iort;
     AcpiIortSmmu3 *smmu;
-    size_t node_size, iort_length, smmu_offset = 0;
+    size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
     AcpiIortRC *rc;
 
     iort = acpi_data_push(table_data, sizeof(*iort));
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
 
     iort_length = sizeof(*iort);
     iort->node_count = cpu_to_le32(nb_nodes);
-    iort->node_offset = cpu_to_le32(sizeof(*iort));
+    /*
+     * Use a copy in case table_data->data moves during acpi_data_push
+     * operations.
+     */
+    iort_node_offset = sizeof(*iort);
+    iort->node_offset = cpu_to_le32(iort_node_offset);
 
     /* ITS group node */
     node_size = sizeof(*its) + sizeof(uint32_t);
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         int irq = vms->irqmap[VIRT_SMMU];
 
         /* SMMUv3 node */
-        smmu_offset = iort->node_offset + node_size;
+        smmu_offset = iort_node_offset + node_size;
         node_size = sizeof(*smmu) + sizeof(*idmap);
         iort_length += node_size;
         smmu = acpi_data_push(table_data, node_size);
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         idmap->id_count = cpu_to_le32(0xFFFF);
         idmap->output_base = 0;
         /* output IORT node is the ITS group node (the first node) */
-        idmap->output_reference = cpu_to_le32(iort->node_offset);
+        idmap->output_reference = cpu_to_le32(iort_node_offset);
     }
 
     /* Root Complex Node */
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         idmap->output_reference = cpu_to_le32(smmu_offset);
     } else {
         /* output IORT node is the ITS group node (the first node) */
-        idmap->output_reference = cpu_to_le32(iort->node_offset);
+        idmap->output_reference = cpu_to_le32(iort_node_offset);
     }
 
+    /*
+     * Update the pointer address in case table_data->data moves during above
+     * acpi_data_push operations.
+     */
+    iort = (AcpiIortTable *)(table_data->data + iort_start);
     iort->length = cpu_to_le32(iort_length);
 
     build_header(linker, table_data, (void *)(table_data->data + iort_start),
--
2.17.1
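Both fixes above are instances of well-known C hazards. The IORT one is the grow-a-buffer pattern: acpi_data_push() can reallocate table_data->data, so a pointer fetched before a push may dangle afterwards, which is why the code now recomputes iort and keeps offsets in plain local variables. The same shape in portable C, with realloc standing in for g_array_set_size (error handling omitted for brevity):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(16);
        char *hdr = buf;               /* pointer into the buffer */

        buf = realloc(buf, 1 << 20);   /* may move the allocation */
        /* 'hdr' may now dangle; refresh it from the new base: */
        hdr = buf;

        memset(hdr, 0, 16);
        free(buf);
        return 0;
    }

The watchdog fix is the sibling integer hazard: s->regs[WDT_RELOAD_VALUE] * 1000 multiplies two 32-bit operands in 32-bit arithmetic and wraps before the result is widened, so the 1000ULL suffix (together with the uint64_t reload) forces the multiply itself to happen in 64 bits.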
From: Shannon Zhao <zhaoshenglong@huawei.com>

kvm_irqchip_create, called by kvm_init, already calls kvm_init_irq_routing
to initialize the global capability variables. If we call
kvm_init_irq_routing again in the GIC realize function, the previously
allocated memory leaks.

Fix this by deleting the unnecessary call.

Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527750994-14360-1-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic_kvm.c   | 1 -
 hw/intc/arm_gicv3_kvm.c | 1 -
 2 files changed, 2 deletions(-)

diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
 
     if (kvm_has_gsi_routing()) {
         /* set up irq routing */
-        kvm_init_irq_routing(kvm_state);
         for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
             kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
         }
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
 
     if (kvm_has_gsi_routing()) {
         /* set up irq routing */
-        kvm_init_irq_routing(kvm_state);
         for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
             kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
         }
--
2.17.1
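The leak fixed above has a simple shape: an init routine that unconditionally allocates global state leaks the first allocation when a second caller runs it again. A minimal reproduction of the pattern (names are illustrative, not QEMU's):

    #include <stdlib.h>

    static int *routing_table;

    /* No guard against a second call: calling this twice leaks the
     * first table, which is why the duplicate call was removed.
     */
    static void init_irq_routing(void)
    {
        routing_table = calloc(1024, sizeof(*routing_table));
    }

    int main(void)
    {
        init_irq_routing();   /* done once by the core setup */
        init_irq_routing();   /* a second call from device realize: leaks */
        free(routing_table);
        return 0;
    }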