Latest arm queue, half minor code cleanups and half minor
bug fixes.

-- PMM

The following changes since commit 5d0e5694470d2952b4f257bc985cac8c89b4fd92:

  Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging (2019-06-17 11:55:14 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190617

for you to fetch changes up to 1120827fa182f0e76226df7ffe7a86598d1df54f:

  target/arm: Only implement doubles if the FPU supports them (2019-06-17 15:15:06 +0100)

----------------------------------------------------------------
target-arm queue:
 * support large kernel images in bootloader (by avoiding
   putting the initrd over the top of them)
 * correctly disable FPU/DSP in the CPU for the mps2-an521, musca-a boards
 * arm_gicv3: Fix decoding of ID register range
 * arm_gicv3: GICD_TYPER.SecurityExtn is RAZ if GICD_CTLR.DS == 1
 * some code cleanups following on from the VFP decodetree conversion
 * Only implement doubles if the FPU supports them
   (so we now correctly model Cortex-M4, -M33 as single precision only)

----------------------------------------------------------------
Peter Maydell (24):
      hw/arm/boot: Don't assume RAM starts at address zero
      hw/arm/boot: Diagnose layouts that put initrd or DTB off the end of RAM
      hw/arm/boot: Avoid placing the initrd on top of the kernel
      hw/arm/boot: Honour image size field in AArch64 Image format kernels
      target/arm: Allow VFP and Neon to be disabled via a CPU property
      target/arm: Allow M-profile CPUs to disable the DSP extension via CPU property
      hw/arm/armv7m: Forward "vfp" and "dsp" properties to CPU
      hw/arm: Correctly disable FPU/DSP for some ARMSSE-based boards
      hw/intc/arm_gicv3: Fix decoding of ID register range
      hw/intc/arm_gicv3: GICD_TYPER.SecurityExtn is RAZ if GICD_CTLR.DS == 1
      target/arm: Move vfp_expand_imm() to translate.[ch]
      target/arm: Use vfp_expand_imm() for AArch32 VFP VMOV_imm
      target/arm: Stop using cpu_F0s for NEON_2RM_VABS_F
      target/arm: Stop using cpu_F0s for NEON_2RM_VNEG_F
      target/arm: Stop using cpu_F0s for NEON_2RM_VRINT*
      target/arm: Stop using cpu_F0s for NEON_2RM_VCVT[ANPM][US]
      target/arm: Stop using cpu_F0s for NEON_2RM_VRECPE_F and NEON_2RM_VRSQRTE_F
      target/arm: Stop using cpu_F0s for Neon f32/s32 VCVT
      target/arm: Stop using cpu_F0s in Neon VCVT fixed-point ops
      target/arm: stop using deprecated functions in NEON_2RM_VCVT_F16_F32
      target/arm: Stop using deprecated functions in NEON_2RM_VCVT_F32_F16
      target/arm: Remove unused cpu_F0s, cpu_F0d, cpu_F1s, cpu_F1d
      target/arm: Fix typos in trans function prototypes
      target/arm: Only implement doubles if the FPU supports them

 include/hw/arm/armsse.h        |   7 ++
 include/hw/arm/armv7m.h        |   4 +
 target/arm/cpu.h               |  12 +++
 target/arm/translate-a64.h     |   1 -
 target/arm/translate.h         |   7 ++
 hw/arm/armsse.c                |  58 +++++++---
 hw/arm/armv7m.c                |  18 ++++
 hw/arm/boot.c                  |  83 ++++++++++----
 hw/arm/musca.c                 |   8 ++
 hw/intc/arm_gicv3_dist.c       |  12 ++-
 hw/intc/arm_gicv3_redist.c     |   4 +-
 target/arm/cpu.c               | 179 ++++++++++++++++++++++++++++--
 target/arm/translate-a64.c     |  32 ------
 target/arm/translate-vfp.inc.c | 173 ++++++++++++++++++++++-------
 target/arm/translate.c         | 240 ++++++++++++++---------------------------
 target/arm/vfp.decode          |  10 +-
 16 files changed, 572 insertions(+), 276 deletions(-)


target-arm queue for rc1 -- these are all bug fixes.

thanks
-- PMM

The following changes since commit b9404bf592e7ba74180e1a54ed7a266ec6ee67f2:

  Merge remote-tracking branch 'remotes/dgilbert/tags/pull-hmp-20190715' into staging (2019-07-15 12:22:07 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190715

for you to fetch changes up to 51c9122e92b776a3f16af0b9282f1dc5012e2a19:

  target/arm: NS BusFault on vector table fetch escalates to NS HardFault (2019-07-15 14:17:04 +0100)

----------------------------------------------------------------
target-arm queue:
 * report ARMv8-A FP support for AArch32 -cpu max
 * hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory
 * hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]
 * hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO
 * hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO
 * hw/arm/virt: Fix non-secure flash mode
 * pl031: Correctly migrate state when using -rtc clock=host
 * fix regression that meant arm926 and arm1026 lost VFP
   double-precision support
 * v8M: NS BusFault on vector table fetch escalates to NS HardFault

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: report ARMv8-A FP support for AArch32 -cpu max

David Engraf (1):
      hw/arm/virt: Fix non-secure flash mode

Peter Maydell (3):
      pl031: Correctly migrate state when using -rtc clock=host
      target/arm: Set VFP-related MVFR0 fields for arm926 and arm1026
      target/arm: NS BusFault on vector table fetch escalates to NS HardFault

Philippe Mathieu-Daudé (5):
      hw/ssi/xilinx_spips: Convert lqspi_read() to read_with_attrs
      hw/ssi/xilinx_spips: Avoid AXI writes to the LQSPI linear memory
      hw/ssi/xilinx_spips: Avoid out-of-bound access to lqspi_buf[]
      hw/ssi/mss-spi: Avoid crash when reading empty RX FIFO
      hw/display/xlnx_dp: Avoid crash when reading empty RX FIFO

 include/hw/timer/pl031.h |  2 ++
 hw/arm/virt.c            |  2 +-
 hw/core/machine.c        |  1 +
 hw/display/xlnx_dp.c     | 15 +++++---
 hw/ssi/mss-spi.c         |  8 ++++-
 hw/ssi/xilinx_spips.c    | 43 +++++++++++++++-------
 hw/timer/pl031.c         | 92 +++++++++++++++++++++++++++++++++++++++++++++---
 target/arm/cpu.c         | 16 +++++++++
 target/arm/m_helper.c    | 21 ++++++++---
 9 files changed, 174 insertions(+), 26 deletions(-)
Deleted patch

In the Arm kernel/initrd loading code, in some places we make the
incorrect assumption that info->ram_size can be treated as the
address of the end of RAM, as for instance when we calculate the
available space for the initrd using "info->ram_size - info->initrd_start".
This is wrong, because many Arm boards (including "virt") specify
a non-zero info->loader_start to indicate that their RAM area
starts at a non-zero physical address.

Correct the places which make this incorrect assumption.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Message-id: 20190516144733.32399-2-peter.maydell@linaro.org
---
 hw/arm/boot.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
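As a quick illustration of the arithmetic being corrected here (a standalone
sketch; the base address and sizes are invented sample values, not taken from
the patch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Sample board: RAM starts at 1 GiB and is 512 MiB long */
        uint64_t loader_start = 0x40000000;   /* info->loader_start */
        uint64_t ram_size     = 0x20000000;   /* info->ram_size */
        uint64_t initrd_start = 0x48000000;   /* somewhere inside RAM */

        /* Buggy: treats ram_size as if it were the end address of RAM,
         * so the subtraction wraps for any initrd placed above 512 MiB. */
        uint64_t wrong = ram_size - initrd_start;

        /* Fixed: the end of RAM is base + size */
        uint64_t ram_end = loader_start + ram_size;
        uint64_t right = ram_end - initrd_start;

        printf("wrong 0x%" PRIx64 ", right 0x%" PRIx64 "\n", wrong, right);
        return 0;
    }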
18
19
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/boot.c
22
+++ b/hw/arm/boot.c
23
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
24
int elf_machine;
25
hwaddr entry;
26
static const ARMInsnFixup *primary_loader;
27
+ uint64_t ram_end = info->loader_start + info->ram_size;
28
29
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
30
primary_loader = bootloader_aarch64;
31
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
32
/* 32-bit ARM */
33
entry = info->loader_start + KERNEL_LOAD_ADDR;
34
kernel_size = load_image_targphys_as(info->kernel_filename, entry,
35
- info->ram_size - KERNEL_LOAD_ADDR,
36
- as);
37
+ ram_end - KERNEL_LOAD_ADDR, as);
38
is_linux = 1;
39
}
40
if (kernel_size < 0) {
41
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
42
if (info->initrd_filename) {
43
initrd_size = load_ramdisk_as(info->initrd_filename,
44
info->initrd_start,
45
- info->ram_size - info->initrd_start,
46
- as);
47
+ ram_end - info->initrd_start, as);
48
if (initrd_size < 0) {
49
initrd_size = load_image_targphys_as(info->initrd_filename,
50
info->initrd_start,
51
- info->ram_size -
52
+ ram_end -
53
info->initrd_start,
54
as);
55
}
56
--
57
2.20.1
58
59
Deleted patch

We calculate the locations in memory where we want to put the
initrd and the DTB based on the size of the kernel, since they
come after it. Add some explicit checks that these aren't off the
end of RAM entirely.

(At the moment the way we calculate the initrd_start means that
it can't ever be off the end of RAM, but that will change with
the next commit.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Message-id: 20190516144733.32399-3-peter.maydell@linaro.org
---
 hw/arm/boot.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
17
18
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/arm/boot.c
21
+++ b/hw/arm/boot.c
22
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
23
error_report("could not load kernel '%s'", info->kernel_filename);
24
exit(1);
25
}
26
+
27
+ if (kernel_size > info->ram_size) {
28
+ error_report("kernel '%s' is too large to fit in RAM "
29
+ "(kernel size %d, RAM size %" PRId64 ")",
30
+ info->kernel_filename, kernel_size, info->ram_size);
31
+ exit(1);
32
+ }
33
+
34
info->entry = entry;
35
if (is_linux) {
36
uint32_t fixupcontext[FIXUP_MAX];
37
38
if (info->initrd_filename) {
39
+
40
+ if (info->initrd_start >= ram_end) {
41
+ error_report("not enough space after kernel to load initrd");
42
+ exit(1);
43
+ }
44
+
45
initrd_size = load_ramdisk_as(info->initrd_filename,
46
info->initrd_start,
47
ram_end - info->initrd_start, as);
48
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
49
info->initrd_filename);
50
exit(1);
51
}
52
+ if (info->initrd_start + initrd_size > info->ram_size) {
53
+ error_report("could not load initrd '%s': "
54
+ "too big to fit into RAM after the kernel",
55
+ info->initrd_filename);
56
+ }
57
} else {
58
initrd_size = 0;
59
}
60
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
61
/* Place the DTB after the initrd in memory with alignment. */
62
info->dtb_start = QEMU_ALIGN_UP(info->initrd_start + initrd_size,
63
align);
64
+ if (info->dtb_start >= ram_end) {
65
+ error_report("Not enough space for DTB after kernel/initrd");
66
+ exit(1);
67
+ }
68
fixupcontext[FIXUP_ARGPTR_LO] = info->dtb_start;
69
fixupcontext[FIXUP_ARGPTR_HI] = info->dtb_start >> 32;
70
} else {
71
--
72
2.20.1
73
74
Allow the DSP extension to be disabled via a CPU property for
M-profile CPUs. (A and R-profile CPUs don't have this extension
as a defined separate optional architecture extension, so
they don't need the property.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190517174046.11146-3-peter.maydell@linaro.org
---
 target/arm/cpu.h |  2 ++
 target/arm/cpu.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)


From: Alex Bennée <alex.bennee@linaro.org>

When we converted to using feature bits in 602f6e42cfbf we missed out
the fact (dp && arm_dc_feature(s, ARM_FEATURE_V8)) was supported for
-cpu max configurations. This caused a regression in the GCC test
suite. Fix this by setting the appropriate bits in mvfr1.FPHP to
report ARMv8-A with FP support (but not ARMv8.2-FP16).

Fixes: https://bugs.launchpad.net/qemu/+bug/1836078
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190711103737.10017-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 4 ++++
 1 file changed, 4 insertions(+)
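For readers unfamiliar with the FIELD_DP32() idiom used in the hunks below,
this is a minimal standalone sketch of the underlying "deposit a value into a
named register field" operation (the helper and the bit position assumed for
MVFR1.FPHP are illustrative, not code from either patch):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the deposit32()/FIELD_DP32() pattern: overwrite a
     * <len>-bit field starting at bit <start> with <val>, leaving the
     * other bits of the register value untouched. */
    static uint32_t deposit_field(uint32_t reg, unsigned start, unsigned len,
                                  uint32_t val)
    {
        uint32_t mask = ((1u << len) - 1) << start;
        return (reg & ~mask) | ((val << start) & mask);
    }

    int main(void)
    {
        uint32_t mvfr1 = 0;
        /* Assume FPHP is a 4-bit field at bit 24 of MVFR1 (illustrative). */
        mvfr1 = deposit_field(mvfr1, 24, 4, 2);  /* 2 => "v8.0 FP support" */
        printf("mvfr1 = 0x%08x\n", mvfr1);       /* prints 0x02000000 */
        return 0;
    }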
14
17
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
20
bool has_vfp;
21
/* CPU has Neon */
22
bool has_neon;
23
+ /* CPU has M-profile DSP extension */
24
+ bool has_dsp;
25
26
/* CPU has memory protection unit */
27
bool has_mpu;
28
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
29
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.c
20
--- a/target/arm/cpu.c
31
+++ b/target/arm/cpu.c
21
+++ b/target/arm/cpu.c
32
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_has_vfp_property =
22
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
33
static Property arm_cpu_has_neon_property =
23
t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
34
DEFINE_PROP_BOOL("neon", ARMCPU, has_neon, true);
24
cpu->isar.id_isar6 = t;
35
25
36
+static Property arm_cpu_has_dsp_property =
26
+ t = cpu->isar.mvfr1;
37
+ DEFINE_PROP_BOOL("dsp", ARMCPU, has_dsp, true);
27
+ t = FIELD_DP32(t, MVFR1, FPHP, 2); /* v8.0 FP support */
28
+ cpu->isar.mvfr1 = t;
38
+
29
+
39
static Property arm_cpu_has_mpu_property =
30
t = cpu->isar.mvfr2;
40
DEFINE_PROP_BOOL("has-mpu", ARMCPU, has_mpu, true);
31
t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
41
32
t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
42
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
43
}
44
}
45
46
+ if (arm_feature(&cpu->env, ARM_FEATURE_M) &&
47
+ arm_feature(&cpu->env, ARM_FEATURE_THUMB_DSP)) {
48
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_has_dsp_property,
49
+ &error_abort);
50
+ }
51
+
52
if (arm_feature(&cpu->env, ARM_FEATURE_PMSA)) {
53
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_mpu_property,
54
&error_abort);
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
56
cpu->isar.mvfr0 = u;
57
}
58
59
+ if (arm_feature(env, ARM_FEATURE_M) && !cpu->has_dsp) {
60
+ uint32_t u;
61
+
62
+ unset_feature(env, ARM_FEATURE_THUMB_DSP);
63
+
64
+ u = cpu->isar.id_isar1;
65
+ u = FIELD_DP32(u, ID_ISAR1, EXTEND, 1);
66
+ cpu->isar.id_isar1 = u;
67
+
68
+ u = cpu->isar.id_isar2;
69
+ u = FIELD_DP32(u, ID_ISAR2, MULTU, 1);
70
+ u = FIELD_DP32(u, ID_ISAR2, MULTS, 1);
71
+ cpu->isar.id_isar2 = u;
72
+
73
+ u = cpu->isar.id_isar3;
74
+ u = FIELD_DP32(u, ID_ISAR3, SIMD, 1);
75
+ u = FIELD_DP32(u, ID_ISAR3, SATURATE, 0);
76
+ cpu->isar.id_isar3 = u;
77
+ }
78
+
79
/* Some features automatically imply others: */
80
if (arm_feature(env, ARM_FEATURE_V8)) {
81
if (arm_feature(env, ARM_FEATURE_M)) {
82
--
33
--
83
2.20.1
34
2.20.1
84
35
85
36
We want to use vfp_expand_imm() in the AArch32 VFP decode;
move it from the a64-only header/source file to the
AArch32 one (which is always compiled even for AArch64).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-2-peter.maydell@linaro.org
---
 target/arm/translate-a64.h     |  1 -
 target/arm/translate.h         |  7 +++++++
 target/arm/translate-a64.c     | 32 --------------------------------
 target/arm/translate-vfp.inc.c | 33 +++++++++++++++++++++++++++++++++
 4 files changed, 40 insertions(+), 33 deletions(-)


From: Philippe Mathieu-Daudé <philmd@redhat.com>

In the next commit we will implement the write_with_attrs()
handler. To avoid using different APIs, convert the read()
handler first.

Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)
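As a quick sanity check on the function being moved, here is the
single-precision (MO_32) arm of the expansion restated as a standalone sketch
with one worked value (this is an illustration, not the QEMU code itself):

    #include <stdint.h>
    #include <stdio.h>

    /* Single-precision case of VFPExpandImm(), restated for illustration. */
    static uint32_t vfp_expand_imm_sp(uint8_t imm8)
    {
        uint32_t imm = ((imm8 & 0x80) ? 0x8000 : 0) |
                       ((imm8 & 0x40) ? 0x3e00 : 0x4000) |
                       ((imm8 & 0x3f) << 3);
        return imm << 16;
    }

    int main(void)
    {
        /* imm8 == 0x70 expands to 0x3f800000, i.e. IEEE single 1.0 */
        printf("0x%08x\n", vfp_expand_imm_sp(0x70));
        return 0;
    }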
15
14
16
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.h
17
--- a/hw/ssi/xilinx_spips.c
19
+++ b/target/arm/translate-a64.h
18
+++ b/hw/ssi/xilinx_spips.c
20
@@ -XXX,XX +XXX,XX @@ void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v);
19
@@ -XXX,XX +XXX,XX @@ static void lqspi_load_cache(void *opaque, hwaddr addr)
21
TCGv_ptr get_fpstatus_ptr(bool);
22
bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
23
unsigned int imms, unsigned int immr);
24
-uint64_t vfp_expand_imm(int size, uint8_t imm8);
25
bool sve_access_check(DisasContext *s);
26
27
/* We should have at some point before trying to access an FP register
28
diff --git a/target/arm/translate.h b/target/arm/translate.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate.h
31
+++ b/target/arm/translate.h
32
@@ -XXX,XX +XXX,XX @@ static inline void gen_ss_advance(DisasContext *s)
33
}
20
}
34
}
21
}
35
22
36
+/*
23
-static uint64_t
37
+ * Given a VFP floating point constant encoded into an 8 bit immediate in an
24
-lqspi_read(void *opaque, hwaddr addr, unsigned int size)
38
+ * instruction, expand it to the actual constant value of the specified
25
+static MemTxResult lqspi_read(void *opaque, hwaddr addr, uint64_t *value,
39
+ * size, as per the VFPExpandImm() pseudocode in the Arm ARM.
26
+ unsigned size, MemTxAttrs attrs)
40
+ */
27
{
41
+uint64_t vfp_expand_imm(int size, uint8_t imm8);
28
- XilinxQSPIPS *q = opaque;
29
- uint32_t ret;
30
+ XilinxQSPIPS *q = XILINX_QSPIPS(opaque);
31
32
if (addr >= q->lqspi_cached_addr &&
33
addr <= q->lqspi_cached_addr + LQSPI_CACHE_SIZE - 4) {
34
uint8_t *retp = &q->lqspi_buf[addr - q->lqspi_cached_addr];
35
- ret = cpu_to_le32(*(uint32_t *)retp);
36
- DB_PRINT_L(1, "addr: %08x, data: %08x\n", (unsigned)addr,
37
- (unsigned)ret);
38
- return ret;
39
- } else {
40
- lqspi_load_cache(opaque, addr);
41
- return lqspi_read(opaque, addr, size);
42
+ *value = cpu_to_le32(*(uint32_t *)retp);
43
+ DB_PRINT_L(1, "addr: %08" HWADDR_PRIx ", data: %08" PRIx64 "\n",
44
+ addr, *value);
45
+ return MEMTX_OK;
46
}
42
+
47
+
43
/* Vector operations shared between ARM and AArch64. */
48
+ lqspi_load_cache(opaque, addr);
44
extern const GVecGen3 mla_op[4];
49
+ return lqspi_read(opaque, addr, value, size, attrs);
45
extern const GVecGen3 mls_op[4];
46
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/translate-a64.c
49
+++ b/target/arm/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
51
}
52
}
50
}
53
51
54
-/* The imm8 encodes the sign bit, enough bits to represent an exponent in
52
static const MemoryRegionOps lqspi_ops = {
55
- * the range 01....1xx to 10....0xx, and the most significant 4 bits of
53
- .read = lqspi_read,
56
- * the mantissa; see VFPExpandImm() in the v8 ARM ARM.
54
+ .read_with_attrs = lqspi_read,
57
- */
55
.endianness = DEVICE_NATIVE_ENDIAN,
58
-uint64_t vfp_expand_imm(int size, uint8_t imm8)
56
.valid = {
59
-{
57
.min_access_size = 1,
60
- uint64_t imm;
61
-
62
- switch (size) {
63
- case MO_64:
64
- imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
65
- (extract32(imm8, 6, 1) ? 0x3fc0 : 0x4000) |
66
- extract32(imm8, 0, 6);
67
- imm <<= 48;
68
- break;
69
- case MO_32:
70
- imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
71
- (extract32(imm8, 6, 1) ? 0x3e00 : 0x4000) |
72
- (extract32(imm8, 0, 6) << 3);
73
- imm <<= 16;
74
- break;
75
- case MO_16:
76
- imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
77
- (extract32(imm8, 6, 1) ? 0x3000 : 0x4000) |
78
- (extract32(imm8, 0, 6) << 6);
79
- break;
80
- default:
81
- g_assert_not_reached();
82
- }
83
- return imm;
84
-}
85
-
86
/* Floating point immediate
87
* 31 30 29 28 24 23 22 21 20 13 12 10 9 5 4 0
88
* +---+---+---+-----------+------+---+------------+-------+------+------+
89
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/translate-vfp.inc.c
92
+++ b/target/arm/translate-vfp.inc.c
93
@@ -XXX,XX +XXX,XX @@
94
#include "decode-vfp.inc.c"
95
#include "decode-vfp-uncond.inc.c"
96
97
+/*
98
+ * The imm8 encodes the sign bit, enough bits to represent an exponent in
99
+ * the range 01....1xx to 10....0xx, and the most significant 4 bits of
100
+ * the mantissa; see VFPExpandImm() in the v8 ARM ARM.
101
+ */
102
+uint64_t vfp_expand_imm(int size, uint8_t imm8)
103
+{
104
+ uint64_t imm;
105
+
106
+ switch (size) {
107
+ case MO_64:
108
+ imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
109
+ (extract32(imm8, 6, 1) ? 0x3fc0 : 0x4000) |
110
+ extract32(imm8, 0, 6);
111
+ imm <<= 48;
112
+ break;
113
+ case MO_32:
114
+ imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
115
+ (extract32(imm8, 6, 1) ? 0x3e00 : 0x4000) |
116
+ (extract32(imm8, 0, 6) << 3);
117
+ imm <<= 16;
118
+ break;
119
+ case MO_16:
120
+ imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
121
+ (extract32(imm8, 6, 1) ? 0x3000 : 0x4000) |
122
+ (extract32(imm8, 0, 6) << 6);
123
+ break;
124
+ default:
125
+ g_assert_not_reached();
126
+ }
127
+ return imm;
128
+}
129
+
130
/*
131
* Return the offset of a 16-bit half of the specified VFP single-precision
132
* register. If top is true, returns the top 16 bits; otherwise the bottom
133
--
58
--
134
2.20.1
59
2.20.1
135
60
136
61
The architecture permits FPUs which have only single-precision
support, not double-precision; Cortex-M4 and Cortex-M33 are
both like that. Add the necessary checks on the MVFR0 FPDP
field so that we UNDEF any double-precision instructions on
CPUs like this.

Note that even if FPDP==0 the insns like VMOV-to/from-gpreg,
VLDM/VSTM, VLDR/VSTR which take double precision registers
still exist.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190614104457.24703-3-peter.maydell@linaro.org
---
 target/arm/cpu.h               |  6 +++
 target/arm/translate-vfp.inc.c | 84 ++++++++++++++++++++++++++++
 2 files changed, 90 insertions(+)


From: Philippe Mathieu-Daudé <philmd@redhat.com>

Lei Sun found while auditing the code that a CPU write would
trigger a NULL pointer dereference.

From the UG1085 datasheet [*], AXI writes in this region are ignored
and generate an AXI Slave Error (SLVERR).

Fix by implementing the write_with_attrs() handler.
Return MEMTX_ERROR when the region is accessed (this error maps
to an AXI slave error).

[*] https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf

Reported-by: Lei Sun <slei.casper@gmail.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
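The double-precision checks added below all key off a single MVFR0 field; as a
minimal standalone sketch, the test amounts to extracting that field and
comparing it with zero (the helper name, the assumed bit position of
MVFR0.FPDP, and the sample register values are illustrative, not taken from
the patch):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the extract32()/FIELD_EX32() pattern. */
    static uint32_t extract_field(uint32_t reg, unsigned start, unsigned len)
    {
        return (reg >> start) & ((1u << len) - 1);
    }

    /* Assume FPDP is a 4-bit field at bit 8 of MVFR0 (illustrative). */
    static bool cpu_has_fpdp(uint32_t mvfr0)
    {
        return extract_field(mvfr0, 8, 4) > 0;
    }

    int main(void)
    {
        printf("%d %d\n", cpu_has_fpdp(0x00000a00),   /* field = 0xa -> 1 */
                          cpu_has_fpdp(0x00000011));  /* field = 0x0 -> 0 */
        return 0;
    }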
18
23
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
24
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
20
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
26
--- a/hw/ssi/xilinx_spips.c
22
+++ b/target/arm/cpu.h
27
+++ b/hw/ssi/xilinx_spips.c
23
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
28
@@ -XXX,XX +XXX,XX @@ static MemTxResult lqspi_read(void *opaque, hwaddr addr, uint64_t *value,
24
return FIELD_EX64(id->mvfr0, MVFR0, FPSHVEC) > 0;
29
return lqspi_read(opaque, addr, value, size, attrs);
25
}
30
}
26
31
27
+static inline bool isar_feature_aa32_fpdp(const ARMISARegisters *id)
32
+static MemTxResult lqspi_write(void *opaque, hwaddr offset, uint64_t value,
33
+ unsigned size, MemTxAttrs attrs)
28
+{
34
+{
29
+ /* Return true if CPU supports double precision floating point */
35
+ /*
30
+ return FIELD_EX64(id->mvfr0, MVFR0, FPDP) > 0;
36
+ * From UG1085, Chapter 24 (Quad-SPI controllers):
37
+ * - Writes are ignored
38
+ * - AXI writes generate an external AXI slave error (SLVERR)
39
+ */
40
+ qemu_log_mask(LOG_GUEST_ERROR, "%s Unexpected %u-bit access to 0x%" PRIx64
41
+ " (value: 0x%" PRIx64 "\n",
42
+ __func__, size << 3, offset, value);
43
+
44
+ return MEMTX_ERROR;
31
+}
45
+}
32
+
46
+
33
/*
47
static const MemoryRegionOps lqspi_ops = {
34
* We always set the FP and SIMD FP16 fields to indicate identical
48
.read_with_attrs = lqspi_read,
35
* levels of support (assuming SIMD is implemented at all), so
49
+ .write_with_attrs = lqspi_write,
36
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
50
.endianness = DEVICE_NATIVE_ENDIAN,
37
index XXXXXXX..XXXXXXX 100644
51
.valid = {
38
--- a/target/arm/translate-vfp.inc.c
52
.min_access_size = 1,
39
+++ b/target/arm/translate-vfp.inc.c
40
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
41
((a->vm | a->vn | a->vd) & 0x10)) {
42
return false;
43
}
44
+
45
+ if (dp && !dc_isar_feature(aa32_fpdp, s)) {
46
+ return false;
47
+ }
48
+
49
rd = a->vd;
50
rn = a->vn;
51
rm = a->vm;
52
@@ -XXX,XX +XXX,XX @@ static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
53
((a->vm | a->vn | a->vd) & 0x10)) {
54
return false;
55
}
56
+
57
+ if (dp && !dc_isar_feature(aa32_fpdp, s)) {
58
+ return false;
59
+ }
60
+
61
rd = a->vd;
62
rn = a->vn;
63
rm = a->vm;
64
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
65
((a->vm | a->vd) & 0x10)) {
66
return false;
67
}
68
+
69
+ if (dp && !dc_isar_feature(aa32_fpdp, s)) {
70
+ return false;
71
+ }
72
+
73
rd = a->vd;
74
rm = a->vm;
75
76
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
77
if (dp && !dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
78
return false;
79
}
80
+
81
+ if (dp && !dc_isar_feature(aa32_fpdp, s)) {
82
+ return false;
83
+ }
84
+
85
rd = a->vd;
86
rm = a->vm;
87
88
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
89
return false;
90
}
91
92
+ if (!dc_isar_feature(aa32_fpdp, s)) {
93
+ return false;
94
+ }
95
+
96
if (!dc_isar_feature(aa32_fpshvec, s) &&
97
(veclen != 0 || s->vec_stride != 0)) {
98
return false;
99
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
100
return false;
101
}
102
103
+ if (!dc_isar_feature(aa32_fpdp, s)) {
104
+ return false;
105
+ }
106
+
107
if (!dc_isar_feature(aa32_fpshvec, s) &&
108
(veclen != 0 || s->vec_stride != 0)) {
109
return false;
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_sp(DisasContext *s, arg_VFM_sp *a)
111
return false;
112
}
113
114
+ if (!dc_isar_feature(aa32_fpdp, s)) {
115
+ return false;
116
+ }
117
+
118
if (!vfp_access_check(s)) {
119
return true;
120
}
121
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
122
return false;
123
}
124
125
+ if (!dc_isar_feature(aa32_fpdp, s)) {
126
+ return false;
127
+ }
128
+
129
if (!dc_isar_feature(aa32_fpshvec, s) &&
130
(veclen != 0 || s->vec_stride != 0)) {
131
return false;
132
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
133
return false;
134
}
135
136
+ if (!dc_isar_feature(aa32_fpdp, s)) {
137
+ return false;
138
+ }
139
+
140
if (!vfp_access_check(s)) {
141
return true;
142
}
143
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
144
return false;
145
}
146
147
+ if (!dc_isar_feature(aa32_fpdp, s)) {
148
+ return false;
149
+ }
150
+
151
if (!vfp_access_check(s)) {
152
return true;
153
}
154
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
155
return false;
156
}
157
158
+ if (!dc_isar_feature(aa32_fpdp, s)) {
159
+ return false;
160
+ }
161
+
162
if (!vfp_access_check(s)) {
163
return true;
164
}
165
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
166
return false;
167
}
168
169
+ if (!dc_isar_feature(aa32_fpdp, s)) {
170
+ return false;
171
+ }
172
+
173
if (!vfp_access_check(s)) {
174
return true;
175
}
176
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
177
return false;
178
}
179
180
+ if (!dc_isar_feature(aa32_fpdp, s)) {
181
+ return false;
182
+ }
183
+
184
if (!vfp_access_check(s)) {
185
return true;
186
}
187
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
188
return false;
189
}
190
191
+ if (!dc_isar_feature(aa32_fpdp, s)) {
192
+ return false;
193
+ }
194
+
195
if (!vfp_access_check(s)) {
196
return true;
197
}
198
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
199
return false;
200
}
201
202
+ if (!dc_isar_feature(aa32_fpdp, s)) {
203
+ return false;
204
+ }
205
+
206
if (!vfp_access_check(s)) {
207
return true;
208
}
209
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
210
return false;
211
}
212
213
+ if (!dc_isar_feature(aa32_fpdp, s)) {
214
+ return false;
215
+ }
216
+
217
if (!vfp_access_check(s)) {
218
return true;
219
}
220
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
221
return false;
222
}
223
224
+ if (!dc_isar_feature(aa32_fpdp, s)) {
225
+ return false;
226
+ }
227
+
228
if (!vfp_access_check(s)) {
229
return true;
230
}
231
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
232
return false;
233
}
234
235
+ if (!dc_isar_feature(aa32_fpdp, s)) {
236
+ return false;
237
+ }
238
+
239
if (!vfp_access_check(s)) {
240
return true;
241
}
242
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
243
return false;
244
}
245
246
+ if (!dc_isar_feature(aa32_fpdp, s)) {
247
+ return false;
248
+ }
249
+
250
if (!vfp_access_check(s)) {
251
return true;
252
}
253
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
254
return false;
255
}
256
257
+ if (!dc_isar_feature(aa32_fpdp, s)) {
258
+ return false;
259
+ }
260
+
261
if (!vfp_access_check(s)) {
262
return true;
263
}
264
--
53
--
265
2.20.1
54
2.20.1
266
55
267
56
In several places cut and paste errors meant we were using the wrong
type for the 'arg' struct in trans_ functions called by the
decodetree decoder, because we were using the _sp version of the
struct in the _dp function. These were harmless, because the two
structs were identical and so decodetree made them typedefs of the
same underlying structure (and we'd have had a compile error if they
were not harmless), but we should clean them up anyway.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190614104457.24703-2-peter.maydell@linaro.org
---
 target/arm/translate-vfp.inc.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)


From: Philippe Mathieu-Daudé <philmd@redhat.com>

Both lqspi_read() and lqspi_load_cache() expect a 32-bit
aligned address.

From the UG1085 datasheet [*], chapter on 'Quad-SPI Controller':

  Transfer Size Limitations

  Because of the 32-bit wide TX, RX, and generic FIFO, all
  APB/AXI transfers must be an integer multiple of 4-bytes.
  Shorter transfers are not possible.

Set MemoryRegionOps.impl values to force 32-bit accesses,
this way we are sure we do not access the lqspi_buf[] array
out of bounds.

[*] https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf

Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 4 ++++
 1 file changed, 4 insertions(+)
15
27
16
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
28
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
17
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-vfp.inc.c
30
--- a/hw/ssi/xilinx_spips.c
19
+++ b/target/arm/translate-vfp.inc.c
31
+++ b/hw/ssi/xilinx_spips.c
20
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
32
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps lqspi_ops = {
21
return true;
33
.read_with_attrs = lqspi_read,
22
}
34
.write_with_attrs = lqspi_write,
23
35
.endianness = DEVICE_NATIVE_ENDIAN,
24
-static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_sp *a)
36
+ .impl = {
25
+static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
37
+ .min_access_size = 4,
26
{
38
+ .max_access_size = 4,
27
TCGv_i32 tmp;
39
+ },
28
40
.valid = {
29
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
41
.min_access_size = 1,
30
return true;
42
.max_access_size = 4
31
}
32
33
-static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_sp *a)
34
+static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
35
{
36
uint32_t offset;
37
TCGv_i32 addr;
38
@@ -XXX,XX +XXX,XX @@ static void gen_VMLA_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
39
tcg_temp_free_i64(tmp);
40
}
41
42
-static bool trans_VMLA_dp(DisasContext *s, arg_VMLA_sp *a)
43
+static bool trans_VMLA_dp(DisasContext *s, arg_VMLA_dp *a)
44
{
45
return do_vfp_3op_dp(s, gen_VMLA_dp, a->vd, a->vn, a->vm, true);
46
}
47
@@ -XXX,XX +XXX,XX @@ static void gen_VMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
48
tcg_temp_free_i64(tmp);
49
}
50
51
-static bool trans_VMLS_dp(DisasContext *s, arg_VMLS_sp *a)
52
+static bool trans_VMLS_dp(DisasContext *s, arg_VMLS_dp *a)
53
{
54
return do_vfp_3op_dp(s, gen_VMLS_dp, a->vd, a->vn, a->vm, true);
55
}
56
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLS_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
57
tcg_temp_free_i64(tmp);
58
}
59
60
-static bool trans_VNMLS_dp(DisasContext *s, arg_VNMLS_sp *a)
61
+static bool trans_VNMLS_dp(DisasContext *s, arg_VNMLS_dp *a)
62
{
63
return do_vfp_3op_dp(s, gen_VNMLS_dp, a->vd, a->vn, a->vm, true);
64
}
65
@@ -XXX,XX +XXX,XX @@ static void gen_VNMLA_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
66
tcg_temp_free_i64(tmp);
67
}
68
69
-static bool trans_VNMLA_dp(DisasContext *s, arg_VNMLA_sp *a)
70
+static bool trans_VNMLA_dp(DisasContext *s, arg_VNMLA_dp *a)
71
{
72
return do_vfp_3op_dp(s, gen_VNMLA_dp, a->vd, a->vn, a->vm, true);
73
}
74
@@ -XXX,XX +XXX,XX @@ static bool trans_VMUL_sp(DisasContext *s, arg_VMUL_sp *a)
75
return do_vfp_3op_sp(s, gen_helper_vfp_muls, a->vd, a->vn, a->vm, false);
76
}
77
78
-static bool trans_VMUL_dp(DisasContext *s, arg_VMUL_sp *a)
79
+static bool trans_VMUL_dp(DisasContext *s, arg_VMUL_dp *a)
80
{
81
return do_vfp_3op_dp(s, gen_helper_vfp_muld, a->vd, a->vn, a->vm, false);
82
}
83
@@ -XXX,XX +XXX,XX @@ static void gen_VNMUL_dp(TCGv_i64 vd, TCGv_i64 vn, TCGv_i64 vm, TCGv_ptr fpst)
84
gen_helper_vfp_negd(vd, vd);
85
}
86
87
-static bool trans_VNMUL_dp(DisasContext *s, arg_VNMUL_sp *a)
88
+static bool trans_VNMUL_dp(DisasContext *s, arg_VNMUL_dp *a)
89
{
90
return do_vfp_3op_dp(s, gen_VNMUL_dp, a->vd, a->vn, a->vm, false);
91
}
92
@@ -XXX,XX +XXX,XX @@ static bool trans_VADD_sp(DisasContext *s, arg_VADD_sp *a)
93
return do_vfp_3op_sp(s, gen_helper_vfp_adds, a->vd, a->vn, a->vm, false);
94
}
95
96
-static bool trans_VADD_dp(DisasContext *s, arg_VADD_sp *a)
97
+static bool trans_VADD_dp(DisasContext *s, arg_VADD_dp *a)
98
{
99
return do_vfp_3op_dp(s, gen_helper_vfp_addd, a->vd, a->vn, a->vm, false);
100
}
101
@@ -XXX,XX +XXX,XX @@ static bool trans_VSUB_sp(DisasContext *s, arg_VSUB_sp *a)
102
return do_vfp_3op_sp(s, gen_helper_vfp_subs, a->vd, a->vn, a->vm, false);
103
}
104
105
-static bool trans_VSUB_dp(DisasContext *s, arg_VSUB_sp *a)
106
+static bool trans_VSUB_dp(DisasContext *s, arg_VSUB_dp *a)
107
{
108
return do_vfp_3op_dp(s, gen_helper_vfp_subd, a->vd, a->vn, a->vm, false);
109
}
110
@@ -XXX,XX +XXX,XX @@ static bool trans_VDIV_sp(DisasContext *s, arg_VDIV_sp *a)
111
return do_vfp_3op_sp(s, gen_helper_vfp_divs, a->vd, a->vn, a->vm, false);
112
}
113
114
-static bool trans_VDIV_dp(DisasContext *s, arg_VDIV_sp *a)
115
+static bool trans_VDIV_dp(DisasContext *s, arg_VDIV_dp *a)
116
{
117
return do_vfp_3op_dp(s, gen_helper_vfp_divd, a->vd, a->vn, a->vm, false);
118
}
119
@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_sp(DisasContext *s, arg_VFM_sp *a)
120
return true;
121
}
122
123
-static bool trans_VFM_dp(DisasContext *s, arg_VFM_sp *a)
124
+static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
125
{
126
/*
127
* VFNMA : fd = muladd(-fd, fn, fm)
128
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a)
129
return true;
130
}
131
132
-static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_sp *a)
133
+static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
134
{
135
TCGv_ptr fpst;
136
TCGv_i64 tmp;
137
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a)
138
return true;
139
}
140
141
-static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_sp *a)
142
+static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
143
{
144
TCGv_ptr fpst;
145
TCGv_i64 tmp;
146
--
43
--
147
2.20.1
44
2.20.1
148
45
149
46
The GICv3 specification says that the GICD_TYPER.SecurityExtn bit
is RAZ if GICD_CTLR.DS is 1. We were incorrectly making it RAZ
if the security extension is unsupported. "Security extension
unsupported" always implies GICD_CTLR.DS == 1, but the guest can
also set DS on a GIC which does support the security extension.
Fix the condition to correctly check the GICD_CTLR.DS bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190524124248.28394-3-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_dist.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)


From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reading the RX_DATA register when the RX_FIFO is empty triggers
an abort. This can be easily reproduced:
5
6
$ qemu-system-arm -M emcraft-sf2 -monitor stdio -S
7
QEMU 4.0.50 monitor - type 'help' for more information
8
(qemu) x 0x40001010
9
Aborted (core dumped)
10
11
(gdb) bt
12
#1 0x00007f035874f895 in abort () at /lib64/libc.so.6
13
#2 0x00005628686591ff in fifo8_pop (fifo=0x56286a9a4c68) at util/fifo8.c:66
14
#3 0x00005628683e0b8e in fifo32_pop (fifo=0x56286a9a4c68) at include/qemu/fifo32.h:137
15
#4 0x00005628683e0efb in spi_read (opaque=0x56286a9a4850, addr=4, size=4) at hw/ssi/mss-spi.c:168
16
#5 0x0000562867f96801 in memory_region_read_accessor (mr=0x56286a9a4b60, addr=16, value=0x7ffeecb0c5c8, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
17
#6 0x0000562867f96cdb in access_with_adjusted_size (addr=16, value=0x7ffeecb0c5c8, size=4, access_size_min=1, access_size_max=4, access_fn=0x562867f967c3 <memory_region_read_accessor>, mr=0x56286a9a4b60, attrs=...) at memory.c:569
18
#7 0x0000562867f99940 in memory_region_dispatch_read1 (mr=0x56286a9a4b60, addr=16, pval=0x7ffeecb0c5c8, size=4, attrs=...) at memory.c:1420
19
#8 0x0000562867f99a08 in memory_region_dispatch_read (mr=0x56286a9a4b60, addr=16, pval=0x7ffeecb0c5c8, size=4, attrs=...) at memory.c:1447
20
#9 0x0000562867f38721 in flatview_read_continue (fv=0x56286aec6360, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, addr1=16, l=4, mr=0x56286a9a4b60) at exec.c:3385
21
#10 0x0000562867f38874 in flatview_read (fv=0x56286aec6360, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4) at exec.c:3423
22
#11 0x0000562867f388ea in address_space_read_full (as=0x56286aa3e890, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4) at exec.c:3436
23
#12 0x0000562867f389c5 in address_space_rw (as=0x56286aa3e890, addr=1073745936, attrs=..., buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, is_write=false) at exec.c:3466
24
#13 0x0000562867f3bdd7 in cpu_memory_rw_debug (cpu=0x56286aa19d00, addr=1073745936, buf=0x7ffeecb0c7c0 "\340ǰ\354\376\177", len=4, is_write=0) at exec.c:3976
25
#14 0x000056286811ed51 in memory_dump (mon=0x56286a8c32d0, count=1, format=120, wsize=4, addr=1073745936, is_physical=0) at monitor/misc.c:730
26
#15 0x000056286811eff1 in hmp_memory_dump (mon=0x56286a8c32d0, qdict=0x56286b15c400) at monitor/misc.c:785
27
#16 0x00005628684740ee in handle_hmp_command (mon=0x56286a8c32d0, cmdline=0x56286a8caeb2 "0x40001010") at monitor/hmp.c:1082
28
29
From the datasheet "Actel SmartFusion Microcontroller Subsystem
User's Guide" Rev.1, Table 13-3 "SPI Register Summary", this
register has a reset value of 0.

Check the FIFO is not empty before accessing it, else log an
error message.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-3-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/mss-spi.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
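For reference, the GICD_TYPER value assembled in the GICv3 hunk that follows
is plain bit arithmetic; this standalone sketch shows the computation for one
invented configuration (the interrupt count is an example, not something
specified by the patch):

    #include <stdint.h>
    #include <stdio.h>

    #define GIC_INTERNAL 32

    int main(void)
    {
        unsigned num_irq = 288;             /* example: 256 external IRQs */
        int itlinesnumber = ((num_irq - GIC_INTERNAL) / 32) - 1;   /* 7 */
        int sec_extn = 1;                   /* i.e. GICD_CTLR.DS == 0 */

        uint32_t typer = (1 << 25) | (1 << 24) | (sec_extn << 10) |
                         (0xf << 19) | itlinesnumber;
        printf("GICD_TYPER = 0x%08x\n", typer);   /* 0x03780407 */
        return 0;
    }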
13
43
14
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
44
diff --git a/hw/ssi/mss-spi.c b/hw/ssi/mss-spi.c
15
index XXXXXXX..XXXXXXX 100644
45
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/arm_gicv3_dist.c
46
--- a/hw/ssi/mss-spi.c
17
+++ b/hw/intc/arm_gicv3_dist.c
47
+++ b/hw/ssi/mss-spi.c
18
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicd_readl(GICv3State *s, hwaddr offset,
48
@@ -XXX,XX +XXX,XX @@ spi_read(void *opaque, hwaddr addr, unsigned int size)
19
* ITLinesNumber == (num external irqs / 32) - 1
49
case R_SPI_RX:
20
*/
50
s->regs[R_SPI_STATUS] &= ~S_RXFIFOFUL;
21
int itlinesnumber = ((s->num_irq - GIC_INTERNAL) / 32) - 1;
51
s->regs[R_SPI_STATUS] &= ~S_RXCHOVRF;
22
+ /*
52
- ret = fifo32_pop(&s->rx_fifo);
23
+ * SecurityExtn must be RAZ if GICD_CTLR.DS == 1, and
53
+ if (fifo32_is_empty(&s->rx_fifo)) {
24
+ * "security extensions not supported" always implies DS == 1,
54
+ qemu_log_mask(LOG_GUEST_ERROR,
25
+ * so we only need to check the DS bit.
55
+ "%s: Reading empty RX_FIFO\n",
26
+ */
56
+ __func__);
27
+ bool sec_extn = !(s->gicd_ctlr & GICD_CTLR_DS);
57
+ } else {
28
58
+ ret = fifo32_pop(&s->rx_fifo);
29
- *data = (1 << 25) | (1 << 24) | (s->security_extn << 10) |
59
+ }
30
+ *data = (1 << 25) | (1 << 24) | (sec_extn << 10) |
60
if (fifo32_is_empty(&s->rx_fifo)) {
31
(0xf << 19) | itlinesnumber;
61
s->regs[R_SPI_STATUS] |= S_RXFIFOEMP;
32
return MEMTX_OK;
62
}
33
}
34
--
63
--
35
2.20.1
64
2.20.1
36
65
37
66
Remove some old constructs from NEON_2RM_VCVT_F16_F32 code:
 * don't use CPU_F0s
 * don't use tcg_gen_st_f32

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-12-peter.maydell@linaro.org
---
 target/arm/translate.c | 26 +++++++++++---------------
 1 file changed, 11 insertions(+), 15 deletions(-)


From: Philippe Mathieu-Daudé <philmd@redhat.com>

In the previous commit we fixed a crash when the guest read a
register that pops from an empty FIFO.
By auditing the repository, we found another similar use with
an easy way to reproduce:
7
8
$ qemu-system-aarch64 -M xlnx-zcu102 -monitor stdio -S
9
QEMU 4.0.50 monitor - type 'help' for more information
10
(qemu) xp/b 0xfd4a0134
11
Aborted (core dumped)
12
13
(gdb) bt
14
#0 0x00007f6936dea57f in raise () at /lib64/libc.so.6
15
#1 0x00007f6936dd4895 in abort () at /lib64/libc.so.6
16
#2 0x0000561ad32975ec in xlnx_dp_aux_pop_rx_fifo (s=0x7f692babee70) at hw/display/xlnx_dp.c:431
17
#3 0x0000561ad3297dc0 in xlnx_dp_read (opaque=0x7f692babee70, offset=77, size=4) at hw/display/xlnx_dp.c:667
18
#4 0x0000561ad321b896 in memory_region_read_accessor (mr=0x7f692babf620, addr=308, value=0x7ffe05c1db88, size=4, shift=0, mask=4294967295, attrs=...) at memory.c:439
19
#5 0x0000561ad321bd70 in access_with_adjusted_size (addr=308, value=0x7ffe05c1db88, size=1, access_size_min=4, access_size_max=4, access_fn=0x561ad321b858 <memory_region_read_accessor>, mr=0x7f692babf620, attrs=...) at memory.c:569
20
#6 0x0000561ad321e9d5 in memory_region_dispatch_read1 (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1420
21
#7 0x0000561ad321ea9d in memory_region_dispatch_read (mr=0x7f692babf620, addr=308, pval=0x7ffe05c1db88, size=1, attrs=...) at memory.c:1447
22
#8 0x0000561ad31bd742 in flatview_read_continue (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1, addr1=308, l=1, mr=0x7f692babf620) at exec.c:3385
23
#9 0x0000561ad31bd895 in flatview_read (fv=0x561ad69c04f0, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3423
24
#10 0x0000561ad31bd90b in address_space_read_full (as=0x561ad5bb3020, addr=4249485620, attrs=..., buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", len=1) at exec.c:3436
25
#11 0x0000561ad33b1c42 in address_space_read (len=1, buf=0x7ffe05c1dcf0 "\020\335\301\005\376\177", attrs=..., addr=4249485620, as=0x561ad5bb3020) at include/exec/memory.h:2131
26
#12 0x0000561ad33b1c42 in memory_dump (mon=0x561ad59c4530, count=1, format=120, wsize=1, addr=4249485620, is_physical=1) at monitor/misc.c:723
27
#13 0x0000561ad33b1fc1 in hmp_physical_memory_dump (mon=0x561ad59c4530, qdict=0x561ad6c6fd00) at monitor/misc.c:795
28
#14 0x0000561ad37b4a9f in handle_hmp_command (mon=0x561ad59c4530, cmdline=0x561ad59d0f22 "/b 0x00000000fd4a0134") at monitor/hmp.c:1082
29
30
Fix by checking the FIFO is not empty before popping from it.

The datasheet is not clear about the reset value of this register,
so we choose to return '0'.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190709113715.7761-4-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/display/xlnx_dp.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
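Both FIFO fixes in this pull follow the same shape: check for emptiness, log a
guest error, and return a defined value instead of aborting. A minimal
standalone sketch of that pattern (the tiny FIFO type and names here are
invented for the illustration, not the QEMU Fifo8 API):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint8_t data[8];
        unsigned head, num;
    } DemoFifo;

    static uint8_t demo_fifo_pop(DemoFifo *f)
    {
        if (f->num == 0) {
            fprintf(stderr, "guest error: reading empty RX FIFO\n");
            return 0;              /* defined fallback instead of abort() */
        }
        uint8_t v = f->data[f->head];
        f->head = (f->head + 1) % sizeof(f->data);
        f->num--;
        return v;
    }

    int main(void)
    {
        DemoFifo f = { .data = { 0xab }, .head = 0, .num = 1 };
        uint8_t first = demo_fifo_pop(&f);    /* 0xab */
        uint8_t second = demo_fifo_pop(&f);   /* empty: logs, returns 0 */
        printf("0x%02x 0x%02x\n", first, second);
        return 0;
    }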
12
42
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
43
diff --git a/hw/display/xlnx_dp.c b/hw/display/xlnx_dp.c
14
index XXXXXXX..XXXXXXX 100644
44
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
45
--- a/hw/display/xlnx_dp.c
16
+++ b/target/arm/translate.c
46
+++ b/hw/display/xlnx_dp.c
17
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
47
@@ -XXX,XX +XXX,XX @@ static uint8_t xlnx_dp_aux_pop_rx_fifo(XlnxDPState *s)
48
uint8_t ret;
49
50
if (fifo8_is_empty(&s->rx_fifo)) {
51
- DPRINTF("rx_fifo underflow..\n");
52
- abort();
53
+ qemu_log_mask(LOG_GUEST_ERROR,
54
+ "%s: Reading empty RX_FIFO\n",
55
+ __func__);
56
+ /*
57
+ * The datasheet is not clear about the reset value, it seems
58
+ * to be unspecified. We choose to return '0'.
59
+ */
60
+ ret = 0;
61
+ } else {
62
+ ret = fifo8_pop(&s->rx_fifo);
63
+ DPRINTF("pop 0x%" PRIX8 " from rx_fifo.\n", ret);
64
}
65
- ret = fifo8_pop(&s->rx_fifo);
66
- DPRINTF("pop 0x%" PRIX8 " from rx_fifo.\n", ret);
18
return ret;
67
return ret;
19
}
68
}
20
69
21
-#define tcg_gen_st_f32 tcg_gen_st_i32
22
-
23
#define ARM_CP_RW_BIT (1 << 20)
24
25
/* Include the VFP decoder */
26
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
27
tmp = neon_load_reg(rm, 0);
28
tmp2 = neon_load_reg(rm, 1);
29
tcg_gen_ext16u_i32(tmp3, tmp);
30
- gen_helper_vfp_fcvt_f16_to_f32(cpu_F0s, tmp3, fpst, ahp);
31
- tcg_gen_st_f32(cpu_F0s, cpu_env, neon_reg_offset(rd, 0));
32
- tcg_gen_shri_i32(tmp3, tmp, 16);
33
- gen_helper_vfp_fcvt_f16_to_f32(cpu_F0s, tmp3, fpst, ahp);
34
- tcg_gen_st_f32(cpu_F0s, cpu_env, neon_reg_offset(rd, 1));
35
- tcg_temp_free_i32(tmp);
36
+ gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
37
+ neon_store_reg(rd, 0, tmp3);
38
+ tcg_gen_shri_i32(tmp, tmp, 16);
39
+ gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp);
40
+ neon_store_reg(rd, 1, tmp);
41
+ tmp3 = tcg_temp_new_i32();
42
tcg_gen_ext16u_i32(tmp3, tmp2);
43
- gen_helper_vfp_fcvt_f16_to_f32(cpu_F0s, tmp3, fpst, ahp);
44
- tcg_gen_st_f32(cpu_F0s, cpu_env, neon_reg_offset(rd, 2));
45
- tcg_gen_shri_i32(tmp3, tmp2, 16);
46
- gen_helper_vfp_fcvt_f16_to_f32(cpu_F0s, tmp3, fpst, ahp);
47
- tcg_gen_st_f32(cpu_F0s, cpu_env, neon_reg_offset(rd, 3));
48
- tcg_temp_free_i32(tmp2);
49
- tcg_temp_free_i32(tmp3);
50
+ gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp);
51
+ neon_store_reg(rd, 2, tmp3);
52
+ tcg_gen_shri_i32(tmp2, tmp2, 16);
53
+ gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp);
54
+ neon_store_reg(rd, 3, tmp2);
55
tcg_temp_free_i32(ahp);
56
tcg_temp_free_ptr(fpst);
57
break;
58
--
70
--
59
2.20.1
71
2.20.1
60
72
61
73
We currently put the initrd at the smaller of:
 * 128MB into RAM
 * halfway into the RAM
(with the dtb following it).

However for large kernels this might mean that the kernel
overlaps the initrd. For some kinds of kernel (self-decompressing
32-bit kernels, and ELF images with a BSS section at the end)
we don't know the exact size, but even there we have a
minimum size. Put the initrd at least further into RAM than
that. For image formats that can give us an exact kernel size, this
will mean that we definitely avoid overlaying kernel and initrd.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Message-id: 20190516144733.32399-4-peter.maydell@linaro.org
---
 hw/arm/boot.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)


From: David Engraf <david.engraf@sysgo.com>

Using the whole 128 MiB flash in non-secure mode is not working because
virt_flash_fdt() expects the same address for secure_sysmem and sysmem.
This is not correctly handled by the caller, because it forwards NULL for
secure_sysmem in non-secure flash mode.

Fixed by using sysmem when secure_sysmem is NULL.

Signed-off-by: David Engraf <david.engraf@sysgo.com>
Message-id: 20190712075002.14326-1-david.engraf@sysgo.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
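As a worked example of the placement rule described above, here is a
standalone restatement of the MAX(MIN(ram_size/2, 128MB), kernel_size)
calculation; the 4 KiB page size and the sample sizes are assumptions made
for the demo, not values from the patch:

    #include <stdint.h>
    #include <stdio.h>

    #define MiB (1024ull * 1024)

    static uint64_t initrd_offset(uint64_t ram_size, uint64_t kernel_size)
    {
        uint64_t off = ram_size / 2 < 128 * MiB ? ram_size / 2 : 128 * MiB;
        if (off < kernel_size) {
            off = kernel_size;     /* never drop the initrd onto the kernel */
        }
        return (off + 0xfff) & ~0xfffull;    /* align up to a 4 KiB page */
    }

    int main(void)
    {
        /* A 200 MiB kernel pushes the initrd out to 200 MiB, not 128 MiB. */
        printf("%llu MiB\n",
               (unsigned long long)(initrd_offset(1024 * MiB, 200 * MiB) / MiB));
        return 0;
    }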
21
17
22
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
18
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
23
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/arm/boot.c
20
--- a/hw/arm/virt.c
25
+++ b/hw/arm/boot.c
21
+++ b/hw/arm/virt.c
26
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
22
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
27
if (info->nb_cpus == 0)
23
&machine->device_memory->mr);
28
info->nb_cpus = 1;
29
30
- /*
31
- * We want to put the initrd far enough into RAM that when the
32
- * kernel is uncompressed it will not clobber the initrd. However
33
- * on boards without much RAM we must ensure that we still leave
34
- * enough room for a decent sized initrd, and on boards with large
35
- * amounts of RAM we must avoid the initrd being so far up in RAM
36
- * that it is outside lowmem and inaccessible to the kernel.
37
- * So for boards with less than 256MB of RAM we put the initrd
38
- * halfway into RAM, and for boards with 256MB of RAM or more we put
39
- * the initrd at 128MB.
40
- */
41
- info->initrd_start = info->loader_start +
42
- MIN(info->ram_size / 2, 128 * 1024 * 1024);
43
-
44
/* Assume that raw images are linux kernels, and ELF images are not. */
45
kernel_size = arm_load_elf(info, &elf_entry, &elf_low_addr,
46
&elf_high_addr, elf_machine, as);
47
@@ -XXX,XX +XXX,XX @@ static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
48
}
24
}
49
25
50
info->entry = entry;
26
- virt_flash_fdt(vms, sysmem, secure_sysmem);
51
+
27
+ virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
52
+ /*
28
53
+ * We want to put the initrd far enough into RAM that when the
29
create_gic(vms, pic);
54
+ * kernel is uncompressed it will not clobber the initrd. However
55
+ * on boards without much RAM we must ensure that we still leave
56
+ * enough room for a decent sized initrd, and on boards with large
57
+ * amounts of RAM we must avoid the initrd being so far up in RAM
58
+ * that it is outside lowmem and inaccessible to the kernel.
59
+ * So for boards with less than 256MB of RAM we put the initrd
60
+ * halfway into RAM, and for boards with 256MB of RAM or more we put
61
+ * the initrd at 128MB.
62
+ * We also refuse to put the initrd somewhere that will definitely
63
+ * overlay the kernel we just loaded, though for kernel formats which
64
+ * don't tell us their exact size (eg self-decompressing 32-bit kernels)
65
+ * we might still make a bad choice here.
66
+ */
67
+ info->initrd_start = info->loader_start +
68
+ MAX(MIN(info->ram_size / 2, 128 * 1024 * 1024), kernel_size);
69
+ info->initrd_start = TARGET_PAGE_ALIGN(info->initrd_start);
70
+
71
if (is_linux) {
72
uint32_t fixupcontext[FIXUP_MAX];
73
30
74
--
31
--
75
2.20.1
32
2.20.1
76
33
77
34
Deleted patch

Since Linux v3.17, the kernel's Image header includes a field image_size,
which gives the total size of the kernel (including unpopulated data
sections such as the BSS). If this is present, then return it from
load_aarch64_image() as the true size of the kernel rather than
just using the size of the Image file itself. This allows the code
which calculates where to put the initrd to avoid putting it in
the kernel's BSS area.

This means that we should be able to reliably load kernel images
which are larger than 128MB without accidentally putting the
initrd or dtb in locations that clash with the kernel itself.

Fixes: https://bugs.launchpad.net/qemu/+bug/1823998
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Message-id: 20190516144733.32399-5-peter.maydell@linaro.org
---
 hw/arm/boot.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)
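For context, the arm64 Image header stores text_offset and image_size as
little-endian 64-bit fields near the start of the file; this standalone
sketch shows the fallback logic described above (the header layout is as
documented for the kernel's Image format, and the sample numbers are
invented):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t ldq_le(const uint8_t *p)    /* little-endian 64-bit load */
    {
        uint64_t v = 0;
        for (int i = 7; i >= 0; i--) {
            v = (v << 8) | p[i];
        }
        return v;
    }

    int main(void)
    {
        uint8_t hdr[64] = { 0 };                /* fake Image header */
        uint64_t file_size = 9 * 1024 * 1024;   /* size of the Image file */

        /* text_offset at byte 8, image_size at byte 16 (both LE u64). */
        uint64_t fields[2] = { 0x80000, 180 * 1024 * 1024 };
        memcpy(hdr + 8, fields, sizeof(fields)); /* demo assumes an LE host */

        uint64_t kernel_size = ldq_le(hdr + 16);
        if (kernel_size == 0) {
            kernel_size = file_size;            /* pre-v3.17 or raw image */
        }
        printf("text_offset 0x%llx, kernel size %llu MiB\n",
               (unsigned long long)ldq_le(hdr + 8),
               (unsigned long long)(kernel_size >> 20));
        return 0;
    }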
22
23
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
24
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/arm/boot.c
26
+++ b/hw/arm/boot.c
27
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
28
hwaddr *entry, AddressSpace *as)
29
{
30
hwaddr kernel_load_offset = KERNEL64_LOAD_ADDR;
31
+ uint64_t kernel_size = 0;
32
uint8_t *buffer;
33
int size;
34
35
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
36
* is only valid if the image_size is non-zero.
37
*/
38
memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
39
- if (hdrvals[1] != 0) {
40
+
41
+ kernel_size = le64_to_cpu(hdrvals[1]);
42
+
43
+ if (kernel_size != 0) {
44
kernel_load_offset = le64_to_cpu(hdrvals[0]);
45
46
/*
47
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
48
}
49
}
50
51
+ /*
52
+ * Kernels before v3.17 don't populate the image_size field, and
53
+ * raw images have no header. For those our best guess at the size
54
+ * is the size of the Image file itself.
55
+ */
56
+ if (kernel_size == 0) {
57
+ kernel_size = size;
58
+ }
59
+
60
*entry = mem_base + kernel_load_offset;
61
rom_add_blob_fixed_as(filename, buffer, size, *entry, as);
62
63
g_free(buffer);
64
65
- return size;
66
+ return kernel_size;
67
}
68
69
static void arm_setup_direct_kernel_boot(ARMCPU *cpu,
70
--
71
2.20.1
72
73
The SSE-200 hardware has configurable integration settings which
determine whether its two CPUs have the FPU and DSP:
 * CPU0_FPU (default 0)
 * CPU0_DSP (default 0)
 * CPU1_FPU (default 1)
 * CPU1_DSP (default 1)

Similarly, the IoTKit has settings for its single CPU:
 * CPU0_FPU (default 1)
 * CPU0_DSP (default 1)

Of our four boards that use either the IoTKit or the SSE-200:
 * mps2-an505, mps2-an521 and musca-a use the default settings
 * musca-b1 enables FPU and DSP on both CPUs

Currently QEMU models all these boards using CPUs with
both FPU and DSP enabled. This means that we are incorrect
for mps2-an521 and musca-a, which should not have FPU or DSP
on CPU0.

Create QOM properties on the ARMSSE devices corresponding to the
default h/w integration settings, and make the Musca-B1 board
enable FPU and DSP on both CPUs. This fixes the mps2-an521
and musca-a behaviour, and leaves the musca-b1 and mps2-an505
behaviour unchanged.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190517174046.11146-5-peter.maydell@linaro.org
---
 include/hw/arm/armsse.h |  7 +++++
 hw/arm/armsse.c         | 58 ++++++++++++++++++++++++++++---------
 hw/arm/musca.c          |  8 ++++++
 3 files changed, 61 insertions(+), 12 deletions(-)


The PL031 RTC tracks the difference between the guest RTC
and the host RTC using a tick_offset field. For migration,
however, we currently always migrate the offset between
the guest and the vm_clock, even if the RTC clock is not
the same as the vm_clock; this was an attempt to retain
migration backwards compatibility.

Unfortunately this results in the RTC behaving oddly across
a VM state save and restore -- since the VM clock stands still
across save-then-restore, regardless of how much real world
time has elapsed, the guest RTC ends up out of sync with the
host RTC in the restored VM.

Fix this by migrating the raw tick_offset. To retain migration
compatibility as far as possible, we have a new property
migrate-tick-offset; by default this is 'true' and we will
migrate the true tick offset in a new subsection; if the
incoming data has no subsection we fall back to the old
vm_clock-based offset information, so old->new migration
compatibility is preserved. For complete new->old migration
compatibility, the property is set to 'false' for 4.0 and
earlier machine types (this will only affect 'virt-4.0'
and below, as none of the other pl031-using machines are
versioned).

Reported-by: Russell King <rmk@armlinux.org.uk>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 20190709143912.28905-1-peter.maydell@linaro.org
---
 include/hw/timer/pl031.h |  2 +
 hw/core/machine.c        |  1 +
 hw/timer/pl031.c         | 92 ++++++++++++++++++++++++++++++++++++++++++++++---
 3 files changed, 91 insertions(+), 4 deletions(-)
35
35
36
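
Usage note for the pl031 change above: on a pl031-using machine type whose
compat settings do not already force the property off, migrating back to an
older QEMU needs migrate-tick-offset turned off by hand. A sketch of how
that might look on the command line (the -global spelling here is my
assumption, not something taken from the patch):

    qemu-system-arm -M virt -rtc clock=host \
        -global pl031.migrate-tick-offset=false

With the property left at its default of 'true', the guest RTC stays in
sync with the host RTC across save/restore, which is the point of the fix.
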
diff --git a/include/hw/arm/armsse.h b/include/hw/arm/armsse.h
36
diff --git a/include/hw/timer/pl031.h b/include/hw/timer/pl031.h
37
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
38
--- a/include/hw/arm/armsse.h
38
--- a/include/hw/timer/pl031.h
39
+++ b/include/hw/arm/armsse.h
39
+++ b/include/hw/timer/pl031.h
40
@@ -XXX,XX +XXX,XX @@
40
@@ -XXX,XX +XXX,XX @@ typedef struct PL031State {
41
* address of each SRAM bank (and thus the total amount of internal SRAM)
41
*/
42
* + QOM property "init-svtor" sets the initial value of the CPU SVTOR register
42
uint32_t tick_offset_vmstate;
43
* (where it expects to load the PC and SP from the vector table on reset)
43
uint32_t tick_offset;
44
+ * + QOM properties "CPU0_FPU", "CPU0_DSP", "CPU1_FPU" and "CPU1_DSP" which
44
+ bool tick_offset_migrated;
45
+ * set whether the CPUs have the FPU and DSP features present. The default
45
+ bool migrate_tick_offset;
46
+ * (matching the hardware) is that for CPU0 in an IoTKit and CPU1 in an
46
47
+ * SSE-200 both are present; CPU0 in an SSE-200 has neither.
47
uint32_t mr;
48
+ * Since the IoTKit has only one CPU, it does not have the CPU1_* properties.
48
uint32_t lr;
49
* + Named GPIO inputs "EXP_IRQ" 0..n are the expansion interrupts for CPU 0,
49
diff --git a/hw/core/machine.c b/hw/core/machine.c
50
* which are wired to its NVIC lines 32 .. n+32
51
* + Named GPIO inputs "EXP_CPU1_IRQ" 0..n are the expansion interrupts for
52
@@ -XXX,XX +XXX,XX @@ typedef struct ARMSSE {
53
uint32_t mainclk_frq;
54
uint32_t sram_addr_width;
55
uint32_t init_svtor;
56
+ bool cpu_fpu[SSE_MAX_CPUS];
57
+ bool cpu_dsp[SSE_MAX_CPUS];
58
} ARMSSE;
59
60
typedef struct ARMSSEInfo ARMSSEInfo;
61
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
62
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
63
--- a/hw/arm/armsse.c
51
--- a/hw/core/machine.c
64
+++ b/hw/arm/armsse.c
52
+++ b/hw/core/machine.c
65
@@ -XXX,XX +XXX,XX @@ struct ARMSSEInfo {
53
@@ -XXX,XX +XXX,XX @@ GlobalProperty hw_compat_4_0[] = {
66
bool has_cachectrl;
54
{ "virtio-gpu-pci", "edid", "false" },
67
bool has_cpusecctrl;
55
{ "virtio-device", "use-started", "false" },
68
bool has_cpuid;
56
{ "virtio-balloon-device", "qemu-4-0-config-size", "true" },
69
+ Property *props;
57
+ { "pl031", "migrate-tick-offset", "false" },
58
};
59
const size_t hw_compat_4_0_len = G_N_ELEMENTS(hw_compat_4_0);
60
61
diff --git a/hw/timer/pl031.c b/hw/timer/pl031.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/hw/timer/pl031.c
64
+++ b/hw/timer/pl031.c
65
@@ -XXX,XX +XXX,XX @@ static int pl031_pre_save(void *opaque)
66
{
67
PL031State *s = opaque;
68
69
- /* tick_offset is base_time - rtc_clock base time. Instead, we want to
70
- * store the base time relative to the QEMU_CLOCK_VIRTUAL for backwards-compatibility. */
71
+ /*
72
+ * The PL031 device model code uses the tick_offset field, which is
73
+ * the offset between what the guest RTC should read and what the
74
+ * QEMU rtc_clock reads:
75
+ * guest_rtc = rtc_clock + tick_offset
76
+ * and so
77
+ * tick_offset = guest_rtc - rtc_clock
78
+ *
79
+ * We want to migrate this offset, which sounds straightforward.
80
+ * Unfortunately older versions of QEMU migrated a conversion of this
81
+ * offset into an offset from the vm_clock. (This was in turn an
82
+ * attempt to be compatible with even older QEMU versions, but it
83
+ * has incorrect behaviour if the rtc_clock is not the same as the
84
+ * vm_clock.) So we put the actual tick_offset into a migration
85
+ * subsection, and the backwards-compatible time-relative-to-vm_clock
86
+ * in the main migration state.
87
+ *
88
+ * Calculate base time relative to QEMU_CLOCK_VIRTUAL:
89
+ */
90
int64_t delta = qemu_clock_get_ns(rtc_clock) - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
91
s->tick_offset_vmstate = s->tick_offset + delta / NANOSECONDS_PER_SECOND;
92
93
return 0;
94
}
95
96
+static int pl031_pre_load(void *opaque)
97
+{
98
+ PL031State *s = opaque;
99
+
100
+ s->tick_offset_migrated = false;
101
+ return 0;
102
+}
103
+
104
static int pl031_post_load(void *opaque, int version_id)
105
{
106
PL031State *s = opaque;
107
108
- int64_t delta = qemu_clock_get_ns(rtc_clock) - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
109
- s->tick_offset = s->tick_offset_vmstate - delta / NANOSECONDS_PER_SECOND;
110
+ /*
111
+ * If we got the tick_offset subsection, then we can just use
112
+ * the value in that. Otherwise the source is an older QEMU and
113
+ * has given us the offset from the vm_clock; convert it back to
114
+ * an offset from the rtc_clock. This will cause time to incorrectly
115
+ * go backwards compared to the host RTC, but this is unavoidable.
116
+ */
117
+
118
+ if (!s->tick_offset_migrated) {
119
+ int64_t delta = qemu_clock_get_ns(rtc_clock) -
120
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
121
+ s->tick_offset = s->tick_offset_vmstate -
122
+ delta / NANOSECONDS_PER_SECOND;
123
+ }
124
pl031_set_alarm(s);
125
return 0;
126
}
127
128
+static int pl031_tick_offset_post_load(void *opaque, int version_id)
129
+{
130
+ PL031State *s = opaque;
131
+
132
+ s->tick_offset_migrated = true;
133
+ return 0;
134
+}
135
+
136
+static bool pl031_tick_offset_needed(void *opaque)
137
+{
138
+ PL031State *s = opaque;
139
+
140
+ return s->migrate_tick_offset;
141
+}
142
+
143
+static const VMStateDescription vmstate_pl031_tick_offset = {
144
+ .name = "pl031/tick-offset",
145
+ .version_id = 1,
146
+ .minimum_version_id = 1,
147
+ .needed = pl031_tick_offset_needed,
148
+ .post_load = pl031_tick_offset_post_load,
149
+ .fields = (VMStateField[]) {
150
+ VMSTATE_UINT32(tick_offset, PL031State),
151
+ VMSTATE_END_OF_LIST()
152
+ }
70
+};
153
+};
71
+
154
+
72
+static Property iotkit_properties[] = {
155
static const VMStateDescription vmstate_pl031 = {
73
+ DEFINE_PROP_LINK("memory", ARMSSE, board_memory, TYPE_MEMORY_REGION,
156
.name = "pl031",
74
+ MemoryRegion *),
157
.version_id = 1,
75
+ DEFINE_PROP_UINT32("EXP_NUMIRQ", ARMSSE, exp_numirq, 64),
158
.minimum_version_id = 1,
76
+ DEFINE_PROP_UINT32("MAINCLK", ARMSSE, mainclk_frq, 0),
159
.pre_save = pl031_pre_save,
77
+ DEFINE_PROP_UINT32("SRAM_ADDR_WIDTH", ARMSSE, sram_addr_width, 15),
160
+ .pre_load = pl031_pre_load,
78
+ DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
161
.post_load = pl031_post_load,
79
+ DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], true),
162
.fields = (VMStateField[]) {
80
+ DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], true),
163
VMSTATE_UINT32(tick_offset_vmstate, PL031State),
164
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl031 = {
165
VMSTATE_UINT32(im, PL031State),
166
VMSTATE_UINT32(is, PL031State),
167
VMSTATE_END_OF_LIST()
168
+ },
169
+ .subsections = (const VMStateDescription*[]) {
170
+ &vmstate_pl031_tick_offset,
171
+ NULL
172
}
173
};
174
175
+static Property pl031_properties[] = {
176
+ /*
177
+ * True to correctly migrate the tick offset of the RTC. False to
178
+ * obtain backward migration compatibility with older QEMU versions,
179
+ * at the expense of the guest RTC going backwards compared with the
180
+ * host RTC when the VM is saved/restored if using -rtc host.
181
+ * (Even if set to 'true' older QEMU can migrate forward to newer QEMU;
182
+ * 'false' also permits newer QEMU to migrate to older QEMU.)
183
+ */
184
+ DEFINE_PROP_BOOL("migrate-tick-offset",
185
+ PL031State, migrate_tick_offset, true),
81
+ DEFINE_PROP_END_OF_LIST()
186
+ DEFINE_PROP_END_OF_LIST()
82
+};
187
+};
83
+
188
+
84
+static Property armsse_properties[] = {
189
static void pl031_class_init(ObjectClass *klass, void *data)
85
+ DEFINE_PROP_LINK("memory", ARMSSE, board_memory, TYPE_MEMORY_REGION,
86
+ MemoryRegion *),
87
+ DEFINE_PROP_UINT32("EXP_NUMIRQ", ARMSSE, exp_numirq, 64),
88
+ DEFINE_PROP_UINT32("MAINCLK", ARMSSE, mainclk_frq, 0),
89
+ DEFINE_PROP_UINT32("SRAM_ADDR_WIDTH", ARMSSE, sram_addr_width, 15),
90
+ DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
91
+ DEFINE_PROP_BOOL("CPU0_FPU", ARMSSE, cpu_fpu[0], false),
92
+ DEFINE_PROP_BOOL("CPU0_DSP", ARMSSE, cpu_dsp[0], false),
93
+ DEFINE_PROP_BOOL("CPU1_FPU", ARMSSE, cpu_fpu[1], true),
94
+ DEFINE_PROP_BOOL("CPU1_DSP", ARMSSE, cpu_dsp[1], true),
95
+ DEFINE_PROP_END_OF_LIST()
96
};
97
98
static const ARMSSEInfo armsse_variants[] = {
99
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
100
.has_cachectrl = false,
101
.has_cpusecctrl = false,
102
.has_cpuid = false,
103
+ .props = iotkit_properties,
104
},
105
{
106
.name = TYPE_SSE200,
107
@@ -XXX,XX +XXX,XX @@ static const ARMSSEInfo armsse_variants[] = {
108
.has_cachectrl = true,
109
.has_cpusecctrl = true,
110
.has_cpuid = true,
111
+ .props = armsse_properties,
112
},
113
};
114
115
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
116
return;
117
}
118
}
119
+ if (!s->cpu_fpu[i]) {
120
+ object_property_set_bool(cpuobj, false, "vfp", &err);
121
+ if (err) {
122
+ error_propagate(errp, err);
123
+ return;
124
+ }
125
+ }
126
+ if (!s->cpu_dsp[i]) {
127
+ object_property_set_bool(cpuobj, false, "dsp", &err);
128
+ if (err) {
129
+ error_propagate(errp, err);
130
+ return;
131
+ }
132
+ }
133
134
if (i > 0) {
135
memory_region_add_subregion_overlap(&s->cpu_container[i], 0,
136
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription armsse_vmstate = {
137
}
138
};
139
140
-static Property armsse_properties[] = {
141
- DEFINE_PROP_LINK("memory", ARMSSE, board_memory, TYPE_MEMORY_REGION,
142
- MemoryRegion *),
143
- DEFINE_PROP_UINT32("EXP_NUMIRQ", ARMSSE, exp_numirq, 64),
144
- DEFINE_PROP_UINT32("MAINCLK", ARMSSE, mainclk_frq, 0),
145
- DEFINE_PROP_UINT32("SRAM_ADDR_WIDTH", ARMSSE, sram_addr_width, 15),
146
- DEFINE_PROP_UINT32("init-svtor", ARMSSE, init_svtor, 0x10000000),
147
- DEFINE_PROP_END_OF_LIST()
148
-};
149
-
150
static void armsse_reset(DeviceState *dev)
151
{
190
{
152
ARMSSE *s = ARMSSE(dev);
153
@@ -XXX,XX +XXX,XX @@ static void armsse_class_init(ObjectClass *klass, void *data)
154
DeviceClass *dc = DEVICE_CLASS(klass);
191
DeviceClass *dc = DEVICE_CLASS(klass);
155
IDAUInterfaceClass *iic = IDAU_INTERFACE_CLASS(klass);
192
156
ARMSSEClass *asc = ARMSSE_CLASS(klass);
193
dc->vmsd = &vmstate_pl031;
157
+ const ARMSSEInfo *info = data;
194
+ dc->props = pl031_properties;
158
159
dc->realize = armsse_realize;
160
dc->vmsd = &armsse_vmstate;
161
- dc->props = armsse_properties;
162
+ dc->props = info->props;
163
dc->reset = armsse_reset;
164
iic->check = armsse_idau_check;
165
- asc->info = data;
166
+ asc->info = info;
167
}
195
}
168
196
169
static const TypeInfo armsse_info = {
197
static const TypeInfo pl031_info = {
170
diff --git a/hw/arm/musca.c b/hw/arm/musca.c
171
index XXXXXXX..XXXXXXX 100644
172
--- a/hw/arm/musca.c
173
+++ b/hw/arm/musca.c
174
@@ -XXX,XX +XXX,XX @@ static void musca_init(MachineState *machine)
175
qdev_prop_set_uint32(ssedev, "init-svtor", mmc->init_svtor);
176
qdev_prop_set_uint32(ssedev, "SRAM_ADDR_WIDTH", mmc->sram_addr_width);
177
qdev_prop_set_uint32(ssedev, "MAINCLK", SYSCLK_FRQ);
178
+ /*
179
+ * Musca-A takes the default SSE-200 FPU/DSP settings (ie no for
180
+ * CPU0 and yes for CPU1); Musca-B1 explicitly enables them for CPU0.
181
+ */
182
+ if (mmc->type == MUSCA_B1) {
183
+ qdev_prop_set_bit(ssedev, "CPU0_FPU", true);
184
+ qdev_prop_set_bit(ssedev, "CPU0_DSP", true);
185
+ }
186
object_property_set_bool(OBJECT(&mms->sse), true, "realized",
187
&error_fatal);

--
2.20.1
Allow VFP and Neon to be disabled via a CPU property. As with
the "pmu" property, we only allow these features to be removed
from CPUs which have them by default, not added to CPUs which
don't have them.

The primary motivation here is to be able to optionally
create Cortex-M33 CPUs with no FPU, but we provide switches
for both VFP and Neon because the two interact:
* AArch64 can't have one without the other
* Some ID register fields only change if both are disabled

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190517174046.11146-2-peter.maydell@linaro.org
---
 target/arm/cpu.h | 4 ++
 target/arm/cpu.c | 150 +++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 148 insertions(+), 6 deletions(-)

The ARMv5 architecture didn't specify detailed per-feature ID
registers. Now that we're using the MVFR0 register fields to
gate the existence of VFP instructions, we need to set up
the correct values in the cpu->isar structure so that we still
provide an FPU to the guest.

This fixes a regression in the arm926 and arm1026 CPUs, which
are the only ones that both have VFP and are ARMv5 or earlier.
This regression was introduced by the VFP refactoring, and more
specifically by commits 1120827fa182f0e76 and 266bd25c485597c,
which accidentally disabled VFP short-vector support and
double-precision support on these CPUs.

Fixes: 1120827fa182f0e
Fixes: 266bd25c485597c
Fixes: https://bugs.launchpad.net/qemu/+bug/1836192
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Christophe Lyon <christophe.lyon@linaro.org>
Message-id: 20190711131241.22231-1-peter.maydell@linaro.org
---
 target/arm/cpu.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
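
The "vfp" and "neon" switches added by the first of the two patches above
are ordinary QOM properties on the CPU object, so SoC or board code can
clear them before realizing the CPU. A minimal sketch of that usage
(variable names here are illustrative, not taken from the patches):

    Error *err = NULL;

    /* cpuobj is the not-yet-realized ARM CPU object created by the SoC */
    if (!board_has_fpu) {
        object_property_set_bool(cpuobj, false, "vfp", &err);
        if (err) {
            error_propagate(errp, err);
            return;
        }
    }
    if (!board_has_neon) {
        object_property_set_bool(cpuobj, false, "neon", &err);
        if (err) {
            error_propagate(errp, err);
            return;
        }
    }

This mirrors what the SSE-200/IoTKit patch earlier in this series does for
its CPU0_FPU/CPU0_DSP integration settings.
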
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
25
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
26
bool has_el3;
27
/* CPU has PMU (Performance Monitor Unit) */
28
bool has_pmu;
29
+ /* CPU has VFP */
30
+ bool has_vfp;
31
+ /* CPU has Neon */
32
+ bool has_neon;
33
34
/* CPU has memory protection unit */
35
bool has_mpu;
36
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
27
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
37
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/cpu.c
29
--- a/target/arm/cpu.c
39
+++ b/target/arm/cpu.c
30
+++ b/target/arm/cpu.c
40
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_cfgend_property =
31
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
41
static Property arm_cpu_has_pmu_property =
32
* set the field to indicate Jazelle support within QEMU.
42
DEFINE_PROP_BOOL("pmu", ARMCPU, has_pmu, true);
33
*/
43
34
cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
44
+static Property arm_cpu_has_vfp_property =
45
+ DEFINE_PROP_BOOL("vfp", ARMCPU, has_vfp, true);
46
+
47
+static Property arm_cpu_has_neon_property =
48
+ DEFINE_PROP_BOOL("neon", ARMCPU, has_neon, true);
49
+
50
static Property arm_cpu_has_mpu_property =
51
DEFINE_PROP_BOOL("has-mpu", ARMCPU, has_mpu, true);
52
53
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
54
if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
55
set_feature(&cpu->env, ARM_FEATURE_PMSA);
56
}
57
+ /* Similarly for the VFP feature bits */
58
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP4)) {
59
+ set_feature(&cpu->env, ARM_FEATURE_VFP3);
60
+ }
61
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP3)) {
62
+ set_feature(&cpu->env, ARM_FEATURE_VFP);
63
+ }
64
65
if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
66
arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
67
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
68
&error_abort);
69
}
70
71
+ /*
35
+ /*
72
+ * Allow user to turn off VFP and Neon support, but only for TCG --
36
+ * Similarly, we need to set MVFR0 fields to enable double precision
73
+ * KVM does not currently allow us to lie to the guest about its
37
+ * and short vector support even though ARMv5 doesn't have this register.
74
+ * ID/feature registers, so the guest always sees what the host has.
75
+ */
38
+ */
76
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
39
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
77
+ cpu->has_vfp = true;
40
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
78
+ if (!kvm_enabled()) {
41
}
79
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_has_vfp_property,
42
80
+ &error_abort);
43
static void arm946_initfn(Object *obj)
81
+ }
44
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
82
+ }
45
* set the field to indicate Jazelle support within QEMU.
83
+
46
*/
84
+ if (arm_feature(&cpu->env, ARM_FEATURE_NEON)) {
47
cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
85
+ cpu->has_neon = true;
48
+ /*
86
+ if (!kvm_enabled()) {
49
+ * Similarly, we need to set MVFR0 fields to enable double precision
87
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_has_neon_property,
50
+ * and short vector support even though ARMv5 doesn't have this register.
88
+ &error_abort);
51
+ */
89
+ }
52
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
90
+ }
53
+ cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
91
+
54
92
if (arm_feature(&cpu->env, ARM_FEATURE_PMSA)) {
55
{
93
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_mpu_property,
56
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
94
&error_abort);
95
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
96
return;
97
}
98
99
+ if (arm_feature(env, ARM_FEATURE_AARCH64) &&
100
+ cpu->has_vfp != cpu->has_neon) {
101
+ /*
102
+ * This is an architectural requirement for AArch64; AArch32 is
103
+ * more flexible and permits VFP-no-Neon and Neon-no-VFP.
104
+ */
105
+ error_setg(errp,
106
+ "AArch64 CPUs must have both VFP and Neon or neither");
107
+ return;
108
+ }
109
+
110
+ if (!cpu->has_vfp) {
111
+ uint64_t t;
112
+ uint32_t u;
113
+
114
+ unset_feature(env, ARM_FEATURE_VFP);
115
+ unset_feature(env, ARM_FEATURE_VFP3);
116
+ unset_feature(env, ARM_FEATURE_VFP4);
117
+
118
+ t = cpu->isar.id_aa64isar1;
119
+ t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 0);
120
+ cpu->isar.id_aa64isar1 = t;
121
+
122
+ t = cpu->isar.id_aa64pfr0;
123
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 0xf);
124
+ cpu->isar.id_aa64pfr0 = t;
125
+
126
+ u = cpu->isar.id_isar6;
127
+ u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
128
+ cpu->isar.id_isar6 = u;
129
+
130
+ u = cpu->isar.mvfr0;
131
+ u = FIELD_DP32(u, MVFR0, FPSP, 0);
132
+ u = FIELD_DP32(u, MVFR0, FPDP, 0);
133
+ u = FIELD_DP32(u, MVFR0, FPTRAP, 0);
134
+ u = FIELD_DP32(u, MVFR0, FPDIVIDE, 0);
135
+ u = FIELD_DP32(u, MVFR0, FPSQRT, 0);
136
+ u = FIELD_DP32(u, MVFR0, FPSHVEC, 0);
137
+ u = FIELD_DP32(u, MVFR0, FPROUND, 0);
138
+ cpu->isar.mvfr0 = u;
139
+
140
+ u = cpu->isar.mvfr1;
141
+ u = FIELD_DP32(u, MVFR1, FPFTZ, 0);
142
+ u = FIELD_DP32(u, MVFR1, FPDNAN, 0);
143
+ u = FIELD_DP32(u, MVFR1, FPHP, 0);
144
+ cpu->isar.mvfr1 = u;
145
+
146
+ u = cpu->isar.mvfr2;
147
+ u = FIELD_DP32(u, MVFR2, FPMISC, 0);
148
+ cpu->isar.mvfr2 = u;
149
+ }
150
+
151
+ if (!cpu->has_neon) {
152
+ uint64_t t;
153
+ uint32_t u;
154
+
155
+ unset_feature(env, ARM_FEATURE_NEON);
156
+
157
+ t = cpu->isar.id_aa64isar0;
158
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 0);
159
+ cpu->isar.id_aa64isar0 = t;
160
+
161
+ t = cpu->isar.id_aa64isar1;
162
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 0);
163
+ cpu->isar.id_aa64isar1 = t;
164
+
165
+ t = cpu->isar.id_aa64pfr0;
166
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 0xf);
167
+ cpu->isar.id_aa64pfr0 = t;
168
+
169
+ u = cpu->isar.id_isar5;
170
+ u = FIELD_DP32(u, ID_ISAR5, RDM, 0);
171
+ u = FIELD_DP32(u, ID_ISAR5, VCMA, 0);
172
+ cpu->isar.id_isar5 = u;
173
+
174
+ u = cpu->isar.id_isar6;
175
+ u = FIELD_DP32(u, ID_ISAR6, DP, 0);
176
+ u = FIELD_DP32(u, ID_ISAR6, FHM, 0);
177
+ cpu->isar.id_isar6 = u;
178
+
179
+ u = cpu->isar.mvfr1;
180
+ u = FIELD_DP32(u, MVFR1, SIMDLS, 0);
181
+ u = FIELD_DP32(u, MVFR1, SIMDINT, 0);
182
+ u = FIELD_DP32(u, MVFR1, SIMDSP, 0);
183
+ u = FIELD_DP32(u, MVFR1, SIMDHP, 0);
184
+ u = FIELD_DP32(u, MVFR1, SIMDFMAC, 0);
185
+ cpu->isar.mvfr1 = u;
186
+
187
+ u = cpu->isar.mvfr2;
188
+ u = FIELD_DP32(u, MVFR2, SIMDMISC, 0);
189
+ cpu->isar.mvfr2 = u;
190
+ }
191
+
192
+ if (!cpu->has_neon && !cpu->has_vfp) {
193
+ uint64_t t;
194
+ uint32_t u;
195
+
196
+ t = cpu->isar.id_aa64isar0;
197
+ t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 0);
198
+ cpu->isar.id_aa64isar0 = t;
199
+
200
+ t = cpu->isar.id_aa64isar1;
201
+ t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 0);
202
+ cpu->isar.id_aa64isar1 = t;
203
+
204
+ u = cpu->isar.mvfr0;
205
+ u = FIELD_DP32(u, MVFR0, SIMDREG, 0);
206
+ cpu->isar.mvfr0 = u;
207
+ }
208
+
209
/* Some features automatically imply others: */
210
if (arm_feature(env, ARM_FEATURE_V8)) {
211
if (arm_feature(env, ARM_FEATURE_M)) {
212
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
213
if (arm_feature(env, ARM_FEATURE_V5)) {
214
set_feature(env, ARM_FEATURE_V4T);
215
}
216
- if (arm_feature(env, ARM_FEATURE_VFP4)) {
217
- set_feature(env, ARM_FEATURE_VFP3);
218
- }
219
- if (arm_feature(env, ARM_FEATURE_VFP3)) {
220
- set_feature(env, ARM_FEATURE_VFP);
221
- }
222
if (arm_feature(env, ARM_FEATURE_LPAE)) {
223
set_feature(env, ARM_FEATURE_V7MP);
224
set_feature(env, ARM_FEATURE_PXN);
225
--
57
--
226
2.20.1
58
2.20.1
227
59
228
60
Create "vfp" and "dsp" properties on the armv7m container object
which will be forwarded to its CPU object, so that SoCs can
configure whether the CPU has these features.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190517174046.11146-4-peter.maydell@linaro.org
---
 include/hw/arm/armv7m.h | 4 ++++
 hw/arm/armv7m.c | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

In the M-profile architecture, when we do a vector table fetch and it
fails, we need to report a HardFault. Whether this is a Secure HF or
a NonSecure HF depends on several things. If AIRCR.BFHFNMINS is 0
then HF is always Secure, because there is no NonSecure HardFault.
Otherwise, the answer depends on whether the 'underlying exception'
(MemManage, BusFault, SecureFault) targets Secure or NonSecure. (In
the pseudocode, this is handled in the Vector() function: the final
exc.isSecure is calculated by looking at the exc.isSecure from the
exception returned from the memory access, not the isSecure input
argument.)

We weren't doing this correctly, because we were looking at
the target security domain of the exception we were trying to
load the vector table entry for. This produces errors of two kinds:
* a load from the NS vector table which hits the "NS access
  to S memory" SecureFault should end up as a Secure HardFault,
  but we were raising an NS HardFault
* a load from the S vector table which causes a BusFault
  should raise an NS HardFault if BFHFNMINS == 1 (because
  in that case all BusFaults are NonSecure), but we were raising
  a Secure HardFault

Correct the logic.

We also fix a comment error where we claimed that we might
be escalating MemManage to HardFault, and forgot about SecureFault.
(Vector loads can never hit MPU access faults, because they're
always aligned and always use the default address map.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190705094823.28905-1-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)
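
The rule described in the second patch above comes down to something like
this (an illustrative sketch of the decision, not the QEMU code itself):

    /*
     * Which security state a vector-table-fetch HardFault targets.
     * 'bfhfnmins' is AIRCR.BFHFNMINS; 'underlying_secure' is the target
     * security state of the underlying SecureFault or BusFault.
     */
    static bool vector_fetch_hf_is_secure(bool bfhfnmins,
                                          bool underlying_secure)
    {
        if (!bfhfnmins) {
            /* There is no NonSecure HardFault, so it must target Secure */
            return true;
        }
        /*
         * Otherwise the HardFault goes to the same security state as the
         * underlying exception: Secure for the "NS access to S memory"
         * SecureFault, NonSecure for a BusFault when BFHFNMINS == 1.
         */
        return underlying_secure;
    }
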
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
36
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
15
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/arm/armv7m.h
38
--- a/target/arm/m_helper.c
17
+++ b/include/hw/arm/armv7m.h
39
+++ b/target/arm/m_helper.c
18
@@ -XXX,XX +XXX,XX @@ typedef struct {
40
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
19
* devices will be automatically layered on top of this view.)
41
if (sattrs.ns) {
20
* + Property "idau": IDAU interface (forwarded to CPU object)
42
attrs.secure = false;
21
* + Property "init-svtor": secure VTOR reset value (forwarded to CPU object)
43
} else if (!targets_secure) {
22
+ * + Property "vfp": enable VFP (forwarded to CPU object)
44
- /* NS access to S memory */
23
+ * + Property "dsp": enable DSP (forwarded to CPU object)
45
+ /*
24
* + Property "enable-bitband": expose bitbanded IO
46
+ * NS access to S memory: the underlying exception which we escalate
25
*/
47
+ * to HardFault is SecureFault, which always targets Secure.
26
typedef struct ARMv7MState {
48
+ */
27
@@ -XXX,XX +XXX,XX @@ typedef struct ARMv7MState {
49
+ exc_secure = true;
28
uint32_t init_svtor;
50
goto load_fail;
29
bool enable_bitband;
30
bool start_powered_off;
31
+ bool vfp;
32
+ bool dsp;
33
} ARMv7MState;
34
35
#endif
36
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/hw/arm/armv7m.c
39
+++ b/hw/arm/armv7m.c
40
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
41
return;
42
}
51
}
43
}
52
}
44
+ if (object_property_find(OBJECT(s->cpu), "vfp", NULL)) {
53
@@ -XXX,XX +XXX,XX @@ static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
45
+ object_property_set_bool(OBJECT(s->cpu), s->vfp,
54
vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
46
+ "vfp", &err);
55
attrs, &result);
47
+ if (err != NULL) {
56
if (result != MEMTX_OK) {
48
+ error_propagate(errp, err);
57
+ /*
49
+ return;
58
+ * Underlying exception is BusFault: its target security state
50
+ }
59
+ * depends on BFHFNMINS.
60
+ */
61
+ exc_secure = !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
62
goto load_fail;
63
}
64
*pvec = vector_entry;
65
@@ -XXX,XX +XXX,XX @@ load_fail:
66
/*
67
* All vector table fetch fails are reported as HardFault, with
68
* HFSR.VECTTBL and .FORCED set. (FORCED is set because
69
- * technically the underlying exception is a MemManage or BusFault
70
+ * technically the underlying exception is a SecureFault or BusFault
71
* that is escalated to HardFault.) This is a terminal exception,
72
* so we will either take the HardFault immediately or else enter
73
* lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
74
+ * The HardFault is Secure if BFHFNMINS is 0 (meaning that all HFs are
75
+ * secure); otherwise it targets the same security state as the
76
+ * underlying exception.
77
*/
78
- exc_secure = targets_secure ||
79
- !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
80
+ if (!(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
81
+ exc_secure = true;
51
+ }
82
+ }
52
+ if (object_property_find(OBJECT(s->cpu), "dsp", NULL)) {
83
env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
53
+ object_property_set_bool(OBJECT(s->cpu), s->dsp,
84
armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
54
+ "dsp", &err);
85
return false;
55
+ if (err != NULL) {
56
+ error_propagate(errp, err);
57
+ return;
58
+ }
59
+ }
60
61
/*
62
* Tell the CPU where the NVIC is; it will fail realize if it doesn't
63
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
64
DEFINE_PROP_BOOL("enable-bitband", ARMv7MState, enable_bitband, false),
65
DEFINE_PROP_BOOL("start-powered-off", ARMv7MState, start_powered_off,
66
false),
67
+ DEFINE_PROP_BOOL("vfp", ARMv7MState, vfp, true),
68
+ DEFINE_PROP_BOOL("dsp", ARMv7MState, dsp, true),
69
DEFINE_PROP_END_OF_LIST(),
70
};
71
72
--
86
--
73
2.20.1
87
2.20.1
74
88
75
89
Deleted patch
The GIC ID registers cover an area 0x30 bytes in size
(12 registers, 4 bytes each). We were incorrectly decoding
only the first 0x20 bytes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190524124248.28394-2-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_dist.c | 4 ++--
 hw/intc/arm_gicv3_redist.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/arm_gicv3_dist.c
16
+++ b/hw/intc/arm_gicv3_dist.c
17
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicd_readl(GICv3State *s, hwaddr offset,
18
}
19
return MEMTX_OK;
20
}
21
- case GICD_IDREGS ... GICD_IDREGS + 0x1f:
22
+ case GICD_IDREGS ... GICD_IDREGS + 0x2f:
23
/* ID registers */
24
*data = gicv3_idreg(offset - GICD_IDREGS);
25
return MEMTX_OK;
26
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicd_writel(GICv3State *s, hwaddr offset,
27
gicd_write_irouter(s, attrs, irq, r);
28
return MEMTX_OK;
29
}
30
- case GICD_IDREGS ... GICD_IDREGS + 0x1f:
31
+ case GICD_IDREGS ... GICD_IDREGS + 0x2f:
32
case GICD_TYPER:
33
case GICD_IIDR:
34
/* RO registers, ignore the write */
35
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/intc/arm_gicv3_redist.c
38
+++ b/hw/intc/arm_gicv3_redist.c
39
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset,
40
}
41
*data = cs->gicr_nsacr;
42
return MEMTX_OK;
43
- case GICR_IDREGS ... GICR_IDREGS + 0x1f:
44
+ case GICR_IDREGS ... GICR_IDREGS + 0x2f:
45
*data = gicv3_idreg(offset - GICR_IDREGS);
46
return MEMTX_OK;
47
default:
48
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
49
return MEMTX_OK;
50
case GICR_IIDR:
51
case GICR_TYPER:
52
- case GICR_IDREGS ... GICR_IDREGS + 0x1f:
53
+ case GICR_IDREGS ... GICR_IDREGS + 0x2f:
54
/* RO registers, ignore the write */
55
qemu_log_mask(LOG_GUEST_ERROR,
56
"%s: invalid guest write to RO register at offset "
57
--
58
2.20.1
59
60
Deleted patch
The AArch32 VMOV (immediate) instruction uses the same VFP encoded
immediate format we already handle in vfp_expand_imm(). Use that
function rather than hand-decoding it.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-3-peter.maydell@linaro.org
---
 target/arm/translate-vfp.inc.c | 28 ++++------------------------
 target/arm/vfp.decode | 10 ++++++----
 2 files changed, 10 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-vfp.inc.c
18
+++ b/target/arm/translate-vfp.inc.c
19
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
20
uint32_t delta_d = 0;
21
int veclen = s->vec_len;
22
TCGv_i32 fd;
23
- uint32_t n, i, vd;
24
+ uint32_t vd;
25
26
vd = a->vd;
27
28
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
29
}
30
}
31
32
- n = (a->imm4h << 28) & 0x80000000;
33
- i = ((a->imm4h << 4) & 0x70) | a->imm4l;
34
- if (i & 0x40) {
35
- i |= 0x780;
36
- } else {
37
- i |= 0x800;
38
- }
39
- n |= i << 19;
40
-
41
- fd = tcg_temp_new_i32();
42
- tcg_gen_movi_i32(fd, n);
43
+ fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
44
45
for (;;) {
46
neon_store_reg32(fd, vd);
47
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
48
uint32_t delta_d = 0;
49
int veclen = s->vec_len;
50
TCGv_i64 fd;
51
- uint32_t n, i, vd;
52
+ uint32_t vd;
53
54
vd = a->vd;
55
56
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
57
}
58
}
59
60
- n = (a->imm4h << 28) & 0x80000000;
61
- i = ((a->imm4h << 4) & 0x70) | a->imm4l;
62
- if (i & 0x40) {
63
- i |= 0x3f80;
64
- } else {
65
- i |= 0x4000;
66
- }
67
- n |= i << 16;
68
-
69
- fd = tcg_temp_new_i64();
70
- tcg_gen_movi_i64(fd, ((uint64_t)n) << 32);
71
+ fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
72
73
for (;;) {
74
neon_store_reg64(fd, vd);
75
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
76
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/vfp.decode
78
+++ b/target/arm/vfp.decode
79
@@ -XXX,XX +XXX,XX @@
80
%vmov_idx_b 21:1 5:2
81
%vmov_idx_h 21:1 6:1
82
83
+%vmov_imm 16:4 0:4
84
+
85
# VMOV scalar to general-purpose register; note that this does
86
# include some Neon cases.
87
VMOV_to_gp ---- 1110 u:1 1. 1 .... rt:4 1011 ... 1 0000 \
88
@@ -XXX,XX +XXX,XX @@ VFM_sp ---- 1110 1.10 .... .... 1010 . o2:1 . 0 .... \
89
VFM_dp ---- 1110 1.10 .... .... 1011 . o2:1 . 0 .... \
90
vm=%vm_dp vn=%vn_dp vd=%vd_dp o1=2
91
92
-VMOV_imm_sp ---- 1110 1.11 imm4h:4 .... 1010 0000 imm4l:4 \
93
- vd=%vd_sp
94
-VMOV_imm_dp ---- 1110 1.11 imm4h:4 .... 1011 0000 imm4l:4 \
95
- vd=%vd_dp
96
+VMOV_imm_sp ---- 1110 1.11 .... .... 1010 0000 .... \
97
+ vd=%vd_sp imm=%vmov_imm
98
+VMOV_imm_dp ---- 1110 1.11 .... .... 1011 0000 .... \
99
+ vd=%vd_dp imm=%vmov_imm
100
101
VMOV_reg_sp ---- 1110 1.11 0000 .... 1010 01.0 .... \
102
vd=%vd_sp vm=%vm_sp
103
--
104
2.20.1
105
106
Deleted patch
Where Neon instructions are floating point operations, we
mostly use the old VFP utility functions like gen_vfp_abs()
which work on the TCG globals cpu_F0s and cpu_F1s. The
Neon for-each-element loop conditionally loads the inputs
into either a plain old TCG temporary for most operations
or into cpu_F0s for float operations, and similarly stores
back either cpu_F0s or the temporary.

Switch NEON_2RM_VABS_F away from using cpu_F0s, and
update neon_2rm_is_float_op() accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190613163917.28589-4-peter.maydell@linaro.org
---
 target/arm/translate.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/translate.c
23
+++ b/target/arm/translate.c
24
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr get_fpstatus_ptr(int neon)
25
return statusptr;
26
}
27
28
-static inline void gen_vfp_abs(int dp)
29
-{
30
- if (dp)
31
- gen_helper_vfp_absd(cpu_F0d, cpu_F0d);
32
- else
33
- gen_helper_vfp_abss(cpu_F0s, cpu_F0s);
34
-}
35
-
36
static inline void gen_vfp_neg(int dp)
37
{
38
if (dp)
39
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_3r_sizes[] = {
40
41
static int neon_2rm_is_float_op(int op)
42
{
43
- /* Return true if this neon 2reg-misc op is float-to-float */
44
- return (op == NEON_2RM_VABS_F || op == NEON_2RM_VNEG_F ||
45
+ /*
46
+ * Return true if this neon 2reg-misc op is float-to-float.
47
+ * This is not a property of the operation but of our code --
48
+ * what we are asking here is "does the code for this case in
49
+ * the Neon for-each-pass loop use cpu_F0s?".
50
+ */
51
+ return (op == NEON_2RM_VNEG_F ||
52
(op >= NEON_2RM_VRINTN && op <= NEON_2RM_VRINTZ) ||
53
op == NEON_2RM_VRINTM ||
54
(op >= NEON_2RM_VRINTP && op <= NEON_2RM_VCVTMS) ||
55
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
56
break;
57
}
58
case NEON_2RM_VABS_F:
59
- gen_vfp_abs(0);
60
+ gen_helper_vfp_abss(tmp, tmp);
61
break;
62
case NEON_2RM_VNEG_F:
63
gen_vfp_neg(0);
64
--
65
2.20.1
66
67
Deleted patch
1
Switch NEON_2RM_VNEG_F away from using cpu_F0s.
2
1
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20190613163917.28589-5-peter.maydell@linaro.org
7
---
8
target/arm/translate.c | 13 ++-----------
9
1 file changed, 2 insertions(+), 11 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr get_fpstatus_ptr(int neon)
16
return statusptr;
17
}
18
19
-static inline void gen_vfp_neg(int dp)
20
-{
21
- if (dp)
22
- gen_helper_vfp_negd(cpu_F0d, cpu_F0d);
23
- else
24
- gen_helper_vfp_negs(cpu_F0s, cpu_F0s);
25
-}
26
-
27
#define VFP_GEN_ITOF(name) \
28
static inline void gen_vfp_##name(int dp, int neon) \
29
{ \
30
@@ -XXX,XX +XXX,XX @@ static int neon_2rm_is_float_op(int op)
31
* what we are asking here is "does the code for this case in
32
* the Neon for-each-pass loop use cpu_F0s?".
33
*/
34
- return (op == NEON_2RM_VNEG_F ||
35
- (op >= NEON_2RM_VRINTN && op <= NEON_2RM_VRINTZ) ||
36
+ return ((op >= NEON_2RM_VRINTN && op <= NEON_2RM_VRINTZ) ||
37
op == NEON_2RM_VRINTM ||
38
(op >= NEON_2RM_VRINTP && op <= NEON_2RM_VCVTMS) ||
39
op >= NEON_2RM_VRECPE_F);
40
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
41
gen_helper_vfp_abss(tmp, tmp);
42
break;
43
case NEON_2RM_VNEG_F:
44
- gen_vfp_neg(0);
45
+ gen_helper_vfp_negs(tmp, tmp);
46
break;
47
case NEON_2RM_VSWP:
48
tmp2 = neon_load_reg(rd, pass);
49
--
50
2.20.1
51
52
Deleted patch
1
Switch NEON_2RM_VRINT* away from using cpu_F0s.
2
1
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20190613163917.28589-6-peter.maydell@linaro.org
7
---
8
target/arm/translate.c | 8 +++-----
9
1 file changed, 3 insertions(+), 5 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int neon_2rm_is_float_op(int op)
16
* what we are asking here is "does the code for this case in
17
* the Neon for-each-pass loop use cpu_F0s?".
18
*/
19
- return ((op >= NEON_2RM_VRINTN && op <= NEON_2RM_VRINTZ) ||
20
- op == NEON_2RM_VRINTM ||
21
- (op >= NEON_2RM_VRINTP && op <= NEON_2RM_VCVTMS) ||
22
+ return ((op >= NEON_2RM_VCVTAU && op <= NEON_2RM_VCVTMS) ||
23
op >= NEON_2RM_VRECPE_F);
24
}
25
26
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
27
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(rmode));
28
gen_helper_set_neon_rmode(tcg_rmode, tcg_rmode,
29
cpu_env);
30
- gen_helper_rints(cpu_F0s, cpu_F0s, fpstatus);
31
+ gen_helper_rints(tmp, tmp, fpstatus);
32
gen_helper_set_neon_rmode(tcg_rmode, tcg_rmode,
33
cpu_env);
34
tcg_temp_free_ptr(fpstatus);
35
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
36
case NEON_2RM_VRINTX:
37
{
38
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
39
- gen_helper_rints_exact(cpu_F0s, cpu_F0s, fpstatus);
40
+ gen_helper_rints_exact(tmp, tmp, fpstatus);
41
tcg_temp_free_ptr(fpstatus);
42
break;
43
}
44
--
45
2.20.1
46
47
Deleted patch
1
Stop using cpu_F0s for the NEON_2RM_VCVT[ANPM][US] ops.
2
1
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20190613163917.28589-7-peter.maydell@linaro.org
7
---
8
target/arm/translate.c | 7 +++----
9
1 file changed, 3 insertions(+), 4 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int neon_2rm_is_float_op(int op)
16
* what we are asking here is "does the code for this case in
17
* the Neon for-each-pass loop use cpu_F0s?".
18
*/
19
- return ((op >= NEON_2RM_VCVTAU && op <= NEON_2RM_VCVTMS) ||
20
- op >= NEON_2RM_VRECPE_F);
21
+ return op >= NEON_2RM_VRECPE_F;
22
}
23
24
static bool neon_2rm_is_v8_op(int op)
25
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
26
cpu_env);
27
28
if (is_signed) {
29
- gen_helper_vfp_tosls(cpu_F0s, cpu_F0s,
30
+ gen_helper_vfp_tosls(tmp, tmp,
31
tcg_shift, fpst);
32
} else {
33
- gen_helper_vfp_touls(cpu_F0s, cpu_F0s,
34
+ gen_helper_vfp_touls(tmp, tmp,
35
tcg_shift, fpst);
36
}
37
38
--
39
2.20.1
40
41
Deleted patch
1
Stop using cpu_F0s for NEON_2RM_VRECPE_F and NEON_2RM_VRSQRTE_F.
2
1
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20190613163917.28589-8-peter.maydell@linaro.org
7
---
8
target/arm/translate.c | 6 +++---
9
1 file changed, 3 insertions(+), 3 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int neon_2rm_is_float_op(int op)
16
* what we are asking here is "does the code for this case in
17
* the Neon for-each-pass loop use cpu_F0s?".
18
*/
19
- return op >= NEON_2RM_VRECPE_F;
20
+ return op >= NEON_2RM_VCVT_FS;
21
}
22
23
static bool neon_2rm_is_v8_op(int op)
24
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
25
case NEON_2RM_VRECPE_F:
26
{
27
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
28
- gen_helper_recpe_f32(cpu_F0s, cpu_F0s, fpstatus);
29
+ gen_helper_recpe_f32(tmp, tmp, fpstatus);
30
tcg_temp_free_ptr(fpstatus);
31
break;
32
}
33
case NEON_2RM_VRSQRTE_F:
34
{
35
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
36
- gen_helper_rsqrte_f32(cpu_F0s, cpu_F0s, fpstatus);
37
+ gen_helper_rsqrte_f32(tmp, tmp, fpstatus);
38
tcg_temp_free_ptr(fpstatus);
39
break;
40
}
41
--
42
2.20.1
43
44
Deleted patch
1
Stop using cpu_F0s for the Neon f32/s32 VCVT operations.
2
Since this is the last user of cpu_F0s in the Neon 2rm-op
3
loop, we can remove the handling code for it too.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20190613163917.28589-9-peter.maydell@linaro.org
9
---
10
target/arm/translate.c | 82 ++++++++++++------------------------------
11
1 file changed, 22 insertions(+), 60 deletions(-)
12
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
16
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr get_fpstatus_ptr(int neon)
18
return statusptr;
19
}
20
21
-#define VFP_GEN_ITOF(name) \
22
-static inline void gen_vfp_##name(int dp, int neon) \
23
-{ \
24
- TCGv_ptr statusptr = get_fpstatus_ptr(neon); \
25
- if (dp) { \
26
- gen_helper_vfp_##name##d(cpu_F0d, cpu_F0s, statusptr); \
27
- } else { \
28
- gen_helper_vfp_##name##s(cpu_F0s, cpu_F0s, statusptr); \
29
- } \
30
- tcg_temp_free_ptr(statusptr); \
31
-}
32
-
33
-VFP_GEN_ITOF(uito)
34
-VFP_GEN_ITOF(sito)
35
-#undef VFP_GEN_ITOF
36
-
37
-#define VFP_GEN_FTOI(name) \
38
-static inline void gen_vfp_##name(int dp, int neon) \
39
-{ \
40
- TCGv_ptr statusptr = get_fpstatus_ptr(neon); \
41
- if (dp) { \
42
- gen_helper_vfp_##name##d(cpu_F0s, cpu_F0d, statusptr); \
43
- } else { \
44
- gen_helper_vfp_##name##s(cpu_F0s, cpu_F0s, statusptr); \
45
- } \
46
- tcg_temp_free_ptr(statusptr); \
47
-}
48
-
49
-VFP_GEN_FTOI(touiz)
50
-VFP_GEN_FTOI(tosiz)
51
-#undef VFP_GEN_FTOI
52
-
53
#define VFP_GEN_FIX(name, round) \
54
static inline void gen_vfp_##name(int dp, int shift, int neon) \
55
{ \
56
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_3r_sizes[] = {
57
#define NEON_2RM_VCVT_SF 62
58
#define NEON_2RM_VCVT_UF 63
59
60
-static int neon_2rm_is_float_op(int op)
61
-{
62
- /*
63
- * Return true if this neon 2reg-misc op is float-to-float.
64
- * This is not a property of the operation but of our code --
65
- * what we are asking here is "does the code for this case in
66
- * the Neon for-each-pass loop use cpu_F0s?".
67
- */
68
- return op >= NEON_2RM_VCVT_FS;
69
-}
70
-
71
static bool neon_2rm_is_v8_op(int op)
72
{
73
/* Return true if this neon 2reg-misc op is ARMv8 and up */
74
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
75
default:
76
elementwise:
77
for (pass = 0; pass < (q ? 4 : 2); pass++) {
78
- if (neon_2rm_is_float_op(op)) {
79
- tcg_gen_ld_f32(cpu_F0s, cpu_env,
80
- neon_reg_offset(rm, pass));
81
- tmp = NULL;
82
- } else {
83
- tmp = neon_load_reg(rm, pass);
84
- }
85
+ tmp = neon_load_reg(rm, pass);
86
switch (op) {
87
case NEON_2RM_VREV32:
88
switch (size) {
89
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
90
break;
91
}
92
case NEON_2RM_VCVT_FS: /* VCVT.F32.S32 */
93
- gen_vfp_sito(0, 1);
94
+ {
95
+ TCGv_ptr fpstatus = get_fpstatus_ptr(1);
96
+ gen_helper_vfp_sitos(tmp, tmp, fpstatus);
97
+ tcg_temp_free_ptr(fpstatus);
98
break;
99
+ }
100
case NEON_2RM_VCVT_FU: /* VCVT.F32.U32 */
101
- gen_vfp_uito(0, 1);
102
+ {
103
+ TCGv_ptr fpstatus = get_fpstatus_ptr(1);
104
+ gen_helper_vfp_uitos(tmp, tmp, fpstatus);
105
+ tcg_temp_free_ptr(fpstatus);
106
break;
107
+ }
108
case NEON_2RM_VCVT_SF: /* VCVT.S32.F32 */
109
- gen_vfp_tosiz(0, 1);
110
+ {
111
+ TCGv_ptr fpstatus = get_fpstatus_ptr(1);
112
+ gen_helper_vfp_tosizs(tmp, tmp, fpstatus);
113
+ tcg_temp_free_ptr(fpstatus);
114
break;
115
+ }
116
case NEON_2RM_VCVT_UF: /* VCVT.U32.F32 */
117
- gen_vfp_touiz(0, 1);
118
+ {
119
+ TCGv_ptr fpstatus = get_fpstatus_ptr(1);
120
+ gen_helper_vfp_touizs(tmp, tmp, fpstatus);
121
+ tcg_temp_free_ptr(fpstatus);
122
break;
123
+ }
124
default:
125
/* Reserved op values were caught by the
126
* neon_2rm_sizes[] check earlier.
127
*/
128
abort();
129
}
130
- if (neon_2rm_is_float_op(op)) {
131
- tcg_gen_st_f32(cpu_F0s, cpu_env,
132
- neon_reg_offset(rd, pass));
133
- } else {
134
- neon_store_reg(rd, pass, tmp);
135
- }
136
+ neon_store_reg(rd, pass, tmp);
137
}
138
break;
139
}
140
--
141
2.20.1
142
143
Deleted patch
1
Stop using cpu_F0s in the Neon VCVT fixed-point operations.
2
1
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Message-id: 20190613163917.28589-10-peter.maydell@linaro.org
7
---
8
target/arm/translate.c | 62 +++++++++++++++++++-----------------------
9
1 file changed, 28 insertions(+), 34 deletions(-)
10
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static const char * const regnames[] =
16
/* Function prototypes for gen_ functions calling Neon helpers. */
17
typedef void NeonGenThreeOpEnvFn(TCGv_i32, TCGv_env, TCGv_i32,
18
TCGv_i32, TCGv_i32);
19
+/* Function prototypes for gen_ functions for fix point conversions */
20
+typedef void VFPGenFixPointFn(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
21
22
/* initialize TCG globals. */
23
void arm_translate_init(void)
24
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr get_fpstatus_ptr(int neon)
25
return statusptr;
26
}
27
28
-#define VFP_GEN_FIX(name, round) \
29
-static inline void gen_vfp_##name(int dp, int shift, int neon) \
30
-{ \
31
- TCGv_i32 tmp_shift = tcg_const_i32(shift); \
32
- TCGv_ptr statusptr = get_fpstatus_ptr(neon); \
33
- if (dp) { \
34
- gen_helper_vfp_##name##d##round(cpu_F0d, cpu_F0d, tmp_shift, \
35
- statusptr); \
36
- } else { \
37
- gen_helper_vfp_##name##s##round(cpu_F0s, cpu_F0s, tmp_shift, \
38
- statusptr); \
39
- } \
40
- tcg_temp_free_i32(tmp_shift); \
41
- tcg_temp_free_ptr(statusptr); \
42
-}
43
-VFP_GEN_FIX(tosl, _round_to_zero)
44
-VFP_GEN_FIX(toul, _round_to_zero)
45
-VFP_GEN_FIX(slto, )
46
-VFP_GEN_FIX(ulto, )
47
-#undef VFP_GEN_FIX
48
-
49
static inline long vfp_reg_offset(bool dp, unsigned reg)
50
{
51
if (dp) {
52
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
53
}
54
} else if (op >= 14) {
55
/* VCVT fixed-point. */
56
+ TCGv_ptr fpst;
57
+ TCGv_i32 shiftv;
58
+ VFPGenFixPointFn *fn;
59
+
60
if (!(insn & (1 << 21)) || (q && ((rd | rm) & 1))) {
61
return 1;
62
}
63
+
64
+ if (!(op & 1)) {
65
+ if (u) {
66
+ fn = gen_helper_vfp_ultos;
67
+ } else {
68
+ fn = gen_helper_vfp_sltos;
69
+ }
70
+ } else {
71
+ if (u) {
72
+ fn = gen_helper_vfp_touls_round_to_zero;
73
+ } else {
74
+ fn = gen_helper_vfp_tosls_round_to_zero;
75
+ }
76
+ }
77
+
78
/* We have already masked out the must-be-1 top bit of imm6,
79
* hence this 32-shift where the ARM ARM has 64-imm6.
80
*/
81
shift = 32 - shift;
82
+ fpst = get_fpstatus_ptr(1);
83
+ shiftv = tcg_const_i32(shift);
84
for (pass = 0; pass < (q ? 4 : 2); pass++) {
85
- tcg_gen_ld_f32(cpu_F0s, cpu_env, neon_reg_offset(rm, pass));
86
- if (!(op & 1)) {
87
- if (u)
88
- gen_vfp_ulto(0, shift, 1);
89
- else
90
- gen_vfp_slto(0, shift, 1);
91
- } else {
92
- if (u)
93
- gen_vfp_toul(0, shift, 1);
94
- else
95
- gen_vfp_tosl(0, shift, 1);
96
- }
97
- tcg_gen_st_f32(cpu_F0s, cpu_env, neon_reg_offset(rd, pass));
98
+ TCGv_i32 tmpf = neon_load_reg(rm, pass);
99
+ fn(tmpf, tmpf, shiftv, fpst);
100
+ neon_store_reg(rd, pass, tmpf);
101
}
102
+ tcg_temp_free_ptr(fpst);
103
+ tcg_temp_free_i32(shiftv);
104
} else {
105
return 1;
106
}
107
--
108
2.20.1
109
110
Deleted patch
1
Remove some old constructs from NEON_2RM_VCVT_F16_F32 code:
2
* don't use cpu_F0s
3
* don't use tcg_gen_ld_f32
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20190613163917.28589-11-peter.maydell@linaro.org
9
---
10
target/arm/translate.c | 27 ++++++++++++---------------
11
1 file changed, 12 insertions(+), 15 deletions(-)
12
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
16
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ static TCGv_ptr vfp_reg_ptr(bool dp, int reg)
18
return ret;
19
}
20
21
-#define tcg_gen_ld_f32 tcg_gen_ld_i32
22
#define tcg_gen_st_f32 tcg_gen_st_i32
23
24
#define ARM_CP_RW_BIT (1 << 20)
25
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
26
q || (rm & 1)) {
27
return 1;
28
}
29
- tmp = tcg_temp_new_i32();
30
- tmp2 = tcg_temp_new_i32();
31
fpst = get_fpstatus_ptr(true);
32
ahp = get_ahp_flag();
33
- tcg_gen_ld_f32(cpu_F0s, cpu_env, neon_reg_offset(rm, 0));
34
- gen_helper_vfp_fcvt_f32_to_f16(tmp, cpu_F0s, fpst, ahp);
35
- tcg_gen_ld_f32(cpu_F0s, cpu_env, neon_reg_offset(rm, 1));
36
- gen_helper_vfp_fcvt_f32_to_f16(tmp2, cpu_F0s, fpst, ahp);
37
+ tmp = neon_load_reg(rm, 0);
38
+ gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
39
+ tmp2 = neon_load_reg(rm, 1);
40
+ gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp);
41
tcg_gen_shli_i32(tmp2, tmp2, 16);
42
tcg_gen_or_i32(tmp2, tmp2, tmp);
43
- tcg_gen_ld_f32(cpu_F0s, cpu_env, neon_reg_offset(rm, 2));
44
- gen_helper_vfp_fcvt_f32_to_f16(tmp, cpu_F0s, fpst, ahp);
45
- tcg_gen_ld_f32(cpu_F0s, cpu_env, neon_reg_offset(rm, 3));
46
+ tcg_temp_free_i32(tmp);
47
+ tmp = neon_load_reg(rm, 2);
48
+ gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp);
49
+ tmp3 = neon_load_reg(rm, 3);
50
neon_store_reg(rd, 0, tmp2);
51
- tmp2 = tcg_temp_new_i32();
52
- gen_helper_vfp_fcvt_f32_to_f16(tmp2, cpu_F0s, fpst, ahp);
53
- tcg_gen_shli_i32(tmp2, tmp2, 16);
54
- tcg_gen_or_i32(tmp2, tmp2, tmp);
55
- neon_store_reg(rd, 1, tmp2);
56
+ gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp);
57
+ tcg_gen_shli_i32(tmp3, tmp3, 16);
58
+ tcg_gen_or_i32(tmp3, tmp3, tmp);
59
+ neon_store_reg(rd, 1, tmp3);
60
tcg_temp_free_i32(tmp);
61
tcg_temp_free_i32(ahp);
62
tcg_temp_free_ptr(fpst);
63
--
64
2.20.1
65
66
Deleted patch
1
Remove the now unused TCG globals cpu_F0s, cpu_F0d, cpu_F1s, cpu_F1d.
2
1
3
cpu_M0 is still used by the iwmmxt code, and cpu_V0 and
4
cpu_V1 are used by both iwmmxt and Neon.
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Message-id: 20190613163917.28589-13-peter.maydell@linaro.org
10
---
11
target/arm/translate.c | 12 ++----------
12
1 file changed, 2 insertions(+), 10 deletions(-)
13
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ TCGv_i32 cpu_CF, cpu_NF, cpu_VF, cpu_ZF;
19
TCGv_i64 cpu_exclusive_addr;
20
TCGv_i64 cpu_exclusive_val;
21
22
-/* FIXME: These should be removed. */
23
-static TCGv_i32 cpu_F0s, cpu_F1s;
24
-static TCGv_i64 cpu_F0d, cpu_F1d;
25
-
26
#include "exec/gen-icount.h"
27
28
static const char * const regnames[] =
29
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
30
dc->base.max_insns = MIN(dc->base.max_insns, bound);
31
}
32
33
- cpu_F0s = tcg_temp_new_i32();
34
- cpu_F1s = tcg_temp_new_i32();
35
- cpu_F0d = tcg_temp_new_i64();
36
- cpu_F1d = tcg_temp_new_i64();
37
- cpu_V0 = cpu_F0d;
38
- cpu_V1 = cpu_F1d;
39
+ cpu_V0 = tcg_temp_new_i64();
40
+ cpu_V1 = tcg_temp_new_i64();
41
/* FIXME: cpu_M0 can probably be the same as cpu_V0. */
42
cpu_M0 = tcg_temp_new_i64();
43
}
44
--
45
2.20.1
46
47