Big pullreq this week, though none of the new features are
particularly earthshaking. Most of the bulk is from code cleanup
patches from me or rth.

thanks
-- PMM

The following changes since commit b651b80822fa8cb66ca30087ac7fbc75507ae5d2:

  Merge remote-tracking branch 'remotes/vivier2/tags/linux-user-for-5.0-pull-request' into staging (2020-02-20 17:35:42 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200221

for you to fetch changes up to 270a679b3f950d7c4c600f324aab8bff292d0971:

  target/arm: Add missing checks for fpsp_v2 (2020-02-21 12:54:25 +0000)

----------------------------------------------------------------
target-arm queue:
 * aspeed/scu: Implement chip ID register
 * hw/misc/iotkit-secctl: Fix writing to 'PPC Interrupt Clear' register
 * mainstone: Make providing flash images non-mandatory
 * z2: Make providing flash images non-mandatory
 * Fix failures to flush SVE high bits after AdvSIMD INS/ZIP/UZP/TRN/TBL/TBX/EXT
 * Minor performance improvement: spend less time recalculating hflags values
 * Code cleanup to isar_feature function tests
 * Implement ARMv8.1-PMU and ARMv8.4-PMU extensions
 * Bugfix: correct handling of PMCR_EL0.LC bit
 * Bugfix: correct definition of PMCRDP
 * Correctly implement ACTLR2, HACTLR2
 * allwinner: Wire up USB ports
 * Vectorize emulation of USHL, SSHL, PMUL*
 * xilinx_spips: Correct the number of dummy cycles for the FAST_READ_4 cmd
 * sh4: Fix PCI ISA IO memory subregion
 * Code cleanup to use more isar_feature tests and fewer ARM_FEATURE_* tests

----------------------------------------------------------------
Francisco Iglesias (1):
      xilinx_spips: Correct the number of dummy cycles for the FAST_READ_4 cmd

Guenter Roeck (6):
      mainstone: Make providing flash images non-mandatory
      z2: Make providing flash images non-mandatory
      hw: usb: hcd-ohci: Move OHCISysBusState and TYPE_SYSBUS_OHCI to include file
      hcd-ehci: Introduce "companion-enable" sysbus property
      arm: allwinner: Wire up USB ports
      sh4: Fix PCI ISA IO memory subregion

Joel Stanley (2):
      aspeed/scu: Create separate write callbacks
      aspeed/scu: Implement chip ID register

Peter Maydell (21):
      target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers
      target/arm: Check aa32_pan in take_aarch32_exception(), not aa64_pan
      target/arm: Add isar_feature_any_fp16 and document naming/usage conventions
      target/arm: Define and use any_predinv isar_feature test
      target/arm: Factor out PMU register definitions
      target/arm: Add and use FIELD definitions for ID_AA64DFR0_EL1
      target/arm: Use FIELD macros for clearing ID_DFR0 PERFMON field
      target/arm: Define an aa32_pmu_8_1 isar feature test function
      target/arm: Add _aa64_ and _any_ versions of pmu_8_1 isar checks
      target/arm: Stop assuming DBGDIDR always exists
      target/arm: Move DBGDIDR into ARMISARegisters
      target/arm: Read debug-related ID registers from KVM
      target/arm: Implement ARMv8.1-PMU extension
      target/arm: Implement ARMv8.4-PMU extension
      target/arm: Provide ARMv8.4-PMU in '-cpu max'
      target/arm: Correct definition of PMCRDP
      target/arm: Correct handling of PMCR_EL0.LC bit
      target/arm: Test correct register in aa32_pan and aa32_ats1e1 checks
      target/arm: Use isar_feature function for testing AA32HPD feature
      target/arm: Use FIELD_EX32 for testing 32-bit fields
      target/arm: Correctly implement ACTLR2, HACTLR2

Philippe Mathieu-Daudé (1):
      hw/misc/iotkit-secctl: Fix writing to 'PPC Interrupt Clear' register

Richard Henderson (21):
      target/arm: Flush high bits of sve register after AdvSIMD EXT
      target/arm: Flush high bits of sve register after AdvSIMD TBL/TBX
      target/arm: Flush high bits of sve register after AdvSIMD ZIP/UZP/TRN
      target/arm: Flush high bits of sve register after AdvSIMD INS
      target/arm: Use bit 55 explicitly for pauth
      target/arm: Fix select for aa64_va_parameters_both
      target/arm: Remove ttbr1_valid check from get_phys_addr_lpae
      target/arm: Split out aa64_va_parameter_tbi, aa64_va_parameter_tbid
      target/arm: Vectorize USHL and SSHL
      target/arm: Convert PMUL.8 to gvec
      target/arm: Convert PMULL.64 to gvec
      target/arm: Convert PMULL.8 to gvec
      target/arm: Rename isar_feature_aa32_simd_r32
      target/arm: Use isar_feature_aa32_simd_r32 more places
      target/arm: Set MVFR0.FPSP for ARMv5 cpus
      target/arm: Add isar_feature_aa32_simd_r16
      target/arm: Rename isar_feature_aa32_fpdp_v2
      target/arm: Add isar_feature_aa32_{fpsp_v2, fpsp_v3, fpdp_v3}
      target/arm: Perform fpdp_v2 check first
      target/arm: Replace ARM_FEATURE_VFP3 checks with fp{sp, dp}_v3
      target/arm: Add missing checks for fpsp_v2

 hw/usb/hcd-ohci.h              |  16 ++
 include/hw/arm/allwinner-a10.h |   6 +
 target/arm/cpu.h               | 173 ++++++++++++---
 target/arm/helper-sve.h        |   2 +
 target/arm/helper.h            |  21 +-
 target/arm/internals.h         |  47 +++-
 target/arm/translate.h         |   6 +
 hw/arm/allwinner-a10.c         |  43 ++++
 hw/arm/mainstone.c             |  11 +-
 hw/arm/z2.c                    |   6 -
 hw/intc/armv7m_nvic.c          |  30 +--
 hw/misc/aspeed_scu.c           |  93 ++++++--
 hw/misc/iotkit-secctl.c        |   2 +-
 hw/sh4/sh_pci.c                |  11 +-
 hw/ssi/xilinx_spips.c          |   2 +-
 hw/usb/hcd-ehci-sysbus.c       |   2 +
 hw/usb/hcd-ohci.c              |  15 --
 linux-user/arm/signal.c        |   4 +-
 linux-user/elfload.c           |   4 +-
 target/arm/arch_dump.c         |  11 +-
 target/arm/cpu.c               | 175 +++++++--------
 target/arm/cpu64.c             |  58 +++--
 target/arm/debug_helper.c      |   6 +-
 target/arm/helper.c            | 472 +++++++++++++++++++++++------------------
 target/arm/kvm32.c             |  25 +++
 target/arm/kvm64.c             |  46 ++++
 target/arm/m_helper.c          |  11 +-
 target/arm/machine.c           |   3 +-
 target/arm/neon_helper.c       | 117 ----------
 target/arm/pauth_helper.c      |   3 +-
 target/arm/translate-a64.c     |  92 ++++----
 target/arm/translate-vfp.inc.c | 263 ++++++++++++++---------
 target/arm/translate.c         | 356 ++++++++++++++++++++++++++-----
 target/arm/vec_helper.c        | 211 ++++++++++++++++++
 target/arm/vfp_helper.c        |   2 +-
 35 files changed, 1564 insertions(+), 781 deletions(-)
From: Joel Stanley <joel@jms.id.au>

This splits the common write callback into separate ast2400 and ast2500
implementations. This makes it clearer when implementing differing
behaviour.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200121013302.43839-2-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_scu.c | 80 +++++++++++++++++++++++++++++++-------------
 1 file changed, 57 insertions(+), 23 deletions(-)

diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_scu.c
+++ b/hw/misc/aspeed_scu.c
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_scu_read(void *opaque, hwaddr offset, unsigned size)
     return s->regs[reg];
 }
 
-static void aspeed_scu_write(void *opaque, hwaddr offset, uint64_t data,
-                             unsigned size)
+static void aspeed_ast2400_scu_write(void *opaque, hwaddr offset,
+                                     uint64_t data, unsigned size)
+{
+    AspeedSCUState *s = ASPEED_SCU(opaque);
+    int reg = TO_REG(offset);
+
+    if (reg >= ASPEED_SCU_NR_REGS) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Out-of-bounds write at offset 0x%" HWADDR_PRIx "\n",
+                      __func__, offset);
+        return;
+    }
+
+    if (reg > PROT_KEY && reg < CPU2_BASE_SEG1 &&
+        !s->regs[PROT_KEY]) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: SCU is locked!\n", __func__);
+    }
+
+    trace_aspeed_scu_write(offset, size, data);
+
+    switch (reg) {
+    case PROT_KEY:
+        s->regs[reg] = (data == ASPEED_SCU_PROT_KEY) ? 1 : 0;
+        return;
+    case SILICON_REV:
+    case FREQ_CNTR_EVAL:
+    case VGA_SCRATCH1 ... VGA_SCRATCH8:
+    case RNG_DATA:
+    case FREE_CNTR4:
+    case FREE_CNTR4_EXT:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Write to read-only offset 0x%" HWADDR_PRIx "\n",
+                      __func__, offset);
+        return;
+    }
+
+    s->regs[reg] = data;
+}
+
+static void aspeed_ast2500_scu_write(void *opaque, hwaddr offset,
+                                     uint64_t data, unsigned size)
 {
     AspeedSCUState *s = ASPEED_SCU(opaque);
     int reg = TO_REG(offset);
@@ -XXX,XX +XXX,XX @@ static void aspeed_scu_write(void *opaque, hwaddr offset, uint64_t data,
     case PROT_KEY:
         s->regs[reg] = (data == ASPEED_SCU_PROT_KEY) ? 1 : 0;
         return;
-    case CLK_SEL:
-        s->regs[reg] = data;
-        break;
     case HW_STRAP1:
-        if (ASPEED_IS_AST2500(s->regs[SILICON_REV])) {
-            s->regs[HW_STRAP1] |= data;
-            return;
-        }
-        /* Jump to assignment below */
-        break;
+        s->regs[HW_STRAP1] |= data;
+        return;
     case SILICON_REV:
-        if (ASPEED_IS_AST2500(s->regs[SILICON_REV])) {
-            s->regs[HW_STRAP1] &= ~data;
-        } else {
-            qemu_log_mask(LOG_GUEST_ERROR,
-                          "%s: Write to read-only offset 0x%" HWADDR_PRIx "\n",
-                          __func__, offset);
-        }
-        /* Avoid assignment below, we've handled everything */
+        s->regs[HW_STRAP1] &= ~data;
         return;
     case FREQ_CNTR_EVAL:
     case VGA_SCRATCH1 ... VGA_SCRATCH8:
@@ -XXX,XX +XXX,XX @@ static void aspeed_scu_write(void *opaque, hwaddr offset, uint64_t data,
     s->regs[reg] = data;
 }
 
-static const MemoryRegionOps aspeed_scu_ops = {
+static const MemoryRegionOps aspeed_ast2400_scu_ops = {
     .read = aspeed_scu_read,
-    .write = aspeed_scu_write,
+    .write = aspeed_ast2400_scu_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .valid.min_access_size = 4,
+    .valid.max_access_size = 4,
+    .valid.unaligned = false,
+};
+
+static const MemoryRegionOps aspeed_ast2500_scu_ops = {
+    .read = aspeed_scu_read,
+    .write = aspeed_ast2500_scu_write,
     .endianness = DEVICE_LITTLE_ENDIAN,
     .valid.min_access_size = 4,
     .valid.max_access_size = 4,
@@ -XXX,XX +XXX,XX @@ static void aspeed_2400_scu_class_init(ObjectClass *klass, void *data)
     asc->calc_hpll = aspeed_2400_scu_calc_hpll;
     asc->apb_divider = 2;
     asc->nr_regs = ASPEED_SCU_NR_REGS;
-    asc->ops = &aspeed_scu_ops;
+    asc->ops = &aspeed_ast2400_scu_ops;
 }
 
 static const TypeInfo aspeed_2400_scu_info = {
@@ -XXX,XX +XXX,XX @@ static void aspeed_2500_scu_class_init(ObjectClass *klass, void *data)
     asc->calc_hpll = aspeed_2500_scu_calc_hpll;
     asc->apb_divider = 4;
     asc->nr_regs = ASPEED_SCU_NR_REGS;
-    asc->ops = &aspeed_scu_ops;
+    asc->ops = &aspeed_ast2500_scu_ops;
 }
 
 static const TypeInfo aspeed_2500_scu_info = {
-- 
2.20.1
From: Joel Stanley <joel@jms.id.au>

This returns a fixed but non-zero value for the chip id.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200121013302.43839-3-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/aspeed_scu.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/hw/misc/aspeed_scu.c b/hw/misc/aspeed_scu.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/aspeed_scu.c
+++ b/hw/misc/aspeed_scu.c
@@ -XXX,XX +XXX,XX @@
 #define CPU2_BASE_SEG4       TO_REG(0x110)
 #define CPU2_BASE_SEG5       TO_REG(0x114)
 #define CPU2_CACHE_CTRL      TO_REG(0x118)
+#define CHIP_ID0             TO_REG(0x150)
+#define CHIP_ID1             TO_REG(0x154)
 #define UART_HPLL_CLK        TO_REG(0x160)
 #define PCIE_CTRL            TO_REG(0x180)
 #define BMC_MMIO_CTRL        TO_REG(0x184)
@@ -XXX,XX +XXX,XX @@
 #define AST2600_HW_STRAP2_PROT    TO_REG(0x518)
 #define AST2600_RNG_CTRL          TO_REG(0x524)
 #define AST2600_RNG_DATA          TO_REG(0x540)
+#define AST2600_CHIP_ID0          TO_REG(0x5B0)
+#define AST2600_CHIP_ID1          TO_REG(0x5B4)
 
 #define AST2600_CLK               TO_REG(0x40)
 
@@ -XXX,XX +XXX,XX @@ static const uint32_t ast2500_a1_resets[ASPEED_SCU_NR_REGS] = {
     [CPU2_BASE_SEG1]  = 0x80000000U,
     [CPU2_BASE_SEG4]  = 0x1E600000U,
     [CPU2_BASE_SEG5]  = 0xC0000000U,
+    [CHIP_ID0]        = 0x1234ABCDU,
+    [CHIP_ID1]        = 0x88884444U,
     [UART_HPLL_CLK]   = 0x00001903U,
     [PCIE_CTRL]       = 0x0000007BU,
     [BMC_DEV_ID]      = 0x00002402U
@@ -XXX,XX +XXX,XX @@ static void aspeed_ast2500_scu_write(void *opaque, hwaddr offset,
     case RNG_DATA:
     case FREE_CNTR4:
     case FREE_CNTR4_EXT:
+    case CHIP_ID0:
+    case CHIP_ID1:
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: Write to read-only offset 0x%" HWADDR_PRIx "\n",
                       __func__, offset);
@@ -XXX,XX +XXX,XX @@ static void aspeed_ast2600_scu_write(void *opaque, hwaddr offset,
     case AST2600_RNG_DATA:
     case AST2600_SILICON_REV:
     case AST2600_SILICON_REV2:
+    case AST2600_CHIP_ID0:
+    case AST2600_CHIP_ID1:
         /* Add read only registers here */
         qemu_log_mask(LOG_GUEST_ERROR,
                       "%s: Write to read-only offset 0x%" HWADDR_PRIx "\n",
@@ -XXX,XX +XXX,XX @@ static const uint32_t ast2600_a0_resets[ASPEED_AST2600_SCU_NR_REGS] = {
     [AST2600_CLK_STOP_CTRL2]    = 0xFFF0FFF0,
     [AST2600_SDRAM_HANDSHAKE]   = 0x00000040, /* SoC completed DRAM init */
     [AST2600_HPLL_PARAM]        = 0x1000405F,
+    [AST2600_CHIP_ID0]          = 0x1234ABCD,
+    [AST2600_CHIP_ID1]          = 0x88884444,
+
 };
 
 static void aspeed_ast2600_scu_reset(DeviceState *dev)
-- 
2.20.1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Fix warning reported by Clang static code analyzer:

    CC      hw/misc/iotkit-secctl.o
  hw/misc/iotkit-secctl.c:343:9: warning: Value stored to 'value' is never read
          value &= 0x00f000f3;
          ^        ~~~~~~~~~~

Fixes: b3717c23e1c
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200217132922.24607-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/iotkit-secctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/misc/iotkit-secctl.c b/hw/misc/iotkit-secctl.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/iotkit-secctl.c
+++ b/hw/misc/iotkit-secctl.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
         qemu_set_irq(s->sec_resp_cfg, s->secrespcfg);
         break;
     case A_SECPPCINTCLR:
-        value &= 0x00f000f3;
+        s->secppcintstat &= ~(value & 0x00f000f3);
         foreach_ppc(s, iotkit_secctl_ppc_update_irq_clear);
         break;
     case A_SECPPCINTEN:
-- 
2.20.1
From: Guenter Roeck <linux@roeck-us.net>

Up to now, the mainstone machine only boots if two flash images are
provided. This is not really necessary; the machine can boot from initrd
or from SD without it. At the same time, having to provide dummy flash
images is a nuisance and does not add any real value. Make it optional.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200217210824.18513-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/mainstone.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/hw/arm/mainstone.c b/hw/arm/mainstone.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mainstone.c
+++ b/hw/arm/mainstone.c
@@ -XXX,XX +XXX,XX @@ static void mainstone_common_init(MemoryRegion *address_space_mem,
     /* There are two 32MiB flash devices on the board */
     for (i = 0; i < 2; i ++) {
         dinfo = drive_get(IF_PFLASH, 0, i);
-        if (!dinfo) {
-            if (qtest_enabled()) {
-                break;
-            }
-            error_report("Two flash images must be given with the "
-                         "'pflash' parameter");
-            exit(1);
-        }
-
         if (!pflash_cfi01_register(mainstone_flash_base[i],
                                    i ? "mainstone.flash1" : "mainstone.flash0",
                                    MAINSTONE_FLASH,
-                                   blk_by_legacy_dinfo(dinfo),
+                                   dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
                                    sector_len, 4, 0, 0, 0, 0, be)) {
             error_report("Error registering flash memory");
             exit(1);
-- 
2.20.1
From: Guenter Roeck <linux@roeck-us.net>

Up to now, the z2 machine only boots if a flash image is provided.
This is not really necessary; the machine can boot from initrd or from
SD without it. At the same time, having to provide dummy flash images
is a nuisance and does not add any real value. Make it optional.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200217210903.18602-1-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/z2.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/hw/arm/z2.c b/hw/arm/z2.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/z2.c
+++ b/hw/arm/z2.c
@@ -XXX,XX +XXX,XX @@ static void z2_init(MachineState *machine)
     be = 0;
 #endif
     dinfo = drive_get(IF_PFLASH, 0, 0);
-    if (!dinfo && !qtest_enabled()) {
-        error_report("Flash image must be given with the "
-                     "'pflash' parameter");
-        exit(1);
-    }
-
     if (!pflash_cfi01_register(Z2_FLASH_BASE, "z2.flash0", Z2_FLASH_SIZE,
                                dinfo ? blk_by_legacy_dinfo(dinfo) : NULL,
                                sector_len, 4, 0, 0, 0, 0, be)) {
-- 
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Writes to AdvSIMD registers flush the bits above 128.

Buglink: https://bugs.launchpad.net/bugs/1863247
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214194643.23317-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
     tcg_temp_free_i64(tcg_resl);
     write_vec_element(s, tcg_resh, rd, 1, MO_64);
     tcg_temp_free_i64(tcg_resh);
+    clear_vec_high(s, true, rd);
 }
 
 /* TBL/TBX
-- 
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Writes to AdvSIMD registers flush the bits above 128.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214194643.23317-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_temp_free_i64(tcg_resl);
     write_vec_element(s, tcg_resh, rd, 1, MO_64);
     tcg_temp_free_i64(tcg_resh);
+    clear_vec_high(s, true, rd);
 }
 
 /* ZIP/UZP/TRN
-- 
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Writes to AdvSIMD registers flush the bits above 128.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214194643.23317-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_zip_trn(DisasContext *s, uint32_t insn)
     tcg_temp_free_i64(tcg_resl);
     write_vec_element(s, tcg_resh, rd, 1, MO_64);
     tcg_temp_free_i64(tcg_resh);
+    clear_vec_high(s, true, rd);
 }
 
 /*
-- 
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Writes to AdvSIMD registers flush the bits above 128.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214194643.23317-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_simd_inse(DisasContext *s, int rd, int rn,
     write_vec_element(s, tmp, rd, dst_index, size);
 
     tcg_temp_free_i64(tmp);
+
+    /* INS is considered a 128-bit write for SVE. */
+    clear_vec_high(s, true, rd);
 }
 
 
@@ -XXX,XX +XXX,XX @@ static void handle_simd_insg(DisasContext *s, int rd, int rn, int imm5)
 
     idx = extract32(imm5, 1 + size, 4 - size);
     write_vec_element(s, cpu_reg(s, rn), rd, idx, size);
+
+    /* INS is considered a 128-bit write for SVE. */
+    clear_vec_high(s, true, rd);
 }
 
 /*
-- 
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

The pseudocode in aarch64/functions/pac/auth/Auth and
aarch64/functions/pac/strip/Strip always uses bit 55 for
extfield and does not consider if the current regime has 2 ranges.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200216194343.21331-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/pauth_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
 
 static uint64_t pauth_original_ptr(uint64_t ptr, ARMVAParameters param)
 {
-    uint64_t extfield = -param.select;
+    /* Note that bit 55 is used whether or not the regime has 2 ranges. */
+    uint64_t extfield = sextract64(ptr, 55, 1);
     int bot_pac_bit = 64 - param.tsz;
     int top_pac_bit = 64 - 8 * param.tbi;
 
-- 
2.20.1
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Select should always be 0 for a regime with one range.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200216194343.21331-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 46 +++++++++++++++++++++++----------------------
1 file changed, 24 insertions(+), 22 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
bool tbi, tbid, epd, hpd, using16k, using64k;
int select, tsz;

- /*
- * Bit 55 is always between the two regions, and is canonical for
- * determining if address tagging is enabled.
- */
- select = extract64(va, 55, 1);
-
if (!regime_has_2_ranges(mmu_idx)) {
+ select = 0;
tsz = extract32(tcr, 0, 6);
using64k = extract32(tcr, 14, 1);
using16k = extract32(tcr, 15, 1);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
tbid = extract32(tcr, 29, 1);
}
epd = false;
- } else if (!select) {
- tsz = extract32(tcr, 0, 6);
- epd = extract32(tcr, 7, 1);
- using64k = extract32(tcr, 14, 1);
- using16k = extract32(tcr, 15, 1);
- tbi = extract64(tcr, 37, 1);
- hpd = extract64(tcr, 41, 1);
- tbid = extract64(tcr, 51, 1);
} else {
- int tg = extract32(tcr, 30, 2);
- using16k = tg == 1;
- using64k = tg == 3;
- tsz = extract32(tcr, 16, 6);
- epd = extract32(tcr, 23, 1);
- tbi = extract64(tcr, 38, 1);
- hpd = extract64(tcr, 42, 1);
- tbid = extract64(tcr, 52, 1);
+ /*
+ * Bit 55 is always between the two regions, and is canonical for
+ * determining if address tagging is enabled.
+ */
+ select = extract64(va, 55, 1);
+ if (!select) {
+ tsz = extract32(tcr, 0, 6);
+ epd = extract32(tcr, 7, 1);
+ using64k = extract32(tcr, 14, 1);
+ using16k = extract32(tcr, 15, 1);
+ tbi = extract64(tcr, 37, 1);
+ hpd = extract64(tcr, 41, 1);
+ tbid = extract64(tcr, 51, 1);
+ } else {
+ int tg = extract32(tcr, 30, 2);
+ using16k = tg == 1;
+ using64k = tg == 3;
+ tsz = extract32(tcr, 16, 6);
+ epd = extract32(tcr, 23, 1);
+ tbi = extract64(tcr, 38, 1);
+ hpd = extract64(tcr, 42, 1);
+ tbid = extract64(tcr, 52, 1);
+ }
}
tsz = MIN(tsz, 39); /* TODO: ARMv8.4-TTST */
tsz = MAX(tsz, 16); /* TODO: ARMv8.2-LVA */
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Now that aa64_va_parameters_both sets select based on the number
of ranges in the regime, the ttbr1_valid check is redundant.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200216194343.21331-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
TCR *tcr = regime_tcr(env, mmu_idx);
int ap, ns, xn, pxn;
uint32_t el = regime_el(env, mmu_idx);
- bool ttbr1_valid;
uint64_t descaddrmask;
bool aarch64 = arm_el_is_aa64(env, el);
bool guarded = false;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
param = aa64_va_parameters(env, address, mmu_idx,
access_type != MMU_INST_FETCH);
level = 0;
- ttbr1_valid = regime_has_2_ranges(mmu_idx);
addrsize = 64 - 8 * param.tbi;
inputsize = 64 - param.tsz;
} else {
param = aa32_va_parameters(env, address, mmu_idx);
level = 1;
- /* There is no TTBR1 for EL2 */
- ttbr1_valid = (el != 2);
addrsize = (mmu_idx == ARMMMUIdx_Stage2 ? 40 : 32);
inputsize = addrsize - param.tsz;
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
if (inputsize < addrsize) {
target_ulong top_bits = sextract64(address, inputsize,
addrsize - inputsize);
- if (-top_bits != param.select || (param.select && !ttbr1_valid)) {
+ if (-top_bits != param.select) {
/* The gap between the two regions is a Translation fault */
fault_type = ARMFault_Translation;
goto do_fault;
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

For the purpose of rebuild_hflags_a64, we do not need to compute
all of the va parameters, only tbi. Moreover, we can compute them
in a form that is more useful to storing in hflags.

This eliminates the need for aa64_va_parameter_both, so fold that
in to aa64_va_parameter. The remaining calls to aa64_va_parameter
are in get_phys_addr_lpae and in pauth_helper.c.

This reduces the total cpu consumption of aa64_va_parameter in a
kernel boot plus a kvm guest kernel boot from 3% to 0.5%.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200216194343.21331-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 3 --
target/arm/helper.c | 68 +++++++++++++++++++++++-------------------
2 files changed, 37 insertions(+), 34 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
unsigned tsz : 8;
unsigned select : 1;
bool tbi : 1;
- bool tbid : 1;
bool epd : 1;
bool hpd : 1;
bool using16k : 1;
bool using64k : 1;
} ARMVAParameters;

-ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
- ARMMMUIdx mmu_idx);
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx, bool data);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
}

#endif /* !CONFIG_USER_ONLY */

-ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
- ARMMMUIdx mmu_idx)
+static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
+{
+ if (regime_has_2_ranges(mmu_idx)) {
+ return extract64(tcr, 37, 2);
+ } else if (mmu_idx == ARMMMUIdx_Stage2) {
+ return 0; /* VTCR_EL2 */
+ } else {
+ return extract32(tcr, 20, 1);
+ }
+}
+
+static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
+{
+ if (regime_has_2_ranges(mmu_idx)) {
+ return extract64(tcr, 51, 2);
+ } else if (mmu_idx == ARMMMUIdx_Stage2) {
+ return 0; /* VTCR_EL2 */
+ } else {
+ return extract32(tcr, 29, 1);
+ }
+}
+
+ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
+ ARMMMUIdx mmu_idx, bool data)
{
uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
- bool tbi, tbid, epd, hpd, using16k, using64k;
- int select, tsz;
+ bool epd, hpd, using16k, using64k;
+ int select, tsz, tbi;

if (!regime_has_2_ranges(mmu_idx)) {
select = 0;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
using16k = extract32(tcr, 15, 1);
if (mmu_idx == ARMMMUIdx_Stage2) {
/* VTCR_EL2 */
- tbi = tbid = hpd = false;
+ hpd = false;
} else {
- tbi = extract32(tcr, 20, 1);
hpd = extract32(tcr, 24, 1);
- tbid = extract32(tcr, 29, 1);
}
epd = false;
} else {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
epd = extract32(tcr, 7, 1);
using64k = extract32(tcr, 14, 1);
using16k = extract32(tcr, 15, 1);
- tbi = extract64(tcr, 37, 1);
hpd = extract64(tcr, 41, 1);
- tbid = extract64(tcr, 51, 1);
} else {
int tg = extract32(tcr, 30, 2);
using16k = tg == 1;
using64k = tg == 3;
tsz = extract32(tcr, 16, 6);
epd = extract32(tcr, 23, 1);
- tbi = extract64(tcr, 38, 1);
hpd = extract64(tcr, 42, 1);
- tbid = extract64(tcr, 52, 1);
}
}
tsz = MIN(tsz, 39); /* TODO: ARMv8.4-TTST */
tsz = MAX(tsz, 16); /* TODO: ARMv8.2-LVA */

+ /* Present TBI as a composite with TBID. */
+ tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
+ if (!data) {
+ tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
+ }
+ tbi = (tbi >> select) & 1;
+
return (ARMVAParameters) {
.tsz = tsz,
.select = select,
.tbi = tbi,
- .tbid = tbid,
.epd = epd,
.hpd = hpd,
.using16k = using16k,
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
};
}

-ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
- ARMMMUIdx mmu_idx, bool data)
-{
- ARMVAParameters ret = aa64_va_parameters_both(env, va, mmu_idx);
-
- /* Present TBI as a composite with TBID. */
- ret.tbi &= (data || !ret.tbid);
- return ret;
-}
-
#ifndef CONFIG_USER_ONLY
static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
ARMMMUIdx mmu_idx)
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
{
uint32_t flags = rebuild_hflags_aprofile(env);
ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
- ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
+ uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
uint64_t sctlr;
int tbii, tbid;

flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);

/* Get control bits for tagged addresses. */
- if (regime_has_2_ranges(mmu_idx)) {
- ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
- tbid = (p1.tbi << 1) | p0.tbi;
- tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
- } else {
- tbid = p0.tbi;
- tbii = tbid & !p0.tbid;
- }
+ tbid = aa64_va_parameter_tbi(tcr, mmu_idx);
+ tbii = tbid & ~aa64_va_parameter_tbid(tcr, mmu_idx);

flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
--
2.20.1
Enforce a convention that an isar_feature function that tests a
32-bit ID register always has _aa32_ in its name, and one that
tests a 64-bit ID register always has _aa64_ in its name.
We already follow this except for three cases: thumb_div,
arm_div and jazelle, which all need _aa32_ adding.

(As noted in the comment, isar_feature_aa32_fp16_arith()
is an exception in that it currently tests ID_AA64PFR0_EL1,
but will switch to MVFR1 once we've properly implemented
FP16 for AArch32.)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-2-peter.maydell@linaro.org
---
target/arm/cpu.h | 13 ++++++++++---
target/arm/internals.h | 2 +-
linux-user/elfload.c | 4 ++--
target/arm/cpu.c | 6 ++++--
target/arm/helper.c | 2 +-
target/arm/translate.c | 6 +++---
6 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
/* Shared between translate-sve.c and sve_helper.c. */
extern const uint64_t pred_esz_masks[4];

+/*
+ * Naming convention for isar_feature functions:
+ * Functions which test 32-bit ID registers should have _aa32_ in
+ * their name. Functions which test 64-bit ID registers should have
+ * _aa64_ in their name.
+ */
+
/*
* 32-bit feature tests via id registers.
*/
-static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
+static inline bool isar_feature_aa32_thumb_div(const ARMISARegisters *id)
{
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
}

-static inline bool isar_feature_arm_div(const ARMISARegisters *id)
+static inline bool isar_feature_aa32_arm_div(const ARMISARegisters *id)
{
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
}

-static inline bool isar_feature_jazelle(const ARMISARegisters *id)
+static inline bool isar_feature_aa32_jazelle(const ARMISARegisters *id)
{
return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
}
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
if ((features >> ARM_FEATURE_THUMB2) & 1) {
valid |= CPSR_IT;
}
- if (isar_feature_jazelle(id)) {
+ if (isar_feature_aa32_jazelle(id)) {
valid |= CPSR_J;
}
if (isar_feature_aa32_pan(id)) {
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
- GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
- GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
+ GET_FEATURE_ID(aa32_arm_div, ARM_HWCAP_ARM_IDIVA);
+ GET_FEATURE_ID(aa32_thumb_div, ARM_HWCAP_ARM_IDIVT);
/* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
* Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
* ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
* Presence of EL2 itself is ARM_FEATURE_EL2, and of the
* Security Extensions is ARM_FEATURE_EL3.
*/
- assert(!tcg_enabled() || no_aa32 || cpu_isar_feature(arm_div, cpu));
+ assert(!tcg_enabled() || no_aa32 ||
+ cpu_isar_feature(aa32_arm_div, cpu));
set_feature(env, ARM_FEATURE_LPAE);
set_feature(env, ARM_FEATURE_V7);
}
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
if (arm_feature(env, ARM_FEATURE_V6)) {
set_feature(env, ARM_FEATURE_V5);
if (!arm_feature(env, ARM_FEATURE_M)) {
- assert(!tcg_enabled() || no_aa32 || cpu_isar_feature(jazelle, cpu));
+ assert(!tcg_enabled() || no_aa32 ||
+ cpu_isar_feature(aa32_jazelle, cpu));
set_feature(env, ARM_FEATURE_AUXCR);
}
}
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
if (arm_feature(env, ARM_FEATURE_LPAE)) {
define_arm_cp_regs(cpu, lpae_cp_reginfo);
}
- if (cpu_isar_feature(jazelle, cpu)) {
+ if (cpu_isar_feature(aa32_jazelle, cpu)) {
define_arm_cp_regs(cpu, jazelle_regs);
}
/* Slightly awkwardly, the OMAP and StrongARM cores need all of
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
#define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
/* currently all emulated v5 cores are also v5TE, so don't bother */
#define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
-#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
+#define ENABLE_ARCH_5J dc_isar_feature(aa32_jazelle, s)
#define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
#define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
#define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
@@ -XXX,XX +XXX,XX @@ static bool op_div(DisasContext *s, arg_rrr *a, bool u)
TCGv_i32 t1, t2;

if (s->thumb
- ? !dc_isar_feature(thumb_div, s)
- : !dc_isar_feature(arm_div, s)) {
+ ? !dc_isar_feature(aa32_thumb_div, s)
+ : !dc_isar_feature(aa32_arm_div, s)) {
return false;
}

--
2.20.1
In take_aarch32_exception(), we know we are dealing with a CPU that
has AArch32, so the right isar_feature test is aa32_pan, not aa64_pan.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-3-peter.maydell@linaro.org
---
target/arm/helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
env->elr_el[2] = env->regs[15];
} else {
/* CPSR.PAN is normally preserved preserved unless... */
- if (cpu_isar_feature(aa64_pan, env_archcpu(env))) {
+ if (cpu_isar_feature(aa32_pan, env_archcpu(env))) {
switch (new_el) {
case 3:
if (!arm_is_secure_below_el3(env)) {
--
2.20.1
diff view generated by jsdifflib
New patch
Our current usage of the isar_feature feature tests almost always
uses an _aa32_ test when the code path is known to be AArch32
specific and an _aa64_ test when the code path is known to be
AArch64 specific. There is just one exception: in the vfp_set_fpscr
helper we check aa64_fp16 to determine whether the FZ16 bit in
the FP(S)CR exists, but this code is also used for AArch32.
There are other places in future where we're likely to want
a general "does this feature exist for either AArch32 or
AArch64" check (typically where architecturally the feature exists
for both CPU states if it exists at all, but the CPU might be
AArch32-only or AArch64-only, and so only have one set of ID
registers).

Introduce a new category of isar_feature_* functions:
isar_feature_any_foo() should be tested when what we want to
know is "does this feature exist for either AArch32 or AArch64",
and always returns the logical OR of isar_feature_aa32_foo()
and isar_feature_aa64_foo().

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-4-peter.maydell@linaro.org
---
target/arm/cpu.h | 19 ++++++++++++++++++-
target/arm/vfp_helper.c | 2 +-
2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
* Naming convention for isar_feature functions:
* Functions which test 32-bit ID registers should have _aa32_ in
* their name. Functions which test 64-bit ID registers should have
- * _aa64_ in their name.
+ * _aa64_ in their name. These must only be used in code where we
+ * know for certain that the CPU has AArch32 or AArch64 respectively
+ * or where the correct answer for a CPU which doesn't implement that
+ * CPU state is "false" (eg when generating A32 or A64 code, if adding
+ * system registers that are specific to that CPU state, for "should
+ * we let this system register bit be set" tests where the 32-bit
+ * flavour of the register doesn't have the bit, and so on).
+ * Functions which simply ask "does this feature exist at all" have
+ * _any_ in their name, and always return the logical OR of the _aa64_
+ * and the _aa32_ function.
*/

/*
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
}

+/*
+ * Feature tests for "does this exist in either 32-bit or 64-bit?"
+ */
+static inline bool isar_feature_any_fp16(const ARMISARegisters *id)
+{
+ return isar_feature_aa64_fp16(id) || isar_feature_aa32_fp16_arith(id);
+}
+
/*
* Forward to the above feature tests given an ARMCPU pointer.
*/
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t vfp_get_fpscr(CPUARMState *env)
void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
{
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
- if (!cpu_isar_feature(aa64_fp16, env_archcpu(env))) {
+ if (!cpu_isar_feature(any_fp16, env_archcpu(env))) {
val &= ~FPCR_FZ16;
}

--
2.20.1
New patch
Instead of open-coding "ARM_FEATURE_AARCH64 ? aa64_predinv: aa32_predinv",
define and use an any_predinv isar_feature test function.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-5-peter.maydell@linaro.org
---
target/arm/cpu.h | 5 +++++
target/arm/helper.c | 9 +--------
2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_fp16(const ARMISARegisters *id)
return isar_feature_aa64_fp16(id) || isar_feature_aa32_fp16_arith(id);
}

+static inline bool isar_feature_any_predinv(const ARMISARegisters *id)
+{
+ return isar_feature_aa64_predinv(id) || isar_feature_aa32_predinv(id);
+}
+
/*
* Forward to the above feature tests given an ARMCPU pointer.
*/
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
#endif /*CONFIG_USER_ONLY*/
#endif

- /*
- * While all v8.0 cpus support aarch64, QEMU does have configurations
- * that do not set ID_AA64ISAR1, e.g. user-only qemu-arm -cpu max,
- * which will set ID_ISAR6.
- */
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)
- ? cpu_isar_feature(aa64_predinv, cpu)
- : cpu_isar_feature(aa32_predinv, cpu)) {
+ if (cpu_isar_feature(any_predinv, cpu)) {
define_arm_cp_regs(cpu, predinv_reginfo);
}

--
2.20.1
New patch
Pull the code that defines the various PMU registers out
into its own function, matching the pattern we have
already for the debug registers.

Apart from one style fix to a multi-line comment, this
is purely movement of code with no changes to it.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-6-peter.maydell@linaro.org
---
target/arm/helper.c | 158 +++++++++++++++++++++++---------------------
1 file changed, 82 insertions(+), 76 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
}
}

+static void define_pmu_regs(ARMCPU *cpu)
+{
+ /*
+ * v7 performance monitor control register: same implementor
+ * field as main ID register, and we implement four counters in
+ * addition to the cycle count register.
+ */
+ unsigned int i, pmcrn = 4;
+ ARMCPRegInfo pmcr = {
+ .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
+ .access = PL0_RW,
+ .type = ARM_CP_IO | ARM_CP_ALIAS,
+ .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr),
+ .accessfn = pmreg_access, .writefn = pmcr_write,
+ .raw_writefn = raw_write,
+ };
+ ARMCPRegInfo pmcr64 = {
+ .name = "PMCR_EL0", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 0,
+ .access = PL0_RW, .accessfn = pmreg_access,
+ .type = ARM_CP_IO,
+ .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
+ .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT),
+ .writefn = pmcr_write, .raw_writefn = raw_write,
+ };
+ define_one_arm_cp_reg(cpu, &pmcr);
+ define_one_arm_cp_reg(cpu, &pmcr64);
+ for (i = 0; i < pmcrn; i++) {
+ char *pmevcntr_name = g_strdup_printf("PMEVCNTR%d", i);
+ char *pmevcntr_el0_name = g_strdup_printf("PMEVCNTR%d_EL0", i);
+ char *pmevtyper_name = g_strdup_printf("PMEVTYPER%d", i);
+ char *pmevtyper_el0_name = g_strdup_printf("PMEVTYPER%d_EL0", i);
+ ARMCPRegInfo pmev_regs[] = {
+ { .name = pmevcntr_name, .cp = 15, .crn = 14,
+ .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
+ .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
+ .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
+ .accessfn = pmreg_access },
+ { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
+ .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
+ .type = ARM_CP_IO,
+ .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
+ .raw_readfn = pmevcntr_rawread,
+ .raw_writefn = pmevcntr_rawwrite },
+ { .name = pmevtyper_name, .cp = 15, .crn = 14,
+ .crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
+ .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
+ .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
+ .accessfn = pmreg_access },
+ { .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)),
+ .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
+ .type = ARM_CP_IO,
+ .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
+ .raw_writefn = pmevtyper_rawwrite },
+ REGINFO_SENTINEL
+ };
+ define_arm_cp_regs(cpu, pmev_regs);
+ g_free(pmevcntr_name);
+ g_free(pmevcntr_el0_name);
+ g_free(pmevtyper_name);
+ g_free(pmevtyper_el0_name);
+ }
+ if (FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
+ FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) != 0xf) {
+ ARMCPRegInfo v81_pmu_regs[] = {
+ { .name = "PMCEID2", .state = ARM_CP_STATE_AA32,
+ .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4,
+ .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
+ .resetvalue = extract64(cpu->pmceid0, 32, 32) },
+ { .name = "PMCEID3", .state = ARM_CP_STATE_AA32,
+ .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
+ .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
+ .resetvalue = extract64(cpu->pmceid1, 32, 32) },
+ REGINFO_SENTINEL
+ };
+ define_arm_cp_regs(cpu, v81_pmu_regs);
+ }
+}
+
/* We don't know until after realize whether there's a GICv3
* attached, and that is what registers the gicv3 sysregs.
* So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_arm_cp_regs(cpu, pmovsset_cp_reginfo);
}
if (arm_feature(env, ARM_FEATURE_V7)) {
- /* v7 performance monitor control register: same implementor
- * field as main ID register, and we implement four counters in
- * addition to the cycle count register.
- */
- unsigned int i, pmcrn = 4;
- ARMCPRegInfo pmcr = {
- .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
- .access = PL0_RW,
- .type = ARM_CP_IO | ARM_CP_ALIAS,
- .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr),
- .accessfn = pmreg_access, .writefn = pmcr_write,
- .raw_writefn = raw_write,
- };
- ARMCPRegInfo pmcr64 = {
- .name = "PMCR_EL0", .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 0,
- .access = PL0_RW, .accessfn = pmreg_access,
- .type = ARM_CP_IO,
- .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
- .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT),
- .writefn = pmcr_write, .raw_writefn = raw_write,
- };
- define_one_arm_cp_reg(cpu, &pmcr);
- define_one_arm_cp_reg(cpu, &pmcr64);
- for (i = 0; i < pmcrn; i++) {
- char *pmevcntr_name = g_strdup_printf("PMEVCNTR%d", i);
- char *pmevcntr_el0_name = g_strdup_printf("PMEVCNTR%d_EL0", i);
- char *pmevtyper_name = g_strdup_printf("PMEVTYPER%d", i);
- char *pmevtyper_el0_name = g_strdup_printf("PMEVTYPER%d_EL0", i);
- ARMCPRegInfo pmev_regs[] = {
- { .name = pmevcntr_name, .cp = 15, .crn = 14,
- .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
- .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
- .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
- .accessfn = pmreg_access },
- { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
- .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
- .type = ARM_CP_IO,
- .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
- .raw_readfn = pmevcntr_rawread,
- .raw_writefn = pmevcntr_rawwrite },
- { .name = pmevtyper_name, .cp = 15, .crn = 14,
- .crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
- .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
- .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
- .accessfn = pmreg_access },
- { .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)),
- .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
- .type = ARM_CP_IO,
- .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
- .raw_writefn = pmevtyper_rawwrite },
- REGINFO_SENTINEL
- };
- define_arm_cp_regs(cpu, pmev_regs);
- g_free(pmevcntr_name);
- g_free(pmevcntr_el0_name);
- g_free(pmevtyper_name);
- g_free(pmevtyper_el0_name);
- }
ARMCPRegInfo clidr = {
.name = "CLIDR", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_one_arm_cp_reg(cpu, &clidr);
define_arm_cp_regs(cpu, v7_cp_reginfo);
define_debug_regs(cpu);
+ define_pmu_regs(cpu);
} else {
define_arm_cp_regs(cpu, not_v7_cp_reginfo);
}
- if (FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
- FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) != 0xf) {
- ARMCPRegInfo v81_pmu_regs[] = {
- { .name = "PMCEID2", .state = ARM_CP_STATE_AA32,
- .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4,
- .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
- .resetvalue = extract64(cpu->pmceid0, 32, 32) },
- { .name = "PMCEID3", .state = ARM_CP_STATE_AA32,
- .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
- .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
- .resetvalue = extract64(cpu->pmceid1, 32, 32) },
- REGINFO_SENTINEL
- };
- define_arm_cp_regs(cpu, v81_pmu_regs);
- }
if (arm_feature(env, ARM_FEATURE_V8)) {
/* AArch64 ID registers, which all have impdef reset values.
* Note that within the ID register ranges the unused slots
--
2.20.1
New patch
Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them
where we currently have hard-coded bit values.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-7-peter.maydell@linaro.org
---
target/arm/cpu.h | 10 ++++++++++
target/arm/cpu.c | 2 +-
target/arm/helper.c | 6 +++---
3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR2, BBM, 52, 4)
FIELD(ID_AA64MMFR2, EVT, 56, 4)
FIELD(ID_AA64MMFR2, E0PD, 60, 4)

+FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
+FIELD(ID_AA64DFR0, TRACEVER, 4, 4)
+FIELD(ID_AA64DFR0, PMUVER, 8, 4)
+FIELD(ID_AA64DFR0, BRPS, 12, 4)
+FIELD(ID_AA64DFR0, WRPS, 20, 4)
+FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)
+FIELD(ID_AA64DFR0, PMSVER, 32, 4)
+FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
+FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)
+
FIELD(ID_DFR0, COPDBG, 0, 4)
FIELD(ID_DFR0, COPSDBG, 4, 4)
FIELD(ID_DFR0, MMAPDBG, 8, 4)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
cpu);
#endif
} else {
- cpu->id_aa64dfr0 &= ~0xf00;
+ cpu->id_aa64dfr0 = FIELD_DP64(cpu->id_aa64dfr0, ID_AA64DFR0, PMUVER, 0);
cpu->id_dfr0 &= ~(0xf << 24);
cpu->pmceid0 = 0;
cpu->pmceid1 = 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
* check that if they both exist then they agree.
*/
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
- assert(extract32(cpu->id_aa64dfr0, 12, 4) == brps);
- assert(extract32(cpu->id_aa64dfr0, 20, 4) == wrps);
- assert(extract32(cpu->id_aa64dfr0, 28, 4) == ctx_cmps);
+ assert(FIELD_EX64(cpu->id_aa64dfr0, ID_AA64DFR0, BRPS) == brps);
+ assert(FIELD_EX64(cpu->id_aa64dfr0, ID_AA64DFR0, WRPS) == wrps);
+ assert(FIELD_EX64(cpu->id_aa64dfr0, ID_AA64DFR0, CTX_CMPS) == ctx_cmps);
}

define_one_arm_cp_reg(cpu, &dbgdidr);
--
2.20.1
New patch
We already define FIELD macros for ID_DFR0, so use them in the
one place where we're doing direct bit value manipulation.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-8-peter.maydell@linaro.org
---
target/arm/cpu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
#endif
} else {
cpu->id_aa64dfr0 = FIELD_DP64(cpu->id_aa64dfr0, ID_AA64DFR0, PMUVER, 0);
- cpu->id_dfr0 &= ~(0xf << 24);
+ cpu->id_dfr0 = FIELD_DP32(cpu->id_dfr0, ID_DFR0, PERFMON, 0);
cpu->pmceid0 = 0;
cpu->pmceid1 = 0;
}
--
2.20.1
New patch
Instead of open-coding a check on the ID_DFR0 PerfMon ID register
field, create a standardly-named isar_feature for "does AArch32 have
a v8.1 PMUv3" and use it.

This entails moving the id_dfr0 field into the ARMISARegisters struct.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-9-peter.maydell@linaro.org
---
target/arm/cpu.h | 9 ++++++++-
hw/intc/armv7m_nvic.c | 2 +-
target/arm/cpu.c | 28 ++++++++++++++--------------
target/arm/cpu64.c | 6 +++---
target/arm/helper.c | 5 ++---
5 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
uint32_t mvfr0;
uint32_t mvfr1;
uint32_t mvfr2;
+ uint32_t id_dfr0;
uint64_t id_aa64isar0;
uint64_t id_aa64isar1;
uint64_t id_aa64pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
uint32_t reset_sctlr;
uint32_t id_pfr0;
uint32_t id_pfr1;
- uint32_t id_dfr0;
uint64_t pmceid0;
uint64_t pmceid1;
uint32_t id_afr0;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
}

+static inline bool isar_feature_aa32_pmu_8_1(const ARMISARegisters *id)
+{
+ /* 0xf means "non-standard IMPDEF PMU" */
+ return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
+ FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
+}
+
/*
* 64-bit feature tests via id registers.
*/
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
case 0xd44: /* PFR1. */
return cpu->id_pfr1;
case 0xd48: /* DFR0. */
- return cpu->id_dfr0;
+ return cpu->isar.id_dfr0;
case 0xd4c: /* AFR0. */
return cpu->id_afr0;
case 0xd50: /* MMFR0. */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
#endif
} else {
cpu->id_aa64dfr0 = FIELD_DP64(cpu->id_aa64dfr0, ID_AA64DFR0, PMUVER, 0);
- cpu->id_dfr0 = FIELD_DP32(cpu->id_dfr0, ID_DFR0, PERFMON, 0);
+ cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, PERFMON, 0);
cpu->pmceid0 = 0;
cpu->pmceid1 = 0;
}
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
cpu->reset_sctlr = 0x00050078;
cpu->id_pfr0 = 0x111;
cpu->id_pfr1 = 0x1;
- cpu->id_dfr0 = 0x2;
+ cpu->isar.id_dfr0 = 0x2;
cpu->id_afr0 = 0x3;
cpu->id_mmfr0 = 0x01130003;
cpu->id_mmfr1 = 0x10030302;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
cpu->reset_sctlr = 0x00050078;
cpu->id_pfr0 = 0x111;
cpu->id_pfr1 = 0x1;
- cpu->id_dfr0 = 0x2;
+ cpu->isar.id_dfr0 = 0x2;
cpu->id_afr0 = 0x3;
cpu->id_mmfr0 = 0x01130003;
cpu->id_mmfr1 = 0x10030302;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
cpu->reset_sctlr = 0x00050078;
cpu->id_pfr0 = 0x111;
cpu->id_pfr1 = 0x11;
- cpu->id_dfr0 = 0x33;
+ cpu->isar.id_dfr0 = 0x33;
cpu->id_afr0 = 0;
cpu->id_mmfr0 = 0x01130003;
cpu->id_mmfr1 = 0x10030302;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
cpu->id_pfr0 = 0x111;
cpu->id_pfr1 = 0x1;
- cpu->id_dfr0 = 0;
+ cpu->isar.id_dfr0 = 0;
cpu->id_afr0 = 0x2;
cpu->id_mmfr0 = 0x01100103;
cpu->id_mmfr1 = 0x10020302;
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
cpu->pmsav7_dregion = 8;
cpu->id_pfr0 = 0x00000030;
cpu->id_pfr1 = 0x00000200;
- cpu->id_dfr0 = 0x00100000;
+ cpu->isar.id_dfr0 = 0x00100000;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x00000030;
cpu->id_mmfr1 = 0x00000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
cpu->isar.mvfr2 = 0x00000000;
cpu->id_pfr0 = 0x00000030;
cpu->id_pfr1 = 0x00000200;
- cpu->id_dfr0 = 0x00100000;
+ cpu->isar.id_dfr0 = 0x00100000;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x00000030;
cpu->id_mmfr1 = 0x00000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_m7_initfn(Object *obj)
cpu->isar.mvfr2 = 0x00000040;
cpu->id_pfr0 = 0x00000030;
cpu->id_pfr1 = 0x00000200;
- cpu->id_dfr0 = 0x00100000;
+ cpu->isar.id_dfr0 = 0x00100000;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x00100030;
cpu->id_mmfr1 = 0x00000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
cpu->isar.mvfr2 = 0x00000040;
cpu->id_pfr0 = 0x00000030;
cpu->id_pfr1 = 0x00000210;
- cpu->id_dfr0 = 0x00200000;
+ cpu->isar.id_dfr0 = 0x00200000;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x00101F40;
cpu->id_mmfr1 = 0x00000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
cpu->midr = 0x411fc153; /* r1p3 */
cpu->id_pfr0 = 0x0131;
cpu->id_pfr1 = 0x001;
- cpu->id_dfr0 = 0x010400;
+ cpu->isar.id_dfr0 = 0x010400;
cpu->id_afr0 = 0x0;
cpu->id_mmfr0 = 0x0210030;
cpu->id_mmfr1 = 0x00000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50078;
cpu->id_pfr0 = 0x1031;
cpu->id_pfr1 = 0x11;
- cpu->id_dfr0 = 0x400;
+ cpu->isar.id_dfr0 = 0x400;
cpu->id_afr0 = 0;
cpu->id_mmfr0 = 0x31100003;
cpu->id_mmfr1 = 0x20000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50078;
cpu->id_pfr0 = 0x1031;
cpu->id_pfr1 = 0x11;
- cpu->id_dfr0 = 0x000;
+ cpu->isar.id_dfr0 = 0x000;
cpu->id_afr0 = 0;
cpu->id_mmfr0 = 0x00100103;
cpu->id_mmfr1 = 0x20000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50078;
cpu->id_pfr0 = 0x00001131;
cpu->id_pfr1 = 0x00011011;
- cpu->id_dfr0 = 0x02010555;
+ cpu->isar.id_dfr0 = 0x02010555;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x10101105;
cpu->id_mmfr1 = 0x40000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50078;
cpu->id_pfr0 = 0x00001131;
cpu->id_pfr1 = 0x00011011;
- cpu->id_dfr0 = 0x02010555;
+ cpu->isar.id_dfr0 = 0x02010555;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x10201105;
cpu->id_mmfr1 = 0x20000000;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50838;
cpu->id_pfr0 = 0x00000131;
cpu->id_pfr1 = 0x00011011;
- cpu->id_dfr0 = 0x03010066;
+ cpu->isar.id_dfr0 = 0x03010066;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x10101105;
cpu->id_mmfr1 = 0x40000000;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50838;
cpu->id_pfr0 = 0x00000131;
cpu->id_pfr1 = 0x00011011;
- cpu->id_dfr0 = 0x03010066;
+ cpu->isar.id_dfr0 = 0x03010066;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x10101105;
cpu->id_mmfr1 = 0x40000000;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
cpu->reset_sctlr = 0x00c50838;
cpu->id_pfr0 = 0x00000131;
cpu->id_pfr1 = 0x00011011;
- cpu->id_dfr0 = 0x03010066;
+ cpu->isar.id_dfr0 = 0x03010066;
cpu->id_afr0 = 0x00000000;
cpu->id_mmfr0 = 0x10201105;
cpu->id_mmfr1 = 0x40000000;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
g_free(pmevtyper_name);
g_free(pmevtyper_el0_name);
}
- if (FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
- FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) != 0xf) {
+ if (cpu_isar_feature(aa32_pmu_8_1, cpu)) {
ARMCPRegInfo v81_pmu_regs[] = {
{ .name = "PMCEID2", .state = ARM_CP_STATE_AA32,
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
.access = PL1_R, .type = ARM_CP_CONST,
.accessfn = access_aa32_tid3,
- .resetvalue = cpu->id_dfr0 },
+ .resetvalue = cpu->isar.id_dfr0 },
{ .name = "ID_AFR0", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 3,
.access = PL1_R, .type = ARM_CP_CONST,
--
2.20.1
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Add the 64-bit version of the "is this a v8.1 PMUv3?"
2
add MemTxAttrs as an argument to flatview_translate(); all its
2
ID register check function, and the _any_ version that
3
callers now have attrs available.
3
checks for either AArch32 or AArch64 support. We'll use
4
this in a later commit.
4
5
6
We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1,
7
but we move id_aa64dfr1 into the ARMISARegisters struct with
8
id_aa64dfr0, for consistency.
9
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Message-id: 20200214175116.9164-10-peter.maydell@linaro.org
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
9
---
14
---
10
include/exec/memory.h | 7 ++++---
15
target/arm/cpu.h | 15 +++++++++++++--
11
exec.c | 17 +++++++++--------
16
target/arm/cpu.c | 3 ++-
12
2 files changed, 13 insertions(+), 11 deletions(-)
17
target/arm/cpu64.c | 6 +++---
18
target/arm/helper.c | 12 +++++++-----
19
4 files changed, 25 insertions(+), 11 deletions(-)
13
20
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
23
--- a/target/arm/cpu.h
17
+++ b/include/exec/memory.h
24
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
25
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
26
uint64_t id_aa64mmfr0;
27
uint64_t id_aa64mmfr1;
28
uint64_t id_aa64mmfr2;
29
+ uint64_t id_aa64dfr0;
30
+ uint64_t id_aa64dfr1;
31
} isar;
32
uint32_t midr;
33
uint32_t revidr;
34
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
35
uint32_t id_mmfr2;
36
uint32_t id_mmfr3;
37
uint32_t id_mmfr4;
38
- uint64_t id_aa64dfr0;
39
- uint64_t id_aa64dfr1;
40
uint64_t id_aa64afr0;
41
uint64_t id_aa64afr1;
42
uint32_t dbgdidr;
43
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
44
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
45
}
46
47
+static inline bool isar_feature_aa64_pmu_8_1(const ARMISARegisters *id)
48
+{
49
+ return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 4 &&
50
+ FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
51
+}
52
+
53
/*
54
* Feature tests for "does this exist in either 32-bit or 64-bit?"
19
*/
55
*/
20
MemoryRegion *flatview_translate(FlatView *fv,
56
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_predinv(const ARMISARegisters *id)
21
hwaddr addr, hwaddr *xlat,
57
return isar_feature_aa64_predinv(id) || isar_feature_aa32_predinv(id);
22
- hwaddr *len, bool is_write);
23
+ hwaddr *len, bool is_write,
24
+ MemTxAttrs attrs);
25
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
27
hwaddr addr, hwaddr *xlat,
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
29
MemTxAttrs attrs)
30
{
31
return flatview_translate(address_space_to_flatview(as),
32
- addr, xlat, len, is_write);
33
+ addr, xlat, len, is_write, attrs);
34
}
58
}
35
59
36
/* address_space_access_valid: check for validity of accessing an address
60
+static inline bool isar_feature_any_pmu_8_1(const ARMISARegisters *id)
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
61
+{
38
rcu_read_lock();
62
+ return isar_feature_aa64_pmu_8_1(id) || isar_feature_aa32_pmu_8_1(id);
39
fv = address_space_to_flatview(as);
63
+}
40
l = len;
64
+
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
65
/*
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
66
* Forward to the above feature tests given an ARMCPU pointer.
43
if (len == l && memory_access_is_direct(mr, false)) {
67
*/
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
68
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
45
memcpy(buf, ptr, len);
46
diff --git a/exec.c b/exec.c
47
index XXXXXXX..XXXXXXX 100644
69
index XXXXXXX..XXXXXXX 100644
48
--- a/exec.c
70
--- a/target/arm/cpu.c
49
+++ b/exec.c
71
+++ b/target/arm/cpu.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
72
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
51
73
cpu);
52
/* Called from RCU critical section */
74
#endif
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
75
} else {
54
- hwaddr *plen, bool is_write)
76
- cpu->id_aa64dfr0 = FIELD_DP64(cpu->id_aa64dfr0, ID_AA64DFR0, PMUVER, 0);
55
+ hwaddr *plen, bool is_write,
77
+ cpu->isar.id_aa64dfr0 =
56
+ MemTxAttrs attrs)
78
+ FIELD_DP64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, PMUVER, 0);
57
{
79
cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, PERFMON, 0);
58
MemoryRegion *mr;
80
cpu->pmceid0 = 0;
59
MemoryRegionSection section;
81
cpu->pmceid1 = 0;
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
82
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
61
}
83
index XXXXXXX..XXXXXXX 100644
62
84
--- a/target/arm/cpu64.c
63
l = len;
85
+++ b/target/arm/cpu64.c
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
86
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
87
cpu->isar.id_isar5 = 0x00011121;
88
cpu->isar.id_isar6 = 0;
89
cpu->isar.id_aa64pfr0 = 0x00002222;
90
- cpu->id_aa64dfr0 = 0x10305106;
91
+ cpu->isar.id_aa64dfr0 = 0x10305106;
92
cpu->isar.id_aa64isar0 = 0x00011120;
93
cpu->isar.id_aa64mmfr0 = 0x00001124;
94
cpu->dbgdidr = 0x3516d000;
95
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
96
cpu->isar.id_isar5 = 0x00011121;
97
cpu->isar.id_isar6 = 0;
98
cpu->isar.id_aa64pfr0 = 0x00002222;
99
- cpu->id_aa64dfr0 = 0x10305106;
100
+ cpu->isar.id_aa64dfr0 = 0x10305106;
101
cpu->isar.id_aa64isar0 = 0x00011120;
102
cpu->isar.id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
103
cpu->dbgdidr = 0x3516d000;
104
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
105
cpu->isar.id_isar4 = 0x00011142;
106
cpu->isar.id_isar5 = 0x00011121;
107
cpu->isar.id_aa64pfr0 = 0x00002222;
108
- cpu->id_aa64dfr0 = 0x10305106;
109
+ cpu->isar.id_aa64dfr0 = 0x10305106;
110
cpu->isar.id_aa64isar0 = 0x00011120;
111
cpu->isar.id_aa64mmfr0 = 0x00001124;
112
cpu->dbgdidr = 0x3516d000;
113
diff --git a/target/arm/helper.c b/target/arm/helper.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/helper.c
116
+++ b/target/arm/helper.c
117
@@ -XXX,XX +XXX,XX @@
118
#include "hw/semihosting/semihost.h"
119
#include "sysemu/cpus.h"
120
#include "sysemu/kvm.h"
121
+#include "sysemu/tcg.h"
122
#include "qemu/range.h"
123
#include "qapi/qapi-commands-machine-target.h"
124
#include "qapi/error.h"
125
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
126
* check that if they both exist then they agree.
127
*/
128
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
129
- assert(FIELD_EX64(cpu->id_aa64dfr0, ID_AA64DFR0, BRPS) == brps);
130
- assert(FIELD_EX64(cpu->id_aa64dfr0, ID_AA64DFR0, WRPS) == wrps);
131
- assert(FIELD_EX64(cpu->id_aa64dfr0, ID_AA64DFR0, CTX_CMPS) == ctx_cmps);
132
+ assert(FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) == brps);
133
+ assert(FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, WRPS) == wrps);
134
+ assert(FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS)
135
+ == ctx_cmps);
66
}
136
}
67
137
68
return result;
138
define_one_arm_cp_reg(cpu, &dbgdidr);
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
139
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
70
MemTxResult result = MEMTX_OK;
140
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 0,
71
141
.access = PL1_R, .type = ARM_CP_CONST,
72
l = len;
142
.accessfn = access_aa64_tid3,
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
143
- .resetvalue = cpu->id_aa64dfr0 },
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
144
+ .resetvalue = cpu->isar.id_aa64dfr0 },
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
145
{ .name = "ID_AA64DFR1_EL1", .state = ARM_CP_STATE_AA64,
76
addr1, l, mr);
146
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 1,
77
147
.access = PL1_R, .type = ARM_CP_CONST,
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
148
.accessfn = access_aa64_tid3,
79
}
149
- .resetvalue = cpu->id_aa64dfr1 },
80
150
+ .resetvalue = cpu->isar.id_aa64dfr1 },
81
l = len;
151
{ .name = "ID_AA64DFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
152
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 2,
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
153
.access = PL1_R, .type = ARM_CP_CONST,
84
}
85
86
return result;
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
93
return flatview_read_continue(fv, addr, attrs, buf, len,
94
addr1, l, mr);
95
}
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
97
98
while (len > 0) {
99
l = len;
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
102
if (!memory_access_is_direct(mr, is_write)) {
103
l = memory_access_size(mr, l, addr);
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
154
--
124
2.17.1
155
2.20.1
125
156
126
157
diff view generated by jsdifflib
The AArch32 DBGDIDR defines properties like the number of
breakpoints, watchpoints and context-matching comparators. On an
AArch64 CPU, the register may not even exist if AArch32 is not
supported at EL1.

Currently we hard-code use of DBGDIDR to identify the number of
breakpoints etc; this works for all our TCG CPUs, but will break if
we ever add an AArch64-only CPU. We also have an assert() that the
AArch32 and AArch64 registers match, which currently works only by
luck for KVM because we don't populate either of these ID registers
from the KVM vCPU and so they are both zero.

Clean this up so we have functions for finding the number
of breakpoints, watchpoints and context comparators which look
in the appropriate ID register.

This allows us to drop the "check that AArch64 and AArch32 agree
on the number of breakpoints etc" asserts:
 * we no longer look at the AArch32 versions unless that's the
   right place to be looking
 * it's valid to have a CPU (eg AArch64-only) where they don't match
 * we shouldn't have been asserting the validity of ID registers
   in a codepath used with KVM anyway

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-11-peter.maydell@linaro.org
---
target/arm/cpu.h | 7 +++++++
target/arm/internals.h | 42 +++++++++++++++++++++++++++++++++++++++
target/arm/debug_helper.c | 6 +++---
target/arm/helper.c | 21 +++++---------------
4 files changed, 57 insertions(+), 19 deletions(-)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_DFR0, MPROFDBG, 20, 4)
FIELD(ID_DFR0, PERFMON, 24, 4)
FIELD(ID_DFR0, TRACEFILT, 28, 4)

+FIELD(DBGDIDR, SE_IMP, 12, 1)
+FIELD(DBGDIDR, NSUHD_IMP, 14, 1)
+FIELD(DBGDIDR, VERSION, 16, 4)
+FIELD(DBGDIDR, CTX_CMPS, 20, 4)
+FIELD(DBGDIDR, BRPS, 24, 4)
+FIELD(DBGDIDR, WRPS, 28, 4)
+
FIELD(MVFR0, SIMDREG, 0, 4)
FIELD(MVFR0, FPSP, 4, 4)
FIELD(MVFR0, FPDP, 8, 4)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t arm_debug_exception_fsr(CPUARMState *env)
}
}

+/**
+ * arm_num_brps: Return number of implemented breakpoints.
+ * Note that the ID register BRPS field is "number of bps - 1",
+ * and we return the actual number of breakpoints.
+ */
+static inline int arm_num_brps(ARMCPU *cpu)
+{
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
+ return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
+ } else {
+ return FIELD_EX32(cpu->dbgdidr, DBGDIDR, BRPS) + 1;
+ }
+}
+
+/**
+ * arm_num_wrps: Return number of implemented watchpoints.
+ * Note that the ID register WRPS field is "number of wps - 1",
+ * and we return the actual number of watchpoints.
+ */
+static inline int arm_num_wrps(ARMCPU *cpu)
+{
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
+ return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, WRPS) + 1;
+ } else {
+ return FIELD_EX32(cpu->dbgdidr, DBGDIDR, WRPS) + 1;
+ }
+}
+
+/**
+ * arm_num_ctx_cmps: Return number of implemented context comparators.
+ * Note that the ID register CTX_CMPS field is "number of cmps - 1",
+ * and we return the actual number of comparators.
+ */
+static inline int arm_num_ctx_cmps(ARMCPU *cpu)
+{
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
+ return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS) + 1;
+ } else {
+ return FIELD_EX32(cpu->dbgdidr, DBGDIDR, CTX_CMPS) + 1;
+ }
+}
+
/* Note make_memop_idx reserves 4 bits for mmu_idx, and MO_BSWAP is bit 3.
* Thus a TCGMemOpIdx, without any MO_ALIGN bits, fits in 8 bits.
*/
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
{
CPUARMState *env = &cpu->env;
uint64_t bcr = env->cp15.dbgbcr[lbn];
- int brps = extract32(cpu->dbgdidr, 24, 4);
- int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
+ int brps = arm_num_brps(cpu);
+ int ctx_cmps = arm_num_ctx_cmps(cpu);
int bt;
uint32_t contextidr;
uint64_t hcr_el2;
@@ -XXX,XX +XXX,XX @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
* case DBGWCR<n>_EL1.LBN must indicate that breakpoint).
* We choose the former.
*/
- if (lbn > brps || lbn < (brps - ctx_cmps)) {
+ if (lbn >= brps || lbn < (brps - ctx_cmps)) {
return false;
}

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
};

/* Note that all these register fields hold "number of Xs minus 1". */
- brps = extract32(cpu->dbgdidr, 24, 4);
- wrps = extract32(cpu->dbgdidr, 28, 4);
- ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
+ brps = arm_num_brps(cpu);
+ wrps = arm_num_wrps(cpu);
+ ctx_cmps = arm_num_ctx_cmps(cpu);

assert(ctx_cmps <= brps);

- /* The DBGDIDR and ID_AA64DFR0_EL1 define various properties
- * of the debug registers such as number of breakpoints;
- * check that if they both exist then they agree.
- */
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
- assert(FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) == brps);
- assert(FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, WRPS) == wrps);
- assert(FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS)
- == ctx_cmps);
- }
-
define_one_arm_cp_reg(cpu, &dbgdidr);
define_arm_cp_regs(cpu, debug_cp_reginfo);

@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
define_arm_cp_regs(cpu, debug_lpae_cp_reginfo);
}

- for (i = 0; i < brps + 1; i++) {
+ for (i = 0; i < brps; i++) {
ARMCPRegInfo dbgregs[] = {
{ .name = "DBGBVR", .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
define_arm_cp_regs(cpu, dbgregs);
}

- for (i = 0; i < wrps + 1; i++) {
+ for (i = 0; i < wrps; i++) {
ARMCPRegInfo dbgregs[] = {
{ .name = "DBGWVR", .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
--
2.20.1
We're going to want to read the DBGDIDR register from KVM in
a subsequent commit, which means it needs to be in the
ARMISARegisters sub-struct. Move it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-12-peter.maydell@linaro.org
---
target/arm/cpu.h | 2 +-
target/arm/internals.h | 6 +++---
target/arm/cpu.c | 8 ++++----
target/arm/cpu64.c | 6 +++---
target/arm/helper.c | 2 +-
5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
uint32_t mvfr1;
uint32_t mvfr2;
uint32_t id_dfr0;
+ uint32_t dbgdidr;
uint64_t id_aa64isar0;
uint64_t id_aa64isar1;
uint64_t id_aa64pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
uint32_t id_mmfr4;
uint64_t id_aa64afr0;
uint64_t id_aa64afr1;
- uint32_t dbgdidr;
uint32_t clidr;
uint64_t mp_affinity; /* MP ID without feature bits */
/* The elements of this array are the CCSIDR values for each cache,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline int arm_num_brps(ARMCPU *cpu)
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
} else {
- return FIELD_EX32(cpu->dbgdidr, DBGDIDR, BRPS) + 1;
+ return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, BRPS) + 1;
}
}

@@ -XXX,XX +XXX,XX @@ static inline int arm_num_wrps(ARMCPU *cpu)
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, WRPS) + 1;
} else {
- return FIELD_EX32(cpu->dbgdidr, DBGDIDR, WRPS) + 1;
+ return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, WRPS) + 1;
}
}

@@ -XXX,XX +XXX,XX @@ static inline int arm_num_ctx_cmps(ARMCPU *cpu)
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS) + 1;
} else {
- return FIELD_EX32(cpu->dbgdidr, DBGDIDR, CTX_CMPS) + 1;
+ return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, CTX_CMPS) + 1;
}
}

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
cpu->isar.id_isar2 = 0x21232031;
cpu->isar.id_isar3 = 0x11112131;
cpu->isar.id_isar4 = 0x00111142;
- cpu->dbgdidr = 0x15141000;
+ cpu->isar.dbgdidr = 0x15141000;
cpu->clidr = (1 << 27) | (2 << 24) | 3;
cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
cpu->ccsidr[1] = 0x2007e01a; /* 16k L1 icache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
cpu->isar.id_isar2 = 0x21232041;
cpu->isar.id_isar3 = 0x11112131;
cpu->isar.id_isar4 = 0x00111142;
- cpu->dbgdidr = 0x35141000;
+ cpu->isar.dbgdidr = 0x35141000;
cpu->clidr = (1 << 27) | (1 << 24) | 3;
cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
cpu->ccsidr[1] = 0x200fe019; /* 16k L1 icache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
cpu->isar.id_isar2 = 0x21232041;
cpu->isar.id_isar3 = 0x11112131;
cpu->isar.id_isar4 = 0x10011142;
- cpu->dbgdidr = 0x3515f005;
+ cpu->isar.dbgdidr = 0x3515f005;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
cpu->isar.id_isar2 = 0x21232041;
cpu->isar.id_isar3 = 0x11112131;
cpu->isar.id_isar4 = 0x10011142;
- cpu->dbgdidr = 0x3515f021;
+ cpu->isar.dbgdidr = 0x3515f021;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
cpu->isar.id_aa64dfr0 = 0x10305106;
cpu->isar.id_aa64isar0 = 0x00011120;
cpu->isar.id_aa64mmfr0 = 0x00001124;
- cpu->dbgdidr = 0x3516d000;
+ cpu->isar.dbgdidr = 0x3516d000;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
cpu->isar.id_aa64dfr0 = 0x10305106;
cpu->isar.id_aa64isar0 = 0x00011120;
cpu->isar.id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
- cpu->dbgdidr = 0x3516d000;
+ cpu->isar.dbgdidr = 0x3516d000;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
cpu->ccsidr[1] = 0x201fe00a; /* 32KB L1 icache */
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
cpu->isar.id_aa64dfr0 = 0x10305106;
cpu->isar.id_aa64isar0 = 0x00011120;
cpu->isar.id_aa64mmfr0 = 0x00001124;
- cpu->dbgdidr = 0x3516d000;
+ cpu->isar.dbgdidr = 0x3516d000;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
ARMCPRegInfo dbgdidr = {
.name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0,
.access = PL0_R, .accessfn = access_tda,
- .type = ARM_CP_CONST, .resetvalue = cpu->dbgdidr,
+ .type = ARM_CP_CONST, .resetvalue = cpu->isar.dbgdidr,
};

/* Note that all these register fields hold "number of Xs minus 1". */
--
2.20.1
Now we have isar_feature test functions that look at fields in the
ID_AA64DFR0_EL1 and ID_DFR0 ID registers, add the code that reads
these register values from KVM so that the checks behave correctly
when we're using KVM.

No isar_feature function tests ID_AA64DFR1_EL1 or DBGDIDR yet, but we
add it to maintain the invariant that every field in the
ARMISARegisters struct is populated for a KVM CPU and can be relied
on.  This requirement isn't actually written down yet, so add a note
to the relevant comment.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-13-peter.maydell@linaro.org
---
 target/arm/cpu.h   |  5 +++++
 target/arm/kvm32.c |  8 ++++++++
 target/arm/kvm64.c | 36 ++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      * prefix means a constant register.
      * Some of these registers are split out into a substructure that
      * is shared with the translators to control the ISA.
+     *
+     * Note that if you add an ID register to the ARMISARegisters struct
+     * you need to also update the 32-bit and 64-bit versions of the
+     * kvm_arm_get_host_cpu_features() function to correctly populate the
+     * field by reading the value from the KVM vCPU.
      */
     struct ARMISARegisters {
         uint32_t id_isar0;
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
         ahcf->isar.id_isar6 = 0;
     }
 
+    err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr0,
+                          ARM_CP15_REG32(0, 0, 1, 2));
+
     err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr0,
                           KVM_REG_ARM | KVM_REG_SIZE_U32 |
                           KVM_REG_ARM_VFP | KVM_REG_ARM_VFP_MVFR0);
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
      * Fortunately there is not yet anything in there that affects migration.
      */
 
+    /*
+     * There is no way to read DBGDIDR, because currently 32-bit KVM
+     * doesn't implement debug at all. Leave it at zero.
+     */
+
     kvm_arm_destroy_scratch_host_vcpu(fdarray);
 
     if (err < 0) {
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
         } else {
             err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64pfr1,
                                   ARM64_SYS_REG(3, 0, 0, 4, 1));
+            err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr0,
+                                  ARM64_SYS_REG(3, 0, 0, 5, 0));
+            err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64dfr1,
+                                  ARM64_SYS_REG(3, 0, 0, 5, 1));
             err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar0,
                                   ARM64_SYS_REG(3, 0, 0, 6, 0));
             err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64isar1,
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
              * than skipping the reads and leaving 0, as we must avoid
              * considering the values in every case.
              */
+            err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr0,
+                                  ARM64_SYS_REG(3, 0, 0, 1, 2));
             err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar0,
                                   ARM64_SYS_REG(3, 0, 0, 2, 0));
             err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar1,
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
                                   ARM64_SYS_REG(3, 0, 0, 3, 1));
             err |= read_sys_reg32(fdarray[2], &ahcf->isar.mvfr2,
                                   ARM64_SYS_REG(3, 0, 0, 3, 2));
+
+            /*
+             * DBGDIDR is a bit complicated because the kernel doesn't
+             * provide an accessor for it in 64-bit mode, which is what this
+             * scratch VM is in, and there's no architected "64-bit sysreg
+             * which reads the same as the 32-bit register" the way there is
+             * for other ID registers. Instead we synthesize a value from the
+             * AArch64 ID_AA64DFR0, the same way the kernel code in
+             * arch/arm64/kvm/sys_regs.c:trap_dbgidr() does.
+             * We only do this if the CPU supports AArch32 at EL1.
+             */
+            if (FIELD_EX32(ahcf->isar.id_aa64pfr0, ID_AA64PFR0, EL1) >= 2) {
+                int wrps = FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, WRPS);
+                int brps = FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, BRPS);
+                int ctx_cmps =
+                    FIELD_EX64(ahcf->isar.id_aa64dfr0, ID_AA64DFR0, CTX_CMPS);
+                int version = 6; /* ARMv8 debug architecture */
+                bool has_el3 =
+                    !!FIELD_EX32(ahcf->isar.id_aa64pfr0, ID_AA64PFR0, EL3);
+                uint32_t dbgdidr = 0;
+
+                dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, WRPS, wrps);
+                dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, BRPS, brps);
+                dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, CTX_CMPS, ctx_cmps);
+                dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, VERSION, version);
+                dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, NSUHD_IMP, has_el3);
+                dbgdidr = FIELD_DP32(dbgdidr, DBGDIDR, SE_IMP, has_el3);
+                dbgdidr |= (1 << 15); /* RES1 bit */
+                ahcf->isar.dbgdidr = dbgdidr;
+            }
         }
     }
 
     sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
--
2.20.1

The ARMv8.1-PMU extension requires:
 * the evtCount field in PMEVTYPER<n>_EL0 is 16 bits, not 10
 * MDCR_EL2.HPMD allows event counting to be disabled at EL2
 * two new required events, STALL_FRONTEND and STALL_BACKEND
 * ID register bits in ID_AA64DFR0_EL1 and ID_DFR0

We already implement the 16-bit evtCount field and the
HPMD bit, so all that is missing is the two new events:
  STALL_FRONTEND
   "counts every cycle counted by the CPU_CYCLES event on which no
    operation was issued because there are no operations available
    to issue to this PE from the frontend"
  STALL_BACKEND
   "counts every cycle counted by the CPU_CYCLES event on which no
    operation was issued because the backend is unable to accept
    any available operations from the frontend"

QEMU never stalls in this sense, so our implementation is trivial:
always return a zero count.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-14-peter.maydell@linaro.org
---
 target/arm/helper.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int64_t instructions_ns_per(uint64_t icount)
 }
 #endif
 
+static bool pmu_8_1_events_supported(CPUARMState *env)
+{
+    /* For events which are supported in any v8.1 PMU */
+    return cpu_isar_feature(any_pmu_8_1, env_archcpu(env));
+}
+
+static uint64_t zero_event_get_count(CPUARMState *env)
+{
+    /* For events which on QEMU never fire, so their count is always zero */
+    return 0;
+}
+
+static int64_t zero_event_ns_per(uint64_t cycles)
+{
+    /* An event which never fires can never overflow */
+    return -1;
+}
+
 static const pm_event pm_events[] = {
     { .number = 0x000, /* SW_INCR */
       .supported = event_always_supported,
@@ -XXX,XX +XXX,XX @@ static const pm_event pm_events[] = {
       .supported = event_always_supported,
       .get_count = cycles_get_count,
       .ns_per_count = cycles_ns_per,
-    }
+    },
 #endif
+    { .number = 0x023, /* STALL_FRONTEND */
+      .supported = pmu_8_1_events_supported,
+      .get_count = zero_event_get_count,
+      .ns_per_count = zero_event_ns_per,
+    },
+    { .number = 0x024, /* STALL_BACKEND */
+      .supported = pmu_8_1_events_supported,
+      .get_count = zero_event_get_count,
+      .ns_per_count = zero_event_ns_per,
+    },
 };
 
 /*
@@ -XXX,XX +XXX,XX @@ static const pm_event pm_events[] = {
  * should first be updated to something sparse instead of the current
  * supported_event_map[] array.
  */
-#define MAX_EVENT_ID 0x11
+#define MAX_EVENT_ID 0x24
 #define UNSUPPORTED_EVENT UINT16_MAX
 static uint16_t supported_event_map[MAX_EVENT_ID + 1];
 
--
2.20.1

The ARMv8.4-PMU extension adds:
 * one new required event, STALL
 * one new system register PMMIR_EL1

(There are also some more L1-cache related events, but since
we don't implement any cache we don't provide these, in the
same way we don't provide the base-PMUv3 cache events.)

The STALL event "counts every attributable cycle on which no
attributable instruction or operation was sent for execution on this
PE". QEMU doesn't stall in this sense, so this is another
always-reads-zero event.

The PMMIR_EL1 register is a read-only register providing
implementation-specific information about the PMU; currently it has
only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU
event. Since QEMU doesn't implement the STALL_SLOT event, we can
validly make the register read zero.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-15-peter.maydell@linaro.org
---
 target/arm/cpu.h    | 18 ++++++++++++++++++
 target/arm/helper.c | 22 +++++++++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_pmu_8_1(const ARMISARegisters *id)
         FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
 }
 
+static inline bool isar_feature_aa32_pmu_8_4(const ARMISARegisters *id)
+{
+    /* 0xf means "non-standard IMPDEF PMU" */
+    return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 5 &&
+        FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_pmu_8_1(const ARMISARegisters *id)
         FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
 }
 
+static inline bool isar_feature_aa64_pmu_8_4(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
+        FIELD_EX32(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_pmu_8_1(const ARMISARegisters *id)
     return isar_feature_aa64_pmu_8_1(id) || isar_feature_aa32_pmu_8_1(id);
 }
 
+static inline bool isar_feature_any_pmu_8_4(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_pmu_8_4(id) || isar_feature_aa32_pmu_8_4(id);
+}
+
 /*
  * Forward to the above feature tests given an ARMCPU pointer.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool pmu_8_1_events_supported(CPUARMState *env)
     return cpu_isar_feature(any_pmu_8_1, env_archcpu(env));
 }
 
+static bool pmu_8_4_events_supported(CPUARMState *env)
+{
+    /* For events which are supported in any v8.4 PMU */
+    return cpu_isar_feature(any_pmu_8_4, env_archcpu(env));
+}
+
 static uint64_t zero_event_get_count(CPUARMState *env)
 {
     /* For events which on QEMU never fire, so their count is always zero */
@@ -XXX,XX +XXX,XX @@ static const pm_event pm_events[] = {
       .get_count = zero_event_get_count,
       .ns_per_count = zero_event_ns_per,
     },
+    { .number = 0x03c, /* STALL */
+      .supported = pmu_8_4_events_supported,
+      .get_count = zero_event_get_count,
+      .ns_per_count = zero_event_ns_per,
+    },
 };
 
 /*
@@ -XXX,XX +XXX,XX @@ static const pm_event pm_events[] = {
  * should first be updated to something sparse instead of the current
  * supported_event_map[] array.
  */
-#define MAX_EVENT_ID 0x24
+#define MAX_EVENT_ID 0x3c
 #define UNSUPPORTED_EVENT UINT16_MAX
 static uint16_t supported_event_map[MAX_EVENT_ID + 1];
 
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
         };
         define_arm_cp_regs(cpu, v81_pmu_regs);
     }
+    if (cpu_isar_feature(any_pmu_8_4, cpu)) {
+        static const ARMCPRegInfo v84_pmmir = {
+            .name = "PMMIR_EL1", .state = ARM_CP_STATE_BOTH,
+            .opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 6,
+            .access = PL1_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
+            .resetvalue = 0
+        };
+        define_one_arm_cp_reg(cpu, &v84_pmmir);
+    }
 }
 
 /* We don't know until after realize whether there's a GICv3
--
2.20.1

Set the ID register bits to provide ARMv8.4-PMU (and implicitly
also ARMv8.1-PMU) in the 'max' CPU.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-16-peter.maydell@linaro.org
---
 target/arm/cpu64.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
         cpu->id_mmfr3 = u;
 
+        u = cpu->isar.id_aa64dfr0;
+        u = FIELD_DP64(u, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
+        cpu->isar.id_aa64dfr0 = u;
+
+        u = cpu->isar.id_dfr0;
+        u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+        cpu->isar.id_dfr0 = u;
+
         /*
          * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
          * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
--
2.20.1

The PMCR_EL0.DP bit is bit 5, which is 0x20, not 0x10.  0x10 is 'X'.
Correct our #define of PMCRDP and add the missing PMCRX.

We do have the correct behaviour for handling the DP bit being
set, so this fixes a guest-visible bug.

Fixes: 033614c47de
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-17-peter.maydell@linaro.org
---
 target/arm/helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
 #define PMCRN_MASK  0xf800
 #define PMCRN_SHIFT 11
 #define PMCRLC  0x40
-#define PMCRDP  0x10
+#define PMCRDP  0x20
+#define PMCRX   0x10
 #define PMCRD   0x8
 #define PMCRC   0x4
 #define PMCRP   0x2
--
2.20.1

The LC bit in the PMCR_EL0 register is supposed to be:
 * read/write
 * RES1 on an AArch64-only implementation
 * an architecturally UNKNOWN value on reset
(and use of LC==0 by software is deprecated).

We were implementing it incorrectly as read-only always zero,
though we do have all the code needed to test it and behave
accordingly.

Instead make it a read-write bit which resets to 1 always, which
satisfies all the architectural requirements above.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200214175116.9164-18-peter.maydell@linaro.org
---
 target/arm/helper.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
 #define PMCRC   0x4
 #define PMCRP   0x2
 #define PMCRE   0x1
+/*
+ * Mask of PMCR bits writeable by guest (not including WO bits like C, P,
+ * which can be written as 1 to trigger behaviour but which stay RAZ).
+ */
+#define PMCR_WRITEABLE_MASK (PMCRLC | PMCRDP | PMCRX | PMCRD | PMCRE)
 
 #define PMXEVTYPER_P          0x80000000
 #define PMXEVTYPER_U          0x40000000
@@ -XXX,XX +XXX,XX @@ static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
         }
     }
 
-    /* only the DP, X, D and E bits are writable */
-    env->cp15.c9_pmcr &= ~0x39;
-    env->cp15.c9_pmcr |= (value & 0x39);
+    env->cp15.c9_pmcr &= ~PMCR_WRITEABLE_MASK;
+    env->cp15.c9_pmcr |= (value & PMCR_WRITEABLE_MASK);
 
     pmu_op_finish(env);
 }
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
         .access = PL0_RW, .accessfn = pmreg_access,
         .type = ARM_CP_IO,
         .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
-        .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT),
+        .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT) |
+            PMCRLC,
         .writefn = pmcr_write, .raw_writefn = raw_write,
     };
     define_one_arm_cp_reg(cpu, &pmcr);
--
2.20.1

The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions
are supposed to be testing fields in ID_MMFR3; but a cut-and-paste
error meant we were looking at MVFR0 instead.

Fix the functions to look at the right register; this requires
us to move at least id_mmfr3 to the ARMISARegisters struct; we
choose to move all the ID_MMFRn registers for consistency.

Fixes: 3d6ad6bb466f
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-19-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  14 +++---
 hw/intc/armv7m_nvic.c |   8 ++--
 target/arm/cpu.c      | 104 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  28 ++++++------
 target/arm/helper.c   |  12 ++---
 target/arm/kvm32.c    |  17 +++++++
 target/arm/kvm64.c    |  10 ++++
 7 files changed, 110 insertions(+), 83 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
         uint32_t id_isar4;
         uint32_t id_isar5;
         uint32_t id_isar6;
+        uint32_t id_mmfr0;
+        uint32_t id_mmfr1;
+        uint32_t id_mmfr2;
+        uint32_t id_mmfr3;
+        uint32_t id_mmfr4;
         uint32_t mvfr0;
         uint32_t mvfr1;
         uint32_t mvfr2;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint64_t pmceid0;
     uint64_t pmceid1;
     uint32_t id_afr0;
-    uint32_t id_mmfr0;
-    uint32_t id_mmfr1;
-    uint32_t id_mmfr2;
-    uint32_t id_mmfr3;
-    uint32_t id_mmfr4;
     uint64_t id_aa64afr0;
     uint64_t id_aa64afr1;
     uint32_t clidr;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
 
 static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) != 0;
+    return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) != 0;
 }
 
 static inline bool isar_feature_aa32_ats1e1(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr0, ID_MMFR3, PAN) >= 2;
+    return FIELD_EX32(id->id_mmfr3, ID_MMFR3, PAN) >= 2;
 }
 
 static inline bool isar_feature_aa32_pmu_8_1(const ARMISARegisters *id)
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd4c: /* AFR0.  */
         return cpu->id_afr0;
     case 0xd50: /* MMFR0.  */
-        return cpu->id_mmfr0;
+        return cpu->isar.id_mmfr0;
     case 0xd54: /* MMFR1.  */
-        return cpu->id_mmfr1;
+        return cpu->isar.id_mmfr1;
     case 0xd58: /* MMFR2.  */
-        return cpu->id_mmfr2;
+        return cpu->isar.id_mmfr2;
     case 0xd5c: /* MMFR3.  */
-        return cpu->id_mmfr3;
+        return cpu->isar.id_mmfr3;
     case 0xd60: /* ISAR0.  */
         return cpu->isar.id_isar0;
     case 0xd64: /* ISAR1.  */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     cpu->id_pfr1 = 0x1;
     cpu->isar.id_dfr0 = 0x2;
     cpu->id_afr0 = 0x3;
-    cpu->id_mmfr0 = 0x01130003;
-    cpu->id_mmfr1 = 0x10030302;
-    cpu->id_mmfr2 = 0x01222110;
+    cpu->isar.id_mmfr0 = 0x01130003;
+    cpu->isar.id_mmfr1 = 0x10030302;
+    cpu->isar.id_mmfr2 = 0x01222110;
     cpu->isar.id_isar0 = 0x00140011;
     cpu->isar.id_isar1 = 0x12002111;
     cpu->isar.id_isar2 = 0x11231111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     cpu->id_pfr1 = 0x1;
     cpu->isar.id_dfr0 = 0x2;
     cpu->id_afr0 = 0x3;
-    cpu->id_mmfr0 = 0x01130003;
-    cpu->id_mmfr1 = 0x10030302;
-    cpu->id_mmfr2 = 0x01222110;
+    cpu->isar.id_mmfr0 = 0x01130003;
+    cpu->isar.id_mmfr1 = 0x10030302;
+    cpu->isar.id_mmfr2 = 0x01222110;
     cpu->isar.id_isar0 = 0x00140011;
     cpu->isar.id_isar1 = 0x12002111;
     cpu->isar.id_isar2 = 0x11231111;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     cpu->id_pfr1 = 0x11;
     cpu->isar.id_dfr0 = 0x33;
     cpu->id_afr0 = 0;
-    cpu->id_mmfr0 = 0x01130003;
-    cpu->id_mmfr1 = 0x10030302;
-    cpu->id_mmfr2 = 0x01222100;
+    cpu->isar.id_mmfr0 = 0x01130003;
+    cpu->isar.id_mmfr1 = 0x10030302;
+    cpu->isar.id_mmfr2 = 0x01222100;
     cpu->isar.id_isar0 = 0x0140011;
     cpu->isar.id_isar1 = 0x12002111;
     cpu->isar.id_isar2 = 0x11231121;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     cpu->id_pfr1 = 0x1;
     cpu->isar.id_dfr0 = 0;
     cpu->id_afr0 = 0x2;
-    cpu->id_mmfr0 = 0x01100103;
-    cpu->id_mmfr1 = 0x10020302;
-    cpu->id_mmfr2 = 0x01222000;
+    cpu->isar.id_mmfr0 = 0x01100103;
+    cpu->isar.id_mmfr1 = 0x10020302;
+    cpu->isar.id_mmfr2 = 0x01222000;
     cpu->isar.id_isar0 = 0x00100011;
     cpu->isar.id_isar1 = 0x12002111;
     cpu->isar.id_isar2 = 0x11221011;
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
     cpu->id_pfr1 = 0x00000200;
     cpu->isar.id_dfr0 = 0x00100000;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x00000030;
-    cpu->id_mmfr1 = 0x00000000;
-    cpu->id_mmfr2 = 0x00000000;
-    cpu->id_mmfr3 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
     cpu->isar.id_isar0 = 0x01141110;
     cpu->isar.id_isar1 = 0x02111000;
     cpu->isar.id_isar2 = 0x21112231;
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->id_pfr1 = 0x00000200;
     cpu->isar.id_dfr0 = 0x00100000;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x00000030;
-    cpu->id_mmfr1 = 0x00000000;
-    cpu->id_mmfr2 = 0x00000000;
-    cpu->id_mmfr3 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00000030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x00000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
     cpu->isar.id_isar0 = 0x01141110;
     cpu->isar.id_isar1 = 0x02111000;
     cpu->isar.id_isar2 = 0x21112231;
@@ -XXX,XX +XXX,XX @@ static void cortex_m7_initfn(Object *obj)
     cpu->id_pfr1 = 0x00000200;
     cpu->isar.id_dfr0 = 0x00100000;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x00100030;
-    cpu->id_mmfr1 = 0x00000000;
-    cpu->id_mmfr2 = 0x01000000;
-    cpu->id_mmfr3 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00100030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x01000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
     cpu->isar.id_isar0 = 0x01101110;
     cpu->isar.id_isar1 = 0x02112000;
     cpu->isar.id_isar2 = 0x20232231;
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
     cpu->id_pfr1 = 0x00000210;
     cpu->isar.id_dfr0 = 0x00200000;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x00101F40;
-    cpu->id_mmfr1 = 0x00000000;
-    cpu->id_mmfr2 = 0x01000000;
-    cpu->id_mmfr3 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x00101F40;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x01000000;
+    cpu->isar.id_mmfr3 = 0x00000000;
     cpu->isar.id_isar0 = 0x01101110;
     cpu->isar.id_isar1 = 0x02212000;
     cpu->isar.id_isar2 = 0x20232232;
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     cpu->id_pfr1 = 0x001;
     cpu->isar.id_dfr0 = 0x010400;
     cpu->id_afr0 = 0x0;
-    cpu->id_mmfr0 = 0x0210030;
-    cpu->id_mmfr1 = 0x00000000;
-    cpu->id_mmfr2 = 0x01200000;
-    cpu->id_mmfr3 = 0x0211;
+    cpu->isar.id_mmfr0 = 0x0210030;
+    cpu->isar.id_mmfr1 = 0x00000000;
+    cpu->isar.id_mmfr2 = 0x01200000;
+    cpu->isar.id_mmfr3 = 0x0211;
     cpu->isar.id_isar0 = 0x02101111;
     cpu->isar.id_isar1 = 0x13112111;
     cpu->isar.id_isar2 = 0x21232141;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     cpu->id_pfr1 = 0x11;
     cpu->isar.id_dfr0 = 0x400;
     cpu->id_afr0 = 0;
-    cpu->id_mmfr0 = 0x31100003;
-    cpu->id_mmfr1 = 0x20000000;
-    cpu->id_mmfr2 = 0x01202000;
-    cpu->id_mmfr3 = 0x11;
+    cpu->isar.id_mmfr0 = 0x31100003;
+    cpu->isar.id_mmfr1 = 0x20000000;
+    cpu->isar.id_mmfr2 = 0x01202000;
+    cpu->isar.id_mmfr3 = 0x11;
     cpu->isar.id_isar0 = 0x00101111;
     cpu->isar.id_isar1 = 0x12112111;
     cpu->isar.id_isar2 = 0x21232031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     cpu->id_pfr1 = 0x11;
     cpu->isar.id_dfr0 = 0x000;
     cpu->id_afr0 = 0;
-    cpu->id_mmfr0 = 0x00100103;
-    cpu->id_mmfr1 = 0x20000000;
-    cpu->id_mmfr2 = 0x01230000;
-    cpu->id_mmfr3 = 0x00002111;
+    cpu->isar.id_mmfr0 = 0x00100103;
+    cpu->isar.id_mmfr1 = 0x20000000;
+    cpu->isar.id_mmfr2 = 0x01230000;
+    cpu->isar.id_mmfr3 = 0x00002111;
     cpu->isar.id_isar0 = 0x00101111;
     cpu->isar.id_isar1 = 0x13112111;
     cpu->isar.id_isar2 = 0x21232041;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->id_pfr1 = 0x00011011;
     cpu->isar.id_dfr0 = 0x02010555;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x10101105;
-    cpu->id_mmfr1 = 0x40000000;
-    cpu->id_mmfr2 = 0x01240000;
-    cpu->id_mmfr3 = 0x02102211;
+    cpu->isar.id_mmfr0 = 0x10101105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01240000;
+    cpu->isar.id_mmfr3 = 0x02102211;
     /* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
      * table 4-41 gives 0x02101110, which includes the arm div insns.
      */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_pfr1 = 0x00011011;
     cpu->isar.id_dfr0 = 0x02010555;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x10201105;
-    cpu->id_mmfr1 = 0x20000000;
-    cpu->id_mmfr2 = 0x01240000;
-    cpu->id_mmfr3 = 0x02102211;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x20000000;
+    cpu->isar.id_mmfr2 = 0x01240000;
+    cpu->isar.id_mmfr3 = 0x02102211;
     cpu->isar.id_isar0 = 0x02101110;
     cpu->isar.id_isar1 = 0x13112111;
     cpu->isar.id_isar2 = 0x21232041;
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
         cpu->isar.mvfr2 = t;
 
-        t = cpu->id_mmfr3;
+        t = cpu->isar.id_mmfr3;
         t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
-        cpu->id_mmfr3 = t;
+        cpu->isar.id_mmfr3 = t;
 
-        t = cpu->id_mmfr4;
+        t = cpu->isar.id_mmfr4;
         t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
-        cpu->id_mmfr4 = t;
+        cpu->isar.id_mmfr4 = t;
     }
 #endif
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->id_pfr1 = 0x00011011;
     cpu->isar.id_dfr0 = 0x03010066;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x10101105;
-    cpu->id_mmfr1 = 0x40000000;
-    cpu->id_mmfr2 = 0x01260000;
-    cpu->id_mmfr3 = 0x02102211;
+    cpu->isar.id_mmfr0 = 0x10101105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02102211;
     cpu->isar.id_isar0 = 0x02101110;
     cpu->isar.id_isar1 = 0x13112111;
     cpu->isar.id_isar2 = 0x21232042;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->id_pfr1 = 0x00011011;
     cpu->isar.id_dfr0 = 0x03010066;
     cpu->id_afr0 = 0x00000000;
-    cpu->id_mmfr0 = 0x10101105;
-    cpu->id_mmfr1 = 0x40000000;
-    cpu->id_mmfr2 = 0x01260000;
323
- cpu->id_mmfr3 = 0x02102211;
324
+ cpu->isar.id_mmfr0 = 0x10101105;
325
+ cpu->isar.id_mmfr1 = 0x40000000;
326
+ cpu->isar.id_mmfr2 = 0x01260000;
327
+ cpu->isar.id_mmfr3 = 0x02102211;
328
cpu->isar.id_isar0 = 0x02101110;
329
cpu->isar.id_isar1 = 0x13112111;
330
cpu->isar.id_isar2 = 0x21232042;
331
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
332
cpu->id_pfr1 = 0x00011011;
333
cpu->isar.id_dfr0 = 0x03010066;
334
cpu->id_afr0 = 0x00000000;
335
- cpu->id_mmfr0 = 0x10201105;
336
- cpu->id_mmfr1 = 0x40000000;
337
- cpu->id_mmfr2 = 0x01260000;
338
- cpu->id_mmfr3 = 0x02102211;
339
+ cpu->isar.id_mmfr0 = 0x10201105;
340
+ cpu->isar.id_mmfr1 = 0x40000000;
341
+ cpu->isar.id_mmfr2 = 0x01260000;
342
+ cpu->isar.id_mmfr3 = 0x02102211;
343
cpu->isar.id_isar0 = 0x02101110;
344
cpu->isar.id_isar1 = 0x13112111;
345
cpu->isar.id_isar2 = 0x21232042;
346
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
347
u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
348
cpu->isar.id_isar6 = u;
349
350
- u = cpu->id_mmfr3;
351
+ u = cpu->isar.id_mmfr3;
352
u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
353
- cpu->id_mmfr3 = u;
354
+ cpu->isar.id_mmfr3 = u;
355
356
u = cpu->isar.id_aa64dfr0;
357
u = FIELD_DP64(u, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
358
diff --git a/target/arm/helper.c b/target/arm/helper.c
359
index XXXXXXX..XXXXXXX 100644
360
--- a/target/arm/helper.c
361
+++ b/target/arm/helper.c
362
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
363
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 4,
364
.access = PL1_R, .type = ARM_CP_CONST,
365
.accessfn = access_aa32_tid3,
366
- .resetvalue = cpu->id_mmfr0 },
367
+ .resetvalue = cpu->isar.id_mmfr0 },
368
{ .name = "ID_MMFR1", .state = ARM_CP_STATE_BOTH,
369
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 5,
370
.access = PL1_R, .type = ARM_CP_CONST,
371
.accessfn = access_aa32_tid3,
372
- .resetvalue = cpu->id_mmfr1 },
373
+ .resetvalue = cpu->isar.id_mmfr1 },
374
{ .name = "ID_MMFR2", .state = ARM_CP_STATE_BOTH,
375
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 6,
376
.access = PL1_R, .type = ARM_CP_CONST,
377
.accessfn = access_aa32_tid3,
378
- .resetvalue = cpu->id_mmfr2 },
379
+ .resetvalue = cpu->isar.id_mmfr2 },
380
{ .name = "ID_MMFR3", .state = ARM_CP_STATE_BOTH,
381
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 7,
382
.access = PL1_R, .type = ARM_CP_CONST,
383
.accessfn = access_aa32_tid3,
384
- .resetvalue = cpu->id_mmfr3 },
385
+ .resetvalue = cpu->isar.id_mmfr3 },
386
{ .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
387
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
388
.access = PL1_R, .type = ARM_CP_CONST,
389
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
390
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
391
.access = PL1_R, .type = ARM_CP_CONST,
392
.accessfn = access_aa32_tid3,
393
- .resetvalue = cpu->id_mmfr4 },
394
+ .resetvalue = cpu->isar.id_mmfr4 },
395
{ .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
396
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
397
.access = PL1_R, .type = ARM_CP_CONST,
398
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
399
define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
400
define_arm_cp_regs(cpu, vmsa_cp_reginfo);
401
/* TTCBR2 is introduced with ARMv8.2-A32HPD. */
402
- if (FIELD_EX32(cpu->id_mmfr4, ID_MMFR4, HPDS) != 0) {
403
+ if (FIELD_EX32(cpu->isar.id_mmfr4, ID_MMFR4, HPDS) != 0) {
404
define_one_arm_cp_reg(cpu, &ttbcr2_reginfo);
405
}
406
}
407
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
408
index XXXXXXX..XXXXXXX 100644
409
--- a/target/arm/kvm32.c
410
+++ b/target/arm/kvm32.c
411
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
412
* Fortunately there is not yet anything in there that affects migration.
413
*/
414
415
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr0,
416
+ ARM_CP15_REG32(0, 0, 1, 4));
417
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr1,
418
+ ARM_CP15_REG32(0, 0, 1, 5));
419
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr2,
420
+ ARM_CP15_REG32(0, 0, 1, 6));
421
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr3,
422
+ ARM_CP15_REG32(0, 0, 1, 7));
423
+ if (read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr4,
424
+ ARM_CP15_REG32(0, 0, 2, 6))) {
425
+ /*
426
+ * Older kernels don't support reading ID_MMFR4 (a new in v8
427
+ * register); assume it's zero.
47
+ */
428
+ */
48
l = flatview_extend_translation(cache->fv, addr, len, mr,
429
+ ahcf->isar.id_mmfr4 = 0;
49
- cache->xlat, l, is_write);
430
+ }
50
+ cache->xlat, l, is_write,
431
+
51
+ MEMTXATTRS_UNSPECIFIED);
432
/*
52
cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
433
* There is no way to read DBGDIDR, because currently 32-bit KVM
53
} else {
434
* doesn't implement debug at all. Leave it at zero.
54
cache->ptr = NULL;
435
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
436
index XXXXXXX..XXXXXXX 100644
437
--- a/target/arm/kvm64.c
438
+++ b/target/arm/kvm64.c
439
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
440
*/
441
err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_dfr0,
442
ARM64_SYS_REG(3, 0, 0, 1, 2));
443
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr0,
444
+ ARM64_SYS_REG(3, 0, 0, 1, 4));
445
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr1,
446
+ ARM64_SYS_REG(3, 0, 0, 1, 5));
447
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr2,
448
+ ARM64_SYS_REG(3, 0, 0, 1, 6));
449
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr3,
450
+ ARM64_SYS_REG(3, 0, 0, 1, 7));
451
err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar0,
452
ARM64_SYS_REG(3, 0, 0, 2, 0));
453
err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar1,
454
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
455
ARM64_SYS_REG(3, 0, 0, 2, 4));
456
err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar5,
457
ARM64_SYS_REG(3, 0, 0, 2, 5));
458
+ err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_mmfr4,
459
+ ARM64_SYS_REG(3, 0, 0, 2, 6));
460
err |= read_sys_reg32(fdarray[2], &ahcf->isar.id_isar6,
461
ARM64_SYS_REG(3, 0, 0, 2, 7));
462
55
--
463
--
56
2.17.1
464
2.20.1
57
465
58
466
diff view generated by jsdifflib
Now we have moved ID_MMFR4 into the ARMISARegisters struct, we
can define and use an isar_feature for the presence of the
ARMv8.2-AA32HPD feature, rather than open-coding the test.

While we're here, correct a comment typo which missed an 'A'
from the feature name.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-20-peter.maydell@linaro.org
---
 target/arm/cpu.h    | 5 +++++
 target/arm/helper.c | 4 ++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_pmu_8_4(const ARMISARegisters *id)
         FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
 }

+static inline bool isar_feature_aa32_hpd(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_mmfr4, ID_MMFR4, HPDS) != 0;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     } else {
         define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
         define_arm_cp_regs(cpu, vmsa_cp_reginfo);
-        /* TTCBR2 is introduced with ARMv8.2-A32HPD. */
-        if (FIELD_EX32(cpu->isar.id_mmfr4, ID_MMFR4, HPDS) != 0) {
+        /* TTCBR2 is introduced with ARMv8.2-AA32HPD. */
+        if (cpu_isar_feature(aa32_hpd, cpu)) {
             define_one_arm_cp_reg(cpu, &ttbcr2_reginfo);
         }
     }
--
2.20.1

Cut-and-paste errors mean we're using FIELD_EX64() to extract fields from
some 32-bit ID register fields. Use FIELD_EX32() instead. (This makes
no difference in behaviour, it's just more consistent.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-21-peter.maydell@linaro.org
---
 target/arm/cpu.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
 static inline bool isar_feature_aa32_fp_d32(const ARMISARegisters *id)
 {
     /* Return true if D16-D31 are implemented */
-    return FIELD_EX64(id->mvfr0, MVFR0, SIMDREG) >= 2;
+    return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) >= 2;
 }

 static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr0, MVFR0, FPSHVEC) > 0;
+    return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
 }

 static inline bool isar_feature_aa32_fpdp(const ARMISARegisters *id)
 {
     /* Return true if CPU supports double precision floating point */
-    return FIELD_EX64(id->mvfr0, MVFR0, FPDP) > 0;
+    return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpdp(const ARMISARegisters *id)
  */
 static inline bool isar_feature_aa32_fp16_spconv(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr1, MVFR1, FPHP) > 0;
+    return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 0;
 }

 static inline bool isar_feature_aa32_fp16_dpconv(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr1, MVFR1, FPHP) > 1;
+    return FIELD_EX32(id->mvfr1, MVFR1, FPHP) > 1;
 }

 static inline bool isar_feature_aa32_vsel(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 1;
+    return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 1;
 }

 static inline bool isar_feature_aa32_vcvt_dr(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 2;
+    return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 2;
 }

 static inline bool isar_feature_aa32_vrint(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 3;
+    return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 3;
 }

 static inline bool isar_feature_aa32_vminmaxnm(const ARMISARegisters *id)
 {
-    return FIELD_EX64(id->mvfr2, MVFR2, FPMISC) >= 4;
+    return FIELD_EX32(id->mvfr2, MVFR2, FPMISC) >= 4;
 }

 static inline bool isar_feature_aa32_pan(const ARMISARegisters *id)
--
2.20.1

The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7
or the original ARMv8. They were later added as optional registers,
whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2
they are mandatory (ie ID_MMFR4.AC2 must be non-zero).

We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we
incorrectly made it exist for all v8 CPUs, and we didn't implement
ACTLR2 at all.

Sort this out by implementing both registers only when they are
supposed to exist, and setting the ID_MMFR4 bit for -cpu max.

Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72
CPU models; this is correct, because those CPUs do not implement
this register.

Fixes: 0e0456ab8895a5e85
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214175116.9164-22-peter.maydell@linaro.org
---
 target/arm/cpu.h    |  5 +++++
 target/arm/cpu.c    |  1 +
 target/arm/cpu64.c  |  4 ++++
 target/arm/helper.c | 32 +++++++++++++++++++++++---------
 4 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_hpd(const ARMISARegisters *id)
     return FIELD_EX32(id->id_mmfr4, ID_MMFR4, HPDS) != 0;
 }

+static inline bool isar_feature_aa32_ac2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_mmfr4, ID_MMFR4, AC2) != 0;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)

         t = cpu->isar.id_mmfr4;
         t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
+        t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
         cpu->isar.id_mmfr4 = t;
     }
 #endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
     cpu->isar.id_mmfr3 = u;

+    u = cpu->isar.id_mmfr4;
+    u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+    cpu->isar.id_mmfr4 = u;
+
     u = cpu->isar.id_aa64dfr0;
     u = FIELD_DP64(u, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
     cpu->isar.id_aa64dfr0 = u;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1cp_reginfo[] = {
 };
 #endif

+/*
+ * ACTLR2 and HACTLR2 map to ACTLR_EL1[63:32] and
+ * ACTLR_EL2[63:32]. They exist only if the ID_MMFR4.AC2 field
+ * is non-zero, which is never for ARMv7, optionally in ARMv8
+ * and mandatorily for ARMv8.2 and up.
+ * ACTLR2 is banked for S and NS if EL3 is AArch32. Since QEMU's
+ * implementation is RAZ/WI we can ignore this detail, as we
+ * do for ACTLR.
+ */
+static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = {
+    { .name = "ACTLR2", .state = ARM_CP_STATE_AA32,
+      .cp = 15, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST,
+      .resetvalue = 0 },
+    { .name = "HACTLR2", .state = ARM_CP_STATE_AA32,
+      .cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
+      .access = PL2_RW, .type = ARM_CP_CONST,
+      .resetvalue = 0 },
+    REGINFO_SENTINEL
+};
+
 void register_cp_regs_for_features(ARMCPU *cpu)
 {
     /* Register all the coprocessor registers based on feature bits */
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, auxcr_reginfo);
-        if (arm_feature(env, ARM_FEATURE_V8)) {
-            /* HACTLR2 maps to ACTLR_EL2[63:32] and is not in ARMv7 */
-            ARMCPRegInfo hactlr2_reginfo = {
-                .name = "HACTLR2", .state = ARM_CP_STATE_AA32,
-                .cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
-                .access = PL2_RW, .type = ARM_CP_CONST,
-                .resetvalue = 0
-            };
-            define_one_arm_cp_reg(cpu, &hactlr2_reginfo);
+        if (cpu_isar_feature(aa32_ac2, cpu)) {
+            define_arm_cp_regs(cpu, actlr2_hactlr2_reginfo);
         }
     }
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
185
if (!IS_DIRECT(mr, true)) {
186
release_lock |= prepare_mmio_access(mr);
187
r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
188
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
189
bool release_lock = false;
190
191
RCU_READ_LOCK();
192
- mr = TRANSLATE(addr, &addr1, &l, true);
193
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
194
if (l < 2 || !IS_DIRECT(mr, true)) {
195
release_lock |= prepare_mmio_access(mr);
196
197
@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
198
bool release_lock = false;
199
200
RCU_READ_LOCK();
201
- mr = TRANSLATE(addr, &addr1, &l, true);
202
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
203
if (l < 8 || !IS_DIRECT(mr, true)) {
204
release_lock |= prepare_mmio_access(mr);
205
206
diff --git a/target/riscv/helper.c b/target/riscv/helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/riscv/helper.c
209
+++ b/target/riscv/helper.c
210
@@ -XXX,XX +XXX,XX @@ restart:
211
MemoryRegion *mr;
212
hwaddr l = sizeof(target_ulong), addr1;
213
mr = address_space_translate(cs->as, pte_addr,
214
- &addr1, &l, false);
215
+ &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
216
if (memory_access_is_direct(mr, true)) {
217
target_ulong *pte_pa =
218
qemu_map_ram_ptr(mr->ram_block, addr1);
219
--
121
--
220
2.17.1
122
2.20.1
221
123
222
124
diff view generated by jsdifflib
From: Shannon Zhao <zhaoshenglong@huawei.com>

kvm_irqchip_create called by kvm_init will call kvm_init_irq_routing to
initialize global capability variables. If we call kvm_init_irq_routing in
the GIC realize function, previously allocated memory will leak.

Fix this by deleting the unnecessary call.

Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527750994-14360-1-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gic_kvm.c | 1 -
hw/intc/arm_gicv3_kvm.c | 1 -
2 files changed, 2 deletions(-)

diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)

if (kvm_has_gsi_routing()) {
/* set up irq routing */
- kvm_init_irq_routing(kvm_state);
for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
}
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)

if (kvm_has_gsi_routing()) {
/* set up irq routing */
- kvm_init_irq_routing(kvm_state);
for (i = 0; i < s->num_irq - GIC_INTERNAL; ++i) {
kvm_irqchip_add_irq_route(kvm_state, i, 0, i);
}
--
2.17.1

From: Guenter Roeck <linux@roeck-us.net>

We need to be able to use OHCISysBusState outside hcd-ohci.c, so move it
to its include file.

Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200217204812.9857-2-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/usb/hcd-ohci.h | 16 ++++++++++++++++
hw/usb/hcd-ohci.c | 15 ---------------
2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/hw/usb/hcd-ohci.h b/hw/usb/hcd-ohci.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/usb/hcd-ohci.h
+++ b/hw/usb/hcd-ohci.h
@@ -XXX,XX +XXX,XX @@
#define HCD_OHCI_H

#include "sysemu/dma.h"
+#include "hw/usb.h"

/* Number of Downstream Ports on the root hub: */
#define OHCI_MAX_PORTS 15
@@ -XXX,XX +XXX,XX @@ typedef struct OHCIState {
void (*ohci_die)(struct OHCIState *ohci);
} OHCIState;

+#define TYPE_SYSBUS_OHCI "sysbus-ohci"
+#define SYSBUS_OHCI(obj) OBJECT_CHECK(OHCISysBusState, (obj), TYPE_SYSBUS_OHCI)
+
+typedef struct {
+ /*< private >*/
+ SysBusDevice parent_obj;
+ /*< public >*/
+
+ OHCIState ohci;
+ char *masterbus;
+ uint32_t num_ports;
+ uint32_t firstport;
+ dma_addr_t dma_offset;
+} OHCISysBusState;
+
extern const VMStateDescription vmstate_ohci_state;

void usb_ohci_init(OHCIState *ohci, DeviceState *dev, uint32_t num_ports,
diff --git a/hw/usb/hcd-ohci.c b/hw/usb/hcd-ohci.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/usb/hcd-ohci.c
+++ b/hw/usb/hcd-ohci.c
@@ -XXX,XX +XXX,XX @@ void ohci_sysbus_die(struct OHCIState *ohci)
ohci_bus_stop(ohci);
}

-#define TYPE_SYSBUS_OHCI "sysbus-ohci"
-#define SYSBUS_OHCI(obj) OBJECT_CHECK(OHCISysBusState, (obj), TYPE_SYSBUS_OHCI)
-
-typedef struct {
- /*< private >*/
- SysBusDevice parent_obj;
- /*< public >*/
-
- OHCIState ohci;
- char *masterbus;
- uint32_t num_ports;
- uint32_t firstport;
- dma_addr_t dma_offset;
-} OHCISysBusState;
-
static void ohci_realize_pxa(DeviceState *dev, Error **errp)
{
OHCISysBusState *s = SYSBUS_OHCI(dev);
--
2.20.1
Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
and friends.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
---
include/migration/vmstate.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index XXXXXXX..XXXXXXX 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
#define VMSTATE_BOOL_ARRAY(_f, _s, _n) \
VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)

+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) \
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
+
#define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v) \
VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)
--
2.17.1

From: Guenter Roeck <linux@roeck-us.net>

We'll use this property in a follow-up patch to instantiate an EHCI
bus with companion support.

Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200217204812.9857-3-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/usb/hcd-ehci-sysbus.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/hw/usb/hcd-ehci-sysbus.c b/hw/usb/hcd-ehci-sysbus.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/usb/hcd-ehci-sysbus.c
+++ b/hw/usb/hcd-ehci-sysbus.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_ehci_sysbus = {

static Property ehci_sysbus_properties[] = {
DEFINE_PROP_UINT32("maxframes", EHCISysBusState, ehci.maxframes, 128),
+ DEFINE_PROP_BOOL("companion-enable", EHCISysBusState, ehci.companion_enable,
+ false),
DEFINE_PROP_END_OF_LIST(),
};
--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
---
include/exec/memory.h | 4 +++-
include/sysemu/dma.h | 3 ++-
exec.c | 3 ++-
target/s390x/diag.c | 6 ++++--
target/s390x/excp_helper.c | 3 ++-
target/s390x/mmu_helper.c | 3 ++-
target/s390x/sigp.c | 3 ++-
7 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
* @addr: address within that address space
* @len: length of the area to be checked
* @is_write: indicates the transfer direction
+ * @attrs: memory attributes
*/
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
+ bool is_write, MemTxAttrs attrs);

/* address_space_map: map a physical memory region into a host virtual address
*
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
DMADirection dir)
{
return address_space_access_valid(as, addr, len,
- dir == DMA_DIRECTION_FROM_DEVICE);
+ dir == DMA_DIRECTION_FROM_DEVICE,
+ MEMTXATTRS_UNSPECIFIED);
}

static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
}

bool address_space_access_valid(AddressSpace *as, hwaddr addr,
- int len, bool is_write)
+ int len, bool is_write,
+ MemTxAttrs attrs)
{
FlatView *fv;
bool result;
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/diag.c
+++ b/target/s390x/diag.c
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
return;
}
if (!address_space_access_valid(&address_space_memory, addr,
- sizeof(IplParameterBlock), false)) {
+ sizeof(IplParameterBlock), false,
+ MEMTXATTRS_UNSPECIFIED)) {
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
return;
}
@@ -XXX,XX +XXX,XX @@ out:
return;
}
if (!address_space_access_valid(&address_space_memory, addr,
- sizeof(IplParameterBlock), true)) {
+ sizeof(IplParameterBlock), true,
+ MEMTXATTRS_UNSPECIFIED)) {
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
return;
}
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,

/* check out of RAM access */
if (!address_space_access_valid(&address_space_memory, raddr,
- TARGET_PAGE_SIZE, rw)) {
+ TARGET_PAGE_SIZE, rw,
+ MEMTXATTRS_UNSPECIFIED)) {
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
(uint64_t)raddr, (uint64_t)ram_size);
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/mmu_helper.c
+++ b/target/s390x/mmu_helper.c
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
return ret;
}
if (!address_space_access_valid(&address_space_memory, pages[i],
- TARGET_PAGE_SIZE, is_write)) {
+ TARGET_PAGE_SIZE, is_write,
+ MEMTXATTRS_UNSPECIFIED)) {
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
return -EFAULT;
}
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
cpu_synchronize_state(cs);

if (!address_space_access_valid(&address_space_memory, addr,
- sizeof(struct LowCore), false)) {
+ sizeof(struct LowCore), false,
+ MEMTXATTRS_UNSPECIFIED)) {
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
return;
}
--
2.17.1

From: Guenter Roeck <linux@roeck-us.net>

Instantiate EHCI and OHCI controllers on Allwinner A10. OHCI ports are
modeled as companions of the respective EHCI ports.

With this patch applied, USB controllers are discovered and instantiated
when booting the cubieboard machine with a recent Linux kernel.

ehci-platform 1c14000.usb: EHCI Host Controller
ehci-platform 1c14000.usb: new USB bus registered, assigned bus number 1
ehci-platform 1c14000.usb: irq 26, io mem 0x01c14000
ehci-platform 1c14000.usb: USB 2.0 started, EHCI 1.00
ehci-platform 1c1c000.usb: EHCI Host Controller
ehci-platform 1c1c000.usb: new USB bus registered, assigned bus number 2
ehci-platform 1c1c000.usb: irq 31, io mem 0x01c1c000
ehci-platform 1c1c000.usb: USB 2.0 started, EHCI 1.00
ohci-platform 1c14400.usb: Generic Platform OHCI controller
ohci-platform 1c14400.usb: new USB bus registered, assigned bus number 3
ohci-platform 1c14400.usb: irq 27, io mem 0x01c14400
ohci-platform 1c1c400.usb: Generic Platform OHCI controller
ohci-platform 1c1c400.usb: new USB bus registered, assigned bus number 4
ohci-platform 1c1c400.usb: irq 32, io mem 0x01c1c400
usb 2-1: new high-speed USB device number 2 using ehci-platform
usb-storage 2-1:1.0: USB Mass Storage device detected
scsi host1: usb-storage 2-1:1.0
usb 3-1: new full-speed USB device number 2 using ohci-platform
input: QEMU QEMU USB Mouse as /devices/platform/soc/1c14400.usb/usb3/3-1/3-1:1.0/0003:0627:0001.0001/input/input0

Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Niek Linnenbank <nieklinnenbank@gmail.com>
Message-id: 20200217204812.9857-4-linux@roeck-us.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/allwinner-a10.h | 6 +++++
hw/arm/allwinner-a10.c | 43 ++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)

diff --git a/include/hw/arm/allwinner-a10.h b/include/hw/arm/allwinner-a10.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/allwinner-a10.h
+++ b/include/hw/arm/allwinner-a10.h
@@ -XXX,XX +XXX,XX @@
#include "hw/intc/allwinner-a10-pic.h"
#include "hw/net/allwinner_emac.h"
#include "hw/ide/ahci.h"
+#include "hw/usb/hcd-ohci.h"
+#include "hw/usb/hcd-ehci.h"

#include "target/arm/cpu.h"

#define AW_A10_SDRAM_BASE 0x40000000

+#define AW_A10_NUM_USB 2
+
#define TYPE_AW_A10 "allwinner-a10"
#define AW_A10(obj) OBJECT_CHECK(AwA10State, (obj), TYPE_AW_A10)

@@ -XXX,XX +XXX,XX @@ typedef struct AwA10State {
AwEmacState emac;
AllwinnerAHCIState sata;
MemoryRegion sram_a;
+ EHCISysBusState ehci[AW_A10_NUM_USB];
+ OHCISysBusState ohci[AW_A10_NUM_USB];
} AwA10State;

#endif
diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/allwinner-a10.c
+++ b/hw/arm/allwinner-a10.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/allwinner-a10.h"
#include "hw/misc/unimp.h"
#include "sysemu/sysemu.h"
+#include "hw/boards.h"
+#include "hw/usb/hcd-ohci.h"

#define AW_A10_PIC_REG_BASE 0x01c20400
#define AW_A10_PIT_REG_BASE 0x01c20c00
#define AW_A10_UART0_REG_BASE 0x01c28000
#define AW_A10_EMAC_BASE 0x01c0b000
+#define AW_A10_EHCI_BASE 0x01c14000
+#define AW_A10_OHCI_BASE 0x01c14400
#define AW_A10_SATA_BASE 0x01c18000

static void aw_a10_init(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aw_a10_init(Object *obj)

sysbus_init_child_obj(obj, "sata", &s->sata, sizeof(s->sata),
TYPE_ALLWINNER_AHCI);
+
+ if (machine_usb(current_machine)) {
+ int i;
+
+ for (i = 0; i < AW_A10_NUM_USB; i++) {
+ sysbus_init_child_obj(obj, "ehci[*]", OBJECT(&s->ehci[i]),
+ sizeof(s->ehci[i]), TYPE_PLATFORM_EHCI);
+ sysbus_init_child_obj(obj, "ohci[*]", OBJECT(&s->ohci[i]),
+ sizeof(s->ohci[i]), TYPE_SYSBUS_OHCI);
+ }
+ }
}

static void aw_a10_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void aw_a10_realize(DeviceState *dev, Error **errp)
serial_mm_init(get_system_memory(), AW_A10_UART0_REG_BASE, 2,
qdev_get_gpio_in(dev, 1),
115200, serial_hd(0), DEVICE_NATIVE_ENDIAN);
+
+ if (machine_usb(current_machine)) {
+ int i;
+
+ for (i = 0; i < AW_A10_NUM_USB; i++) {
+ char bus[16];
+
+ sprintf(bus, "usb-bus.%d", i);
+
+ object_property_set_bool(OBJECT(&s->ehci[i]), true,
+ "companion-enable", &error_fatal);
+ object_property_set_bool(OBJECT(&s->ehci[i]), true, "realized",
+ &error_fatal);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->ehci[i]), 0,
+ AW_A10_EHCI_BASE + i * 0x8000);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->ehci[i]), 0,
+ qdev_get_gpio_in(dev, 39 + i));
+
+ object_property_set_str(OBJECT(&s->ohci[i]), bus, "masterbus",
+ &error_fatal);
+ object_property_set_bool(OBJECT(&s->ohci[i]), true, "realized",
+ &error_fatal);
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->ohci[i]), 0,
+ AW_A10_OHCI_BASE + i * 0x8000);
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->ohci[i]), 0,
+ qdev_get_gpio_in(dev, 64 + i));
+ }
+ }
}

static void aw_a10_class_init(ObjectClass *oc, void *data)
--
2.20.1
Add entries to MAINTAINERS to cover the newer MPS2 boards and
the new devices they use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
---
MAINTAINERS | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
F: include/hw/timer/cmsdk-apb-timer.h
F: hw/char/cmsdk-apb-uart.c
F: include/hw/char/cmsdk-apb-uart.h
+F: hw/misc/tz-ppc.c
+F: include/hw/misc/tz-ppc.h

ARM cores
M: Peter Maydell <peter.maydell@linaro.org>
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/mps2.c
-F: hw/misc/mps2-scc.c
-F: include/hw/misc/mps2-scc.h
+F: hw/arm/mps2-tz.c
+F: hw/misc/mps2-*.c
+F: include/hw/misc/mps2-*.h
+F: hw/arm/iotkit.c
+F: include/hw/arm/iotkit.h

Musicpal
M: Jan Kiszka <jan.kiszka@web.de>

From: Richard Henderson <richard.henderson@linaro.org>

These instructions shift left or right depending on the sign
of the input, and 7 bits are significant to the shift. This
requires several masks and selects in addition to the actual
shifts to form the complete answer.

That said, the operation is still a small improvement even for
two 64-bit elements -- 13 vector operations instead of 2 * 7
integer operations.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200216214232.4230-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 11 +-
target/arm/translate.h | 6 +
target/arm/neon_helper.c | 33 ----
target/arm/translate-a64.c | 18 +--
target/arm/translate.c | 299 +++++++++++++++++++++++++++++++++++--
target/arm/vec_helper.c | 88 +++++++++++
6 files changed, 389 insertions(+), 66 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_abd_s16, i32, i32, i32)
DEF_HELPER_2(neon_abd_u32, i32, i32, i32)
DEF_HELPER_2(neon_abd_s32, i32, i32, i32)

-DEF_HELPER_2(neon_shl_u8, i32, i32, i32)
-DEF_HELPER_2(neon_shl_s8, i32, i32, i32)
DEF_HELPER_2(neon_shl_u16, i32, i32, i32)
DEF_HELPER_2(neon_shl_s16, i32, i32, i32)
-DEF_HELPER_2(neon_shl_u32, i32, i32, i32)
-DEF_HELPER_2(neon_shl_s32, i32, i32, i32)
-DEF_HELPER_2(neon_shl_u64, i64, i64, i64)
-DEF_HELPER_2(neon_shl_s64, i64, i64, i64)
DEF_HELPER_2(neon_rshl_u8, i32, i32, i32)
DEF_HELPER_2(neon_rshl_s8, i32, i32, i32)
DEF_HELPER_2(neon_rshl_u16, i32, i32, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_2(frint64_s, TCG_CALL_NO_RWG, f32, f32, ptr)
DEF_HELPER_FLAGS_2(frint32_d, TCG_CALL_NO_RWG, f64, f64, ptr)
DEF_HELPER_FLAGS_2(frint64_d, TCG_CALL_NO_RWG, f64, f64, ptr)

+DEF_HELPER_FLAGS_4(gvec_sshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_sshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_ushl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_ushl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "helper-a64.h"
#include "helper-sve.h"
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ uint64_t vfp_expand_imm(int size, uint8_t imm8);
extern const GVecGen3 mla_op[4];
extern const GVecGen3 mls_op[4];
extern const GVecGen3 cmtst_op[4];
+extern const GVecGen3 sshl_op[4];
+extern const GVecGen3 ushl_op[4];
extern const GVecGen2i ssra_op[4];
extern const GVecGen2i usra_op[4];
extern const GVecGen2i sri_op[4];
@@ -XXX,XX +XXX,XX @@ extern const GVecGen4 sqadd_op[4];
extern const GVecGen4 uqsub_op[4];
extern const GVecGen4 sqsub_op[4];
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
+void gen_ushl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
+void gen_sshl_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b);
+void gen_ushl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
+void gen_sshl_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);

/*
* Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon_helper.c
+++ b/target/arm/neon_helper.c
@@ -XXX,XX +XXX,XX @@ NEON_VOP(abd_u32, neon_u32, 1)
} else { \
dest = src1 << tmp; \
}} while (0)
-NEON_VOP(shl_u8, neon_u8, 4)
NEON_VOP(shl_u16, neon_u16, 2)
-NEON_VOP(shl_u32, neon_u32, 1)
#undef NEON_FN

-uint64_t HELPER(neon_shl_u64)(uint64_t val, uint64_t shiftop)
-{
- int8_t shift = (int8_t)shiftop;
- if (shift >= 64 || shift <= -64) {
- val = 0;
- } else if (shift < 0) {
- val >>= -shift;
- } else {
- val <<= shift;
- }
- return val;
-}
-
#define NEON_FN(dest, src1, src2) do { \
int8_t tmp; \
tmp = (int8_t)src2; \
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_shl_u64)(uint64_t val, uint64_t shiftop)
} else { \
dest = src1 << tmp; \
}} while (0)
-NEON_VOP(shl_s8, neon_s8, 4)
NEON_VOP(shl_s16, neon_s16, 2)
-NEON_VOP(shl_s32, neon_s32, 1)
#undef NEON_FN

-uint64_t HELPER(neon_shl_s64)(uint64_t valop, uint64_t shiftop)
-{
- int8_t shift = (int8_t)shiftop;
- int64_t val = valop;
- if (shift >= 64) {
- val = 0;
- } else if (shift <= -64) {
- val >>= 63;
- } else if (shift < 0) {
- val >>= -shift;
- } else {
- val <<= shift;
- }
- return val;
-}
-
#define NEON_FN(dest, src1, src2) do { \
int8_t tmp; \
tmp = (int8_t)src2; \
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_3same_64(DisasContext *s, int opcode, bool u,
break;
case 0x8: /* SSHL, USHL */
if (u) {
- gen_helper_neon_shl_u64(tcg_rd, tcg_rn, tcg_rm);
+ gen_ushl_i64(tcg_rd, tcg_rn, tcg_rm);
} else {
- gen_helper_neon_shl_s64(tcg_rd, tcg_rn, tcg_rm);
+ gen_sshl_i64(tcg_rd, tcg_rn, tcg_rm);
}
break;
case 0x9: /* SQSHL, UQSHL */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
is_q ? 16 : 8, vec_full_reg_size(s),
(u ? uqsub_op : sqsub_op) + size);
return;
+ case 0x08: /* SSHL, USHL */
+ gen_gvec_op3(s, is_q, rd, rn, rm,
+ u ? &ushl_op[size] : &sshl_op[size]);
+ return;
case 0x0c: /* SMAX, UMAX */
if (u) {
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
genfn = fns[size][u];
break;
}
- case 0x8: /* SSHL, USHL */
- {
- static NeonGenTwoOpFn * const fns[3][2] = {
- { gen_helper_neon_shl_s8, gen_helper_neon_shl_u8 },
- { gen_helper_neon_shl_s16, gen_helper_neon_shl_u16 },
- { gen_helper_neon_shl_s32, gen_helper_neon_shl_u32 },
- };
- genfn = fns[size][u];
- break;
- }
case 0x9: /* SQSHL, UQSHL */
{
static NeonGenTwoOpEnvFn * const fns[3][2] = {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_shift_narrow(int size, TCGv_i32 var, TCGv_i32 shift,
if (u) {
switch (size) {
case 1: gen_helper_neon_shl_u16(var, var, shift); break;
- case 2: gen_helper_neon_shl_u32(var, var, shift); break;
+ case 2: gen_ushl_i32(var, var, shift); break;
default: abort();
}
} else {
switch (size) {
case 1: gen_helper_neon_shl_s16(var, var, shift); break;
- case 2: gen_helper_neon_shl_s32(var, var, shift); break;
+ case 2: gen_sshl_i32(var, var, shift); break;
default: abort();
}
}
@@ -XXX,XX +XXX,XX @@ const GVecGen3 cmtst_op[4] = {
.vece = MO_64 },
};

+void gen_ushl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
+{
+ TCGv_i32 lval = tcg_temp_new_i32();
+ TCGv_i32 rval = tcg_temp_new_i32();
+ TCGv_i32 lsh = tcg_temp_new_i32();
+ TCGv_i32 rsh = tcg_temp_new_i32();
+ TCGv_i32 zero = tcg_const_i32(0);
+ TCGv_i32 max = tcg_const_i32(32);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i32(lsh, shift);
+ tcg_gen_neg_i32(rsh, lsh);
+ tcg_gen_shl_i32(lval, src, lsh);
+ tcg_gen_shr_i32(rval, src, rsh);
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, lsh, max, lval, zero);
+ tcg_gen_movcond_i32(TCG_COND_LTU, dst, rsh, max, rval, dst);
+
+ tcg_temp_free_i32(lval);
+ tcg_temp_free_i32(rval);
+ tcg_temp_free_i32(lsh);
+ tcg_temp_free_i32(rsh);
+ tcg_temp_free_i32(zero);
+ tcg_temp_free_i32(max);
+}
+
+void gen_ushl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
+{
+ TCGv_i64 lval = tcg_temp_new_i64();
+ TCGv_i64 rval = tcg_temp_new_i64();
+ TCGv_i64 lsh = tcg_temp_new_i64();
+ TCGv_i64 rsh = tcg_temp_new_i64();
+ TCGv_i64 zero = tcg_const_i64(0);
+ TCGv_i64 max = tcg_const_i64(64);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i64(lsh, shift);
+ tcg_gen_neg_i64(rsh, lsh);
+ tcg_gen_shl_i64(lval, src, lsh);
+ tcg_gen_shr_i64(rval, src, rsh);
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, lsh, max, lval, zero);
+ tcg_gen_movcond_i64(TCG_COND_LTU, dst, rsh, max, rval, dst);
+
+ tcg_temp_free_i64(lval);
+ tcg_temp_free_i64(rval);
+ tcg_temp_free_i64(lsh);
+ tcg_temp_free_i64(rsh);
+ tcg_temp_free_i64(zero);
+ tcg_temp_free_i64(max);
+}
+
+static void gen_ushl_vec(unsigned vece, TCGv_vec dst,
+ TCGv_vec src, TCGv_vec shift)
+{
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
+ TCGv_vec msk, max;
+
+ tcg_gen_neg_vec(vece, rsh, shift);
+ if (vece == MO_8) {
+ tcg_gen_mov_vec(lsh, shift);
+ } else {
+ msk = tcg_temp_new_vec_matching(dst);
+ tcg_gen_dupi_vec(vece, msk, 0xff);
+ tcg_gen_and_vec(vece, lsh, shift, msk);
+ tcg_gen_and_vec(vece, rsh, rsh, msk);
+ tcg_temp_free_vec(msk);
+ }
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
+ tcg_gen_shrv_vec(vece, rval, src, rsh);
+
+ max = tcg_temp_new_vec_matching(dst);
+ tcg_gen_dupi_vec(vece, max, 8 << vece);
+
+ /*
+ * The choice of LT (signed) and GEU (unsigned) are biased toward
+ * the instructions of the x86_64 host. For MO_8, the whole byte
+ * is significant so we must use an unsigned compare; otherwise we
+ * have already masked to a byte and so a signed compare works.
+ * Other tcg hosts have a full set of comparisons and do not care.
+ */
+ if (vece == MO_8) {
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, lsh, lsh, max);
+ tcg_gen_cmp_vec(TCG_COND_GEU, vece, rsh, rsh, max);
+ tcg_gen_andc_vec(vece, lval, lval, lsh);
+ tcg_gen_andc_vec(vece, rval, rval, rsh);
+ } else {
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, lsh, lsh, max);
+ tcg_gen_cmp_vec(TCG_COND_LT, vece, rsh, rsh, max);
+ tcg_gen_and_vec(vece, lval, lval, lsh);
+ tcg_gen_and_vec(vece, rval, rval, rsh);
+ }
+ tcg_gen_or_vec(vece, dst, lval, rval);
+
+ tcg_temp_free_vec(max);
+ tcg_temp_free_vec(lval);
+ tcg_temp_free_vec(rval);
+ tcg_temp_free_vec(lsh);
+ tcg_temp_free_vec(rsh);
+}
+
+static const TCGOpcode ushl_list[] = {
+ INDEX_op_neg_vec, INDEX_op_shlv_vec,
+ INDEX_op_shrv_vec, INDEX_op_cmp_vec, 0
+};
+
+const GVecGen3 ushl_op[4] = {
+ { .fniv = gen_ushl_vec,
+ .fno = gen_helper_gvec_ushl_b,
+ .opt_opc = ushl_list,
+ .vece = MO_8 },
+ { .fniv = gen_ushl_vec,
+ .fno = gen_helper_gvec_ushl_h,
+ .opt_opc = ushl_list,
+ .vece = MO_16 },
+ { .fni4 = gen_ushl_i32,
+ .fniv = gen_ushl_vec,
+ .opt_opc = ushl_list,
+ .vece = MO_32 },
+ { .fni8 = gen_ushl_i64,
+ .fniv = gen_ushl_vec,
+ .opt_opc = ushl_list,
+ .vece = MO_64 },
+};
+
+void gen_sshl_i32(TCGv_i32 dst, TCGv_i32 src, TCGv_i32 shift)
+{
+ TCGv_i32 lval = tcg_temp_new_i32();
+ TCGv_i32 rval = tcg_temp_new_i32();
+ TCGv_i32 lsh = tcg_temp_new_i32();
+ TCGv_i32 rsh = tcg_temp_new_i32();
+ TCGv_i32 zero = tcg_const_i32(0);
+ TCGv_i32 max = tcg_const_i32(31);
+
+ /*
+ * Rely on the TCG guarantee that out of range shifts produce
+ * unspecified results, not undefined behaviour (i.e. no trap).
+ * Discard out-of-range results after the fact.
+ */
+ tcg_gen_ext8s_i32(lsh, shift);
+ tcg_gen_neg_i32(rsh, lsh);
+ tcg_gen_shl_i32(lval, src, lsh);
+ tcg_gen_umin_i32(rsh, rsh, max);
+ tcg_gen_sar_i32(rval, src, rsh);
+ tcg_gen_movcond_i32(TCG_COND_LEU, lval, lsh, max, lval, zero);
+ tcg_gen_movcond_i32(TCG_COND_LT, dst, lsh, zero, rval, lval);
+
+ tcg_temp_free_i32(lval);
+ tcg_temp_free_i32(rval);
+ tcg_temp_free_i32(lsh);
+ tcg_temp_free_i32(rsh);
+ tcg_temp_free_i32(zero);
+ tcg_temp_free_i32(max);
+}
+
+void gen_sshl_i64(TCGv_i64 dst, TCGv_i64 src, TCGv_i64 shift)
376
+{
377
+ TCGv_i64 lval = tcg_temp_new_i64();
378
+ TCGv_i64 rval = tcg_temp_new_i64();
379
+ TCGv_i64 lsh = tcg_temp_new_i64();
380
+ TCGv_i64 rsh = tcg_temp_new_i64();
381
+ TCGv_i64 zero = tcg_const_i64(0);
382
+ TCGv_i64 max = tcg_const_i64(63);
383
+
384
+ /*
385
+ * Rely on the TCG guarantee that out of range shifts produce
386
+ * unspecified results, not undefined behaviour (i.e. no trap).
387
+ * Discard out-of-range results after the fact.
388
+ */
389
+ tcg_gen_ext8s_i64(lsh, shift);
390
+ tcg_gen_neg_i64(rsh, lsh);
391
+ tcg_gen_shl_i64(lval, src, lsh);
392
+ tcg_gen_umin_i64(rsh, rsh, max);
393
+ tcg_gen_sar_i64(rval, src, rsh);
394
+ tcg_gen_movcond_i64(TCG_COND_LEU, lval, lsh, max, lval, zero);
395
+ tcg_gen_movcond_i64(TCG_COND_LT, dst, lsh, zero, rval, lval);
396
+
397
+ tcg_temp_free_i64(lval);
398
+ tcg_temp_free_i64(rval);
399
+ tcg_temp_free_i64(lsh);
400
+ tcg_temp_free_i64(rsh);
401
+ tcg_temp_free_i64(zero);
402
+ tcg_temp_free_i64(max);
403
+}
404
+
405
+static void gen_sshl_vec(unsigned vece, TCGv_vec dst,
406
+ TCGv_vec src, TCGv_vec shift)
407
+{
408
+ TCGv_vec lval = tcg_temp_new_vec_matching(dst);
409
+ TCGv_vec rval = tcg_temp_new_vec_matching(dst);
410
+ TCGv_vec lsh = tcg_temp_new_vec_matching(dst);
411
+ TCGv_vec rsh = tcg_temp_new_vec_matching(dst);
412
+ TCGv_vec tmp = tcg_temp_new_vec_matching(dst);
413
+
414
+ /*
415
+ * Rely on the TCG guarantee that out of range shifts produce
416
+ * unspecified results, not undefined behaviour (i.e. no trap).
417
+ * Discard out-of-range results after the fact.
418
+ */
419
+ tcg_gen_neg_vec(vece, rsh, shift);
420
+ if (vece == MO_8) {
421
+ tcg_gen_mov_vec(lsh, shift);
422
+ } else {
423
+ tcg_gen_dupi_vec(vece, tmp, 0xff);
424
+ tcg_gen_and_vec(vece, lsh, shift, tmp);
425
+ tcg_gen_and_vec(vece, rsh, rsh, tmp);
426
+ }
427
+
428
+ /* Bound rsh so that an out-of-range right shift yields -1. */
429
+ tcg_gen_dupi_vec(vece, tmp, (8 << vece) - 1);
430
+ tcg_gen_umin_vec(vece, rsh, rsh, tmp);
431
+ tcg_gen_cmp_vec(TCG_COND_GT, vece, tmp, lsh, tmp);
432
+
433
+ tcg_gen_shlv_vec(vece, lval, src, lsh);
434
+ tcg_gen_sarv_vec(vece, rval, src, rsh);
435
+
436
+ /* Select in-bound left shift. */
437
+ tcg_gen_andc_vec(vece, lval, lval, tmp);
438
+
439
+ /* Select between left and right shift. */
440
+ if (vece == MO_8) {
441
+ tcg_gen_dupi_vec(vece, tmp, 0);
442
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, rval, lval);
443
+ } else {
444
+ tcg_gen_dupi_vec(vece, tmp, 0x80);
445
+ tcg_gen_cmpsel_vec(TCG_COND_LT, vece, dst, lsh, tmp, lval, rval);
446
+ }
447
+
448
+ tcg_temp_free_vec(lval);
449
+ tcg_temp_free_vec(rval);
450
+ tcg_temp_free_vec(lsh);
451
+ tcg_temp_free_vec(rsh);
452
+ tcg_temp_free_vec(tmp);
453
+}
454
+
455
+static const TCGOpcode sshl_list[] = {
456
+ INDEX_op_neg_vec, INDEX_op_umin_vec, INDEX_op_shlv_vec,
457
+ INDEX_op_sarv_vec, INDEX_op_cmp_vec, INDEX_op_cmpsel_vec, 0
458
+};
459
+
460
+const GVecGen3 sshl_op[4] = {
461
+ { .fniv = gen_sshl_vec,
462
+ .fno = gen_helper_gvec_sshl_b,
463
+ .opt_opc = sshl_list,
464
+ .vece = MO_8 },
465
+ { .fniv = gen_sshl_vec,
466
+ .fno = gen_helper_gvec_sshl_h,
467
+ .opt_opc = sshl_list,
468
+ .vece = MO_16 },
469
+ { .fni4 = gen_sshl_i32,
470
+ .fniv = gen_sshl_vec,
471
+ .opt_opc = sshl_list,
472
+ .vece = MO_32 },
473
+ { .fni8 = gen_sshl_i64,
474
+ .fniv = gen_sshl_vec,
475
+ .opt_opc = sshl_list,
476
+ .vece = MO_64 },
477
+};
478
+
479
static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
480
TCGv_vec a, TCGv_vec b)
481
{
482
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
483
vec_size, vec_size);
484
}
485
return 0;
486
+
487
+ case NEON_3R_VSHL:
488
+ /* Note the operation is vshl vd,vm,vn */
489
+ tcg_gen_gvec_3(rd_ofs, rm_ofs, rn_ofs, vec_size, vec_size,
490
+ u ? &ushl_op[size] : &sshl_op[size]);
491
+ return 0;
492
}
493
494
if (size == 3) {
495
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
496
neon_load_reg64(cpu_V0, rn + pass);
497
neon_load_reg64(cpu_V1, rm + pass);
498
switch (op) {
499
- case NEON_3R_VSHL:
500
- if (u) {
501
- gen_helper_neon_shl_u64(cpu_V0, cpu_V1, cpu_V0);
502
- } else {
503
- gen_helper_neon_shl_s64(cpu_V0, cpu_V1, cpu_V0);
504
- }
505
- break;
506
case NEON_3R_VQSHL:
507
if (u) {
508
gen_helper_neon_qshl_u64(cpu_V0, cpu_env,
509
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
510
}
511
pairwise = 0;
512
switch (op) {
513
- case NEON_3R_VSHL:
514
case NEON_3R_VQSHL:
515
case NEON_3R_VRSHL:
516
case NEON_3R_VQRSHL:
517
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
518
case NEON_3R_VHSUB:
519
GEN_NEON_INTEGER_OP(hsub);
520
break;
521
- case NEON_3R_VSHL:
522
- GEN_NEON_INTEGER_OP(shl);
523
- break;
524
case NEON_3R_VQSHL:
525
GEN_NEON_INTEGER_OP_ENV(qshl);
526
break;
527
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
528
}
529
} else {
530
if (input_unsigned) {
531
- gen_helper_neon_shl_u64(cpu_V0, in, tmp64);
532
+ gen_ushl_i64(cpu_V0, in, tmp64);
533
} else {
534
- gen_helper_neon_shl_s64(cpu_V0, in, tmp64);
535
+ gen_sshl_i64(cpu_V0, in, tmp64);
536
}
537
}
538
tmp = tcg_temp_new_i32();
539
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
540
index XXXXXXX..XXXXXXX 100644
541
--- a/target/arm/vec_helper.c
542
+++ b/target/arm/vec_helper.c
543
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fmlal_idx_a64)(void *vd, void *vn, void *vm,
544
do_fmlal_idx(vd, vn, vm, &env->vfp.fp_status, desc,
545
get_flush_inputs_to_zero(&env->vfp.fp_status_f16));
546
}
547
+
548
+void HELPER(gvec_sshl_b)(void *vd, void *vn, void *vm, uint32_t desc)
549
+{
550
+ intptr_t i, opr_sz = simd_oprsz(desc);
551
+ int8_t *d = vd, *n = vn, *m = vm;
552
+
553
+ for (i = 0; i < opr_sz; ++i) {
554
+ int8_t mm = m[i];
555
+ int8_t nn = n[i];
556
+ int8_t res = 0;
557
+ if (mm >= 0) {
558
+ if (mm < 8) {
559
+ res = nn << mm;
560
+ }
561
+ } else {
562
+ res = nn >> (mm > -8 ? -mm : 7);
563
+ }
564
+ d[i] = res;
565
+ }
566
+ clear_tail(d, opr_sz, simd_maxsz(desc));
567
+}
568
+
569
+void HELPER(gvec_sshl_h)(void *vd, void *vn, void *vm, uint32_t desc)
570
+{
571
+ intptr_t i, opr_sz = simd_oprsz(desc);
572
+ int16_t *d = vd, *n = vn, *m = vm;
573
+
574
+ for (i = 0; i < opr_sz / 2; ++i) {
575
+ int8_t mm = m[i]; /* only 8 bits of shift are significant */
576
+ int16_t nn = n[i];
577
+ int16_t res = 0;
578
+ if (mm >= 0) {
579
+ if (mm < 16) {
580
+ res = nn << mm;
581
+ }
582
+ } else {
583
+ res = nn >> (mm > -16 ? -mm : 15);
584
+ }
585
+ d[i] = res;
586
+ }
587
+ clear_tail(d, opr_sz, simd_maxsz(desc));
588
+}
589
+
590
+void HELPER(gvec_ushl_b)(void *vd, void *vn, void *vm, uint32_t desc)
591
+{
592
+ intptr_t i, opr_sz = simd_oprsz(desc);
593
+ uint8_t *d = vd, *n = vn, *m = vm;
594
+
595
+ for (i = 0; i < opr_sz; ++i) {
596
+ int8_t mm = m[i];
597
+ uint8_t nn = n[i];
598
+ uint8_t res = 0;
599
+ if (mm >= 0) {
600
+ if (mm < 8) {
601
+ res = nn << mm;
602
+ }
603
+ } else {
604
+ if (mm > -8) {
605
+ res = nn >> -mm;
606
+ }
607
+ }
608
+ d[i] = res;
609
+ }
610
+ clear_tail(d, opr_sz, simd_maxsz(desc));
611
+}
612
+
613
+void HELPER(gvec_ushl_h)(void *vd, void *vn, void *vm, uint32_t desc)
614
+{
615
+ intptr_t i, opr_sz = simd_oprsz(desc);
616
+ uint16_t *d = vd, *n = vn, *m = vm;
617
+
618
+ for (i = 0; i < opr_sz / 2; ++i) {
619
+ int8_t mm = m[i]; /* only 8 bits of shift are significant */
620
+ uint16_t nn = n[i];
621
+ uint16_t res = 0;
622
+ if (mm >= 0) {
623
+ if (mm < 16) {
624
+ res = nn << mm;
625
+ }
626
+ } else {
627
+ if (mm > -16) {
628
+ res = nn >> -mm;
629
+ }
630
+ }
631
+ d[i] = res;
632
+ }
633
+ clear_tail(d, opr_sz, simd_maxsz(desc));
634
+}
37
--
635
--
38
2.17.1
636
2.20.1
39
637
40
638
diff view generated by jsdifflib
1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
acpi_data_push uses g_array_set_size to resize the memory. If there
3
The gvec form will be needed for implementing SVE2.
4
is not enough contiguous memory, the address will be changed. So the previous
5
pointer can no longer be used. The code must update the pointer and use
6
the new one.
7
4
8
Also, the previous code wrongly used the le32 conversion of iort->node_offset
5
Extend the implementation to operate on uint64_t instead of uint32_t.
9
for subsequent computations, which will produce an incorrect value if the host is
6
Use a counted inner loop instead of terminating when op1 goes to zero,
10
not little endian. So use the non-converted one instead.
7
looking toward the required implementation for ARMv8.4-DIT.
11
8
12
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
9
Tested-by: Alex Bennée <alex.bennee@linaro.org>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20200216214232.4230-3-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
14
---
17
hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
15
target/arm/helper.h | 3 ++-
18
1 file changed, 15 insertions(+), 5 deletions(-)
16
target/arm/neon_helper.c | 22 ----------------------
17
target/arm/translate-a64.c | 10 +++-------
18
target/arm/translate.c | 11 ++++-------
19
target/arm/vec_helper.c | 30 ++++++++++++++++++++++++++++++
20
5 files changed, 39 insertions(+), 37 deletions(-)
19
21
20
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
22
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt-acpi-build.c
24
--- a/target/arm/helper.h
23
+++ b/hw/arm/virt-acpi-build.c
25
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
26
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_sub_u8, i32, i32, i32)
25
AcpiIortItsGroup *its;
27
DEF_HELPER_2(neon_sub_u16, i32, i32, i32)
26
AcpiIortTable *iort;
28
DEF_HELPER_2(neon_mul_u8, i32, i32, i32)
27
AcpiIortSmmu3 *smmu;
29
DEF_HELPER_2(neon_mul_u16, i32, i32, i32)
28
- size_t node_size, iort_length, smmu_offset = 0;
30
-DEF_HELPER_2(neon_mul_p8, i32, i32, i32)
29
+ size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
31
DEF_HELPER_2(neon_mull_p8, i64, i32, i32)
30
AcpiIortRC *rc;
32
31
33
DEF_HELPER_2(neon_tst_u8, i32, i32, i32)
32
iort = acpi_data_push(table_data, sizeof(*iort));
34
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_sshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
35
DEF_HELPER_FLAGS_4(gvec_ushl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
34
36
DEF_HELPER_FLAGS_4(gvec_ushl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
35
iort_length = sizeof(*iort);
37
36
iort->node_count = cpu_to_le32(nb_nodes);
38
+DEF_HELPER_FLAGS_4(gvec_pmul_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
- iort->node_offset = cpu_to_le32(sizeof(*iort));
39
+
38
+ /*
40
#ifdef TARGET_AARCH64
39
+ * Use a copy in case table_data->data moves during acpi_data_push
41
#include "helper-a64.h"
40
+ * operations.
42
#include "helper-sve.h"
41
+ */
43
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
42
+ iort_node_offset = sizeof(*iort);
44
index XXXXXXX..XXXXXXX 100644
43
+ iort->node_offset = cpu_to_le32(iort_node_offset);
45
--- a/target/arm/neon_helper.c
44
46
+++ b/target/arm/neon_helper.c
45
/* ITS group node */
47
@@ -XXX,XX +XXX,XX @@ NEON_VOP(mul_u16, neon_u16, 2)
46
node_size = sizeof(*its) + sizeof(uint32_t);
48
47
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
49
/* Polynomial multiplication is like integer multiplication except the
48
int irq = vms->irqmap[VIRT_SMMU];
50
partial products are XORed, not added. */
49
51
-uint32_t HELPER(neon_mul_p8)(uint32_t op1, uint32_t op2)
50
/* SMMUv3 node */
52
-{
51
- smmu_offset = iort->node_offset + node_size;
53
- uint32_t mask;
52
+ smmu_offset = iort_node_offset + node_size;
54
- uint32_t result;
53
node_size = sizeof(*smmu) + sizeof(*idmap);
55
- result = 0;
54
iort_length += node_size;
56
- while (op1) {
55
smmu = acpi_data_push(table_data, node_size);
57
- mask = 0;
56
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
58
- if (op1 & 1)
57
idmap->id_count = cpu_to_le32(0xFFFF);
59
- mask |= 0xff;
58
idmap->output_base = 0;
60
- if (op1 & (1 << 8))
59
/* output IORT node is the ITS group node (the first node) */
61
- mask |= (0xff << 8);
60
- idmap->output_reference = cpu_to_le32(iort->node_offset);
62
- if (op1 & (1 << 16))
61
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
63
- mask |= (0xff << 16);
64
- if (op1 & (1 << 24))
65
- mask |= (0xff << 24);
66
- result ^= op2 & mask;
67
- op1 = (op1 >> 1) & 0x7f7f7f7f;
68
- op2 = (op2 << 1) & 0xfefefefe;
69
- }
70
- return result;
71
-}
72
-
73
uint64_t HELPER(neon_mull_p8)(uint32_t op1, uint32_t op2)
74
{
75
uint64_t result = 0;
76
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/translate-a64.c
79
+++ b/target/arm/translate-a64.c
80
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
81
case 0x13: /* MUL, PMUL */
82
if (!u) { /* MUL */
83
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_mul, size);
84
- return;
85
+ } else { /* PMUL */
86
+ gen_gvec_op3_ool(s, is_q, rd, rn, rm, 0, gen_helper_gvec_pmul_b);
87
}
88
- break;
89
+ return;
90
case 0x12: /* MLA, MLS */
91
if (u) {
92
gen_gvec_op3(s, is_q, rd, rn, rm, &mls_op[size]);
93
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
94
genfn = fns[size][u];
95
break;
96
}
97
- case 0x13: /* MUL, PMUL */
98
- assert(u); /* PMUL */
99
- assert(size == 0);
100
- genfn = gen_helper_neon_mul_p8;
101
- break;
102
case 0x16: /* SQDMULH, SQRDMULH */
103
{
104
static NeonGenTwoOpEnvFn * const fns[2][2] = {
105
diff --git a/target/arm/translate.c b/target/arm/translate.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/translate.c
108
+++ b/target/arm/translate.c
109
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
110
111
case NEON_3R_VMUL: /* VMUL */
112
if (u) {
113
- /* Polynomial case allows only P8 and is handled below. */
114
+ /* Polynomial case allows only P8. */
115
if (size != 0) {
116
return 1;
117
}
118
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
119
+ 0, gen_helper_gvec_pmul_b);
120
} else {
121
tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
122
vec_size, vec_size);
123
- return 0;
124
}
125
- break;
126
+ return 0;
127
128
case NEON_3R_VML: /* VMLA, VMLS */
129
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
130
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
131
tmp2 = neon_load_reg(rd, pass);
132
gen_neon_add(size, tmp, tmp2);
133
break;
134
- case NEON_3R_VMUL:
135
- /* VMUL.P8; other cases already eliminated. */
136
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
137
- break;
138
case NEON_3R_VPMAX:
139
GEN_NEON_INTEGER_OP(pmax);
140
break;
141
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
142
index XXXXXXX..XXXXXXX 100644
143
--- a/target/arm/vec_helper.c
144
+++ b/target/arm/vec_helper.c
145
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_ushl_h)(void *vd, void *vn, void *vm, uint32_t desc)
62
}
146
}
63
147
clear_tail(d, opr_sz, simd_maxsz(desc));
64
/* Root Complex Node */
148
}
65
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
149
+
66
idmap->output_reference = cpu_to_le32(smmu_offset);
150
+/*
67
} else {
151
+ * 8x8->8 polynomial multiply.
68
/* output IORT node is the ITS group node (the first node) */
152
+ *
69
- idmap->output_reference = cpu_to_le32(iort->node_offset);
153
+ * Polynomial multiplication is like integer multiplication except the
70
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
154
+ * partial products are XORed, not added.
71
}
155
+ *
72
156
+ * TODO: expose this as a generic vector operation, as it is a common
73
+ /*
157
+ * crypto building block.
74
+ * Update the pointer address in case table_data->data moves during above
158
+ */
75
+ * acpi_data_push operations.
159
+void HELPER(gvec_pmul_b)(void *vd, void *vn, void *vm, uint32_t desc)
76
+ */
160
+{
77
+ iort = (AcpiIortTable *)(table_data->data + iort_start);
161
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
78
iort->length = cpu_to_le32(iort_length);
162
+ uint64_t *d = vd, *n = vn, *m = vm;
79
163
+
80
build_header(linker, table_data, (void *)(table_data->data + iort_start),
164
+ for (i = 0; i < opr_sz / 8; ++i) {
165
+ uint64_t nn = n[i];
166
+ uint64_t mm = m[i];
167
+ uint64_t rr = 0;
168
+
169
+ for (j = 0; j < 8; ++j) {
170
+ uint64_t mask = (nn & 0x0101010101010101ull) * 0xff;
171
+ rr ^= mm & mask;
172
+ mm = (mm << 1) & 0xfefefefefefefefeull;
173
+ nn >>= 1;
174
+ }
175
+ d[i] = rr;
176
+ }
177
+ clear_tail(d, opr_sz, simd_maxsz(desc));
178
+}
81
--
179
--
82
2.17.1
180
2.20.1
83
181
84
182
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to address_space_map().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
The gvec form will be needed for implementing SVE2.
4
5
Tested-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200216214232.4230-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
10
---
10
---
11
include/exec/memory.h | 3 ++-
11
target/arm/helper.h | 4 +---
12
include/sysemu/dma.h | 3 ++-
12
target/arm/neon_helper.c | 30 ------------------------------
13
exec.c | 6 ++++--
13
target/arm/translate-a64.c | 28 +++-------------------------
14
target/ppc/mmu-hash64.c | 3 ++-
14
target/arm/translate.c | 16 ++--------------
15
4 files changed, 10 insertions(+), 5 deletions(-)
15
target/arm/vec_helper.c | 33 +++++++++++++++++++++++++++++++++
16
5 files changed, 39 insertions(+), 72 deletions(-)
16
17
17
diff --git a/include/exec/memory.h b/include/exec/memory.h
18
diff --git a/target/arm/helper.h b/target/arm/helper.h
18
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/memory.h
20
--- a/target/arm/helper.h
20
+++ b/include/exec/memory.h
21
+++ b/target/arm/helper.h
21
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(crc32, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
22
* @addr: address within that address space
23
DEF_HELPER_FLAGS_3(crc32c, TCG_CALL_NO_RWG_SE, i32, i32, i32, i32)
23
* @plen: pointer to length of buffer; updated on return
24
DEF_HELPER_2(dc_zva, void, env, i64)
24
* @is_write: indicates the transfer direction
25
25
+ * @attrs: memory attributes
26
-DEF_HELPER_FLAGS_2(neon_pmull_64_lo, TCG_CALL_NO_RWG_SE, i64, i64, i64)
26
*/
27
-DEF_HELPER_FLAGS_2(neon_pmull_64_hi, TCG_CALL_NO_RWG_SE, i64, i64, i64)
27
void *address_space_map(AddressSpace *as, hwaddr addr,
28
-
28
- hwaddr *plen, bool is_write);
29
DEF_HELPER_FLAGS_5(gvec_qrdmlah_s16, TCG_CALL_NO_RWG,
29
+ hwaddr *plen, bool is_write, MemTxAttrs attrs);
30
void, ptr, ptr, ptr, ptr, i32)
30
31
DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s16, TCG_CALL_NO_RWG,
31
/* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
32
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_ushl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
32
*
33
DEF_HELPER_FLAGS_4(gvec_ushl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
33
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
34
35
DEF_HELPER_FLAGS_4(gvec_pmul_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
36
+DEF_HELPER_FLAGS_4(gvec_pmull_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
37
38
#ifdef TARGET_AARCH64
39
#include "helper-a64.h"
40
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
34
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
35
--- a/include/sysemu/dma.h
42
--- a/target/arm/neon_helper.c
36
+++ b/include/sysemu/dma.h
43
+++ b/target/arm/neon_helper.c
37
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
44
@@ -XXX,XX +XXX,XX @@ void HELPER(neon_zip16)(void *vd, void *vm)
38
hwaddr xlen = *len;
45
rm[0] = m0;
39
void *p;
46
rd[0] = d0;
40
41
- p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
42
+ p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
43
+ MEMTXATTRS_UNSPECIFIED);
44
*len = xlen;
45
return p;
46
}
47
}
47
diff --git a/exec.c b/exec.c
48
-
49
-/* Helper function for 64 bit polynomial multiply case:
50
- * perform PolynomialMult(op1, op2) and return either the top or
51
- * bottom half of the 128 bit result.
52
- */
53
-uint64_t HELPER(neon_pmull_64_lo)(uint64_t op1, uint64_t op2)
54
-{
55
- int bitnum;
56
- uint64_t res = 0;
57
-
58
- for (bitnum = 0; bitnum < 64; bitnum++) {
59
- if (op1 & (1ULL << bitnum)) {
60
- res ^= op2 << bitnum;
61
- }
62
- }
63
- return res;
64
-}
65
-uint64_t HELPER(neon_pmull_64_hi)(uint64_t op1, uint64_t op2)
66
-{
67
- int bitnum;
68
- uint64_t res = 0;
69
-
70
- /* bit 0 of op1 can't influence the high 64 bits at all */
71
- for (bitnum = 1; bitnum < 64; bitnum++) {
72
- if (op1 & (1ULL << bitnum)) {
73
- res ^= op2 >> (64 - bitnum);
74
- }
75
- }
76
- return res;
77
-}
78
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
48
index XXXXXXX..XXXXXXX 100644
79
index XXXXXXX..XXXXXXX 100644
49
--- a/exec.c
80
--- a/target/arm/translate-a64.c
50
+++ b/exec.c
81
+++ b/target/arm/translate-a64.c
51
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
82
@@ -XXX,XX +XXX,XX @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
52
void *address_space_map(AddressSpace *as,
83
clear_vec_high(s, is_q, rd);
53
hwaddr addr,
54
hwaddr *plen,
55
- bool is_write)
56
+ bool is_write,
57
+ MemTxAttrs attrs)
58
{
59
hwaddr len = *plen;
60
hwaddr l, xlat;
61
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
62
hwaddr *plen,
63
int is_write)
64
{
65
- return address_space_map(&address_space_memory, addr, plen, is_write);
66
+ return address_space_map(&address_space_memory, addr, plen, is_write,
67
+ MEMTXATTRS_UNSPECIFIED);
68
}
84
}
69
85
70
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
86
-static void handle_pmull_64(DisasContext *s, int is_q, int rd, int rn, int rm)
71
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
87
-{
88
- /* PMULL of 64 x 64 -> 128 is an odd special case because it
89
- * is the only three-reg-diff instruction which produces a
90
- * 128-bit wide result from a single operation. However since
91
- * it's possible to calculate the two halves more or less
92
- * separately we just use two helper calls.
93
- */
94
- TCGv_i64 tcg_op1 = tcg_temp_new_i64();
95
- TCGv_i64 tcg_op2 = tcg_temp_new_i64();
96
- TCGv_i64 tcg_res = tcg_temp_new_i64();
97
-
98
- read_vec_element(s, tcg_op1, rn, is_q, MO_64);
99
- read_vec_element(s, tcg_op2, rm, is_q, MO_64);
100
- gen_helper_neon_pmull_64_lo(tcg_res, tcg_op1, tcg_op2);
101
- write_vec_element(s, tcg_res, rd, 0, MO_64);
102
- gen_helper_neon_pmull_64_hi(tcg_res, tcg_op1, tcg_op2);
103
- write_vec_element(s, tcg_res, rd, 1, MO_64);
104
-
105
- tcg_temp_free_i64(tcg_op1);
106
- tcg_temp_free_i64(tcg_op2);
107
- tcg_temp_free_i64(tcg_res);
108
-}
109
-
110
/* AdvSIMD three different
111
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
112
* +---+---+---+-----------+------+---+------+--------+-----+------+------+
113
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
114
if (!fp_access_check(s)) {
115
return;
116
}
117
- handle_pmull_64(s, is_q, rd, rn, rm);
118
+ /* The Q field specifies lo/hi half input for this insn. */
119
+ gen_gvec_op3_ool(s, true, rd, rn, rm, is_q,
120
+ gen_helper_gvec_pmull_q);
121
return;
122
}
123
goto is_widening;
124
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
125
index XXXXXXX..XXXXXXX 100644
73
--- a/target/ppc/mmu-hash64.c
126
--- a/target/arm/translate.c
74
+++ b/target/ppc/mmu-hash64.c
127
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
128
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
76
return NULL;
129
* outside the loop below as it only performs a single pass.
130
*/
131
if (op == 14 && size == 2) {
132
- TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
133
-
134
if (!dc_isar_feature(aa32_pmull, s)) {
135
return 1;
136
}
137
- tcg_rn = tcg_temp_new_i64();
138
- tcg_rm = tcg_temp_new_i64();
139
- tcg_rd = tcg_temp_new_i64();
140
- neon_load_reg64(tcg_rn, rn);
141
- neon_load_reg64(tcg_rm, rm);
142
- gen_helper_neon_pmull_64_lo(tcg_rd, tcg_rn, tcg_rm);
143
- neon_store_reg64(tcg_rd, rd);
144
- gen_helper_neon_pmull_64_hi(tcg_rd, tcg_rn, tcg_rm);
145
- neon_store_reg64(tcg_rd, rd + 1);
146
- tcg_temp_free_i64(tcg_rn);
147
- tcg_temp_free_i64(tcg_rm);
148
- tcg_temp_free_i64(tcg_rd);
149
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, 16, 16,
150
+ 0, gen_helper_gvec_pmull_q);
151
return 0;
152
}
153
154
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
155
index XXXXXXX..XXXXXXX 100644
156
--- a/target/arm/vec_helper.c
157
+++ b/target/arm/vec_helper.c
158
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_pmul_b)(void *vd, void *vn, void *vm, uint32_t desc)
77
}
159
}
78
160
clear_tail(d, opr_sz, simd_maxsz(desc));
79
- hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
161
}
80
+ hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
162
+
81
+ MEMTXATTRS_UNSPECIFIED);
163
+/*
82
if (plen < (n * HASH_PTE_SIZE_64)) {
164
+ * 64x64->128 polynomial multiply.
83
hw_error("%s: Unable to map all requested HPTEs\n", __func__);
165
+ * Because the lanes are not accessed in strict columns,
84
}
166
+ * this probably cannot be turned into a generic helper.
167
+ */
168
+void HELPER(gvec_pmull_q)(void *vd, void *vn, void *vm, uint32_t desc)
169
+{
170
+ intptr_t i, j, opr_sz = simd_oprsz(desc);
171
+ intptr_t hi = simd_data(desc);
172
+ uint64_t *d = vd, *n = vn, *m = vm;
173
+
174
+ for (i = 0; i < opr_sz / 8; i += 2) {
175
+ uint64_t nn = n[i + hi];
176
+ uint64_t mm = m[i + hi];
177
+ uint64_t rhi = 0;
178
+ uint64_t rlo = 0;
179
+
180
+ /* Bit 0 can only influence the low 64-bit result. */
181
+ if (nn & 1) {
182
+ rlo = mm;
183
+ }
184
+
185
+ for (j = 1; j < 64; ++j) {
186
+ uint64_t mask = -((nn >> j) & 1);
187
+ rlo ^= (mm << j) & mask;
188
+ rhi ^= (mm >> (64 - j)) & mask;
189
+ }
190
+ d[i] = rlo;
191
+ d[i + 1] = rhi;
192
+ }
193
+ clear_tail(d, opr_sz, simd_maxsz(desc));
194
+}
85
--
195
--
86
2.17.1
196
2.20.1
87
197
88
198
New patch
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
We still need two different helpers, since NEON and SVE2 get the
4
inputs from different locations within the source vector. However,
5
we can convert both to the same internal form for computation.
6
7
The sve2 helper is not used yet, but adding it with this patch
8
helps illustrate why the neon changes are helpful.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200216214232.4230-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-sve.h | 2 ++
target/arm/helper.h | 3 +-
target/arm/neon_helper.c | 32 --------------------
target/arm/translate-a64.c | 27 +++++++++++------
target/arm/translate.c | 26 ++++++++---------
target/arm/vec_helper.c | 60 ++++++++++++++++++++++++++++++++++++++
6 files changed, 95 insertions(+), 55 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(sve_stdd_le_zd, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)
DEF_HELPER_FLAGS_6(sve_stdd_be_zd, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_sub_u8, i32, i32, i32)
DEF_HELPER_2(neon_sub_u16, i32, i32, i32)
DEF_HELPER_2(neon_mul_u8, i32, i32, i32)
DEF_HELPER_2(neon_mul_u16, i32, i32, i32)
-DEF_HELPER_2(neon_mull_p8, i64, i32, i32)

DEF_HELPER_2(neon_tst_u8, i32, i32, i32)
DEF_HELPER_2(neon_tst_u16, i32, i32, i32)
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_4(gvec_ushl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_pmul_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
DEF_HELPER_FLAGS_4(gvec_pmull_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)

+DEF_HELPER_FLAGS_4(neon_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
#ifdef TARGET_AARCH64
#include "helper-a64.h"
#include "helper-sve.h"
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon_helper.c
+++ b/target/arm/neon_helper.c
@@ -XXX,XX +XXX,XX @@ NEON_VOP(mul_u8, neon_u8, 4)
NEON_VOP(mul_u16, neon_u16, 2)
#undef NEON_FN

-/* Polynomial multiplication is like integer multiplication except the
- partial products are XORed, not added. */
-uint64_t HELPER(neon_mull_p8)(uint32_t op1, uint32_t op2)
-{
- uint64_t result = 0;
- uint64_t mask;
- uint64_t op2ex = op2;
- op2ex = (op2ex & 0xff) |
- ((op2ex & 0xff00) << 8) |
- ((op2ex & 0xff0000) << 16) |
- ((op2ex & 0xff000000) << 24);
- while (op1) {
- mask = 0;
- if (op1 & 1) {
- mask |= 0xffff;
- }
- if (op1 & (1 << 8)) {
- mask |= (0xffffU << 16);
- }
- if (op1 & (1 << 16)) {
- mask |= (0xffffULL << 32);
- }
- if (op1 & (1 << 24)) {
- mask |= (0xffffULL << 48);
- }
- result ^= op2ex & mask;
- op1 = (op1 >> 1) & 0x7f7f7f7f;
- op2ex <<= 1;
- }
- return result;
-}
-
#define NEON_FN(dest, src1, src2) dest = (src1 & src2) ? -1 : 0
NEON_VOP(tst_u8, neon_u8, 4)
NEON_VOP(tst_u16, neon_u16, 2)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
gen_helper_neon_addl_saturate_s32(tcg_passres, cpu_env,
tcg_passres, tcg_passres);
break;
- case 14: /* PMULL */
- assert(size == 0);
- gen_helper_neon_mull_p8(tcg_passres, tcg_op1, tcg_op2);
- break;
default:
g_assert_not_reached();
}
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
handle_3rd_narrowing(s, is_q, is_u, size, opcode, rd, rn, rm);
break;
case 14: /* PMULL, PMULL2 */
- if (is_u || size == 1 || size == 2) {
+ if (is_u) {
unallocated_encoding(s);
return;
}
- if (size == 3) {
+ switch (size) {
+ case 0: /* PMULL.P8 */
+ if (!fp_access_check(s)) {
+ return;
+ }
+ /* The Q field specifies lo/hi half input for this insn. */
+ gen_gvec_op3_ool(s, true, rd, rn, rm, is_q,
+ gen_helper_neon_pmull_h);
+ break;
+
+ case 3: /* PMULL.P64 */
if (!dc_isar_feature(aa64_pmull, s)) {
unallocated_encoding(s);
return;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
/* The Q field specifies lo/hi half input for this insn. */
gen_gvec_op3_ool(s, true, rd, rn, rm, is_q,
gen_helper_gvec_pmull_q);
- return;
+ break;
+
+ default:
+ unallocated_encoding(s);
+ break;
}
- goto is_widening;
+ return;
case 9: /* SQDMLAL, SQDMLAL2 */
case 11: /* SQDMLSL, SQDMLSL2 */
case 13: /* SQDMULL, SQDMULL2 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
unallocated_encoding(s);
return;
}
- is_widening:
if (!fp_access_check(s)) {
return;
}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
return 1;
}

- /* Handle VMULL.P64 (Polynomial 64x64 to 128 bit multiply)
- * outside the loop below as it only performs a single pass.
- */
- if (op == 14 && size == 2) {
- if (!dc_isar_feature(aa32_pmull, s)) {
- return 1;
+ /* Handle polynomial VMULL in a single pass. */
+ if (op == 14) {
+ if (size == 0) {
+ /* VMULL.P8 */
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, 16, 16,
+ 0, gen_helper_neon_pmull_h);
+ } else {
+ /* VMULL.P64 */
+ if (!dc_isar_feature(aa32_pmull, s)) {
+ return 1;
+ }
+ tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, 16, 16,
+ 0, gen_helper_gvec_pmull_q);
}
- tcg_gen_gvec_3_ool(rd_ofs, rn_ofs, rm_ofs, 16, 16,
- 0, gen_helper_gvec_pmull_q);
return 0;
}

@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
/* VMLAL, VQDMLAL, VMLSL, VQDMLSL, VMULL, VQDMULL */
gen_neon_mull(cpu_V0, tmp, tmp2, size, u);
break;
- case 14: /* Polynomial VMULL */
- gen_helper_neon_mull_p8(cpu_V0, tmp, tmp2);
- tcg_temp_free_i32(tmp2);
- tcg_temp_free_i32(tmp);
- break;
default: /* 15 is RESERVED: caught earlier */
abort();
}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_pmull_q)(void *vd, void *vn, void *vm, uint32_t desc)
}
clear_tail(d, opr_sz, simd_maxsz(desc));
}
+
+/*
+ * 8x8->16 polynomial multiply.
+ *
+ * The byte inputs are expanded to (or extracted from) half-words.
+ * Note that neon and sve2 get the inputs from different positions.
+ * This allows 4 bytes to be processed in parallel with uint64_t.
+ */
+
+static uint64_t expand_byte_to_half(uint64_t x)
+{
+ return (x & 0x000000ff)
+ | ((x & 0x0000ff00) << 8)
+ | ((x & 0x00ff0000) << 16)
+ | ((x & 0xff000000) << 24);
+}
+
+static uint64_t pmull_h(uint64_t op1, uint64_t op2)
+{
+ uint64_t result = 0;
+ int i;
+
+ for (i = 0; i < 8; ++i) {
+ uint64_t mask = (op1 & 0x0001000100010001ull) * 0xffff;
+ result ^= op2 & mask;
+ op1 >>= 1;
+ op2 <<= 1;
+ }
+ return result;
+}
+
+void HELPER(neon_pmull_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ int hi = simd_data(desc);
+ uint64_t *d = vd, *n = vn, *m = vm;
+ uint64_t nn = n[hi], mm = m[hi];
+
+ d[0] = pmull_h(expand_byte_to_half(nn), expand_byte_to_half(mm));
+ nn >>= 32;
+ mm >>= 32;
+ d[1] = pmull_h(expand_byte_to_half(nn), expand_byte_to_half(mm));
+
+ clear_tail(d, 16, simd_maxsz(desc));
+}
+
+#ifdef TARGET_AARCH64
+void HELPER(sve2_pmull_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+ int shift = simd_data(desc) * 8;
+ intptr_t i, opr_sz = simd_oprsz(desc);
+ uint64_t *d = vd, *n = vn, *m = vm;
+
+ for (i = 0; i < opr_sz / 8; ++i) {
+ uint64_t nn = (n[i] >> shift) & 0x00ff00ff00ff00ffull;
+ uint64_t mm = (m[i] >> shift) & 0x00ff00ff00ff00ffull;
+
+ d[i] = pmull_h(nn, mm);
+ }
+}
+#endif
--
2.20.1
diff view generated by jsdifflib
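[Editorial aside, not part of the series: a standalone sketch showing why the `pmull_h` mask trick in the patch above works. The multiply by `0xffff` broadcasts bit 0 of each 16-bit lane into a full lane-wide mask, so four 8x8->16 carryless multiplies proceed in parallel inside one `uint64_t`; a per-byte reference model checks each lane. `pmull_byte` is a name invented here for the reference.]

```c
#include <stdint.h>

/* Reference model for one lane: polynomial (carryless) multiplication
 * is like integer multiplication except partial products are XORed. */
static uint16_t pmull_byte(uint8_t a, uint8_t b)
{
    uint16_t r = 0;
    for (int i = 0; i < 8; i++) {
        if (a & (1 << i)) {
            r ^= (uint16_t)b << i;
        }
    }
    return r;
}

/* Same loop body as the patch's pmull_h(): each of the four 16-bit
 * lanes holds one byte, so shifted partial products (at most 15 bits)
 * never spill into the neighbouring lane. */
static uint64_t pmull_h(uint64_t op1, uint64_t op2)
{
    uint64_t result = 0;
    for (int i = 0; i < 8; i++) {
        /* broadcast bit 0 of every lane into a 0x0000/0xffff mask */
        uint64_t mask = (op1 & 0x0001000100010001ull) * 0xffff;
        result ^= op2 & mask;
        op1 >>= 1;
        op2 <<= 1;
    }
    return result;
}

/* Widen bytes 0..3 of x into the four 16-bit lanes, as in the patch. */
static uint64_t expand_byte_to_half(uint64_t x)
{
    return (x & 0x000000ff)
         | ((x & 0x0000ff00) << 8)
         | ((x & 0x00ff0000) << 16)
         | ((x & 0xff000000) << 24);
}
```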
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Coverity found that the string returned by 'object_get_canonical_path' was not
being freed at two locations in the model (CID 1391294 and CID 1391293) and
also that a memset was being called with a value greater than the max of a byte
on the second argument (CID 1391286). This patch corrects this by adding the
freeing of the strings and also changing the memset to zero instead on
descriptor unaligned errors.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/dma/xlnx-zdma.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/xlnx-zdma.c
+++ b/hw/dma/xlnx-zdma.c
@@ -XXX,XX +XXX,XX @@ static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
qemu_log_mask(LOG_GUEST_ERROR,
"zdma: unaligned descriptor at %" PRIx64,
addr);
- memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
+ memset(buf, 0x0, sizeof(XlnxZDMADescr));
s->error = true;
return false;
}
@@ -XXX,XX +XXX,XX @@ static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
RegisterInfo *r = &s->regs_info[addr / 4];

if (!r->data) {
+ gchar *path = object_get_canonical_path(OBJECT(s));
qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
- object_get_canonical_path(OBJECT(s)),
+ path,
addr);
+ g_free(path);
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
zdma_ch_imr_update_irq(s);
return 0;
@@ -XXX,XX +XXX,XX @@ static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
RegisterInfo *r = &s->regs_info[addr / 4];

if (!r->data) {
+ gchar *path = object_get_canonical_path(OBJECT(s));
qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
- object_get_canonical_path(OBJECT(s)),
+ path,
addr, value);
+ g_free(path);
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
zdma_ch_imr_update_irq(s);
return;
--
2.17.1

From: Francisco Iglesias <francisco.iglesias@xilinx.com>

Correct the number of dummy cycles required by the FAST_READ_4 command (to
be eight, one dummy byte).

Fixes: ef06ca3946 ("xilinx_spips: Add support for RX discard and RX drain")
Suggested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20200218113350.6090-1-frasse.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/ssi/xilinx_spips.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, uint8_t command)
case FAST_READ:
case DOR:
case QOR:
+ case FAST_READ_4:
case DOR_4:
case QOR_4:
return 1;
case DIOR:
- case FAST_READ_4:
case DIOR_4:
return 2;
case QIOR:
--
2.20.1
diff view generated by jsdifflib
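[Editorial aside, not part of the series: a condensed, hypothetical model of the corrected dummy-cycle lookup in the xilinx_spips patch above. `num_dummy_bytes` and the enum are names invented here; the point is that FAST_READ_4, a single-I/O-line command, now takes one dummy byte (eight cycles), grouped with the other FAST_READ-family commands rather than the dual-I/O reads.]

```c
/* Command set mirroring the cases shown in the diff above. */
enum spi_cmd { FAST_READ, DOR, QOR, FAST_READ_4, DOR_4, QOR_4, DIOR, DIOR_4 };

static int num_dummy_bytes(enum spi_cmd cmd)
{
    switch (cmd) {
    case FAST_READ:
    case DOR:
    case QOR:
    case FAST_READ_4:   /* moved into this group by the fix */
    case DOR_4:
    case QOR_4:
        return 1;       /* one dummy byte, i.e. eight dummy cycles */
    case DIOR:
    case DIOR_4:
        return 2;
    }
    return 0;
}
```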
From: Paolo Bonzini <pbonzini@redhat.com>

cpregs_keys is an uint32_t* so the allocation should use uint32_t.
g_new is even better because it is type-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/gdbstub.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
RegisterSysregXmlParam param = {cs, s};

cpu->dyn_xml.num_cpregs = 0;
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
- g_hash_table_size(cpu->cp_regs));
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
g_string_printf(s, "<?xml version=\"1.0\"?>");
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
--
2.17.1

From: Guenter Roeck <linux@roeck-us.net>

Booting the r2d machine from flash fails because flash is not discovered.
Looking at the flattened memory tree, we see the following.

FlatView #1
 AS "memory", root: system
 AS "cpu-memory-0", root: system
 AS "sh_pci_host", root: bus master container
 Root memory region: system
  0000000000000000-000000000000ffff (prio 0, i/o): io
  0000000000010000-0000000000ffffff (prio 0, i/o): r2d.flash @0000000000010000

The overlapping memory region is sh_pci.isa, i.e. the ISA I/O region bridge.
This region is initially assigned to address 0xfe240000, but overwritten
with a write into the PCIIOBR register. This write is expected to adjust
the PCI memory window, but not to change the region's base address.

Peter Maydell provided the following detailed explanation.

"Section 22.3.7 and in particular figure 22.3 (of "SSH7751R user's manual:
hardware") are clear about how this is supposed to work: there is a window
at 0xfe240000 in the system register space for PCI I/O space. When the CPU
makes an access into that area, the PCI controller calculates the PCI
address to use by combining bits 0..17 of the system address with the
bits 31..18 value that the guest has put into the PCIIOBR. That is, writing
to the PCIIOBR changes which section of the IO address space is visible in
the 0xfe240000 window. Instead what QEMU's implementation does is move the
window to whatever value the guest writes to the PCIIOBR register -- so if
the guest writes 0 we put the window at 0 in system address space."

Fix the problem by calling memory_region_set_alias_offset() instead of
removing and re-adding the PCI ISA subregion on writes into PCIIOBR.
At the same time, in sh_pci_device_realize(), don't set iobr since
it is overwritten later anyway. Instead, pass the base address to
memory_region_add_subregion() directly.

Many thanks to Peter Maydell for the detailed problem analysis, and for
providing suggestions on how to fix the problem.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Message-id: 20200218201050.15273-1-linux@roeck-us.net
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/sh4/sh_pci.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/hw/sh4/sh_pci.c b/hw/sh4/sh_pci.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sh4/sh_pci.c
+++ b/hw/sh4/sh_pci.c
@@ -XXX,XX +XXX,XX @@ static void sh_pci_reg_write (void *p, hwaddr addr, uint64_t val,
pcic->mbr = val & 0xff000001;
break;
case 0x1c8:
- if ((val & 0xfffc0000) != (pcic->iobr & 0xfffc0000)) {
- memory_region_del_subregion(get_system_memory(), &pcic->isa);
- pcic->iobr = val & 0xfffc0001;
- memory_region_add_subregion(get_system_memory(),
- pcic->iobr & 0xfffc0000, &pcic->isa);
- }
+ pcic->iobr = val & 0xfffc0001;
+ memory_region_set_alias_offset(&pcic->isa, val & 0xfffc0000);
break;
case 0x220:
pci_data_write(phb->bus, pcic->par, val, 4);
@@ -XXX,XX +XXX,XX @@ static void sh_pci_device_realize(DeviceState *dev, Error **errp)
get_system_io(), 0, 0x40000);
sysbus_init_mmio(sbd, &s->memconfig_p4);
sysbus_init_mmio(sbd, &s->memconfig_a7);
- s->iobr = 0xfe240000;
- memory_region_add_subregion(get_system_memory(), s->iobr, &s->isa);
+ memory_region_add_subregion(get_system_memory(), 0xfe240000, &s->isa);

s->dev = pci_create_simple(phb->bus, PCI_DEVFN(0, 0), "sh_pci_host");
}
--
2.20.1
diff view generated by jsdifflib
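[Editorial aside, not part of the series: a standalone sketch (not QEMU code) of the address math Peter Maydell's explanation above describes. `sh_pci_io_addr` is a name invented here; it combines bits 0..17 of the system address with bits 31..18 of PCIIOBR, so writing PCIIOBR moves which 256KB slice of PCI I/O space the fixed 0xfe240000 window shows, while the window itself never moves.]

```c
#include <stdint.h>

/* The 256KB PCI I/O window sits at a fixed system address. */
#define SH_PCI_IO_WINDOW_BASE 0xfe240000u

/* PCI I/O address = PCIIOBR[31:18] | system address[17:0].
 * (The window base has zero low 18 bits, so masking the system
 * address directly yields the offset within the window.) */
static uint32_t sh_pci_io_addr(uint32_t iobr, uint32_t sys_addr)
{
    return (iobr & 0xfffc0000u) | (sys_addr & 0x3ffffu);
}
```

This is exactly what `memory_region_set_alias_offset()` models in the fix: only the alias offset (the PCIIOBR part) changes, not the window's location in system address space.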
New patch

From: Richard Henderson <richard.henderson@linaro.org>

The old name, isar_feature_aa32_fp_d32, does not reflect
the MVFR0 field name, SIMDReg.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200214181547.21408-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: wrapped one long line]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 2 +-
target/arm/translate-vfp.inc.c | 53 +++++++++++++++++-----------------
2 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
}

-static inline bool isar_feature_aa32_fp_d32(const ARMISARegisters *id)
+static inline bool isar_feature_aa32_simd_r32(const ARMISARegisters *id)
{
/* Return true if D16-D31 are implemented */
return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) >= 2;
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
}

/* UNDEF accesses to D16-D31 if they don't exist */
- if (dp && !dc_isar_feature(aa32_fp_d32, s) &&
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
((a->vm | a->vn | a->vd) & 0x10)) {
return false;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
}

/* UNDEF accesses to D16-D31 if they don't exist */
- if (dp && !dc_isar_feature(aa32_fp_d32, s) &&
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
((a->vm | a->vn | a->vd) & 0x10)) {
return false;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
}

/* UNDEF accesses to D16-D31 if they don't exist */
- if (dp && !dc_isar_feature(aa32_fp_d32, s) &&
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
((a->vm | a->vd) & 0x10)) {
return false;
}
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
}

/* UNDEF accesses to D16-D31 if they don't exist */
- if (dp && !dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
+ if (dp && !dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
uint32_t offset;

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vn & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
uint32_t offset;

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vn & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a)
}

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vn & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
*/

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
TCGv_i64 tmp;

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vd & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vd + n) > 16) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd + n) > 16) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
TCGv_ptr fpst;

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((vd | vn | vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vn | vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
TCGv_i64 f0, fd;

/* UNDEF accesses to D16-D31 if they don't exist */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((vd | vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((a->vd | a->vn | a->vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) &&
+ ((a->vd | a->vn | a->vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
vd = a->vd;

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (vd & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (vd & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((a->vd | a->vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((a->vd | a->vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vd & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((a->vd | a->vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((a->vd | a->vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((a->vd | a->vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((a->vd | a->vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && ((a->vd | a->vm) & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && ((a->vd | a->vm) & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
TCGv_i32 vm;

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vd & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
TCGv_i32 vd;

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
TCGv_ptr fpst;

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vd & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
}

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vd & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
return false;
}

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
TCGv_ptr fpst;

/* UNDEF accesses to D16-D31 if they don't exist. */
- if (!dc_isar_feature(aa32_fp_d32, s) && (a->vm & 0x10)) {
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
return false;
}

--
2.20.1
diff view generated by jsdifflib
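[Editorial aside, not part of the series: a standalone sketch of the check behind `isar_feature_aa32_simd_r32`. It assumes, per the Arm ARM, that MVFR0.SIMDReg occupies bits [3:0], with a value of 2 or more meaning the full D0-D31 bank is implemented; `vfp_num_dregs` is a name invented here.]

```c
#include <stdint.h>

/* Equivalent of FIELD_EX32(mvfr0, MVFR0, SIMDREG) >= 2, assuming the
 * field sits at bits [3:0]: >= 2 means 32 doubleword registers, else
 * only D0-D15 are available. */
static int vfp_num_dregs(uint32_t mvfr0)
{
    uint32_t simdreg = mvfr0 & 0xf;
    return simdreg >= 2 ? 32 : 16;
}
```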
From: Jan Kiszka <jan.kiszka@siemens.com>

There was a nasty flip in identifying which register group an access is
targeting. The issue caused spuriously raised priorities of the guest
when handing CPUs over in the Jailhouse hypervisor.

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_cpuif.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
uint64_t value = cs->ich_apr[grp][regno];

trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;

trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
uint64_t value;

int regno = ri->opc2 & 3;
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;

if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
return icv_ap_read(env, ri);
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
GICv3CPUState *cs = icc_cs_from_env(env);

int regno = ri->opc2 & 3;
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;

if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
icv_ap_write(env, ri, value);
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
uint64_t value;

value = cs->ich_apr[grp][regno];
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;

trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Many uses of ARM_FEATURE_VFP3 are testing for the number of simd
registers implemented. Use the proper test vs MVFR0.SIMDReg.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-4-richard.henderson@linaro.org
[PMM: fix typo in commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.c | 9 ++++-----
target/arm/helper.c | 13 ++++++-------
target/arm/translate.c | 2 +-
3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)

if (flags & CPU_DUMP_FPU) {
int numvfpregs = 0;
- if (arm_feature(env, ARM_FEATURE_VFP)) {
- numvfpregs += 16;
- }
- if (arm_feature(env, ARM_FEATURE_VFP3)) {
- numvfpregs += 16;
+ if (cpu_isar_feature(aa32_simd_r32, cpu)) {
+ numvfpregs = 32;
+ } else if (arm_feature(env, ARM_FEATURE_VFP)) {
+ numvfpregs = 16;
}
for (i = 0; i < numvfpregs; i++) {
uint64_t v = *aa32_vfp_dreg(env, i);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode);

static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
{
- int nregs;
+ ARMCPU *cpu = env_archcpu(env);
+ int nregs = cpu_isar_feature(aa32_simd_r32, cpu) ? 32 : 16;

/* VFP data registers are always little-endian. */
- nregs = arm_feature(env, ARM_FEATURE_VFP3) ? 32 : 16;
if (reg < nregs) {
stq_le_p(buf, *aa32_vfp_dreg(env, reg));
return 8;
@@ -XXX,XX +XXX,XX @@ static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)

static int vfp_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg)
{
- int nregs;
+ ARMCPU *cpu = env_archcpu(env);
+ int nregs = cpu_isar_feature(aa32_simd_r32, cpu) ? 32 : 16;

- nregs = arm_feature(env, ARM_FEATURE_VFP3) ? 32 : 16;
if (reg < nregs) {
*aa32_vfp_dreg(env, reg) = ldq_le_p(buf);
return 8;
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
/* VFPv3 and upwards with NEON implement 32 double precision
* registers (D0-D31).
*/
- if (!arm_feature(env, ARM_FEATURE_NEON) ||
- !arm_feature(env, ARM_FEATURE_VFP3)) {
+ if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) {
/* D32DIS [30] is RAO/WI if D16-31 are not implemented. */
value |= (1 << 30);
}
@@ -XXX,XX +XXX,XX @@ void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
} else if (arm_feature(env, ARM_FEATURE_NEON)) {
gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
51, "arm-neon.xml", 0);
- } else if (arm_feature(env, ARM_FEATURE_VFP3)) {
+ } else if (cpu_isar_feature(aa32_simd_r32, cpu)) {
gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
35, "arm-vfp3.xml", 0);
} else if (arm_feature(env, ARM_FEATURE_VFP)) {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_dsp_insn(DisasContext *s, uint32_t insn)
#define VFP_SREG(insn, bigbit, smallbit) \
((VFP_REG_SHR(insn, bigbit - 1) & 0x1e) | (((insn) >> (smallbit)) & 1))
#define VFP_DREG(reg, insn, bigbit, smallbit) do { \
- if (arm_dc_feature(s, ARM_FEATURE_VFP3)) { \
+ if (dc_isar_feature(aa32_simd_r32, s)) { \
reg = (((insn) >> (bigbit)) & 0x0f) \
| (((insn) >> ((smallbit) - 4)) & 0x10); \
} else { \
--
2.20.1
diff view generated by jsdifflib
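[Editorial aside, not part of the series: a standalone sketch of the "nasty flip" fixed in the GICv3 patch above. For the active-priority register banks, crm bit 0 distinguishes AP0R (even crm) from AP1R (odd crm), so a set bit must select the group-1 bank; the old code had the arms of the conditional swapped. The crm values 8 and 9 used in the test are the usual AP0R/AP1R encodings, stated here as an assumption.]

```c
enum { GICV3_G0, GICV3_G1, GICV3_G1NS };

/* Corrected mapping from the AP<n>R register's crm to the banked
 * active-priority group, as in the fixed icv_ap_read/write paths. */
static int icv_ap_group(int crm)
{
    return (crm & 1) ? GICV3_G1NS : GICV3_G0;
}

/* The buggy version, kept for contrast: it returned G0 for AP1R
 * accesses and vice versa, corrupting guest priority state. */
static int icv_ap_group_buggy(int crm)
{
    return (crm & 1) ? GICV3_G0 : GICV3_G1NS;
}
```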
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_translate_iommu().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
---
 exec.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
  * @is_write: whether the translation operation is for write
  * @is_mmio: whether this can be MMIO, set true if it can
  * @target_as: the address space targeted by the IOMMU
+ * @attrs: transaction attributes
  *
  * This function is called from RCU critical section. It is the common
  * part of flatview_do_translate and address_space_translate_cached.
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
                                                          hwaddr *page_mask_out,
                                                          bool is_write,
                                                          bool is_mmio,
-                                                         AddressSpace **target_as)
+                                                         AddressSpace **target_as,
+                                                         MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     hwaddr page_mask = (hwaddr)-1;
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
         return address_space_translate_iommu(iommu_mr, xlat,
                                              plen_out, page_mask_out,
                                              is_write, is_mmio,
-                                             target_as);
+                                             target_as, attrs);
     }
     if (page_mask_out) {
         /* Not behind an IOMMU, use default page size. */
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(

     section = address_space_translate_iommu(iommu_mr, xlat, plen,
                                             NULL, is_write, true,
-                                            &target_as);
+                                            &target_as, attrs);
     return section.mr;
 }

--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

We are going to convert FEATURE tests to ISAR tests,
so FPSP needs to be set for these cpus, like we have
already for FPDP.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
      */
     cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
     /*
-     * Similarly, we need to set MVFR0 fields to enable double precision
-     * and short vector support even though ARMv5 doesn't have this register.
+     * Similarly, we need to set MVFR0 fields to enable vfp and short vector
+     * support even though ARMv5 doesn't have this register.
      */
     cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSP, 1);
     cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);
 }

@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
      */
     cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
     /*
-     * Similarly, we need to set MVFR0 fields to enable double precision
-     * and short vector support even though ARMv5 doesn't have this register.
+     * Similarly, we need to set MVFR0 fields to enable vfp and short vector
+     * support even though ARMv5 doesn't have this register.
      */
     cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSP, 1);
     cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPDP, 1);

     {
--
2.20.1
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Use this in the places that were checking ARM_FEATURE_VFP, and
are obviously testing for the existence of the register set
as opposed to testing for some particular instruction extension.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h        |  6 ++++++
 hw/intc/armv7m_nvic.c   | 20 ++++++++++----------
 linux-user/arm/signal.c |  4 ++--
 target/arm/arch_dump.c  | 11 ++++++-----
 target/arm/cpu.c        |  8 ++++----
 target/arm/helper.c     |  4 ++--
 target/arm/m_helper.c   | 11 ++++++-----
 target/arm/machine.c    |  3 +--
 8 files changed, 37 insertions(+), 30 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
 }

+static inline bool isar_feature_aa32_simd_r16(const ARMISARegisters *id)
+{
+    /* Return true if D0-D15 are implemented */
+    return FIELD_EX32(id->mvfr0, MVFR0, SIMDREG) > 0;
+}
+
 static inline bool isar_feature_aa32_simd_r32(const ARMISARegisters *id)
 {
     /* Return true if D16-D31 are implemented */
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd84: /* CSSELR */
         return cpu->env.v7m.csselr[attrs.secure];
     case 0xd88: /* CPACR */
-        if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (!cpu_isar_feature(aa32_simd_r16, cpu)) {
             return 0;
         }
         return cpu->env.v7m.cpacr[attrs.secure];
     case 0xd8c: /* NSACR */
-        if (!attrs.secure || !arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (!attrs.secure || !cpu_isar_feature(aa32_simd_r16, cpu)) {
             return 0;
         }
         return cpu->env.v7m.nsacr;
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return cpu->env.v7m.sfar;
     case 0xf34: /* FPCCR */
-        if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (!cpu_isar_feature(aa32_simd_r16, cpu)) {
             return 0;
         }
         if (attrs.secure) {
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             return value;
         }
     case 0xf38: /* FPCAR */
-        if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (!cpu_isar_feature(aa32_simd_r16, cpu)) {
             return 0;
         }
         return cpu->env.v7m.fpcar[attrs.secure];
     case 0xf3c: /* FPDSCR */
-        if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (!cpu_isar_feature(aa32_simd_r16, cpu)) {
             return 0;
         }
         return cpu->env.v7m.fpdscr[attrs.secure];
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         }
         break;
     case 0xd88: /* CPACR */
-        if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             /* We implement only the Floating Point extension's CP10/CP11 */
             cpu->env.v7m.cpacr[attrs.secure] = value & (0xf << 20);
         }
         break;
     case 0xd8c: /* NSACR */
-        if (attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (attrs.secure && cpu_isar_feature(aa32_simd_r16, cpu)) {
             /* We implement only the Floating Point extension's CP10/CP11 */
             cpu->env.v7m.nsacr = value & (3 << 10);
         }
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
             break;
         }
     case 0xf34: /* FPCCR */
-        if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             /* Not all bits here are banked. */
             uint32_t fpccr_s;

@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         }
         break;
     case 0xf38: /* FPCAR */
-        if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             value &= ~7;
             cpu->env.v7m.fpcar[attrs.secure] = value;
         }
         break;
     case 0xf3c: /* FPDSCR */
-        if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             value &= 0x07c00000;
             cpu->env.v7m.fpdscr[attrs.secure] = value;
         }
diff --git a/linux-user/arm/signal.c b/linux-user/arm/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/signal.c
+++ b/linux-user/arm/signal.c
@@ -XXX,XX +XXX,XX @@ static void setup_sigframe_v2(struct target_ucontext_v2 *uc,
     setup_sigcontext(&uc->tuc_mcontext, env, set->sig[0]);
     /* Save coprocessor signal frame.  */
     regspace = uc->tuc_regspace;
-    if (arm_feature(env, ARM_FEATURE_VFP)) {
+    if (cpu_isar_feature(aa32_simd_r16, env_archcpu(env))) {
        regspace = setup_sigframe_v2_vfp(regspace, env);
     }
     if (arm_feature(env, ARM_FEATURE_IWMMXT)) {
@@ -XXX,XX +XXX,XX @@ static int do_sigframe_return_v2(CPUARMState *env,

     /* Restore coprocessor signal frame */
     regspace = uc->tuc_regspace;
-    if (arm_feature(env, ARM_FEATURE_VFP)) {
+    if (cpu_isar_feature(aa32_simd_r16, env_archcpu(env))) {
         regspace = restore_sigframe_v2_vfp(env, regspace);
         if (!regspace) {
             return 1;
diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/arch_dump.c
+++ b/target/arm/arch_dump.c
@@ -XXX,XX +XXX,XX @@ int arm_cpu_write_elf32_note(WriteCoreDumpFunction f, CPUState *cs,
                              int cpuid, void *opaque)
 {
     struct arm_note note;
-    CPUARMState *env = &ARM_CPU(cs)->env;
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
     DumpState *s = opaque;
-    int ret, i, fpvalid = !!arm_feature(env, ARM_FEATURE_VFP);
+    int ret, i;
+    bool fpvalid = cpu_isar_feature(aa32_simd_r16, cpu);

     arm_note_init(&note, s, "CORE", 5, NT_PRSTATUS, sizeof(note.prstatus));

@@ -XXX,XX +XXX,XX @@ int cpu_get_dump_info(ArchDumpInfo *info,
 ssize_t cpu_get_note_size(int class, int machine, int nr_cpus)
 {
     ARMCPU *cpu = ARM_CPU(first_cpu);
-    CPUARMState *env = &cpu->env;
     size_t note_size;

     if (class == ELFCLASS64) {
@@ -XXX,XX +XXX,XX @@ ssize_t cpu_get_note_size(int class, int machine, int nr_cpus)
         note_size += AARCH64_PRFPREG_NOTE_SIZE;
 #ifdef TARGET_AARCH64
         if (cpu_isar_feature(aa64_sve, cpu)) {
-            note_size += AARCH64_SVE_NOTE_SIZE(env);
+            note_size += AARCH64_SVE_NOTE_SIZE(&cpu->env);
         }
 #endif
     } else {
         note_size = ARM_PRSTATUS_NOTE_SIZE;
-        if (arm_feature(env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             note_size += ARM_VFP_NOTE_SIZE;
         }
     }
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
             env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
         }

-        if (arm_feature(env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
             env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
                 R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_dump_state(CPUState *cs, FILE *f, int flags)
         int numvfpregs = 0;
         if (cpu_isar_feature(aa32_simd_r32, cpu)) {
             numvfpregs = 32;
-        } else if (arm_feature(env, ARM_FEATURE_VFP)) {
+        } else if (cpu_isar_feature(aa32_simd_r16, cpu)) {
             numvfpregs = 16;
         }
         for (i = 0; i < numvfpregs; i++) {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
      * KVM does not currently allow us to lie to the guest about its
      * ID/feature registers, so the guest always sees what the host has.
      */
-    if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+    if (cpu_isar_feature(aa32_simd_r16, cpu)) {
         cpu->has_vfp = true;
         if (!kvm_enabled()) {
             qdev_property_add_static(DEVICE(obj), &arm_cpu_has_vfp_property);
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
      * We rely on no XScale CPU having VFP so we can use the same bits in the
      * TB flags field for VECSTRIDE and XSCALE_CPAR.
      */
-    assert(!(arm_feature(env, ARM_FEATURE_VFP) &&
+    assert(!(cpu_isar_feature(aa32_simd_r16, cpu) &&
              arm_feature(env, ARM_FEATURE_XSCALE)));

     if (arm_feature(env, ARM_FEATURE_V7) &&
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
      * ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP.
      * TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell.
      */
-    if (arm_feature(env, ARM_FEATURE_VFP)) {
+    if (cpu_isar_feature(aa32_simd_r16, env_archcpu(env))) {
         /* VFP coprocessor: cp10 & cp11 [23:20] */
         mask |= (1 << 31) | (1 << 30) | (0xf << 20);

@@ -XXX,XX +XXX,XX @@ void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
     } else if (cpu_isar_feature(aa32_simd_r32, cpu)) {
         gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
                                  35, "arm-vfp3.xml", 0);
-    } else if (arm_feature(env, ARM_FEATURE_VFP)) {
+    } else if (cpu_isar_feature(aa32_simd_r16, cpu)) {
         gdb_register_coprocessor(cs, vfp_gdb_get_reg, vfp_gdb_set_reg,
                                  19, "arm-vfp.xml", 0);
     }
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
      */
     uint32_t sig = 0xfefa125a;

-    if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
+    if (!cpu_isar_feature(aa32_simd_r16, env_archcpu(env))
+        || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
         sig |= 1;
     }
     return sig;
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,

     if (dotailchain) {
         /* Sanitize LR FType and PREFIX bits */
-        if (!arm_feature(env, ARM_FEATURE_VFP)) {
+        if (!cpu_isar_feature(aa32_simd_r16, cpu)) {
             lr |= R_V7M_EXCRET_FTYPE_MASK;
         }
         lr = deposit32(lr, 24, 8, 0xff);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)

     ftype = excret & R_V7M_EXCRET_FTYPE_MASK;

-    if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
+    if (!ftype && !cpu_isar_feature(aa32_simd_r16, cpu)) {
         qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
                       "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
                       "if FPU not present\n",
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
              * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
              * RES0 if the FPU is not present, and is stored in the S bank
              */
-            if (arm_feature(env, ARM_FEATURE_VFP) &&
+            if (cpu_isar_feature(aa32_simd_r16, env_archcpu(env)) &&
                 extract32(env->v7m.nsacr, 10, 1)) {
                 env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
                 env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
             env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
         }
-        if (arm_feature(env, ARM_FEATURE_VFP)) {
+        if (cpu_isar_feature(aa32_simd_r16, env_archcpu(env))) {
             /*
              * SFPA is RAZ/WI from NS or if no FPU.
              * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@
 static bool vfp_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;

-    return arm_feature(env, ARM_FEATURE_VFP);
+    return cpu_isar_feature(aa32_simd_r16, cpu);
 }

 static int get_fpscr(QEMUFile *f, void *opaque, size_t size,
--
2.20.1
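The new aa32_simd_r16 predicate, like the existing aa32_simd_r32, keys purely off the MVFR0.SIMDReg ID field. A hedged standalone sketch of that decoding; `field_ex32` is a stand-in for QEMU's FIELD_EX32 macro, and the field layout assumed here is MVFR0 bits [3:0], where 1 means D0-D15 are present and 2 means D0-D31:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for QEMU's FIELD_EX32: extract a bitfield from a register. */
static uint32_t field_ex32(uint32_t reg, unsigned shift, unsigned len)
{
    return (reg >> shift) & ((1u << len) - 1);
}

/* MVFR0.SIMDReg (bits [3:0]): 1 => D0-D15 implemented, 2 => D0-D31. */
static bool simd_r16(uint32_t mvfr0)
{
    return field_ex32(mvfr0, 0, 4) > 0;   /* at least the D0-D15 bank */
}

static bool simd_r32(uint32_t mvfr0)
{
    return field_ex32(mvfr0, 0, 4) >= 2;  /* D16-D31 also implemented */
}
```

This is why the patch can use simd_r16 as a "VFP register file exists at all" test while simd_r32 stays the narrower D16-D31 test.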
New patch

From: Richard Henderson <richard.henderson@linaro.org>

The old name, isar_feature_aa32_fpdp, does not reflect
that the test includes VFPv2. We will introduce further
feature tests for VFPv3.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200214181547.21408-7-richard.henderson@linaro.org
[PMM: fixed grammar in commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h               |  4 ++--
 target/arm/translate-vfp.inc.c | 40 +++++++++++++++++-----------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
     return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
 }

-static inline bool isar_feature_aa32_fpdp(const ARMISARegisters *id)
+static inline bool isar_feature_aa32_fpdp_v2(const ARMISARegisters *id)
 {
-    /* Return true if CPU supports double precision floating point */
+    /* Return true if CPU supports double precision floating point, VFPv2 */
     return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
 }

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp, s)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp, s)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp, s)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp, s)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_access_valid().
Its callers now all have an attrs value to hand, so we can
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
---
 exec.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
 static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
                                   const uint8_t *buf, int len);
 static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
-                                  bool is_write);
+                                  bool is_write, MemTxAttrs attrs);

 static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
                                 unsigned len, MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
 #endif

     return flatview_access_valid(subpage->fv, addr + subpage->base,
-                                 len, is_write);
+                                 len, is_write, attrs);
 }

 static const MemoryRegionOps subpage_ops = {
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
 }

 static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
-                                  bool is_write)
+                                  bool is_write, MemTxAttrs attrs)
 {
     MemoryRegion *mr;
     hwaddr l, xlat;
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
         mr = flatview_translate(fv, addr, &xlat, &l, is_write);
         if (!memory_access_is_direct(mr, is_write)) {
             l = memory_access_size(mr, l, addr);
-            /* When our callers all have attrs we'll pass them through here */
-            if (!memory_region_access_valid(mr, xlat, l, is_write,
-                                            MEMTXATTRS_UNSPECIFIED)) {
+            if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
                 return false;
             }
         }
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,

     rcu_read_lock();
     fv = address_space_to_flatview(as);
-    result = flatview_access_valid(fv, addr, len, is_write);
+    result = flatview_access_valid(fv, addr, len, is_write, attrs);
     rcu_read_unlock();
     return result;
 }
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

We will shortly use these to test for VFPv2 and VFPv3
in different situations.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_fpshvec(const ARMISARegisters *id)
     return FIELD_EX32(id->mvfr0, MVFR0, FPSHVEC) > 0;
 }

+static inline bool isar_feature_aa32_fpsp_v2(const ARMISARegisters *id)
+{
+    /* Return true if CPU supports single precision floating point, VFPv2 */
+    return FIELD_EX32(id->mvfr0, MVFR0, FPSP) > 0;
+}
+
+static inline bool isar_feature_aa32_fpsp_v3(const ARMISARegisters *id)
+{
+    /* Return true if CPU supports single precision floating point, VFPv3 */
+    return FIELD_EX32(id->mvfr0, MVFR0, FPSP) >= 2;
+}
+
 static inline bool isar_feature_aa32_fpdp_v2(const ARMISARegisters *id)
 {
     /* Return true if CPU supports double precision floating point, VFPv2 */
     return FIELD_EX32(id->mvfr0, MVFR0, FPDP) > 0;
 }

+static inline bool isar_feature_aa32_fpdp_v3(const ARMISARegisters *id)
+{
+    /* Return true if CPU supports double precision floating point, VFPv3 */
+    return FIELD_EX32(id->mvfr0, MVFR0, FPDP) >= 2;
+}
+
 /*
  * We always set the FP and SIMD FP16 fields to indicate identical
  * levels of support (assuming SIMD is implemented at all), so
--
2.20.1
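The four predicates added above all key off two MVFR0 fields, with the field's value encoding the supported VFP version: per the patch, a value above 0 means at least VFPv2 and 2 or more means at least VFPv3. A hedged standalone sketch of those thresholds; the field accessor and the assumed bit positions (FPSP in MVFR0 bits [7:4], FPDP in bits [11:8]) stand in for QEMU's FIELD_EX32 machinery:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in accessor for a 4-bit MVFR0 field at the given shift. */
static uint32_t mvfr0_field(uint32_t mvfr0, unsigned shift)
{
    return (mvfr0 >> shift) & 0xf;
}

/* Thresholds mirroring the predicates in the patch above:
 * > 0 means at least VFPv2 support, >= 2 means at least VFPv3. */
static bool fpsp_v2(uint32_t mvfr0) { return mvfr0_field(mvfr0, 4) > 0; }
static bool fpsp_v3(uint32_t mvfr0) { return mvfr0_field(mvfr0, 4) >= 2; }
static bool fpdp_v2(uint32_t mvfr0) { return mvfr0_field(mvfr0, 8) > 0; }
static bool fpdp_v3(uint32_t mvfr0) { return mvfr0_field(mvfr0, 8) >= 2; }
```

Splitting the v2 and v3 thresholds into separate predicates is what lets later patches gate VFPv3-only instructions without also refusing them on plain VFPv2 CPUs.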
New patch

From: Richard Henderson <richard.henderson@linaro.org>

Shuffle the order of the checks so that we test the ISA
before we test anything else, such as the register arguments.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-vfp.inc.c | 144 ++++++++++++++++-----------------
 1 file changed, 72 insertions(+), 72 deletions(-)

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a)
         return false;
     }

-    /* UNDEF accesses to D16-D31 if they don't exist */
-    if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
-        ((a->vm | a->vn | a->vd) & 0x10)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vm | a->vn | a->vd) & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMINMAXNM(DisasContext *s, arg_VMINMAXNM *a)
         return false;
     }

-    /* UNDEF accesses to D16-D31 if they don't exist */
-    if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
-        ((a->vm | a->vn | a->vd) & 0x10)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vm | a->vn | a->vd) & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a)
         return false;
     }

-    /* UNDEF accesses to D16-D31 if they don't exist */
-    if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
-        ((a->vm | a->vd) & 0x10)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (dp && !dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vm | a->vd) & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a)
         return false;
     }

-    /* UNDEF accesses to D16-D31 if they don't exist */
-    if (dp && !dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
+    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (dp && !dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (dp && !dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_dp(DisasContext *s, VFPGen3OpDPFn *fn,
     TCGv_i64 f0, f1, fd;
     TCGv_ptr fpst;

-    /* UNDEF accesses to D16-D31 if they don't exist */
-    if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vn | vm) & 0x10)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vn | vm) & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_dp(DisasContext *s, VFPGen2OpDPFn *fn, int vd, int vm)
     int veclen = s->vec_len;
     TCGv_i64 f0, fd;

-    /* UNDEF accesses to D16-D31 if they don't exist */
-    if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist */
+    if (!dc_isar_feature(aa32_simd_r32, s) && ((vd | vm) & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VFM_dp(DisasContext *s, arg_VFM_dp *a)
         return false;
     }

-    /* UNDEF accesses to D16-D31 if they don't exist. */
-    if (!dc_isar_feature(aa32_simd_r32, s) &&
-        ((a->vd | a->vn | a->vm) & 0x10)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) &&
+        ((a->vd | a->vn | a->vm) & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)

     vd = a->vd;

-    /* UNDEF accesses to D16-D31 if they don't exist. */
-    if (!dc_isar_feature(aa32_simd_r32, s) && (vd & 0x10)) {
+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+    /* UNDEF accesses to D16-D31 if they don't exist. */
+    if (!dc_isar_feature(aa32_simd_r32, s) && (vd & 0x10)) {
         return false;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
 {
     TCGv_i64 vd, vm;

+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+        return false;
+    }
+
     /* Vm/M bits must be zero for the Z variant */
     if (a->z && a->vm != 0) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_dp(DisasContext *s, arg_VCMP_dp *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
     TCGv_i32 tmp;
     TCGv_i64 vd;

+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+        return false;
+    }
+
     if (!dc_isar_feature(aa32_fp16_dpconv, s)) {
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
     TCGv_i32 tmp;
     TCGv_i64 vm;

+    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+        return false;
+    }
+
     if (!dc_isar_feature(aa32_fp16_dpconv, s)) {
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f64(DisasContext *s, arg_VCVT_f16_f64 *a)
         return false;
     }

-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
221
TCGv_ptr fpst;
222
TCGv_i64 tmp;
223
224
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
225
+ return false;
226
+ }
227
+
228
if (!dc_isar_feature(aa32_vrint, s)) {
229
return false;
230
}
231
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_dp(DisasContext *s, arg_VRINTR_dp *a)
232
return false;
233
}
234
235
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
236
- return false;
237
- }
238
-
239
if (!vfp_access_check(s)) {
240
return true;
241
}
242
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
243
TCGv_i64 tmp;
244
TCGv_i32 tcg_rmode;
245
246
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
247
+ return false;
248
+ }
249
+
250
if (!dc_isar_feature(aa32_vrint, s)) {
251
return false;
252
}
253
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_dp(DisasContext *s, arg_VRINTZ_dp *a)
254
return false;
255
}
256
257
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
258
- return false;
259
- }
260
-
261
if (!vfp_access_check(s)) {
262
return true;
263
}
264
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
265
TCGv_ptr fpst;
266
TCGv_i64 tmp;
267
268
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
269
+ return false;
270
+ }
271
+
272
if (!dc_isar_feature(aa32_vrint, s)) {
273
return false;
274
}
275
@@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_dp(DisasContext *s, arg_VRINTX_dp *a)
276
return false;
277
}
278
279
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
280
- return false;
281
- }
282
-
283
if (!vfp_access_check(s)) {
284
return true;
285
}
286
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a)
287
TCGv_i64 vd;
288
TCGv_i32 vm;
289
290
- /* UNDEF accesses to D16-D31 if they don't exist. */
291
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
292
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
293
return false;
294
}
295
296
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
297
+ /* UNDEF accesses to D16-D31 if they don't exist. */
298
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
299
return false;
300
}
301
302
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a)
303
TCGv_i64 vm;
304
TCGv_i32 vd;
305
306
- /* UNDEF accesses to D16-D31 if they don't exist. */
307
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
308
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
309
return false;
310
}
311
312
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
313
+ /* UNDEF accesses to D16-D31 if they don't exist. */
314
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
315
return false;
316
}
317
318
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a)
319
TCGv_i64 vd;
320
TCGv_ptr fpst;
321
322
- /* UNDEF accesses to D16-D31 if they don't exist. */
323
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
324
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
325
return false;
326
}
327
328
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
329
+ /* UNDEF accesses to D16-D31 if they don't exist. */
330
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
331
return false;
332
}
333
334
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
335
TCGv_i32 vd;
336
TCGv_i64 vm;
337
338
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
339
+ return false;
340
+ }
341
+
342
if (!dc_isar_feature(aa32_jscvt, s)) {
343
return false;
344
}
345
@@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a)
346
return false;
347
}
348
349
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
350
- return false;
351
- }
352
-
353
if (!vfp_access_check(s)) {
354
return true;
355
}
356
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
357
TCGv_ptr fpst;
358
int frac_bits;
359
360
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
361
+ return false;
362
+ }
363
+
364
if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
365
return false;
366
}
367
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
368
return false;
369
}
370
371
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
372
- return false;
373
- }
374
-
375
if (!vfp_access_check(s)) {
376
return true;
377
}
378
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a)
379
TCGv_i64 vm;
380
TCGv_ptr fpst;
381
382
- /* UNDEF accesses to D16-D31 if they don't exist. */
383
- if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
384
+ if (!dc_isar_feature(aa32_fpdp_v2, s)) {
385
return false;
386
}
387
388
- if (!dc_isar_feature(aa32_fpdp_v2, s)) {
389
+ /* UNDEF accesses to D16-D31 if they don't exist. */
390
+ if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
391
return false;
392
}
393
394
--
395
2.20.1
396
397
diff view generated by jsdifflib
From: Igor Mammedov <imammedo@redhat.com>

When QEMU is started with the following CLI

  -machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd

it crashes with an abort at accel/kvm/kvm-all.c:2164:

  KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument

This is caused by an implicit dependency of kvm_arm_gicv3_reset() on
arm_gicv3_icc_reset(), where the latter is called by the CPU reset
callback.

However, commit
  3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
broke CPU reset callback registration in the case where the

  arm_load_kernel()
      ...
      if (!info->kernel_filename || info->firmware_loaded)

branch is taken, i.e. it is sufficient to provide a firmware image
or omit the kernel on the CLI to skip CPU reset callback
registration, whereas before the offending commit the callback
was registered unconditionally.

Fix it by registering the callback right at the beginning of
arm_load_kernel() unconditionally, instead of doing it at the end.

NOTE: we should probably eliminate that dependency anyway, as well as
separate the arch CPU reset parts from arm_load_kernel() into the CPU
itself, but that is refactoring that I would probably have to do
later anyway for CPU hotplug to work.

Reported-by: Auger Eric <eric.auger@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
     static const ARMInsnFixup *primary_loader;
     AddressSpace *as = arm_boot_address_space(cpu, info);
 
+    /* CPU objects (unlike devices) are not automatically reset on system
+     * reset, so we must always register a handler to do so. If we're
+     * actually loading a kernel, the handler is also responsible for
+     * arranging that we start it correctly.
+     */
+    for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
+        qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
+    }
+
     /* The board code is not supposed to set secure_board_setup unless
      * running its code in secure mode is actually possible, and KVM
      * doesn't support secure.
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
         ARM_CPU(cs)->env.boot_info = info;
     }
 
-    /* CPU objects (unlike devices) are not automatically reset on system
-     * reset, so we must always register a handler to do so. If we're
-     * actually loading a kernel, the handler is also responsible for
-     * arranging that we start it correctly.
-     */
-    for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
-        qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
-    }
-
     if (!info->skip_dtb_autoload && have_dtb(info)) {
         if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
             exit(1);
--
2.17.1


From: Richard Henderson <richard.henderson@linaro.org>

Sort this check to the start of a trans_* function.
Merge this with any existing test for fpdp_v2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-vfp.inc.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
      * VFPv2 allows access to FPSID from userspace; VFPv3 restricts
      * all ID registers to privileged access only.
      */
-    if (IS_USER(s) && arm_dc_feature(s, ARM_FEATURE_VFP3)) {
+    if (IS_USER(s) && dc_isar_feature(aa32_fpsp_v3, s)) {
         return false;
     }
     ignore_vfp_enabled = true;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
     case ARM_VFP_FPINST:
     case ARM_VFP_FPINST2:
         /* Not present in VFPv3 */
-        if (IS_USER(s) || arm_dc_feature(s, ARM_FEATURE_VFP3)) {
+        if (IS_USER(s) || dc_isar_feature(aa32_fpsp_v3, s)) {
             return false;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
 
     vd = a->vd;
 
-    if (!dc_isar_feature(aa32_fpshvec, s) &&
-        (veclen != 0 || s->vec_stride != 0)) {
+    if (!dc_isar_feature(aa32_fpsp_v3, s)) {
         return false;
     }
 
-    if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
+    if (!dc_isar_feature(aa32_fpshvec, s) &&
+        (veclen != 0 || s->vec_stride != 0)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
 
     vd = a->vd;
 
-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
+    if (!dc_isar_feature(aa32_fpdp_v3, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
         return false;
     }
 
-    if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a)
     TCGv_ptr fpst;
     int frac_bits;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
+    if (!dc_isar_feature(aa32_fpsp_v3, s)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_dp(DisasContext *s, arg_VCVT_fix_dp *a)
     TCGv_ptr fpst;
     int frac_bits;
 
-    if (!dc_isar_feature(aa32_fpdp_v2, s)) {
-        return false;
-    }
-
-    if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
+    if (!dc_isar_feature(aa32_fpdp_v3, s)) {
         return false;
     }
 
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

We will eventually remove the early ARM_FEATURE_VFP test,
so add a proper test for each trans_* that does not already
have another ISA test.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200214181547.21408-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-vfp.inc.c | 78 ++++++++++++++++++++++++++++++----
 1 file changed, 69 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
     int pass;
     uint32_t offset;
 
+    /* SIZE == 2 is a VFP instruction; otherwise NEON. */
+    if (a->size == 2
+        ? !dc_isar_feature(aa32_fpsp_v2, s)
+        : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
     /* UNDEF accesses to D16-D31 if they don't exist */
     if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a)
     pass = extract32(offset, 2, 1);
     offset = extract32(offset, 0, 2) * 8;
 
-    if (a->size != 2 && !arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
     int pass;
     uint32_t offset;
 
+    /* SIZE == 2 is a VFP instruction; otherwise NEON. */
+    if (a->size == 2
+        ? !dc_isar_feature(aa32_fpsp_v2, s)
+        : !arm_dc_feature(s, ARM_FEATURE_NEON)) {
+        return false;
+    }
+
     /* UNDEF accesses to D16-D31 if they don't exist */
     if (!dc_isar_feature(aa32_simd_r32, s) && (a->vn & 0x10)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a)
     pass = extract32(offset, 2, 1);
     offset = extract32(offset, 0, 2) * 8;
 
-    if (a->size != 2 && !arm_dc_feature(s, ARM_FEATURE_NEON)) {
-        return false;
-    }
-
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
     TCGv_i32 tmp;
     bool ignore_vfp_enabled = false;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (arm_dc_feature(s, ARM_FEATURE_M)) {
         /*
          * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a)
 {
     TCGv_i32 tmp;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
 {
     TCGv_i32 tmp;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     /*
      * VMOV between two general-purpose registers and two single precision
      * floating point registers
@@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a)
 
     /*
      * VMOV between two general-purpose registers and one double precision
-     * floating point register
+     * floating point register. Note that this does not require support
+     * for double precision arithmetic.
      */
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
 
     /* UNDEF accesses to D16-D31 if they don't exist */
     if (!dc_isar_feature(aa32_simd_r32, s) && (a->vm & 0x10)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     uint32_t offset;
     TCGv_i32 addr, tmp;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
     TCGv_i32 addr;
     TCGv_i64 tmp;
 
+    /* Note that this does not require support for double arithmetic. */
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     /* UNDEF accesses to D16-D31 if they don't exist */
     if (!dc_isar_feature(aa32_simd_r32, s) && (a->vd & 0x10)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
     TCGv_i32 addr, tmp;
     int i, n;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     n = a->imm;
 
     if (n == 0 || (a->vd + n) > 32) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
     TCGv_i64 tmp;
     int i, n;
 
+    /* Note that this does not require support for double arithmetic. */
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     n = a->imm >> 1;
 
     if (n == 0 || (a->vd + n) > 32 || n > 16) {
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn,
     TCGv_i32 f0, f1, fd;
     TCGv_ptr fpst;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (!dc_isar_feature(aa32_fpshvec, s) &&
         (veclen != 0 || s->vec_stride != 0)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm)
     int veclen = s->vec_len;
     TCGv_i32 f0, fd;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (!dc_isar_feature(aa32_fpshvec, s) &&
         (veclen != 0 || s->vec_stride != 0)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a)
 {
     TCGv_i32 vd, vm;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     /* Vm/M bits must be zero for the Z variant */
     if (a->z && a->vm != 0) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a)
     TCGv_i32 vm;
     TCGv_ptr fpst;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a)
     TCGv_i32 vm;
     TCGv_ptr fpst;
 
+    if (!dc_isar_feature(aa32_fpsp_v2, s)) {
+        return false;
+    }
+
     if (!vfp_access_check(s)) {
         return true;
     }
--
2.20.1
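Taken together, these patches establish a consistent ordering inside each trans_* function: ISA-level feature tests first, then register-bank existence tests, then the FPU access check last. The sketch below illustrates that ordering in isolation; it is a simplified stand-in, not QEMU's real API — `Ctx` and its boolean fields are hypothetical replacements for `DisasContext`, `dc_isar_feature()` and `vfp_access_check()`.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the relevant DisasContext state. */
typedef struct {
    bool fpdp_v2;   /* double-precision VFPv2 supported */
    bool simd_r32;  /* registers D16-D31 present */
    bool fp_active; /* FPU access currently enabled */
} Ctx;

/*
 * Pattern after the reorder: decide "this instruction does not exist"
 * (return false, which means UNDEF) before consuming the access check,
 * so an FP-disabled trap is only taken for instructions that exist.
 */
static bool trans_example_dp(Ctx *s, int vd)
{
    if (!s->fpdp_v2) {                      /* ISA feature test first */
        return false;
    }
    if (!s->simd_r32 && (vd & 0x10)) {      /* then register-bank test */
        return false;
    }
    if (!s->fp_active) {                    /* access check last */
        return true;                        /* handled: trap is raised */
    }
    /* ... emit code for the instruction ... */
    return true;
}
```

The key property is that swapping the first two checks with the last one would change guest-visible behaviour: with the access check first, a non-existent instruction would trap instead of UNDEFing when the FPU is disabled.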