The following changes since commit 0d3e41d5efd638a0c5682f6813b26448c3c51624:

  Merge remote-tracking branch 'remotes/vivier2/tags/trivial-branch-pull-request' into staging (2019-02-14 17:42:25 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190214

for you to fetch changes up to 497bc12b1b374ecd62903bf062229bd93f8924af:

  gdbstub: Send a reply to the vKill packet. (2019-02-14 18:45:49 +0000)

----------------------------------------------------------------
target-arm queue:
 * gdbstub: Send a reply to the vKill packet
 * Improve codegen for neon min/max and saturating arithmetic
 * Fix a bug in clearing FPSCR exception status bits
 * hw/arm/armsse: Fix miswiring of expansion IRQs
 * hw/intc/armv7m_nvic: Allow byte accesses to SHPR1
 * MAINTAINERS: Remove Peter Crosthwaite from various entries
 * arm: Allow system registers for KVM guests to be changed by QEMU code
 * linux-user: support HWCAP_CPUID which exposes ID registers to user code
 * Fix bug in 128-bit cmpxchg for BE Arm guests
 * Implement (no-op) HACR_EL2
 * Fix CRn to be 14 for PMEVTYPER/PMEVCNTR

----------------------------------------------------------------
Aaron Lindsay OS (1):
      target/arm: Fix CRn to be 14 for PMEVTYPER/PMEVCNTR

Alex Bennée (5):
      target/arm: relax permission checks for HWCAP_CPUID registers
      target/arm: expose CPUID registers to userspace
      target/arm: expose MPIDR_EL1 to userspace
      target/arm: expose remaining CPUID registers as RAZ
      linux-user/elfload: enable HWCAP_CPUID for AArch64

Catherine Ho (1):
      target/arm: Fix int128_make128 lo, hi order in paired_cmpxchg64_be

Peter Maydell (5):
      target/arm: Implement HACR_EL2
      arm: Allow system registers for KVM guests to be changed by QEMU code
      MAINTAINERS: Remove Peter Crosthwaite from various entries
      hw/intc/armv7m_nvic: Allow byte accesses to SHPR1
      hw/arm/armsse: Fix miswiring of expansion IRQs

Richard Henderson (14):
      target/arm: Force result size into dp after operation
      target/arm: Restructure disas_fp_int_conv
      target/arm: Rely on optimization within tcg_gen_gvec_or
      target/arm: Use vector minmax expanders for aarch64
      target/arm: Use vector minmax expanders for aarch32
      target/arm: Use tcg integer min/max primitives for neon
      target/arm: Remove neon min/max helpers
      target/arm: Fix vfp_gdb_get/set_reg vs FPSCR
      target/arm: Fix arm_cpu_dump_state vs FPSCR
      target/arm: Split out flags setting from vfp compares
      target/arm: Fix set of bits kept in xregs[ARM_VFP_FPSCR]
      target/arm: Split out FPSCR.QC to a vector field
      target/arm: Use vector operations for saturation
      target/arm: Add missing clear_tail calls

Sandra Loosemore (1):
      gdbstub: Send a reply to the vKill packet.

 target/arm/cpu.h           |  50 ++++++++-
 target/arm/helper.h        |  45 +++++---
 target/arm/translate.h     |   4 +
 gdbstub.c                  |   1 +
 hw/arm/armsse.c            |   2 +-
 hw/intc/armv7m_nvic.c      |   4 +-
 linux-user/elfload.c       |   1 +
 target/arm/helper-a64.c    |   4 +-
 target/arm/helper.c        | 228 ++++++++++++++++++++++++++++++++--------
 target/arm/kvm32.c         |  20 +---
 target/arm/kvm64.c         |   2 +
 target/arm/machine.c       |   2 +-
 target/arm/neon_helper.c   |  14 +--
 target/arm/translate-a64.c | 171 +++++++++++++++---------------
 target/arm/translate-sve.c |   6 +-
 target/arm/translate.c     | 251 ++++++++++++++++++++++++++++++++-----------
 target/arm/vec_helper.c    | 134 +++++++++++++++++++++++-
 MAINTAINERS                |   4 -
 18 files changed, 687 insertions(+), 256 deletions(-)

From: Aaron Lindsay OS <aaron@os.amperecomputing.com>

This bug was introduced in:
commit 5ecdd3e47cadae83a62dc92b472f1fe163b56f59
    target/arm: Finish implementation of PM[X]EVCNTR and PM[X]EVTYPER

Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190205135129.19338-1-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
char *pmevtyper_name = g_strdup_printf("PMEVTYPER%d", i);
char *pmevtyper_el0_name = g_strdup_printf("PMEVTYPER%d_EL0", i);
ARMCPRegInfo pmev_regs[] = {
- { .name = pmevcntr_name, .cp = 15, .crn = 15,
+ { .name = pmevcntr_name, .cp = 15, .crn = 14,
.crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
.accessfn = pmreg_access },
{ .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 3, .crn = 15, .crm = 8 | (3 & (i >> 3)),
+ .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
.opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
.type = ARM_CP_IO,
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
.raw_readfn = pmevcntr_rawread,
.raw_writefn = pmevcntr_rawwrite },
- { .name = pmevtyper_name, .cp = 15, .crn = 15,
+ { .name = pmevtyper_name, .cp = 15, .crn = 14,
.crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
.accessfn = pmreg_access },
{ .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64,
- .opc0 = 3, .opc1 = 3, .crn = 15, .crm = 12 | (3 & (i >> 3)),
+ .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)),
.opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
.type = ARM_CP_IO,
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
--
2.20.1

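[Editorial note, not part of the patch: a small stand-alone C sketch of the architected PMEVCNTR<n>_EL0 numbering that the corrected .crn/.crm/.opc2 values reproduce. It only prints the encodings the loop above now registers.]

#include <stdio.h>

int main(void)
{
    /* PMEVCNTR<n>_EL0: op0=3, op1=3, CRn=14, CRm=0b10:n[4:3], opc2=n[2:0] */
    for (int i = 0; i < 31; i++) {
        int crn = 14;                   /* was mistakenly 15 before the fix */
        int crm = 8 | (3 & (i >> 3));
        int opc2 = i & 7;
        printf("PMEVCNTR%d_EL0 -> CRn=%d CRm=%d opc2=%d\n", i, crn, crm, opc2);
    }
    return 0;
}
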
HACR_EL2 is a register with IMPDEF behaviour, which allows
implementation specific trapping to EL2. Implement it as RAZ/WI,
since QEMU's implementation has no extra traps. This also
matches what h/w implementations like Cortex-A53 and A57 do.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190205181218.8995-1-peter.maydell@linaro.org
---
 target/arm/helper.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW,
.type = ARM_CP_CONST, .resetvalue = 0 },
+ { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
+ .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0,
.access = PL2_RW,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
.writefn = hcr_writelow },
+ { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
+ .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
.type = ARM_CP_ALIAS,
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
--
2.20.1

From: Catherine Ho <catherine.hecx@gmail.com>

The lo,hi order is different from the comments. And in commit
1ec182c33379 ("target/arm: Convert to HAVE_CMPXCHG128"), it changes
the original code logic. So just restore the old code logic before this
commit:
do_paired_cmpxchg64_be():
cmpv = int128_make128(env->exclusive_high, env->exclusive_val);
newv = int128_make128(new_hi, new_lo);

This fixes a bug that would only be visible for big-endian
AArch64 guest code.

Fixes: 1ec182c33379 ("target/arm: Convert to HAVE_CMPXCHG128")
Signed-off-by: Catherine Ho <catherine.hecx@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1548985244-24523-1-git-send-email-catherine.hecx@gmail.com
[PMM: added note that bug only affects BE guests]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-a64.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_be)(CPUARMState *env, uint64_t addr,
* High and low need to be switched here because this is not actually a
* 128bit store but two doublewords stored consecutively
*/
- Int128 cmpv = int128_make128(env->exclusive_val, env->exclusive_high);
- Int128 newv = int128_make128(new_lo, new_hi);
+ Int128 cmpv = int128_make128(env->exclusive_high, env->exclusive_val);
+ Int128 newv = int128_make128(new_hi, new_lo);
Int128 oldv;
uintptr_t ra = GETPC();
uint64_t o0, o1;
--
2.20.1

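[Editorial note, not part of the patch: a tiny stand-alone sketch of why the argument order matters. make128() below is a stand-in for QEMU's int128_make128(), which takes the low doubleword first, so the big-endian helper has to pass exclusive_high as the low half, exactly as the restored code does.]

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t lo, hi; } u128;   /* stand-in for Int128 */

static u128 make128(uint64_t lo, uint64_t hi)
{
    return (u128){ .lo = lo, .hi = hi };
}

int main(void)
{
    uint64_t exclusive_val  = 0x1111111111111111ULL; /* env->exclusive_val */
    uint64_t exclusive_high = 0x2222222222222222ULL; /* env->exclusive_high */

    /* matches the restored ordering in paired_cmpxchg64_be() */
    u128 cmpv = make128(exclusive_high, exclusive_val);

    printf("cmpv.lo=%016llx cmpv.hi=%016llx\n",
           (unsigned long long)cmpv.lo, (unsigned long long)cmpv.hi);
    return 0;
}
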
From: Richard Henderson <richard.henderson@linaro.org>

Rather than a complex set of cases testing for writeback,
adjust DP after performing the operation.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190206052857.5077-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
tcg_gen_or_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
gen_vfp_msr(tmp);
+ dp = 0; /* always a single precision result */
break;
}
case 7: /* vcvtt.f16.f32, vcvtt.f16.f64 */
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
tcg_gen_or_i32(tmp, tmp, tmp2);
tcg_temp_free_i32(tmp2);
gen_vfp_msr(tmp);
+ dp = 0; /* always a single precision result */
break;
}
case 8: /* cmp */
gen_vfp_cmp(dp);
+ dp = -1; /* no write back */
break;
case 9: /* cmpe */
gen_vfp_cmpe(dp);
+ dp = -1; /* no write back */
break;
case 10: /* cmpz */
gen_vfp_cmp(dp);
+ dp = -1; /* no write back */
break;
case 11: /* cmpez */
gen_vfp_F1_ld0(dp);
gen_vfp_cmpe(dp);
+ dp = -1; /* no write back */
break;
case 12: /* vrintr */
{
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
break;
}
case 15: /* single<->double conversion */
- if (dp)
+ if (dp) {
gen_helper_vfp_fcvtsd(cpu_F0s, cpu_F0d, cpu_env);
- else
+ } else {
gen_helper_vfp_fcvtds(cpu_F0d, cpu_F0s, cpu_env);
+ }
+ dp = !dp; /* result size is opposite */
break;
case 16: /* fuito */
gen_vfp_uito(dp, 0);
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
break;
case 24: /* ftoui */
gen_vfp_toui(dp, 0);
+ dp = 0; /* always an integer result */
break;
case 25: /* ftouiz */
gen_vfp_touiz(dp, 0);
+ dp = 0; /* always an integer result */
break;
case 26: /* ftosi */
gen_vfp_tosi(dp, 0);
+ dp = 0; /* always an integer result */
break;
case 27: /* ftosiz */
gen_vfp_tosiz(dp, 0);
+ dp = 0; /* always an integer result */
break;
case 28: /* ftosh */
if (!arm_dc_feature(s, ARM_FEATURE_VFP3)) {
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
return 1;
}

- /* Write back the result. */
- if (op == 15 && (rn >= 8 && rn <= 11)) {
- /* Comparison, do nothing. */
- } else if (op == 15 && dp && ((rn & 0x1c) == 0x18 ||
- (rn & 0x1e) == 0x6)) {
- /* VCVT double to int: always integer result.
- * VCVT double to half precision is always a single
- * precision result.
- */
- gen_mov_vreg_F0(0, rd);
- } else if (op == 15 && rn == 15) {
- /* conversion */
- gen_mov_vreg_F0(!dp, rd);
- } else {
+ /* Write back the result, if any. */
+ if (dp >= 0) {
gen_mov_vreg_F0(dp, rd);
}

--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

For opcodes 0-5, move some if conditions into the structure
of a switch statement. For opcodes 6 & 7, decode everything
at once with a second switch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190206052857.5077-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 94 ++++++++++++++++++++------------------
 1 file changed, 49 insertions(+), 45 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
int type = extract32(insn, 22, 2);
bool sbit = extract32(insn, 29, 1);
bool sf = extract32(insn, 31, 1);
+ bool itof = false;

if (sbit) {
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
}

- if (opcode > 5) {
- /* FMOV */
- bool itof = opcode & 1;
-
- if (rmode >= 2) {
- unallocated_encoding(s);
- return;
- }
-
- switch (sf << 3 | type << 1 | rmode) {
- case 0x0: /* 32 bit */
- case 0xa: /* 64 bit */
- case 0xd: /* 64 bit to top half of quad */
- break;
- case 0x6: /* 16-bit float, 32-bit int */
- case 0xe: /* 16-bit float, 64-bit int */
- if (dc_isar_feature(aa64_fp16, s)) {
- break;
- }
- /* fallthru */
- default:
- /* all other sf/type/rmode combinations are invalid */
- unallocated_encoding(s);
- return;
- }
-
- if (!fp_access_check(s)) {
- return;
- }
- handle_fmov(s, rd, rn, type, itof);
- } else {
- /* actual FP conversions */
- bool itof = extract32(opcode, 1, 1);
-
- if (rmode != 0 && opcode > 1) {
- unallocated_encoding(s);
- return;
+ switch (opcode) {
+ case 2: /* SCVTF */
+ case 3: /* UCVTF */
+ itof = true;
+ /* fallthru */
+ case 4: /* FCVTAS */
+ case 5: /* FCVTAU */
+ if (rmode != 0) {
+ goto do_unallocated;
}
+ /* fallthru */
+ case 0: /* FCVT[NPMZ]S */
+ case 1: /* FCVT[NPMZ]U */
switch (type) {
case 0: /* float32 */
case 1: /* float64 */
break;
case 3: /* float16 */
- if (dc_isar_feature(aa64_fp16, s)) {
- break;
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ goto do_unallocated;
}
- /* fallthru */
+ break;
default:
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
}
-
if (!fp_access_check(s)) {
return;
}
handle_fpfpcvt(s, rd, rn, opcode, itof, rmode, 64, sf, type);
+ break;
+
+ default:
+ switch (sf << 7 | type << 5 | rmode << 3 | opcode) {
+ case 0b01100110: /* FMOV half <-> 32-bit int */
+ case 0b01100111:
+ case 0b11100110: /* FMOV half <-> 64-bit int */
+ case 0b11100111:
+ if (!dc_isar_feature(aa64_fp16, s)) {
+ goto do_unallocated;
+ }
+ /* fallthru */
+ case 0b00000110: /* FMOV 32-bit */
+ case 0b00000111:
+ case 0b10100110: /* FMOV 64-bit */
+ case 0b10100111:
+ case 0b11001110: /* FMOV top half of 128-bit */
+ case 0b11001111:
+ if (!fp_access_check(s)) {
+ return;
+ }
+ itof = opcode & 1;
+ handle_fmov(s, rd, rn, type, itof);
+ break;
+
+ default:
+ do_unallocated:
+ unallocated_encoding(s);
+ return;
+ }
+ break;
}
}

--
2.20.1

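[Editorial note, not part of the patch: a tiny C illustration of how the second switch's key is assembled from the decoded fields. The two sample calls reproduce the 0b10100110 (FMOV 64-bit) and 0b11001111 (FMOV to the top half of a 128-bit register) cases listed above.]

#include <stdio.h>

static unsigned fmov_key(unsigned sf, unsigned type, unsigned rmode,
                         unsigned opcode)
{
    return sf << 7 | type << 5 | rmode << 3 | opcode;
}

int main(void)
{
    /* FMOV Xd, Dn: sf=1, type=1, rmode=0, opcode=6 -> 0xa6 (0b10100110) */
    printf("FMOV 64-bit key:   0x%02x\n", fmov_key(1, 1, 0, 6));
    /* FMOV Vd.D[1], Xn: sf=1, type=2, rmode=1, opcode=7 -> 0xcf (0b11001111) */
    printf("FMOV top-half key: 0x%02x\n", fmov_key(1, 2, 1, 7));
    return 0;
}
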
From: Alex Bennée <alex.bennee@linaro.org>

Although technically not visible to userspace the kernel does make
them visible via a trap and emulate ABI. We provide a new permission
mask (PL0U_R) which maps to PL0_R for CONFIG_USER builds and adjust
the minimum permission check accordingly.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-2-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 12 ++++++++++++
 target/arm/helper.c |  6 +++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool cptype_valid(int cptype)
#define PL0_R (0x02 | PL1_R)
#define PL0_W (0x01 | PL1_W)

+/*
+ * For user-mode some registers are accessible to EL0 via a kernel
+ * trap-and-emulate ABI. In this case we define the read permissions
+ * as actually being PL0_R. However some bits of any given register
+ * may still be masked.
+ */
+#ifdef CONFIG_USER_ONLY
+#define PL0U_R PL0_R
+#else
+#define PL0U_R PL1_R
+#endif
+
#define PL3_RW (PL3_R | PL3_W)
#define PL2_RW (PL2_R | PL2_W)
#define PL1_RW (PL1_R | PL1_W)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
if (r->state != ARM_CP_STATE_AA32) {
int mask = 0;
switch (r->opc1) {
- case 0: case 1: case 2:
+ case 0:
+ /* min_EL EL1, but some accessible to EL0 via kernel ABI */
+ mask = PL0U_R | PL1_RW;
+ break;
+ case 1: case 2:
/* min_EL EL1 */
mask = PL1_RW;
break;
--
2.20.1

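[Editorial note, not part of the series: a minimal user-space probe of the kernel trap-and-emulate ABI referred to above, assuming an AArch64 Linux kernel that advertises HWCAP_CPUID. The MRS instructions below trap to the kernel, which returns sanitised values; this is the behaviour linux-user emulation needs to mirror.]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t midr, isar0;

    __asm__ volatile("mrs %0, MIDR_EL1" : "=r"(midr));
    __asm__ volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r"(isar0));

    printf("MIDR_EL1         = 0x%016llx\n", (unsigned long long)midr);
    printf("ID_AA64ISAR0_EL1 = 0x%016llx\n", (unsigned long long)isar0);
    return 0;
}
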
From: Alex Bennée <alex.bennee@linaro.org>

A number of CPUID registers are exposed to userspace by modern Linux
kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's
user-mode emulation we don't need to emulate the kernels trap but just
return the value the trap would have done. To avoid too much #ifdef
hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs)
before defining the registers. The modify routine is driven by a
simple data structure which describes which bits are exported and
which are fixed.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-3-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 21 ++++++++++++++++
 target/arm/helper.c | 59 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
}
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);

+/*
+ * Definition of an ARM co-processor register as viewed from
+ * userspace. This is used for presenting sanitised versions of
+ * registers to userspace when emulating the Linux AArch64 CPU
+ * ID/feature ABI (advertised as HWCAP_CPUID).
+ */
+typedef struct ARMCPRegUserSpaceInfo {
+ /* Name of register */
+ const char *name;
+
+ /* Only some bits are exported to user space */
+ uint64_t exported_bits;
+
+ /* Fixed bits are applied after the mask */
+ uint64_t fixed_bits;
+} ARMCPRegUserSpaceInfo;
+
+#define REGUSERINFO_SENTINEL { .name = NULL }
+
+void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
+
/* CPWriteFn that can be used to implement writes-ignored behaviour */
void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
.resetvalue = cpu->pmceid1 },
REGINFO_SENTINEL
};
+#ifdef CONFIG_USER_ONLY
+ ARMCPRegUserSpaceInfo v8_user_idregs[] = {
+ { .name = "ID_AA64PFR0_EL1",
+ .exported_bits = 0x000f000f00ff0000,
+ .fixed_bits = 0x0000000000000011 },
+ { .name = "ID_AA64PFR1_EL1",
+ .exported_bits = 0x00000000000000f0 },
+ { .name = "ID_AA64ZFR0_EL1" },
+ { .name = "ID_AA64MMFR0_EL1",
+ .fixed_bits = 0x00000000ff000000 },
+ { .name = "ID_AA64MMFR1_EL1" },
+ { .name = "ID_AA64DFR0_EL1",
+ .fixed_bits = 0x0000000000000006 },
+ { .name = "ID_AA64DFR1_EL1" },
+ { .name = "ID_AA64AFR0_EL1" },
+ { .name = "ID_AA64AFR1_EL1" },
+ { .name = "ID_AA64ISAR0_EL1",
+ .exported_bits = 0x00fffffff0fffff0 },
+ { .name = "ID_AA64ISAR1_EL1",
+ .exported_bits = 0x000000f0ffffffff },
+ REGUSERINFO_SENTINEL
+ };
+ modify_arm_cp_regs(v8_idregs, v8_user_idregs);
+#endif
/* RVBAR_EL1 is only implemented if EL1 is the highest EL */
if (!arm_feature(env, ARM_FEATURE_EL3) &&
!arm_feature(env, ARM_FEATURE_EL2)) {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
.opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
.type = ARM_CP_NOP | ARM_CP_OVERRIDE
};
+#ifdef CONFIG_USER_ONLY
+ ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
+ { .name = "MIDR_EL1",
+ .exported_bits = 0x00000000ffffffff },
+ { .name = "REVIDR_EL1" },
+ REGUSERINFO_SENTINEL
+ };
+ modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
+#endif
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
arm_feature(env, ARM_FEATURE_STRONGARM)) {
ARMCPRegInfo *r;
@@ -XXX,XX +XXX,XX @@ void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
}
}

+/*
+ * Modify ARMCPRegInfo for access from userspace.
+ *
+ * This is a data driven modification directed by
+ * ARMCPRegUserSpaceInfo. All registers become ARM_CP_CONST as
+ * user-space cannot alter any values and dynamic values pertaining to
+ * execution state are hidden from user space view anyway.
+ */
+void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
+{
+ const ARMCPRegUserSpaceInfo *m;
+ ARMCPRegInfo *r;
+
+ for (m = mods; m->name; m++) {
+ for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
+ if (strcmp(r->name, m->name) == 0) {
+ r->type = ARM_CP_CONST;
+ r->access = PL0U_R;
+ r->resetvalue &= m->exported_bits;
+ r->resetvalue |= m->fixed_bits;
+ break;
+ }
+ }
+ }
+}
+
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp)
{
return g_hash_table_lookup(cpregs, &encoded_cp);
--
2.20.1

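[Editorial note, not from the patch; the input value is invented: a short sketch of what the exported_bits/fixed_bits pair in ARMCPRegUserSpaceInfo does to a register value, using the ID_AA64PFR0_EL1 entry above as the example.]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t resetvalue    = 0x0000000000112222ULL; /* hypothetical CPU value */
    uint64_t exported_bits = 0x000f000f00ff0000ULL; /* from v8_user_idregs */
    uint64_t fixed_bits    = 0x0000000000000011ULL;

    /* the same masking modify_arm_cp_regs() applies to r->resetvalue */
    uint64_t user_view = (resetvalue & exported_bits) | fixed_bits;

    printf("user-space view of ID_AA64PFR0_EL1: 0x%016llx\n",
           (unsigned long long)user_view);
    return 0;
}
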
From: Alex Bennée <alex.bennee@linaro.org>

As this is a single register we could expose it with a simple ifdef
but we use the existing modify_arm_cp_regs mechanism for consistency.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-4-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t mpidr_read(CPUARMState *env, const ARMCPRegInfo *ri)
return mpidr_read_val(env);
}

-static const ARMCPRegInfo mpidr_cp_reginfo[] = {
- { .name = "MPIDR", .state = ARM_CP_STATE_BOTH,
- .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
- .access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
- REGINFO_SENTINEL
-};
-
static const ARMCPRegInfo lpae_cp_reginfo[] = {
/* NOP AMAIR0/1 */
{ .name = "AMAIR0", .state = ARM_CP_STATE_BOTH,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
}

if (arm_feature(env, ARM_FEATURE_MPIDR)) {
+ ARMCPRegInfo mpidr_cp_reginfo[] = {
+ { .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
+ .access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
+ REGINFO_SENTINEL
+ };
+#ifdef CONFIG_USER_ONLY
+ ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
+ { .name = "MPIDR_EL1",
+ .fixed_bits = 0x0000000080000000 },
+ REGUSERINFO_SENTINEL
+ };
+ modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo);
+#endif
define_arm_cp_regs(cpu, mpidr_cp_reginfo);
}

--
2.20.1

From: Alex Bennée <alex.bennee@linaro.org>

There are a whole bunch more registers in the CPUID space which are
currently not used but are exposed as RAZ. To avoid too much
duplication we expand ARMCPRegUserSpaceInfo to understand glob
patterns so we only need one entry to tweak whole ranges of registers.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-5-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  3 +++
 target/arm/helper.c | 26 +++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCPRegUserSpaceInfo {
/* Name of register */
const char *name;

+ /* Is the name actually a glob pattern */
+ bool is_glob;
+
/* Only some bits are exported to user space */
uint64_t exported_bits;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
.fixed_bits = 0x0000000000000011 },
{ .name = "ID_AA64PFR1_EL1",
.exported_bits = 0x00000000000000f0 },
+ { .name = "ID_AA64PFR*_EL1_RESERVED",
+ .is_glob = true },
201
38
.exported_bits = 0x00000000000000f0 },
202
-#define CONV_ITOF(name, fsz, sign) \
39
+ { .name = "ID_AA64PFR*_EL1_RESERVED",
203
- float##fsz HELPER(name)(uint32_t x, void *fpstp) \
40
+ .is_glob = true },
204
-{ \
41
{ .name = "ID_AA64ZFR0_EL1" },
205
- float_status *fpst = fpstp; \
42
{ .name = "ID_AA64MMFR0_EL1",
206
- return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
43
.fixed_bits = 0x00000000ff000000 },
207
+#define CONV_ITOF(name, ftype, fsz, sign) \
44
{ .name = "ID_AA64MMFR1_EL1" },
208
+ftype HELPER(name)(uint32_t x, void *fpstp) \
45
+ { .name = "ID_AA64MMFR*_EL1_RESERVED",
209
+{ \
46
+ .is_glob = true },
210
+ float_status *fpst = fpstp; \
47
{ .name = "ID_AA64DFR0_EL1",
211
+ return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
48
.fixed_bits = 0x0000000000000006 },
212
}
49
{ .name = "ID_AA64DFR1_EL1" },
213
50
- { .name = "ID_AA64AFR0_EL1" },
214
-#define CONV_FTOI(name, fsz, sign, round) \
51
- { .name = "ID_AA64AFR1_EL1" },
215
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
52
+ { .name = "ID_AA64DFR*_EL1_RESERVED",
216
-{ \
53
+ .is_glob = true },
217
- float_status *fpst = fpstp; \
54
+ { .name = "ID_AA64AFR*",
218
- if (float##fsz##_is_any_nan(x)) { \
55
+ .is_glob = true },
219
- float_raise(float_flag_invalid, fpst); \
56
{ .name = "ID_AA64ISAR0_EL1",
220
- return 0; \
57
.exported_bits = 0x00fffffff0fffff0 },
221
- } \
58
{ .name = "ID_AA64ISAR1_EL1",
222
- return float##fsz##_to_##sign##int32##round(x, fpst); \
59
.exported_bits = 0x000000f0ffffffff },
223
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
60
+ { .name = "ID_AA64ISAR*_EL1_RESERVED",
224
+uint32_t HELPER(name)(ftype x, void *fpstp) \
61
+ .is_glob = true },
225
+{ \
62
REGUSERINFO_SENTINEL
226
+ float_status *fpst = fpstp; \
63
};
227
+ if (float##fsz##_is_any_nan(x)) { \
64
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
228
+ float_raise(float_flag_invalid, fpst); \
65
@@ -XXX,XX +XXX,XX @@ void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
229
+ return 0; \
66
ARMCPRegInfo *r;
230
+ } \
67
231
+ return float##fsz##_to_##sign##int32##round(x, fpst); \
68
for (m = mods; m->name; m++) {
232
}
69
+ GPatternSpec *pat = NULL;
233
70
+ if (m->is_glob) {
234
-#define FLOAT_CONVS(name, p, fsz, sign) \
71
+ pat = g_pattern_spec_new(m->name);
235
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
72
+ }
236
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
73
for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
237
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
74
- if (strcmp(r->name, m->name) == 0) {
238
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
75
+ if (pat && g_pattern_match_string(pat, r->name)) {
239
+ CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
76
+ r->type = ARM_CP_CONST;
240
+ CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
77
+ r->access = PL0U_R;
241
+ CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
78
+ r->resetvalue = 0;
242
79
+ /* continue */
243
-FLOAT_CONVS(si, h, 16, )
80
+ } else if (strcmp(r->name, m->name) == 0) {
244
-FLOAT_CONVS(si, s, 32, )
81
r->type = ARM_CP_CONST;
245
-FLOAT_CONVS(si, d, 64, )
82
r->access = PL0U_R;
246
-FLOAT_CONVS(ui, h, 16, u)
83
r->resetvalue &= m->exported_bits;
247
-FLOAT_CONVS(ui, s, 32, u)
84
@@ -XXX,XX +XXX,XX @@ void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
248
-FLOAT_CONVS(ui, d, 64, u)
85
break;
249
+FLOAT_CONVS(si, h, uint32_t, 16, )
86
}
250
+FLOAT_CONVS(si, s, float32, 32, )
87
}
251
+FLOAT_CONVS(si, d, float64, 64, )
88
+ if (pat) {
252
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
89
+ g_pattern_spec_free(pat);
253
+FLOAT_CONVS(ui, s, float32, 32, u)
90
+ }
254
+FLOAT_CONVS(ui, d, float64, 64, u)
255
256
#undef CONV_ITOF
257
#undef CONV_FTOI
258
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
259
return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
260
}
261
262
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
263
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
264
{
265
return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
266
}
267
268
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
269
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
270
{
271
return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
272
}
273
274
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
275
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
276
{
277
return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
278
}
279
280
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
281
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
282
{
283
return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
284
}
285
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
286
}
91
}
287
}
92
}
288
93
289
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
290
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
291
{
292
return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
293
}
294
295
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
296
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
297
{
298
return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
299
}
300
301
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
302
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
303
{
304
return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
305
}
306
307
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
308
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
309
{
310
return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
311
}
312
313
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
314
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
315
{
316
return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
317
}
318
319
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
320
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
321
{
322
return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
323
}
324
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
325
}
326
327
/* Half precision conversions. */
328
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
329
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
330
{
331
/* Squash FZ16 to 0 for the duration of conversion. In this case,
332
* it would affect flushing input denormals.
333
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
334
return r;
335
}
336
337
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
338
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
339
{
340
/* Squash FZ16 to 0 for the duration of conversion. In this case,
341
* it would affect flushing output denormals.
342
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
343
return r;
344
}
345
346
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
347
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
348
{
349
/* Squash FZ16 to 0 for the duration of conversion. In this case,
350
* it would affect flushing input denormals.
351
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
352
return r;
353
}
354
355
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
356
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
357
{
358
/* Squash FZ16 to 0 for the duration of conversion. In this case,
359
* it would affect flushing output denormals.
360
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
361
g_assert_not_reached();
362
}
363
364
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
365
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
366
{
367
float_status *fpst = fpstp;
368
float16 f16 = float16_squash_input_denormal(input, fpst);
369
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
370
return extract64(estimate, 0, 8) << 44;
371
}
372
373
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
374
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
375
{
376
float_status *s = fpstp;
377
float16 f16 = float16_squash_input_denormal(input, s);
378
--
2.17.1

--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_do_translate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-13-peter.maydell@linaro.org
---
exec.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

From: Alex Bennée <alex.bennee@linaro.org>

Userspace programs should (in theory) query the ELF HWCAP before
probing these registers. Now that we have implemented them all, make
it public.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190205190224.2198-6-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/elfload.c | 1 +
1 file changed, 1 insertion(+)
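For illustration only (this snippet is not part of the patch): once
HWCAP_CPUID is advertised, an AArch64 guest user-space program is
expected to gate its ID-register reads on it, roughly like this:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void)
    {
        uint64_t midr;

        if (!(getauxval(AT_HWCAP) & HWCAP_CPUID)) {
            /* ID registers not exposed; fall back to hwcap bits only */
            return 1;
        }
        /* The EL0 read traps and is emulated, returning the ID value */
        asm("mrs %0, MIDR_EL1" : "=r" (midr));
        printf("MIDR_EL1 = %#llx\n", (unsigned long long)midr);
        return 0;
    }

(HWCAP_CPUID and the mrs trap-and-emulate behaviour are the kernel
facilities that linux-user mirrors here.)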
diff --git a/exec.c b/exec.c
15
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
17
--- a/linux-user/elfload.c
15
+++ b/exec.c
18
+++ b/linux-user/elfload.c
16
@@ -XXX,XX +XXX,XX @@ unassigned:
19
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
17
* @is_write: whether the translation operation is for write
20
18
* @is_mmio: whether this can be MMIO, set true if it can
21
hwcaps |= ARM_HWCAP_A64_FP;
19
* @target_as: the address space targeted by the IOMMU
22
hwcaps |= ARM_HWCAP_A64_ASIMD;
20
+ * @attrs: memory transaction attributes
23
+ hwcaps |= ARM_HWCAP_A64_CPUID;
21
*
24
22
* This function is called from RCU critical section
25
/* probe for the extra features */
23
*/
26
#define GET_FEATURE_ID(feat, hwcap) \
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
25
hwaddr *page_mask_out,
26
bool is_write,
27
bool is_mmio,
28
- AddressSpace **target_as)
29
+ AddressSpace **target_as,
30
+ MemTxAttrs attrs)
31
{
32
MemoryRegionSection *section;
33
IOMMUMemoryRegion *iommu_mr;
34
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
35
* but page mask.
36
*/
37
section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
38
- NULL, &page_mask, is_write, false, &as);
39
+ NULL, &page_mask, is_write, false, &as,
40
+ attrs);
41
42
/* Illegal translation */
43
if (section.mr == &io_mem_unassigned) {
44
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
45
46
/* This can be MMIO, so setup MMIO bit. */
47
section = flatview_do_translate(fv, addr, xlat, plen, NULL,
48
- is_write, true, &as);
49
+ is_write, true, &as, attrs);
50
mr = section.mr;
51
52
if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
53
--
2.17.1

--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_map().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
---
include/exec/memory.h | 3 ++-
include/sysemu/dma.h | 3 ++-
exec.c | 6 ++++--
target/ppc/mmu-hash64.c | 3 ++-
4 files changed, 10 insertions(+), 5 deletions(-)

At the moment the Arm implementations of kvm_arch_{get,put}_registers()
don't support having QEMU change the values of system registers
(aka coprocessor registers for AArch32). This is because although
kvm_arch_get_registers() calls write_list_to_cpustate() to
update the CPU state struct fields (so QEMU code can read the
values in the usual way), kvm_arch_put_registers() does not
call write_cpustate_to_list(), meaning that any changes to
the CPU state struct fields will not be passed back to KVM.

The rationale for this design is documented in a comment in the
AArch32 kvm_arch_put_registers() -- writing the values in the
cpregs list into the CPU state struct is "lossy" because the
write of a register might not succeed, and so if we blindly
copy the CPU state values back again we will incorrectly
change register values for the guest. The assumption was that
no QEMU code would need to write to the registers.

However, when we implemented debug support for KVM guests, we
broke that assumption: the code to handle "set the guest up
to take a breakpoint exception" does so by updating various
guest registers including ESR_EL1.

Support this by making kvm_arch_put_registers() synchronize
CPU state back into the list. We sync only those registers
where the initial write succeeds, which should be sufficient.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Dongjiu Geng <gengdongjiu@huawei.com>
---
target/arm/cpu.h | 9 ++++++++-
target/arm/helper.c | 27 +++++++++++++++++++++++++--
target/arm/kvm32.c | 20 ++------------------
target/arm/kvm64.c | 2 ++
target/arm/machine.c | 2 +-
5 files changed, 38 insertions(+), 22 deletions(-)
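To make the "only those registers where the initial write succeeds"
rule concrete, here is the per-register decision logic, condensed from
the target/arm/helper.c hunk below (loop context omitted):

    newval = read_raw_cp_reg(&cpu->env, ri);
    if (kvm_sync) {
        /* Only sync if the previous list->cpustate sync succeeded. */
        uint64_t oldval = cpu->cpreg_values[i];

        if (oldval == newval) {
            continue;            /* value unchanged, nothing to push back */
        }
        write_raw_cp_reg(&cpu->env, ri, oldval);
        if (read_raw_cp_reg(&cpu->env, ri) != oldval) {
            continue;            /* raw write didn't stick: keep list value */
        }
        write_raw_cp_reg(&cpu->env, ri, newval);
    }
    cpu->cpreg_values[i] = newval;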
diff --git a/include/exec/memory.h b/include/exec/memory.h
39
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/memory.h
41
--- a/target/arm/cpu.h
20
+++ b/include/exec/memory.h
42
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
43
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu);
22
* @addr: address within that address space
44
/**
23
* @plen: pointer to length of buffer; updated on return
45
* write_cpustate_to_list:
24
* @is_write: indicates the transfer direction
46
* @cpu: ARMCPU
25
+ * @attrs: memory attributes
47
+ * @kvm_sync: true if this is for syncing back to KVM
48
*
49
* For each register listed in the ARMCPU cpreg_indexes list, write
50
* its value from the ARMCPUState structure into the cpreg_values list.
51
* This is used to copy info from TCG's working data structures into
52
* KVM or for outbound migration.
53
*
54
+ * @kvm_sync is true if we are doing this in order to sync the
55
+ * register state back to KVM. In this case we will only update
56
+ * values in the list if the previous list->cpustate sync actually
57
+ * successfully wrote the CPU state. Otherwise we will keep the value
58
+ * that is in the list.
59
+ *
60
* Returns: true if all register values were read correctly,
61
* false if some register was unknown or could not be read.
62
* Note that we do not stop early on failure -- we will attempt
63
* reading all registers in the list.
26
*/
64
*/
27
void *address_space_map(AddressSpace *as, hwaddr addr,
65
-bool write_cpustate_to_list(ARMCPU *cpu);
28
- hwaddr *plen, bool is_write);
66
+bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
29
+ hwaddr *plen, bool is_write, MemTxAttrs attrs);
67
30
68
#define ARM_CPUID_TI915T 0x54029152
31
/* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
69
#define ARM_CPUID_TI925T 0x54029252
32
*
70
diff --git a/target/arm/helper.c b/target/arm/helper.c
33
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
34
index XXXXXXX..XXXXXXX 100644
71
index XXXXXXX..XXXXXXX 100644
35
--- a/include/sysemu/dma.h
72
--- a/target/arm/helper.c
36
+++ b/include/sysemu/dma.h
73
+++ b/target/arm/helper.c
37
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
74
@@ -XXX,XX +XXX,XX @@ static bool raw_accessors_invalid(const ARMCPRegInfo *ri)
38
hwaddr xlen = *len;
75
return true;
39
void *p;
40
41
- p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
42
+ p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
43
+ MEMTXATTRS_UNSPECIFIED);
44
*len = xlen;
45
return p;
46
}
76
}
47
diff --git a/exec.c b/exec.c
77
78
-bool write_cpustate_to_list(ARMCPU *cpu)
79
+bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync)
80
{
81
/* Write the coprocessor state from cpu->env to the (index,value) list. */
82
int i;
83
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu)
84
for (i = 0; i < cpu->cpreg_array_len; i++) {
85
uint32_t regidx = kvm_to_cpreg_id(cpu->cpreg_indexes[i]);
86
const ARMCPRegInfo *ri;
87
+ uint64_t newval;
88
89
ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
90
if (!ri) {
91
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu)
92
if (ri->type & ARM_CP_NO_RAW) {
93
continue;
94
}
95
- cpu->cpreg_values[i] = read_raw_cp_reg(&cpu->env, ri);
96
+
97
+ newval = read_raw_cp_reg(&cpu->env, ri);
98
+ if (kvm_sync) {
99
+ /*
100
+ * Only sync if the previous list->cpustate sync succeeded.
101
+ * Rather than tracking the success/failure state for every
102
+ * item in the list, we just recheck "does the raw write we must
103
+ * have made in write_list_to_cpustate() read back OK" here.
104
+ */
105
+ uint64_t oldval = cpu->cpreg_values[i];
106
+
107
+ if (oldval == newval) {
108
+ continue;
109
+ }
110
+
111
+ write_raw_cp_reg(&cpu->env, ri, oldval);
112
+ if (read_raw_cp_reg(&cpu->env, ri) != oldval) {
113
+ continue;
114
+ }
115
+
116
+ write_raw_cp_reg(&cpu->env, ri, newval);
117
+ }
118
+ cpu->cpreg_values[i] = newval;
119
}
120
return ok;
121
}
122
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
48
index XXXXXXX..XXXXXXX 100644
123
index XXXXXXX..XXXXXXX 100644
49
--- a/exec.c
124
--- a/target/arm/kvm32.c
50
+++ b/exec.c
125
+++ b/target/arm/kvm32.c
51
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
126
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
52
void *address_space_map(AddressSpace *as,
127
return ret;
53
hwaddr addr,
128
}
54
hwaddr *plen,
129
55
- bool is_write)
130
- /* Note that we do not call write_cpustate_to_list()
56
+ bool is_write,
131
- * here, so we are only writing the tuple list back to
57
+ MemTxAttrs attrs)
132
- * KVM. This is safe because nothing can change the
58
{
133
- * CPUARMState cp15 fields (in particular gdb accesses cannot)
59
hwaddr len = *plen;
134
- * and so there are no changes to sync. In fact syncing would
60
hwaddr l, xlat;
135
- * be wrong at this point: for a constant register where TCG and
61
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
136
- * KVM disagree about its value, the preceding write_list_to_cpustate()
62
hwaddr *plen,
137
- * would not have had any effect on the CPUARMState value (since the
63
int is_write)
138
- * register is read-only), and a write_cpustate_to_list() here would
64
{
139
- * then try to write the TCG value back into KVM -- this would either
65
- return address_space_map(&address_space_memory, addr, plen, is_write);
140
- * fail or incorrectly change the value the guest sees.
66
+ return address_space_map(&address_space_memory, addr, plen, is_write,
141
- *
67
+ MEMTXATTRS_UNSPECIFIED);
142
- * If we ever want to allow the user to modify cp15 registers via
68
}
143
- * the gdb stub, we would need to be more clever here (for instance
69
144
- * tracking the set of registers kvm_arch_get_registers() successfully
70
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
145
- * managed to update the CPUARMState with, and only allowing those
71
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
146
- * to be written back up into the kernel).
147
- */
148
+ write_cpustate_to_list(cpu, true);
149
+
150
if (!write_list_to_kvmstate(cpu, level)) {
151
return EINVAL;
152
}
153
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
72
index XXXXXXX..XXXXXXX 100644
154
index XXXXXXX..XXXXXXX 100644
73
--- a/target/ppc/mmu-hash64.c
155
--- a/target/arm/kvm64.c
74
+++ b/target/ppc/mmu-hash64.c
156
+++ b/target/arm/kvm64.c
75
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
157
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
76
return NULL;
158
return ret;
77
}
159
}
78
160
79
- hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
161
+ write_cpustate_to_list(cpu, true);
80
+ hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
162
+
81
+ MEMTXATTRS_UNSPECIFIED);
163
if (!write_list_to_kvmstate(cpu, level)) {
82
if (plen < (n * HASH_PTE_SIZE_64)) {
164
return EINVAL;
83
hw_error("%s: Unable to map all requested HPTEs\n", __func__);
84
}
165
}
166
diff --git a/target/arm/machine.c b/target/arm/machine.c
167
index XXXXXXX..XXXXXXX 100644
168
--- a/target/arm/machine.c
169
+++ b/target/arm/machine.c
170
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
171
abort();
172
}
173
} else {
174
- if (!write_cpustate_to_list(cpu)) {
175
+ if (!write_cpustate_to_list(cpu, false)) {
176
/* This should never fail. */
177
abort();
178
}
85
--
2.17.1

--
2.20.1
Add entries to MAINTAINERS to cover the newer MPS2 boards and
the new devices they use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
---
MAINTAINERS | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

Peter Crosthwaite hasn't had the bandwidth to do code review or
other QEMU work for some time now -- remove his email address
from MAINTAINERS file entries so we don't bombard him with
patch emails.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190207181422.4907-1-peter.maydell@linaro.org
---
MAINTAINERS | 4 ----
1 file changed, 4 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
12
diff --git a/MAINTAINERS b/MAINTAINERS
11
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
12
--- a/MAINTAINERS
14
--- a/MAINTAINERS
13
+++ b/MAINTAINERS
15
+++ b/MAINTAINERS
14
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
16
@@ -XXX,XX +XXX,XX @@ Guest CPU cores (TCG):
15
F: include/hw/timer/cmsdk-apb-timer.h
17
----------------------
16
F: hw/char/cmsdk-apb-uart.c
18
Overall
17
F: include/hw/char/cmsdk-apb-uart.h
19
L: qemu-devel@nongnu.org
18
+F: hw/misc/tz-ppc.c
20
-M: Peter Crosthwaite <crosthwaite.peter@gmail.com>
19
+F: include/hw/misc/tz-ppc.h
21
M: Richard Henderson <rth@twiddle.net>
20
22
R: Paolo Bonzini <pbonzini@redhat.com>
21
ARM cores
22
M: Peter Maydell <peter.maydell@linaro.org>
23
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
24
L: qemu-arm@nongnu.org
25
S: Maintained
23
S: Maintained
26
F: hw/arm/mps2.c
24
@@ -XXX,XX +XXX,XX @@ F: tests/virtio-scsi-test.c
27
-F: hw/misc/mps2-scc.c
25
T: git https://github.com/bonzini/qemu.git scsi-next
28
-F: include/hw/misc/mps2-scc.h
26
29
+F: hw/arm/mps2-tz.c
27
SSI
30
+F: hw/misc/mps2-*.c
28
-M: Peter Crosthwaite <crosthwaite.peter@gmail.com>
31
+F: include/hw/misc/mps2-*.h
29
M: Alistair Francis <alistair@alistair23.me>
32
+F: hw/arm/iotkit.c
30
S: Maintained
33
+F: include/hw/arm/iotkit.h
31
F: hw/ssi/*
34
32
@@ -XXX,XX +XXX,XX @@ F: tests/m25p80-test.c
35
Musicpal
33
36
M: Jan Kiszka <jan.kiszka@web.de>
34
Xilinx SPI
35
M: Alistair Francis <alistair@alistair23.me>
36
-M: Peter Crosthwaite <crosthwaite.peter@gmail.com>
37
S: Maintained
38
F: hw/ssi/xilinx_*
39
40
@@ -XXX,XX +XXX,XX @@ F: qom/cpu.c
41
F: include/qom/cpu.h
42
43
Device Tree
44
-M: Peter Crosthwaite <crosthwaite.peter@gmail.com>
45
M: Alexander Graf <agraf@suse.de>
46
S: Maintained
47
F: device_tree.c
37
--
48
--
38
2.17.1
49
2.20.1
39
50
40
51
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
The code for handling the NVIC SHPR1 register intends to permit
2
add MemTxAttrs as an argument to memory_region_access_valid().
2
byte and halfword accesses (as the architecture requires). However
3
Its callers either have an attrs value to hand, or don't care
3
the 'case' line for it only lists the base address of the
4
and can use MEMTXATTRS_UNSPECIFIED.
4
register, so attempts to access bytes other than the first one
5
end up in the "bad write" default logic. This bug was added
6
accidentally when we split out the SHPR1 logic from SHPR2 and
7
SHPR3 to support v6M.
5
8
6
The callsite in flatview_access_valid() is part of a recursive
9
Fixes: 7c9140afd594 ("nvic: Handle ARMv6-M SCS reserved registers")
7
loop flatview_access_valid() -> memory_region_access_valid() ->
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
subpage_accepts() -> flatview_access_valid(); we make it pass
11
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
12
---
10
have plumbed an attrs parameter through the rest of the loop
13
The Zephyr RTOS happens to access SHPR1 byte at a time,
11
and we can add an attrs parameter to flatview_access_valid().
14
which is how I spotted this.
15
---
16
hw/intc/armv7m_nvic.c | 4 ++--
17
1 file changed, 2 insertions(+), 2 deletions(-)
12
18
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
14
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
17
---
18
include/exec/memory-internal.h | 3 ++-
19
exec.c | 4 +++-
20
hw/s390x/s390-pci-inst.c | 3 ++-
21
memory.c | 7 ++++---
22
4 files changed, 11 insertions(+), 6 deletions(-)
23
24
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
25
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory-internal.h
21
--- a/hw/intc/armv7m_nvic.c
27
+++ b/include/exec/memory-internal.h
22
+++ b/hw/intc/armv7m_nvic.c
28
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
23
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
29
extern const MemoryRegionOps unassigned_mem_ops;
30
31
bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
32
- unsigned size, bool is_write);
33
+ unsigned size, bool is_write,
34
+ MemTxAttrs attrs);
35
36
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
37
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
41
+++ b/exec.c
42
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
43
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
44
if (!memory_access_is_direct(mr, is_write)) {
45
l = memory_access_size(mr, l, addr);
46
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
47
+ /* When our callers all have attrs we'll pass them through here */
48
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
49
+ MEMTXATTRS_UNSPECIFIED)) {
50
return false;
51
}
24
}
52
}
25
}
53
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
26
break;
54
index XXXXXXX..XXXXXXX 100644
27
- case 0xd18: /* System Handler Priority (SHPR1) */
55
--- a/hw/s390x/s390-pci-inst.c
28
+ case 0xd18 ... 0xd1b: /* System Handler Priority (SHPR1) */
56
+++ b/hw/s390x/s390-pci-inst.c
29
if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
57
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
30
val = 0;
58
mr = s390_get_subregion(mr, offset, len);
31
break;
59
offset -= mr->addr;
32
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
60
33
}
61
- if (!memory_region_access_valid(mr, offset, len, true)) {
34
nvic_irq_update(s);
62
+ if (!memory_region_access_valid(mr, offset, len, true,
35
return MEMTX_OK;
63
+ MEMTXATTRS_UNSPECIFIED)) {
36
- case 0xd18: /* System Handler Priority (SHPR1) */
64
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
37
+ case 0xd18 ... 0xd1b: /* System Handler Priority (SHPR1) */
65
return 0;
38
if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
66
}
39
return MEMTX_OK;
67
diff --git a/memory.c b/memory.c
40
}
68
index XXXXXXX..XXXXXXX 100644
69
--- a/memory.c
70
+++ b/memory.c
71
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
72
bool memory_region_access_valid(MemoryRegion *mr,
73
hwaddr addr,
74
unsigned size,
75
- bool is_write)
76
+ bool is_write,
77
+ MemTxAttrs attrs)
78
{
79
int access_size_min, access_size_max;
80
int access_size, i;
81
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
82
{
83
MemTxResult r;
84
85
- if (!memory_region_access_valid(mr, addr, size, false)) {
86
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
87
*pval = unassigned_mem_read(mr, addr, size);
88
return MEMTX_DECODE_ERROR;
89
}
90
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
91
unsigned size,
92
MemTxAttrs attrs)
93
{
94
- if (!memory_region_access_valid(mr, addr, size, true)) {
95
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
96
unassigned_mem_write(mr, addr, data, size);
97
return MEMTX_DECODE_ERROR;
98
}
99
--
41
--
100
2.17.1
42
2.20.1
101
43
102
44
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
In commit 91c1e9fcbd7548db368 where we added dual-CPU support to
2
add MemTxAttrs as an argument to address_space_get_iotlb_entry().
2
the ARMSSE, we set up the wiring of the expansion IRQs via nested
3
loops: the outer loop on 'i' loops for each CPU, and the inner loop
4
on 'j' loops for each interrupt. Fix a typo which meant we were
5
wiring every expansion IRQ line to external IRQ 0 on CPU 0 and
6
to external IRQ 1 on CPU 1.
3
7
8
Fixes: 91c1e9fcbd7548db368 ("hw/arm/armsse: Support dual-CPU configuration")
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-12-peter.maydell@linaro.org
8
---
11
---
9
include/exec/memory.h | 2 +-
12
hw/arm/armsse.c | 2 +-
10
exec.c | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
11
hw/virtio/vhost.c | 3 ++-
12
3 files changed, 4 insertions(+), 3 deletions(-)
13
14
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
15
diff --git a/hw/arm/armsse.c b/hw/arm/armsse.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
17
--- a/hw/arm/armsse.c
17
+++ b/include/exec/memory.h
18
+++ b/hw/arm/armsse.c
18
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache);
19
@@ -XXX,XX +XXX,XX @@ static void armsse_realize(DeviceState *dev, Error **errp)
19
* entry. Should be called from an RCU critical section.
20
/* Connect EXP_IRQ/EXP_CPUn_IRQ GPIOs to the NVIC's lines 32 and up */
20
*/
21
s->exp_irqs[i] = g_new(qemu_irq, s->exp_numirq);
21
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
22
for (j = 0; j < s->exp_numirq; j++) {
22
- bool is_write);
23
- s->exp_irqs[i][j] = qdev_get_gpio_in(cpudev, i + 32);
23
+ bool is_write, MemTxAttrs attrs);
24
+ s->exp_irqs[i][j] = qdev_get_gpio_in(cpudev, j + 32);
24
25
}
25
/* address_space_translate: translate an address range into an address space
26
if (i == 0) {
26
* into a MemoryRegion and an address range into that section. Should be
27
gpioname = g_strdup("EXP_IRQ");
27
diff --git a/exec.c b/exec.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/exec.c
30
+++ b/exec.c
31
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
32
33
/* Called from RCU critical section */
34
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
35
- bool is_write)
36
+ bool is_write, MemTxAttrs attrs)
37
{
38
MemoryRegionSection section;
39
hwaddr xlat, page_mask;
40
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/virtio/vhost.c
43
+++ b/hw/virtio/vhost.c
44
@@ -XXX,XX +XXX,XX @@ int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
45
trace_vhost_iotlb_miss(dev, 1);
46
47
iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
48
- iova, write);
49
+ iova, write,
50
+ MEMTXATTRS_UNSPECIFIED);
51
if (iotlb.target_as != NULL) {
52
ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
53
&uaddr, &len);
54
--
2.17.1

--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to flatview_extend_translation().
3
Its callers either have an attrs value to hand, or don't care
4
and can use MEMTXATTRS_UNSPECIFIED.
5
2
3
Since we're now handling a == b generically, we no longer need
4
to do it by hand within target/arm/.
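For context, "generically" refers to the gvec expander; the assumption
this patch relies on is that the expander itself now folds identical
source operands, roughly as sketched here (not code added by this
patch):

    void tcg_gen_gvec_or(unsigned vece, uint32_t dofs, uint32_t aofs,
                         uint32_t bofs, uint32_t oprsz, uint32_t maxsz)
    {
        if (aofs == bofs) {
            /* x | x == x, so this is just a vector move */
            tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
        } else {
            /* ... normal three-operand expansion ... */
        }
    }

That is why the ORR/VORR cases in the hunks below can call
tcg_gen_gvec_or unconditionally instead of checking rn == rm first.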
5
6
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190209033847.9014-2-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
10
---
10
---
11
exec.c | 15 ++++++++++-----
11
target/arm/translate-a64.c | 6 +-----
12
1 file changed, 10 insertions(+), 5 deletions(-)
12
target/arm/translate-sve.c | 6 +-----
13
target/arm/translate.c | 12 +++---------
14
3 files changed, 5 insertions(+), 19 deletions(-)
13
15
14
diff --git a/exec.c b/exec.c
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
18
--- a/target/arm/translate-a64.c
17
+++ b/exec.c
19
+++ b/target/arm/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
20
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
19
21
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_andc, 0);
20
static hwaddr
22
return;
21
flatview_extend_translation(FlatView *fv, hwaddr addr,
23
case 2: /* ORR */
22
- hwaddr target_len,
24
- if (rn == rm) { /* MOV */
23
- MemoryRegion *mr, hwaddr base, hwaddr len,
25
- gen_gvec_fn2(s, is_q, rd, rn, tcg_gen_gvec_mov, 0);
24
- bool is_write)
26
- } else {
25
+ hwaddr target_len,
27
- gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_or, 0);
26
+ MemoryRegion *mr, hwaddr base, hwaddr len,
28
- }
27
+ bool is_write, MemTxAttrs attrs)
29
+ gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_or, 0);
30
return;
31
case 3: /* ORN */
32
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_orc, 0);
33
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/translate-sve.c
36
+++ b/target/arm/translate-sve.c
37
@@ -XXX,XX +XXX,XX @@ static bool trans_AND_zzz(DisasContext *s, arg_rrr_esz *a)
38
39
static bool trans_ORR_zzz(DisasContext *s, arg_rrr_esz *a)
28
{
40
{
29
hwaddr done = 0;
41
- if (a->rn == a->rm) { /* MOV */
30
hwaddr xlat;
42
- return do_mov_z(s, a->rd, a->rn);
31
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
43
- } else {
32
44
- return do_vector3_z(s, tcg_gen_gvec_or, 0, a->rd, a->rn, a->rm);
33
memory_region_ref(mr);
45
- }
34
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
46
+ return do_vector3_z(s, tcg_gen_gvec_or, 0, a->rd, a->rn, a->rm);
35
- l, is_write);
47
}
36
+ l, is_write, attrs);
48
37
ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
49
static bool trans_EOR_zzz(DisasContext *s, arg_rrr_esz *a)
38
rcu_read_unlock();
50
diff --git a/target/arm/translate.c b/target/arm/translate.c
39
51
index XXXXXXX..XXXXXXX 100644
40
@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
52
--- a/target/arm/translate.c
41
mr = cache->mrs.mr;
53
+++ b/target/arm/translate.c
42
memory_region_ref(mr);
54
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
43
if (memory_access_is_direct(mr, is_write)) {
55
tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
44
+ /* We don't care about the memory attributes here as we're only
56
vec_size, vec_size);
45
+ * doing this if we found actual RAM, which behaves the same
57
break;
46
+ * regardless of attributes; so UNSPECIFIED is fine.
58
- case 2:
47
+ */
59
- if (rn == rm) {
48
l = flatview_extend_translation(cache->fv, addr, len, mr,
60
- /* VMOV */
49
- cache->xlat, l, is_write);
61
- tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
50
+ cache->xlat, l, is_write,
62
- } else {
51
+ MEMTXATTRS_UNSPECIFIED);
63
- /* VORR */
52
cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
64
- tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
53
} else {
65
- vec_size, vec_size);
54
cache->ptr = NULL;
66
- }
67
+ case 2: /* VORR */
68
+ tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
69
+ vec_size, vec_size);
70
break;
71
case 3: /* VORN */
72
tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
55
--
2.17.1

--
2.20.1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
acpi_data_push uses g_array_set_size to resize the memory size. If there
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
is not enough contiguous memory, the address will change, so the previous
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
pointer can no longer be used. We must update the pointer and use
5
Message-id: 20190209033847.9014-3-richard.henderson@linaro.org
6
the new one.
7
8
Also, the previous code wrongly used the le32 conversion of iort->node_offset
9
for subsequent computations, which will produce incorrect values if the host is
10
not little endian. So use the non-converted value instead.
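The underlying hazard is the usual one with growable arrays: growing
the array may move its backing store. A minimal standalone GLib
illustration (not QEMU code):

    #include <glib.h>

    int main(void)
    {
        GArray *table = g_array_new(FALSE, TRUE, 1);

        g_array_set_size(table, 64);
        char *hdr = &g_array_index(table, char, 0);  /* pointer into array */

        g_array_set_size(table, 1 << 20);  /* may realloc and move the data */

        hdr = &g_array_index(table, char, 0);  /* must be recomputed first */
        hdr[0] = 0x42;

        g_array_free(table, TRUE);
        return 0;
    }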
11
12
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
7
---
17
hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
8
target/arm/translate-a64.c | 35 ++++++++++++++---------------------
18
1 file changed, 15 insertions(+), 5 deletions(-)
9
1 file changed, 14 insertions(+), 21 deletions(-)
19
10
20
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt-acpi-build.c
13
--- a/target/arm/translate-a64.c
23
+++ b/hw/arm/virt-acpi-build.c
14
+++ b/target/arm/translate-a64.c
24
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
15
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
25
AcpiIortItsGroup *its;
26
AcpiIortTable *iort;
27
AcpiIortSmmu3 *smmu;
28
- size_t node_size, iort_length, smmu_offset = 0;
29
+ size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
30
AcpiIortRC *rc;
31
32
iort = acpi_data_push(table_data, sizeof(*iort));
33
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
34
35
iort_length = sizeof(*iort);
36
iort->node_count = cpu_to_le32(nb_nodes);
37
- iort->node_offset = cpu_to_le32(sizeof(*iort));
38
+ /*
39
+ * Use a copy in case table_data->data moves during acpi_data_push
40
+ * operations.
41
+ */
42
+ iort_node_offset = sizeof(*iort);
43
+ iort->node_offset = cpu_to_le32(iort_node_offset);
44
45
/* ITS group node */
46
node_size = sizeof(*its) + sizeof(uint32_t);
47
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
48
int irq = vms->irqmap[VIRT_SMMU];
49
50
/* SMMUv3 node */
51
- smmu_offset = iort->node_offset + node_size;
52
+ smmu_offset = iort_node_offset + node_size;
53
node_size = sizeof(*smmu) + sizeof(*idmap);
54
iort_length += node_size;
55
smmu = acpi_data_push(table_data, node_size);
56
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
57
idmap->id_count = cpu_to_le32(0xFFFF);
58
idmap->output_base = 0;
59
/* output IORT node is the ITS group node (the first node) */
60
- idmap->output_reference = cpu_to_le32(iort->node_offset);
61
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
62
}
16
}
63
17
64
/* Root Complex Node */
18
switch (opcode) {
65
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
19
+ case 0x0c: /* SMAX, UMAX */
66
idmap->output_reference = cpu_to_le32(smmu_offset);
20
+ if (u) {
67
} else {
21
+ gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
68
/* output IORT node is the ITS group node (the first node) */
22
+ } else {
69
- idmap->output_reference = cpu_to_le32(iort->node_offset);
23
+ gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_smax, size);
70
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
24
+ }
71
}
25
+ return;
72
26
+ case 0x0d: /* SMIN, UMIN */
73
+ /*
27
+ if (u) {
74
+ * Update the pointer address in case table_data->data moves during above
28
+ gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umin, size);
75
+ * acpi_data_push operations.
29
+ } else {
76
+ */
30
+ gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_smin, size);
77
+ iort = (AcpiIortTable *)(table_data->data + iort_start);
31
+ }
78
iort->length = cpu_to_le32(iort_length);
32
+ return;
79
33
case 0x10: /* ADD, SUB */
80
build_header(linker, table_data, (void *)(table_data->data + iort_start),
34
if (u) {
35
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_sub, size);
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
37
genenvfn = fns[size][u];
38
break;
39
}
40
- case 0xc: /* SMAX, UMAX */
41
- {
42
- static NeonGenTwoOpFn * const fns[3][2] = {
43
- { gen_helper_neon_max_s8, gen_helper_neon_max_u8 },
44
- { gen_helper_neon_max_s16, gen_helper_neon_max_u16 },
45
- { tcg_gen_smax_i32, tcg_gen_umax_i32 },
46
- };
47
- genfn = fns[size][u];
48
- break;
49
- }
50
-
51
- case 0xd: /* SMIN, UMIN */
52
- {
53
- static NeonGenTwoOpFn * const fns[3][2] = {
54
- { gen_helper_neon_min_s8, gen_helper_neon_min_u8 },
55
- { gen_helper_neon_min_s16, gen_helper_neon_min_u16 },
56
- { tcg_gen_smin_i32, tcg_gen_umin_i32 },
57
- };
58
- genfn = fns[size][u];
59
- break;
60
- }
61
case 0xe: /* SABD, UABD */
62
case 0xf: /* SABA, UABA */
63
{
81
--
2.17.1

--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to address_space_translate()
3
and address_space_translate_cached(). Callers either have an
4
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.
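Typical call-site shapes, condensed from the hunks below -- one caller
that already carries attributes and one that does not:

    /* memory_ldst.inc.c: the access has attrs, so pass them through */
    mr = TRANSLATE(addr, &addr1, &l, false, attrs);

    /* exec.c cpu_physical_memory_is_io(): no meaningful attributes here */
    mr = address_space_translate(&address_space_memory,
                                 phys_addr, &phys_addr, &l, false,
                                 MEMTXATTRS_UNSPECIFIED);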
5
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190209033847.9014-4-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
10
---
7
---
11
include/exec/memory.h | 4 +++-
8
target/arm/translate.c | 25 +++++++++++++++++++------
12
accel/tcg/translate-all.c | 2 +-
9
1 file changed, 19 insertions(+), 6 deletions(-)
13
exec.c | 14 +++++++++-----
14
hw/vfio/common.c | 3 ++-
15
memory_ldst.inc.c | 18 +++++++++---------
16
target/riscv/helper.c | 2 +-
17
6 files changed, 25 insertions(+), 18 deletions(-)
18
10
19
diff --git a/include/exec/memory.h b/include/exec/memory.h
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/memory.h
13
--- a/target/arm/translate.c
22
+++ b/include/exec/memory.h
14
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
24
* #MemoryRegion.
16
tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
25
* @len: pointer to length
17
rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
26
* @is_write: indicates the transfer direction
18
return 0;
27
+ * @attrs: memory attributes
19
+
28
*/
20
+ case NEON_3R_VMAX:
29
MemoryRegion *flatview_translate(FlatView *fv,
21
+ if (u) {
30
hwaddr addr, hwaddr *xlat,
22
+ tcg_gen_gvec_umax(size, rd_ofs, rn_ofs, rm_ofs,
31
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,
23
+ vec_size, vec_size);
32
24
+ } else {
33
static inline MemoryRegion *address_space_translate(AddressSpace *as,
25
+ tcg_gen_gvec_smax(size, rd_ofs, rn_ofs, rm_ofs,
34
hwaddr addr, hwaddr *xlat,
26
+ vec_size, vec_size);
35
- hwaddr *len, bool is_write)
27
+ }
36
+ hwaddr *len, bool is_write,
28
+ return 0;
37
+ MemTxAttrs attrs)
29
+ case NEON_3R_VMIN:
38
{
30
+ if (u) {
39
return flatview_translate(address_space_to_flatview(as),
31
+ tcg_gen_gvec_umin(size, rd_ofs, rn_ofs, rm_ofs,
40
addr, xlat, len, is_write);
32
+ vec_size, vec_size);
41
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
33
+ } else {
42
index XXXXXXX..XXXXXXX 100644
34
+ tcg_gen_gvec_smin(size, rd_ofs, rn_ofs, rm_ofs,
43
--- a/accel/tcg/translate-all.c
35
+ vec_size, vec_size);
44
+++ b/accel/tcg/translate-all.c
36
+ }
45
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
37
+ return 0;
46
hwaddr l = 1;
38
}
47
39
48
rcu_read_lock();
40
if (size == 3) {
49
- mr = address_space_translate(as, addr, &addr, &l, false);
41
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
50
+ mr = address_space_translate(as, addr, &addr, &l, false, attrs);
42
case NEON_3R_VQRSHL:
51
if (!(memory_region_is_ram(mr)
43
GEN_NEON_INTEGER_OP_ENV(qrshl);
52
|| memory_region_is_romd(mr))) {
44
break;
53
rcu_read_unlock();
45
- case NEON_3R_VMAX:
54
diff --git a/exec.c b/exec.c
46
- GEN_NEON_INTEGER_OP(max);
55
index XXXXXXX..XXXXXXX 100644
47
- break;
56
--- a/exec.c
48
- case NEON_3R_VMIN:
57
+++ b/exec.c
49
- GEN_NEON_INTEGER_OP(min);
58
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
50
- break;
59
rcu_read_lock();
51
case NEON_3R_VABD:
60
while (len > 0) {
52
GEN_NEON_INTEGER_OP(abd);
61
l = len;
53
break;
62
- mr = address_space_translate(as, addr, &addr1, &l, true);
63
+ mr = address_space_translate(as, addr, &addr1, &l, true,
64
+ MEMTXATTRS_UNSPECIFIED);
65
66
if (!(memory_region_is_ram(mr) ||
67
memory_region_is_romd(mr))) {
68
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
69
*/
70
static inline MemoryRegion *address_space_translate_cached(
71
MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
72
- hwaddr *plen, bool is_write)
73
+ hwaddr *plen, bool is_write, MemTxAttrs attrs)
74
{
75
MemoryRegionSection section;
76
MemoryRegion *mr;
77
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
78
MemoryRegion *mr;
79
80
l = len;
81
- mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
82
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
83
+ MEMTXATTRS_UNSPECIFIED);
84
flatview_read_continue(cache->fv,
85
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
86
addr1, l, mr);
87
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
88
MemoryRegion *mr;
89
90
l = len;
91
- mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
92
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
93
+ MEMTXATTRS_UNSPECIFIED);
94
flatview_write_continue(cache->fv,
95
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
96
addr1, l, mr);
97
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)
98
99
rcu_read_lock();
100
mr = address_space_translate(&address_space_memory,
101
- phys_addr, &phys_addr, &l, false);
102
+ phys_addr, &phys_addr, &l, false,
103
+ MEMTXATTRS_UNSPECIFIED);
104
105
res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
106
rcu_read_unlock();
107
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/hw/vfio/common.c
110
+++ b/hw/vfio/common.c
111
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
112
*/
113
mr = address_space_translate(&address_space_memory,
114
iotlb->translated_addr,
115
- &xlat, &len, writable);
116
+ &xlat, &len, writable,
117
+ MEMTXATTRS_UNSPECIFIED);
118
if (!memory_region_is_ram(mr)) {
119
error_report("iommu map to non memory area %"HWADDR_PRIx"",
120
xlat);
121
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
122
index XXXXXXX..XXXXXXX 100644
123
--- a/memory_ldst.inc.c
124
+++ b/memory_ldst.inc.c
125
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
126
bool release_lock = false;
127
128
RCU_READ_LOCK();
129
- mr = TRANSLATE(addr, &addr1, &l, false);
130
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
131
if (l < 4 || !IS_DIRECT(mr, false)) {
132
release_lock |= prepare_mmio_access(mr);
133
134
@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
135
bool release_lock = false;
136
137
RCU_READ_LOCK();
138
- mr = TRANSLATE(addr, &addr1, &l, false);
139
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
140
if (l < 8 || !IS_DIRECT(mr, false)) {
141
release_lock |= prepare_mmio_access(mr);
142
143
@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
144
bool release_lock = false;
145
146
RCU_READ_LOCK();
147
- mr = TRANSLATE(addr, &addr1, &l, false);
148
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
149
if (!IS_DIRECT(mr, false)) {
150
release_lock |= prepare_mmio_access(mr);
151
152
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
153
bool release_lock = false;
154
155
RCU_READ_LOCK();
156
- mr = TRANSLATE(addr, &addr1, &l, false);
157
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
158
if (l < 2 || !IS_DIRECT(mr, false)) {
159
release_lock |= prepare_mmio_access(mr);
160
161
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
162
bool release_lock = false;
163
164
RCU_READ_LOCK();
165
- mr = TRANSLATE(addr, &addr1, &l, true);
166
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
167
if (l < 4 || !IS_DIRECT(mr, true)) {
168
release_lock |= prepare_mmio_access(mr);
169
170
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
171
bool release_lock = false;
172
173
RCU_READ_LOCK();
174
- mr = TRANSLATE(addr, &addr1, &l, true);
175
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
176
if (l < 4 || !IS_DIRECT(mr, true)) {
177
release_lock |= prepare_mmio_access(mr);
178
179
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
180
bool release_lock = false;
181
182
RCU_READ_LOCK();
183
- mr = TRANSLATE(addr, &addr1, &l, true);
184
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
185
if (!IS_DIRECT(mr, true)) {
186
release_lock |= prepare_mmio_access(mr);
187
r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
188
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
189
bool release_lock = false;
190
191
RCU_READ_LOCK();
192
- mr = TRANSLATE(addr, &addr1, &l, true);
193
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
194
if (l < 2 || !IS_DIRECT(mr, true)) {
195
release_lock |= prepare_mmio_access(mr);
196
197
@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
198
bool release_lock = false;
199
200
RCU_READ_LOCK();
201
- mr = TRANSLATE(addr, &addr1, &l, true);
202
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
203
if (l < 8 || !IS_DIRECT(mr, true)) {
204
release_lock |= prepare_mmio_access(mr);
205
206
diff --git a/target/riscv/helper.c b/target/riscv/helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/riscv/helper.c
209
+++ b/target/riscv/helper.c
210
@@ -XXX,XX +XXX,XX @@ restart:
211
MemoryRegion *mr;
212
hwaddr l = sizeof(target_ulong), addr1;
213
mr = address_space_translate(cs->as, pte_addr,
214
- &addr1, &l, false);
215
+ &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
216
if (memory_access_is_direct(mr, true)) {
217
target_ulong *pte_pa =
218
qemu_map_ram_ptr(mr->ram_block, addr1);
219
--
2.17.1

--
2.20.1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
3
callback. We'll need this for subpage_accepts().
4
2
5
We could take the approach we used with the read and write
3
The 32-bit PMIN/PMAX has been decomposed to scalars,
6
callbacks and add a new _with_attrs version, but since there
4
and so can be trivially expanded inline.
7
are so few implementations of the accepts hook we just change
8
them all.
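For a sense of what the extra parameter enables, here is a hypothetical
device-side accepts hook (the mydev_* name is invented; the devices
touched by this patch only gain the parameter without using it yet):

    static bool mydev_mem_accepts(void *opaque, hwaddr addr,
                                  unsigned size, bool is_write,
                                  MemTxAttrs attrs)
    {
        /* e.g. only secure bus masters may write this region */
        if (is_write && !attrs.secure) {
            return false;
        }
        return size == 4;
    }
    /* wired up via  .valid.accepts = mydev_mem_accepts  in the region's
     * MemoryRegionOps, exactly as for the hooks below
     */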
9
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190209033847.9014-5-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
14
---
10
---
15
include/exec/memory.h | 3 ++-
11
target/arm/translate.c | 8 ++++----
16
exec.c | 9 ++++++---
12
1 file changed, 4 insertions(+), 4 deletions(-)
17
hw/hppa/dino.c | 3 ++-
18
hw/nvram/fw_cfg.c | 12 ++++++++----
19
hw/scsi/esp.c | 3 ++-
20
hw/xen/xen_pt_msi.c | 3 ++-
21
memory.c | 5 +++--
22
7 files changed, 25 insertions(+), 13 deletions(-)
23
13
24
diff --git a/include/exec/memory.h b/include/exec/memory.h
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory.h
16
--- a/target/arm/translate.c
27
+++ b/include/exec/memory.h
17
+++ b/target/arm/translate.c
28
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
18
@@ -XXX,XX +XXX,XX @@ static inline void gen_neon_rsb(int size, TCGv_i32 t0, TCGv_i32 t1)
29
* as a machine check exception).
30
*/
31
bool (*accepts)(void *opaque, hwaddr addr,
32
- unsigned size, bool is_write);
33
+ unsigned size, bool is_write,
34
+ MemTxAttrs attrs);
35
} valid;
36
/* Internal implementation constraints: */
37
struct {
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
41
+++ b/exec.c
42
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
43
}
19
}
44
20
45
static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
21
/* 32-bit pairwise ops end up the same as the elementwise versions. */
46
- unsigned size, bool is_write)
22
-#define gen_helper_neon_pmax_s32 gen_helper_neon_max_s32
47
+ unsigned size, bool is_write,
23
-#define gen_helper_neon_pmax_u32 gen_helper_neon_max_u32
48
+ MemTxAttrs attrs)
24
-#define gen_helper_neon_pmin_s32 gen_helper_neon_min_s32
49
{
25
-#define gen_helper_neon_pmin_u32 gen_helper_neon_min_u32
50
return is_write;
26
+#define gen_helper_neon_pmax_s32 tcg_gen_smax_i32
51
}
27
+#define gen_helper_neon_pmax_u32 tcg_gen_umax_i32
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
28
+#define gen_helper_neon_pmin_s32 tcg_gen_smin_i32
53
}
29
+#define gen_helper_neon_pmin_u32 tcg_gen_umin_i32
54
30
55
static bool subpage_accepts(void *opaque, hwaddr addr,
31
#define GEN_NEON_INTEGER_OP_ENV(name) do { \
56
- unsigned len, bool is_write)
32
switch ((size << 1) | u) { \
57
+ unsigned len, bool is_write,
58
+ MemTxAttrs attrs)
59
{
60
subpage_t *subpage = opaque;
61
#if defined(DEBUG_SUBPAGE)
62
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
63
}
64
65
static bool readonly_mem_accepts(void *opaque, hwaddr addr,
66
- unsigned size, bool is_write)
67
+ unsigned size, bool is_write,
68
+ MemTxAttrs attrs)
69
{
70
return is_write;
71
}
72
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/hppa/dino.c
75
+++ b/hw/hppa/dino.c
76
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
77
}
78
79
static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
80
- unsigned size, bool is_write)
81
+ unsigned size, bool is_write,
82
+ MemTxAttrs attrs)
83
{
84
switch (addr) {
85
case DINO_IAR0:
86
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/hw/nvram/fw_cfg.c
89
+++ b/hw/nvram/fw_cfg.c
90
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
91
}
92
93
static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
94
- unsigned size, bool is_write)
95
+ unsigned size, bool is_write,
96
+ MemTxAttrs attrs)
97
{
98
return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
99
(size == 8 && addr == 0));
100
}
101
102
static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
103
- unsigned size, bool is_write)
104
+ unsigned size, bool is_write,
105
+ MemTxAttrs attrs)
106
{
107
return addr == 0;
108
}
109
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
110
}
111
112
static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
113
- unsigned size, bool is_write)
114
+ unsigned size, bool is_write,
115
+ MemTxAttrs attrs)
116
{
117
return is_write && size == 2;
118
}
119
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
120
}
121
122
static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
123
- unsigned size, bool is_write)
124
+ unsigned size, bool is_write,
125
+ MemTxAttrs attrs)
126
{
127
return (size == 1) || (is_write && size == 2);
128
}
129
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/scsi/esp.c
132
+++ b/hw/scsi/esp.c
133
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
134
}
135
136
static bool esp_mem_accepts(void *opaque, hwaddr addr,
137
- unsigned size, bool is_write)
138
+ unsigned size, bool is_write,
139
+ MemTxAttrs attrs)
140
{
141
return (size == 1) || (is_write && size == 4);
142
}
143
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/hw/xen/xen_pt_msi.c
146
+++ b/hw/xen/xen_pt_msi.c
147
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
148
}
149
150
static bool pci_msix_accepts(void *opaque, hwaddr addr,
151
- unsigned size, bool is_write)
152
+ unsigned size, bool is_write,
153
+ MemTxAttrs attrs)
154
{
155
return !(addr & (size - 1));
156
}
157
diff --git a/memory.c b/memory.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/memory.c
160
+++ b/memory.c
161
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
162
}
163
164
static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
165
- unsigned size, bool is_write)
166
+ unsigned size, bool is_write,
167
+ MemTxAttrs attrs)
168
{
169
return false;
170
}
171
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
172
access_size = MAX(MIN(size, access_size_max), access_size_min);
173
for (i = 0; i < size; i += access_size) {
174
if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
175
- is_write)) {
176
+ is_write, attrs)) {
177
return false;
178
}
179
}
180
--
2.17.1

--
2.20.1
From: Francisco Iglesias <frasse.iglesias@gmail.com>

Coverity found that the string returned by 'object_get_canonical_path' was not
being freed at two locations in the model (CID 1391294 and CID 1391293) and
also that a memset was being called with a value greater than the max of a byte
on the second argument (CID 1391286). This patch corrects this by adding the
freeing of the strings and also changing the memset to zero instead on
descriptor unaligned errors.

Signed-off-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180528184859.3530-1-frasse.iglesias@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/dma/xlnx-zdma.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

These are now unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 12 ------------
target/arm/neon_helper.c | 12 ------------
2 files changed, 24 deletions(-)
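To see why Coverity flags the old memset value, here is a small standalone
snippet (illustrative only, not part of the patch): memset converts its fill
argument to an unsigned char, so only the low byte of 0xdeadbeef is used.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char buf[4];

        /* The fill value is truncated to an unsigned char: 0xdeadbeef -> 0xef,
         * so this does NOT write a 0xdeadbeef pattern into the buffer. */
        memset(buf, 0xdeadbeef, sizeof(buf));
        printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
        /* prints: ef ef ef ef */
        return 0;
    }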
20
diff --git a/hw/dma/xlnx-zdma.c b/hw/dma/xlnx-zdma.c
14
diff --git a/target/arm/helper.h b/target/arm/helper.h
21
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/dma/xlnx-zdma.c
16
--- a/target/arm/helper.h
23
+++ b/hw/dma/xlnx-zdma.c
17
+++ b/target/arm/helper.h
24
@@ -XXX,XX +XXX,XX @@ static bool zdma_load_descriptor(XlnxZDMA *s, uint64_t addr, void *buf)
18
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(neon_cge_s16, i32, i32, i32)
25
qemu_log_mask(LOG_GUEST_ERROR,
19
DEF_HELPER_2(neon_cge_u32, i32, i32, i32)
26
"zdma: unaligned descriptor at %" PRIx64,
20
DEF_HELPER_2(neon_cge_s32, i32, i32, i32)
27
addr);
21
28
- memset(buf, 0xdeadbeef, sizeof(XlnxZDMADescr));
22
-DEF_HELPER_2(neon_min_u8, i32, i32, i32)
29
+ memset(buf, 0x0, sizeof(XlnxZDMADescr));
23
-DEF_HELPER_2(neon_min_s8, i32, i32, i32)
30
s->error = true;
24
-DEF_HELPER_2(neon_min_u16, i32, i32, i32)
31
return false;
25
-DEF_HELPER_2(neon_min_s16, i32, i32, i32)
32
}
26
-DEF_HELPER_2(neon_min_u32, i32, i32, i32)
33
@@ -XXX,XX +XXX,XX @@ static uint64_t zdma_read(void *opaque, hwaddr addr, unsigned size)
27
-DEF_HELPER_2(neon_min_s32, i32, i32, i32)
34
RegisterInfo *r = &s->regs_info[addr / 4];
28
-DEF_HELPER_2(neon_max_u8, i32, i32, i32)
35
29
-DEF_HELPER_2(neon_max_s8, i32, i32, i32)
36
if (!r->data) {
30
-DEF_HELPER_2(neon_max_u16, i32, i32, i32)
37
+ gchar *path = object_get_canonical_path(OBJECT(s));
31
-DEF_HELPER_2(neon_max_s16, i32, i32, i32)
38
qemu_log("%s: Decode error: read from %" HWADDR_PRIx "\n",
32
-DEF_HELPER_2(neon_max_u32, i32, i32, i32)
39
- object_get_canonical_path(OBJECT(s)),
33
-DEF_HELPER_2(neon_max_s32, i32, i32, i32)
40
+ path,
34
DEF_HELPER_2(neon_pmin_u8, i32, i32, i32)
41
addr);
35
DEF_HELPER_2(neon_pmin_s8, i32, i32, i32)
42
+ g_free(path);
36
DEF_HELPER_2(neon_pmin_u16, i32, i32, i32)
43
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
37
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
44
zdma_ch_imr_update_irq(s);
38
index XXXXXXX..XXXXXXX 100644
45
return 0;
39
--- a/target/arm/neon_helper.c
46
@@ -XXX,XX +XXX,XX @@ static void zdma_write(void *opaque, hwaddr addr, uint64_t value,
40
+++ b/target/arm/neon_helper.c
47
RegisterInfo *r = &s->regs_info[addr / 4];
41
@@ -XXX,XX +XXX,XX @@ NEON_VOP(cge_u32, neon_u32, 1)
48
42
#undef NEON_FN
49
if (!r->data) {
43
50
+ gchar *path = object_get_canonical_path(OBJECT(s));
44
#define NEON_FN(dest, src1, src2) dest = (src1 < src2) ? src1 : src2
51
qemu_log("%s: Decode error: write to %" HWADDR_PRIx "=%" PRIx64 "\n",
45
-NEON_VOP(min_s8, neon_s8, 4)
52
- object_get_canonical_path(OBJECT(s)),
46
-NEON_VOP(min_u8, neon_u8, 4)
53
+ path,
47
-NEON_VOP(min_s16, neon_s16, 2)
54
addr, value);
48
-NEON_VOP(min_u16, neon_u16, 2)
55
+ g_free(path);
49
-NEON_VOP(min_s32, neon_s32, 1)
56
ARRAY_FIELD_DP32(s->regs, ZDMA_CH_ISR, INV_APB, true);
50
-NEON_VOP(min_u32, neon_u32, 1)
57
zdma_ch_imr_update_irq(s);
51
NEON_POP(pmin_s8, neon_s8, 4)
58
return;
52
NEON_POP(pmin_u8, neon_u8, 4)
53
NEON_POP(pmin_s16, neon_s16, 2)
54
@@ -XXX,XX +XXX,XX @@ NEON_POP(pmin_u16, neon_u16, 2)
55
#undef NEON_FN
56
57
#define NEON_FN(dest, src1, src2) dest = (src1 > src2) ? src1 : src2
58
-NEON_VOP(max_s8, neon_s8, 4)
59
-NEON_VOP(max_u8, neon_u8, 4)
60
-NEON_VOP(max_s16, neon_s16, 2)
61
-NEON_VOP(max_u16, neon_u16, 2)
62
-NEON_VOP(max_s32, neon_s32, 1)
63
-NEON_VOP(max_u32, neon_u32, 1)
64
NEON_POP(pmax_s8, neon_s8, 4)
65
NEON_POP(pmax_u8, neon_u8, 4)
66
NEON_POP(pmax_s16, neon_s16, 2)
59
--
67
--
60
2.17.1
68
2.20.1
61
69
62
70
From: Paolo Bonzini <pbonzini@redhat.com>

cpregs_keys is a uint32_t * so the allocation should use uint32_t.
g_new is even better because it is type-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/gdbstub.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

The components of this register are stored in several
different locations.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
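A hypothetical illustration of the bug class being fixed (not part of the
patch): sizing the allocation by the pointer type instead of the element type
compiles cleanly but allocates the wrong amount, while g_new ties the size to
the element type.

    #include <stdint.h>
    #include <glib.h>

    int main(void)
    {
        guint n = 8;

        /* Buggy pattern: allocates n * sizeof(uint32_t *) bytes (8 bytes per
         * element on 64-bit hosts) even though the array holds uint32_t. */
        uint32_t *a = g_malloc(sizeof(uint32_t *) * n);

        /* g_new(Type, n) allocates n * sizeof(Type) and already has type
         * Type *, so the element type and the size cannot drift apart. */
        uint32_t *b = g_new(uint32_t, n);

        g_free(a);
        g_free(b);
        return 0;
    }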
14
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/gdbstub.c
16
--- a/target/arm/helper.c
17
+++ b/target/arm/gdbstub.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
18
@@ -XXX,XX +XXX,XX @@ static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
19
RegisterSysregXmlParam param = {cs, s};
19
}
20
20
switch (reg - nregs) {
21
cpu->dyn_xml.num_cpregs = 0;
21
case 0: stl_p(buf, env->vfp.xregs[ARM_VFP_FPSID]); return 4;
22
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
22
- case 1: stl_p(buf, env->vfp.xregs[ARM_VFP_FPSCR]); return 4;
23
- g_hash_table_size(cpu->cp_regs));
23
+ case 1: stl_p(buf, vfp_get_fpscr(env)); return 4;
24
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
24
case 2: stl_p(buf, env->vfp.xregs[ARM_VFP_FPEXC]); return 4;
25
g_string_printf(s, "<?xml version=\"1.0\"?>");
25
}
26
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
26
return 0;
27
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
27
@@ -XXX,XX +XXX,XX @@ static int vfp_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg)
28
}
29
switch (reg - nregs) {
30
case 0: env->vfp.xregs[ARM_VFP_FPSID] = ldl_p(buf); return 4;
31
- case 1: env->vfp.xregs[ARM_VFP_FPSCR] = ldl_p(buf); return 4;
32
+ case 1: vfp_set_fpscr(env, ldl_p(buf)); return 4;
33
case 2: env->vfp.xregs[ARM_VFP_FPEXC] = ldl_p(buf) & (1 << 30); return 4;
34
}
35
return 0;
28
--
36
--
29
2.17.1
37
2.20.1
30
38
31
39
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
---
include/exec/exec-all.h | 5 +++--
accel/tcg/translate-all.c | 2 +-
exec.c | 2 +-
target/xtensa/op_helper.c | 3 ++-
4 files changed, 7 insertions(+), 5 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
17
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/exec-all.h
13
--- a/target/arm/translate.c
20
+++ b/include/exec/exec-all.h
14
+++ b/target/arm/translate.c
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
15
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
22
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
16
i * 2 + 1, (uint32_t)(v >> 32),
23
hwaddr paddr, int prot,
17
i, v);
24
int mmu_idx, target_ulong size);
18
}
25
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
19
- cpu_fprintf(f, "FPSCR: %08x\n", (int)env->vfp.xregs[ARM_VFP_FPSCR]);
26
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
20
+ cpu_fprintf(f, "FPSCR: %08x\n", vfp_get_fpscr(env));
27
void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
28
uintptr_t retaddr);
29
#else
30
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
31
uint16_t idxmap)
32
{
33
}
34
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
35
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
36
+ MemTxAttrs attrs)
37
{
38
}
39
#endif
40
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
41
index XXXXXXX..XXXXXXX 100644
42
--- a/accel/tcg/translate-all.c
43
+++ b/accel/tcg/translate-all.c
44
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
45
}
46
47
#if !defined(CONFIG_USER_ONLY)
48
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
49
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
50
{
51
ram_addr_t ram_addr;
52
MemoryRegion *mr;
53
diff --git a/exec.c b/exec.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/exec.c
56
+++ b/exec.c
57
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
58
if (phys != -1) {
59
/* Locks grabbed by tb_invalidate_phys_addr */
60
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
61
- phys | (pc & ~TARGET_PAGE_MASK));
62
+ phys | (pc & ~TARGET_PAGE_MASK), attrs);
63
}
21
}
64
}
22
}
65
#endif
66
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/xtensa/op_helper.c
69
+++ b/target/xtensa/op_helper.c
70
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
71
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
72
&paddr, &page_size, &access);
73
if (ret == 0) {
74
- tb_invalidate_phys_addr(&address_space_memory, paddr);
75
+ tb_invalidate_phys_addr(&address_space_memory, paddr,
76
+ MEMTXATTRS_UNSPECIFIED);
77
}
78
}
79
23
80
--
24
--
81
2.17.1
25
2.20.1
82
26
83
27
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
we forgot to also update the register's reset value. The effect
was that (a) a guest that read CPACR on reset would not see ones in
the RAO bits, and (b) if you did a migration before the guest did
a write to the CPACR then the migration would fail because the
destination would enforce the RAO bits and then complain that they
didn't match the zero value from the source.

Implement reset for the CPACR using a custom reset function
that just calls cpacr_write(), to avoid having to duplicate
the logic for which bits are RAO.

This bug would affect migration for TCG CPUs which are ARMv7
with VFP but without one of Neon or VFPv3.

Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
---
target/arm/helper.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

Minimize the code within a macro by splitting out a helper function.
Use deposit32 instead of manual bit manipulation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 45 +++++++++++++++++++++++++++------------------
1 file changed, 27 insertions(+), 18 deletions(-)
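For readers unfamiliar with the helper, a rough sketch of the deposit idea
(assumed semantics, not QEMU's actual implementation): write a bit field into
a word while leaving the other bits alone, which is exactly the open-coded
(flags << 28) | (old & 0x0fffffff) pattern the patch replaces.

    #include <stdint.h>

    /* Sketch: write 'field' into 'len' bits of 'value' starting at bit
     * 'start', preserving all other bits. */
    static uint32_t deposit32_sketch(uint32_t value, int start, int len,
                                     uint32_t field)
    {
        uint32_t mask = (~0u >> (32 - len)) << start;
        return (value & ~mask) | ((field << start) & mask);
    }

    /* deposit32_sketch(fpscr, 28, 4, flags) gives the same result as
     * (flags << 28) | (fpscr & 0x0fffffff). */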
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
18
@@ -XXX,XX +XXX,XX @@ float64 VFP_HELPER(sqrt, d)(float64 a, CPUARMState *env)
30
env->cp15.cpacr_el1 = value;
19
return float64_sqrt(a, &env->vfp.fp_status);
31
}
20
}
32
21
33
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
22
+static void softfloat_to_vfp_compare(CPUARMState *env, int cmp)
34
+{
23
+{
35
+ /* Call cpacr_write() so that we reset with the correct RAO bits set
24
+ uint32_t flags;
36
+ * for our CPU features.
25
+ switch (cmp) {
37
+ */
26
+ case float_relation_equal:
38
+ cpacr_write(env, ri, 0);
27
+ flags = 0x6;
28
+ break;
29
+ case float_relation_less:
30
+ flags = 0x8;
31
+ break;
32
+ case float_relation_greater:
33
+ flags = 0x2;
34
+ break;
35
+ case float_relation_unordered:
36
+ flags = 0x3;
37
+ break;
38
+ default:
39
+ g_assert_not_reached();
40
+ }
41
+ env->vfp.xregs[ARM_VFP_FPSCR] =
42
+ deposit32(env->vfp.xregs[ARM_VFP_FPSCR], 28, 4, flags);
39
+}
43
+}
40
+
44
+
41
static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
45
/* XXX: check quiet/signaling case */
42
bool isread)
46
#define DO_VFP_cmp(p, type) \
43
{
47
void VFP_HELPER(cmp, p)(type a, type b, CPUARMState *env) \
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
48
{ \
45
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
49
- uint32_t flags; \
46
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
50
- switch(type ## _compare_quiet(a, b, &env->vfp.fp_status)) { \
47
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
51
- case 0: flags = 0x6; break; \
48
- .resetvalue = 0, .writefn = cpacr_write },
52
- case -1: flags = 0x8; break; \
49
+ .resetfn = cpacr_reset, .writefn = cpacr_write },
53
- case 1: flags = 0x2; break; \
50
REGINFO_SENTINEL
54
- default: case 2: flags = 0x3; break; \
51
};
55
- } \
52
56
- env->vfp.xregs[ARM_VFP_FPSCR] = (flags << 28) \
57
- | (env->vfp.xregs[ARM_VFP_FPSCR] & 0x0fffffff); \
58
+ softfloat_to_vfp_compare(env, \
59
+ type ## _compare_quiet(a, b, &env->vfp.fp_status)); \
60
} \
61
void VFP_HELPER(cmpe, p)(type a, type b, CPUARMState *env) \
62
{ \
63
- uint32_t flags; \
64
- switch(type ## _compare(a, b, &env->vfp.fp_status)) { \
65
- case 0: flags = 0x6; break; \
66
- case -1: flags = 0x8; break; \
67
- case 1: flags = 0x2; break; \
68
- default: case 2: flags = 0x3; break; \
69
- } \
70
- env->vfp.xregs[ARM_VFP_FPSCR] = (flags << 28) \
71
- | (env->vfp.xregs[ARM_VFP_FPSCR] & 0x0fffffff); \
72
+ softfloat_to_vfp_compare(env, \
73
+ type ## _compare(a, b, &env->vfp.fp_status)); \
74
}
75
DO_VFP_cmp(s, float32)
76
DO_VFP_cmp(d, float64)
53
--
77
--
54
2.17.1
78
2.20.1
55
79
56
80
From: Jan Kiszka <jan.kiszka@siemens.com>

There was a nasty flip in identifying which register group an access is
targeting. The issue caused spuriously raised priorities of the guest
when handing CPUs over in the Jailhouse hypervisor.

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_cpuif.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Given that we mask bits properly on set, there is no reason
to mask them again on get. We failed to clear the exception
status bits, 0x9f, which means that the wrong value would be
returned on get. Except in the (probably normal) case in which
the set clears all of the bits.

Simplify the code in set to also clear the RES0 bits.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
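A minimal sketch of the invariant being relied on (hypothetical register and
mask, not the real FPSCR layout): if the setter always masks before storing,
the stored value is canonical and the getter can return it unmasked.

    #include <stdint.h>

    #define REG_STORED_BITS 0xffc80000u   /* hypothetical set of stored bits */

    static uint32_t reg;

    static void reg_set(uint32_t val)
    {
        reg = val & REG_STORED_BITS;   /* invariant: no RES0/RAZ bits stored */
    }

    static uint32_t reg_get(void)
    {
        return reg;                    /* safe: reg_set() already masked */
    }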
16
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/arm_gicv3_cpuif.c
21
--- a/target/arm/helper.c
19
+++ b/hw/intc/arm_gicv3_cpuif.c
22
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
23
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(vfp_get_fpscr)(CPUARMState *env)
24
int i;
25
uint32_t fpscr;
26
27
- fpscr = (env->vfp.xregs[ARM_VFP_FPSCR] & 0xffc8ffff)
28
+ fpscr = env->vfp.xregs[ARM_VFP_FPSCR]
29
| (env->vfp.vec_len << 16)
30
| (env->vfp.vec_stride << 20);
31
32
@@ -XXX,XX +XXX,XX @@ static inline int vfp_exceptbits_to_host(int target_bits)
33
void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
21
{
34
{
22
GICv3CPUState *cs = icc_cs_from_env(env);
35
int i;
23
int regno = ri->opc2 & 3;
36
- uint32_t changed;
24
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
37
+ uint32_t changed = env->vfp.xregs[ARM_VFP_FPSCR];
25
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
38
26
uint64_t value = cs->ich_apr[grp][regno];
39
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
27
40
if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
28
trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
41
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
29
@@ -XXX,XX +XXX,XX @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
42
30
{
43
/*
31
GICv3CPUState *cs = icc_cs_from_env(env);
44
* We don't implement trapped exception handling, so the
32
int regno = ri->opc2 & 3;
45
- * trap enable bits are all RAZ/WI (not RES0!)
33
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
46
+ * trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!)
34
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
47
+ *
35
48
+ * If we exclude the exception flags, IOC|DZC|OFC|UFC|IXC|IDC
36
trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
49
+ * (which are stored in fp_status), and the other RES0 bits
37
50
+ * in between, then we clear all of the low 16 bits.
38
@@ -XXX,XX +XXX,XX @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
51
*/
39
uint64_t value;
52
- val &= ~(FPCR_IDE | FPCR_IXE | FPCR_UFE | FPCR_OFE | FPCR_DZE | FPCR_IOE);
40
53
-
41
int regno = ri->opc2 & 3;
54
- changed = env->vfp.xregs[ARM_VFP_FPSCR];
42
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
55
- env->vfp.xregs[ARM_VFP_FPSCR] = (val & 0xffc8ffff);
43
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
56
+ env->vfp.xregs[ARM_VFP_FPSCR] = val & 0xffc80000;
44
57
env->vfp.vec_len = (val >> 16) & 7;
45
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
58
env->vfp.vec_stride = (val >> 20) & 3;
46
return icv_ap_read(env, ri);
47
@@ -XXX,XX +XXX,XX @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
GICv3CPUState *cs = icc_cs_from_env(env);
49
50
int regno = ri->opc2 & 3;
51
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
52
+ int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
53
54
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
55
icv_ap_write(env, ri, value);
56
@@ -XXX,XX +XXX,XX @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
{
58
GICv3CPUState *cs = icc_cs_from_env(env);
59
int regno = ri->opc2 & 3;
60
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
61
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
62
uint64_t value;
63
64
value = cs->ich_apr[grp][regno];
65
@@ -XXX,XX +XXX,XX @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
66
{
67
GICv3CPUState *cs = icc_cs_from_env(env);
68
int regno = ri->opc2 & 3;
69
- int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
70
+ int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
71
72
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
73
59
74
--
60
--
75
2.17.1
61
2.20.1
76
62
77
63
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_access_valid().
Its callers now all have an attrs value to hand, so we can
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
---
exec.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Change the representation of this field such that it is easy
to set from vector code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 5 ++++-
target/arm/helper.c | 19 +++++++++++++++----
target/arm/neon_helper.c | 2 +-
target/arm/vec_helper.c | 2 +-
4 files changed, 21 insertions(+), 7 deletions(-)
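A rough model of the new representation (illustrative only): the sticky QC
flag lives in a small array that saturating helpers can write without a
read-modify-write of the whole FPSCR, and the FPSCR read coalesces it.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t qc[4];

    /* Any helper that saturates just marks its lane; no other FPSCR bits
     * need to be touched on the hot path. */
    static void record_saturation(int lane)
    {
        qc[lane] = 1;
    }

    /* Reading the architectural QC bit only needs to know whether the
     * array as a whole is non-zero. */
    static bool fpscr_qc_bit(void)
    {
        return (qc[0] | qc[1] | qc[2] | qc[3]) != 0;
    }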
14
diff --git a/exec.c b/exec.c
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
19
--- a/target/arm/cpu.h
17
+++ b/exec.c
20
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
21
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
19
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
22
ARMPredicateReg preg_tmp;
20
const uint8_t *buf, int len);
21
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
22
- bool is_write);
23
+ bool is_write, MemTxAttrs attrs);
24
25
static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
26
unsigned len, MemTxAttrs attrs)
27
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
28
#endif
23
#endif
29
24
30
return flatview_access_valid(subpage->fv, addr + subpage->base,
25
- uint32_t xregs[16];
31
- len, is_write);
26
/* We store these fpcsr fields separately for convenience. */
32
+ len, is_write, attrs);
27
+ uint32_t qc[4] QEMU_ALIGNED(16);
28
int vec_len;
29
int vec_stride;
30
31
+ uint32_t xregs[16];
32
+
33
/* Scratch space for aa32 neon expansion. */
34
uint32_t scratch[8];
35
36
@@ -XXX,XX +XXX,XX @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val);
37
#define FPCR_FZ16 (1 << 19) /* ARMv8.2+, FP16 flush-to-zero */
38
#define FPCR_FZ (1 << 24) /* Flush-to-zero enable bit */
39
#define FPCR_DN (1 << 25) /* Default NaN enable bit */
40
+#define FPCR_QC (1 << 27) /* Cumulative saturation bit */
41
42
static inline uint32_t vfp_get_fpsr(CPUARMState *env)
43
{
44
diff --git a/target/arm/helper.c b/target/arm/helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper.c
47
+++ b/target/arm/helper.c
48
@@ -XXX,XX +XXX,XX @@ static inline int vfp_exceptbits_from_host(int host_bits)
49
50
uint32_t HELPER(vfp_get_fpscr)(CPUARMState *env)
51
{
52
- int i;
53
- uint32_t fpscr;
54
+ uint32_t i, fpscr;
55
56
fpscr = env->vfp.xregs[ARM_VFP_FPSCR]
57
| (env->vfp.vec_len << 16)
58
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(vfp_get_fpscr)(CPUARMState *env)
59
/* FZ16 does not generate an input denormal exception. */
60
i |= (get_float_exception_flags(&env->vfp.fp_status_f16)
61
& ~float_flag_input_denormal);
62
-
63
fpscr |= vfp_exceptbits_from_host(i);
64
+
65
+ i = env->vfp.qc[0] | env->vfp.qc[1] | env->vfp.qc[2] | env->vfp.qc[3];
66
+ fpscr |= i ? FPCR_QC : 0;
67
+
68
return fpscr;
33
}
69
}
34
70
35
static const MemoryRegionOps subpage_ops = {
71
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
36
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
72
* (which are stored in fp_status), and the other RES0 bits
37
}
73
* in between, then we clear all of the low 16 bits.
38
74
*/
39
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
75
- env->vfp.xregs[ARM_VFP_FPSCR] = val & 0xffc80000;
40
- bool is_write)
76
+ env->vfp.xregs[ARM_VFP_FPSCR] = val & 0xf7c80000;
41
+ bool is_write, MemTxAttrs attrs)
77
env->vfp.vec_len = (val >> 16) & 7;
78
env->vfp.vec_stride = (val >> 20) & 3;
79
80
+ /*
81
+ * The bit we set within fpscr_q is arbitrary; the register as a
82
+ * whole being zero/non-zero is what counts.
83
+ */
84
+ env->vfp.qc[0] = val & FPCR_QC;
85
+ env->vfp.qc[1] = 0;
86
+ env->vfp.qc[2] = 0;
87
+ env->vfp.qc[3] = 0;
88
+
89
changed ^= val;
90
if (changed & (3 << 22)) {
91
i = (val >> 22) & 3;
92
diff --git a/target/arm/neon_helper.c b/target/arm/neon_helper.c
93
index XXXXXXX..XXXXXXX 100644
94
--- a/target/arm/neon_helper.c
95
+++ b/target/arm/neon_helper.c
96
@@ -XXX,XX +XXX,XX @@
97
#define SIGNBIT (uint32_t)0x80000000
98
#define SIGNBIT64 ((uint64_t)1 << 63)
99
100
-#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
101
+#define SET_QC() env->vfp.qc[0] = 1
102
103
#define NEON_TYPE1(name, type) \
104
typedef struct \
105
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/vec_helper.c
108
+++ b/target/arm/vec_helper.c
109
@@ -XXX,XX +XXX,XX @@
110
#define H4(x) (x)
111
#endif
112
113
-#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
114
+#define SET_QC() env->vfp.qc[0] = 1
115
116
static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
42
{
117
{
43
MemoryRegion *mr;
44
hwaddr l, xlat;
45
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
46
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
47
if (!memory_access_is_direct(mr, is_write)) {
48
l = memory_access_size(mr, l, addr);
49
- /* When our callers all have attrs we'll pass them through here */
50
- if (!memory_region_access_valid(mr, xlat, l, is_write,
51
- MEMTXATTRS_UNSPECIFIED)) {
52
+ if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
53
return false;
54
}
55
}
56
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
57
58
rcu_read_lock();
59
fv = address_space_to_flatview(as);
60
- result = flatview_access_valid(fv, addr, len, is_write);
61
+ result = flatview_access_valid(fv, addr, len, is_write, attrs);
62
rcu_read_unlock();
63
return result;
64
}
65
--
118
--
66
2.17.1
119
2.20.1
67
120
68
121
From: Igor Mammedov <imammedo@redhat.com>

When QEMU is started with the following CLI
  -machine virt,gic-version=3,accel=kvm -cpu host -bios AAVMF_CODE.fd
it crashes with an abort at
accel/kvm/kvm-all.c:2164:
KVM_SET_DEVICE_ATTR failed: Group 6 attr 0x000000000000c665: Invalid argument

This is caused by the implicit dependency of kvm_arm_gicv3_reset() on
arm_gicv3_icc_reset(), where the latter is called by the CPU reset
callback.

However commit:
  3b77f6c arm/boot: split load_dtb() from arm_load_kernel()
broke CPU reset callback registration in case the

  arm_load_kernel()
  ...
  if (!info->kernel_filename || info->firmware_loaded)

branch is taken, i.e. it is sufficient to provide firmware or to omit the
kernel on the CLI to skip CPU reset callback registration, whereas before
the offending commit the callback was registered unconditionally.

Fix it by registering the callback right at the beginning of
arm_load_kernel() unconditionally instead of doing it at the end.

NOTE:
we should probably eliminate that dependency anyway, and also separate
the arch CPU reset parts from arm_load_kernel() into the CPU itself, but
that is refactoring I would probably have to do later anyway for CPU
hotplug to work.

Reported-by: Auger Eric <eric.auger@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527070950-208350-1-git-send-email-imammedo@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/boot.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

For same-sign saturation, we have tcg vector operations. We can
compute the QC bit by comparing the saturated value against the
unsaturated value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.h | 33 +++++++
target/arm/translate.h | 4 +
target/arm/translate-a64.c | 36 ++++----
target/arm/translate.c | 172 +++++++++++++++++++++++++++++++------
target/arm/vec_helper.c | 130 ++++++++++++++++++++++++++++
5 files changed, 331 insertions(+), 44 deletions(-)
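A plain-C sketch of the trick described above (illustrative, not the TCG
implementation): compute the wrapped result and the saturated result, and
accumulate QC from their comparison.

    #include <stdbool.h>
    #include <stdint.h>

    /* Unsigned 8-bit saturating add, recording saturation the same way the
     * vector expansion does: saturate, then compare against the wrapped
     * (unsaturated) sum and OR the difference into a sticky flag. */
    static uint8_t uqadd8(uint8_t a, uint8_t b, bool *qc)
    {
        uint8_t wrapped = a + b;                            /* plain add */
        uint8_t sat = (wrapped < a) ? UINT8_MAX : wrapped;  /* saturating add */

        *qc |= (sat != wrapped);   /* QC is set exactly when they differ */
        return sat;
    }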
46
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
19
diff --git a/target/arm/helper.h b/target/arm/helper.h
47
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/arm/boot.c
21
--- a/target/arm/helper.h
49
+++ b/hw/arm/boot.c
22
+++ b/target/arm/helper.h
50
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
23
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_fmla_idx_s, TCG_CALL_NO_RWG,
51
static const ARMInsnFixup *primary_loader;
24
DEF_HELPER_FLAGS_6(gvec_fmla_idx_d, TCG_CALL_NO_RWG,
52
AddressSpace *as = arm_boot_address_space(cpu, info);
25
void, ptr, ptr, ptr, ptr, ptr, i32)
53
26
54
+ /* CPU objects (unlike devices) are not automatically reset on system
27
+DEF_HELPER_FLAGS_5(gvec_uqadd_b, TCG_CALL_NO_RWG,
55
+ * reset, so we must always register a handler to do so. If we're
28
+ void, ptr, ptr, ptr, ptr, i32)
56
+ * actually loading a kernel, the handler is also responsible for
29
+DEF_HELPER_FLAGS_5(gvec_uqadd_h, TCG_CALL_NO_RWG,
57
+ * arranging that we start it correctly.
30
+ void, ptr, ptr, ptr, ptr, i32)
58
+ */
31
+DEF_HELPER_FLAGS_5(gvec_uqadd_s, TCG_CALL_NO_RWG,
59
+ for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
32
+ void, ptr, ptr, ptr, ptr, i32)
60
+ qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
33
+DEF_HELPER_FLAGS_5(gvec_uqadd_d, TCG_CALL_NO_RWG,
61
+ }
34
+ void, ptr, ptr, ptr, ptr, i32)
62
+
35
+DEF_HELPER_FLAGS_5(gvec_sqadd_b, TCG_CALL_NO_RWG,
63
/* The board code is not supposed to set secure_board_setup unless
36
+ void, ptr, ptr, ptr, ptr, i32)
64
* running its code in secure mode is actually possible, and KVM
37
+DEF_HELPER_FLAGS_5(gvec_sqadd_h, TCG_CALL_NO_RWG,
65
* doesn't support secure.
38
+ void, ptr, ptr, ptr, ptr, i32)
66
@@ -XXX,XX +XXX,XX @@ void arm_load_kernel(ARMCPU *cpu, struct arm_boot_info *info)
39
+DEF_HELPER_FLAGS_5(gvec_sqadd_s, TCG_CALL_NO_RWG,
67
ARM_CPU(cs)->env.boot_info = info;
40
+ void, ptr, ptr, ptr, ptr, i32)
41
+DEF_HELPER_FLAGS_5(gvec_sqadd_d, TCG_CALL_NO_RWG,
42
+ void, ptr, ptr, ptr, ptr, i32)
43
+DEF_HELPER_FLAGS_5(gvec_uqsub_b, TCG_CALL_NO_RWG,
44
+ void, ptr, ptr, ptr, ptr, i32)
45
+DEF_HELPER_FLAGS_5(gvec_uqsub_h, TCG_CALL_NO_RWG,
46
+ void, ptr, ptr, ptr, ptr, i32)
47
+DEF_HELPER_FLAGS_5(gvec_uqsub_s, TCG_CALL_NO_RWG,
48
+ void, ptr, ptr, ptr, ptr, i32)
49
+DEF_HELPER_FLAGS_5(gvec_uqsub_d, TCG_CALL_NO_RWG,
50
+ void, ptr, ptr, ptr, ptr, i32)
51
+DEF_HELPER_FLAGS_5(gvec_sqsub_b, TCG_CALL_NO_RWG,
52
+ void, ptr, ptr, ptr, ptr, i32)
53
+DEF_HELPER_FLAGS_5(gvec_sqsub_h, TCG_CALL_NO_RWG,
54
+ void, ptr, ptr, ptr, ptr, i32)
55
+DEF_HELPER_FLAGS_5(gvec_sqsub_s, TCG_CALL_NO_RWG,
56
+ void, ptr, ptr, ptr, ptr, i32)
57
+DEF_HELPER_FLAGS_5(gvec_sqsub_d, TCG_CALL_NO_RWG,
58
+ void, ptr, ptr, ptr, ptr, i32)
59
+
60
#ifdef TARGET_AARCH64
61
#include "helper-a64.h"
62
#include "helper-sve.h"
63
diff --git a/target/arm/translate.h b/target/arm/translate.h
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/translate.h
66
+++ b/target/arm/translate.h
67
@@ -XXX,XX +XXX,XX @@ extern const GVecGen2i ssra_op[4];
68
extern const GVecGen2i usra_op[4];
69
extern const GVecGen2i sri_op[4];
70
extern const GVecGen2i sli_op[4];
71
+extern const GVecGen4 uqadd_op[4];
72
+extern const GVecGen4 sqadd_op[4];
73
+extern const GVecGen4 uqsub_op[4];
74
+extern const GVecGen4 sqsub_op[4];
75
void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
76
77
/*
78
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/translate-a64.c
81
+++ b/target/arm/translate-a64.c
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
68
}
83
}
69
84
70
- /* CPU objects (unlike devices) are not automatically reset on system
85
switch (opcode) {
71
- * reset, so we must always register a handler to do so. If we're
86
+ case 0x01: /* SQADD, UQADD */
72
- * actually loading a kernel, the handler is also responsible for
87
+ tcg_gen_gvec_4(vec_full_reg_offset(s, rd),
73
- * arranging that we start it correctly.
88
+ offsetof(CPUARMState, vfp.qc),
74
- */
89
+ vec_full_reg_offset(s, rn),
75
- for (cs = first_cpu; cs; cs = CPU_NEXT(cs)) {
90
+ vec_full_reg_offset(s, rm),
76
- qemu_register_reset(do_cpu_reset, ARM_CPU(cs));
91
+ is_q ? 16 : 8, vec_full_reg_size(s),
77
- }
92
+ (u ? uqadd_op : sqadd_op) + size);
78
-
93
+ return;
79
if (!info->skip_dtb_autoload && have_dtb(info)) {
94
+ case 0x05: /* SQSUB, UQSUB */
80
if (arm_load_dtb(info->dtb_start, info, info->dtb_limit, as) < 0) {
95
+ tcg_gen_gvec_4(vec_full_reg_offset(s, rd),
81
exit(1);
96
+ offsetof(CPUARMState, vfp.qc),
97
+ vec_full_reg_offset(s, rn),
98
+ vec_full_reg_offset(s, rm),
99
+ is_q ? 16 : 8, vec_full_reg_size(s),
100
+ (u ? uqsub_op : sqsub_op) + size);
101
+ return;
102
case 0x0c: /* SMAX, UMAX */
103
if (u) {
104
gen_gvec_fn3(s, is_q, rd, rn, rm, tcg_gen_gvec_umax, size);
105
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
106
genfn = fns[size][u];
107
break;
108
}
109
- case 0x1: /* SQADD, UQADD */
110
- {
111
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
112
- { gen_helper_neon_qadd_s8, gen_helper_neon_qadd_u8 },
113
- { gen_helper_neon_qadd_s16, gen_helper_neon_qadd_u16 },
114
- { gen_helper_neon_qadd_s32, gen_helper_neon_qadd_u32 },
115
- };
116
- genenvfn = fns[size][u];
117
- break;
118
- }
119
case 0x2: /* SRHADD, URHADD */
120
{
121
static NeonGenTwoOpFn * const fns[3][2] = {
122
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
123
genfn = fns[size][u];
124
break;
125
}
126
- case 0x5: /* SQSUB, UQSUB */
127
- {
128
- static NeonGenTwoOpEnvFn * const fns[3][2] = {
129
- { gen_helper_neon_qsub_s8, gen_helper_neon_qsub_u8 },
130
- { gen_helper_neon_qsub_s16, gen_helper_neon_qsub_u16 },
131
- { gen_helper_neon_qsub_s32, gen_helper_neon_qsub_u32 },
132
- };
133
- genenvfn = fns[size][u];
134
- break;
135
- }
136
case 0x8: /* SSHL, USHL */
137
{
138
static NeonGenTwoOpFn * const fns[3][2] = {
139
diff --git a/target/arm/translate.c b/target/arm/translate.c
140
index XXXXXXX..XXXXXXX 100644
141
--- a/target/arm/translate.c
142
+++ b/target/arm/translate.c
143
@@ -XXX,XX +XXX,XX @@ const GVecGen3 cmtst_op[4] = {
144
.vece = MO_64 },
145
};
146
147
+static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
148
+ TCGv_vec a, TCGv_vec b)
149
+{
150
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
151
+ tcg_gen_add_vec(vece, x, a, b);
152
+ tcg_gen_usadd_vec(vece, t, a, b);
153
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
154
+ tcg_gen_or_vec(vece, sat, sat, x);
155
+ tcg_temp_free_vec(x);
156
+}
157
+
158
+const GVecGen4 uqadd_op[4] = {
159
+ { .fniv = gen_uqadd_vec,
160
+ .fno = gen_helper_gvec_uqadd_b,
161
+ .opc = INDEX_op_usadd_vec,
162
+ .write_aofs = true,
163
+ .vece = MO_8 },
164
+ { .fniv = gen_uqadd_vec,
165
+ .fno = gen_helper_gvec_uqadd_h,
166
+ .opc = INDEX_op_usadd_vec,
167
+ .write_aofs = true,
168
+ .vece = MO_16 },
169
+ { .fniv = gen_uqadd_vec,
170
+ .fno = gen_helper_gvec_uqadd_s,
171
+ .opc = INDEX_op_usadd_vec,
172
+ .write_aofs = true,
173
+ .vece = MO_32 },
174
+ { .fniv = gen_uqadd_vec,
175
+ .fno = gen_helper_gvec_uqadd_d,
176
+ .opc = INDEX_op_usadd_vec,
177
+ .write_aofs = true,
178
+ .vece = MO_64 },
179
+};
180
+
181
+static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
182
+ TCGv_vec a, TCGv_vec b)
183
+{
184
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
185
+ tcg_gen_add_vec(vece, x, a, b);
186
+ tcg_gen_ssadd_vec(vece, t, a, b);
187
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
188
+ tcg_gen_or_vec(vece, sat, sat, x);
189
+ tcg_temp_free_vec(x);
190
+}
191
+
192
+const GVecGen4 sqadd_op[4] = {
193
+ { .fniv = gen_sqadd_vec,
194
+ .fno = gen_helper_gvec_sqadd_b,
195
+ .opc = INDEX_op_ssadd_vec,
196
+ .write_aofs = true,
197
+ .vece = MO_8 },
198
+ { .fniv = gen_sqadd_vec,
199
+ .fno = gen_helper_gvec_sqadd_h,
200
+ .opc = INDEX_op_ssadd_vec,
201
+ .write_aofs = true,
202
+ .vece = MO_16 },
203
+ { .fniv = gen_sqadd_vec,
204
+ .fno = gen_helper_gvec_sqadd_s,
205
+ .opc = INDEX_op_ssadd_vec,
206
+ .write_aofs = true,
207
+ .vece = MO_32 },
208
+ { .fniv = gen_sqadd_vec,
209
+ .fno = gen_helper_gvec_sqadd_d,
210
+ .opc = INDEX_op_ssadd_vec,
211
+ .write_aofs = true,
212
+ .vece = MO_64 },
213
+};
214
+
215
+static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
216
+ TCGv_vec a, TCGv_vec b)
217
+{
218
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
219
+ tcg_gen_sub_vec(vece, x, a, b);
220
+ tcg_gen_ussub_vec(vece, t, a, b);
221
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
222
+ tcg_gen_or_vec(vece, sat, sat, x);
223
+ tcg_temp_free_vec(x);
224
+}
225
+
226
+const GVecGen4 uqsub_op[4] = {
227
+ { .fniv = gen_uqsub_vec,
228
+ .fno = gen_helper_gvec_uqsub_b,
229
+ .opc = INDEX_op_ussub_vec,
230
+ .write_aofs = true,
231
+ .vece = MO_8 },
232
+ { .fniv = gen_uqsub_vec,
233
+ .fno = gen_helper_gvec_uqsub_h,
234
+ .opc = INDEX_op_ussub_vec,
235
+ .write_aofs = true,
236
+ .vece = MO_16 },
237
+ { .fniv = gen_uqsub_vec,
238
+ .fno = gen_helper_gvec_uqsub_s,
239
+ .opc = INDEX_op_ussub_vec,
240
+ .write_aofs = true,
241
+ .vece = MO_32 },
242
+ { .fniv = gen_uqsub_vec,
243
+ .fno = gen_helper_gvec_uqsub_d,
244
+ .opc = INDEX_op_ussub_vec,
245
+ .write_aofs = true,
246
+ .vece = MO_64 },
247
+};
248
+
249
+static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
250
+ TCGv_vec a, TCGv_vec b)
251
+{
252
+ TCGv_vec x = tcg_temp_new_vec_matching(t);
253
+ tcg_gen_sub_vec(vece, x, a, b);
254
+ tcg_gen_sssub_vec(vece, t, a, b);
255
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
256
+ tcg_gen_or_vec(vece, sat, sat, x);
257
+ tcg_temp_free_vec(x);
258
+}
259
+
260
+const GVecGen4 sqsub_op[4] = {
261
+ { .fniv = gen_sqsub_vec,
262
+ .fno = gen_helper_gvec_sqsub_b,
263
+ .opc = INDEX_op_sssub_vec,
264
+ .write_aofs = true,
265
+ .vece = MO_8 },
266
+ { .fniv = gen_sqsub_vec,
267
+ .fno = gen_helper_gvec_sqsub_h,
268
+ .opc = INDEX_op_sssub_vec,
269
+ .write_aofs = true,
270
+ .vece = MO_16 },
271
+ { .fniv = gen_sqsub_vec,
272
+ .fno = gen_helper_gvec_sqsub_s,
273
+ .opc = INDEX_op_sssub_vec,
274
+ .write_aofs = true,
275
+ .vece = MO_32 },
276
+ { .fniv = gen_sqsub_vec,
277
+ .fno = gen_helper_gvec_sqsub_d,
278
+ .opc = INDEX_op_sssub_vec,
279
+ .write_aofs = true,
280
+ .vece = MO_64 },
281
+};
282
+
283
/* Translate a NEON data processing instruction. Return nonzero if the
284
instruction is invalid.
285
We process data in a mixture of 32-bit and 64-bit chunks.
286
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
287
}
288
return 0;
289
290
+ case NEON_3R_VQADD:
291
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
292
+ rn_ofs, rm_ofs, vec_size, vec_size,
293
+ (u ? uqadd_op : sqadd_op) + size);
294
+ break;
295
+
296
+ case NEON_3R_VQSUB:
297
+ tcg_gen_gvec_4(rd_ofs, offsetof(CPUARMState, vfp.qc),
298
+ rn_ofs, rm_ofs, vec_size, vec_size,
299
+ (u ? uqsub_op : sqsub_op) + size);
300
+ break;
301
+
302
case NEON_3R_VMUL: /* VMUL */
303
if (u) {
304
/* Polynomial case allows only P8 and is handled below. */
305
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
306
neon_load_reg64(cpu_V0, rn + pass);
307
neon_load_reg64(cpu_V1, rm + pass);
308
switch (op) {
309
- case NEON_3R_VQADD:
310
- if (u) {
311
- gen_helper_neon_qadd_u64(cpu_V0, cpu_env,
312
- cpu_V0, cpu_V1);
313
- } else {
314
- gen_helper_neon_qadd_s64(cpu_V0, cpu_env,
315
- cpu_V0, cpu_V1);
316
- }
317
- break;
318
- case NEON_3R_VQSUB:
319
- if (u) {
320
- gen_helper_neon_qsub_u64(cpu_V0, cpu_env,
321
- cpu_V0, cpu_V1);
322
- } else {
323
- gen_helper_neon_qsub_s64(cpu_V0, cpu_env,
324
- cpu_V0, cpu_V1);
325
- }
326
- break;
327
case NEON_3R_VSHL:
328
if (u) {
329
gen_helper_neon_shl_u64(cpu_V0, cpu_V1, cpu_V0);
330
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
331
case NEON_3R_VHADD:
332
GEN_NEON_INTEGER_OP(hadd);
333
break;
334
- case NEON_3R_VQADD:
335
- GEN_NEON_INTEGER_OP_ENV(qadd);
336
- break;
337
case NEON_3R_VRHADD:
338
GEN_NEON_INTEGER_OP(rhadd);
339
break;
340
case NEON_3R_VHSUB:
341
GEN_NEON_INTEGER_OP(hsub);
342
break;
343
- case NEON_3R_VQSUB:
344
- GEN_NEON_INTEGER_OP_ENV(qsub);
345
- break;
346
case NEON_3R_VSHL:
347
GEN_NEON_INTEGER_OP(shl);
348
break;
349
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
350
index XXXXXXX..XXXXXXX 100644
351
--- a/target/arm/vec_helper.c
352
+++ b/target/arm/vec_helper.c
353
@@ -XXX,XX +XXX,XX @@ DO_FMLA_IDX(gvec_fmla_idx_s, float32, H4)
354
DO_FMLA_IDX(gvec_fmla_idx_d, float64, )
355
356
#undef DO_FMLA_IDX
357
+
358
+#define DO_SAT(NAME, WTYPE, TYPEN, TYPEM, OP, MIN, MAX) \
359
+void HELPER(NAME)(void *vd, void *vq, void *vn, void *vm, uint32_t desc) \
360
+{ \
361
+ intptr_t i, oprsz = simd_oprsz(desc); \
362
+ TYPEN *d = vd, *n = vn; TYPEM *m = vm; \
363
+ bool q = false; \
364
+ for (i = 0; i < oprsz / sizeof(TYPEN); i++) { \
365
+ WTYPE dd = (WTYPE)n[i] OP m[i]; \
366
+ if (dd < MIN) { \
367
+ dd = MIN; \
368
+ q = true; \
369
+ } else if (dd > MAX) { \
370
+ dd = MAX; \
371
+ q = true; \
372
+ } \
373
+ d[i] = dd; \
374
+ } \
375
+ if (q) { \
376
+ uint32_t *qc = vq; \
377
+ qc[0] = 1; \
378
+ } \
379
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
380
+}
381
+
382
+DO_SAT(gvec_uqadd_b, int, uint8_t, uint8_t, +, 0, UINT8_MAX)
383
+DO_SAT(gvec_uqadd_h, int, uint16_t, uint16_t, +, 0, UINT16_MAX)
384
+DO_SAT(gvec_uqadd_s, int64_t, uint32_t, uint32_t, +, 0, UINT32_MAX)
385
+
386
+DO_SAT(gvec_sqadd_b, int, int8_t, int8_t, +, INT8_MIN, INT8_MAX)
387
+DO_SAT(gvec_sqadd_h, int, int16_t, int16_t, +, INT16_MIN, INT16_MAX)
388
+DO_SAT(gvec_sqadd_s, int64_t, int32_t, int32_t, +, INT32_MIN, INT32_MAX)
389
+
390
+DO_SAT(gvec_uqsub_b, int, uint8_t, uint8_t, -, 0, UINT8_MAX)
391
+DO_SAT(gvec_uqsub_h, int, uint16_t, uint16_t, -, 0, UINT16_MAX)
392
+DO_SAT(gvec_uqsub_s, int64_t, uint32_t, uint32_t, -, 0, UINT32_MAX)
393
+
394
+DO_SAT(gvec_sqsub_b, int, int8_t, int8_t, -, INT8_MIN, INT8_MAX)
395
+DO_SAT(gvec_sqsub_h, int, int16_t, int16_t, -, INT16_MIN, INT16_MAX)
396
+DO_SAT(gvec_sqsub_s, int64_t, int32_t, int32_t, -, INT32_MIN, INT32_MAX)
397
+
398
+#undef DO_SAT
399
+
400
+void HELPER(gvec_uqadd_d)(void *vd, void *vq, void *vn,
401
+ void *vm, uint32_t desc)
402
+{
403
+ intptr_t i, oprsz = simd_oprsz(desc);
404
+ uint64_t *d = vd, *n = vn, *m = vm;
405
+ bool q = false;
406
+
407
+ for (i = 0; i < oprsz / 8; i++) {
408
+ uint64_t nn = n[i], mm = m[i], dd = nn + mm;
409
+ if (dd < nn) {
410
+ dd = UINT64_MAX;
411
+ q = true;
412
+ }
413
+ d[i] = dd;
414
+ }
415
+ if (q) {
416
+ uint32_t *qc = vq;
417
+ qc[0] = 1;
418
+ }
419
+ clear_tail(d, oprsz, simd_maxsz(desc));
420
+}
421
+
422
+void HELPER(gvec_uqsub_d)(void *vd, void *vq, void *vn,
423
+ void *vm, uint32_t desc)
424
+{
425
+ intptr_t i, oprsz = simd_oprsz(desc);
426
+ uint64_t *d = vd, *n = vn, *m = vm;
427
+ bool q = false;
428
+
429
+ for (i = 0; i < oprsz / 8; i++) {
430
+ uint64_t nn = n[i], mm = m[i], dd = nn - mm;
431
+ if (nn < mm) {
432
+ dd = 0;
433
+ q = true;
434
+ }
435
+ d[i] = dd;
436
+ }
437
+ if (q) {
438
+ uint32_t *qc = vq;
439
+ qc[0] = 1;
440
+ }
441
+ clear_tail(d, oprsz, simd_maxsz(desc));
442
+}
443
+
444
+void HELPER(gvec_sqadd_d)(void *vd, void *vq, void *vn,
445
+ void *vm, uint32_t desc)
446
+{
447
+ intptr_t i, oprsz = simd_oprsz(desc);
448
+ int64_t *d = vd, *n = vn, *m = vm;
449
+ bool q = false;
450
+
451
+ for (i = 0; i < oprsz / 8; i++) {
452
+ int64_t nn = n[i], mm = m[i], dd = nn + mm;
453
+ if (((dd ^ nn) & ~(nn ^ mm)) & INT64_MIN) {
454
+ dd = (nn >> 63) ^ ~INT64_MIN;
455
+ q = true;
456
+ }
457
+ d[i] = dd;
458
+ }
459
+ if (q) {
460
+ uint32_t *qc = vq;
461
+ qc[0] = 1;
462
+ }
463
+ clear_tail(d, oprsz, simd_maxsz(desc));
464
+}
465
+
466
+void HELPER(gvec_sqsub_d)(void *vd, void *vq, void *vn,
467
+ void *vm, uint32_t desc)
468
+{
469
+ intptr_t i, oprsz = simd_oprsz(desc);
470
+ int64_t *d = vd, *n = vn, *m = vm;
471
+ bool q = false;
472
+
473
+ for (i = 0; i < oprsz / 8; i++) {
474
+ int64_t nn = n[i], mm = m[i], dd = nn - mm;
475
+ if (((dd ^ nn) & (nn ^ mm)) & INT64_MIN) {
476
+ dd = (nn >> 63) ^ ~INT64_MIN;
477
+ q = true;
478
+ }
479
+ d[i] = dd;
480
+ }
481
+ if (q) {
482
+ uint32_t *qc = vq;
483
+ qc[0] = 1;
484
+ }
485
+ clear_tail(d, oprsz, simd_maxsz(desc));
486
+}
82
--
487
--
83
2.17.1
488
2.20.1
84
489
85
490
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
---
include/exec/memory.h | 4 +++-
include/sysemu/dma.h | 3 ++-
exec.c | 3 ++-
target/s390x/diag.c | 6 ++++--
target/s390x/excp_helper.c | 3 ++-
target/s390x/mmu_helper.c | 3 ++-
target/s390x/sigp.c | 3 ++-
7 files changed, 17 insertions(+), 8 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Fortunately, the functions affected are so far only called from SVE,
so there is no tail to be cleared. But as we convert more of AdvSIMD
to gvec, this will matter.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/vec_helper.c | 2 ++
1 file changed, 2 insertions(+)
20
diff --git a/include/exec/memory.h b/include/exec/memory.h
15
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/include/exec/memory.h
17
--- a/target/arm/vec_helper.c
23
+++ b/include/exec/memory.h
18
+++ b/target/arm/vec_helper.c
24
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
19
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *stat, uint32_t desc) \
25
* @addr: address within that address space
20
for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
26
* @len: length of the area to be checked
21
d[i] = FUNC(n[i], stat); \
27
* @is_write: indicates the transfer direction
22
} \
28
+ * @attrs: memory attributes
23
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
29
*/
30
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
31
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
32
+ bool is_write, MemTxAttrs attrs);
33
34
/* address_space_map: map a physical memory region into a host virtual address
35
*
36
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
37
index XXXXXXX..XXXXXXX 100644
38
--- a/include/sysemu/dma.h
39
+++ b/include/sysemu/dma.h
40
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
41
DMADirection dir)
42
{
43
return address_space_access_valid(as, addr, len,
44
- dir == DMA_DIRECTION_FROM_DEVICE);
45
+ dir == DMA_DIRECTION_FROM_DEVICE,
46
+ MEMTXATTRS_UNSPECIFIED);
47
}
24
}
48
25
49
static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
26
DO_2OP(gvec_frecpe_h, helper_recpe_f16, float16)
50
diff --git a/exec.c b/exec.c
27
@@ -XXX,XX +XXX,XX @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
51
index XXXXXXX..XXXXXXX 100644
28
for (i = 0; i < oprsz / sizeof(TYPE); i++) { \
52
--- a/exec.c
29
d[i] = FUNC(n[i], m[i], stat); \
53
+++ b/exec.c
30
} \
54
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
31
+ clear_tail(d, oprsz, simd_maxsz(desc)); \
55
}
32
}
56
33
57
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
34
DO_3OP(gvec_fadd_h, float16_add, float16)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
---
 include/exec/memory.h      | 4 +++-
 include/sysemu/dma.h       | 3 ++-
 exec.c                     | 3 ++-
 target/s390x/diag.c        | 6 ++++--
 target/s390x/excp_helper.c | 3 ++-
 target/s390x/mmu_helper.c  | 3 ++-
 target/s390x/sigp.c        | 3 ++-
 7 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
  * @addr: address within that address space
  * @len: length of the area to be checked
  * @is_write: indicates the transfer direction
+ * @attrs: memory attributes
  */
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
+                                bool is_write, MemTxAttrs attrs);

 /* address_space_map: map a physical memory region into a host virtual address
  *
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
                                     DMADirection dir)
 {
     return address_space_access_valid(as, addr, len,
-                                      dir == DMA_DIRECTION_FROM_DEVICE);
+                                      dir == DMA_DIRECTION_FROM_DEVICE,
+                                      MEMTXATTRS_UNSPECIFIED);
 }

 static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
 }

 bool address_space_access_valid(AddressSpace *as, hwaddr addr,
-                                int len, bool is_write)
+                                int len, bool is_write,
+                                MemTxAttrs attrs)
 {
     FlatView *fv;
     bool result;
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/diag.c
+++ b/target/s390x/diag.c
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
         return;
     }
     if (!address_space_access_valid(&address_space_memory, addr,
-                                    sizeof(IplParameterBlock), false)) {
+                                    sizeof(IplParameterBlock), false,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
         return;
     }
@@ -XXX,XX +XXX,XX @@ out:
         return;
     }
     if (!address_space_access_valid(&address_space_memory, addr,
-                                    sizeof(IplParameterBlock), true)) {
+                                    sizeof(IplParameterBlock), true,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
         return;
     }
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,

     /* check out of RAM access */
     if (!address_space_access_valid(&address_space_memory, raddr,
-                                    TARGET_PAGE_SIZE, rw)) {
+                                    TARGET_PAGE_SIZE, rw,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
                 (uint64_t)raddr, (uint64_t)ram_size);
         trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/mmu_helper.c
+++ b/target/s390x/mmu_helper.c
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
             return ret;
         }
         if (!address_space_access_valid(&address_space_memory, pages[i],
-                                        TARGET_PAGE_SIZE, is_write)) {
+                                        TARGET_PAGE_SIZE, is_write,
+                                        MEMTXATTRS_UNSPECIFIED)) {
             trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
             return -EFAULT;
         }
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
     cpu_synchronize_state(cs);

     if (!address_space_access_valid(&address_space_memory, addr,
-                                    sizeof(struct LowCore), false)) {
+                                    sizeof(struct LowCore), false,
+                                    MEMTXATTRS_UNSPECIFIED)) {
         set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
         return;
     }
--
2.17.1
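The pattern throughout this patch is mechanical: callers that have no
real transaction attributes to pass keep working by handing in
MEMTXATTRS_UNSPECIFIED, while callers that do care can now describe the
transaction they are checking. A hedged illustration of the latter
(MemTxAttrs does have a .secure field; the surrounding code is invented
for the example):

    /* Example only: ask whether a Secure-world access would be valid,
     * rather than passing MEMTXATTRS_UNSPECIFIED.
     */
    MemTxAttrs attrs = { .secure = 1 };

    if (!address_space_access_valid(as, addr, len, is_write, attrs)) {
        /* region is not accessible with these attributes */
    }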
diff view generated by jsdifflib
From: Shannon Zhao <zhaoshenglong@huawei.com>

The code forgot to increase clroffset during the loop, so it only
cleared the first 4 bytes.

Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gicv3_kvm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
         if (clroffset != 0) {
             reg = 0;
             kvm_gicd_access(s, clroffset, &reg, true);
+            clroffset += 4;
         }
         reg = *gic_bmp_ptr32(bmp, irq);
         kvm_gicd_access(s, offset, &reg, true);
--
2.17.1
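The bug is the classic "forgot to advance the offset" loop: the clear
offset has to move forward by one 32-bit register per iteration, in
step with the write offset, otherwise every pass rewrites the same
first register. A simplified sketch of the pattern (write_reg, bitmap
and nirqs are illustrative names, not the actual QEMU code):

    /* Push a bitmap to the distributor, one 32-bit register per 32 IRQs,
     * clearing the companion register range as we go.
     */
    for (irq = 0; irq < nirqs; irq += 32) {
        if (clroffset != 0) {
            write_reg(clroffset, 0);
            clroffset += 4;   /* the missing line: without it only the
                                 first register ever gets cleared */
        }
        write_reg(offset, bitmap[irq / 32]);
        offset += 4;
    }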
From: Sandra Loosemore <sandra@codesourcery.com>

Per the GDB remote protocol documentation

  https://sourceware.org/gdb/current/onlinedocs/gdb/Packets.html#index-vKill-packet

the debug stub is expected to send a reply to the 'vKill' packet. At
least some versions of GDB crash if the gdb stub simply exits without
sending a reply. This patch fixes QEMU's gdb stub to conform to the
expected behavior.

Note that QEMU's existing handling of the legacy 'k' packet is
correct: in that case GDB does not expect a reply, and QEMU does not
send one.

Signed-off-by: Sandra Loosemore <sandra@codesourcery.com>
Message-id: 1550008033-26540-1-git-send-email-sandra@codesourcery.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 gdbstub.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/gdbstub.c b/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -XXX,XX +XXX,XX @@ static int gdb_handle_packet(GDBState *s, const char *line_buf)
             break;
         } else if (strncmp(p, "Kill;", 5) == 0) {
             /* Kill the target */
+            put_packet(s, "OK");
             error_report("QEMU: Terminated via GDBstub");
             exit(0);
         } else {
--
2.20.1
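On the wire, the end of a session with this fix looks schematically
like this (the 'OK' checksum is 0x9a; the request checksum is elided):

    GDB  -> $vKill;<pid>#<checksum>    kill request from the debugger
    stub -> +                          packet acknowledgement
    stub -> $OK#9a                     reply added by this patch; the stub
                                       then reports termination and exits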
diff view generated by jsdifflib