First arm pullreq for 6.1 cycle. The big stuff here is RTH's alignment series.

thanks
-- PMM

The following changes since commit ccdf06c1db192152ac70a1dd974c624f566cb7d4:

  Open 6.1 development tree (2021-04-30 11:15:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210430

for you to fetch changes up to a6091108aa44e9017af4ca13c43f55a629e3744c:

  hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows (2021-04-30 11:16:52 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
 * hw: add compat machines for 6.1
 * Fault misaligned accesses where the architecture requires it
 * Fix some corner cases of MTE faults (notably with misaligned accesses)
 * Make Thumb store insns UNDEF for Rn==1111
 * hw/arm/smmuv3: Support 16K translation granule

----------------------------------------------------------------
Cornelia Huck (1):
      hw: add compat machines for 6.1

Kunkun Jiang (1):
      hw/arm/smmuv3: Support 16K translation granule

Peter Maydell (2):
      target/arm: Make Thumb store insns UNDEF for Rn==1111
      hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows

Richard Henderson (39):
      target/arm: Fix mte_checkN
      target/arm: Split out mte_probe_int
      target/arm: Fix unaligned checks for mte_check1, mte_probe1
      test/tcg/aarch64: Add mte-5
      target/arm: Replace MTEDESC ESIZE+TSIZE with SIZEM1
      target/arm: Merge mte_check1, mte_checkN
      target/arm: Rename mte_probe1 to mte_probe
      target/arm: Simplify sve mte checking
      target/arm: Remove log2_esize parameter to gen_mte_checkN
      target/arm: Fix decode of align in VLDST_single
      target/arm: Rename TBFLAG_A32, SCTLR_B
      target/arm: Rename TBFLAG_ANY, PSTATE_SS
      target/arm: Add wrapper macros for accessing tbflags
      target/arm: Introduce CPUARMTBFlags
      target/arm: Move mode specific TB flags to tb->cs_base
      target/arm: Move TBFLAG_AM32 bits to the top
      target/arm: Move TBFLAG_ANY bits to the bottom
      target/arm: Add ALIGN_MEM to TBFLAG_ANY
      target/arm: Adjust gen_aa32_{ld, st}_i32 for align+endianness
      target/arm: Merge gen_aa32_frob64 into gen_aa32_ld_i64
      target/arm: Fix SCTLR_B test for TCGv_i64 load/store
      target/arm: Adjust gen_aa32_{ld, st}_i64 for align+endianness
      target/arm: Enforce word alignment for LDRD/STRD
      target/arm: Enforce alignment for LDA/LDAH/STL/STLH
      target/arm: Enforce alignment for LDM/STM
      target/arm: Enforce alignment for RFE
      target/arm: Enforce alignment for SRS
      target/arm: Enforce alignment for VLDM/VSTM
      target/arm: Enforce alignment for VLDR/VSTR
      target/arm: Enforce alignment for VLDn (all lanes)
      target/arm: Enforce alignment for VLDn/VSTn (multiple)
      target/arm: Enforce alignment for VLDn/VSTn (single)
      target/arm: Use finalize_memop for aa64 gpr load/store
      target/arm: Use finalize_memop for aa64 fpr load/store
      target/arm: Enforce alignment for aa64 load-acq/store-rel
      target/arm: Use MemOp for size + endian in aa64 vector ld/st
      target/arm: Enforce alignment for aa64 vector LDn/STn (multiple)
      target/arm: Enforce alignment for aa64 vector LDn/STn (single)
      target/arm: Enforce alignment for sve LD1R

 include/hw/boards.h               |   3 +
 include/hw/i386/pc.h              |   3 +
 include/hw/pci-host/gpex.h        |   4 +
 target/arm/cpu.h                  | 105 ++++++++++-----
 target/arm/helper-a64.h           |   3 +-
 target/arm/internals.h            |  11 +-
 target/arm/translate-a64.h        |   2 +-
 target/arm/translate.h            |  38 ++++++
 target/arm/neon-ls.decode         |   4 +-
 hw/arm/smmuv3.c                   |   6 +-
 hw/arm/virt.c                     |   7 +-
 hw/core/machine.c                 |   5 +
 hw/i386/pc.c                      |   3 +
 hw/i386/pc_piix.c                 |  14 +-
 hw/i386/pc_q35.c                  |  13 +-
 hw/pci-host/gpex.c                |  56 +++++++-
 hw/ppc/spapr.c                    |  17 ++-
 hw/s390x/s390-virtio-ccw.c        |  14 +-
 target/arm/helper-a64.c           |   2 +-
 target/arm/helper.c               | 162 ++++++++++------
 target/arm/mte_helper.c           | 185 ++++++++++---------
 target/arm/sve_helper.c           | 100 +++++---------
 target/arm/translate-a64.c        | 236 ++++++++++++++++----------
 target/arm/translate-sve.c        |  11 +-
 target/arm/translate.c            | 274 ++++++++++++++++++++++----------
 tests/tcg/aarch64/mte-5.c         |  44 ++++++
 target/arm/translate-neon.c.inc   | 117 ++++++++++----
 target/arm/translate-vfp.c.inc    |  20 +--
 tests/tcg/aarch64/Makefile.target |   2 +-
 29 files changed, 878 insertions(+), 583 deletions(-)
 create mode 100644 tests/tcg/aarch64/mte-5.c

From: Kunkun Jiang <jiangkunkun@huawei.com>

The driver can query some bits in SMMUv3 IDR5 to learn which
translation granules are supported. Arm recommends that SMMUv3
implementations support at least 4K and 64K granules. But in
the vSMMUv3 there seems to be no reason not to support the 16K
translation granule as well. In addition, if 16K is not supported,
vSVA will fail to be enabled in the future for guest kernels using
16K pages. So it'd be better to support it.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static void smmuv3_init_regs(SMMUv3State *s)
     s->idr[3] = FIELD_DP32(s->idr[3], IDR3, RIL, 1);
     s->idr[3] = FIELD_DP32(s->idr[3], IDR3, HAD, 1);
 
-    /* 4K and 64K granule support */
+    /* 4K, 16K and 64K granule support */
     s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN4K, 1);
+    s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN16K, 1);
     s->idr[5] = FIELD_DP32(s->idr[5], IDR5, GRAN64K, 1);
     s->idr[5] = FIELD_DP32(s->idr[5], IDR5, OAS, SMMU_IDR5_OAS); /* 44 bits */
 
@@ -XXX,XX +XXX,XX @@ static int decode_cd(SMMUTransCfg *cfg, CD *cd, SMMUEventInfo *event)
 
         tg = CD_TG(cd, i);
         tt->granule_sz = tg2granule(tg, i);
-        if ((tt->granule_sz != 12 && tt->granule_sz != 16) || CD_ENDI(cd)) {
+        if ((tt->granule_sz != 12 && tt->granule_sz != 14 &&
+             tt->granule_sz != 16) || CD_ENDI(cd)) {
            goto bad_cd;
         }
 
--
2.20.1
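
As an aside, the decode_cd() check above works in terms of log2 granule
sizes, so the three granules now advertised in IDR5 correspond to
tt->granule_sz values 12, 14 and 16. A minimal sketch of that mapping,
not part of the patch (the helper name is hypothetical):

/* Accept exactly the granules the vSMMUv3 advertises in IDR5. */
static bool granule_sz_supported(int granule_sz)
{
    return granule_sz == 12 ||   /* 4K  = 2^12 bytes */
           granule_sz == 14 ||   /* 16K = 2^14 bytes */
           granule_sz == 16;     /* 64K = 2^16 bytes */
}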

The Arm ARM specifies that for Thumb encodings of the various plain
store insns, if the Rn field is 1111 then we must UNDEF. This is
different from the Arm encodings, where this case is either
UNPREDICTABLE or has well-defined behaviour. The exclusive stores,
store-release and STRD do not have this UNDEF case for any encoding.

Enforce the UNDEF for this case in the Thumb plain store insns.

Fixes: https://bugs.launchpad.net/qemu/+bug/1922887
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210408162402.5822-1-peter.maydell@linaro.org
---
 target/arm/translate.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_store_rr(DisasContext *s, arg_ldst_rr *a,
     ISSInfo issinfo = make_issinfo(s, a->rt, a->p, a->w) | ISSIsWrite;
     TCGv_i32 addr, tmp;
 
+    /*
+     * In Thumb encodings of stores Rn=1111 is UNDEF; for Arm it
+     * is either UNPREDICTABLE or has defined behaviour
+     */
+    if (s->thumb && a->rn == 15) {
+        return false;
+    }
+
     addr = op_addr_rr_pre(s, a);
 
     tmp = load_reg(s, a->rt);
@@ -XXX,XX +XXX,XX @@ static bool op_store_ri(DisasContext *s, arg_ldst_ri *a,
     ISSInfo issinfo = make_issinfo(s, a->rt, a->p, a->w) | ISSIsWrite;
     TCGv_i32 addr, tmp;
 
+    /*
+     * In Thumb encodings of stores Rn=1111 is UNDEF; for Arm it
+     * is either UNPREDICTABLE or has defined behaviour
+     */
+    if (s->thumb && a->rn == 15) {
+        return false;
+    }
+
     addr = op_addr_ri_pre(s, a);
 
     tmp = load_reg(s, a->rt);
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

We were incorrectly assuming that only the first byte of an MTE access
is checked against the tags. But per the ARM, unaligned accesses are
pre-decomposed into single-byte accesses. So by the time we reach the
actual MTE check in the ARM pseudocode, all accesses are aligned.

Therefore, the first failure is always either the first byte of the
access, or the first byte of the granule.

In addition, some of the arithmetic is off for last-first -> count.
This does not become directly visible until a later patch that passes
single bytes into this function, so ptr == ptr_last.

Buglink: https://bugs.launchpad.net/bugs/1921948
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked a comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/mte_helper.c | 40 ++++++++++++++++----------------------
 1 file changed, 18 insertions(+), 22 deletions(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
                     uint64_t ptr, uintptr_t ra)
 {
     int mmu_idx, ptr_tag, bit55;
-    uint64_t ptr_last, ptr_end, prev_page, next_page;
-    uint64_t tag_first, tag_end;
-    uint64_t tag_byte_first, tag_byte_end;
-    uint32_t esize, total, tag_count, tag_size, n, c;
+    uint64_t ptr_last, prev_page, next_page;
+    uint64_t tag_first, tag_last;
+    uint64_t tag_byte_first, tag_byte_last;
+    uint32_t total, tag_count, tag_size, n, c;
     uint8_t *mem1, *mem2;
     MMUAccessType type;
 
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    esize = FIELD_EX32(desc, MTEDESC, ESIZE);
     total = FIELD_EX32(desc, MTEDESC, TSIZE);
 
-    /* Find the addr of the end of the access, and of the last element. */
-    ptr_end = ptr + total;
-    ptr_last = ptr_end - esize;
+    /* Find the addr of the end of the access */
+    ptr_last = ptr + total - 1;
 
     /* Round the bounds to the tag granule, and compute the number of tags. */
     tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
-    tag_end = QEMU_ALIGN_UP(ptr_last, TAG_GRANULE);
-    tag_count = (tag_end - tag_first) / TAG_GRANULE;
+    tag_last = QEMU_ALIGN_DOWN(ptr_last, TAG_GRANULE);
+    tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;
 
     /* Round the bounds to twice the tag granule, and compute the bytes. */
     tag_byte_first = QEMU_ALIGN_DOWN(ptr, 2 * TAG_GRANULE);
-    tag_byte_end = QEMU_ALIGN_UP(ptr_last, 2 * TAG_GRANULE);
+    tag_byte_last = QEMU_ALIGN_DOWN(ptr_last, 2 * TAG_GRANULE);
 
     /* Locate the page boundaries. */
     prev_page = ptr & TARGET_PAGE_MASK;
     next_page = prev_page + TARGET_PAGE_SIZE;
 
-    if (likely(tag_end - prev_page <= TARGET_PAGE_SIZE)) {
+    if (likely(tag_last - prev_page <= TARGET_PAGE_SIZE)) {
         /* Memory access stays on one page. */
-        tag_size = (tag_byte_end - tag_byte_first) / (2 * TAG_GRANULE);
+        tag_size = ((tag_byte_last - tag_byte_first) / (2 * TAG_GRANULE)) + 1;
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, total,
                                   MMU_DATA_LOAD, tag_size, ra);
         if (!mem1) {
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
         mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, next_page - ptr,
                                   MMU_DATA_LOAD, tag_size, ra);
 
-        tag_size = (tag_byte_end - next_page) / (2 * TAG_GRANULE);
+        tag_size = ((tag_byte_last - next_page) / (2 * TAG_GRANULE)) + 1;
         mem2 = allocation_tag_mem(env, mmu_idx, next_page, type,
-                                  ptr_end - next_page,
+                                  ptr_last - next_page + 1,
                                   MMU_DATA_LOAD, tag_size, ra);
 
         /*
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
     }
 
     /*
-     * If we failed, we know which granule. Compute the element that
-     * is first in that granule, and signal failure on that element.
+     * If we failed, we know which granule. For the first granule, the
+     * failure address is @ptr, the first byte accessed. Otherwise the
+     * failure address is the first byte of the nth granule.
      */
     if (unlikely(n < tag_count)) {
-        uint64_t fail_ofs;
-
-        fail_ofs = tag_first + n * TAG_GRANULE - ptr;
-        fail_ofs = ROUND_UP(fail_ofs, esize);
-        mte_check_fail(env, desc, ptr + fail_ofs, ra);
+        uint64_t fault = (n == 0 ? ptr : tag_first + n * TAG_GRANULE);
+        mte_check_fail(env, desc, fault, ra);
     }
 
 done:
--
2.20.1
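
A worked example of the inclusive-bounds arithmetic the patch adopts (a
standalone sketch, not part of the patch, assuming TAG_GRANULE == 16;
the ALIGN_DOWN macro stands in for QEMU_ALIGN_DOWN):

#include <stdint.h>
#include <assert.h>

#define TAG_GRANULE 16
#define ALIGN_DOWN(x, a) ((x) & ~((uint64_t)(a) - 1))

int main(void)
{
    uint64_t ptr = 0x1008, total = 16;
    uint64_t ptr_last = ptr + total - 1;                    /* 0x1017 */
    uint64_t tag_first = ALIGN_DOWN(ptr, TAG_GRANULE);      /* 0x1000 */
    uint64_t tag_last = ALIGN_DOWN(ptr_last, TAG_GRANULE);  /* 0x1010 */
    uint32_t tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1;

    /* The 16-byte access straddles one granule boundary: two tags. */
    assert(tag_count == 2);
    return 0;
}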

From: Richard Henderson <richard.henderson@linaro.org>

Split out a helper function from mte_checkN to perform
all of the checking and address manipulation. So far,
just use this in mte_checkN itself.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/mte_helper.c | 52 +++++++++++++++++++++++++++++++----------
 1 file changed, 40 insertions(+), 12 deletions(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static int checkN(uint8_t *mem, int odd, int cmp, int count)
     return n;
 }
 
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra)
+/**
+ * mte_probe_int() - helper for mte_probe and mte_check
+ * @env: CPU environment
+ * @desc: MTEDESC descriptor
+ * @ptr: virtual address of the base of the access
+ * @fault: return virtual address of the first check failure
+ *
+ * Internal routine for both mte_probe and mte_check.
+ * Return zero on failure, filling in *fault.
+ * Return negative on trivial success for tbi disabled.
+ * Return positive on success with tbi enabled.
+ */
+static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
+                         uintptr_t ra, uint32_t total, uint64_t *fault)
 {
     int mmu_idx, ptr_tag, bit55;
     uint64_t ptr_last, prev_page, next_page;
     uint64_t tag_first, tag_last;
     uint64_t tag_byte_first, tag_byte_last;
-    uint32_t total, tag_count, tag_size, n, c;
+    uint32_t tag_count, tag_size, n, c;
     uint8_t *mem1, *mem2;
     MMUAccessType type;
 
     bit55 = extract64(ptr, 55, 1);
+    *fault = ptr;
 
     /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
     if (unlikely(!tbi_check(desc, bit55))) {
-        return ptr;
+        return -1;
     }
 
     ptr_tag = allocation_tag_from_addr(ptr);
 
     if (tcma_check(desc, bit55, ptr_tag)) {
-        goto done;
+        return 1;
     }
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    total = FIELD_EX32(desc, MTEDESC, TSIZE);
 
     /* Find the addr of the end of the access */
     ptr_last = ptr + total - 1;
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
     mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, total,
                               MMU_DATA_LOAD, tag_size, ra);
     if (!mem1) {
-        goto done;
+        return 1;
     }
     /* Perform all of the comparisons. */
     n = checkN(mem1, ptr & TAG_GRANULE, ptr_tag, tag_count);
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
         }
         if (n == c) {
             if (!mem2) {
-                goto done;
+                return 1;
             }
             n += checkN(mem2, 0, ptr_tag, tag_count - c);
         }
     }
 
+    if (likely(n == tag_count)) {
+        return 1;
+    }
+
     /*
      * If we failed, we know which granule. For the first granule, the
      * failure address is @ptr, the first byte accessed. Otherwise the
      * failure address is the first byte of the nth granule.
      */
-    if (unlikely(n < tag_count)) {
-        uint64_t fault = (n == 0 ? ptr : tag_first + n * TAG_GRANULE);
-        mte_check_fail(env, desc, fault, ra);
+    if (n > 0) {
+        *fault = tag_first + n * TAG_GRANULE;
     }
+    return 0;
+}
 
- done:
+uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
+                    uint64_t ptr, uintptr_t ra)
+{
+    uint64_t fault;
+    uint32_t total = FIELD_EX32(desc, MTEDESC, TSIZE);
+    int ret = mte_probe_int(env, desc, ptr, ra, total, &fault);
+
+    if (unlikely(ret == 0)) {
+        mte_check_fail(env, desc, fault, ra);
+    } else if (ret < 0) {
+        return ptr;
+    }
     return useronly_clean_ptr(ptr);
 }
 
--
2.20.1
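
The helper's tri-state return value is what lets a faulting check and a
no-fault probe share one code path. A caller-side sketch (this wrapper
mirrors the new mte_checkN above; the name checked_access is
hypothetical):

uint64_t checked_access(CPUARMState *env, uint32_t desc,
                        uint64_t ptr, uintptr_t ra, uint32_t total)
{
    uint64_t fault;
    int ret = mte_probe_int(env, desc, ptr, ra, total, &fault);

    if (unlikely(ret == 0)) {
        mte_check_fail(env, desc, fault, ra); /* check failed at *fault */
    } else if (ret < 0) {
        return ptr;                           /* TBI disabled: ptr not dirty */
    }
    return useronly_clean_ptr(ptr);           /* success: strip tag bits */
}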

From: Richard Henderson <richard.henderson@linaro.org>

We were incorrectly assuming that only the first byte of an MTE access
is checked against the tags. But per the ARM, unaligned accesses are
pre-decomposed into single-byte accesses. So by the time we reach the
actual MTE check in the ARM pseudocode, all accesses are aligned.

We cannot tell a priori whether or not a given scalar access is aligned,
therefore we must at least check. Use mte_probe_int, which is already
set up for checking multiple granules.

Buglink: https://bugs.launchpad.net/bugs/1921948
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/mte_helper.c | 109 +++++++++++++---------------------------
 1 file changed, 35 insertions(+), 74 deletions(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
     }
 }
 
-/*
- * Perform an MTE checked access for a single logical or atomic access.
- */
-static bool mte_probe1_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
-                           uintptr_t ra, int bit55)
-{
-    int mem_tag, mmu_idx, ptr_tag, size;
-    MMUAccessType type;
-    uint8_t *mem;
-
-    ptr_tag = allocation_tag_from_addr(ptr);
-
-    if (tcma_check(desc, bit55, ptr_tag)) {
-        return true;
-    }
-
-    mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
-    type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    size = FIELD_EX32(desc, MTEDESC, ESIZE);
-
-    mem = allocation_tag_mem(env, mmu_idx, ptr, type, size,
-                             MMU_DATA_LOAD, 1, ra);
-    if (!mem) {
-        return true;
-    }
-
-    mem_tag = load_tag1(ptr, mem);
-    return ptr_tag == mem_tag;
-}
-
-/*
- * No-fault version of mte_check1, to be used by SVE for MemSingleNF.
- * Returns false if the access is Checked and the check failed. This
- * is only intended to probe the tag -- the validity of the page must
- * be checked beforehand.
- */
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
-{
-    int bit55 = extract64(ptr, 55, 1);
-
-    /* If TBI is disabled, the access is unchecked. */
-    if (unlikely(!tbi_check(desc, bit55))) {
-        return true;
-    }
-
-    return mte_probe1_int(env, desc, ptr, 0, bit55);
-}
-
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra)
-{
-    int bit55 = extract64(ptr, 55, 1);
-
-    /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
-    if (unlikely(!tbi_check(desc, bit55))) {
-        return ptr;
-    }
-
-    if (unlikely(!mte_probe1_int(env, desc, ptr, ra, bit55))) {
-        mte_check_fail(env, desc, ptr, ra);
-    }
-
-    return useronly_clean_ptr(ptr);
-}
-
-uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
-{
-    return mte_check1(env, desc, ptr, GETPC());
-}
-
-/*
- * Perform an MTE checked access for multiple logical accesses.
- */
-
 /**
  * checkN:
  * @tag: tag memory to test
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_checkN)(CPUARMState *env, uint32_t desc, uint64_t ptr)
     return mte_checkN(env, desc, ptr, GETPC());
 }
 
+uint64_t mte_check1(CPUARMState *env, uint32_t desc,
+                    uint64_t ptr, uintptr_t ra)
+{
+    uint64_t fault;
+    uint32_t total = FIELD_EX32(desc, MTEDESC, ESIZE);
+    int ret = mte_probe_int(env, desc, ptr, ra, total, &fault);
+
+    if (unlikely(ret == 0)) {
+        mte_check_fail(env, desc, fault, ra);
+    } else if (ret < 0) {
+        return ptr;
+    }
+    return useronly_clean_ptr(ptr);
+}
+
+uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
+{
+    return mte_check1(env, desc, ptr, GETPC());
+}
+
+/*
+ * No-fault version of mte_check1, to be used by SVE for MemSingleNF.
+ * Returns false if the access is Checked and the check failed. This
+ * is only intended to probe the tag -- the validity of the page must
+ * be checked beforehand.
+ */
+bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
+{
+    uint64_t fault;
+    uint32_t total = FIELD_EX32(desc, MTEDESC, ESIZE);
+    int ret = mte_probe_int(env, desc, ptr, 0, total, &fault);
+
+    return ret != 0;
+}
+
 /*
  * Perform an MTE checked access for DC_ZVA.
  */
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Buglink: https://bugs.launchpad.net/bugs/1921948
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/tcg/aarch64/mte-5.c         | 44 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target |  2 +-
 2 files changed, 45 insertions(+), 1 deletion(-)
 create mode 100644 tests/tcg/aarch64/mte-5.c

diff --git a/tests/tcg/aarch64/mte-5.c b/tests/tcg/aarch64/mte-5.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/mte-5.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Memory tagging, faulting unaligned access.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTESERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    void *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    /* Store two different tags in sequential granules. */
+    asm("stg %0, [%0]" : : "r"(p1));
+    asm("stg %0, [%0]" : : "r"(p2 + 16));
+
+    /* Perform an unaligned load crossing the granules. */
+    asm volatile("ldr %0, [%1]" : "=r"(p0) : "r"(p1 + 12));
+    abort();
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ AARCH64_TESTS += bti-2
 
 # MTE Tests
 ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-6
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6
 mte-%: CFLAGS += -march=armv8.5-a+memtag
 endif
 
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

After recent changes, mte_checkN does not use ESIZE,
and mte_check1 never used TSIZE. We can combine the
two into a single field: SIZEM1.

Choose to pass size - 1 because size == 0 is never used,
our immediate need in mte_probe_int is for the address
of the last byte (ptr + size - 1), and since almost all
operations are powers of 2, this makes the immediate
constant one bit smaller.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h     |  4 ++--
 target/arm/mte_helper.c    | 18 ++++++++----------
 target/arm/translate-a64.c |  5 ++---
 target/arm/translate-sve.c |  5 ++---
 4 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@
 #define TARGET_ARM_INTERNALS_H
 
 #include "hw/registerfields.h"
+#include "tcg/tcg-gvec-desc.h"
 #include "syndrome.h"
 
 /* register banks for CPU modes */
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, MIDX, 0, 4)
 FIELD(MTEDESC, TBI, 4, 2)
 FIELD(MTEDESC, TCMA, 6, 2)
 FIELD(MTEDESC, WRITE, 8, 1)
-FIELD(MTEDESC, ESIZE, 9, 5)
-FIELD(MTEDESC, TSIZE, 14, 10)  /* mte_checkN only */
+FIELD(MTEDESC, SIZEM1, 9, SIMD_DATA_BITS - 9)  /* size - 1 */
 
 bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr);
 uint64_t mte_check1(CPUARMState *env, uint32_t desc,
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static int checkN(uint8_t *mem, int odd, int cmp, int count)
  * Return positive on success with tbi enabled.
  */
 static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
-                         uintptr_t ra, uint32_t total, uint64_t *fault)
+                         uintptr_t ra, uint64_t *fault)
 {
     int mmu_idx, ptr_tag, bit55;
     uint64_t ptr_last, prev_page, next_page;
     uint64_t tag_first, tag_last;
     uint64_t tag_byte_first, tag_byte_last;
-    uint32_t tag_count, tag_size, n, c;
+    uint32_t sizem1, tag_count, tag_size, n, c;
     uint8_t *mem1, *mem2;
     MMUAccessType type;
 
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
+    sizem1 = FIELD_EX32(desc, MTEDESC, SIZEM1);
 
     /* Find the addr of the end of the access */
-    ptr_last = ptr + total - 1;
+    ptr_last = ptr + sizem1;
 
     /* Round the bounds to the tag granule, and compute the number of tags. */
     tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
     if (likely(tag_last - prev_page <= TARGET_PAGE_SIZE)) {
         /* Memory access stays on one page. */
         tag_size = ((tag_byte_last - tag_byte_first) / (2 * TAG_GRANULE)) + 1;
-        mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, total,
+        mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
                                   MMU_DATA_LOAD, tag_size, ra);
         if (!mem1) {
             return 1;
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
                     uint64_t ptr, uintptr_t ra)
 {
     uint64_t fault;
-    uint32_t total = FIELD_EX32(desc, MTEDESC, TSIZE);
-    int ret = mte_probe_int(env, desc, ptr, ra, total, &fault);
+    int ret = mte_probe_int(env, desc, ptr, ra, &fault);
 
     if (unlikely(ret == 0)) {
         mte_check_fail(env, desc, fault, ra);
@@ -XXX,XX +XXX,XX @@ uint64_t mte_check1(CPUARMState *env, uint32_t desc,
                     uint64_t ptr, uintptr_t ra)
 {
     uint64_t fault;
-    uint32_t total = FIELD_EX32(desc, MTEDESC, ESIZE);
-    int ret = mte_probe_int(env, desc, ptr, ra, total, &fault);
+    int ret = mte_probe_int(env, desc, ptr, ra, &fault);
 
     if (unlikely(ret == 0)) {
         mte_check_fail(env, desc, fault, ra);
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
 bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
 {
     uint64_t fault;
-    uint32_t total = FIELD_EX32(desc, MTEDESC, ESIZE);
-    int ret = mte_probe_int(env, desc, ptr, 0, total, &fault);
+    int ret = mte_probe_int(env, desc, ptr, 0, &fault);
 
     return ret != 0;
 }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
     desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
     desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
     desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-    desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << log2_size);
+    desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << log2_size) - 1);
     tcg_desc = tcg_const_i32(desc);
 
     ret = new_tmp_a64(s);
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
     desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
     desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
     desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-    desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << log2_esize);
-    desc = FIELD_DP32(desc, MTEDESC, TSIZE, total_size);
+    desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
     tcg_desc = tcg_const_i32(desc);
 
     ret = new_tmp_a64(s);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << msz);
-        desc = FIELD_DP32(desc, MTEDESC, TSIZE, mte_n << msz);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (mte_n << msz) - 1);
         desc <<= SVE_MTEDESC_SHIFT;
     } else {
         addr = clean_data_tbi(s, addr);
@@ -XXX,XX +XXX,XX @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << msz);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << msz) - 1);
         desc <<= SVE_MTEDESC_SHIFT;
     }
     desc = simd_desc(vsz, vsz, desc | scale);
--
2.20.1
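
A small illustration of the "one bit smaller" point (not part of the
patch): for the common power-of-two sizes, size - 1 needs one fewer bit
than the size itself, e.g.

    size = 32 (0b100000, 6 bits)  ->  SIZEM1 = 31 (0b11111, 5 bits)

so the encode/decode pair, taking the lines from the patch itself, is:

desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);  /* encode */
sizem1 = FIELD_EX32(desc, MTEDESC, SIZEM1);                /* decode */
ptr_last = ptr + sizem1;              /* last byte of the access */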
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
3
callback. We'll need this for subpage_accepts().
4
2
5
We could take the approach we used with the read and write
3
The mte_check1 and mte_checkN functions are now identical.
6
callbacks and add new a new _with_attrs version, but since there
4
Drop mte_check1 and rename mte_checkN to mte_check.
7
are so few implementations of the accepts hook we just change
8
them all.
9
5
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210416183106.1516563-7-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
14
---
10
---
15
include/exec/memory.h | 3 ++-
11
target/arm/helper-a64.h | 3 +--
16
exec.c | 9 ++++++---
12
target/arm/internals.h | 5 +----
17
hw/hppa/dino.c | 3 ++-
13
target/arm/mte_helper.c | 26 +++-----------------------
18
hw/nvram/fw_cfg.c | 12 ++++++++----
14
target/arm/sve_helper.c | 14 +++++++-------
19
hw/scsi/esp.c | 3 ++-
15
target/arm/translate-a64.c | 4 ++--
20
hw/xen/xen_pt_msi.c | 3 ++-
16
5 files changed, 14 insertions(+), 38 deletions(-)
21
memory.c | 5 +++--
22
7 files changed, 25 insertions(+), 13 deletions(-)
23
17
24
diff --git a/include/exec/memory.h b/include/exec/memory.h
18
diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
25
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory.h
20
--- a/target/arm/helper-a64.h
27
+++ b/include/exec/memory.h
21
+++ b/target/arm/helper-a64.h
28
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
22
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_3(autdb, TCG_CALL_NO_WG, i64, env, i64, i64)
29
* as a machine check exception).
23
DEF_HELPER_FLAGS_2(xpaci, TCG_CALL_NO_RWG_SE, i64, env, i64)
30
*/
24
DEF_HELPER_FLAGS_2(xpacd, TCG_CALL_NO_RWG_SE, i64, env, i64)
31
bool (*accepts)(void *opaque, hwaddr addr,
25
32
- unsigned size, bool is_write);
26
-DEF_HELPER_FLAGS_3(mte_check1, TCG_CALL_NO_WG, i64, env, i32, i64)
33
+ unsigned size, bool is_write,
27
-DEF_HELPER_FLAGS_3(mte_checkN, TCG_CALL_NO_WG, i64, env, i32, i64)
34
+ MemTxAttrs attrs);
28
+DEF_HELPER_FLAGS_3(mte_check, TCG_CALL_NO_WG, i64, env, i32, i64)
35
} valid;
29
DEF_HELPER_FLAGS_3(mte_check_zva, TCG_CALL_NO_WG, i64, env, i32, i64)
36
/* Internal implementation constraints: */
30
DEF_HELPER_FLAGS_3(irg, TCG_CALL_NO_RWG, i64, env, i64, i64)
37
struct {
31
DEF_HELPER_FLAGS_4(addsubg, TCG_CALL_NO_RWG_SE, i64, env, i64, s32, i32)
38
diff --git a/exec.c b/exec.c
32
diff --git a/target/arm/internals.h b/target/arm/internals.h
39
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
34
--- a/target/arm/internals.h
41
+++ b/exec.c
35
+++ b/target/arm/internals.h
42
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
36
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, WRITE, 8, 1)
37
FIELD(MTEDESC, SIZEM1, 9, SIMD_DATA_BITS - 9) /* size - 1 */
38
39
bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr);
40
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
41
- uint64_t ptr, uintptr_t ra);
42
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
43
- uint64_t ptr, uintptr_t ra);
44
+uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
45
46
static inline int allocation_tag_from_addr(uint64_t ptr)
47
{
48
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/target/arm/mte_helper.c
51
+++ b/target/arm/mte_helper.c
52
@@ -XXX,XX +XXX,XX @@ static int mte_probe_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
53
return 0;
43
}
54
}
44
55
45
static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
56
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
46
- unsigned size, bool is_write)
57
- uint64_t ptr, uintptr_t ra)
47
+ unsigned size, bool is_write,
58
+uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra)
48
+ MemTxAttrs attrs)
49
{
59
{
50
return is_write;
60
uint64_t fault;
61
int ret = mte_probe_int(env, desc, ptr, ra, &fault);
62
@@ -XXX,XX +XXX,XX @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
63
return useronly_clean_ptr(ptr);
51
}
64
}
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
65
66
-uint64_t HELPER(mte_checkN)(CPUARMState *env, uint32_t desc, uint64_t ptr)
67
+uint64_t HELPER(mte_check)(CPUARMState *env, uint32_t desc, uint64_t ptr)
68
{
69
- return mte_checkN(env, desc, ptr, GETPC());
70
-}
71
-
72
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
73
- uint64_t ptr, uintptr_t ra)
74
-{
75
- uint64_t fault;
76
- int ret = mte_probe_int(env, desc, ptr, ra, &fault);
77
-
78
- if (unlikely(ret == 0)) {
79
- mte_check_fail(env, desc, fault, ra);
80
- } else if (ret < 0) {
81
- return ptr;
82
- }
83
- return useronly_clean_ptr(ptr);
84
-}
85
-
86
-uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
87
-{
88
- return mte_check1(env, desc, ptr, GETPC());
89
+ return mte_check(env, desc, ptr, GETPC());
53
}
90
}
54
91
55
static bool subpage_accepts(void *opaque, hwaddr addr,
92
/*
56
- unsigned len, bool is_write)
93
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
57
+ unsigned len, bool is_write,
94
index XXXXXXX..XXXXXXX 100644
58
+ MemTxAttrs attrs)
95
--- a/target/arm/sve_helper.c
96
+++ b/target/arm/sve_helper.c
97
@@ -XXX,XX +XXX,XX @@ static void sve_cont_ldst_mte_check1(SVEContLdSt *info, CPUARMState *env,
98
uintptr_t ra)
59
{
99
{
60
subpage_t *subpage = opaque;
100
sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
61
#if defined(DEBUG_SUBPAGE)
101
- mtedesc, ra, mte_check1);
62
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
102
+ mtedesc, ra, mte_check);
63
}
103
}
64
104
65
static bool readonly_mem_accepts(void *opaque, hwaddr addr,
105
static void sve_cont_ldst_mte_checkN(SVEContLdSt *info, CPUARMState *env,
66
- unsigned size, bool is_write)
106
@@ -XXX,XX +XXX,XX @@ static void sve_cont_ldst_mte_checkN(SVEContLdSt *info, CPUARMState *env,
67
+ unsigned size, bool is_write,
107
uintptr_t ra)
68
+ MemTxAttrs attrs)
69
{
108
{
70
return is_write;
109
sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
110
- mtedesc, ra, mte_checkN);
111
+ mtedesc, ra, mte_check);
71
}
112
}
72
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
113
114
115
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
116
if (fault == FAULT_FIRST) {
117
/* Trapping mte check for the first-fault element. */
118
if (mtedesc) {
119
- mte_check1(env, mtedesc, addr + mem_off, retaddr);
120
+ mte_check(env, mtedesc, addr + mem_off, retaddr);
121
}
122
123
/*
124
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
125
info.attrs, BP_MEM_READ, retaddr);
126
}
127
if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
128
- mte_check1(env, mtedesc, addr, retaddr);
129
+ mte_check(env, mtedesc, addr, retaddr);
130
}
131
host_fn(&scratch, reg_off, info.host);
132
} else {
133
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
134
BP_MEM_READ, retaddr);
135
}
136
if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
137
- mte_check1(env, mtedesc, addr, retaddr);
138
+ mte_check(env, mtedesc, addr, retaddr);
139
}
140
tlb_fn(env, &scratch, reg_off, addr, retaddr);
141
}
142
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
143
*/
144
addr = base + (off_fn(vm, reg_off) << scale);
145
if (mtedesc) {
146
- mte_check1(env, mtedesc, addr, retaddr);
147
+ mte_check(env, mtedesc, addr, retaddr);
148
}
149
tlb_fn(env, vd, reg_off, addr, retaddr);
150
151
@@ -XXX,XX +XXX,XX @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
152
}
153
154
if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
155
- mte_check1(env, mtedesc, addr, retaddr);
156
+ mte_check(env, mtedesc, addr, retaddr);
157
}
158
}
159
i += 1;
160
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
73
index XXXXXXX..XXXXXXX 100644
161
index XXXXXXX..XXXXXXX 100644
74
--- a/hw/hppa/dino.c
162
--- a/target/arm/translate-a64.c
75
+++ b/hw/hppa/dino.c
163
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
 }
 
 static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
-                                unsigned size, bool is_write)
+                                unsigned size, bool is_write,
+                                MemTxAttrs attrs)
 {
     switch (addr) {
     case DINO_IAR0:
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/nvram/fw_cfg.c
+++ b/hw/nvram/fw_cfg.c
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
 }
 
 static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
                          (size == 8 && addr == 0));
 }
 
 static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
-                                  unsigned size, bool is_write)
+                                  unsigned size, bool is_write,
+                                  MemTxAttrs attrs)
 {
     return addr == 0;
 }
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
 }
 
 static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
-                                 unsigned size, bool is_write)
+                                 unsigned size, bool is_write,
+                                 MemTxAttrs attrs)
 {
     return is_write && size == 2;
 }
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
 }
 
 static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
-                              unsigned size, bool is_write)
+                              unsigned size, bool is_write,
+                              MemTxAttrs attrs)
 {
     return (size == 1) || (is_write && size == 2);
 }
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/scsi/esp.c
+++ b/hw/scsi/esp.c
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
 }
 
 static bool esp_mem_accepts(void *opaque, hwaddr addr,
-                            unsigned size, bool is_write)
+                            unsigned size, bool is_write,
+                            MemTxAttrs attrs)
 {
     return (size == 1) || (is_write && size == 4);
 }
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/xen/xen_pt_msi.c
+++ b/hw/xen/xen_pt_msi.c
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
 }
 
 static bool pci_msix_accepts(void *opaque, hwaddr addr,
-                             unsigned size, bool is_write)
+                             unsigned size, bool is_write,
+                             MemTxAttrs attrs)
 {
     return !(addr & (size - 1));
 }
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
 }
 
 static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
-                                   unsigned size, bool is_write)
+                                   unsigned size, bool is_write,
+                                   MemTxAttrs attrs)
 {
     return false;
 }
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
     access_size = MAX(MIN(size, access_size_max), access_size_min);
     for (i = 0; i < size; i += access_size) {
         if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
-                                    is_write)) {
+                                    is_write, attrs)) {
             return false;
         }
     }
--
2.17.1
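
The payoff from plumbing MemTxAttrs all the way down to the .valid.accepts
hooks is that a device model can make validity depend on who is issuing the
transaction, not just on its size and direction. As a sketch only (this
device and its policy are hypothetical, not part of the series), a model
could refuse non-secure accesses:

    /* Hypothetical device: only secure 4-byte accesses are accepted. */
    static bool secure_only_mem_accepts(void *opaque, hwaddr addr,
                                        unsigned size, bool is_write,
                                        MemTxAttrs attrs)
    {
        /* attrs.secure is set for accesses made from Secure state */
        return attrs.secure && size == 4;
    }

The conversions above only add the argument; every existing hook ignores
it, so behaviour is unchanged.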
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
     tcg_desc = tcg_const_i32(desc);
 
     ret = new_tmp_a64(s);
-    gen_helper_mte_check1(ret, cpu_env, tcg_desc, addr);
+    gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
     tcg_temp_free_i32(tcg_desc);
 
     return ret;
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
     tcg_desc = tcg_const_i32(desc);
 
     ret = new_tmp_a64(s);
-    gen_helper_mte_checkN(ret, cpu_env, tcg_desc, addr);
+    gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
     tcg_temp_free_i32(tcg_desc);
 
     return ret;
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

For consistency with the mte_check1 + mte_checkN merge
to mte_check, rename the probe function as well.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h  | 2 +-
 target/arm/mte_helper.c | 6 +++---
 target/arm/sve_helper.c | 6 +++---
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(MTEDESC, TCMA, 6, 2)
 FIELD(MTEDESC, WRITE, 8, 1)
 FIELD(MTEDESC, SIZEM1, 9, SIMD_DATA_BITS - 9)  /* size - 1 */
 
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr);
+bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
 uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra);
 
 static inline int allocation_tag_from_addr(uint64_t ptr)
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
  * exception for inaccessible pages, and resolves the virtual address
  * into the softmmu tlb.
  *
- * When RA == 0, this is for mte_probe1.  The page is expected to be
+ * When RA == 0, this is for mte_probe.  The page is expected to be
  * valid.  Indicate to probe_access_flags no-fault, then assert that
  * we received a valid page.
  */
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(mte_check)(CPUARMState *env, uint32_t desc, uint64_t ptr)
 }
 
 /*
- * No-fault version of mte_check1, to be used by SVE for MemSingleNF.
+ * No-fault version of mte_check, to be used by SVE for MemSingleNF.
  * Returns false if the access is Checked and the check failed.  This
  * is only intended to probe the tag -- the validity of the page must
  * be checked beforehand.
  */
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
+bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr)
 {
     uint64_t fault;
     int ret = mte_probe_int(env, desc, ptr, 0, &fault);
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
             /* Watchpoint hit, see below. */
             goto do_fault;
         }
-        if (mtedesc && !mte_probe1(env, mtedesc, addr + mem_off)) {
+        if (mtedesc && !mte_probe(env, mtedesc, addr + mem_off)) {
             goto do_fault;
         }
         /*
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
                           & BP_MEM_READ)) {
                 goto do_fault;
             }
-            if (mtedesc && !mte_probe1(env, mtedesc, addr + mem_off)) {
+            if (mtedesc && !mte_probe(env, mtedesc, addr + mem_off)) {
                 goto do_fault;
             }
             host_fn(vd, reg_off, host + mem_off);
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
         }
         if (mtedesc &&
             arm_tlb_mte_tagged(&info.attrs) &&
-            !mte_probe1(env, mtedesc, addr)) {
+            !mte_probe(env, mtedesc, addr)) {
             goto fault;
         }
 
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Now that mte_check1 and mte_checkN have been merged, we can
merge sve_cont_ldst_mte_check1 and sve_cont_ldst_mte_checkN.

This means that we can eliminate the function pointer into
sve_ldN_r and sve_stN_r, calling sve_cont_ldst_mte_check directly.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 84 +++++++++++++----------------------
 1 file changed, 26 insertions(+), 58 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
 #endif
 }
 
-typedef uint64_t mte_check_fn(CPUARMState *, uint32_t, uint64_t, uintptr_t);
-
-static inline QEMU_ALWAYS_INLINE
-void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
-                                 uint64_t *vg, target_ulong addr, int esize,
-                                 int msize, uint32_t mtedesc, uintptr_t ra,
-                                 mte_check_fn *check)
+static void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
+                                    uint64_t *vg, target_ulong addr, int esize,
+                                    int msize, uint32_t mtedesc, uintptr_t ra)
 {
     intptr_t mem_off, reg_off, reg_last;
 
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
         uint64_t pg = vg[reg_off >> 6];
         do {
             if ((pg >> (reg_off & 63)) & 1) {
-                check(env, mtedesc, addr, ra);
+                mte_check(env, mtedesc, addr, ra);
             }
             reg_off += esize;
             mem_off += msize;
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
             uint64_t pg = vg[reg_off >> 6];
             do {
                 if ((pg >> (reg_off & 63)) & 1) {
-                    check(env, mtedesc, addr, ra);
+                    mte_check(env, mtedesc, addr, ra);
                 }
                 reg_off += esize;
                 mem_off += msize;
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
     }
 }
 
-typedef void sve_cont_ldst_mte_check_fn(SVEContLdSt *info, CPUARMState *env,
-                                        uint64_t *vg, target_ulong addr,
-                                        int esize, int msize, uint32_t mtedesc,
-                                        uintptr_t ra);
-
-static void sve_cont_ldst_mte_check1(SVEContLdSt *info, CPUARMState *env,
-                                     uint64_t *vg, target_ulong addr,
-                                     int esize, int msize, uint32_t mtedesc,
-                                     uintptr_t ra)
-{
-    sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
-                                mtedesc, ra, mte_check);
-}
-
-static void sve_cont_ldst_mte_checkN(SVEContLdSt *info, CPUARMState *env,
-                                     uint64_t *vg, target_ulong addr,
-                                     int esize, int msize, uint32_t mtedesc,
-                                     uintptr_t ra)
-{
-    sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
-                                mtedesc, ra, mte_check);
-}
-
-
 /*
  * Common helper for all contiguous 1,2,3,4-register predicated stores.
  */
@@ -XXX,XX +XXX,XX @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
                const int esz, const int msz, const int N, uint32_t mtedesc,
                sve_ldst1_host_fn *host_fn,
-               sve_ldst1_tlb_fn *tlb_fn,
-               sve_cont_ldst_mte_check_fn *mte_check_fn)
+               sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = simd_data(desc);
     const intptr_t reg_max = simd_oprsz(desc);
@@ -XXX,XX +XXX,XX @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
      * Handle mte checks for all active elements.
      * Since TBI must be set for MTE, !mtedesc => !mte_active.
      */
-    if (mte_check_fn && mtedesc) {
-        mte_check_fn(&info, env, vg, addr, 1 << esz, N << msz,
-                     mtedesc, retaddr);
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, 1 << esz, N << msz,
+                                mtedesc, retaddr);
     }
 
     flags = info.page[0].flags | info.page[1].flags;
@@ -XXX,XX +XXX,XX @@ void sve_ldN_r_mte(CPUARMState *env, uint64_t *vg, target_ulong addr,
         mtedesc = 0;
     }
 
-    sve_ldN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn,
-              N == 1 ? sve_cont_ldst_mte_check1 : sve_cont_ldst_mte_checkN);
+    sve_ldN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn);
 }
 
 #define DO_LD1_1(NAME, ESZ) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_##NAME##_r)(CPUARMState *env, void *vg,        \
                             target_ulong addr, uint32_t desc)  \
 {                                                              \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, 1, 0,   \
-              sve_##NAME##_host, sve_##NAME##_tlb, NULL);      \
+              sve_##NAME##_host, sve_##NAME##_tlb);            \
 }                                                              \
 void HELPER(sve_##NAME##_r_mte)(CPUARMState *env, void *vg,    \
                                 target_ulong addr, uint32_t desc) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_##NAME##_le_r)(CPUARMState *env, void *vg,       \
                                target_ulong addr, uint32_t desc) \
 {                                                                \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, 0,      \
-              sve_##NAME##_le_host, sve_##NAME##_le_tlb, NULL);  \
+              sve_##NAME##_le_host, sve_##NAME##_le_tlb);        \
 }                                                                \
 void HELPER(sve_##NAME##_be_r)(CPUARMState *env, void *vg,       \
                                target_ulong addr, uint32_t desc) \
 {                                                                \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, 0,      \
-              sve_##NAME##_be_host, sve_##NAME##_be_tlb, NULL);  \
+              sve_##NAME##_be_host, sve_##NAME##_be_tlb);        \
 }                                                                \
 void HELPER(sve_##NAME##_le_r_mte)(CPUARMState *env, void *vg,   \
-                                 target_ulong addr, uint32_t desc) \
+                                   target_ulong addr, uint32_t desc) \
 {                                                                \
     sve_ldN_r_mte(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1,     \
                   sve_##NAME##_le_host, sve_##NAME##_le_tlb);    \
 }                                                                \
 void HELPER(sve_##NAME##_be_r_mte)(CPUARMState *env, void *vg,   \
-                                 target_ulong addr, uint32_t desc) \
+                                   target_ulong addr, uint32_t desc) \
 {                                                                \
     sve_ldN_r_mte(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1,     \
                   sve_##NAME##_be_host, sve_##NAME##_be_tlb);    \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ld##N##bb_r)(CPUARMState *env, void *vg,         \
                              target_ulong addr, uint32_t desc)   \
 {                                                                \
     sve_ldN_r(env, vg, addr, desc, GETPC(), MO_8, MO_8, N, 0,    \
-              sve_ld1bb_host, sve_ld1bb_tlb, NULL);              \
+              sve_ld1bb_host, sve_ld1bb_tlb);                    \
 }                                                                \
 void HELPER(sve_ld##N##bb_r_mte)(CPUARMState *env, void *vg,     \
                                  target_ulong addr, uint32_t desc) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_ld##N##SUFF##_le_r)(CPUARMState *env, void *vg,       \
                                     target_ulong addr, uint32_t desc) \
 {                                                                     \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, 0,           \
-              sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb, NULL); \
+              sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb);       \
 }                                                                     \
 void HELPER(sve_ld##N##SUFF##_be_r)(CPUARMState *env, void *vg,       \
                                     target_ulong addr, uint32_t desc) \
 {                                                                     \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, 0,           \
-              sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb, NULL); \
+              sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb);       \
 }                                                                     \
 void HELPER(sve_ld##N##SUFF##_le_r_mte)(CPUARMState *env, void *vg,   \
                                         target_ulong addr, uint32_t desc) \
@@ -XXX,XX +XXX,XX @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
                const int esz, const int msz, const int N, uint32_t mtedesc,
                sve_ldst1_host_fn *host_fn,
-               sve_ldst1_tlb_fn *tlb_fn,
-               sve_cont_ldst_mte_check_fn *mte_check_fn)
+               sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = simd_data(desc);
     const intptr_t reg_max = simd_oprsz(desc);
@@ -XXX,XX +XXX,XX @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr,
      * Handle mte checks for all active elements.
      * Since TBI must be set for MTE, !mtedesc => !mte_active.
      */
-    if (mte_check_fn && mtedesc) {
-        mte_check_fn(&info, env, vg, addr, 1 << esz, N << msz,
-                     mtedesc, retaddr);
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, 1 << esz, N << msz,
+                                mtedesc, retaddr);
     }
 
     flags = info.page[0].flags | info.page[1].flags;
@@ -XXX,XX +XXX,XX @@ void sve_stN_r_mte(CPUARMState *env, uint64_t *vg, target_ulong addr,
         mtedesc = 0;
     }
 
-    sve_stN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn,
-              N == 1 ? sve_cont_ldst_mte_check1 : sve_cont_ldst_mte_checkN);
+    sve_stN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn);
 }
 
 #define DO_STN_1(N, NAME, ESZ) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_st##N##NAME##_r)(CPUARMState *env, void *vg,       \
                                  target_ulong addr, uint32_t desc) \
 {                                                                  \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, N, 0,       \
-              sve_st1##NAME##_host, sve_st1##NAME##_tlb, NULL);    \
+              sve_st1##NAME##_host, sve_st1##NAME##_tlb);          \
 }                                                                  \
 void HELPER(sve_st##N##NAME##_r_mte)(CPUARMState *env, void *vg,   \
                                      target_ulong addr, uint32_t desc) \
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_st##N##NAME##_le_r)(CPUARMState *env, void *vg,       \
                                     target_ulong addr, uint32_t desc) \
 {                                                                     \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, 0,           \
-              sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb, NULL); \
+              sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb);       \
 }                                                                     \
 void HELPER(sve_st##N##NAME##_be_r)(CPUARMState *env, void *vg,       \
                                     target_ulong addr, uint32_t desc) \
 {                                                                     \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, 0,           \
-              sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb, NULL); \
+              sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb);       \
 }                                                                     \
 void HELPER(sve_st##N##NAME##_le_r_mte)(CPUARMState *env, void *vg,   \
                                         target_ulong addr, uint32_t desc) \
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

The log2_esize parameter is not used except trivially.
Drop the parameter and the deferral to gen_mte_check1.

This fixes a bug in that the parameters as documented
in the header file were the reverse of those in the
implementation, which meant that translate-sve.c was
passing the parameters in the wrong order.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210416183106.1516563-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
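
The bug class here is easy to reproduce. A minimal sketch (illustrative
only, not QEMU code) of a prototype and definition that swap two
same-typed parameters, which the compiler accepts without complaint:

    /* header: callers are written against these names... */
    int scaled_size(int log2_esize, int total_size);

    /* implementation: ...but the definition reverses them */
    int scaled_size(int total_size, int log2_esize)
    {
        return total_size << log2_esize;
    }

Because both parameters are int, prototype and definition still agree as
far as the type system is concerned, so every caller that trusted the
header silently passes its arguments reversed.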
 target/arm/translate-a64.h |  2 +-
 target/arm/translate-a64.c | 15 +++++++--------
 target/arm/translate-sve.c |  4 ++--
 3 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
 TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
                         bool tag_checked, int log2_size);
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
-                        bool tag_checked, int count, int log2_esize);
+                        bool tag_checked, int size);
 
 /* We should have at some point before trying to access an FP register
  * done the necessary access check, so assert that
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
  * For MTE, check multiple logical sequential accesses.
  */
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
-                        bool tag_checked, int log2_esize, int total_size)
+                        bool tag_checked, int size)
 {
-    if (tag_checked && s->mte_active[0] && total_size != (1 << log2_esize)) {
+    if (tag_checked && s->mte_active[0]) {
         TCGv_i32 tcg_desc;
         TCGv_i64 ret;
         int desc = 0;
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, size - 1);
         tcg_desc = tcg_const_i32(desc);
 
         ret = new_tmp_a64(s);
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
 
         return ret;
     }
-    return gen_mte_check1(s, addr, is_write, tag_checked, log2_esize);
+    return clean_data_tbi(s, addr);
 }
 
 typedef struct DisasCompare64 {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
     }
 
     clean_addr = gen_mte_checkN(s, dirty_addr, !is_load,
-                                (wback || rn != 31) && !set_tag,
-                                size, 2 << size);
+                                (wback || rn != 31) && !set_tag, 2 << size);
 
     if (is_vector) {
         if (is_load) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
      * promote consecutive little-endian elements below.
      */
     clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
-                                size, total);
+                                total);
 
     /*
      * Consecutive little-endian elements from a single register
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
 
     clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
-                                scale, total);
+                                total);
 
     tcg_ebytes = tcg_const_i64(1 << scale);
     for (xs = 0; xs < selem; xs++) {
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     dirty_addr = tcg_temp_new_i64();
     tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
-    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
+    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
     tcg_temp_free_i64(dirty_addr);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     dirty_addr = tcg_temp_new_i64();
     tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
-    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
+    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
     tcg_temp_free_i64(dirty_addr);
 
     /* Note that unpredicated load/store of vector/predicate registers
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

The encoding of size = 2 and size = 3 had the incorrect decode
for align, overlapping the stride field.  This error was hidden
by what should have been unnecessary masking in translate.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/neon-ls.decode       | 4 ++--
 target/arm/translate-neon.c.inc | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/arm/neon-ls.decode b/target/arm/neon-ls.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/neon-ls.decode
+++ b/target/arm/neon-ls.decode
@@ -XXX,XX +XXX,XX @@ VLD_all_lanes   1111 0100 1 . 1 0 rn:4 .... 11 n:2 size:2 t:1 a:1 rm:4 \
 
 VLDST_single    1111 0100 1 . l:1 0 rn:4 .... 00 n:2 reg_idx:3 align:1 rm:4 \
                 vd=%vd_dp size=0 stride=1
-VLDST_single    1111 0100 1 . l:1 0 rn:4 .... 01 n:2 reg_idx:2 align:2 rm:4 \
+VLDST_single    1111 0100 1 . l:1 0 rn:4 .... 01 n:2 reg_idx:2 . align:1 rm:4 \
                 vd=%vd_dp size=1 stride=%imm1_5_p1
-VLDST_single    1111 0100 1 . l:1 0 rn:4 .... 10 n:2 reg_idx:1 align:3 rm:4 \
+VLDST_single    1111 0100 1 . l:1 0 rn:4 .... 10 n:2 reg_idx:1 . align:2 rm:4 \
                 vd=%vd_dp size=2 stride=%imm1_6_p1
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
     switch (nregs) {
     case 1:
         if (((a->align & (1 << a->size)) != 0) ||
-            (a->size == 2 && ((a->align & 3) == 1 || (a->align & 3) == 2))) {
+            (a->size == 2 && (a->align == 1 || a->align == 2))) {
             return false;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
         }
         break;
     case 4:
-        if ((a->size == 2) && ((a->align & 3) == 3)) {
+        if (a->size == 2 && a->align == 3) {
             return false;
         }
         break;
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

We're about to rearrange the macro expansion surrounding tbflags,
and this field name will be expanded using the bit definition of
the same name, resulting in a token pasting error.

So SCTLR_B -> SCTLR__B in the 3 uses, and document it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
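
To see the hazard, here is a minimal sketch (illustrative only; the real
FIELD_DP32 in include/hw/registerfields.h is more elaborate) of how a
field name that is itself a macro breaks once it passes through a wrapper:

    #define SCTLR_B   (1U << 7)   /* bit definition elsewhere in cpu.h */

    #define FIELD_DP32(x, reg, fld, val) \
        ((x) | ((val) << R_##reg##_##fld##_SHIFT))
    #define DP_TBFLAG_A32(dst, which, val) \
        (dst = FIELD_DP32(dst, TBFLAG_A32, which, val))

    /*
     * In DP_TBFLAG_A32, 'which' is not an operand of ## and so is
     * macro-expanded before substitution: SCTLR_B becomes (1U << 7),
     * and the inner paste then tries to form R_TBFLAG_A32_(1U << 7)_SHIFT,
     * which is a token pasting error.  SCTLR__B names no macro, so it
     * reaches the paste intact.
     */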
 target/arm/cpu.h       | 2 +-
 target/arm/helper.c    | 2 +-
 target/arm/translate.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, VECSTRIDE, 12, 2)     /* Not cached. */
  */
 FIELD(TBFLAG_A32, XSCALE_CPAR, 12, 2)
 FIELD(TBFLAG_A32, VFPEN, 14, 1)         /* Partially cached, minus FPEXC. */
-FIELD(TBFLAG_A32, SCTLR_B, 15, 1)
+FIELD(TBFLAG_A32, SCTLR__B, 15, 1)      /* Cannot overlap with SCTLR_B */
 FIELD(TBFLAG_A32, HSTR_ACTIVE, 16, 1)
 /*
  * Indicates whether cp register reads and writes by guest code should access
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_common_32(CPUARMState *env, int fp_el,
     bool sctlr_b = arm_sctlr_b(env);
 
     if (sctlr_b) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, 1);
+        flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR__B, 1);
     }
     if (arm_cpu_data_is_big_endian_a32(env, sctlr_b)) {
         flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
             FIELD_EX32(tb_flags, TBFLAG_ANY, BE_DATA) ? MO_BE : MO_LE;
         dc->debug_target_el =
             FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
-        dc->sctlr_b = FIELD_EX32(tb_flags, TBFLAG_A32, SCTLR_B);
+        dc->sctlr_b = FIELD_EX32(tb_flags, TBFLAG_A32, SCTLR__B);
         dc->hstr_active = FIELD_EX32(tb_flags, TBFLAG_A32, HSTR_ACTIVE);
         dc->ns = FIELD_EX32(tb_flags, TBFLAG_A32, NS);
         dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

We're about to rearrange the macro expansion surrounding tbflags,
and this field name will be expanded using the bit definition of
the same name, resulting in a token pasting error.

So PSTATE_SS -> PSTATE__SS in the uses, and document it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 2 +-
 target/arm/helper.c        | 4 ++--
 target/arm/translate-a64.c | 2 +-
 target/arm/translate.c     | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
  */
 FIELD(TBFLAG_ANY, AARCH64_STATE, 31, 1)
 FIELD(TBFLAG_ANY, SS_ACTIVE, 30, 1)
-FIELD(TBFLAG_ANY, PSTATE_SS, 29, 1)     /* Not cached. */
+FIELD(TBFLAG_ANY, PSTATE__SS, 29, 1)    /* Not cached. */
 FIELD(TBFLAG_ANY, BE_DATA, 28, 1)
 FIELD(TBFLAG_ANY, MMUIDX, 24, 4)
 /* Target EL if we take a floating-point-disabled exception */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
      *   0      x    Inactive (the TB flag for SS is always 0)
      *   1      0    Active-pending
      *   1      1    Active-not-pending
-     * SS_ACTIVE is set in hflags; PSTATE_SS is computed every TB.
+     * SS_ACTIVE is set in hflags; PSTATE__SS is computed every TB.
      */
     if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE) &&
         (env->pstate & PSTATE_SS)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
+        flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE__SS, 1);
     }
 
     *pflags = flags;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
      *   end the TB
      */
     dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
-    dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE_SS);
+    dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE__SS);
     dc->is_ldex = false;
     dc->debug_target_el = FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
 
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
      *   end the TB
      */
     dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
-    dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE_SS);
+    dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE__SS);
     dc->is_ldex = false;
 
     dc->page_start = dc->base.pc_first & TARGET_PAGE_MASK;
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Depending on the host abi, float16, aka uint16_t, values are
passed and returned either zero-extended in the host register
or with garbage at the top of the host register.

The tcg code generator has so far been assuming garbage, as that
matches the x86 abi, but this is incorrect for other host abis.
Further, target/arm has so far been assuming zero-extended results,
so that it may store the 16-bit value into a 32-bit slot with the
high 16 bits already clear.

Rectify both problems by mapping "f16" in the helper definition
to uint32_t instead of (a typedef for) uint16_t.  This forces
the host compiler to assume garbage in the upper 16 bits on input
and to zero-extend the result on output.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
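
At the C level the issue reduces to what the callee must guarantee about
the upper bits of a narrow value.  A minimal sketch (illustrative only,
not QEMU code) of the two contracts:

    #include <stdint.h>

    /* On some host ABIs (e.g. x86-64) the caller may not rely on bits
     * 16..31 of the return register here; they can hold garbage. */
    uint16_t ret_u16(uint16_t x) { return x; }

    /* Declaring the function as returning uint32_t obliges the compiled
     * callee to produce all 32 bits; the explicit truncation guarantees
     * the top half is zero for every caller. */
    uint32_t ret_u32(uint32_t x) { return (uint16_t)x; }

The same reasoning applies on the input side: a uint32_t parameter tells
the callee it cannot assume anything about bits 16..31 beyond what it
masks itself.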
 include/exec/helper-head.h |  2 +-
 target/arm/helper-a64.c    | 35 +++++++++--------
 target/arm/helper.c        | 80 +++++++++++++++++++-------------------
 3 files changed, 59 insertions(+), 58 deletions(-)

diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/helper-head.h
+++ b/include/exec/helper-head.h
@@ -XXX,XX +XXX,XX @@
 #define dh_ctype_int int
 #define dh_ctype_i64 uint64_t
 #define dh_ctype_s64 int64_t
-#define dh_ctype_f16 float16
+#define dh_ctype_f16 uint32_t
 #define dh_ctype_f32 float32
 #define dh_ctype_f64 float64
 #define dh_ctype_ptr void *
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
     return flags;
 }
 
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
 {
     return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
 }
 
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
 {
     return float_rel_to_flags(float16_compare(x, y, fp_status));
 }
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
 #define float64_three make_float64(0x4008000000000000ULL)
 #define float64_one_point_five make_float64(0x3FF8000000000000ULL)
 
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
     return float64_muladd(a, b, float64_two, 0, fpst);
 }
 
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
 }
 
 /* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
     uint16_t val16, sbit;
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
 #define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
 
 #define ADVSIMD_HALFOP(name) \
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
 { \
     float_status *fpst = fpstp; \
     return float16_ ## name(a, b, fpst); \
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
 ADVSIMD_TWOHALFOP(mulx)
 
 /* fused multiply-accumulate */
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
+                                 void *fpstp)
 {
     float_status *fpst = fpstp;
     return float16_muladd(a, b, c, 0, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
 
 #define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
 
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare_quiet(a, b, fpst);
     return ADVSIMD_CMPRES(compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare(a, b, fpst);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
                           compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     int compare = float16_compare(a, b, fpst);
     return ADVSIMD_CMPRES(compare == float_relation_greater);
 }
 
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f0 = float16_abs(a);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
                           compare == float_relation_equal);
 }
 
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f0 = float16_abs(a);
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
 }
 
 /* round to integral */
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
 {
     return float16_round_to_int(x, fp_status);
 }
 
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
 {
     int old_flags = get_float_exception_flags(fp_status), new_flags;
     float16 ret;
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
  * setting the mode appropriately before calling the helper.
  */
 
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
     return float16_to_int16(a, fpst);
 }
 
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
 {
     float_status *fpst = fpstp;
 
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
  * Square Root and Reciprocal square root
  */
 
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
 {
     float_status *s = fpstp;
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
 
 /* Integer to float and float to integer conversions */
 
-#define CONV_ITOF(name, fsz, sign) \
-float##fsz HELPER(name)(uint32_t x, void *fpstp) \
-{ \
-    float_status *fpst = fpstp; \
-    return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
+#define CONV_ITOF(name, ftype, fsz, sign) \
+ftype HELPER(name)(uint32_t x, void *fpstp) \
+{ \
+    float_status *fpst = fpstp; \
+    return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
 }
 
-#define CONV_FTOI(name, fsz, sign, round) \
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
-{ \
-    float_status *fpst = fpstp; \
-    if (float##fsz##_is_any_nan(x)) { \
-        float_raise(float_flag_invalid, fpst); \
-        return 0; \
-    } \
-    return float##fsz##_to_##sign##int32##round(x, fpst); \
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
+uint32_t HELPER(name)(ftype x, void *fpstp) \
+{ \
+    float_status *fpst = fpstp; \
+    if (float##fsz##_is_any_nan(x)) { \
+        float_raise(float_flag_invalid, fpst); \
+        return 0; \
+    } \
+    return float##fsz##_to_##sign##int32##round(x, fpst); \
 }
 
-#define FLOAT_CONVS(name, p, fsz, sign) \
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
+    CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
+    CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
+    CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
 
-FLOAT_CONVS(si, h, 16, )
-FLOAT_CONVS(si, s, 32, )
-FLOAT_CONVS(si, d, 64, )
-FLOAT_CONVS(ui, h, 16, u)
-FLOAT_CONVS(ui, s, 32, u)
-FLOAT_CONVS(ui, d, 64, u)
+FLOAT_CONVS(si, h, uint32_t, 16, )
+FLOAT_CONVS(si, s, float32, 32, )
+FLOAT_CONVS(si, d, float64, 64, )
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
+FLOAT_CONVS(ui, s, float32, 32, u)
+FLOAT_CONVS(ui, d, float64, 64, u)
 
 #undef CONV_ITOF
 #undef CONV_FTOI
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
     return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
 }
 
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
 }
 
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
 {
     return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
 }
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
     }
 }
 
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
 }
 
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
 {
     return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
 }
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
 }
 
 /* Half precision conversions. */
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion.  In this case,
      * it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion.  In this case,
      * it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion.  In this case,
      * it would affect flushing input denormals.
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
     return r;
 }
 
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
 {
     /* Squash FZ16 to 0 for the duration of conversion.  In this case,
      * it would affect flushing output denormals.
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
     g_assert_not_reached();
 }
 
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
 {
     float_status *fpst = fpstp;
     float16 f16 = float16_squash_input_denormal(input, fpst);
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
     return extract64(estimate, 0, 8) << 44;
 }
 
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
 {
     float_status *s = fpstp;
     float16 f16 = float16_squash_input_denormal(input, s);
--
2.17.1


From: Richard Henderson <richard.henderson@linaro.org>

We're about to split tbflags into two parts.  These macros
will ensure that the correct part is used with the correct
set of bits.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 22 +++++++++-
 target/arm/helper-a64.c    |  2 +-
 target/arm/helper.c        | 85 +++++++++++++++++---------------------
 target/arm/translate-a64.c | 36 ++++++++--------
 target/arm/translate.c     | 48 ++++++++++-----------
 5 files changed, 101 insertions(+), 92 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, TCMA, 16, 2)
 FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
 FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
 
+/*
+ * Helpers for using the above.
+ */
+#define DP_TBFLAG_ANY(DST, WHICH, VAL) \
+    (DST = FIELD_DP32(DST, TBFLAG_ANY, WHICH, VAL))
+#define DP_TBFLAG_A64(DST, WHICH, VAL) \
+    (DST = FIELD_DP32(DST, TBFLAG_A64, WHICH, VAL))
+#define DP_TBFLAG_A32(DST, WHICH, VAL) \
+    (DST = FIELD_DP32(DST, TBFLAG_A32, WHICH, VAL))
+#define DP_TBFLAG_M32(DST, WHICH, VAL) \
+    (DST = FIELD_DP32(DST, TBFLAG_M32, WHICH, VAL))
+#define DP_TBFLAG_AM32(DST, WHICH, VAL) \
+    (DST = FIELD_DP32(DST, TBFLAG_AM32, WHICH, VAL))
+
+#define EX_TBFLAG_ANY(IN, WHICH)   FIELD_EX32(IN, TBFLAG_ANY, WHICH)
+#define EX_TBFLAG_A64(IN, WHICH)   FIELD_EX32(IN, TBFLAG_A64, WHICH)
+#define EX_TBFLAG_A32(IN, WHICH)   FIELD_EX32(IN, TBFLAG_A32, WHICH)
+#define EX_TBFLAG_M32(IN, WHICH)   FIELD_EX32(IN, TBFLAG_M32, WHICH)
+#define EX_TBFLAG_AM32(IN, WHICH)  FIELD_EX32(IN, TBFLAG_AM32, WHICH)
+
 /**
  * cpu_mmu_index:
  * @env: The cpu environment
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
  */
 static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
 {
-    return FIELD_EX32(env->hflags, TBFLAG_ANY, MMUIDX);
+    return EX_TBFLAG_ANY(env->hflags, MMUIDX);
 }
 
 static inline bool bswap_code(bool sctlr_b)
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
      * the hflags rebuild, since we can pull the composite TBII field
      * from there.
      */
-    tbii = FIELD_EX32(env->hflags, TBFLAG_A64, TBII);
+    tbii = EX_TBFLAG_A64(env->hflags, TBII);
     if ((tbii >> extract64(new_pc, 55, 1)) & 1) {
         /* TBI is enabled. */
         int core_mmu_idx = cpu_mmu_index(env, false);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 static uint32_t rebuild_hflags_common(CPUARMState *env, int fp_el,
                                       ARMMMUIdx mmu_idx, uint32_t flags)
 {
-    flags = FIELD_DP32(flags, TBFLAG_ANY, FPEXC_EL, fp_el);
-    flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX,
-                       arm_to_core_mmu_idx(mmu_idx));
+    DP_TBFLAG_ANY(flags, FPEXC_EL, fp_el);
+    DP_TBFLAG_ANY(flags, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
 
     if (arm_singlestep_active(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, SS_ACTIVE, 1);
+        DP_TBFLAG_ANY(flags, SS_ACTIVE, 1);
     }
     return flags;
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_common_32(CPUARMState *env, int fp_el,
     bool sctlr_b = arm_sctlr_b(env);
 
     if (sctlr_b) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR__B, 1);
+        DP_TBFLAG_A32(flags, SCTLR__B, 1);
     }
     if (arm_cpu_data_is_big_endian_a32(env, sctlr_b)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
+        DP_TBFLAG_ANY(flags, BE_DATA, 1);
     }
-    flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
+    DP_TBFLAG_A32(flags, NS, !access_secure_reg(env));
 
     return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_m32(CPUARMState *env, int fp_el,
     uint32_t flags = 0;
 
     if (arm_v7m_is_handler_mode(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_M32, HANDLER, 1);
+        DP_TBFLAG_M32(flags, HANDLER, 1);
     }
 
     /*
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_m32(CPUARMState *env, int fp_el,
     if (arm_feature(env, ARM_FEATURE_V8) &&
         !((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
           (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
-        flags = FIELD_DP32(flags, TBFLAG_M32, STACKCHECK, 1);
+        DP_TBFLAG_M32(flags, STACKCHECK, 1);
     }
 
     return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_aprofile(CPUARMState *env)
 {
     int flags = 0;
 
-    flags = FIELD_DP32(flags, TBFLAG_ANY, DEBUG_TARGET_EL,
-                       arm_debug_target_el(env));
+    DP_TBFLAG_ANY(flags, DEBUG_TARGET_EL, arm_debug_target_el(env));
     return flags;
 }
 
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a32(CPUARMState *env, int fp_el,
     uint32_t flags = rebuild_hflags_aprofile(env);
 
     if (arm_el_is_aa64(env, 1)) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
+        DP_TBFLAG_A32(flags, VFPEN, 1);
     }
 
     if (arm_current_el(env) < 2 && env->cp15.hstr_el2 &&
         (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, HSTR_ACTIVE, 1);
+        DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
     }
 
     return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     uint64_t sctlr;
     int tbii, tbid;
 
-    flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
+    DP_TBFLAG_ANY(flags, AARCH64_STATE, 1);
 
     /* Get control bits for tagged addresses.  */
     tbid = aa64_va_parameter_tbi(tcr, mmu_idx);
     tbii = tbid & ~aa64_va_parameter_tbid(tcr, mmu_idx);
 
-    flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
-    flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
+    DP_TBFLAG_A64(flags, TBII, tbii);
+    DP_TBFLAG_A64(flags, TBID, tbid);
 
     if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
         int sve_el = sve_exception_el(env, el);
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         } else {
             zcr_len = sve_zcr_len_for_el(env, el);
         }
-        flags = FIELD_DP32(flags, TBFLAG_A64, SVEEXC_EL, sve_el);
-        flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
+        DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
+        DP_TBFLAG_A64(flags, ZCR_LEN, zcr_len);
     }
 
     sctlr = regime_sctlr(env, stage1);
 
     if (arm_cpu_data_is_big_endian_a64(el, sctlr)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
+        DP_TBFLAG_ANY(flags, BE_DATA, 1);
     }
 
     if (cpu_isar_feature(aa64_pauth, env_archcpu(env))) {
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
          * The decision of which action to take is left to a helper.
          */
         if (sctlr & (SCTLR_EnIA | SCTLR_EnIB | SCTLR_EnDA | SCTLR_EnDB)) {
-            flags = FIELD_DP32(flags, TBFLAG_A64, PAUTH_ACTIVE, 1);
+            DP_TBFLAG_A64(flags, PAUTH_ACTIVE, 1);
         }
     }
 
     if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
         /* Note that SCTLR_EL[23].BT == SCTLR_BT1.  */
         if (sctlr & (el == 0 ? SCTLR_BT0 : SCTLR_BT1)) {
-            flags = FIELD_DP32(flags, TBFLAG_A64, BT, 1);
+            DP_TBFLAG_A64(flags, BT, 1);
         }
     }
 
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     case ARMMMUIdx_SE10_1:
     case ARMMMUIdx_SE10_1_PAN:
         /* TODO: ARMv8.3-NV */
-        flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+        DP_TBFLAG_A64(flags, UNPRIV, 1);
         break;
     case ARMMMUIdx_E20_2:
     case ARMMMUIdx_E20_2_PAN:
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
          * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
          */
         if (env->cp15.hcr_el2 & HCR_TGE) {
-            flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+            DP_TBFLAG_A64(flags, UNPRIV, 1);
         }
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
          * 4) If no Allocation Tag Access, then all accesses are Unchecked.
          */
         if (allocation_tag_access_enabled(env, el, sctlr)) {
-            flags = FIELD_DP32(flags, TBFLAG_A64, ATA, 1);
+            DP_TBFLAG_A64(flags, ATA, 1);
             if (tbid
                 && !(env->pstate & PSTATE_TCO)
                 && (sctlr & (el == 0 ? SCTLR_TCF0 : SCTLR_TCF))) {
-                flags = FIELD_DP32(flags, TBFLAG_A64, MTE_ACTIVE, 1);
+                DP_TBFLAG_A64(flags, MTE_ACTIVE, 1);
             }
         }
         /* And again for unprivileged accesses, if required.  */
-        if (FIELD_EX32(flags, TBFLAG_A64, UNPRIV)
+        if (EX_TBFLAG_A64(flags, UNPRIV)
             && tbid
             && !(env->pstate & PSTATE_TCO)
             && (sctlr & SCTLR_TCF0)
             && allocation_tag_access_enabled(env, 0, sctlr)) {
-            flags = FIELD_DP32(flags, TBFLAG_A64, MTE0_ACTIVE, 1);
+            DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
         }
         /* Cache TCMA as well as TBI. */
-        flags = FIELD_DP32(flags, TBFLAG_A64, TCMA,
-                           aa64_va_parameter_tcma(tcr, mmu_idx));
+        DP_TBFLAG_A64(flags, TCMA, aa64_va_parameter_tcma(tcr, mmu_idx));
     }
 
     return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
     *cs_base = 0;
     assert_hflags_rebuild_correctly(env);
 
-    if (FIELD_EX32(flags, TBFLAG_ANY, AARCH64_STATE)) {
+    if (EX_TBFLAG_ANY(flags, AARCH64_STATE)) {
         *pc = env->pc;
         if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
-            flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
+            DP_TBFLAG_A64(flags, BTYPE, env->btype);
         }
     } else {
         *pc = env->regs[15];
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
             if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
                 FIELD_EX32(env->v7m.fpccr[M_REG_S], V7M_FPCCR, S)
                 != env->v7m.secure) {
-                flags = FIELD_DP32(flags, TBFLAG_M32, FPCCR_S_WRONG, 1);
+                DP_TBFLAG_M32(flags, FPCCR_S_WRONG, 1);
             }
 
             if ((env->v7m.fpccr[env->v7m.secure] & R_V7M_FPCCR_ASPEN_MASK) &&
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                  * active FP context; we must create a new FP context before
                  * executing any FP insn.
                  */
-                flags = FIELD_DP32(flags, TBFLAG_M32, NEW_FP_CTXT_NEEDED, 1);
+                DP_TBFLAG_M32(flags, NEW_FP_CTXT_NEEDED, 1);
             }
 
             bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
             if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
-                flags = FIELD_DP32(flags, TBFLAG_M32, LSPACT, 1);
+                DP_TBFLAG_M32(flags, LSPACT, 1);
             }
         } else {
             /*
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
              * Note that VECLEN+VECSTRIDE are RES0 for M-profile.
              */
             if (arm_feature(env, ARM_FEATURE_XSCALE)) {
-                flags = FIELD_DP32(flags, TBFLAG_A32,
-                                   XSCALE_CPAR, env->cp15.c15_cpar);
+                DP_TBFLAG_A32(flags, XSCALE_CPAR, env->cp15.c15_cpar);
             } else {
-                flags = FIELD_DP32(flags, TBFLAG_A32, VECLEN,
-                                   env->vfp.vec_len);
-                flags = FIELD_DP32(flags, TBFLAG_A32, VECSTRIDE,
-                                   env->vfp.vec_stride);
+                DP_TBFLAG_A32(flags, VECLEN, env->vfp.vec_len);
+                DP_TBFLAG_A32(flags, VECSTRIDE, env->vfp.vec_stride);
             }
             if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)) {
-                flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
+                DP_TBFLAG_A32(flags, VFPEN, 1);
             }
         }
 
-        flags = FIELD_DP32(flags, TBFLAG_AM32, THUMB, env->thumb);
-        flags = FIELD_DP32(flags, TBFLAG_AM32, CONDEXEC, env->condexec_bits);
+        DP_TBFLAG_AM32(flags, THUMB, env->thumb);
+        DP_TBFLAG_AM32(flags, CONDEXEC, env->condexec_bits);
     }
 
     /*
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
      *  1      1   Active-not-pending
      * SS_ACTIVE is set in hflags; PSTATE__SS is computed every TB.
      */
-    if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE) &&
-        (env->pstate & PSTATE_SS)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE__SS, 1);
+    if (EX_TBFLAG_ANY(flags, SS_ACTIVE) && (env->pstate & PSTATE_SS)) {
+        DP_TBFLAG_ANY(flags, PSTATE__SS, 1);
     }
 
     *pflags = flags;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
                                !arm_el_is_aa64(env, 3);
     dc->thumb = 0;
     dc->sctlr_b = 0;
-    dc->be_data = FIELD_EX32(tb_flags, TBFLAG_ANY, BE_DATA) ? MO_BE : MO_LE;
+    dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
     dc->condexec_mask = 0;
     dc->condexec_cond = 0;
-    core_mmu_idx = FIELD_EX32(tb_flags, TBFLAG_ANY, MMUIDX);
+    core_mmu_idx = EX_TBFLAG_ANY(tb_flags, MMUIDX);
     dc->mmu_idx = core_to_aa64_mmu_idx(core_mmu_idx);
-    dc->tbii = FIELD_EX32(tb_flags, TBFLAG_A64, TBII);
-    dc->tbid = FIELD_EX32(tb_flags, TBFLAG_A64, TBID);
-    dc->tcma = FIELD_EX32(tb_flags, TBFLAG_A64, TCMA);
+    dc->tbii = EX_TBFLAG_A64(tb_flags, TBII);
+    dc->tbid = EX_TBFLAG_A64(tb_flags, TBID);
+    dc->tcma = EX_TBFLAG_A64(tb_flags, TCMA);
     dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
 #if !defined(CONFIG_USER_ONLY)
     dc->user = (dc->current_el == 0);
 #endif
-    dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
-    dc->sve_excp_el = FIELD_EX32(tb_flags, TBFLAG_A64, SVEEXC_EL);
-    dc->sve_len = (FIELD_EX32(tb_flags, TBFLAG_A64, ZCR_LEN) + 1) * 16;
-    dc->pauth_active = FIELD_EX32(tb_flags, TBFLAG_A64, PAUTH_ACTIVE);
-    dc->bt = FIELD_EX32(tb_flags, TBFLAG_A64, BT);
-    dc->btype = FIELD_EX32(tb_flags, TBFLAG_A64, BTYPE);
-    dc->unpriv = FIELD_EX32(tb_flags, TBFLAG_A64, UNPRIV);
-    dc->ata = FIELD_EX32(tb_flags, TBFLAG_A64, ATA);
-    dc->mte_active[0] = FIELD_EX32(tb_flags, TBFLAG_A64, MTE_ACTIVE);
-    dc->mte_active[1] = FIELD_EX32(tb_flags, TBFLAG_A64, MTE0_ACTIVE);
+    dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
+    dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
+    dc->sve_len = (EX_TBFLAG_A64(tb_flags, ZCR_LEN) + 1) * 16;
+    dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
+    dc->bt = EX_TBFLAG_A64(tb_flags, BT);
+    dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
+    dc->unpriv = EX_TBFLAG_A64(tb_flags, UNPRIV);
+    dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
+    dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
+    dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
     dc->vec_len = 0;
     dc->vec_stride = 0;
     dc->cp_regs = arm_cpu->cp_regs;
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
      *   emit code to generate a software step exception
      *   end the TB
      */
-    dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
-    dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE__SS);
+    dc->ss_active = EX_TBFLAG_ANY(tb_flags, SS_ACTIVE);
+    dc->pstate_ss = EX_TBFLAG_ANY(tb_flags, PSTATE__SS);
     dc->is_ldex = false;
-    dc->debug_target_el = FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
+    dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
 
     /* Bound the number of insns to execute to those left on the page.  */
     bound = -(dc->base.pc_first | TARGET_PAGE_MASK) / 4;
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
      */
     dc->secure_routed_to_el3 = arm_feature(env, ARM_FEATURE_EL3) &&
                                !arm_el_is_aa64(env, 3);
-    dc->thumb = FIELD_EX32(tb_flags, TBFLAG_AM32, THUMB);
-    dc->be_data = FIELD_EX32(tb_flags, TBFLAG_ANY, BE_DATA) ? MO_BE : MO_LE;
-    condexec = FIELD_EX32(tb_flags, TBFLAG_AM32, CONDEXEC);
+    dc->thumb = EX_TBFLAG_AM32(tb_flags, THUMB);
+    dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
+    condexec = EX_TBFLAG_AM32(tb_flags, CONDEXEC);
     dc->condexec_mask = (condexec & 0xf) << 1;
     dc->condexec_cond = condexec >> 4;
 
-    core_mmu_idx = FIELD_EX32(tb_flags, TBFLAG_ANY, MMUIDX);
+    core_mmu_idx = EX_TBFLAG_ANY(tb_flags, MMUIDX);
     dc->mmu_idx = core_to_arm_mmu_idx(env, core_mmu_idx);
     dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
 #if !defined(CONFIG_USER_ONLY)
     dc->user = (dc->current_el == 0);
 #endif
-    dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
+    dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
 
     if (arm_feature(env, ARM_FEATURE_M)) {
         dc->vfp_enabled = 1;
         dc->be_data = MO_TE;
-        dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_M32, HANDLER);
+        dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER);
         dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
             regime_is_secure(env, dc->mmu_idx);
-        dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_M32, STACKCHECK);
-        dc->v8m_fpccr_s_wrong =
-            FIELD_EX32(tb_flags, TBFLAG_M32, FPCCR_S_WRONG);
+        dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK);
+        dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG);
         dc->v7m_new_fp_ctxt_needed =
-            FIELD_EX32(tb_flags, TBFLAG_M32, NEW_FP_CTXT_NEEDED);
-        dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_M32, LSPACT);
+            EX_TBFLAG_M32(tb_flags, NEW_FP_CTXT_NEEDED);
+        dc->v7m_lspact = EX_TBFLAG_M32(tb_flags, LSPACT);
     } else {
-        dc->be_data =
-            FIELD_EX32(tb_flags, TBFLAG_ANY, BE_DATA) ? MO_BE : MO_LE;
-        dc->debug_target_el =
-            FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
-        dc->sctlr_b = FIELD_EX32(tb_flags, TBFLAG_A32, SCTLR__B);
-        dc->hstr_active = FIELD_EX32(tb_flags, TBFLAG_A32, HSTR_ACTIVE);
-        dc->ns = FIELD_EX32(tb_flags, TBFLAG_A32, NS);
-        dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
+        dc->debug_target_el = EX_TBFLAG_ANY(tb_flags, DEBUG_TARGET_EL);
+        dc->sctlr_b = EX_TBFLAG_A32(tb_flags, SCTLR__B);
+        dc->hstr_active = EX_TBFLAG_A32(tb_flags, HSTR_ACTIVE);
447
+ dc->ns = EX_TBFLAG_A32(tb_flags, NS);
448
+ dc->vfp_enabled = EX_TBFLAG_A32(tb_flags, VFPEN);
449
if (arm_feature(env, ARM_FEATURE_XSCALE)) {
450
- dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
451
+ dc->c15_cpar = EX_TBFLAG_A32(tb_flags, XSCALE_CPAR);
452
} else {
453
- dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
454
- dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
455
+ dc->vec_len = EX_TBFLAG_A32(tb_flags, VECLEN);
456
+ dc->vec_stride = EX_TBFLAG_A32(tb_flags, VECSTRIDE);
457
}
458
}
459
dc->cp_regs = cpu->cp_regs;
460
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
461
* emit code to generate a software step exception
462
* end the TB
463
*/
464
- dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
465
- dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE__SS);
466
+ dc->ss_active = EX_TBFLAG_ANY(tb_flags, SS_ACTIVE);
467
+ dc->pstate_ss = EX_TBFLAG_ANY(tb_flags, PSTATE__SS);
468
dc->is_ldex = false;
469
470
dc->page_start = dc->base.pc_first & TARGET_PAGE_MASK;
471
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns)
472
DisasContext dc = { };
473
const TranslatorOps *ops = &arm_translator_ops;
474
475
- if (FIELD_EX32(tb->flags, TBFLAG_AM32, THUMB)) {
476
+ if (EX_TBFLAG_AM32(tb->flags, THUMB)) {
477
ops = &thumb_translator_ops;
478
}
479
#ifdef TARGET_AARCH64
480
- if (FIELD_EX32(tb->flags, TBFLAG_ANY, AARCH64_STATE)) {
481
+ if (EX_TBFLAG_ANY(tb->flags, AARCH64_STATE)) {
482
ops = &aarch64_translator_ops;
483
}
484
#endif
--
2.20.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

In preparation for splitting tb->flags across multiple
fields, introduce a structure to hold the value(s).
So far this only migrates the one uint32_t and fixes
all of the places that require adjustment to match.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 26 ++++++++++++---------
 target/arm/translate.h     | 11 +++++++++
 target/arm/helper.c        | 48 +++++++++++++++++++++-----------------
 target/arm/translate-a64.c |  2 +-
 target/arm/translate.c     |  7 +++---
 5 files changed, 57 insertions(+), 37 deletions(-)
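
For illustration only (a sketch, not part of the patch): wrapping the
uint32_t in a struct keeps every deposit/extract call site unchanged
while leaving room to split the storage later:

    /* Deposit and extract through the new struct-based macros. */
    CPUARMTBFlags flags = {};
    DP_TBFLAG_ANY(flags, MMUIDX, 3);             /* writes flags.flags */
    assert(EX_TBFLAG_ANY(flags, MMUIDX) == 3);   /* reads it back */
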
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ typedef struct ARMPACKey {
25
} ARMPACKey;
26
#endif
27
28
+/* See the commentary above the TBFLAG field definitions. */
29
+typedef struct CPUARMTBFlags {
30
+ uint32_t flags;
31
+} CPUARMTBFlags;
32
33
typedef struct CPUARMState {
34
/* Regs for current mode. */
35
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
36
uint32_t aarch64; /* 1 if CPU is in aarch64 state; inverse of PSTATE.nRW */
37
38
/* Cached TBFLAGS state. See below for which bits are included. */
39
- uint32_t hflags;
40
+ CPUARMTBFlags hflags;
41
42
/* Frequently accessed CPSR bits are stored separately for efficiency.
43
This contains all the other bits. Use cpsr_{read,write} to access
44
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
45
* Helpers for using the above.
46
*/
47
#define DP_TBFLAG_ANY(DST, WHICH, VAL) \
48
- (DST = FIELD_DP32(DST, TBFLAG_ANY, WHICH, VAL))
49
+ (DST.flags = FIELD_DP32(DST.flags, TBFLAG_ANY, WHICH, VAL))
50
#define DP_TBFLAG_A64(DST, WHICH, VAL) \
51
- (DST = FIELD_DP32(DST, TBFLAG_A64, WHICH, VAL))
52
+ (DST.flags = FIELD_DP32(DST.flags, TBFLAG_A64, WHICH, VAL))
53
#define DP_TBFLAG_A32(DST, WHICH, VAL) \
54
- (DST = FIELD_DP32(DST, TBFLAG_A32, WHICH, VAL))
55
+ (DST.flags = FIELD_DP32(DST.flags, TBFLAG_A32, WHICH, VAL))
56
#define DP_TBFLAG_M32(DST, WHICH, VAL) \
57
- (DST = FIELD_DP32(DST, TBFLAG_M32, WHICH, VAL))
58
+ (DST.flags = FIELD_DP32(DST.flags, TBFLAG_M32, WHICH, VAL))
59
#define DP_TBFLAG_AM32(DST, WHICH, VAL) \
60
- (DST = FIELD_DP32(DST, TBFLAG_AM32, WHICH, VAL))
61
+ (DST.flags = FIELD_DP32(DST.flags, TBFLAG_AM32, WHICH, VAL))
62
63
-#define EX_TBFLAG_ANY(IN, WHICH) FIELD_EX32(IN, TBFLAG_ANY, WHICH)
64
-#define EX_TBFLAG_A64(IN, WHICH) FIELD_EX32(IN, TBFLAG_A64, WHICH)
65
-#define EX_TBFLAG_A32(IN, WHICH) FIELD_EX32(IN, TBFLAG_A32, WHICH)
66
-#define EX_TBFLAG_M32(IN, WHICH) FIELD_EX32(IN, TBFLAG_M32, WHICH)
67
-#define EX_TBFLAG_AM32(IN, WHICH) FIELD_EX32(IN, TBFLAG_AM32, WHICH)
68
+#define EX_TBFLAG_ANY(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_ANY, WHICH)
69
+#define EX_TBFLAG_A64(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_A64, WHICH)
70
+#define EX_TBFLAG_A32(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_A32, WHICH)
71
+#define EX_TBFLAG_M32(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_M32, WHICH)
72
+#define EX_TBFLAG_AM32(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_AM32, WHICH)
73
74
/**
75
* cpu_mmu_index:
76
diff --git a/target/arm/translate.h b/target/arm/translate.h
77
index XXXXXXX..XXXXXXX 100644
78
--- a/target/arm/translate.h
79
+++ b/target/arm/translate.h
80
@@ -XXX,XX +XXX,XX @@ typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
81
typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
82
typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
83
84
+/**
85
+ * arm_tbflags_from_tb:
86
+ * @tb: the TranslationBlock
87
+ *
88
+ * Extract the flag values from @tb.
89
+ */
90
+static inline CPUARMTBFlags arm_tbflags_from_tb(const TranslationBlock *tb)
91
+{
92
+ return (CPUARMTBFlags){ tb->flags };
93
+}
94
+
95
/*
96
* Enum for argument to fpstatus_ptr().
97
*/
98
diff --git a/target/arm/helper.c b/target/arm/helper.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/target/arm/helper.c
101
+++ b/target/arm/helper.c
102
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
103
}
104
#endif
105
106
-static uint32_t rebuild_hflags_common(CPUARMState *env, int fp_el,
107
- ARMMMUIdx mmu_idx, uint32_t flags)
108
+static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
109
+ ARMMMUIdx mmu_idx,
110
+ CPUARMTBFlags flags)
111
{
112
DP_TBFLAG_ANY(flags, FPEXC_EL, fp_el);
113
DP_TBFLAG_ANY(flags, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
114
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_common(CPUARMState *env, int fp_el,
115
return flags;
116
}
117
118
-static uint32_t rebuild_hflags_common_32(CPUARMState *env, int fp_el,
119
- ARMMMUIdx mmu_idx, uint32_t flags)
120
+static CPUARMTBFlags rebuild_hflags_common_32(CPUARMState *env, int fp_el,
121
+ ARMMMUIdx mmu_idx,
122
+ CPUARMTBFlags flags)
123
{
124
bool sctlr_b = arm_sctlr_b(env);
125
126
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_common_32(CPUARMState *env, int fp_el,
127
return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
128
}
129
130
-static uint32_t rebuild_hflags_m32(CPUARMState *env, int fp_el,
131
- ARMMMUIdx mmu_idx)
132
+static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
133
+ ARMMMUIdx mmu_idx)
134
{
135
- uint32_t flags = 0;
136
+ CPUARMTBFlags flags = {};
137
138
if (arm_v7m_is_handler_mode(env)) {
139
DP_TBFLAG_M32(flags, HANDLER, 1);
140
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_m32(CPUARMState *env, int fp_el,
141
return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
142
}
143
144
-static uint32_t rebuild_hflags_aprofile(CPUARMState *env)
145
+static CPUARMTBFlags rebuild_hflags_aprofile(CPUARMState *env)
146
{
147
- int flags = 0;
148
+ CPUARMTBFlags flags = {};
149
150
DP_TBFLAG_ANY(flags, DEBUG_TARGET_EL, arm_debug_target_el(env));
151
return flags;
152
}
153
154
-static uint32_t rebuild_hflags_a32(CPUARMState *env, int fp_el,
155
- ARMMMUIdx mmu_idx)
156
+static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
157
+ ARMMMUIdx mmu_idx)
158
{
159
- uint32_t flags = rebuild_hflags_aprofile(env);
160
+ CPUARMTBFlags flags = rebuild_hflags_aprofile(env);
161
162
if (arm_el_is_aa64(env, 1)) {
163
DP_TBFLAG_A32(flags, VFPEN, 1);
164
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a32(CPUARMState *env, int fp_el,
165
return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
166
}
167
168
-static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
169
- ARMMMUIdx mmu_idx)
170
+static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
171
+ ARMMMUIdx mmu_idx)
172
{
173
- uint32_t flags = rebuild_hflags_aprofile(env);
174
+ CPUARMTBFlags flags = rebuild_hflags_aprofile(env);
175
ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
176
uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
177
uint64_t sctlr;
178
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
179
return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
180
}
181
182
-static uint32_t rebuild_hflags_internal(CPUARMState *env)
183
+static CPUARMTBFlags rebuild_hflags_internal(CPUARMState *env)
184
{
185
int el = arm_current_el(env);
186
int fp_el = fp_exception_el(env, el);
187
@@ -XXX,XX +XXX,XX @@ void HELPER(rebuild_hflags_m32_newel)(CPUARMState *env)
188
int el = arm_current_el(env);
189
int fp_el = fp_exception_el(env, el);
190
ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, el);
191
+
192
env->hflags = rebuild_hflags_m32(env, fp_el, mmu_idx);
193
}
194
195
@@ -XXX,XX +XXX,XX @@ void HELPER(rebuild_hflags_a64)(CPUARMState *env, int el)
196
static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
197
{
198
#ifdef CONFIG_DEBUG_TCG
199
- uint32_t env_flags_current = env->hflags;
200
- uint32_t env_flags_rebuilt = rebuild_hflags_internal(env);
201
+ CPUARMTBFlags c = env->hflags;
202
+ CPUARMTBFlags r = rebuild_hflags_internal(env);
203
204
- if (unlikely(env_flags_current != env_flags_rebuilt)) {
205
+ if (unlikely(c.flags != r.flags)) {
206
fprintf(stderr, "TCG hflags mismatch (current:0x%08x rebuilt:0x%08x)\n",
207
- env_flags_current, env_flags_rebuilt);
208
+ c.flags, r.flags);
209
abort();
210
}
211
#endif
212
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
213
void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
214
target_ulong *cs_base, uint32_t *pflags)
215
{
216
- uint32_t flags = env->hflags;
217
+ CPUARMTBFlags flags;
218
219
*cs_base = 0;
220
assert_hflags_rebuild_correctly(env);
221
+ flags = env->hflags;
222
223
if (EX_TBFLAG_ANY(flags, AARCH64_STATE)) {
224
*pc = env->pc;
225
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
226
DP_TBFLAG_ANY(flags, PSTATE__SS, 1);
227
}
228
229
- *pflags = flags;
230
+ *pflags = flags.flags;
231
}
232
233
#ifdef TARGET_AARCH64
234
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
235
index XXXXXXX..XXXXXXX 100644
236
--- a/target/arm/translate-a64.c
237
+++ b/target/arm/translate-a64.c
238
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
239
DisasContext *dc = container_of(dcbase, DisasContext, base);
240
CPUARMState *env = cpu->env_ptr;
241
ARMCPU *arm_cpu = env_archcpu(env);
242
- uint32_t tb_flags = dc->base.tb->flags;
243
+ CPUARMTBFlags tb_flags = arm_tbflags_from_tb(dc->base.tb);
244
int bound, core_mmu_idx;
245
246
dc->isar = &arm_cpu->isar;
247
diff --git a/target/arm/translate.c b/target/arm/translate.c
248
index XXXXXXX..XXXXXXX 100644
249
--- a/target/arm/translate.c
250
+++ b/target/arm/translate.c
251
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
252
DisasContext *dc = container_of(dcbase, DisasContext, base);
253
CPUARMState *env = cs->env_ptr;
254
ARMCPU *cpu = env_archcpu(env);
255
- uint32_t tb_flags = dc->base.tb->flags;
256
+ CPUARMTBFlags tb_flags = arm_tbflags_from_tb(dc->base.tb);
257
uint32_t condexec, core_mmu_idx;
258
259
dc->isar = &cpu->isar;
260
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns)
261
{
262
DisasContext dc = { };
263
const TranslatorOps *ops = &arm_translator_ops;
264
+ CPUARMTBFlags tb_flags = arm_tbflags_from_tb(tb);
265
266
- if (EX_TBFLAG_AM32(tb->flags, THUMB)) {
267
+ if (EX_TBFLAG_AM32(tb_flags, THUMB)) {
268
ops = &thumb_translator_ops;
269
}
270
#ifdef TARGET_AARCH64
271
- if (EX_TBFLAG_ANY(tb->flags, AARCH64_STATE)) {
272
+ if (EX_TBFLAG_ANY(tb_flags, AARCH64_STATE)) {
273
ops = &aarch64_translator_ops;
274
}
275
#endif
276
--
2.20.1

In commit f0aff255700 we made cpacr_write() enforce that some CPACR
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
we forgot to also update the register's reset value. The effect
was that (a) a guest that read CPACR on reset would not see ones in
the RAO bits, and (b) if you did a migration before the guest did
a write to the CPACR then the migration would fail because the
destination would enforce the RAO bits and then complain that they
didn't match the zero value from the source.

Implement reset for the CPACR using a custom reset function
that just calls cpacr_write(), to avoid having to duplicate
the logic for which bits are RAO.

This bug would affect migration for TCG CPUs which are ARMv7
with VFP but without one of Neon or VFPv3.

Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
---
 target/arm/helper.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

Now that we have all of the proper macros defined, expanding
the CPUARMTBFlags structure and populating the two TB fields
is relatively simple.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 49 ++++++++++++++++++++++------------------
 target/arm/translate.h |  2 +-
 target/arm/helper.c    | 10 +++++----
 3 files changed, 35 insertions(+), 26 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMPACKey {
/* See the commentary above the TBFLAG field definitions. */
typedef struct CPUARMTBFlags {
24
24
uint32_t flags;
25
+ target_ulong flags2;
26
} CPUARMTBFlags;
27
28
typedef struct CPUARMState {
29
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
30
#include "exec/cpu-all.h"
31
32
/*
33
- * Bit usage in the TB flags field: bit 31 indicates whether we are
34
- * in 32 or 64 bit mode. The meaning of the other bits depends on that.
35
- * We put flags which are shared between 32 and 64 bit mode at the top
36
- * of the word, and flags which apply to only one mode at the bottom.
37
+ * We have more than 32-bits worth of state per TB, so we split the data
38
+ * between tb->flags and tb->cs_base, which is otherwise unused for ARM.
39
+ * We collect these two parts in CPUARMTBFlags where they are named
40
+ * flags and flags2 respectively.
41
*
42
- * 31 20 18 14 9 0
43
- * +--------------+-----+-----+----------+--------------+
44
- * | | | TBFLAG_A32 | |
45
- * | | +-----+----------+ TBFLAG_AM32 |
46
- * | TBFLAG_ANY | |TBFLAG_M32| |
47
- * | +-----------+----------+--------------|
48
- * | | TBFLAG_A64 |
49
- * +--------------+-------------------------------------+
50
- * 31 20 0
51
+ * The flags that are shared between all execution modes, TBFLAG_ANY,
52
+ * are stored in flags. The flags that are specific to a given mode
53
+ * are stores in flags2. Since cs_base is sized on the configured
54
+ * address size, flags2 always has 64-bits for A64, and a minimum of
55
+ * 32-bits for A32 and M32.
56
+ *
57
+ * The bits for 32-bit A-profile and M-profile partially overlap:
58
+ *
59
+ * 18 9 0
60
+ * +----------------+--------------+
61
+ * | TBFLAG_A32 | |
62
+ * +-----+----------+ TBFLAG_AM32 |
63
+ * | |TBFLAG_M32| |
64
+ * +-----+----------+--------------+
65
+ * 14 9 0
66
*
67
* Unless otherwise noted, these bits are cached in env->hflags.
68
*/
69
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
70
#define DP_TBFLAG_ANY(DST, WHICH, VAL) \
71
(DST.flags = FIELD_DP32(DST.flags, TBFLAG_ANY, WHICH, VAL))
72
#define DP_TBFLAG_A64(DST, WHICH, VAL) \
73
- (DST.flags = FIELD_DP32(DST.flags, TBFLAG_A64, WHICH, VAL))
74
+ (DST.flags2 = FIELD_DP32(DST.flags2, TBFLAG_A64, WHICH, VAL))
75
#define DP_TBFLAG_A32(DST, WHICH, VAL) \
76
- (DST.flags = FIELD_DP32(DST.flags, TBFLAG_A32, WHICH, VAL))
77
+ (DST.flags2 = FIELD_DP32(DST.flags2, TBFLAG_A32, WHICH, VAL))
78
#define DP_TBFLAG_M32(DST, WHICH, VAL) \
79
- (DST.flags = FIELD_DP32(DST.flags, TBFLAG_M32, WHICH, VAL))
80
+ (DST.flags2 = FIELD_DP32(DST.flags2, TBFLAG_M32, WHICH, VAL))
81
#define DP_TBFLAG_AM32(DST, WHICH, VAL) \
82
- (DST.flags = FIELD_DP32(DST.flags, TBFLAG_AM32, WHICH, VAL))
83
+ (DST.flags2 = FIELD_DP32(DST.flags2, TBFLAG_AM32, WHICH, VAL))
84
85
#define EX_TBFLAG_ANY(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_ANY, WHICH)
86
-#define EX_TBFLAG_A64(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_A64, WHICH)
87
-#define EX_TBFLAG_A32(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_A32, WHICH)
88
-#define EX_TBFLAG_M32(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_M32, WHICH)
89
-#define EX_TBFLAG_AM32(IN, WHICH) FIELD_EX32(IN.flags, TBFLAG_AM32, WHICH)
90
+#define EX_TBFLAG_A64(IN, WHICH) FIELD_EX32(IN.flags2, TBFLAG_A64, WHICH)
91
+#define EX_TBFLAG_A32(IN, WHICH) FIELD_EX32(IN.flags2, TBFLAG_A32, WHICH)
92
+#define EX_TBFLAG_M32(IN, WHICH) FIELD_EX32(IN.flags2, TBFLAG_M32, WHICH)
93
+#define EX_TBFLAG_AM32(IN, WHICH) FIELD_EX32(IN.flags2, TBFLAG_AM32, WHICH)
94
95
/**
96
* cpu_mmu_index:
97
diff --git a/target/arm/translate.h b/target/arm/translate.h
98
index XXXXXXX..XXXXXXX 100644
99
--- a/target/arm/translate.h
100
+++ b/target/arm/translate.h
101
@@ -XXX,XX +XXX,XX @@ typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
102
*/
103
static inline CPUARMTBFlags arm_tbflags_from_tb(const TranslationBlock *tb)
104
{
105
- return (CPUARMTBFlags){ tb->flags };
106
+ return (CPUARMTBFlags){ tb->flags, tb->cs_base };
107
}
108
109
/*
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
110
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
111
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
112
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
113
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
114
@@ -XXX,XX +XXX,XX @@ static inline void assert_hflags_rebuild_correctly(CPUARMState *env)
30
env->cp15.cpacr_el1 = value;
115
CPUARMTBFlags c = env->hflags;
116
CPUARMTBFlags r = rebuild_hflags_internal(env);
117
118
- if (unlikely(c.flags != r.flags)) {
119
- fprintf(stderr, "TCG hflags mismatch (current:0x%08x rebuilt:0x%08x)\n",
120
- c.flags, r.flags);
121
+ if (unlikely(c.flags != r.flags || c.flags2 != r.flags2)) {
122
+ fprintf(stderr, "TCG hflags mismatch "
123
+ "(current:(0x%08x,0x" TARGET_FMT_lx ")"
124
+ " rebuilt:(0x%08x,0x" TARGET_FMT_lx ")\n",
125
+ c.flags, c.flags2, r.flags, r.flags2);
126
abort();
127
}
128
#endif
129
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
130
{
131
CPUARMTBFlags flags;
132
133
- *cs_base = 0;
134
assert_hflags_rebuild_correctly(env);
135
flags = env->hflags;
136
137
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
138
}
139
140
*pflags = flags.flags;
141
+ *cs_base = flags.flags2;
31
}
142
}
32
143
33
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
144
#ifdef TARGET_AARCH64
34
+{
35
+ /* Call cpacr_write() so that we reset with the correct RAO bits set
36
+ * for our CPU features.
37
+ */
38
+ cpacr_write(env, ri, 0);
39
+}
40
+
41
static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
42
bool isread)
43
{
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
45
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
46
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
47
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
48
- .resetvalue = 0, .writefn = cpacr_write },
49
+ .resetfn = cpacr_reset, .writefn = cpacr_write },
50
REGINFO_SENTINEL
51
};
52
53
--
2.20.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

Now that these bits have been moved out of tb->flags,
where TBFLAG_ANY was filling from the top, move AM32
to fill from the top, and A32 and M32 to fill from the
bottom. This means fewer changes when adding new bits.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)
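
For illustration (a sketch, not part of the patch): a FIELD()
declaration only pins the bit offset, so relocating a flag is pure
renumbering and the deposit/extract helpers stay symmetric:

    /* THUMB now lives at bit 23; the helpers hide the shift entirely. */
    uint32_t f = 0;
    f = FIELD_DP32(f, TBFLAG_AM32, THUMB, 1);
    assert(FIELD_EX32(f, TBFLAG_AM32, THUMB) == 1);
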
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
21
*
22
* The bits for 32-bit A-profile and M-profile partially overlap:
23
*
24
- * 18 9 0
25
- * +----------------+--------------+
26
- * | TBFLAG_A32 | |
27
- * +-----+----------+ TBFLAG_AM32 |
28
- * | |TBFLAG_M32| |
29
- * +-----+----------+--------------+
30
- * 14 9 0
31
+ * 31 23 11 10 0
32
+ * +-------------+----------+----------------+
33
+ * | | | TBFLAG_A32 |
34
+ * | TBFLAG_AM32 | +-----+----------+
35
+ * | | |TBFLAG_M32|
36
+ * +-------------+----------------+----------+
37
+ * 31 23 5 4 0
38
*
39
* Unless otherwise noted, these bits are cached in env->hflags.
40
*/
41
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 20, 2)
42
/*
43
* Bit usage when in AArch32 state, both A- and M-profile.
44
*/
45
-FIELD(TBFLAG_AM32, CONDEXEC, 0, 8) /* Not cached. */
46
-FIELD(TBFLAG_AM32, THUMB, 8, 1) /* Not cached. */
47
+FIELD(TBFLAG_AM32, CONDEXEC, 24, 8) /* Not cached. */
48
+FIELD(TBFLAG_AM32, THUMB, 23, 1) /* Not cached. */
49
50
/*
51
* Bit usage when in AArch32 state, for A-profile only.
52
*/
53
-FIELD(TBFLAG_A32, VECLEN, 9, 3) /* Not cached. */
54
-FIELD(TBFLAG_A32, VECSTRIDE, 12, 2) /* Not cached. */
55
+FIELD(TBFLAG_A32, VECLEN, 0, 3) /* Not cached. */
56
+FIELD(TBFLAG_A32, VECSTRIDE, 3, 2) /* Not cached. */
57
/*
58
* We store the bottom two bits of the CPAR as TB flags and handle
59
* checks on the other bits at runtime. This shares the same bits as
60
* VECSTRIDE, which is OK as no XScale CPU has VFP.
61
* Not cached, because VECLEN+VECSTRIDE are not cached.
62
*/
63
-FIELD(TBFLAG_A32, XSCALE_CPAR, 12, 2)
64
-FIELD(TBFLAG_A32, VFPEN, 14, 1) /* Partially cached, minus FPEXC. */
65
-FIELD(TBFLAG_A32, SCTLR__B, 15, 1) /* Cannot overlap with SCTLR_B */
66
-FIELD(TBFLAG_A32, HSTR_ACTIVE, 16, 1)
67
+FIELD(TBFLAG_A32, XSCALE_CPAR, 5, 2)
68
+FIELD(TBFLAG_A32, VFPEN, 7, 1) /* Partially cached, minus FPEXC. */
69
+FIELD(TBFLAG_A32, SCTLR__B, 8, 1) /* Cannot overlap with SCTLR_B */
70
+FIELD(TBFLAG_A32, HSTR_ACTIVE, 9, 1)
71
/*
72
* Indicates whether cp register reads and writes by guest code should access
73
* the secure or nonsecure bank of banked registers; note that this is not
74
* the same thing as the current security state of the processor!
75
*/
76
-FIELD(TBFLAG_A32, NS, 17, 1)
77
+FIELD(TBFLAG_A32, NS, 10, 1)
78
79
/*
80
* Bit usage when in AArch32 state, for M-profile only.
81
*/
82
/* Handler (ie not Thread) mode */
83
-FIELD(TBFLAG_M32, HANDLER, 9, 1)
84
+FIELD(TBFLAG_M32, HANDLER, 0, 1)
85
/* Whether we should generate stack-limit checks */
86
-FIELD(TBFLAG_M32, STACKCHECK, 10, 1)
87
+FIELD(TBFLAG_M32, STACKCHECK, 1, 1)
88
/* Set if FPCCR.LSPACT is set */
89
-FIELD(TBFLAG_M32, LSPACT, 11, 1) /* Not cached. */
90
+FIELD(TBFLAG_M32, LSPACT, 2, 1) /* Not cached. */
91
/* Set if we must create a new FP context */
92
-FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 12, 1) /* Not cached. */
93
+FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
94
/* Set if FPCCR.S does not match current security state */
95
-FIELD(TBFLAG_M32, FPCCR_S_WRONG, 13, 1) /* Not cached. */
96
+FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
97
98
/*
99
* Bit usage when in AArch64 state
100
--
101
2.20.1
102
103
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Now that other bits have been moved out of tb->flags,
4
there's no point in filling from the top.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210419202257.161730-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/cpu.h | 14 +++++++-------
12
1 file changed, 7 insertions(+), 7 deletions(-)
13
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
19
*
20
* Unless otherwise noted, these bits are cached in env->hflags.
21
*/
22
-FIELD(TBFLAG_ANY, AARCH64_STATE, 31, 1)
23
-FIELD(TBFLAG_ANY, SS_ACTIVE, 30, 1)
24
-FIELD(TBFLAG_ANY, PSTATE__SS, 29, 1) /* Not cached. */
25
-FIELD(TBFLAG_ANY, BE_DATA, 28, 1)
26
-FIELD(TBFLAG_ANY, MMUIDX, 24, 4)
27
+FIELD(TBFLAG_ANY, AARCH64_STATE, 0, 1)
28
+FIELD(TBFLAG_ANY, SS_ACTIVE, 1, 1)
29
+FIELD(TBFLAG_ANY, PSTATE__SS, 2, 1) /* Not cached. */
30
+FIELD(TBFLAG_ANY, BE_DATA, 3, 1)
31
+FIELD(TBFLAG_ANY, MMUIDX, 4, 4)
32
/* Target EL if we take a floating-point-disabled exception */
33
-FIELD(TBFLAG_ANY, FPEXC_EL, 22, 2)
34
+FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
35
/* For A-profile only, target EL for debug exceptions. */
36
-FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 20, 2)
37
+FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 10, 2)
38
39
/*
40
* Bit usage when in AArch32 state, both A- and M-profile.
41
--
2.20.1

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_translate(); all its
callers now have attrs available.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
---
 include/exec/memory.h |  7 ++++---
 exec.c                | 17 +++++++++--------
 2 files changed, 13 insertions(+), 11 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Use this to signal when memory access alignment is required.
This value comes from the CCR register for M-profile, and
from the SCTLR register for A-profile.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |  2 ++
 target/arm/translate.h     |  2 ++
 target/arm/helper.c        | 19 +++++++++++++++++--
 target/arm/translate-a64.c |  1 +
 target/arm/translate.c     |  7 +++----
 5 files changed, 25 insertions(+), 6 deletions(-)
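
A sketch of how the new ALIGN_MEM flag is consumed (both fragments are
condensed from the diff below, shown together here for clarity):

    /* In arm_tr_init_disas_context(): cache the hflag in DisasContext. */
    dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);

    /* At each load/store: request an alignment check from TCG. */
    if (s->align_mem) {
        opc |= MO_ALIGN;
    }
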
diff --git a/include/exec/memory.h b/include/exec/memory.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
21
--- a/target/arm/cpu.h
17
+++ b/include/exec/memory.h
22
+++ b/target/arm/cpu.h
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
23
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, MMUIDX, 4, 4)
19
*/
24
FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
20
MemoryRegion *flatview_translate(FlatView *fv,
25
/* For A-profile only, target EL for debug exceptions. */
21
hwaddr addr, hwaddr *xlat,
26
FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 10, 2)
22
- hwaddr *len, bool is_write);
27
+/* Memory operations require alignment: SCTLR_ELx.A or CCR.UNALIGN_TRP */
23
+ hwaddr *len, bool is_write,
28
+FIELD(TBFLAG_ANY, ALIGN_MEM, 12, 1)
24
+ MemTxAttrs attrs);
29
25
30
/*
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
31
* Bit usage when in AArch32 state, both A- and M-profile.
27
hwaddr addr, hwaddr *xlat,
32
diff --git a/target/arm/translate.h b/target/arm/translate.h
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
33
index XXXXXXX..XXXXXXX 100644
29
MemTxAttrs attrs)
34
--- a/target/arm/translate.h
35
+++ b/target/arm/translate.h
36
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
37
bool bt;
38
/* True if any CP15 access is trapped by HSTR_EL2 */
39
bool hstr_active;
40
+ /* True if memory operations require alignment */
41
+ bool align_mem;
42
/*
43
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
44
* < 0, set by the current instruction.
45
diff --git a/target/arm/helper.c b/target/arm/helper.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/helper.c
48
+++ b/target/arm/helper.c
49
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
50
ARMMMUIdx mmu_idx)
30
{
51
{
31
return flatview_translate(address_space_to_flatview(as),
52
CPUARMTBFlags flags = {};
32
- addr, xlat, len, is_write);
53
+ uint32_t ccr = env->v7m.ccr[env->v7m.secure];
33
+ addr, xlat, len, is_write, attrs);
54
+
34
}
55
+ /* Without HaveMainExt, CCR.UNALIGN_TRP is RES1. */
35
56
+ if (ccr & R_V7M_CCR_UNALIGN_TRP_MASK) {
36
/* address_space_access_valid: check for validity of accessing an address
57
+ DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
58
+ }
38
rcu_read_lock();
59
39
fv = address_space_to_flatview(as);
60
if (arm_v7m_is_handler_mode(env)) {
40
l = len;
61
DP_TBFLAG_M32(flags, HANDLER, 1);
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
62
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
63
*/
43
if (len == l && memory_access_is_direct(mr, false)) {
64
if (arm_feature(env, ARM_FEATURE_V8) &&
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
65
!((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
45
memcpy(buf, ptr, len);
66
- (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
46
diff --git a/exec.c b/exec.c
67
+ (ccr & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
68
DP_TBFLAG_M32(flags, STACKCHECK, 1);
69
}
70
71
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
72
ARMMMUIdx mmu_idx)
73
{
74
CPUARMTBFlags flags = rebuild_hflags_aprofile(env);
75
+ int el = arm_current_el(env);
76
+
77
+ if (arm_sctlr(env, el) & SCTLR_A) {
78
+ DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
79
+ }
80
81
if (arm_el_is_aa64(env, 1)) {
82
DP_TBFLAG_A32(flags, VFPEN, 1);
83
}
84
85
- if (arm_current_el(env) < 2 && env->cp15.hstr_el2 &&
86
+ if (el < 2 && env->cp15.hstr_el2 &&
87
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
88
DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
89
}
90
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
91
92
sctlr = regime_sctlr(env, stage1);
93
94
+ if (sctlr & SCTLR_A) {
95
+ DP_TBFLAG_ANY(flags, ALIGN_MEM, 1);
96
+ }
97
+
98
if (arm_cpu_data_is_big_endian_a64(el, sctlr)) {
99
DP_TBFLAG_ANY(flags, BE_DATA, 1);
100
}
101
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
102
index XXXXXXX..XXXXXXX 100644
48
--- a/exec.c
103
--- a/target/arm/translate-a64.c
49
+++ b/exec.c
104
+++ b/target/arm/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
105
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
51
106
dc->user = (dc->current_el == 0);
52
/* Called from RCU critical section */
107
#endif
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
108
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
54
- hwaddr *plen, bool is_write)
109
+ dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
55
+ hwaddr *plen, bool is_write,
110
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
56
+ MemTxAttrs attrs)
111
dc->sve_len = (EX_TBFLAG_A64(tb_flags, ZCR_LEN) + 1) * 16;
112
dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
113
diff --git a/target/arm/translate.c b/target/arm/translate.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/translate.c
116
+++ b/target/arm/translate.c
117
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
57
{
118
{
58
MemoryRegion *mr;
119
TCGv addr;
59
MemoryRegionSection section;
120
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
121
- if (arm_dc_feature(s, ARM_FEATURE_M) &&
61
}
122
- !arm_dc_feature(s, ARM_FEATURE_M_MAIN)) {
62
123
+ if (s->align_mem) {
63
l = len;
124
opc |= MO_ALIGN;
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
66
}
125
}
67
126
68
return result;
127
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
128
{
70
MemTxResult result = MEMTX_OK;
129
TCGv addr;
71
130
72
l = len;
131
- if (arm_dc_feature(s, ARM_FEATURE_M) &&
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
132
- !arm_dc_feature(s, ARM_FEATURE_M_MAIN)) {
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
133
+ if (s->align_mem) {
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
134
opc |= MO_ALIGN;
76
addr1, l, mr);
77
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
79
}
80
81
l = len;
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
84
}
135
}
85
136
86
return result;
137
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
138
dc->user = (dc->current_el == 0);
88
MemoryRegion *mr;
139
#endif
89
140
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
90
l = len;
141
+ dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
142
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
143
if (arm_feature(env, ARM_FEATURE_M)) {
93
return flatview_read_continue(fv, addr, attrs, buf, len,
144
dc->vfp_enabled = 1;
94
addr1, l, mr);
95
}
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
97
98
while (len > 0) {
99
l = len;
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
102
if (!memory_access_is_direct(mr, is_write)) {
103
l = memory_access_size(mr, l, addr);
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
2.20.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

Create a finalize_memop function that computes alignment and
endianness and returns the final MemOp for the operation.

Split out gen_aa32_{ld,st}_internal_i32 which bypasses any special
handling of endianness or alignment. Adjust gen_aa32_{ld,st}_i32
so that s->be_data is not added by the callers.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h          |  24 ++++++++
 target/arm/translate.c          | 100 +++++++++++++++++---------------
 target/arm/translate-neon.c.inc |   9 +--
 3 files changed, 79 insertions(+), 54 deletions(-)
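
By way of example (a sketch, not a quote from the diff): a caller now
passes only size and sign, and alignment plus endianness are folded in
exactly once:

    /* 32-bit load; MO_ALIGN and s->be_data are added by finalize_memop(). */
    MemOp opc = finalize_memop(s, MO_UL);
    gen_aa32_ld_internal_i32(s, tmp, addr, get_mem_index(s), opc);
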
diff --git a/target/arm/translate.h b/target/arm/translate.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/translate.h
23
+++ b/target/arm/translate.h
24
@@ -XXX,XX +XXX,XX @@ static inline TCGv_ptr fpstatus_ptr(ARMFPStatusFlavour flavour)
25
return statusptr;
26
}
27
28
+/**
29
+ * finalize_memop:
30
+ * @s: DisasContext
31
+ * @opc: size+sign+align of the memory operation
32
+ *
33
+ * Build the complete MemOp for a memory operation, including alignment
34
+ * and endianness.
35
+ *
36
+ * If (op & MO_AMASK) then the operation already contains the required
37
+ * alignment, e.g. for AccType_ATOMIC. Otherwise, this is an optionally
38
+ * unaligned operation, e.g. for AccType_NORMAL.
39
+ *
40
+ * In the latter case, there are configuration bits that require alignment,
41
+ * and this is applied here. Note that there is no way to indicate that
42
+ * no alignment should ever be enforced; this must be handled manually.
43
+ */
44
+static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
45
+{
46
+ if (s->align_mem && !(opc & MO_AMASK)) {
47
+ opc |= MO_ALIGN;
48
+ }
49
+ return opc | s->be_data;
50
+}
51
+
52
#endif /* TARGET_ARM_TRANSLATE_H */
53
diff --git a/target/arm/translate.c b/target/arm/translate.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/translate.c
56
+++ b/target/arm/translate.c
57
@@ -XXX,XX +XXX,XX @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
58
#define IS_USER_ONLY 0
59
#endif
60
61
-/* Abstractions of "generate code to do a guest load/store for
62
+/*
63
+ * Abstractions of "generate code to do a guest load/store for
64
* AArch32", where a vaddr is always 32 bits (and is zero
65
* extended if we're a 64 bit core) and data is also
66
* 32 bits unless specifically doing a 64 bit access.
67
@@ -XXX,XX +XXX,XX @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
68
* that the address argument is TCGv_i32 rather than TCGv.
69
*/
70
71
-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
72
+static TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
73
{
74
TCGv addr = tcg_temp_new();
75
tcg_gen_extu_i32_tl(addr, a32);
76
@@ -XXX,XX +XXX,XX @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
77
return addr;
78
}
79
80
+/*
81
+ * Internal routines are used for NEON cases where the endianness
82
+ * and/or alignment has already been taken into account and manipulated.
83
+ */
84
+static void gen_aa32_ld_internal_i32(DisasContext *s, TCGv_i32 val,
85
+ TCGv_i32 a32, int index, MemOp opc)
86
+{
87
+ TCGv addr = gen_aa32_addr(s, a32, opc);
88
+ tcg_gen_qemu_ld_i32(val, addr, index, opc);
89
+ tcg_temp_free(addr);
90
+}
91
+
92
+static void gen_aa32_st_internal_i32(DisasContext *s, TCGv_i32 val,
93
+ TCGv_i32 a32, int index, MemOp opc)
94
+{
95
+ TCGv addr = gen_aa32_addr(s, a32, opc);
96
+ tcg_gen_qemu_st_i32(val, addr, index, opc);
97
+ tcg_temp_free(addr);
98
+}
99
+
100
static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
101
int index, MemOp opc)
102
{
103
- TCGv addr;
104
-
105
- if (s->align_mem) {
106
- opc |= MO_ALIGN;
107
- }
108
-
109
- addr = gen_aa32_addr(s, a32, opc);
110
- tcg_gen_qemu_ld_i32(val, addr, index, opc);
111
- tcg_temp_free(addr);
112
+ gen_aa32_ld_internal_i32(s, val, a32, index, finalize_memop(s, opc));
113
}
114
115
static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
116
int index, MemOp opc)
117
{
118
- TCGv addr;
119
+ gen_aa32_st_internal_i32(s, val, a32, index, finalize_memop(s, opc));
120
+}
121
122
- if (s->align_mem) {
123
- opc |= MO_ALIGN;
124
+#define DO_GEN_LD(SUFF, OPC) \
125
+ static inline void gen_aa32_ld##SUFF(DisasContext *s, TCGv_i32 val, \
126
+ TCGv_i32 a32, int index) \
127
+ { \
128
+ gen_aa32_ld_i32(s, val, a32, index, OPC); \
129
}
130
131
- addr = gen_aa32_addr(s, a32, opc);
132
- tcg_gen_qemu_st_i32(val, addr, index, opc);
133
- tcg_temp_free(addr);
134
-}
135
-
136
-#define DO_GEN_LD(SUFF, OPC) \
137
-static inline void gen_aa32_ld##SUFF(DisasContext *s, TCGv_i32 val, \
138
- TCGv_i32 a32, int index) \
139
-{ \
140
- gen_aa32_ld_i32(s, val, a32, index, OPC | s->be_data); \
141
-}
142
-
143
-#define DO_GEN_ST(SUFF, OPC) \
144
-static inline void gen_aa32_st##SUFF(DisasContext *s, TCGv_i32 val, \
145
- TCGv_i32 a32, int index) \
146
-{ \
147
- gen_aa32_st_i32(s, val, a32, index, OPC | s->be_data); \
148
-}
149
+#define DO_GEN_ST(SUFF, OPC) \
150
+ static inline void gen_aa32_st##SUFF(DisasContext *s, TCGv_i32 val, \
151
+ TCGv_i32 a32, int index) \
152
+ { \
153
+ gen_aa32_st_i32(s, val, a32, index, OPC); \
154
+ }
155
156
static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
157
{
158
@@ -XXX,XX +XXX,XX @@ static bool op_load_rr(DisasContext *s, arg_ldst_rr *a,
159
addr = op_addr_rr_pre(s, a);
160
161
tmp = tcg_temp_new_i32();
162
- gen_aa32_ld_i32(s, tmp, addr, mem_idx, mop | s->be_data);
163
+ gen_aa32_ld_i32(s, tmp, addr, mem_idx, mop);
164
disas_set_da_iss(s, mop, issinfo);
165
166
/*
167
@@ -XXX,XX +XXX,XX @@ static bool op_store_rr(DisasContext *s, arg_ldst_rr *a,
168
addr = op_addr_rr_pre(s, a);
169
170
tmp = load_reg(s, a->rt);
171
- gen_aa32_st_i32(s, tmp, addr, mem_idx, mop | s->be_data);
172
+ gen_aa32_st_i32(s, tmp, addr, mem_idx, mop);
173
disas_set_da_iss(s, mop, issinfo);
174
tcg_temp_free_i32(tmp);
175
176
@@ -XXX,XX +XXX,XX @@ static bool trans_LDRD_rr(DisasContext *s, arg_ldst_rr *a)
177
addr = op_addr_rr_pre(s, a);
178
179
tmp = tcg_temp_new_i32();
180
- gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
181
+ gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
182
store_reg(s, a->rt, tmp);
183
184
tcg_gen_addi_i32(addr, addr, 4);
185
186
tmp = tcg_temp_new_i32();
187
- gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
188
+ gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
189
store_reg(s, a->rt + 1, tmp);
190
191
/* LDRD w/ base writeback is undefined if the registers overlap. */
192
@@ -XXX,XX +XXX,XX @@ static bool trans_STRD_rr(DisasContext *s, arg_ldst_rr *a)
193
addr = op_addr_rr_pre(s, a);
194
195
tmp = load_reg(s, a->rt);
196
- gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
197
+ gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
198
tcg_temp_free_i32(tmp);
199
200
tcg_gen_addi_i32(addr, addr, 4);
201
202
tmp = load_reg(s, a->rt + 1);
203
- gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
204
+ gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
205
tcg_temp_free_i32(tmp);
206
207
op_addr_rr_post(s, a, addr, -4);
208
@@ -XXX,XX +XXX,XX @@ static bool op_load_ri(DisasContext *s, arg_ldst_ri *a,
209
addr = op_addr_ri_pre(s, a);
210
211
tmp = tcg_temp_new_i32();
212
- gen_aa32_ld_i32(s, tmp, addr, mem_idx, mop | s->be_data);
213
+ gen_aa32_ld_i32(s, tmp, addr, mem_idx, mop);
214
disas_set_da_iss(s, mop, issinfo);
215
216
/*
217
@@ -XXX,XX +XXX,XX @@ static bool op_store_ri(DisasContext *s, arg_ldst_ri *a,
218
addr = op_addr_ri_pre(s, a);
219
220
tmp = load_reg(s, a->rt);
221
- gen_aa32_st_i32(s, tmp, addr, mem_idx, mop | s->be_data);
222
+ gen_aa32_st_i32(s, tmp, addr, mem_idx, mop);
223
disas_set_da_iss(s, mop, issinfo);
224
tcg_temp_free_i32(tmp);
225
226
@@ -XXX,XX +XXX,XX @@ static bool op_ldrd_ri(DisasContext *s, arg_ldst_ri *a, int rt2)
227
addr = op_addr_ri_pre(s, a);
228
229
tmp = tcg_temp_new_i32();
230
- gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
231
+ gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
232
store_reg(s, a->rt, tmp);
233
234
tcg_gen_addi_i32(addr, addr, 4);
235
236
tmp = tcg_temp_new_i32();
237
- gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
238
+ gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
239
store_reg(s, rt2, tmp);
240
241
/* LDRD w/ base writeback is undefined if the registers overlap. */
242
@@ -XXX,XX +XXX,XX @@ static bool op_strd_ri(DisasContext *s, arg_ldst_ri *a, int rt2)
243
addr = op_addr_ri_pre(s, a);
244
245
tmp = load_reg(s, a->rt);
246
- gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
247
+ gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
248
tcg_temp_free_i32(tmp);
249
250
tcg_gen_addi_i32(addr, addr, 4);
251
252
tmp = load_reg(s, rt2);
253
- gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | s->be_data);
254
+ gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
255
tcg_temp_free_i32(tmp);
256
257
op_addr_ri_post(s, a, addr, -4);
258
@@ -XXX,XX +XXX,XX @@ static bool op_stl(DisasContext *s, arg_STL *a, MemOp mop)
259
addr = load_reg(s, a->rn);
260
tmp = load_reg(s, a->rt);
261
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
262
- gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), mop | s->be_data);
263
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), mop);
264
disas_set_da_iss(s, mop, a->rt | ISSIsAcqRel | ISSIsWrite);
265
266
tcg_temp_free_i32(tmp);
267
@@ -XXX,XX +XXX,XX @@ static bool op_lda(DisasContext *s, arg_LDA *a, MemOp mop)
268
269
addr = load_reg(s, a->rn);
270
tmp = tcg_temp_new_i32();
271
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), mop | s->be_data);
272
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), mop);
273
disas_set_da_iss(s, mop, a->rt | ISSIsAcqRel);
274
tcg_temp_free_i32(addr);
275
276
@@ -XXX,XX +XXX,XX @@ static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
277
addr = load_reg(s, a->rn);
278
tcg_gen_add_i32(addr, addr, tmp);
279
280
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
281
- half ? MO_UW | s->be_data : MO_UB);
282
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), half ? MO_UW : MO_UB);
283
tcg_temp_free_i32(addr);
284
285
tcg_gen_add_i32(tmp, tmp, tmp);
286
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
287
index XXXXXXX..XXXXXXX 100644
288
--- a/target/arm/translate-neon.c.inc
289
+++ b/target/arm/translate-neon.c.inc
290
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
291
addr = tcg_temp_new_i32();
292
load_reg_var(s, addr, a->rn);
293
for (reg = 0; reg < nregs; reg++) {
294
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
295
- s->be_data | size);
296
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), size);
297
if ((vd & 1) && vec_size == 16) {
298
/*
299
* We cannot write 16 bytes at once because the
300
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
301
*/
302
for (reg = 0; reg < nregs; reg++) {
303
if (a->l) {
304
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
305
- s->be_data | a->size);
306
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), a->size);
307
neon_store_element(vd, a->reg_idx, a->size, tmp);
308
} else { /* Store */
309
neon_load_element(tmp, vd, a->reg_idx, a->size);
310
- gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
311
- s->be_data | a->size);
312
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), a->size);
313
}
314
vd += a->stride;
315
tcg_gen_addi_i32(addr, addr, 1 << a->size);
316
--
2.20.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

This is the only caller. Adjust some commentary to talk
about SCTLR_B instead of the vanishing function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 37 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 21 deletions(-)
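
For reference, the resulting shape of the load path (condensed from
the diff below):

    tcg_gen_qemu_ld_i64(val, addr, index, opc);
    /* BE32 system mode (SCTLR.B set): swap the two 32-bit halves. */
    if (!IS_USER_ONLY && s->sctlr_b) {
        tcg_gen_rotri_i64(val, val, 32);
    }
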
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
19
gen_aa32_st_i32(s, val, a32, index, OPC); \
20
}
21
22
-static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
23
-{
24
- /* Not needed for user-mode BE32, where we use MO_BE instead. */
25
- if (!IS_USER_ONLY && s->sctlr_b) {
26
- tcg_gen_rotri_i64(val, val, 32);
27
- }
28
-}
29
-
30
static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
31
int index, MemOp opc)
32
{
33
TCGv addr = gen_aa32_addr(s, a32, opc);
34
tcg_gen_qemu_ld_i64(val, addr, index, opc);
35
- gen_aa32_frob64(s, val);
36
+
37
+ /* Not needed for user-mode BE32, where we use MO_BE instead. */
38
+ if (!IS_USER_ONLY && s->sctlr_b) {
39
+ tcg_gen_rotri_i64(val, val, 32);
40
+ }
41
+
42
tcg_temp_free(addr);
43
}
44
45
@@ -XXX,XX +XXX,XX @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
46
TCGv_i32 tmp2 = tcg_temp_new_i32();
47
TCGv_i64 t64 = tcg_temp_new_i64();
48
49
- /* For AArch32, architecturally the 32-bit word at the lowest
50
+ /*
51
+ * For AArch32, architecturally the 32-bit word at the lowest
52
* address is always Rt and the one at addr+4 is Rt2, even if
53
* the CPU is big-endian. That means we don't want to do a
54
- * gen_aa32_ld_i64(), which invokes gen_aa32_frob64() as if
55
- * for an architecturally 64-bit access, but instead do a
56
- * 64-bit access using MO_BE if appropriate and then split
57
- * the two halves.
58
- * This only makes a difference for BE32 user-mode, where
59
- * frob64() must not flip the two halves of the 64-bit data
60
- * but this code must treat BE32 user-mode like BE32 system.
61
+ * gen_aa32_ld_i64(), which checks SCTLR_B as if for an
62
+ * architecturally 64-bit access, but instead do a 64-bit access
63
+ * using MO_BE if appropriate and then split the two halves.
64
*/
65
TCGv taddr = gen_aa32_addr(s, addr, opc);
66
67
@@ -XXX,XX +XXX,XX @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
68
TCGv_i64 n64 = tcg_temp_new_i64();
69
70
t2 = load_reg(s, rt2);
71
- /* For AArch32, architecturally the 32-bit word at the lowest
72
+
73
+ /*
74
+ * For AArch32, architecturally the 32-bit word at the lowest
75
* address is always Rt and the one at addr+4 is Rt2, even if
76
* the CPU is big-endian. Since we're going to treat this as a
77
* single 64-bit BE store, we need to put the two halves in the
78
* opposite order for BE to LE, so that they end up in the right
79
- * places.
80
- * We don't want gen_aa32_frob64() because that does the wrong
81
- * thing for BE32 usermode.
82
+ * places. We don't want gen_aa32_st_i64, because that checks
83
+ * SCTLR_B as if for an architectural 64-bit access.
84
*/
85
if (s->be_data == MO_BE) {
86
tcg_gen_concat_i32_i64(n64, t2, t1);
87
--
2.20.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

Just because operating on a TCGv_i64 temporary does not
mean that we're performing a 64-bit operation. Restrict
the frobbing to actual 64-bit operations.

This bug is not currently visible because all current
users of these two functions always pass MO_64.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
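
Illustration (hypothetical helper, not part of the patch): MO_SIZE
masks the size bits of a MemOp, so the new condition fires only for
genuine 8-byte accesses:

    static bool is_64bit_access(MemOp opc)
    {
        return (opc & MO_SIZE) == MO_64;
    }
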
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.c
21
+++ b/target/arm/translate.c
22
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
23
tcg_gen_qemu_ld_i64(val, addr, index, opc);
24
25
/* Not needed for user-mode BE32, where we use MO_BE instead. */
26
- if (!IS_USER_ONLY && s->sctlr_b) {
27
+ if (!IS_USER_ONLY && s->sctlr_b && (opc & MO_SIZE) == MO_64) {
28
tcg_gen_rotri_i64(val, val, 32);
29
}
30
31
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
32
TCGv addr = gen_aa32_addr(s, a32, opc);
33
34
/* Not needed for user-mode BE32, where we use MO_BE instead. */
35
- if (!IS_USER_ONLY && s->sctlr_b) {
36
+ if (!IS_USER_ONLY && s->sctlr_b && (opc & MO_SIZE) == MO_64) {
37
TCGv_i64 tmp = tcg_temp_new_i64();
38
tcg_gen_rotri_i64(tmp, val, 32);
39
tcg_gen_qemu_st_i64(tmp, addr, index, opc);
40
--
41
2.20.1
42
43
diff view generated by jsdifflib
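The guarded rotate is easier to see with a host-side sketch (illustrative
only, not QEMU code): rotating a 64-bit value by 32 bits swaps its two
32-bit words, which is exactly the BE32 fixup that must apply to true
64-bit accesses and to nothing smaller.

    #include <stdint.h>

    /* What tcg_gen_rotri_i64(val, val, 32) arranges for the generated
     * code: swap the two 32-bit halves of a 64-bit value. */
    static uint64_t swap_words(uint64_t v)
    {
        return (v >> 32) | (v << 32);
    }
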
From: Richard Henderson <richard.henderson@linaro.org>

Adjust the interface to match what has been done to the
TCGv_i32 load/store functions.

This is less obvious, because at present the only user of
these functions, trans_VLDST_multiple, also wants to manipulate
the endianness to speed up loading multiple bytes.  Thus we
retain an "internal" interface which is identical to the
current gen_aa32_{ld,st}_i64 interface.

The "new" interface will gain users as we remove the legacy
interfaces, gen_aa32_ld64 and gen_aa32_st64.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c          | 78 +++++++++++++++++++--------------
 target/arm/translate-neon.c.inc |  6 ++-
 2 files changed, 49 insertions(+), 35 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_st_internal_i32(DisasContext *s, TCGv_i32 val,
     tcg_temp_free(addr);
 }

+static void gen_aa32_ld_internal_i64(DisasContext *s, TCGv_i64 val,
+                                     TCGv_i32 a32, int index, MemOp opc)
+{
+    TCGv addr = gen_aa32_addr(s, a32, opc);
+
+    tcg_gen_qemu_ld_i64(val, addr, index, opc);
+
+    /* Not needed for user-mode BE32, where we use MO_BE instead. */
+    if (!IS_USER_ONLY && s->sctlr_b && (opc & MO_SIZE) == MO_64) {
+        tcg_gen_rotri_i64(val, val, 32);
+    }
+    tcg_temp_free(addr);
+}
+
+static void gen_aa32_st_internal_i64(DisasContext *s, TCGv_i64 val,
+                                     TCGv_i32 a32, int index, MemOp opc)
+{
+    TCGv addr = gen_aa32_addr(s, a32, opc);
+
+    /* Not needed for user-mode BE32, where we use MO_BE instead. */
+    if (!IS_USER_ONLY && s->sctlr_b && (opc & MO_SIZE) == MO_64) {
+        TCGv_i64 tmp = tcg_temp_new_i64();
+        tcg_gen_rotri_i64(tmp, val, 32);
+        tcg_gen_qemu_st_i64(tmp, addr, index, opc);
+        tcg_temp_free_i64(tmp);
+    } else {
+        tcg_gen_qemu_st_i64(val, addr, index, opc);
+    }
+    tcg_temp_free(addr);
+}
+
 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
                             int index, MemOp opc)
 {
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
     gen_aa32_st_internal_i32(s, val, a32, index, finalize_memop(s, opc));
 }

+static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
+                            int index, MemOp opc)
+{
+    gen_aa32_ld_internal_i64(s, val, a32, index, finalize_memop(s, opc));
+}
+
+static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
+                            int index, MemOp opc)
+{
+    gen_aa32_st_internal_i64(s, val, a32, index, finalize_memop(s, opc));
+}
+
 #define DO_GEN_LD(SUFF, OPC) \
 static inline void gen_aa32_ld##SUFF(DisasContext *s, TCGv_i32 val, \
                                      TCGv_i32 a32, int index) \
@@ -XXX,XX +XXX,XX @@ static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
     gen_aa32_st_i32(s, val, a32, index, OPC); \
 }

-static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, MemOp opc)
-{
-    TCGv addr = gen_aa32_addr(s, a32, opc);
-    tcg_gen_qemu_ld_i64(val, addr, index, opc);
-
-    /* Not needed for user-mode BE32, where we use MO_BE instead. */
-    if (!IS_USER_ONLY && s->sctlr_b && (opc & MO_SIZE) == MO_64) {
-        tcg_gen_rotri_i64(val, val, 32);
-    }
-
-    tcg_temp_free(addr);
-}
-
 static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
                                  TCGv_i32 a32, int index)
 {
-    gen_aa32_ld_i64(s, val, a32, index, MO_Q | s->be_data);
-}
-
-static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, MemOp opc)
-{
-    TCGv addr = gen_aa32_addr(s, a32, opc);
-
-    /* Not needed for user-mode BE32, where we use MO_BE instead. */
-    if (!IS_USER_ONLY && s->sctlr_b && (opc & MO_SIZE) == MO_64) {
-        TCGv_i64 tmp = tcg_temp_new_i64();
-        tcg_gen_rotri_i64(tmp, val, 32);
-        tcg_gen_qemu_st_i64(tmp, addr, index, opc);
-        tcg_temp_free_i64(tmp);
-    } else {
-        tcg_gen_qemu_st_i64(val, addr, index, opc);
-    }
-    tcg_temp_free(addr);
+    gen_aa32_ld_i64(s, val, a32, index, MO_Q);
 }

 static inline void gen_aa32_st64(DisasContext *s, TCGv_i64 val,
                                  TCGv_i32 a32, int index)
 {
-    gen_aa32_st_i64(s, val, a32, index, MO_Q | s->be_data);
+    gen_aa32_st_i64(s, val, a32, index, MO_Q);
 }

 DO_GEN_LD(8u, MO_UB)
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
             int tt = a->vd + reg + spacing * xs;

             if (a->l) {
-                gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
+                gen_aa32_ld_internal_i64(s, tmp64, addr, mmu_idx,
+                                         endian | size);
                 neon_store_element64(tt, n, size, tmp64);
             } else {
                 neon_load_element64(tmp64, tt, n, size);
-                gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
+                gen_aa32_st_internal_i64(s, tmp64, addr, mmu_idx,
+                                         endian | size);
             }
             tcg_gen_add_i32(addr, addr, tmp);
         }
--
2.20.1

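A before/after sketch of a call site (the helper names are those used in
the patch itself): the public interface now folds in s->be_data via
finalize_memop(), so callers no longer merge endianness by hand.

    /* Before: endianness merged at every call site. */
    gen_aa32_ld_i64(s, val, a32, index, MO_Q | s->be_data);

    /* After: finalize_memop() inside gen_aa32_ld_i64() supplies it. */
    gen_aa32_ld_i64(s, val, a32, index, MO_Q);
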
From: Richard Henderson <richard.henderson@linaro.org>

Buglink: https://bugs.launchpad.net/qemu/+bug/1905356
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LDRD_rr(DisasContext *s, arg_ldst_rr *a)
     addr = op_addr_rr_pre(s, a);

     tmp = tcg_temp_new_i32();
-    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     store_reg(s, a->rt, tmp);

     tcg_gen_addi_i32(addr, addr, 4);

     tmp = tcg_temp_new_i32();
-    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     store_reg(s, a->rt + 1, tmp);

     /* LDRD w/ base writeback is undefined if the registers overlap. */
@@ -XXX,XX +XXX,XX @@ static bool trans_STRD_rr(DisasContext *s, arg_ldst_rr *a)
     addr = op_addr_rr_pre(s, a);

     tmp = load_reg(s, a->rt);
-    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     tcg_temp_free_i32(tmp);

     tcg_gen_addi_i32(addr, addr, 4);

     tmp = load_reg(s, a->rt + 1);
-    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     tcg_temp_free_i32(tmp);

     op_addr_rr_post(s, a, addr, -4);
@@ -XXX,XX +XXX,XX @@ static bool op_ldrd_ri(DisasContext *s, arg_ldst_ri *a, int rt2)
     addr = op_addr_ri_pre(s, a);

     tmp = tcg_temp_new_i32();
-    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     store_reg(s, a->rt, tmp);

     tcg_gen_addi_i32(addr, addr, 4);

     tmp = tcg_temp_new_i32();
-    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     store_reg(s, rt2, tmp);

     /* LDRD w/ base writeback is undefined if the registers overlap. */
@@ -XXX,XX +XXX,XX @@ static bool op_strd_ri(DisasContext *s, arg_ldst_ri *a, int rt2)
     addr = op_addr_ri_pre(s, a);

     tmp = load_reg(s, a->rt);
-    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     tcg_temp_free_i32(tmp);

     tcg_gen_addi_i32(addr, addr, 4);

     tmp = load_reg(s, rt2);
-    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL);
+    gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
     tcg_temp_free_i32(tmp);

     op_addr_ri_post(s, a, addr, -4);
--
2.20.1

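The guest-visible effect of MO_ALIGN here, sketched (this models the
architectural rule rather than QEMU's internal fast path): LDRD/STRD are
performed as two word accesses, and each must now be word aligned or take
an alignment fault instead of quietly succeeding.

    #include <stdbool.h>
    #include <stdint.h>

    /* Both halves of an LDRD/STRD are 4-byte accesses at addr, addr+4,
     * so word alignment of the base covers the whole transfer. */
    static bool ldrd_strd_aligned(uint32_t addr)
    {
        return (addr & 3) == 0;
    }
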
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_stl(DisasContext *s, arg_STL *a, MemOp mop)
     addr = load_reg(s, a->rn);
     tmp = load_reg(s, a->rt);
     tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
-    gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), mop);
+    gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), mop | MO_ALIGN);
     disas_set_da_iss(s, mop, a->rt | ISSIsAcqRel | ISSIsWrite);

     tcg_temp_free_i32(tmp);
@@ -XXX,XX +XXX,XX @@ static bool op_lda(DisasContext *s, arg_LDA *a, MemOp mop)

     addr = load_reg(s, a->rn);
     tmp = tcg_temp_new_i32();
-    gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), mop);
+    gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), mop | MO_ALIGN);
     disas_set_da_iss(s, mop, a->rt | ISSIsAcqRel);
     tcg_temp_free_i32(addr);

--
2.20.1

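Here mop carries the access size of the particular acquire/release
encoding, so "mop | MO_ALIGN" requests natural alignment: no check for the
byte forms, 2-byte for the halfword forms, 4-byte for LDA/STL. A sketch of
the rule MO_ALIGN stands for (illustrative only):

    #include <stdbool.h>
    #include <stdint.h>

    /* Natural alignment: the address is a multiple of the access size. */
    static bool natural_aligned(uint32_t addr, unsigned size_bytes)
    {
        return (addr & (size_bytes - 1)) == 0;
    }
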
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool op_stm(DisasContext *s, arg_ldst_block *a, int min_n)
         } else {
             tmp = load_reg(s, i);
         }
-        gen_aa32_st32(s, tmp, addr, mem_idx);
+        gen_aa32_st_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
         tcg_temp_free_i32(tmp);

         /* No need to add after the last transfer. */
@@ -XXX,XX +XXX,XX @@ static bool do_ldm(DisasContext *s, arg_ldst_block *a, int min_n)
         }

         tmp = tcg_temp_new_i32();
-        gen_aa32_ld32u(s, tmp, addr, mem_idx);
+        gen_aa32_ld_i32(s, tmp, addr, mem_idx, MO_UL | MO_ALIGN);
         if (user) {
             tmp2 = tcg_const_i32(i);
             gen_helper_set_user_reg(cpu_env, tmp2, tmp);
--
2.20.1

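Because LDM/STM only add 4 to the address between transfers, every access
in the sequence is word aligned exactly when the first one is; tagging each
transfer with MO_ALIGN therefore behaves like a single up-front check. A
small sketch of that invariant (illustrative, not QEMU code):

    #include <assert.h>
    #include <stdint.h>

    static void ldm_alignment_invariant(uint32_t base, unsigned nregs)
    {
        for (unsigned i = 0; i < nregs; i++) {
            /* adding multiples of 4 preserves the low two bits */
            assert(((base + 4 * i) & 3) == (base & 3));
        }
    }
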
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_RFE(DisasContext *s, arg_RFE *a)

     /* Load PC into tmp and CPSR into tmp2.  */
     t1 = tcg_temp_new_i32();
-    gen_aa32_ld32u(s, t1, addr, get_mem_index(s));
+    gen_aa32_ld_i32(s, t1, addr, get_mem_index(s), MO_UL | MO_ALIGN);
     tcg_gen_addi_i32(addr, addr, 4);
     t2 = tcg_temp_new_i32();
-    gen_aa32_ld32u(s, t2, addr, get_mem_index(s));
+    gen_aa32_ld_i32(s, t2, addr, get_mem_index(s), MO_UL | MO_ALIGN);

     if (a->w) {
         /* Base writeback. */
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
     }
     tcg_gen_addi_i32(addr, addr, offset);
     tmp = load_reg(s, 14);
-    gen_aa32_st32(s, tmp, addr, get_mem_index(s));
+    gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);
    tcg_temp_free_i32(tmp);
     tmp = load_cpu_field(spsr);
     tcg_gen_addi_i32(addr, addr, 4);
-    gen_aa32_st32(s, tmp, addr, get_mem_index(s));
+    gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);
     tcg_temp_free_i32(tmp);
     if (writeback) {
         switch (amode) {
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-vfp.c.inc | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a)
     for (i = 0; i < n; i++) {
         if (a->l) {
             /* load */
-            gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
+            gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);
             vfp_store_reg32(tmp, a->vd + i);
         } else {
             /* store */
             vfp_load_reg32(tmp, a->vd + i);
-            gen_aa32_st32(s, tmp, addr, get_mem_index(s));
+            gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);
         }
         tcg_gen_addi_i32(addr, addr, offset);
     }
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_dp(DisasContext *s, arg_VLDM_VSTM_dp *a)
     for (i = 0; i < n; i++) {
         if (a->l) {
             /* load */
-            gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
+            gen_aa32_ld_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
             vfp_store_reg64(tmp, a->vd + i);
         } else {
             /* store */
             vfp_load_reg64(tmp, a->vd + i);
-            gen_aa32_st64(s, tmp, addr, get_mem_index(s));
+            gen_aa32_st_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
         }
         tcg_gen_addi_i32(addr, addr, offset);
     }
--
2.20.1

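Note that the 64-bit cases ask for MO_ALIGN_4 rather than natural 8-byte
alignment; this reflects the weaker, word-only alignment the patch chooses
for the doubleword VFP transfers. A sketch of the two flavours
(illustrative, not QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* MO_ALIGN_4 on a 64-bit access: a fixed 4-byte alignment check,
     * weaker than the natural 8-byte check plain MO_ALIGN would imply. */
    static bool vfp_dp_aligned(uint32_t addr)
    {
        return (addr & 3) == 0;
    }
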
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-vfp.c.inc | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     addr = add_reg_for_lit(s, a->rn, offset);
     tmp = tcg_temp_new_i32();
     if (a->l) {
-        gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
+        gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), MO_UW | MO_ALIGN);
         vfp_store_reg32(tmp, a->vd);
     } else {
         vfp_load_reg32(tmp, a->vd);
-        gen_aa32_st16(s, tmp, addr, get_mem_index(s));
+        gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), MO_UW | MO_ALIGN);
     }
     tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(addr);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a)
     addr = add_reg_for_lit(s, a->rn, offset);
     tmp = tcg_temp_new_i32();
     if (a->l) {
-        gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
+        gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);
         vfp_store_reg32(tmp, a->vd);
     } else {
         vfp_load_reg32(tmp, a->vd);
-        gen_aa32_st32(s, tmp, addr, get_mem_index(s));
+        gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), MO_UL | MO_ALIGN);
     }
     tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(addr);
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_dp(DisasContext *s, arg_VLDR_VSTR_dp *a)
     addr = add_reg_for_lit(s, a->rn, offset);
     tmp = tcg_temp_new_i64();
     if (a->l) {
-        gen_aa32_ld64(s, tmp, addr, get_mem_index(s));
+        gen_aa32_ld_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
         vfp_store_reg64(tmp, a->vd);
     } else {
         vfp_load_reg64(tmp, a->vd);
-        gen_aa32_st64(s, tmp, addr, get_mem_index(s));
+        gen_aa32_st_i64(s, tmp, addr, get_mem_index(s), MO_Q | MO_ALIGN_4);
     }
     tcg_temp_free_i64(tmp);
     tcg_temp_free_i32(addr);
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h          |  1 +
 target/arm/translate.c          | 15 +++++++++++++
 target/arm/translate-neon.c.inc | 37 +++++++++++++++++++++++++--------
 3 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ void arm_test_cc(DisasCompare *cmp, int cc);
 void arm_free_cc(DisasCompare *cmp);
 void arm_jump_cc(DisasCompare *cmp, TCGLabel *label);
 void arm_gen_test_cc(int cc, TCGLabel *label);
+MemOp pow2_align(unsigned i);

 /* Return state of Alternate Half-precision flag, caller frees result */
 static inline TCGv_i32 get_ahp_flag(void)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
 #define IS_USER_ONLY 0
 #endif

+MemOp pow2_align(unsigned i)
+{
+    static const MemOp mop_align[] = {
+        0, MO_ALIGN_2, MO_ALIGN_4, MO_ALIGN_8, MO_ALIGN_16,
+        /*
+         * FIXME: TARGET_PAGE_BITS_MIN affects TLB_FLAGS_MASK such
+         * that 256-bit alignment (MO_ALIGN_32) cannot be supported:
+         * see get_alignment_bits().  Enforce only 128-bit alignment for now.
+         */
+        MO_ALIGN_16
+    };
+    g_assert(i < ARRAY_SIZE(mop_align));
+    return mop_align[i];
+}
+
 /*
  * Abstractions of "generate code to do a guest load/store for
  * AArch32", where a vaddr is always 32 bits (and is zero
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
     int size = a->size;
     int nregs = a->n + 1;
     TCGv_i32 addr, tmp;
+    MemOp mop, align;

     if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
         return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
         return false;
     }

+    align = 0;
     if (size == 3) {
         if (nregs != 4 || a->a == 0) {
             return false;
         }
         /* For VLD4 size == 3 a == 1 means 32 bits at 16 byte alignment */
-        size = 2;
-    }
-    if (nregs == 1 && a->a == 1 && size == 0) {
-        return false;
-    }
-    if (nregs == 3 && a->a == 1) {
-        return false;
+        size = MO_32;
+        align = MO_ALIGN_16;
+    } else if (a->a) {
+        switch (nregs) {
+        case 1:
+            if (size == 0) {
+                return false;
+            }
+            align = MO_ALIGN;
+            break;
+        case 2:
+            align = pow2_align(size + 1);
+            break;
+        case 3:
+            return false;
+        case 4:
+            align = pow2_align(size + 2);
+            break;
+        default:
+            g_assert_not_reached();
+        }
     }

     if (!vfp_access_check(s)) {
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
      */
     stride = a->t ? 2 : 1;
     vec_size = nregs == 1 ? stride * 8 : 8;
-
+    mop = size | align;
     tmp = tcg_temp_new_i32();
     addr = tcg_temp_new_i32();
     load_reg_var(s, addr, a->rn);
     for (reg = 0; reg < nregs; reg++) {
-        gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), size);
+        gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), mop);
         if ((vd & 1) && vec_size == 16) {
             /*
              * We cannot write 16 bytes at once because the
@@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a)
         }
         tcg_gen_addi_i32(addr, addr, 1 << size);
         vd += stride;
+
+        /* Subsequent memory operations inherit alignment */
+        mop &= ~MO_AMASK;
     }
     tcg_temp_free_i32(tmp);
     tcg_temp_free_i32(addr);
--
2.20.1

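Worked through, pow2_align() maps its argument to a requirement of 2^i
bytes: 0 -> no check, 1 -> MO_ALIGN_2, 2 -> MO_ALIGN_4, 3 -> MO_ALIGN_8,
4 -> MO_ALIGN_16, and 5 is clamped to MO_ALIGN_16 per the FIXME above. So
for the VLD2 all-lanes case in this patch:

    /* VLD2 with size == MO_16 (1): pow2_align(1 + 1) == MO_ALIGN_4,
     * i.e. the 4-byte alignment the instruction encoding requests. */
    MemOp align = pow2_align(MO_16 + 1);
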
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-neon.c.inc | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
 {
     /* Neon load/store multiple structures */
     int nregs, interleave, spacing, reg, n;
-    MemOp endian = s->be_data;
+    MemOp mop, align, endian;
     int mmu_idx = get_mem_index(s);
     int size = a->size;
     TCGv_i64 tmp64;
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
     }

     /* For our purposes, bytes are always little-endian. */
+    endian = s->be_data;
     if (size == 0) {
         endian = MO_LE;
     }
+
+    /* Enforce alignment requested by the instruction */
+    if (a->align) {
+        align = pow2_align(a->align + 2); /* 4 ** a->align */
+    } else {
+        align = s->align_mem ? MO_ALIGN : 0;
+    }
+
     /*
      * Consecutive little-endian elements from a single register
      * can be promoted to a larger little-endian operation.
      */
     if (interleave == 1 && endian == MO_LE) {
+        /* Retain any natural alignment. */
+        if (align == MO_ALIGN) {
+            align = pow2_align(size);
+        }
         size = 3;
     }
+
     tmp64 = tcg_temp_new_i64();
     addr = tcg_temp_new_i32();
     tmp = tcg_const_i32(1 << size);
     load_reg_var(s, addr, a->rn);
+
+    mop = endian | size | align;
     for (reg = 0; reg < nregs; reg++) {
         for (n = 0; n < 8 >> size; n++) {
             int xs;
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_multiple(DisasContext *s, arg_VLDST_multiple *a)
                 int tt = a->vd + reg + spacing * xs;

                 if (a->l) {
-                    gen_aa32_ld_internal_i64(s, tmp64, addr, mmu_idx,
-                                             endian | size);
+                    gen_aa32_ld_internal_i64(s, tmp64, addr, mmu_idx, mop);
                     neon_store_element64(tt, n, size, tmp64);
                 } else {
                     neon_load_element64(tmp64, tt, n, size);
-                    gen_aa32_st_internal_i64(s, tmp64, addr, mmu_idx,
-                                             endian | size);
+                    gen_aa32_st_internal_i64(s, tmp64, addr, mmu_idx, mop);
                 }
                 tcg_gen_add_i32(addr, addr, tmp);
+
+                /* Subsequent memory operations inherit alignment */
+                mop &= ~MO_AMASK;
             }
         }
     }
--
2.20.1

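To make the alignment computation concrete: the instruction's 2-bit align
field encodes 8-, 16- or 32-byte alignment for a->align = 1, 2, 3, and
pow2_align(a->align + 2) yields MO_ALIGN_8, MO_ALIGN_16 and (clamped, per
the FIXME in pow2_align) again MO_ALIGN_16 respectively. For example:

    MemOp align = pow2_align(1 + 2);   /* @align == 1: 8-byte alignment */
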
From: Paolo Bonzini <pbonzini@redhat.com>

cpregs_keys is a uint32_t*, so the allocation should use uint32_t.
g_new is even better because it is type-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/gdbstub.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_gen_dynamic_xml(CPUState *cs)
RegisterSysregXmlParam param = {cs, s};

cpu->dyn_xml.num_cpregs = 0;
- cpu->dyn_xml.cpregs_keys = g_malloc(sizeof(uint32_t *) *
- g_hash_table_size(cpu->cp_regs));
+ cpu->dyn_xml.cpregs_keys = g_new(uint32_t, g_hash_table_size(cpu->cp_regs));
g_string_printf(s, "<?xml version=\"1.0\"?>");
g_string_append_printf(s, "<!DOCTYPE target SYSTEM \"gdb-target.dtd\">");
g_string_append_printf(s, "<feature name=\"org.qemu.gdb.arm.sys.regs\">");
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-neon.c.inc | 48 ++++++++++++++++++++++++++++-----
1 file changed, 42 insertions(+), 6 deletions(-)

diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
int nregs = a->n + 1;
int vd = a->vd;
TCGv_i32 addr, tmp;
+ MemOp mop;

if (!arm_dc_feature(s, ARM_FEATURE_NEON)) {
return false;
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
return true;
}

+ /* Pick up SCTLR settings */
+ mop = finalize_memop(s, a->size);
+
+ if (a->align) {
+ MemOp align_op;
+
+ switch (nregs) {
+ case 1:
+ /* For VLD1, use natural alignment. */
+ align_op = MO_ALIGN;
+ break;
+ case 2:
+ /* For VLD2, use double alignment. */
+ align_op = pow2_align(a->size + 1);
+ break;
+ case 4:
+ if (a->size == MO_32) {
+ /*
+ * For VLD4.32, align = 1 is double alignment, align = 2 is
+ * quad alignment; align = 3 is rejected above.
+ */
+ align_op = pow2_align(a->size + a->align);
+ } else {
+ /* For VLD4.8 and VLD.16, we want quad alignment. */
+ align_op = pow2_align(a->size + 2);
+ }
+ break;
+ default:
+ /* For VLD3, the alignment field is zero and rejected above. */
+ g_assert_not_reached();
+ }
+
+ mop = (mop & ~MO_AMASK) | align_op;
+ }
+
tmp = tcg_temp_new_i32();
addr = tcg_temp_new_i32();
load_reg_var(s, addr, a->rn);
- /*
- * TODO: if we implemented alignment exceptions, we should check
- * addr against the alignment encoded in a->align here.
- */
+
for (reg = 0; reg < nregs; reg++) {
if (a->l) {
- gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), a->size);
+ gen_aa32_ld_internal_i32(s, tmp, addr, get_mem_index(s), mop);
neon_store_element(vd, a->reg_idx, a->size, tmp);
} else { /* Store */
neon_load_element(tmp, vd, a->reg_idx, a->size);
- gen_aa32_st_i32(s, tmp, addr, get_mem_index(s), a->size);
+ gen_aa32_st_internal_i32(s, tmp, addr, get_mem_index(s), mop);
}
vd += a->stride;
tcg_gen_addi_i32(addr, addr, 1 << a->size);
+
+ /* Subsequent memory operations inherit alignment */
+ mop &= ~MO_AMASK;
}
tcg_temp_free_i32(addr);
tcg_temp_free_i32(tmp);
--
2.20.1

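The patch above folds the alignment requirement into the same MemOp word
that already carries the access size, as a log2 alignment field. A minimal
standalone sketch of that encoding idea (the constants and names below are
simplified stand-ins invented for the demo, not QEMU's actual MemOp
definitions):

    /* Illustrative only: pack size and required alignment into one word. */
    #include <assert.h>
    #include <stdio.h>

    enum {
        SIZE_MASK   = 0x3,          /* log2 of access size: 0=1B .. 3=8B */
        ALIGN_SHIFT = 4,
        ALIGN_MASK  = 0x7 << ALIGN_SHIFT,
    };

    /* Encode "must be aligned to 2^n bytes" (compare pow2_align() above). */
    static int pow2_align_demo(unsigned n)
    {
        assert(n <= 7);
        return n << ALIGN_SHIFT;
    }

    int main(void)
    {
        /* A 16-bit two-element access (VLD2.16-style): double alignment. */
        int size = 1;                               /* log2(2 bytes) */
        int op = size | pow2_align_demo(size + 1);  /* 4-byte alignment */

        printf("size=%d bytes, required alignment=%d bytes\n",
               1 << (op & SIZE_MASK),
               1 << ((op & ALIGN_MASK) >> ALIGN_SHIFT));
        return 0;
    }

Because the whole constraint travels in one integer, the translator can
hand it unchanged to the load/store emitters, and clearing the alignment
bits (as the loop above does with mop &= ~MO_AMASK) makes only the first
access of a sequence carry the check.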
From: Richard Henderson <richard.henderson@linaro.org>

In the case of gpr load, merge the size and is_signed arguments;
otherwise, simply convert size to memop.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 78 ++++++++++++++++----------------------
1 file changed, 33 insertions(+), 45 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_adc_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
* Store from GPR register to memory.
*/
static void do_gpr_st_memidx(DisasContext *s, TCGv_i64 source,
- TCGv_i64 tcg_addr, int size, int memidx,
+ TCGv_i64 tcg_addr, MemOp memop, int memidx,
bool iss_valid,
unsigned int iss_srt,
bool iss_sf, bool iss_ar)
{
- g_assert(size <= 3);
- tcg_gen_qemu_st_i64(source, tcg_addr, memidx, s->be_data + size);
+ memop = finalize_memop(s, memop);
+ tcg_gen_qemu_st_i64(source, tcg_addr, memidx, memop);

if (iss_valid) {
uint32_t syn;

syn = syn_data_abort_with_iss(0,
- size,
+ (memop & MO_SIZE),
false,
iss_srt,
iss_sf,
@@ -XXX,XX +XXX,XX @@ static void do_gpr_st_memidx(DisasContext *s, TCGv_i64 source,
}

static void do_gpr_st(DisasContext *s, TCGv_i64 source,
- TCGv_i64 tcg_addr, int size,
+ TCGv_i64 tcg_addr, MemOp memop,
bool iss_valid,
unsigned int iss_srt,
bool iss_sf, bool iss_ar)
{
- do_gpr_st_memidx(s, source, tcg_addr, size, get_mem_index(s),
+ do_gpr_st_memidx(s, source, tcg_addr, memop, get_mem_index(s),
iss_valid, iss_srt, iss_sf, iss_ar);
}

/*
* Load from memory to GPR register
*/
-static void do_gpr_ld_memidx(DisasContext *s,
- TCGv_i64 dest, TCGv_i64 tcg_addr,
- int size, bool is_signed,
- bool extend, int memidx,
+static void do_gpr_ld_memidx(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
+ MemOp memop, bool extend, int memidx,
bool iss_valid, unsigned int iss_srt,
bool iss_sf, bool iss_ar)
{
- MemOp memop = s->be_data + size;
-
- g_assert(size <= 3);
-
- if (is_signed) {
- memop += MO_SIGN;
- }
-
+ memop = finalize_memop(s, memop);
tcg_gen_qemu_ld_i64(dest, tcg_addr, memidx, memop);

- if (extend && is_signed) {
- g_assert(size < 3);
+ if (extend && (memop & MO_SIGN)) {
+ g_assert((memop & MO_SIZE) <= MO_32);
tcg_gen_ext32u_i64(dest, dest);
}

@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld_memidx(DisasContext *s,
uint32_t syn;

syn = syn_data_abort_with_iss(0,
- size,
- is_signed,
+ (memop & MO_SIZE),
+ (memop & MO_SIGN) != 0,
iss_srt,
iss_sf,
iss_ar,
@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld_memidx(DisasContext *s,
}
}

-static void do_gpr_ld(DisasContext *s,
- TCGv_i64 dest, TCGv_i64 tcg_addr,
- int size, bool is_signed, bool extend,
+static void do_gpr_ld(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
+ MemOp memop, bool extend,
bool iss_valid, unsigned int iss_srt,
bool iss_sf, bool iss_ar)
{
- do_gpr_ld_memidx(s, dest, tcg_addr, size, is_signed, extend,
- get_mem_index(s),
+ do_gpr_ld_memidx(s, dest, tcg_addr, memop, extend, get_mem_index(s),
iss_valid, iss_srt, iss_sf, iss_ar);
}

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
}
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
false, rn != 31, size);
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false, false, true, rt,
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false, true, rt,
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
return;
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
/* Only unsigned 32bit loads target 32bit registers. */
bool iss_sf = opc != 0;

- do_gpr_ld(s, tcg_rt, clean_addr, size, is_signed, false,
- true, rt, iss_sf, false);
+ do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
+ false, true, rt, iss_sf, false);
}
tcg_temp_free_i64(clean_addr);
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
/* Do not modify tcg_rt before recognizing any exception
* from the second load.
*/
- do_gpr_ld(s, tmp, clean_addr, size, is_signed, false,
- false, 0, false, false);
+ do_gpr_ld(s, tmp, clean_addr, size + is_signed * MO_SIGN,
+ false, false, 0, false, false);
tcg_gen_addi_i64(clean_addr, clean_addr, 1 << size);
- do_gpr_ld(s, tcg_rt2, clean_addr, size, is_signed, false,
- false, 0, false, false);
+ do_gpr_ld(s, tcg_rt2, clean_addr, size + is_signed * MO_SIGN,
+ false, false, 0, false, false);

tcg_gen_mov_i64(tcg_rt, tmp);
tcg_temp_free_i64(tmp);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
do_gpr_st_memidx(s, tcg_rt, clean_addr, size, memidx,
iss_valid, rt, iss_sf, false);
} else {
- do_gpr_ld_memidx(s, tcg_rt, clean_addr, size,
- is_signed, is_extended, memidx,
+ do_gpr_ld_memidx(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
+ is_extended, memidx,
iss_valid, rt, iss_sf, false);
}
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
do_gpr_st(s, tcg_rt, clean_addr, size,
true, rt, iss_sf, false);
} else {
- do_gpr_ld(s, tcg_rt, clean_addr, size,
- is_signed, is_extended,
- true, rt, iss_sf, false);
+ do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
+ is_extended, true, rt, iss_sf, false);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
do_gpr_st(s, tcg_rt, clean_addr, size,
true, rt, iss_sf, false);
} else {
- do_gpr_ld(s, tcg_rt, clean_addr, size, is_signed, is_extended,
- true, rt, iss_sf, false);
+ do_gpr_ld(s, tcg_rt, clean_addr, size + is_signed * MO_SIGN,
+ is_extended, true, rt, iss_sf, false);
}
}
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
* full load-acquire (we only need "load-acquire processor consistent"),
* but we choose to implement them as full LDAQ.
*/
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false, false,
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false,
true, rt, disas_ldst_compute_iss_sf(size, false, 0), true);
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pac(DisasContext *s, uint32_t insn,
is_wback || rn != 31, size);

tcg_rt = cpu_reg(s, rt);
- do_gpr_ld(s, tcg_rt, clean_addr, size, /* is_signed */ false,
+ do_gpr_ld(s, tcg_rt, clean_addr, size,
/* extend */ false, /* iss_valid */ !is_wback,
/* iss_srt */ rt, /* iss_sf */ true, /* iss_ar */ false);

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
* Load-AcquirePC semantics; we implement as the slightly more
* restrictive Load-Acquire.
*/
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, is_signed, extend,
- true, rt, iss_sf, true);
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size + is_signed * MO_SIGN,
+ extend, true, rt, iss_sf, true);
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
}
}
--
2.20.1

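The refactoring above replaces the (size, is_signed) argument pair with a
single MemOp value, which is why callers can now write
size + is_signed * MO_SIGN. A standalone illustration of the packed-flags
pattern (the DEMO_* names and values are invented for this sketch, not
QEMU's definitions):

    #include <stdbool.h>
    #include <stdio.h>

    enum {
        DEMO_MO_SIZE = 0x3,   /* log2 of the access size */
        DEMO_MO_SIGN = 0x4,   /* sign-extend the loaded value */
    };

    /* One argument now carries what used to be (size, is_signed). */
    static long demo_load_byte(const void *mem, int memop)
    {
        bool sign = memop & DEMO_MO_SIGN;

        if ((memop & DEMO_MO_SIZE) == 0) {
            return sign ? *(const signed char *)mem
                        : *(const unsigned char *)mem;
        }
        return 0;   /* larger sizes elided for brevity */
    }

    int main(void)
    {
        unsigned char b = 0xff;
        printf("unsigned: %ld, signed: %ld\n",
               demo_load_byte(&b, 0), demo_load_byte(&b, DEMO_MO_SIGN));
        return 0;
    }

Collapsing the two parameters removes a whole class of mismatched-argument
bugs at the call sites and lets later patches thread alignment flags
through the same value.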
The FRECPX instructions should (like most other floating point operations)
honour the FPCR.FZ bit which specifies whether input denormals should
be flushed to zero (or FZ16 for the half-precision version).
We forgot to implement this, which doesn't affect the results (since
the calculation doesn't actually care about the mantissa bits) but did
mean we were failing to set the FPSR.IDC bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
---
target/arm/helper-a64.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
return nan;
}

+ a = float16_squash_input_denormal(a, fpst);
+
val16 = float16_val(a);
sbit = 0x8000 & val16;
exp = extract32(val16, 10, 5);
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
return nan;
}

+ a = float32_squash_input_denormal(a, fpst);
+
val32 = float32_val(a);
sbit = 0x80000000ULL & val32;
exp = extract32(val32, 23, 8);
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
return nan;
}

+ a = float64_squash_input_denormal(a, fpst);
+
val64 = float64_val(a);
sbit = 0x8000000000000000ULL & val64;
exp = extract64(float64_val(a), 52, 11);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

For 128-bit load/store, use 16-byte alignment. This
requires that we perform the two operations in the
correct order so that we generate the alignment fault
before modifying memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 42 +++++++++++++++++++++++---------------
1 file changed, 26 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void do_gpr_ld(DisasContext *s, TCGv_i64 dest, TCGv_i64 tcg_addr,
static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
{
/* This writes the bottom N bits of a 128 bit wide vector to memory */
- TCGv_i64 tmp = tcg_temp_new_i64();
- tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_64));
+ TCGv_i64 tmplo = tcg_temp_new_i64();
+ MemOp mop;
+
+ tcg_gen_ld_i64(tmplo, cpu_env, fp_reg_offset(s, srcidx, MO_64));
+
if (size < 4) {
- tcg_gen_qemu_st_i64(tmp, tcg_addr, get_mem_index(s),
- s->be_data + size);
+ mop = finalize_memop(s, size);
+ tcg_gen_qemu_st_i64(tmplo, tcg_addr, get_mem_index(s), mop);
} else {
bool be = s->be_data == MO_BE;
TCGv_i64 tcg_hiaddr = tcg_temp_new_i64();
+ TCGv_i64 tmphi = tcg_temp_new_i64();

+ tcg_gen_ld_i64(tmphi, cpu_env, fp_reg_hi_offset(s, srcidx));
+
+ mop = s->be_data | MO_Q;
+ tcg_gen_qemu_st_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
+ mop | (s->align_mem ? MO_ALIGN_16 : 0));
tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
- tcg_gen_qemu_st_i64(tmp, be ? tcg_hiaddr : tcg_addr, get_mem_index(s),
- s->be_data | MO_Q);
- tcg_gen_ld_i64(tmp, cpu_env, fp_reg_hi_offset(s, srcidx));
- tcg_gen_qemu_st_i64(tmp, be ? tcg_addr : tcg_hiaddr, get_mem_index(s),
- s->be_data | MO_Q);
+ tcg_gen_qemu_st_i64(be ? tmplo : tmphi, tcg_hiaddr,
+ get_mem_index(s), mop);
+
tcg_temp_free_i64(tcg_hiaddr);
+ tcg_temp_free_i64(tmphi);
}

- tcg_temp_free_i64(tmp);
+ tcg_temp_free_i64(tmplo);
}

/*
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
/* This always zero-extends and writes to a full 128 bit wide vector */
TCGv_i64 tmplo = tcg_temp_new_i64();
TCGv_i64 tmphi = NULL;
+ MemOp mop;

if (size < 4) {
- MemOp memop = s->be_data + size;
- tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
+ mop = finalize_memop(s, size);
+ tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), mop);
} else {
bool be = s->be_data == MO_BE;
TCGv_i64 tcg_hiaddr;
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
tmphi = tcg_temp_new_i64();
tcg_hiaddr = tcg_temp_new_i64();

+ mop = s->be_data | MO_Q;
+ tcg_gen_qemu_ld_i64(be ? tmphi : tmplo, tcg_addr, get_mem_index(s),
+ mop | (s->align_mem ? MO_ALIGN_16 : 0));
tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
- tcg_gen_qemu_ld_i64(tmplo, be ? tcg_hiaddr : tcg_addr, get_mem_index(s),
- s->be_data | MO_Q);
- tcg_gen_qemu_ld_i64(tmphi, be ? tcg_addr : tcg_hiaddr, get_mem_index(s),
- s->be_data | MO_Q);
+ tcg_gen_qemu_ld_i64(be ? tmplo : tmphi, tcg_hiaddr,
+ get_mem_index(s), mop);
tcg_temp_free_i64(tcg_hiaddr);
}
}
--
2.20.1

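The ordering constraint in the commit message matters because the 128-bit
access is emulated as two 64-bit operations: the half that carries the
16-byte alignment check must be issued first, so that a misaligned address
faults before any byte of guest memory has been modified. A toy model of
that rule (plain C, not QEMU code; the fault is modelled as an error
return):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * A 128-bit store split into two 64-bit halves. The first access
     * performs the 16-byte alignment check, so on a misaligned address
     * we "fault" with memory still untouched.
     */
    static int store128(uint8_t *base, uintptr_t addr,
                        uint64_t lo, uint64_t hi)
    {
        if (addr & 15) {
            return -1;                       /* alignment fault, no writes */
        }
        memcpy(base + addr, &lo, 8);         /* first (checked) half */
        memcpy(base + addr + 8, &hi, 8);     /* second half */
        return 0;
    }

    int main(void)
    {
        uint8_t ram[32] = {0};
        printf("aligned: %d, misaligned: %d\n",
               store128(ram, 0, 1, 2), store128(ram, 4, 3, 4));
        return 0;
    }

If the halves were issued in the other order on a big-endian guest, a
fault on the second half would leave memory half-written, which is exactly
the state the patch is arranged to avoid.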
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
---
include/exec/memory.h | 4 +++-
include/sysemu/dma.h | 3 ++-
exec.c | 3 ++-
target/s390x/diag.c | 6 ++++--
target/s390x/excp_helper.c | 3 ++-
target/s390x/mmu_helper.c | 3 ++-
target/s390x/sigp.c | 3 ++-
7 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
* @addr: address within that address space
* @len: length of the area to be checked
* @is_write: indicates the transfer direction
+ * @attrs: memory attributes
*/
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
+ bool is_write, MemTxAttrs attrs);

/* address_space_map: map a physical memory region into a host virtual address
*
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index XXXXXXX..XXXXXXX 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
DMADirection dir)
{
return address_space_access_valid(as, addr, len,
- dir == DMA_DIRECTION_FROM_DEVICE);
+ dir == DMA_DIRECTION_FROM_DEVICE,
+ MEMTXATTRS_UNSPECIFIED);
}

static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
}

bool address_space_access_valid(AddressSpace *as, hwaddr addr,
- int len, bool is_write)
+ int len, bool is_write,
+ MemTxAttrs attrs)
{
FlatView *fv;
bool result;
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/diag.c
+++ b/target/s390x/diag.c
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
return;
}
if (!address_space_access_valid(&address_space_memory, addr,
- sizeof(IplParameterBlock), false)) {
+ sizeof(IplParameterBlock), false,
+ MEMTXATTRS_UNSPECIFIED)) {
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
return;
}
@@ -XXX,XX +XXX,XX @@ out:
return;
}
if (!address_space_access_valid(&address_space_memory, addr,
- sizeof(IplParameterBlock), true)) {
+ sizeof(IplParameterBlock), true,
+ MEMTXATTRS_UNSPECIFIED)) {
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
return;
}
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/excp_helper.c
+++ b/target/s390x/excp_helper.c
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,

/* check out of RAM access */
if (!address_space_access_valid(&address_space_memory, raddr,
- TARGET_PAGE_SIZE, rw)) {
+ TARGET_PAGE_SIZE, rw,
+ MEMTXATTRS_UNSPECIFIED)) {
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
(uint64_t)raddr, (uint64_t)ram_size);
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/mmu_helper.c
+++ b/target/s390x/mmu_helper.c
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
return ret;
}
if (!address_space_access_valid(&address_space_memory, pages[i],
- TARGET_PAGE_SIZE, is_write)) {
+ TARGET_PAGE_SIZE, is_write,
+ MEMTXATTRS_UNSPECIFIED)) {
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
return -EFAULT;
}
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
index XXXXXXX..XXXXXXX 100644
--- a/target/s390x/sigp.c
+++ b/target/s390x/sigp.c
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
cpu_synchronize_state(cs);

if (!address_space_access_valid(&address_space_memory, addr,
- sizeof(struct LowCore), false)) {
+ sizeof(struct LowCore), false,
+ MEMTXATTRS_UNSPECIFIED)) {
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
return;
}
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
true, rn != 31, size);
- do_gpr_st(s, cpu_reg(s, rt), clean_addr, size, true, rt,
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
+ do_gpr_st(s, cpu_reg(s, rt), clean_addr, size | MO_ALIGN, true, rt,
disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
}
clean_addr = gen_mte_check1(s, cpu_reg_sp(s, rn),
false, rn != 31, size);
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size, false, true, rt,
- disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size | MO_ALIGN, false, true,
+ rt, disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
return;

@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
int size = extract32(insn, 30, 2);
TCGv_i64 clean_addr, dirty_addr;
bool is_store = false;
- bool is_signed = false;
bool extend = false;
bool iss_sf;
+ MemOp mop;

if (!dc_isar_feature(aa64_rcpc_8_4, s)) {
unallocated_encoding(s);
return;
}

+ /* TODO: ARMv8.4-LSE SCTLR.nAA */
+ mop = size | MO_ALIGN;
+
switch (opc) {
case 0: /* STLURB */
is_store = true;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
unallocated_encoding(s);
return;
}
- is_signed = true;
+ mop |= MO_SIGN;
break;
case 3: /* LDAPURS* 32-bit variant */
if (size > 1) {
unallocated_encoding(s);
return;
}
- is_signed = true;
+ mop |= MO_SIGN;
extend = true; /* zero-extend 32->64 after signed load */
break;
default:
g_assert_not_reached();
}

- iss_sf = disas_ldst_compute_iss_sf(size, is_signed, opc);
+ iss_sf = disas_ldst_compute_iss_sf(size, (mop & MO_SIGN) != 0, opc);

if (rn == 31) {
gen_check_sp_alignment(s);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_ldapr_stlr(DisasContext *s, uint32_t insn)
if (is_store) {
/* Store-Release semantics */
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
- do_gpr_st(s, cpu_reg(s, rt), clean_addr, size, true, rt, iss_sf, true);
+ do_gpr_st(s, cpu_reg(s, rt), clean_addr, mop, true, rt, iss_sf, true);
} else {
/*
* Load-AcquirePC semantics; we implement as the slightly more
* restrictive Load-Acquire.
*/
- do_gpr_ld(s, cpu_reg(s, rt), clean_addr, size + is_signed * MO_SIGN,
+ do_gpr_ld(s, cpu_reg(s, rt), clean_addr, mop,
extend, true, rt, iss_sf, true);
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
}
--
2.20.1

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_access_valid().
Its callers now all have an attrs value to hand, so we can
correct our earlier temporary use of MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-10-peter.maydell@linaro.org
---
exec.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
const uint8_t *buf, int len);
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
- bool is_write);
+ bool is_write, MemTxAttrs attrs);

static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
unsigned len, MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ static bool subpage_accepts(void *opaque, hwaddr addr,
#endif

return flatview_access_valid(subpage->fv, addr + subpage->base,
- len, is_write);
+ len, is_write, attrs);
}

static const MemoryRegionOps subpage_ops = {
@@ -XXX,XX +XXX,XX @@ static void cpu_notify_map_clients(void)
}

static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
- bool is_write)
+ bool is_write, MemTxAttrs attrs)
{
MemoryRegion *mr;
hwaddr l, xlat;
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
if (!memory_access_is_direct(mr, is_write)) {
l = memory_access_size(mr, l, addr);
- /* When our callers all have attrs we'll pass them through here */
- if (!memory_region_access_valid(mr, xlat, l, is_write,
- MEMTXATTRS_UNSPECIFIED)) {
+ if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
return false;
}
}
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,

rcu_read_lock();
fv = address_space_to_flatview(as);
- result = flatview_access_valid(fv, addr, len, is_write);
+ result = flatview_access_valid(fv, addr, len, is_write, attrs);
rcu_read_unlock();
return result;
}
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

/* Store from vector register to memory */
static void do_vec_st(DisasContext *s, int srcidx, int element,
- TCGv_i64 tcg_addr, int size, MemOp endian)
+ TCGv_i64 tcg_addr, MemOp mop)
{
TCGv_i64 tcg_tmp = tcg_temp_new_i64();

- read_vec_element(s, tcg_tmp, srcidx, element, size);
- tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
+ read_vec_element(s, tcg_tmp, srcidx, element, mop & MO_SIZE);
+ tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), mop);

tcg_temp_free_i64(tcg_tmp);
}

/* Load from memory to vector register */
static void do_vec_ld(DisasContext *s, int destidx, int element,
- TCGv_i64 tcg_addr, int size, MemOp endian)
+ TCGv_i64 tcg_addr, MemOp mop)
{
TCGv_i64 tcg_tmp = tcg_temp_new_i64();

- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
- write_vec_element(s, tcg_tmp, destidx, element, size);
+ tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), mop);
+ write_vec_element(s, tcg_tmp, destidx, element, mop & MO_SIZE);

tcg_temp_free_i64(tcg_tmp);
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
for (xs = 0; xs < selem; xs++) {
int tt = (rt + r + xs) % 32;
if (is_store) {
- do_vec_st(s, tt, e, clean_addr, size, endian);
+ do_vec_st(s, tt, e, clean_addr, size | endian);
} else {
- do_vec_ld(s, tt, e, clean_addr, size, endian);
+ do_vec_ld(s, tt, e, clean_addr, size | endian);
}
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
} else {
/* Load/store one element per register */
if (is_load) {
- do_vec_ld(s, rt, index, clean_addr, scale, s->be_data);
+ do_vec_ld(s, rt, index, clean_addr, scale | s->be_data);
} else {
- do_vec_st(s, rt, index, clean_addr, scale, s->be_data);
+ do_vec_st(s, rt, index, clean_addr, scale | s->be_data);
}
}
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
--
2.20.1

From: Shannon Zhao <zhaoshenglong@huawei.com>

It forgot to increase clroffset during the loop, so it only clears the
first 4 bytes.

Fixes: 367b9f527becdd20ddf116e17a3c0c2bbc486920
Cc: qemu-stable@nongnu.org
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_kvm.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
if (clroffset != 0) {
reg = 0;
kvm_gicd_access(s, clroffset, &reg, true);
+ clroffset += 4;
}
reg = *gic_bmp_ptr32(bmp, irq);
kvm_gicd_access(s, offset, &reg, true);
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
bool is_postidx = extract32(insn, 23, 1);
bool is_q = extract32(insn, 30, 1);
TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
- MemOp endian = s->be_data;
+ MemOp endian, align, mop;

int total; /* total bytes */
int elements; /* elements per vector */
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
}

/* For our purposes, bytes are always little-endian. */
+ endian = s->be_data;
if (size == 0) {
endian = MO_LE;
}
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
* Consecutive little-endian elements from a single register
* can be promoted to a larger little-endian operation.
*/
+ align = MO_ALIGN;
if (selem == 1 && endian == MO_LE) {
+ align = pow2_align(size);
size = 3;
}
- elements = (is_q ? 16 : 8) >> size;
+ if (!s->align_mem) {
+ align = 0;
+ }
+ mop = endian | size | align;

+ elements = (is_q ? 16 : 8) >> size;
tcg_ebytes = tcg_const_i64(1 << size);
for (r = 0; r < rpt; r++) {
int e;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
for (xs = 0; xs < selem; xs++) {
int tt = (rt + r + xs) % 32;
if (is_store) {
- do_vec_st(s, tt, e, clean_addr, size | endian);
+ do_vec_st(s, tt, e, clean_addr, mop);
} else {
- do_vec_ld(s, tt, e, clean_addr, size | endian);
+ do_vec_ld(s, tt, e, clean_addr, mop);
}
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
}
--
2.20.1

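The GIC bug fixed above is a classic missing-increment in a clear loop:
the write offset never advanced, so every iteration cleared the same
word. A standalone reduction of the pattern, with the one line the fix
adds (names here are hypothetical, chosen only to mirror the shape of the
original code):

    #include <stdint.h>
    #include <stdio.h>

    /* Clear n consecutive 32-bit words starting at byte offset clroffset. */
    static void clear_words(uint32_t *regs, unsigned clroffset, unsigned n)
    {
        for (unsigned i = 0; i < n; i++) {
            regs[clroffset / 4] = 0;
            clroffset += 4;      /* without this, only word 0 is cleared */
        }
    }

    int main(void)
    {
        uint32_t regs[4] = {0xffffffffu, 0xffffffffu, 0xffffffffu, 0xffffffffu};
        clear_words(regs, 0, 4);
        printf("%x %x %x %x\n", (unsigned)regs[0], (unsigned)regs[1],
               (unsigned)regs[2], (unsigned)regs[3]);
        return 0;
    }

With the increment in place all four words print as 0; without it the
last three keep their stale contents, which is exactly the "only the
first 4 bytes" symptom described in the commit message.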
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to memory_region_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

The callsite in flatview_access_valid() is part of a recursive
loop flatview_access_valid() -> memory_region_access_valid() ->
subpage_accepts() -> flatview_access_valid(); we make it pass
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
have plumbed an attrs parameter through the rest of the loop
and we can add an attrs parameter to flatview_access_valid().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
---
include/exec/memory-internal.h | 3 ++-
exec.c | 4 +++-
hw/s390x/s390-pci-inst.c | 3 ++-
memory.c | 7 ++++---
4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory-internal.h
+++ b/include/exec/memory-internal.h
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
extern const MemoryRegionOps unassigned_mem_ops;

bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
- unsigned size, bool is_write);
+ unsigned size, bool is_write,
+ MemTxAttrs attrs);

void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
if (!memory_access_is_direct(mr, is_write)) {
l = memory_access_size(mr, l, addr);
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
+ /* When our callers all have attrs we'll pass them through here */
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
+ MEMTXATTRS_UNSPECIFIED)) {
return false;
}
}
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
mr = s390_get_subregion(mr, offset, len);
offset -= mr->addr;

- if (!memory_region_access_valid(mr, offset, len, true)) {
+ if (!memory_region_access_valid(mr, offset, len, true,
+ MEMTXATTRS_UNSPECIFIED)) {
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
return 0;
}
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
bool memory_region_access_valid(MemoryRegion *mr,
hwaddr addr,
unsigned size,
- bool is_write)
+ bool is_write,
+ MemTxAttrs attrs)
{
int access_size_min, access_size_max;
int access_size, i;
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
{
MemTxResult r;

- if (!memory_region_access_valid(mr, addr, size, false)) {
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
*pval = unassigned_mem_read(mr, addr, size);
return MEMTX_DECODE_ERROR;
}
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
unsigned size,
MemTxAttrs attrs)
{
- if (!memory_region_access_valid(mr, addr, size, true)) {
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
unassigned_mem_write(mr, addr, data, size);
return MEMTX_DECODE_ERROR;
}
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
int index = is_q << 3 | S << 2 | size;
int xs, total;
TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
+ MemOp mop;

if (extract32(insn, 31, 1)) {
unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)

clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
total);
+ mop = finalize_memop(s, scale);

tcg_ebytes = tcg_const_i64(1 << scale);
for (xs = 0; xs < selem; xs++) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
/* Load and replicate to all elements */
TCGv_i64 tcg_tmp = tcg_temp_new_i64();

- tcg_gen_qemu_ld_i64(tcg_tmp, clean_addr,
- get_mem_index(s), s->be_data + scale);
+ tcg_gen_qemu_ld_i64(tcg_tmp, clean_addr, get_mem_index(s), mop);
tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
(is_q + 1) * 8, vec_full_reg_size(s),
tcg_tmp);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
} else {
/* Load/store one element per register */
if (is_load) {
- do_vec_ld(s, rt, index, clean_addr, scale | s->be_data);
+ do_vec_ld(s, rt, index, clean_addr, mop);
} else {
- do_vec_st(s, rt, index, clean_addr, scale | s->be_data);
+ do_vec_st(s, rt, index, clean_addr, mop);
}
}
tcg_gen_add_i64(clean_addr, clean_addr, tcg_ebytes);
--
2.20.1

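This is the same mechanical transformation as the other MemTxAttrs
patches in this series: add a trailing attrs parameter at each level of
the call chain and pass an "unspecified" value wherever a caller has
nothing better. A compilable sketch of the plumbing pattern (the DemoAttrs
type and function names are invented for the demo, not QEMU's MemTxAttrs
API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        unsigned secure : 1;
        unsigned unspecified : 1;
    } DemoAttrs;

    #define DEMO_ATTRS_UNSPECIFIED ((DemoAttrs){ .unspecified = 1 })

    /* Lowest level: now has the attributes available for checks. */
    static bool region_access_valid(size_t off, size_t len, bool is_write,
                                    DemoAttrs attrs)
    {
        (void)is_write;
        (void)attrs;             /* later patches actually consult this */
        return off + len <= 4096;
    }

    /* Upper level: simply threads attrs through unchanged. */
    static bool space_access_valid(size_t off, size_t len, bool is_write,
                                   DemoAttrs attrs)
    {
        return region_access_valid(off, len, is_write, attrs);
    }

    int main(void)
    {
        printf("%d\n",
               space_access_valid(0, 16, false, DEMO_ATTRS_UNSPECIFIED));
        return 0;
    }

The point of doing it one function per patch is that each step compiles
and is individually reviewable, with the temporary MEMTXATTRS_UNSPECIFIED
placeholders removed once every caller has real attributes to pass.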
Add entries to MAINTAINERS to cover the newer MPS2 boards and
the new devices they use.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
---
MAINTAINERS | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
F: include/hw/timer/cmsdk-apb-timer.h
F: hw/char/cmsdk-apb-uart.c
F: include/hw/char/cmsdk-apb-uart.h
+F: hw/misc/tz-ppc.c
+F: include/hw/misc/tz-ppc.h

ARM cores
M: Peter Maydell <peter.maydell@linaro.org>
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/mps2.c
-F: hw/misc/mps2-scc.c
-F: include/hw/misc/mps2-scc.h
+F: hw/arm/mps2-tz.c
+F: hw/misc/mps2-*.c
+F: include/hw/misc/mps2-*.h
+F: hw/arm/iotkit.c
+F: include/hw/arm/iotkit.h

Musicpal
M: Jan Kiszka <jan.kiszka@web.de>
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210419202257.161730-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-sve.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
clean_addr = gen_mte_check1(s, temp, false, true, msz);

tcg_gen_qemu_ld_i64(temp, clean_addr, get_mem_index(s),
- s->be_data | dtype_mop[a->dtype]);
+ finalize_memop(s, dtype_mop[a->dtype]));

/* Broadcast to *all* elements. */
tcg_gen_gvec_dup_i64(esz, vec_full_reg_offset(s, a->rd),
--
2.20.1

From: Cornelia Huck <cohuck@redhat.com>

Add 6.1 machine types for arm/i440fx/q35/s390x/spapr.

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Greg Kurz <groug@kaod.org>
Message-id: 20210331111900.118274-1-cohuck@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/boards.h | 3 +++
include/hw/i386/pc.h | 3 +++
hw/arm/virt.c | 7 ++++++-
hw/core/machine.c | 3 +++
hw/i386/pc.c | 3 +++
hw/i386/pc_piix.c | 14 +++++++++++++-
hw/i386/pc_q35.c | 13 ++++++++++++-
hw/ppc/spapr.c | 17 ++++++++++++++---
hw/s390x/s390-virtio-ccw.c | 14 +++++++++++++-
9 files changed, 70 insertions(+), 7 deletions(-)

diff --git a/include/hw/boards.h b/include/hw/boards.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -XXX,XX +XXX,XX @@ struct MachineState {
} \
type_init(machine_initfn##_register_types)

+extern GlobalProperty hw_compat_6_0[];
+extern const size_t hw_compat_6_0_len;
+
extern GlobalProperty hw_compat_5_2[];
extern const size_t hw_compat_5_2_len;

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -XXX,XX +XXX,XX @@ bool pc_system_ovmf_table_find(const char *entry, uint8_t **data,
void pc_madt_cpu_entry(AcpiDeviceIf *adev, int uid,
const CPUArchIdList *apic_ids, GArray *entry);

+extern GlobalProperty pc_compat_6_0[];
+extern const size_t pc_compat_6_0_len;
+
extern GlobalProperty pc_compat_5_2[];
extern const size_t pc_compat_5_2_len;

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void machvirt_machine_init(void)
}
type_init(machvirt_machine_init);

+static void virt_machine_6_1_options(MachineClass *mc)
+{
+}
+DEFINE_VIRT_MACHINE_AS_LATEST(6, 1)
+
static void virt_machine_6_0_options(MachineClass *mc)
{
}
-DEFINE_VIRT_MACHINE_AS_LATEST(6, 0)
+DEFINE_VIRT_MACHINE(6, 0)

static void virt_machine_5_2_options(MachineClass *mc)
{
diff --git a/hw/core/machine.c b/hw/core/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -XXX,XX +XXX,XX @@
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-pci.h"

+GlobalProperty hw_compat_6_0[] = {};
+const size_t hw_compat_6_0_len = G_N_ELEMENTS(hw_compat_6_0);
+
GlobalProperty hw_compat_5_2[] = {
{ "ICH9-LPC", "smm-compat", "on"},
{ "PIIX4_PM", "smm-compat", "on"},
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -XXX,XX +XXX,XX @@
#include "trace.h"
#include CONFIG_DEVICES

+GlobalProperty pc_compat_6_0[] = {};
+const size_t pc_compat_6_0_len = G_N_ELEMENTS(pc_compat_6_0);
+
GlobalProperty pc_compat_5_2[] = {
{ "ICH9-LPC", "x-smi-cpu-hotunplug", "off" },
};
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -XXX,XX +XXX,XX @@ static void pc_i440fx_machine_options(MachineClass *m)
machine_class_allow_dynamic_sysbus_dev(m, TYPE_VMBUS_BRIDGE);
}

-static void pc_i440fx_6_0_machine_options(MachineClass *m)
+static void pc_i440fx_6_1_machine_options(MachineClass *m)
{
PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
pc_i440fx_machine_options(m);
@@ -XXX,XX +XXX,XX @@ static void pc_i440fx_6_0_machine_options(MachineClass *m)
pcmc->default_cpu_version = 1;
}

+DEFINE_I440FX_MACHINE(v6_1, "pc-i440fx-6.1", NULL,
+ pc_i440fx_6_1_machine_options);
+
+static void pc_i440fx_6_0_machine_options(MachineClass *m)
+{
+ pc_i440fx_6_1_machine_options(m);
+ m->alias = NULL;
+ m->is_default = false;
+ compat_props_add(m->compat_props, hw_compat_6_0, hw_compat_6_0_len);
+ compat_props_add(m->compat_props, pc_compat_6_0, pc_compat_6_0_len);
+}
+
DEFINE_I440FX_MACHINE(v6_0, "pc-i440fx-6.0", NULL,
pc_i440fx_6_0_machine_options);

diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -XXX,XX +XXX,XX @@ static void pc_q35_machine_options(MachineClass *m)
m->max_cpus = 288;
}

-static void pc_q35_6_0_machine_options(MachineClass *m)
+static void pc_q35_6_1_machine_options(MachineClass *m)
{
PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
pc_q35_machine_options(m);
@@ -XXX,XX +XXX,XX @@ static void pc_q35_6_0_machine_options(MachineClass *m)
pcmc->default_cpu_version = 1;
}

+DEFINE_Q35_MACHINE(v6_1, "pc-q35-6.1", NULL,
+ pc_q35_6_1_machine_options);
+
+static void pc_q35_6_0_machine_options(MachineClass *m)
+{
+ pc_q35_6_1_machine_options(m);
+ m->alias = NULL;
+ compat_props_add(m->compat_props, hw_compat_6_0, hw_compat_6_0_len);
+ compat_props_add(m->compat_props, pc_compat_6_0, pc_compat_6_0_len);
+}
+
DEFINE_Q35_MACHINE(v6_0, "pc-q35-6.0", NULL,
pc_q35_6_0_machine_options);

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -XXX,XX +XXX,XX @@ static void spapr_machine_latest_class_options(MachineClass *mc)
type_init(spapr_machine_register_##suffix)

/*
- * pseries-6.0
+ * pseries-6.1
*/
-static void spapr_machine_6_0_class_options(MachineClass *mc)
+static void spapr_machine_6_1_class_options(MachineClass *mc)
{
/* Defaults for the latest behaviour inherited from the base class */
}

-DEFINE_SPAPR_MACHINE(6_0, "6.0", true);
+DEFINE_SPAPR_MACHINE(6_1, "6.1", true);
+
+/*
+ * pseries-6.0
+ */
+static void spapr_machine_6_0_class_options(MachineClass *mc)
+{
+ spapr_machine_6_1_class_options(mc);
+ compat_props_add(mc->compat_props, hw_compat_6_0, hw_compat_6_0_len);
+}
+
+DEFINE_SPAPR_MACHINE(6_0, "6.0", false);

/*
* pseries-5.2
diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -XXX,XX +XXX,XX @@ bool css_migration_enabled(void)
} \
type_init(ccw_machine_register_##suffix)

+static void ccw_machine_6_1_instance_options(MachineState *machine)
+{
+}
+
+static void ccw_machine_6_1_class_options(MachineClass *mc)
+{
+}
+DEFINE_CCW_MACHINE(6_1, "6.1", true);
+
static void ccw_machine_6_0_instance_options(MachineState *machine)
{
+ ccw_machine_6_1_instance_options(machine);
}

static void ccw_machine_6_0_class_options(MachineClass *mc)
{
+ ccw_machine_6_1_class_options(mc);
+ compat_props_add(mc->compat_props, hw_compat_6_0, hw_compat_6_0_len);
}
-DEFINE_CCW_MACHINE(6_0, "6.0", true);
+DEFINE_CCW_MACHINE(6_0, "6.0", false);

static void ccw_machine_5_2_instance_options(MachineState *machine)
{
--
2.20.1

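The compat-machine pattern above always works the same way: the newest
version's options function holds the latest defaults, and each older
version's function calls the next-newer one first and then layers its
compat properties on top, so the chain composes back through every
release. A stripped-down sketch of that chaining (illustrative only; the
function names are stand-ins for the DEFINE_*_MACHINE boilerplate):

    #include <stdio.h>

    static void machine_options_6_1(void)
    {
        /* latest defaults; nothing extra to apply */
    }

    static void machine_options_6_0(void)
    {
        machine_options_6_1();                   /* inherit newer defaults */
        puts("apply hw_compat_6_0 properties");  /* then pin 6.0 behaviour */
    }

    static void machine_options_5_2(void)
    {
        machine_options_6_0();                   /* chain keeps composing */
        puts("apply hw_compat_5_2 properties");
    }

    int main(void)
    {
        machine_options_5_2();
        return 0;
    }

Running the oldest version's function prints every compat layer in order,
which is why a guest started with an old machine type keeps its original
device behaviour even on a new QEMU.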
Add more detail to the documentation for memory_region_init_iommu()
and other IOMMU-related functions and data structures.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
---
include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++++----
1 file changed, 95 insertions(+), 10 deletions(-)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
IOMMU_ATTR_SPAPR_TCE_FD
};

+/**
+ * IOMMUMemoryRegionClass:
+ *
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
+ * and provide an implementation of at least the @translate method here
+ * to handle requests to the memory region. Other methods are optional.
+ *
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
+ * to report whenever mappings are changed, by calling
+ * memory_region_notify_iommu() (or, if necessary, by calling
+ * memory_region_notify_one() for each registered notifier).
+ */
typedef struct IOMMUMemoryRegionClass {
/* private */
struct DeviceClass parent_class;

/*
- * Return a TLB entry that contains a given address. Flag should
- * be the access permission of this translation operation. We can
- * set flag to IOMMU_NONE to mean that we don't need any
- * read/write permission checks, like, when for region replay.
+ * Return a TLB entry that contains a given address.
+ *
+ * The IOMMUAccessFlags indicated via @flag are optional and may
+ * be specified as IOMMU_NONE to indicate that the caller needs
+ * the full translation information for both reads and writes. If
+ * the access flags are specified then the IOMMU implementation
+ * may use this as an optimization, to stop doing a page table
+ * walk as soon as it knows that the requested permissions are not
+ * allowed. If IOMMU_NONE is passed then the IOMMU must do the
+ * full page table walk and report the permissions in the returned
+ * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
+ * return different mappings for reads and writes.)
+ *
+ * The returned information remains valid while the caller is
+ * holding the big QEMU lock or is inside an RCU critical section;
+ * if the caller wishes to cache the mapping beyond that it must
+ * register an IOMMU notifier so it can invalidate its cached
+ * information when the IOMMU mapping changes.
+ *
+ * @iommu: the IOMMUMemoryRegion
+ * @hwaddr: address to be translated within the memory region
+ * @flag: requested access permissions
*/
IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
IOMMUAccessFlags flag);
- /* Returns minimum supported page size */
+ /* Returns minimum supported page size in bytes.
+ * If this method is not provided then the minimum is assumed to
+ * be TARGET_PAGE_SIZE.
+ *
+ * @iommu: the IOMMUMemoryRegion
+ */
uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
- /* Called when IOMMU Notifier flag changed */
+ /* Called when IOMMU Notifier flag changes (ie when the set of
+ * events which IOMMU users are requesting notification for changes).
+ * Optional method -- need not be provided if the IOMMU does not
+ * need to know exactly which events must be notified.
+ *
+ * @iommu: the IOMMUMemoryRegion
+ * @old_flags: events which previously needed to be notified
+ * @new_flags: events which now need to be notified
+ */
void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
IOMMUNotifierFlag old_flags,
IOMMUNotifierFlag new_flags);
- /* Set this up to provide customized IOMMU replay function */
+ /* Called to handle memory_region_iommu_replay().
+ *
+ * The default implementation of memory_region_iommu_replay() is to
+ * call the IOMMU translate method for every page in the address space
+ * with flag == IOMMU_NONE and then call the notifier if translate
+ * returns a valid mapping. If this method is implemented then it
+ * overrides the default behaviour, and must provide the full semantics
+ * of memory_region_iommu_replay(), by calling @notifier for every
+ * translation present in the IOMMU.
+ *
+ * Optional method -- an IOMMU only needs to provide this method
+ * if the default is inefficient or produces undesirable side effects.
+ *
+ * Note: this is not related to record-and-replay functionality.
+ */
void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);

- /* Get IOMMU misc attributes */
- int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
+ /* Get IOMMU misc attributes. This is an optional method that
+ * can be used to allow users of the IOMMU to get implementation-specific
+ * information. The IOMMU implements this method to handle calls
+ * by IOMMU users to memory_region_iommu_get_attr() by filling in
+ * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
+ * the IOMMU supports. If the method is unimplemented then
+ * memory_region_iommu_get_attr() will always return -EINVAL.
+ *
+ * @iommu: the IOMMUMemoryRegion
+ * @attr: attribute being queried
+ * @data: memory to fill in with the attribute data
+ *
+ * Returns 0 on success, or a negative errno; in particular
+ * returns -EINVAL for unrecognized or unimplemented attribute types.
+ */
+ int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
void *data);
} IOMMUMemoryRegionClass;

@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
* An IOMMU region translates addresses and forwards accesses to a target
* memory region.
*
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
+ * @_iommu_mr should be a pointer to enough memory for an instance of
+ * that subclass, @instance_size is the size of that subclass, and
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
+ * instance of the subclass, and its methods will then be called to handle
+ * accesses to the memory region. See the documentation of
+ * #IOMMUMemoryRegionClass for further details.
+ *
* @_iommu_mr: the #IOMMUMemoryRegion to be initialized
* @instance_size: the IOMMUMemoryRegion subclass instance size
* @mrtypename: the type name of the #IOMMUMemoryRegion
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
* a notifier with the minimum page granularity returned by
* mr->iommu_ops->get_page_size().
*
+ * Note: this is not related to record-and-replay functionality.
+ *
* @iommu_mr: the memory region to observe
* @n: the notifier to which to replay iommu mappings
*/
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
* memory_region_iommu_replay_all: replay existing IOMMU translations
* to all the notifiers registered.
*
+ * Note: this is not related to record-and-replay functionality.
+ *
* @iommu_mr: the memory region to observe
*/
void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
* memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
* defined on the IOMMU.
*
- * Returns 0 if succeded, error code otherwise.
+ * Returns 0 on success, or a negative errno otherwise. In particular,
+ * -EINVAL indicates that the IOMMU does not support the requested
+ * attribute.
*
* @iommu_mr: the memory region

Currently the gpex PCI controller implements no special behaviour for
guest accesses to areas of the PIO and MMIO where it has not mapped
any PCI devices, which means that for Arm you end up with a CPU
exception due to a data abort.

Most host OSes expect "like an x86 PC" behaviour, where bad accesses
like this return -1 for reads and ignore writes. In the interests of
not being surprising, make host CPU accesses to these windows behave
as -1/discard where there's no mapped PCI device.

The old behaviour generally didn't cause any problems, because
almost always the guest OS will map the PCI devices and then only
access where it has mapped them. One corner case where you will see
this kind of access is if Linux attempts to probe legacy ISA
devices via a PIO window access. So far the only case where we've
seen this has been via the syzkaller fuzzer.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20210325163315.27724-1-peter.maydell@linaro.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/pci-host/gpex.h | 4 +++
hw/core/machine.c | 4 ++-
hw/pci-host/gpex.c | 56 ++++++++++++++++++++++++++++++++++++--
3 files changed, 60 insertions(+), 4 deletions(-)

diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/pci-host/gpex.h
+++ b/include/hw/pci-host/gpex.h
@@ -XXX,XX +XXX,XX @@ struct GPEXHost {

MemoryRegion io_ioport;
MemoryRegion io_mmio;
+ MemoryRegion io_ioport_window;
+ MemoryRegion io_mmio_window;
qemu_irq irq[GPEX_NUM_IRQS];
int irq_num[GPEX_NUM_IRQS];
+
+ bool allow_unmapped_accesses;
};

struct GPEXConfig {
diff --git a/hw/core/machine.c b/hw/core/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -XXX,XX +XXX,XX @@
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-pci.h"

-GlobalProperty hw_compat_6_0[] = {};
+GlobalProperty hw_compat_6_0[] = {
+ { "gpex-pcihost", "allow-unmapped-accesses", "false" },
+};
const size_t hw_compat_6_0_len = G_N_ELEMENTS(hw_compat_6_0);

GlobalProperty hw_compat_5_2[] = {
diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/pci-host/gpex.c
+++ b/hw/pci-host/gpex.c
@@ -XXX,XX +XXX,XX @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
int i;

pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
+ sysbus_init_mmio(sbd, &pex->mmio);
+
+ /*
+ * Note that the MemoryRegions io_mmio and io_ioport that we pass
+ * to pci_register_root_bus() are not the same as the
+ * MemoryRegions io_mmio_window and io_ioport_window that we
+ * expose as SysBus MRs. The difference is in the behaviour of
+ * accesses to addresses where no PCI device has been mapped.
+ *
+ * io_mmio and io_ioport are the underlying PCI view of the PCI
+ * address space, and when a PCI device does a bus master access
+ * to a bad address this is reported back to it as a transaction
+ * failure.
+ *
+ * io_mmio_window and io_ioport_window implement "unmapped
+ * addresses read as -1 and ignore writes"; this is traditional
+ * x86 PC behaviour, which is not mandated by the PCI spec proper
+ * but expected by much PCI-using guest software, including Linux.
+ *
+ * In the interests of not being unnecessarily surprising, we
+ * implement it in the gpex PCI host controller, by providing the
+ * _window MRs, which are containers with io ops that implement
+ * the 'background' behaviour and which hold the real PCI MRs as
+ * subregions.
+ */
memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);

- sysbus_init_mmio(sbd, &pex->mmio);
- sysbus_init_mmio(sbd, &s->io_mmio);
- sysbus_init_mmio(sbd, &s->io_ioport);
+ if (s->allow_unmapped_accesses) {
+ memory_region_init_io(&s->io_mmio_window, OBJECT(s),
+ &unassigned_io_ops, OBJECT(s),
+ "gpex_mmio_window", UINT64_MAX);
+ memory_region_init_io(&s->io_ioport_window, OBJECT(s),
+ &unassigned_io_ops, OBJECT(s),
+ "gpex_ioport_window", 64 * 1024);
+
+ memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
+ memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
+ sysbus_init_mmio(sbd, &s->io_mmio_window);
+ sysbus_init_mmio(sbd, &s->io_ioport_window);
+ } else {
+ sysbus_init_mmio(sbd, &s->io_mmio);
+ sysbus_init_mmio(sbd, &s->io_ioport);
+ }
+
for (i = 0; i < GPEX_NUM_IRQS; i++) {
sysbus_init_irq(sbd, &s->irq[i]);
s->irq_num[i] = -1;
@@ -XXX,XX +XXX,XX @@ static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
return "0000:00";
}

+static Property gpex_host_properties[] = {
+ /*
+ * Permit CPU accesses to unmapped areas of the PIO and MMIO windows
+ * (discarding writes and returning -1 for reads) rather than aborting.
+ */
+ DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
+ allow_unmapped_accesses, true),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
static void gpex_host_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
@@ -XXX,XX +XXX,XX @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
dc->realize = gpex_host_realize;
set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
dc->fw_name = "pci";
+ device_class_set_props(dc, gpex_host_properties);
}

static void gpex_host_initfn(Object *obj)
170
* @attr: the requested attribute
171
--
147
--
172
2.17.1
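
[A note for readers implementing the get_attr hook documented above: the
sketch below shows the shape of a conforming implementation and caller.
It is illustrative only, not part of the patch; the foo_iommu_get_attr
name and its hypothetical FooIOMMU region are invented here, and
IOMMU_ATTR_SPAPR_TCE_FD is used simply because it is an existing
IOMMUMemoryRegionAttr value.]

    static int foo_iommu_get_attr(IOMMUMemoryRegion *iommu,
                                  enum IOMMUMemoryRegionAttr attr,
                                  void *data)
    {
        if (attr == IOMMU_ATTR_SPAPR_TCE_FD) {
            /* Attribute supported: fill in the caller's data pointer */
            *(int *)data = -1;  /* hypothetical value: no TCE fd in use */
            return 0;
        }
        /* Unrecognized or unimplemented attribute types */
        return -EINVAL;
    }

A caller then queries it via the public wrapper:

    int fd;

    if (memory_region_iommu_get_attr(iommu_mr, IOMMU_ATTR_SPAPR_TCE_FD,
                                     &fd) == 0) {
        /* the IOMMU supports this attribute; fd now holds its value */
    }

If the class leaves get_attr NULL, memory_region_iommu_get_attr()
returns -EINVAL for every attribute, which is why the method is
optional.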
+ * subregions.
+ */
     memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
     memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);

-    sysbus_init_mmio(sbd, &pex->mmio);
-    sysbus_init_mmio(sbd, &s->io_mmio);
-    sysbus_init_mmio(sbd, &s->io_ioport);
+    if (s->allow_unmapped_accesses) {
+        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
+                              &unassigned_io_ops, OBJECT(s),
+                              "gpex_mmio_window", UINT64_MAX);
+        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
+                              &unassigned_io_ops, OBJECT(s),
+                              "gpex_ioport_window", 64 * 1024);
+
+        memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
+        memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
+        sysbus_init_mmio(sbd, &s->io_mmio_window);
+        sysbus_init_mmio(sbd, &s->io_ioport_window);
+    } else {
+        sysbus_init_mmio(sbd, &s->io_mmio);
+        sysbus_init_mmio(sbd, &s->io_ioport);
+    }
+
     for (i = 0; i < GPEX_NUM_IRQS; i++) {
         sysbus_init_irq(sbd, &s->irq[i]);
         s->irq_num[i] = -1;
@@ -XXX,XX +XXX,XX @@ static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
     return "0000:00";
 }

+static Property gpex_host_properties[] = {
+    /*
+     * Permit CPU accesses to unmapped areas of the PIO and MMIO windows
+     * (discarding writes and returning -1 for reads) rather than aborting.
+     */
+    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
+                     allow_unmapped_accesses, true),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
 static void gpex_host_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
@@ -XXX,XX +XXX,XX @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
     dc->realize = gpex_host_realize;
     set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
     dc->fw_name = "pci";
+    device_class_set_props(dc, gpex_host_properties);
 }

 static void gpex_host_initfn(Object *obj)
--
2.20.1
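
[A usage note, as a sketch rather than part of the series: since
allow-unmapped-accesses defaults to true, a board model or test that
wants the old behaviour, where a CPU access to an unmapped part of the
window faults, would clear the property before realize. The creation
flow below is the usual qdev pattern and is illustrative only.]

    DeviceState *dev = qdev_new(TYPE_GPEX_HOST);

    /* Opt back in to faulting on accesses to unmapped parts of the
     * MMIO and PIO windows, instead of reads-as-minus-one and
     * writes-ignored.
     */
    object_property_set_bool(OBJECT(dev), "allow-unmapped-accesses",
                             false, &error_fatal);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);

In-tree, versioned machine types are expected to get this via the
compat machinery (see the "hw: add compat machines for 6.1" patch in
this queue), so that pre-6.1 machines keep the old faulting behaviour.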