The following changes since commit bf4460a8d9a86f6cfe05d7a7f470c48e3a93d8b2:

  Merge tag 'pull-tcg-20230123' of https://gitlab.com/rth7680/qemu into staging (2023-02-03 09:30:45 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230203

for you to fetch changes up to bb18151d8bd9bedc497ee9d4e8d81b39a4e5bbf6:

  target/arm: Enable FEAT_FGT on '-cpu max' (2023-02-03 12:59:24 +0000)

----------------------------------------------------------------
target-arm queue:
 * Fix physical address resolution for Stage2
 * pl011: refactoring, implement reset method
 * Support GICv3 with hvf acceleration
 * sbsa-ref: remove cortex-a76 from list of supported cpus
 * Correct syndrome for ATS12NSO* traps at Secure EL1
 * Fix priority of HSTR_EL2 traps vs UNDEFs
 * Implement FEAT_FGT for '-cpu max'

----------------------------------------------------------------
Alexander Graf (3):
      hvf: arm: Add support for GICv3
      hw/arm/virt: Consolidate GIC finalize logic
      hw/arm/virt: Make accels in GIC finalize logic explicit

Evgeny Iakovlev (4):
      hw/char/pl011: refactor FIFO depth handling code
      hw/char/pl011: add post_load hook for backwards-compatibility
      hw/char/pl011: implement a reset method
      hw/char/pl011: better handling of FIFO flags on LCR reset

Marcin Juszkiewicz (1):
      sbsa-ref: remove cortex-a76 from list of supported cpus

Peter Maydell (23):
      target/arm: Name AT_S1E1RP and AT_S1E1WP cpregs correctly
      target/arm: Correct syndrome for ATS12NSO* at Secure EL1
      target/arm: Remove CP_ACCESS_TRAP_UNCATEGORIZED_{EL2, EL3}
      target/arm: Move do_coproc_insn() syndrome calculation earlier
      target/arm: All UNDEF-at-EL0 traps take priority over HSTR_EL2 traps
      target/arm: Make HSTR_EL2 traps take priority over UNDEF-at-EL1
      target/arm: Disable HSTR_EL2 traps if EL2 is not enabled
      target/arm: Define the FEAT_FGT registers
      target/arm: Implement FGT trapping infrastructure
      target/arm: Mark up sysregs for HFGRTR bits 0..11
      target/arm: Mark up sysregs for HFGRTR bits 12..23
      target/arm: Mark up sysregs for HFGRTR bits 24..35
      target/arm: Mark up sysregs for HFGRTR bits 36..63
      target/arm: Mark up sysregs for HDFGRTR bits 0..11
      target/arm: Mark up sysregs for HDFGRTR bits 12..63
      target/arm: Mark up sysregs for HFGITR bits 0..11
      target/arm: Mark up sysregs for HFGITR bits 12..17
      target/arm: Mark up sysregs for HFGITR bits 18..47
      target/arm: Mark up sysregs for HFGITR bits 48..63
      target/arm: Implement the HFGITR_EL2.ERET trap
      target/arm: Implement the HFGITR_EL2.SVC_EL0 and SVC_EL1 traps
      target/arm: Implement MDCR_EL2.TDCC and MDCR_EL3.TDCC traps
      target/arm: Enable FEAT_FGT on '-cpu max'

Richard Henderson (2):
      hw/arm: Use TYPE_ARM_SMMUV3
      target/arm: Fix physical address resolution for Stage2

 docs/system/arm/emulation.rst |   1 +
 include/hw/arm/virt.h         |  15 +-
 include/hw/char/pl011.h       |   5 +-
 target/arm/cpregs.h           | 484 +++++++++++++++++++++++++++++++++++++++++-
 target/arm/cpu.h              |  18 ++
 target/arm/internals.h        |  20 ++
 target/arm/syndrome.h         |  10 +
 target/arm/translate.h        |   6 +
 hw/arm/sbsa-ref.c             |   4 +-
 hw/arm/virt.c                 | 203 +++++++++---------
 hw/char/pl011.c               |  93 ++++++--
 hw/intc/arm_gicv3_cpuif.c     |  18 +-
 target/arm/cpu64.c            |   1 +
 target/arm/debug_helper.c     |  46 +++-
 target/arm/helper.c           | 245 ++++++++++++++++++++-
 target/arm/hvf/hvf.c          | 151 +++++++++++++
 target/arm/op_helper.c        |  58 ++++-
 target/arm/ptw.c              |   2 +-
 target/arm/translate-a64.c    |  22 +-
 target/arm/translate.c        | 125 +++++++----
 target/arm/hvf/trace-events   |   2 +
 21 files changed, 1340 insertions(+), 189 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Use the macro instead of two explicit string literals.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20230124232059.4017615-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/sbsa-ref.c | 3 ++-
hw/arm/virt.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@
#include "exec/hwaddr.h"
#include "kvm_arm.h"
#include "hw/arm/boot.h"
+#include "hw/arm/smmuv3.h"
#include "hw/block/flash.h"
#include "hw/boards.h"
#include "hw/ide/internal.h"
@@ -XXX,XX +XXX,XX @@ static void create_smmu(const SBSAMachineState *sms, PCIBus *bus)
DeviceState *dev;
int i;

- dev = qdev_new("arm-smmuv3");
+ dev = qdev_new(TYPE_ARM_SMMUV3);

object_property_set_link(OBJECT(dev), "primary-bus", OBJECT(bus),
&error_abort);
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_smmu(const VirtMachineState *vms,
return;
}

- dev = qdev_new("arm-smmuv3");
+ dev = qdev_new(TYPE_ARM_SMMUV3);

object_property_set_link(OBJECT(dev), "primary-bus", OBJECT(bus),
&error_abort);
--
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Conversion to probe_access_full missed applying the page offset.

Cc: qemu-stable@nongnu.org
Reported-by: Sid Manning <sidneym@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230126233134.103193-1-richard.henderson@linaro.org
Fixes: f3639a64f602 ("target/arm: Use softmmu tlbs for page table walking")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
if (unlikely(flags & TLB_INVALID_MASK)) {
goto fail;
}
- ptw->out_phys = full->phys_addr;
+ ptw->out_phys = full->phys_addr | (addr & ~TARGET_PAGE_MASK);
ptw->out_rw = full->prot & PAGE_WRITE;
pte_attrs = full->pte_attrs;
pte_secure = full->attrs.secure;
--
2.34.1

From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

PL011 can be in either of 2 modes depending guest config: FIFO and
single register. The last mode could be viewed as a 1-element-deep FIFO.

Current code open-codes a bunch of depth-dependent logic. Refactor FIFO
depth handling code to isolate calculating current FIFO depth.

One functional (albeit guest-invisible) side-effect of this change is
that previously we would always increment s->read_pos in UARTDR read
handler even if FIFO was disabled, now we are limiting read_pos to not
exceed FIFO depth (read_pos itself is reset to 0 if user disables FIFO).

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230123162304.26254-2-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/char/pl011.h | 5 ++++-
hw/char/pl011.c | 30 ++++++++++++++++++------------
2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/include/hw/char/pl011.h b/include/hw/char/pl011.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/char/pl011.h
+++ b/include/hw/char/pl011.h
@@ -XXX,XX +XXX,XX @@ OBJECT_DECLARE_SIMPLE_TYPE(PL011State, PL011)
/* This shares the same struct (and cast macro) as the base pl011 device */
#define TYPE_PL011_LUMINARY "pl011_luminary"

+/* Depth of UART FIFO in bytes, when FIFO mode is enabled (else depth == 1) */
+#define PL011_FIFO_DEPTH 16
+
struct PL011State {
SysBusDevice parent_obj;

@@ -XXX,XX +XXX,XX @@ struct PL011State {
uint32_t dmacr;
uint32_t int_enabled;
uint32_t int_level;
- uint32_t read_fifo[16];
+ uint32_t read_fifo[PL011_FIFO_DEPTH];
uint32_t ilpr;
uint32_t ibrd;
uint32_t fbrd;
diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static void pl011_update(PL011State *s)
}
}

+static bool pl011_is_fifo_enabled(PL011State *s)
+{
+ return (s->lcr & 0x10) != 0;
+}
+
+static inline unsigned pl011_get_fifo_depth(PL011State *s)
+{
+ /* Note: FIFO depth is expected to be power-of-2 */
+ return pl011_is_fifo_enabled(s) ? PL011_FIFO_DEPTH : 1;
+}
+
static uint64_t pl011_read(void *opaque, hwaddr offset,
unsigned size)
{
@@ -XXX,XX +XXX,XX @@ static uint64_t pl011_read(void *opaque, hwaddr offset,
c = s->read_fifo[s->read_pos];
if (s->read_count > 0) {
s->read_count--;
- if (++s->read_pos == 16)
- s->read_pos = 0;
+ s->read_pos = (s->read_pos + 1) & (pl011_get_fifo_depth(s) - 1);
}
if (s->read_count == 0) {
s->flags |= PL011_FLAG_RXFE;
@@ -XXX,XX +XXX,XX @@ static int pl011_can_receive(void *opaque)
PL011State *s = (PL011State *)opaque;
int r;

- if (s->lcr & 0x10) {
- r = s->read_count < 16;
- } else {
- r = s->read_count < 1;
- }
+ r = s->read_count < pl011_get_fifo_depth(s);
trace_pl011_can_receive(s->lcr, s->read_count, r);
return r;
}
@@ -XXX,XX +XXX,XX @@ static void pl011_put_fifo(void *opaque, uint32_t value)
{
PL011State *s = (PL011State *)opaque;
int slot;
+ unsigned pipe_depth;

- slot = s->read_pos + s->read_count;
- if (slot >= 16)
- slot -= 16;
+ pipe_depth = pl011_get_fifo_depth(s);
+ slot = (s->read_pos + s->read_count) & (pipe_depth - 1);
s->read_fifo[slot] = value;
s->read_count++;
s->flags &= ~PL011_FLAG_RXFE;
trace_pl011_put_fifo(value, s->read_count);
- if (!(s->lcr & 0x10) || s->read_count == 16) {
+ if (s->read_count == pipe_depth) {
trace_pl011_put_fifo_full();
s->flags |= PL011_FLAG_RXFF;
}
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl011 = {
VMSTATE_UINT32(dmacr, PL011State),
VMSTATE_UINT32(int_enabled, PL011State),
VMSTATE_UINT32(int_level, PL011State),
- VMSTATE_UINT32_ARRAY(read_fifo, PL011State, 16),
+ VMSTATE_UINT32_ARRAY(read_fifo, PL011State, PL011_FIFO_DEPTH),
VMSTATE_UINT32(ilpr, PL011State),
VMSTATE_UINT32(ibrd, PL011State),
VMSTATE_UINT32(fbrd, PL011State),
--
2.34.1

From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

Previous change slightly modified the way we handle data writes when
FIFO is disabled. Previously we kept incrementing read_pos and were
storing data at that position, although we only have a
single-register-deep FIFO now. Then we changed it to always store data
at pos 0.

If guest disables FIFO and the proceeds to read data, it will work out
fine, because we still read from current read_pos before setting it to
0.

However, to make code less fragile, introduce a post_load hook for
PL011State and move fixup read FIFO state when FIFO is disabled. Since
we are introducing a post_load hook, also do some sanity checking on
untrusted incoming input state.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Message-id: 20230123162304.26254-3-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/char/pl011.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pl011_clock = {
}
};

+static int pl011_post_load(void *opaque, int version_id)
+{
+ PL011State* s = opaque;
+
+ /* Sanity-check input state */
+ if (s->read_pos >= ARRAY_SIZE(s->read_fifo) ||
+ s->read_count > ARRAY_SIZE(s->read_fifo)) {
+ return -1;
+ }
+
+ if (!pl011_is_fifo_enabled(s) && s->read_count > 0 && s->read_pos > 0) {
+ /*
+ * Older versions of PL011 didn't ensure that the single
+ * character in the FIFO in FIFO-disabled mode is in
+ * element 0 of the array; convert to follow the current
+ * code's assumptions.
+ */
+ s->read_fifo[0] = s->read_fifo[s->read_pos];
+ s->read_pos = 0;
+ }
+
+ return 0;
+}
+
static const VMStateDescription vmstate_pl011 = {
.name = "pl011",
.version_id = 2,
.minimum_version_id = 2,
+ .post_load = pl011_post_load,
.fields = (VMStateField[]) {
VMSTATE_UINT32(readbuff, PL011State),
VMSTATE_UINT32(flags, PL011State),
--
2.34.1

From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

PL011 currently lacks a reset method. Implement it.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230123162304.26254-4-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/char/pl011.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static void pl011_init(Object *obj)
s->clk = qdev_init_clock_in(DEVICE(obj), "clk", pl011_clock_update, s,
ClockUpdate);

- s->read_trigger = 1;
- s->ifl = 0x12;
- s->cr = 0x300;
- s->flags = 0x90;
-
s->id = pl011_id_arm;
}

@@ -XXX,XX +XXX,XX @@ static void pl011_realize(DeviceState *dev, Error **errp)
pl011_event, NULL, s, NULL, true);
}

+static void pl011_reset(DeviceState *dev)
+{
+ PL011State *s = PL011(dev);
+
+ s->lcr = 0;
+ s->rsr = 0;
+ s->dmacr = 0;
+ s->int_enabled = 0;
+ s->int_level = 0;
+ s->ilpr = 0;
+ s->ibrd = 0;
+ s->fbrd = 0;
+ s->read_pos = 0;
+ s->read_count = 0;
+ s->read_trigger = 1;
+ s->ifl = 0x12;
+ s->cr = 0x300;
+ s->flags = 0x90;
+}
+
static void pl011_class_init(ObjectClass *oc, void *data)
{
DeviceClass *dc = DEVICE_CLASS(oc);

dc->realize = pl011_realize;
+ dc->reset = pl011_reset;
dc->vmsd = &vmstate_pl011;
device_class_set_props(dc, pl011_properties);
--
2.34.1

From: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>

Current FIFO handling code does not reset RXFE/RXFF flags when guest
resets FIFO by writing to UARTLCR register, although internal FIFO state
is reset to 0 read count. Actual guest-visible flag update will happen
only on next data read or write attempt. As a result of that any guest
that expects RXFE flag to be set (and RXFF to be cleared) after resetting
FIFO will never see that happen.

Signed-off-by: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230123162304.26254-5-eiakovlev@linux.microsoft.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/char/pl011.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static inline unsigned pl011_get_fifo_depth(PL011State *s)
return pl011_is_fifo_enabled(s) ? PL011_FIFO_DEPTH : 1;
}

+static inline void pl011_reset_fifo(PL011State *s)
+{
+ s->read_count = 0;
+ s->read_pos = 0;
+
+ /* Reset FIFO flags */
+ s->flags &= ~(PL011_FLAG_RXFF | PL011_FLAG_TXFF);
+ s->flags |= PL011_FLAG_RXFE | PL011_FLAG_TXFE;
+}
+
static uint64_t pl011_read(void *opaque, hwaddr offset,
unsigned size)
{
@@ -XXX,XX +XXX,XX @@ static void pl011_write(void *opaque, hwaddr offset,
case 11: /* UARTLCR_H */
/* Reset the FIFO state on FIFO enable or disable */
if ((s->lcr ^ value) & 0x10) {
- s->read_count = 0;
- s->read_pos = 0;
+ pl011_reset_fifo(s);
}
if ((s->lcr ^ value) & 0x1) {
int break_enable = value & 0x1;
@@ -XXX,XX +XXX,XX @@ static void pl011_reset(DeviceState *dev)
s->ilpr = 0;
s->ibrd = 0;
s->fbrd = 0;
- s->read_pos = 0;
- s->read_count = 0;
s->read_trigger = 1;
s->ifl = 0x12;
s->cr = 0x300;
- s->flags = 0x90;
+ s->flags = 0;
+ pl011_reset_fifo(s);
}

static void pl011_class_init(ObjectClass *oc, void *data)
--
2.34.1

From: Alexander Graf <agraf@csgraf.de>

We currently only support GICv2 emulation. To also support GICv3, we will
need to pass a few system registers into their respective handler functions.

This patch adds support for HVF to call into the TCG callbacks for GICv3
system register handlers. This is safe because the GICv3 TCG code is generic
as long as we limit ourselves to EL0 and EL1 - which are the only modes
supported by HVF.

To make sure nobody trips over that, we also annotate callbacks that don't
work in HVF mode, such as EL state change hooks.

With GICv3 support in place, we can run with more than 8 vCPUs.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Message-id: 20230128224459.70676-1-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gicv3_cpuif.c | 16 +++-
target/arm/hvf/hvf.c | 151 ++++++++++++++++++++++++++++++++++++
target/arm/hvf/trace-events | 2 +
3 files changed, 168 insertions(+), 1 deletion(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@
#include "hw/irq.h"
#include "cpu.h"
#include "target/arm/cpregs.h"
+#include "sysemu/tcg.h"
+#include "sysemu/qtest.h"

/*
* Special case return value from hppvi_index(); must be larger than
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
* which case we'd get the wrong value.
* So instead we define the regs with no ri->opaque info, and
* get back to the GICv3CPUState from the CPUARMState.
+ *
+ * These CP regs callbacks can be called from either TCG or HVF code.
*/
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);

@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
define_arm_cp_regs(cpu, gicv3_cpuif_ich_apxr23_reginfo);
}
}
- arm_register_el_change_hook(cpu, gicv3_cpuif_el_change_hook, cs);
+ if (tcg_enabled() || qtest_enabled()) {
+ /*
+ * We can only trap EL changes with TCG. However the GIC interrupt
+ * state only changes on EL changes involving EL2 or EL3, so for
+ * the non-TCG case this is OK, as EL2 and EL3 can't exist.
+ */
+ arm_register_el_change_hook(cpu, gicv3_cpuif_el_change_hook, cs);
+ } else {
+ assert(!arm_feature(&cpu->env, ARM_FEATURE_EL2));
+ assert(!arm_feature(&cpu->env, ARM_FEATURE_EL3));
+ }
}
}
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@
#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0)
#define SYSREG_PMCCFILTR_EL0 SYSREG(3, 3, 14, 15, 7)

+#define SYSREG_ICC_AP0R0_EL1 SYSREG(3, 0, 12, 8, 4)
+#define SYSREG_ICC_AP0R1_EL1 SYSREG(3, 0, 12, 8, 5)
+#define SYSREG_ICC_AP0R2_EL1 SYSREG(3, 0, 12, 8, 6)
+#define SYSREG_ICC_AP0R3_EL1 SYSREG(3, 0, 12, 8, 7)
+#define SYSREG_ICC_AP1R0_EL1 SYSREG(3, 0, 12, 9, 0)
+#define SYSREG_ICC_AP1R1_EL1 SYSREG(3, 0, 12, 9, 1)
+#define SYSREG_ICC_AP1R2_EL1 SYSREG(3, 0, 12, 9, 2)
+#define SYSREG_ICC_AP1R3_EL1 SYSREG(3, 0, 12, 9, 3)
+#define SYSREG_ICC_ASGI1R_EL1 SYSREG(3, 0, 12, 11, 6)
+#define SYSREG_ICC_BPR0_EL1 SYSREG(3, 0, 12, 8, 3)
+#define SYSREG_ICC_BPR1_EL1 SYSREG(3, 0, 12, 12, 3)
+#define SYSREG_ICC_CTLR_EL1 SYSREG(3, 0, 12, 12, 4)
+#define SYSREG_ICC_DIR_EL1 SYSREG(3, 0, 12, 11, 1)
+#define SYSREG_ICC_EOIR0_EL1 SYSREG(3, 0, 12, 8, 1)
+#define SYSREG_ICC_EOIR1_EL1 SYSREG(3, 0, 12, 12, 1)
+#define SYSREG_ICC_HPPIR0_EL1 SYSREG(3, 0, 12, 8, 2)
+#define SYSREG_ICC_HPPIR1_EL1 SYSREG(3, 0, 12, 12, 2)
+#define SYSREG_ICC_IAR0_EL1 SYSREG(3, 0, 12, 8, 0)
+#define SYSREG_ICC_IAR1_EL1 SYSREG(3, 0, 12, 12, 0)
+#define SYSREG_ICC_IGRPEN0_EL1 SYSREG(3, 0, 12, 12, 6)
+#define SYSREG_ICC_IGRPEN1_EL1 SYSREG(3, 0, 12, 12, 7)
+#define SYSREG_ICC_PMR_EL1 SYSREG(3, 0, 4, 6, 0)
+#define SYSREG_ICC_RPR_EL1 SYSREG(3, 0, 12, 11, 3)
+#define SYSREG_ICC_SGI0R_EL1 SYSREG(3, 0, 12, 11, 7)
+#define SYSREG_ICC_SGI1R_EL1 SYSREG(3, 0, 12, 11, 5)
+#define SYSREG_ICC_SRE_EL1 SYSREG(3, 0, 12, 12, 5)
+
#define WFX_IS_WFE (1 << 0)

#define TMR_CTL_ENABLE (1 << 0)
@@ -XXX,XX +XXX,XX @@ static bool is_id_sysreg(uint32_t reg)
SYSREG_CRM(reg) < 8;
}

+static uint32_t hvf_reg2cp_reg(uint32_t reg)
+{
+ return ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP,
+ (reg >> SYSREG_CRN_SHIFT) & SYSREG_CRN_MASK,
+ (reg >> SYSREG_CRM_SHIFT) & SYSREG_CRM_MASK,
+ (reg >> SYSREG_OP0_SHIFT) & SYSREG_OP0_MASK,
+ (reg >> SYSREG_OP1_SHIFT) & SYSREG_OP1_MASK,
+ (reg >> SYSREG_OP2_SHIFT) & SYSREG_OP2_MASK);
+}
+
+static bool hvf_sysreg_read_cp(CPUState *cpu, uint32_t reg, uint64_t *val)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+ const ARMCPRegInfo *ri;
+
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_reg2cp_reg(reg));
+ if (ri) {
+ if (ri->accessfn) {
+ if (ri->accessfn(env, ri, true) != CP_ACCESS_OK) {
+ return false;
+ }
+ }
+ if (ri->type & ARM_CP_CONST) {
+ *val = ri->resetvalue;
+ } else if (ri->readfn) {
+ *val = ri->readfn(env, ri);
+ } else {
+ *val = CPREG_FIELD64(env, ri);
+ }
+ trace_hvf_vgic_read(ri->name, *val);
+ return true;
+ }
+
+ return false;
+}
+
static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
{
ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_read(CPUState *cpu, uint32_t reg, uint32_t rt)
case SYSREG_OSDLR_EL1:
/* Dummy register */
break;
+ case SYSREG_ICC_AP0R0_EL1:
+ case SYSREG_ICC_AP0R1_EL1:
+ case SYSREG_ICC_AP0R2_EL1:
+ case SYSREG_ICC_AP0R3_EL1:
+ case SYSREG_ICC_AP1R0_EL1:
+ case SYSREG_ICC_AP1R1_EL1:
+ case SYSREG_ICC_AP1R2_EL1:
+ case SYSREG_ICC_AP1R3_EL1:
+ case SYSREG_ICC_ASGI1R_EL1:
+ case SYSREG_ICC_BPR0_EL1:
+ case SYSREG_ICC_BPR1_EL1:
+ case SYSREG_ICC_DIR_EL1:
+ case SYSREG_ICC_EOIR0_EL1:
+ case SYSREG_ICC_EOIR1_EL1:
+ case SYSREG_ICC_HPPIR0_EL1:
+ case SYSREG_ICC_HPPIR1_EL1:
+ case SYSREG_ICC_IAR0_EL1:
+ case SYSREG_ICC_IAR1_EL1:
+ case SYSREG_ICC_IGRPEN0_EL1:
+ case SYSREG_ICC_IGRPEN1_EL1:
+ case SYSREG_ICC_PMR_EL1:
+ case SYSREG_ICC_SGI0R_EL1:
+ case SYSREG_ICC_SGI1R_EL1:
+ case SYSREG_ICC_SRE_EL1:
+ case SYSREG_ICC_CTLR_EL1:
+ /* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
+ if (!hvf_sysreg_read_cp(cpu, reg, &val)) {
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+ }
+ break;
default:
if (is_id_sysreg(reg)) {
/* ID system registers read as RES0 */
@@ -XXX,XX +XXX,XX @@ static void pmswinc_write(CPUARMState *env, uint64_t value)
}
}

+static bool hvf_sysreg_write_cp(CPUState *cpu, uint32_t reg, uint64_t val)
+{
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
+ CPUARMState *env = &arm_cpu->env;
+ const ARMCPRegInfo *ri;
+
+ ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_reg2cp_reg(reg));
+
+ if (ri) {
+ if (ri->accessfn) {
+ if (ri->accessfn(env, ri, false) != CP_ACCESS_OK) {
+ return false;
+ }
+ }
+ if (ri->writefn) {
+ ri->writefn(env, ri, val);
+ } else {
+ CPREG_FIELD64(env, ri) = val;
+ }
+
+ trace_hvf_vgic_write(ri->name, val);
+ return true;
+ }
+
+ return false;
+}
+
static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
{
ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -XXX,XX +XXX,XX @@ static int hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
case SYSREG_OSDLR_EL1:
/* Dummy register */
break;
+ case SYSREG_ICC_AP0R0_EL1:
+ case SYSREG_ICC_AP0R1_EL1:
+ case SYSREG_ICC_AP0R2_EL1:
+ case SYSREG_ICC_AP0R3_EL1:
+ case SYSREG_ICC_AP1R0_EL1:
+ case SYSREG_ICC_AP1R1_EL1:
+ case SYSREG_ICC_AP1R2_EL1:
+ case SYSREG_ICC_AP1R3_EL1:
+ case SYSREG_ICC_ASGI1R_EL1:
+ case SYSREG_ICC_BPR0_EL1:
+ case SYSREG_ICC_BPR1_EL1:
+ case SYSREG_ICC_CTLR_EL1:
+ case SYSREG_ICC_DIR_EL1:
+ case SYSREG_ICC_EOIR0_EL1:
+ case SYSREG_ICC_EOIR1_EL1:
+ case SYSREG_ICC_HPPIR0_EL1:
+ case SYSREG_ICC_HPPIR1_EL1:
+ case SYSREG_ICC_IAR0_EL1:
+ case SYSREG_ICC_IAR1_EL1:
+ case SYSREG_ICC_IGRPEN0_EL1:
+ case SYSREG_ICC_IGRPEN1_EL1:
+ case SYSREG_ICC_PMR_EL1:
+ case SYSREG_ICC_SGI0R_EL1:
+ case SYSREG_ICC_SGI1R_EL1:
+ case SYSREG_ICC_SRE_EL1:
+ /* Call the TCG sysreg handler. This is only safe for GICv3 regs. */
+ if (!hvf_sysreg_write_cp(cpu, reg, val)) {
+ hvf_raise_exception(cpu, EXCP_UDEF, syn_uncategorized());
+ }
+ break;
default:
cpu_synchronize_state(cpu);
trace_hvf_unhandled_sysreg_write(env->pc, reg,
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/trace-events
+++ b/target/arm/hvf/trace-events
@@ -XXX,XX +XXX,XX @@ hvf_unknown_hvc(uint64_t x0) "unknown HVC! 0x%016"PRIx64
hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
+hvf_vgic_write(const char *name, uint64_t val) "vgic write to %s [val=0x%016"PRIx64"]"
+hvf_vgic_read(const char *name, uint64_t val) "vgic read from %s [val=0x%016"PRIx64"]"
--
2.34.1

From: Alexander Graf <agraf@csgraf.de>

Up to now, the finalize_gic_version() code open coded what is essentially
a support bitmap match between host/emulation environment and desired
target GIC type.

This open coding leads to undesirable side effects. For example, a VM with
KVM and -smp 10 will automatically choose GICv3 while the same command
line with TCG will stay on GICv2 and fail the launch.

This patch combines the TCG and KVM matching code paths by making
everything a 2 pass process. First, we determine which GIC versions the
current environment is able to support, then we go through a single
state machine to determine which target GIC mode that means for us.

After this patch, the only user noticable changes should be consolidated
error messages as well as TCG -M virt supporting -smp > 8 automatically.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Message-id: 20221223090107.98888-2-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/virt.h | 15 ++--
hw/arm/virt.c | 198 ++++++++++++++++++++++--------------------
2 files changed, 112 insertions(+), 101 deletions(-)

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@ typedef enum VirtMSIControllerType {
} VirtMSIControllerType;

typedef enum VirtGICType {
- VIRT_GIC_VERSION_MAX,
- VIRT_GIC_VERSION_HOST,
- VIRT_GIC_VERSION_2,
- VIRT_GIC_VERSION_3,
- VIRT_GIC_VERSION_4,
+ VIRT_GIC_VERSION_MAX = 0,
+ VIRT_GIC_VERSION_HOST = 1,
+ /* The concrete GIC values have to match the GIC version number */
+ VIRT_GIC_VERSION_2 = 2,
+ VIRT_GIC_VERSION_3 = 3,
+ VIRT_GIC_VERSION_4 = 4,
 VIRT_GIC_VERSION_NOSEL,
} VirtGICType;

+#define VIRT_GIC_VERSION_2_MASK BIT(VIRT_GIC_VERSION_2)
+#define VIRT_GIC_VERSION_3_MASK BIT(VIRT_GIC_VERSION_3)
+#define VIRT_GIC_VERSION_4_MASK BIT(VIRT_GIC_VERSION_4)
+
struct VirtMachineClass {
MachineClass parent;
bool disallow_affinity_adjustment;
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
}
}

+static VirtGICType finalize_gic_version_do(const char *accel_name,
+ VirtGICType gic_version,
+ int gics_supported,
+ unsigned int max_cpus)
+{
+ /* Convert host/max/nosel to GIC version number */
+ switch (gic_version) {
+ case VIRT_GIC_VERSION_HOST:
+ if (!kvm_enabled()) {
+ error_report("gic-version=host requires KVM");
+ exit(1);
+ }
+
+ /* For KVM, gic-version=host means gic-version=max */
+ return finalize_gic_version_do(accel_name, VIRT_GIC_VERSION_MAX,
+ gics_supported, max_cpus);
+ case VIRT_GIC_VERSION_MAX:
+ if (gics_supported & VIRT_GIC_VERSION_4_MASK) {
+ gic_version = VIRT_GIC_VERSION_4;
+ } else if (gics_supported & VIRT_GIC_VERSION_3_MASK) {
+ gic_version = VIRT_GIC_VERSION_3;
+ } else {
+ gic_version = VIRT_GIC_VERSION_2;
+ }
+ break;
+ case VIRT_GIC_VERSION_NOSEL:
+ if ((gics_supported & VIRT_GIC_VERSION_2_MASK) &&
+ max_cpus <= GIC_NCPU) {
+ gic_version = VIRT_GIC_VERSION_2;
+ } else if (gics_supported & VIRT_GIC_VERSION_3_MASK) {
+ /*
+ * in case the host does not support v2 emulation or
+ * the end-user requested more than 8 VCPUs we now default
+ * to v3. In any case defaulting to v2 would be broken.
+ */
+ gic_version = VIRT_GIC_VERSION_3;
+ } else if (max_cpus > GIC_NCPU) {
+ error_report("%s only supports GICv2 emulation but more than 8 "
+ "vcpus are requested", accel_name);
+ exit(1);
+ }
+ break;
+ case VIRT_GIC_VERSION_2:
+ case VIRT_GIC_VERSION_3:
+ case VIRT_GIC_VERSION_4:
+ break;
+ }
+
+ /* Check chosen version is effectively supported */
+ switch (gic_version) {
+ case VIRT_GIC_VERSION_2:
+ if (!(gics_supported & VIRT_GIC_VERSION_2_MASK)) {
+ error_report("%s does not support GICv2 emulation", accel_name);
+ exit(1);
+ }
+ break;
+ case VIRT_GIC_VERSION_3:
+ if (!(gics_supported & VIRT_GIC_VERSION_3_MASK)) {
+ error_report("%s does not support GICv3 emulation", accel_name);
+ exit(1);
+ }
+ break;
+ case VIRT_GIC_VERSION_4:
+ if (!(gics_supported & VIRT_GIC_VERSION_4_MASK)) {
+ error_report("%s does not support GICv4 emulation, is virtualization=on?",
+ accel_name);
+ exit(1);
+ }
+ break;
+ default:
+ error_report("logic error in finalize_gic_version");
+ exit(1);
+ break;
+ }
+
+ return gic_version;
+}
+
/*
 * finalize_gic_version - Determines the final gic_version
 * according to the gic-version property
@@ -XXX,XX +XXX,XX @@ static void virt_set_memmap(VirtMachineState *vms, int pa_bits)
 */
static void finalize_gic_version(VirtMachineState *vms)
{
+ const char *accel_name = current_accel_name();
 unsigned int max_cpus = MACHINE(vms)->smp.max_cpus;
+ int gics_supported = 0;

- if (kvm_enabled()) {
- int probe_bitmap;
+ /* Determine which GIC versions the current environment supports */
+ if (kvm_enabled() && kvm_irqchip_in_kernel()) {
+ int probe_bitmap = kvm_arm_vgic_probe();

- if (!kvm_irqchip_in_kernel()) {
- switch (vms->gic_version) {
- case VIRT_GIC_VERSION_HOST:
- warn_report(
- "gic-version=host not relevant with kernel-irqchip=off "
- "as only userspace GICv2 is supported. Using v2 ...");
- return;
- case VIRT_GIC_VERSION_MAX:
- case VIRT_GIC_VERSION_NOSEL:
- vms->gic_version = VIRT_GIC_VERSION_2;
- return;
- case VIRT_GIC_VERSION_2:
- return;
- case VIRT_GIC_VERSION_3:
- error_report(
- "gic-version=3 is not supported with kernel-irqchip=off");
- exit(1);
- case VIRT_GIC_VERSION_4:
- error_report(
- "gic-version=4 is not supported with kernel-irqchip=off");
- exit(1);
- }
- }
-
- probe_bitmap = kvm_arm_vgic_probe();
 if (!probe_bitmap) {
 error_report("Unable to determine GIC version supported by host");
 exit(1);
 }

- switch (vms->gic_version) {
- case VIRT_GIC_VERSION_HOST:
- case VIRT_GIC_VERSION_MAX:
- if (probe_bitmap & KVM_ARM_VGIC_V3) {
- vms->gic_version = VIRT_GIC_VERSION_3;
- } else {
- vms->gic_version = VIRT_GIC_VERSION_2;
- }
- return;
- case VIRT_GIC_VERSION_NOSEL:
- if ((probe_bitmap & KVM_ARM_VGIC_V2) && max_cpus <= GIC_NCPU) {
- vms->gic_version = VIRT_GIC_VERSION_2;
- } else if (probe_bitmap & KVM_ARM_VGIC_V3) {
- /*
- * in case the host does not support v2 in-kernel emulation or
- * the end-user requested more than 8 VCPUs we now default
- * to v3. In any case defaulting to v2 would be broken.
- */
- vms->gic_version = VIRT_GIC_VERSION_3;
- } else if (max_cpus > GIC_NCPU) {
- error_report("host only supports in-kernel GICv2 emulation "
- "but more than 8 vcpus are requested");
- exit(1);
- }
- break;
- case VIRT_GIC_VERSION_2:
- case VIRT_GIC_VERSION_3:
- break;
- case VIRT_GIC_VERSION_4:
- error_report("gic-version=4 is not supported with KVM");
- exit(1);
+ if (probe_bitmap & KVM_ARM_VGIC_V2) {
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
 }
-
- /* Check chosen version is effectively supported by the host */
- if (vms->gic_version == VIRT_GIC_VERSION_2 &&
- !(probe_bitmap & KVM_ARM_VGIC_V2)) {
- error_report("host does not support in-kernel GICv2 emulation");
- exit(1);
- } else if (vms->gic_version == VIRT_GIC_VERSION_3 &&
- !(probe_bitmap & KVM_ARM_VGIC_V3)) {
- error_report("host does not support in-kernel GICv3 emulation");
- exit(1);
+ if (probe_bitmap & KVM_ARM_VGIC_V3) {
+ gics_supported |= VIRT_GIC_VERSION_3_MASK;
 }
- return;
- }
-
- /* TCG mode */
- switch (vms->gic_version) {
- case VIRT_GIC_VERSION_NOSEL:
- vms->gic_version = VIRT_GIC_VERSION_2;
- break;
- case VIRT_GIC_VERSION_MAX:
+ } else if (kvm_enabled() && !kvm_irqchip_in_kernel()) {
+ /* KVM w/o kernel irqchip can only deal with GICv2 */
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
+ accel_name = "KVM with kernel-irqchip=off";
+ } else {
+ gics_supported |= VIRT_GIC_VERSION_2_MASK;
 if (module_object_class_by_name("arm-gicv3")) {
- /* CONFIG_ARM_GICV3_TCG was set */
+ gics_supported |= VIRT_GIC_VERSION_3_MASK;
 if (vms->virt) {
 /* GICv4 only makes sense if CPU has EL2 */
- vms->gic_version = VIRT_GIC_VERSION_4;
- } else {
- vms->gic_version = VIRT_GIC_VERSION_3;
+ gics_supported |= VIRT_GIC_VERSION_4_MASK;
 }
- } else {
- vms->gic_version = VIRT_GIC_VERSION_2;
 }
- break;
- case VIRT_GIC_VERSION_HOST:
- error_report("gic-version=host requires KVM");
- exit(1);
- case VIRT_GIC_VERSION_4:
- if (!vms->virt) {
- error_report("gic-version=4 requires virtualization enabled");
- exit(1);
- }
- break;
- case VIRT_GIC_VERSION_2:
- case VIRT_GIC_VERSION_3:
- break;
 }
+
+ /*
+ * Then convert helpers like host/max to concrete GIC versions and ensure
+ * the desired version is supported
+ */
+ vms->gic_version = finalize_gic_version_do(accel_name, vms->gic_version,
+ gics_supported, max_cpus);
}

/*
--
2.34.1

1
From: Shannon Zhao <zhaoshenglong@huawei.com>
1
From: Alexander Graf <agraf@csgraf.de>
2
2
3
acpi_data_push uses g_array_set_size to resize the memory size. If there
3
Let's explicitly list out all accelerators that we support when trying to
4
is not enough contiguous memory, the address will be changed. So the previous
4
determine the supported set of GIC versions. KVM was already separate, so
5
pointer could not be used any more. It must update the pointer and use
5
the only missing one is HVF which simply reuses all of TCG's emulation
6
the new one.
6
code and thus has the same compatibility matrix.
7
7
8
Also, the previous code wrongly used the le32 conversion of iort->node_offset
8
Signed-off-by: Alexander Graf <agraf@csgraf.de>
9
for subsequent computations, which will result in an incorrect value if the host is
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
not little endian. So use the non-converted one instead.
10
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
11
11
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
12
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
13
Message-id: 20221223090107.98888-3-agraf@csgraf.de
14
Message-id: 1527663951-14552-1-git-send-email-zhaoshenglong@huawei.com
14
[PMM: Added qtest to the list of accelerators]
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
16
---
17
hw/arm/virt-acpi-build.c | 20 +++++++++++++++-----
17
hw/arm/virt.c | 7 ++++++-
18
1 file changed, 15 insertions(+), 5 deletions(-)
18
1 file changed, 6 insertions(+), 1 deletion(-)
19
19
20
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
20
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
21
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/virt-acpi-build.c
22
--- a/hw/arm/virt.c
23
+++ b/hw/arm/virt-acpi-build.c
23
+++ b/hw/arm/virt.c
24
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
24
@@ -XXX,XX +XXX,XX @@
25
AcpiIortItsGroup *its;
25
#include "sysemu/numa.h"
26
AcpiIortTable *iort;
26
#include "sysemu/runstate.h"
27
AcpiIortSmmu3 *smmu;
27
#include "sysemu/tpm.h"
28
- size_t node_size, iort_length, smmu_offset = 0;
28
+#include "sysemu/tcg.h"
29
+ size_t node_size, iort_node_offset, iort_length, smmu_offset = 0;
29
#include "sysemu/kvm.h"
30
AcpiIortRC *rc;
30
#include "sysemu/hvf.h"
31
31
+#include "sysemu/qtest.h"
32
iort = acpi_data_push(table_data, sizeof(*iort));
32
#include "hw/loader.h"
33
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
33
#include "qapi/error.h"
34
34
#include "qemu/bitops.h"
35
iort_length = sizeof(*iort);
35
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
36
iort->node_count = cpu_to_le32(nb_nodes);
36
/* KVM w/o kernel irqchip can only deal with GICv2 */
37
- iort->node_offset = cpu_to_le32(sizeof(*iort));
37
gics_supported |= VIRT_GIC_VERSION_2_MASK;
38
+ /*
38
accel_name = "KVM with kernel-irqchip=off";
39
+ * Use a copy in case table_data->data moves during acpi_data_push
39
- } else {
40
+ * operations.
40
+ } else if (tcg_enabled() || hvf_enabled() || qtest_enabled()) {
41
+ */
41
gics_supported |= VIRT_GIC_VERSION_2_MASK;
42
+ iort_node_offset = sizeof(*iort);
42
if (module_object_class_by_name("arm-gicv3")) {
43
+ iort->node_offset = cpu_to_le32(iort_node_offset);
43
gics_supported |= VIRT_GIC_VERSION_3_MASK;
44
44
@@ -XXX,XX +XXX,XX @@ static void finalize_gic_version(VirtMachineState *vms)
45
/* ITS group node */
45
gics_supported |= VIRT_GIC_VERSION_4_MASK;
46
node_size = sizeof(*its) + sizeof(uint32_t);
46
}
47
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
47
}
48
int irq = vms->irqmap[VIRT_SMMU];
48
+ } else {
49
49
+ error_report("Unsupported accelerator, can not determine GIC support");
50
/* SMMUv3 node */
50
+ exit(1);
51
- smmu_offset = iort->node_offset + node_size;
52
+ smmu_offset = iort_node_offset + node_size;
53
node_size = sizeof(*smmu) + sizeof(*idmap);
54
iort_length += node_size;
55
smmu = acpi_data_push(table_data, node_size);
56
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
57
idmap->id_count = cpu_to_le32(0xFFFF);
58
idmap->output_base = 0;
59
/* output IORT node is the ITS group node (the first node) */
60
- idmap->output_reference = cpu_to_le32(iort->node_offset);
61
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
62
}
51
}
63
52
64
/* Root Complex Node */
53
/*
65
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
66
idmap->output_reference = cpu_to_le32(smmu_offset);
67
} else {
68
/* output IORT node is the ITS group node (the first node) */
69
- idmap->output_reference = cpu_to_le32(iort->node_offset);
70
+ idmap->output_reference = cpu_to_le32(iort_node_offset);
71
}
72
73
+ /*
74
+ * Update the pointer address in case table_data->data moves during above
75
+ * acpi_data_push operations.
76
+ */
77
+ iort = (AcpiIortTable *)(table_data->data + iort_start);
78
iort->length = cpu_to_le32(iort_length);
79
80
build_header(linker, table_data, (void *)(table_data->data + iort_start),
81
--
54
--
82
2.17.1
55
2.34.1
83
56
84
57
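The heart of the IORT fix above is that acpi_data_push() can grow (and therefore move) the GArray backing table_data, so offsets into the table stay valid but pointers do not. A minimal sketch of the resulting pattern, using the names from the hunks above (iort_start is the offset of the IORT header, which the function already tracks):

    AcpiIortTable *iort = acpi_data_push(table_data, sizeof(*iort));

    /* keep offsets, not pointers, across further pushes */
    size_t iort_node_offset = sizeof(*iort);

    /* ... later acpi_data_push() calls may reallocate table_data->data ... */

    /* re-fetch the header pointer from its stable offset before writing it */
    iort = (AcpiIortTable *)(table_data->data + iort_start);
    iort->length = cpu_to_le32(iort_length);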
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
2
add MemTxAttrs as an argument to address_space_translate_iommu().
3
2
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Cortex-A76 supports 40 bits of address space. sbsa-ref's memory
4
starts above this limit.
5
6
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-14-peter.maydell@linaro.org
9
Message-id: 20230126114416.2447685-1-marcin.juszkiewicz@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
11
---
9
exec.c | 8 +++++---
12
hw/arm/sbsa-ref.c | 1 -
10
1 file changed, 5 insertions(+), 3 deletions(-)
13
1 file changed, 1 deletion(-)
11
14
12
diff --git a/exec.c b/exec.c
15
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
17
--- a/hw/arm/sbsa-ref.c
15
+++ b/exec.c
18
+++ b/hw/arm/sbsa-ref.c
16
@@ -XXX,XX +XXX,XX @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
19
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
17
* @is_write: whether the translation operation is for write
20
static const char * const valid_cpus[] = {
18
* @is_mmio: whether this can be MMIO, set true if it can
21
ARM_CPU_TYPE_NAME("cortex-a57"),
19
* @target_as: the address space targeted by the IOMMU
22
ARM_CPU_TYPE_NAME("cortex-a72"),
20
+ * @attrs: transaction attributes
23
- ARM_CPU_TYPE_NAME("cortex-a76"),
21
*
24
ARM_CPU_TYPE_NAME("neoverse-n1"),
22
* This function is called from RCU critical section. It is the common
25
ARM_CPU_TYPE_NAME("max"),
23
* part of flatview_do_translate and address_space_translate_cached.
26
};
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
25
hwaddr *page_mask_out,
26
bool is_write,
27
bool is_mmio,
28
- AddressSpace **target_as)
29
+ AddressSpace **target_as,
30
+ MemTxAttrs attrs)
31
{
32
MemoryRegionSection *section;
33
hwaddr page_mask = (hwaddr)-1;
34
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
35
return address_space_translate_iommu(iommu_mr, xlat,
36
plen_out, page_mask_out,
37
is_write, is_mmio,
38
- target_as);
39
+ target_as, attrs);
40
}
41
if (page_mask_out) {
42
/* Not behind an IOMMU, use default page size. */
43
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate_cached(
44
45
section = address_space_translate_iommu(iommu_mr, xlat, plen,
46
NULL, is_write, true,
47
- &target_as);
48
+ &target_as, attrs);
49
return section.mr;
50
}
51
52
--
27
--
53
2.17.1
28
2.34.1
54
29
55
30
New patch

The encodings 0,0,C7,C9,0 and 0,0,C7,C9,1 are AT S1E1RP and AT
S1E1WP, but our ARMCPRegInfo definitions for them incorrectly name
them AT S1E1R and AT S1E1W (which are entirely different
instructions). Fix the names.

(This has no guest-visible effect as the names are for debug purposes
only.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-2-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {

 #ifndef CONFIG_USER_ONLY
 static const ARMCPRegInfo ats1e1_reginfo[] = {
-    { .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
+    { .name = "AT_S1E1RP", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
       .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
       .writefn = ats_write64 },
-    { .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
+    { .name = "AT_S1E1WP", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
       .writefn = ats_write64 },
--
2.34.1
New patch

The AArch32 ATS12NSO* address translation operations are supposed to
trap to either EL2 or EL3 if they're executed at Secure EL1 (which
can only happen if EL3 is AArch64). We implement this, but we got
the syndrome value wrong: like other traps to EL2 or EL3 on an
AArch32 cpreg access, they should report the 0x3 syndrome, not the
0x0 'uncategorized' syndrome. This is clear in the access pseudocode
for these instructions.

Fix the syndrome value for these operations by correcting the
returned value from the ats_access() function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-3-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
     if (arm_current_el(env) == 1) {
         if (arm_is_secure_below_el3(env)) {
             if (env->cp15.scr_el3 & SCR_EEL2) {
-                return CP_ACCESS_TRAP_UNCATEGORIZED_EL2;
+                return CP_ACCESS_TRAP_EL2;
             }
-            return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
+            return CP_ACCESS_TRAP_EL3;
         }
         return CP_ACCESS_TRAP_UNCATEGORIZED;
     }
--
2.34.1
New patch

We added the CPAccessResult values CP_ACCESS_TRAP_UNCATEGORIZED_EL2
and CP_ACCESS_TRAP_UNCATEGORIZED_EL3 purely in order to use them in
the ats_access() function, but doing so was incorrect (a bug fixed in
a previous commit). There aren't any cases where we want an access
function to be able to request a trap to EL2 or EL3 with a zero
syndrome value, so remove these enum values.

As well as cleaning up dead code, the motivation here is that
we'd like to implement fine-grained-trap handling in
helper_access_check_cp_reg(). Although the fine-grained traps
to EL2 are always lower priority than trap-to-same-EL and
higher priority than trap-to-EL3, they are in the middle of
various other kinds of trap-to-EL2. Knowing that a trap-to-EL2
must always for us have the same syndrome (ie that an access
function will return CP_ACCESS_TRAP_EL2 and there is no other
kind of trap-to-EL2 enum value) means we don't have to try
to choose which of the two syndrome values to report if the
access would trap to EL2 both for the fine-grained-trap and
because the access function requires it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-4-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-4-peter.maydell@linaro.org
---
 target/arm/cpregs.h    | 4 ++--
 target/arm/op_helper.c | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ typedef enum CPAccessResult {
      * Access fails and results in an exception syndrome 0x0 ("uncategorized").
      * Note that this is not a catch-all case -- the set of cases which may
      * result in this failure is specifically defined by the architecture.
+     * This trap is always to the usual target EL, never directly to a
+     * specified target EL.
      */
     CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
-    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
-    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
 } CPAccessResult;

 typedef struct ARMCPRegInfo ARMCPRegInfo;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
     case CP_ACCESS_TRAP:
         break;
     case CP_ACCESS_TRAP_UNCATEGORIZED:
+        /* Only CP_ACCESS_TRAP traps are direct to a specified EL */
+        assert((res & CP_ACCESS_EL_MASK) == 0);
         if (cpu_isar_feature(aa64_ids, cpu) && isread &&
             arm_cpreg_in_idspace(ri)) {
             /*
--
2.34.1
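As the cpregs.h hunk above shows, a CPAccessResult packs the kind of failure in its upper bits and an optional explicit target EL in its low two bits. A small illustration (not part of the patch) of the encoding the new assert() relies on:

    /* Illustration only: 0 means "take the trap to the usual target EL" */
    static inline int cp_access_explicit_el(CPAccessResult res)
    {
        return res & CP_ACCESS_EL_MASK;
    }

With CP_ACCESS_TRAP_UNCATEGORIZED_EL2/_EL3 gone, only CP_ACCESS_TRAP_EL2/_EL3 still carry a non-zero EL, so the uncategorized case can safely assert that (res & CP_ACCESS_EL_MASK) == 0.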
New patch

Rearrange the code in do_coproc_insn() so that we calculate the
syndrome value for a potential trap early; we're about to add a
second check that wants this value earlier than where it is currently
determined.

(Specifically, a trap to EL2 because of HSTR_EL2 should take
priority over an UNDEF to EL1, even when the UNDEF is because
the register does not exist at all or because its ri->access
bits non-configurably fail the access. So the check we put in
for HSTR_EL2 trapping at EL1 (which needs the syndrome) is
going to have to be done before the check "is the ARMCPRegInfo
pointer NULL".)

This commit is just code motion; the change to HSTR_EL2
handling that will use the 'syndrome' variable is in a
subsequent commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-5-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-5-peter.maydell@linaro.org
---
 target/arm/translate.c | 83 +++++++++++++++++++++---------------------
 1 file changed, 41 insertions(+), 42 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
     const ARMCPRegInfo *ri = get_arm_cp_reginfo(s->cp_regs, key);
     TCGv_ptr tcg_ri = NULL;
     bool need_exit_tb;
+    uint32_t syndrome;
+
+    /*
+     * Note that since we are an implementation which takes an
+     * exception on a trapped conditional instruction only if the
+     * instruction passes its condition code check, we can take
+     * advantage of the clause in the ARM ARM that allows us to set
+     * the COND field in the instruction to 0xE in all cases.
+     * We could fish the actual condition out of the insn (ARM)
+     * or the condexec bits (Thumb) but it isn't necessary.
+     */
+    switch (cpnum) {
+    case 14:
+        if (is64) {
+            syndrome = syn_cp14_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
+                                         isread, false);
+        } else {
+            syndrome = syn_cp14_rt_trap(1, 0xe, opc1, opc2, crn, crm,
+                                        rt, isread, false);
+        }
+        break;
+    case 15:
+        if (is64) {
+            syndrome = syn_cp15_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
+                                         isread, false);
+        } else {
+            syndrome = syn_cp15_rt_trap(1, 0xe, opc1, opc2, crn, crm,
+                                        rt, isread, false);
+        }
+        break;
+    default:
+        /*
+         * ARMv8 defines that only coprocessors 14 and 15 exist,
+         * so this can only happen if this is an ARMv7 or earlier CPU,
+         * in which case the syndrome information won't actually be
+         * guest visible.
+         */
+        assert(!arm_dc_feature(s, ARM_FEATURE_V8));
+        syndrome = syn_uncategorized();
+        break;
+    }

     if (!ri) {
         /*
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
          * Note that on XScale all cp0..c13 registers do an access check
          * call in order to handle c15_cpar.
          */
-        uint32_t syndrome;
-
-        /*
-         * Note that since we are an implementation which takes an
-         * exception on a trapped conditional instruction only if the
-         * instruction passes its condition code check, we can take
-         * advantage of the clause in the ARM ARM that allows us to set
-         * the COND field in the instruction to 0xE in all cases.
-         * We could fish the actual condition out of the insn (ARM)
-         * or the condexec bits (Thumb) but it isn't necessary.
-         */
-        switch (cpnum) {
-        case 14:
-            if (is64) {
-                syndrome = syn_cp14_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
-                                             isread, false);
-            } else {
-                syndrome = syn_cp14_rt_trap(1, 0xe, opc1, opc2, crn, crm,
-                                            rt, isread, false);
-            }
-            break;
-        case 15:
-            if (is64) {
-                syndrome = syn_cp15_rrt_trap(1, 0xe, opc1, crm, rt, rt2,
-                                             isread, false);
-            } else {
-                syndrome = syn_cp15_rt_trap(1, 0xe, opc1, opc2, crn, crm,
-                                            rt, isread, false);
-            }
-            break;
-        default:
-            /*
-             * ARMv8 defines that only coprocessors 14 and 15 exist,
-             * so this can only happen if this is an ARMv7 or earlier CPU,
-             * in which case the syndrome information won't actually be
-             * guest visible.
-             */
-            assert(!arm_dc_feature(s, ARM_FEATURE_V8));
-            syndrome = syn_uncategorized();
-            break;
-        }
-
         gen_set_condexec(s);
         gen_update_pc(s, 0);
         tcg_ri = tcg_temp_new_ptr();
--
2.34.1
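To make the motivation concrete, this is roughly the order of checks do_coproc_insn() ends up with once the follow-up HSTR_EL2 patch later in this series lands (a skeleton only; the real code is in that patch):

    /* 1. syndrome computed up front by the switch (cpnum) hoisted above */

    if (s->hstr_active && cpnum == 15 && s->current_el == 1) {
        /* 2. EL1 HSTR_EL2 trap check, which needs 'syndrome' */
    }

    if (!ri) {
        /* 3. only now the "no such register" UNDEF path */
    }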
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
The HSTR_EL2 register has a collection of trap bits which allow
2
add MemTxAttrs as an argument to memory_region_access_valid().
2
trapping to EL2 for AArch32 EL0 or EL1 accesses to coprocessor
3
Its callers either have an attrs value to hand, or don't care
3
registers. The specification of these bits is that when the bit is
4
and can use MEMTXATTRS_UNSPECIFIED.
4
set we should trap
5
* EL1 accesses
6
* EL0 accesses, if the access is not UNDEFINED when the
7
trap bit is 0
5
8
6
The callsite in flatview_access_valid() is part of a recursive
9
In other words, all UNDEF traps from EL0 to EL1 take precedence over
7
loop flatview_access_valid() -> memory_region_access_valid() ->
10
the HSTR_EL2 trap to EL2. (Since this is all AArch32, the only kind
8
subpage_accepts() -> flatview_access_valid(); we make it pass
11
of trap-to-EL1 is the UNDEF.)
9
MEMTXATTRS_UNSPECIFIED for now, until the next several commits
12
10
have plumbed an attrs parameter through the rest of the loop
13
Our implementation doesn't quite get this right -- we check for traps
11
and we can add an attrs parameter to flatview_access_valid().
14
in the order:
15
* no such register
16
* ARMCPRegInfo::access bits
17
* HSTR_EL2 trap bits
18
* ARMCPRegInfo::accessfn
19
20
So UNDEFs that happen because of the access bits or because the
21
register doesn't exist at all correctly take priority over the
22
HSTR_EL2 trap, but where a register can UNDEF at EL0 because of the
23
accessfn we are incorrectly always taking the HSTR_EL2 trap. There
24
aren't many of these, but one example is the PMCR; if you look at the
25
access pseudocode for this register you can see that UNDEFs taken
26
because of the value of PMUSERENR.EN are checked before the HSTR_EL2
27
bit.
28
29
Rearrange helper_access_check_cp_reg() so that we always call the
30
accessfn, and use its return value if it indicates that the access
31
traps to EL0 rather than continuing to do the HSTR_EL2 check.
12
32
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
34
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20180521140402.23318-8-peter.maydell@linaro.org
35
Tested-by: Fuad Tabba <tabba@google.com>
36
Message-id: 20230130182459.3309057-6-peter.maydell@linaro.org
37
Message-id: 20230127175507.2895013-6-peter.maydell@linaro.org
17
---
38
---
18
include/exec/memory-internal.h | 3 ++-
39
target/arm/op_helper.c | 21 ++++++++++++++++-----
19
exec.c | 4 +++-
40
1 file changed, 16 insertions(+), 5 deletions(-)
20
hw/s390x/s390-pci-inst.c | 3 ++-
21
memory.c | 7 ++++---
22
4 files changed, 11 insertions(+), 6 deletions(-)
23
41
24
diff --git a/include/exec/memory-internal.h b/include/exec/memory-internal.h
42
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
25
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory-internal.h
44
--- a/target/arm/op_helper.c
27
+++ b/include/exec/memory-internal.h
45
+++ b/target/arm/op_helper.c
28
@@ -XXX,XX +XXX,XX @@ void flatview_unref(FlatView *view);
46
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
29
extern const MemoryRegionOps unassigned_mem_ops;
47
goto fail;
30
48
}
31
bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
49
32
- unsigned size, bool is_write);
50
+ if (ri->accessfn) {
33
+ unsigned size, bool is_write,
51
+ res = ri->accessfn(env, ri, isread);
34
+ MemTxAttrs attrs);
52
+ }
35
53
+
36
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
54
/*
37
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
55
- * Check for an EL2 trap due to HSTR_EL2. We expect EL0 accesses
38
diff --git a/exec.c b/exec.c
56
- * to sysregs non accessible at EL0 to have UNDEF-ed already.
39
index XXXXXXX..XXXXXXX 100644
57
+ * If the access function indicates a trap from EL0 to EL1 then
40
--- a/exec.c
58
+ * that always takes priority over the HSTR_EL2 trap. (If it indicates
41
+++ b/exec.c
59
+ * a trap to EL3, then the HSTR_EL2 trap takes priority; if it indicates
42
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
60
+ * a trap to EL2, then the syndrome is the same either way so we don't
43
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
61
+ * care whether technically the architecture says that HSTR_EL2 trap or
44
if (!memory_access_is_direct(mr, is_write)) {
62
+ * the other trap takes priority. So we take the "check HSTR_EL2" path
45
l = memory_access_size(mr, l, addr);
63
+ * for all of those cases.)
46
- if (!memory_region_access_valid(mr, xlat, l, is_write)) {
64
*/
47
+ /* When our callers all have attrs we'll pass them through here */
65
+ if (res != CP_ACCESS_OK && ((res & CP_ACCESS_EL_MASK) == 0) &&
48
+ if (!memory_region_access_valid(mr, xlat, l, is_write,
66
+ arm_current_el(env) == 0) {
49
+ MEMTXATTRS_UNSPECIFIED)) {
67
+ goto fail;
50
return false;
68
+ }
51
}
69
+
70
if (!is_a64(env) && arm_current_el(env) < 2 && ri->cp == 15 &&
71
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
72
uint32_t mask = 1 << ri->crn;
73
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
52
}
74
}
53
diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/s390x/s390-pci-inst.c
56
+++ b/hw/s390x/s390-pci-inst.c
57
@@ -XXX,XX +XXX,XX @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,
58
mr = s390_get_subregion(mr, offset, len);
59
offset -= mr->addr;
60
61
- if (!memory_region_access_valid(mr, offset, len, true)) {
62
+ if (!memory_region_access_valid(mr, offset, len, true,
63
+ MEMTXATTRS_UNSPECIFIED)) {
64
s390_program_interrupt(env, PGM_OPERAND, 6, ra);
65
return 0;
66
}
75
}
67
diff --git a/memory.c b/memory.c
76
68
index XXXXXXX..XXXXXXX 100644
77
- if (ri->accessfn) {
69
--- a/memory.c
78
- res = ri->accessfn(env, ri, isread);
70
+++ b/memory.c
79
- }
71
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps ram_device_mem_ops = {
80
if (likely(res == CP_ACCESS_OK)) {
72
bool memory_region_access_valid(MemoryRegion *mr,
81
return ri;
73
hwaddr addr,
74
unsigned size,
75
- bool is_write)
76
+ bool is_write,
77
+ MemTxAttrs attrs)
78
{
79
int access_size_min, access_size_max;
80
int access_size, i;
81
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
82
{
83
MemTxResult r;
84
85
- if (!memory_region_access_valid(mr, addr, size, false)) {
86
+ if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
87
*pval = unassigned_mem_read(mr, addr, size);
88
return MEMTX_DECODE_ERROR;
89
}
90
@@ -XXX,XX +XXX,XX @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
91
unsigned size,
92
MemTxAttrs attrs)
93
{
94
- if (!memory_region_access_valid(mr, addr, size, true)) {
95
+ if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
96
unassigned_mem_write(mr, addr, data, size);
97
return MEMTX_DECODE_ERROR;
98
}
82
}
99
--
83
--
100
2.17.1
84
2.34.1
101
102
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
The semantics of HSTR_EL2 require that it traps cpreg accesses
2
add MemTxAttrs as an argument to flatview_translate(); all its
2
to EL2 for:
3
callers now have attrs available.
3
* EL1 accesses
4
* EL0 accesses, if the access is not UNDEFINED when the
5
trap bit is 0
6
7
(You can see this in the I_ZFGJP priority ordering, where HSTR_EL2
8
traps from EL1 to EL2 are priority 12, UNDEFs are priority 13, and
9
HSTR_EL2 traps from EL0 are priority 15.)
10
11
However, we don't get this right for EL1 accesses which UNDEF because
12
the register doesn't exist at all or because its ri->access bits
13
non-configurably forbid the access. At EL1, check for the HSTR_EL2
14
trap early, before either of these UNDEF reasons.
15
16
We have to retain the HSTR_EL2 check in access_check_cp_reg(),
17
because at EL0 any kind of UNDEF-to-EL1 (including "no such
18
register", "bad ri->access" and "ri->accessfn returns 'trap to EL1'")
19
takes precedence over the trap to EL2. But we only need to do that
20
check for EL0 now.
4
21
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
23
Tested-by: Fuad Tabba <tabba@google.com>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180521140402.23318-11-peter.maydell@linaro.org
25
Message-id: 20230130182459.3309057-7-peter.maydell@linaro.org
26
Message-id: 20230127175507.2895013-7-peter.maydell@linaro.org
9
---
27
---
10
include/exec/memory.h | 7 ++++---
28
target/arm/op_helper.c | 6 +++++-
11
exec.c | 17 +++++++++--------
29
target/arm/translate.c | 28 +++++++++++++++++++++++++++-
12
2 files changed, 13 insertions(+), 11 deletions(-)
30
2 files changed, 32 insertions(+), 2 deletions(-)
13
31
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
32
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
15
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
34
--- a/target/arm/op_helper.c
17
+++ b/include/exec/memory.h
35
+++ b/target/arm/op_helper.c
18
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
36
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
19
*/
37
goto fail;
20
MemoryRegion *flatview_translate(FlatView *fv,
38
}
21
hwaddr addr, hwaddr *xlat,
39
22
- hwaddr *len, bool is_write);
40
- if (!is_a64(env) && arm_current_el(env) < 2 && ri->cp == 15 &&
23
+ hwaddr *len, bool is_write,
41
+ /*
24
+ MemTxAttrs attrs);
42
+ * HSTR_EL2 traps from EL1 are checked earlier, in generated code;
25
43
+ * we only need to check here for traps from EL0.
26
static inline MemoryRegion *address_space_translate(AddressSpace *as,
44
+ */
27
hwaddr addr, hwaddr *xlat,
45
+ if (!is_a64(env) && arm_current_el(env) == 0 && ri->cp == 15 &&
28
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
46
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
29
MemTxAttrs attrs)
47
uint32_t mask = 1 << ri->crn;
30
{
48
31
return flatview_translate(address_space_to_flatview(as),
49
diff --git a/target/arm/translate.c b/target/arm/translate.c
32
- addr, xlat, len, is_write);
33
+ addr, xlat, len, is_write, attrs);
34
}
35
36
/* address_space_access_valid: check for validity of accessing an address
37
@@ -XXX,XX +XXX,XX @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
38
rcu_read_lock();
39
fv = address_space_to_flatview(as);
40
l = len;
41
- mr = flatview_translate(fv, addr, &addr1, &l, false);
42
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
43
if (len == l && memory_access_is_direct(mr, false)) {
44
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
45
memcpy(buf, ptr, len);
46
diff --git a/exec.c b/exec.c
47
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
48
--- a/exec.c
51
--- a/target/arm/translate.c
49
+++ b/exec.c
52
+++ b/target/arm/translate.c
50
@@ -XXX,XX +XXX,XX @@ iotlb_fail:
53
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
51
54
break;
52
/* Called from RCU critical section */
53
MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
54
- hwaddr *plen, bool is_write)
55
+ hwaddr *plen, bool is_write,
56
+ MemTxAttrs attrs)
57
{
58
MemoryRegion *mr;
59
MemoryRegionSection section;
60
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
61
}
62
63
l = len;
64
- mr = flatview_translate(fv, addr, &addr1, &l, true);
65
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
66
}
55
}
67
56
68
return result;
57
+ if (s->hstr_active && cpnum == 15 && s->current_el == 1) {
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
58
+ /*
70
MemTxResult result = MEMTX_OK;
59
+ * At EL1, check for a HSTR_EL2 trap, which must take precedence
71
60
+ * over the UNDEF for "no such register" or the UNDEF for "access
72
l = len;
61
+ * permissions forbid this EL1 access". HSTR_EL2 traps from EL0
73
- mr = flatview_translate(fv, addr, &addr1, &l, true);
62
+ * only happen if the cpreg doesn't UNDEF at EL0, so we do those in
74
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
63
+ * access_check_cp_reg(), after the checks for whether the access
75
result = flatview_write_continue(fv, addr, attrs, buf, len,
64
+ * configurably trapped to EL1.
76
addr1, l, mr);
65
+ */
77
66
+ uint32_t maskbit = is64 ? crm : crn;
78
@@ -XXX,XX +XXX,XX @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
67
+
79
}
68
+ if (maskbit != 4 && maskbit != 14) {
80
69
+ /* T4 and T14 are RES0 so never cause traps */
81
l = len;
70
+ TCGv_i32 t;
82
- mr = flatview_translate(fv, addr, &addr1, &l, false);
71
+ DisasLabel over = gen_disas_label(s);
83
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
72
+
73
+ t = load_cpu_offset(offsetoflow32(CPUARMState, cp15.hstr_el2));
74
+ tcg_gen_andi_i32(t, t, 1u << maskbit);
75
+ tcg_gen_brcondi_i32(TCG_COND_EQ, t, 0, over.label);
76
+ tcg_temp_free_i32(t);
77
+
78
+ gen_exception_insn(s, 0, EXCP_UDEF, syndrome);
79
+ set_disas_label(s, over);
80
+ }
81
+ }
82
+
83
if (!ri) {
84
/*
85
* Unknown register; this might be a guest error or a QEMU
86
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
87
return;
84
}
88
}
85
89
86
return result;
90
- if (s->hstr_active || ri->accessfn ||
87
@@ -XXX,XX +XXX,XX @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
91
+ if ((s->hstr_active && s->current_el == 0) || ri->accessfn ||
88
MemoryRegion *mr;
92
(arm_dc_feature(s, ARM_FEATURE_XSCALE) && cpnum < 14)) {
89
93
/*
90
l = len;
94
* Emit code to perform further access permissions checks at
91
- mr = flatview_translate(fv, addr, &addr1, &l, false);
92
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
93
return flatview_read_continue(fv, addr, attrs, buf, len,
94
addr1, l, mr);
95
}
96
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
97
98
while (len > 0) {
99
l = len;
100
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
101
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
102
if (!memory_access_is_direct(mr, is_write)) {
103
l = memory_access_size(mr, l, addr);
104
if (!memory_region_access_valid(mr, xlat, l, is_write, attrs)) {
105
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
106
107
len = target_len;
108
this_mr = flatview_translate(fv, addr, &xlat,
109
- &len, is_write);
110
+ &len, is_write, attrs);
111
if (this_mr != mr || xlat != base + done) {
112
return done;
113
}
114
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
115
l = len;
116
rcu_read_lock();
117
fv = address_space_to_flatview(as);
118
- mr = flatview_translate(fv, addr, &xlat, &l, is_write);
119
+ mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
120
121
if (!memory_access_is_direct(mr, is_write)) {
122
if (atomic_xchg(&bounce.in_use, true)) {
123
--
95
--
124
2.17.1
96
2.34.1
125
126
1
Provide a VMSTATE_BOOL_SUB_ARRAY to go with VMSTATE_UINT8_SUB_ARRAY
1
The HSTR_EL2 register is not supposed to have an effect unless EL2 is
2
and friends.
2
enabled in the current security state. We weren't checking for this,
3
which meant that if the guest set up the HSTR_EL2 register we would
4
incorrectly trap even for accesses from Secure EL0 and EL1.
5
6
Add the missing checks. (Other places where we look at HSTR_EL2
7
for the not-in-v8A bits TTEE and TJDBX are already checking that
8
we are in NS EL0 or EL1, so there we already know EL2 is enabled.)
3
9
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180521140402.23318-23-peter.maydell@linaro.org
12
Tested-by: Fuad Tabba <tabba@google.com>
13
Message-id: 20230130182459.3309057-8-peter.maydell@linaro.org
14
Message-id: 20230127175507.2895013-8-peter.maydell@linaro.org
7
---
15
---
8
include/migration/vmstate.h | 3 +++
16
target/arm/helper.c | 2 +-
9
1 file changed, 3 insertions(+)
17
target/arm/op_helper.c | 1 +
18
2 files changed, 2 insertions(+), 1 deletion(-)
10
19
11
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
13
--- a/include/migration/vmstate.h
22
--- a/target/arm/helper.c
14
+++ b/include/migration/vmstate.h
23
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
24
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
16
#define VMSTATE_BOOL_ARRAY(_f, _s, _n) \
25
DP_TBFLAG_A32(flags, VFPEN, 1);
17
VMSTATE_BOOL_ARRAY_V(_f, _s, _n, 0)
26
}
18
27
19
+#define VMSTATE_BOOL_SUB_ARRAY(_f, _s, _start, _num) \
28
- if (el < 2 && env->cp15.hstr_el2 &&
20
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_bool, bool)
29
+ if (el < 2 && env->cp15.hstr_el2 && arm_is_el2_enabled(env) &&
21
+
30
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
22
#define VMSTATE_UINT16_ARRAY_V(_f, _s, _n, _v) \
31
DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
23
VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_uint16, uint16_t)
32
}
33
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/op_helper.c
36
+++ b/target/arm/op_helper.c
37
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
38
* we only need to check here for traps from EL0.
39
*/
40
if (!is_a64(env) && arm_current_el(env) == 0 && ri->cp == 15 &&
41
+ arm_is_el2_enabled(env) &&
42
(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
43
uint32_t mask = 1 << ri->crn;
24
44
25
--
45
--
26
2.17.1
46
2.34.1
27
28
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Define the system registers which are provided by the
2
2
FEAT_FGT fine-grained trap architectural feature:
3
Depending on the host abi, float16, aka uint16_t, values are
3
HFGRTR_EL2, HFGWTR_EL2, HDFGRTR_EL2, HDFGWTR_EL2, HFGITR_EL2
4
passed and returned either zero-extended in the host register
4
5
or with garbage at the top of the host register.
5
All these registers are a set of bit fields, where each bit is set
6
6
for a trap and clear to not trap on a particular system register
7
The tcg code generator has so far been assuming garbage, as that
7
access. The R and W register pairs are for system registers,
8
matches the x86 abi, but this is incorrect for other host abis.
8
allowing trapping to be done separately for reads and writes; the I
9
Further, target/arm has so far been assuming zero-extended results,
9
register is for system instructions where trapping is on instruction
10
so that it may store the 16-bit value into a 32-bit slot with the
10
execution.
11
high 16-bits already clear.
11
12
12
The data storage in the CPU state struct is arranged as a set of
13
Rectify both problems by mapping "f16" in the helper definition
13
arrays rather than separate fields so that when we're looking up the
14
to uint32_t instead of (a typedef for) uint16_t. This forces
14
bits for a system register access we can just index into the array
15
the host compiler to assume garbage in the upper 16 bits on input
15
rather than having to use a switch to select a named struct member.
16
and to zero-extend the result on output.
16
The later FEAT_FGT2 will add extra elements to these arrays.
17
17
18
Cc: qemu-stable@nongnu.org
18
The field definitions for the new registers are in cpregs.h because
19
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
19
in practice the code that needs them is code that also needs
20
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
20
the cpregs information; cpu.h is included in a lot more files.
21
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
21
We're also going to add some FGT-specific definitions to cpregs.h
22
Message-id: 20180522175629.24932-1-richard.henderson@linaro.org
22
in the next commit.
23
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
23
24
We do not implement HAFGRTR_EL2, because we don't implement
25
FEAT_AMUv1.
26
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
29
Tested-by: Fuad Tabba <tabba@google.com>
30
Message-id: 20230130182459.3309057-9-peter.maydell@linaro.org
31
Message-id: 20230127175507.2895013-9-peter.maydell@linaro.org
25
---
32
---
26
include/exec/helper-head.h | 2 +-
33
target/arm/cpregs.h | 285 ++++++++++++++++++++++++++++++++++++++++++++
27
target/arm/helper-a64.c | 35 +++++++++--------
34
target/arm/cpu.h | 15 +++
28
target/arm/helper.c | 80 +++++++++++++++++++-------------------
35
target/arm/helper.c | 40 +++++++
29
3 files changed, 59 insertions(+), 58 deletions(-)
36
3 files changed, 340 insertions(+)
30
37
31
diff --git a/include/exec/helper-head.h b/include/exec/helper-head.h
38
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
32
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
33
--- a/include/exec/helper-head.h
40
--- a/target/arm/cpregs.h
34
+++ b/include/exec/helper-head.h
41
+++ b/target/arm/cpregs.h
35
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@ typedef enum CPAccessResult {
36
#define dh_ctype_int int
43
CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
37
#define dh_ctype_i64 uint64_t
44
} CPAccessResult;
38
#define dh_ctype_s64 int64_t
45
39
-#define dh_ctype_f16 float16
46
+/* Indexes into fgt_read[] */
40
+#define dh_ctype_f16 uint32_t
47
+#define FGTREG_HFGRTR 0
41
#define dh_ctype_f32 float32
48
+#define FGTREG_HDFGRTR 1
42
#define dh_ctype_f64 float64
49
+/* Indexes into fgt_write[] */
43
#define dh_ctype_ptr void *
50
+#define FGTREG_HFGWTR 0
44
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
51
+#define FGTREG_HDFGWTR 1
52
+/* Indexes into fgt_exec[] */
53
+#define FGTREG_HFGITR 0
54
+
55
+FIELD(HFGRTR_EL2, AFSR0_EL1, 0, 1)
56
+FIELD(HFGRTR_EL2, AFSR1_EL1, 1, 1)
57
+FIELD(HFGRTR_EL2, AIDR_EL1, 2, 1)
58
+FIELD(HFGRTR_EL2, AMAIR_EL1, 3, 1)
59
+FIELD(HFGRTR_EL2, APDAKEY, 4, 1)
60
+FIELD(HFGRTR_EL2, APDBKEY, 5, 1)
61
+FIELD(HFGRTR_EL2, APGAKEY, 6, 1)
62
+FIELD(HFGRTR_EL2, APIAKEY, 7, 1)
63
+FIELD(HFGRTR_EL2, APIBKEY, 8, 1)
64
+FIELD(HFGRTR_EL2, CCSIDR_EL1, 9, 1)
65
+FIELD(HFGRTR_EL2, CLIDR_EL1, 10, 1)
66
+FIELD(HFGRTR_EL2, CONTEXTIDR_EL1, 11, 1)
67
+FIELD(HFGRTR_EL2, CPACR_EL1, 12, 1)
68
+FIELD(HFGRTR_EL2, CSSELR_EL1, 13, 1)
69
+FIELD(HFGRTR_EL2, CTR_EL0, 14, 1)
70
+FIELD(HFGRTR_EL2, DCZID_EL0, 15, 1)
71
+FIELD(HFGRTR_EL2, ESR_EL1, 16, 1)
72
+FIELD(HFGRTR_EL2, FAR_EL1, 17, 1)
73
+FIELD(HFGRTR_EL2, ISR_EL1, 18, 1)
74
+FIELD(HFGRTR_EL2, LORC_EL1, 19, 1)
75
+FIELD(HFGRTR_EL2, LOREA_EL1, 20, 1)
76
+FIELD(HFGRTR_EL2, LORID_EL1, 21, 1)
77
+FIELD(HFGRTR_EL2, LORN_EL1, 22, 1)
78
+FIELD(HFGRTR_EL2, LORSA_EL1, 23, 1)
79
+FIELD(HFGRTR_EL2, MAIR_EL1, 24, 1)
80
+FIELD(HFGRTR_EL2, MIDR_EL1, 25, 1)
81
+FIELD(HFGRTR_EL2, MPIDR_EL1, 26, 1)
82
+FIELD(HFGRTR_EL2, PAR_EL1, 27, 1)
83
+FIELD(HFGRTR_EL2, REVIDR_EL1, 28, 1)
84
+FIELD(HFGRTR_EL2, SCTLR_EL1, 29, 1)
85
+FIELD(HFGRTR_EL2, SCXTNUM_EL1, 30, 1)
86
+FIELD(HFGRTR_EL2, SCXTNUM_EL0, 31, 1)
87
+FIELD(HFGRTR_EL2, TCR_EL1, 32, 1)
88
+FIELD(HFGRTR_EL2, TPIDR_EL1, 33, 1)
89
+FIELD(HFGRTR_EL2, TPIDRRO_EL0, 34, 1)
90
+FIELD(HFGRTR_EL2, TPIDR_EL0, 35, 1)
91
+FIELD(HFGRTR_EL2, TTBR0_EL1, 36, 1)
92
+FIELD(HFGRTR_EL2, TTBR1_EL1, 37, 1)
93
+FIELD(HFGRTR_EL2, VBAR_EL1, 38, 1)
94
+FIELD(HFGRTR_EL2, ICC_IGRPENN_EL1, 39, 1)
95
+FIELD(HFGRTR_EL2, ERRIDR_EL1, 40, 1)
96
+FIELD(HFGRTR_EL2, ERRSELR_EL1, 41, 1)
97
+FIELD(HFGRTR_EL2, ERXFR_EL1, 42, 1)
98
+FIELD(HFGRTR_EL2, ERXCTLR_EL1, 43, 1)
99
+FIELD(HFGRTR_EL2, ERXSTATUS_EL1, 44, 1)
100
+FIELD(HFGRTR_EL2, ERXMISCN_EL1, 45, 1)
101
+FIELD(HFGRTR_EL2, ERXPFGF_EL1, 46, 1)
102
+FIELD(HFGRTR_EL2, ERXPFGCTL_EL1, 47, 1)
103
+FIELD(HFGRTR_EL2, ERXPFGCDN_EL1, 48, 1)
104
+FIELD(HFGRTR_EL2, ERXADDR_EL1, 49, 1)
105
+FIELD(HFGRTR_EL2, NACCDATA_EL1, 50, 1)
106
+/* 51-53: RES0 */
107
+FIELD(HFGRTR_EL2, NSMPRI_EL1, 54, 1)
108
+FIELD(HFGRTR_EL2, NTPIDR2_EL0, 55, 1)
109
+/* 56-63: RES0 */
110
+
111
+/* These match HFGRTR but bits for RO registers are RES0 */
112
+FIELD(HFGWTR_EL2, AFSR0_EL1, 0, 1)
113
+FIELD(HFGWTR_EL2, AFSR1_EL1, 1, 1)
114
+FIELD(HFGWTR_EL2, AMAIR_EL1, 3, 1)
115
+FIELD(HFGWTR_EL2, APDAKEY, 4, 1)
116
+FIELD(HFGWTR_EL2, APDBKEY, 5, 1)
117
+FIELD(HFGWTR_EL2, APGAKEY, 6, 1)
118
+FIELD(HFGWTR_EL2, APIAKEY, 7, 1)
119
+FIELD(HFGWTR_EL2, APIBKEY, 8, 1)
120
+FIELD(HFGWTR_EL2, CONTEXTIDR_EL1, 11, 1)
121
+FIELD(HFGWTR_EL2, CPACR_EL1, 12, 1)
122
+FIELD(HFGWTR_EL2, CSSELR_EL1, 13, 1)
123
+FIELD(HFGWTR_EL2, ESR_EL1, 16, 1)
124
+FIELD(HFGWTR_EL2, FAR_EL1, 17, 1)
125
+FIELD(HFGWTR_EL2, LORC_EL1, 19, 1)
126
+FIELD(HFGWTR_EL2, LOREA_EL1, 20, 1)
127
+FIELD(HFGWTR_EL2, LORN_EL1, 22, 1)
128
+FIELD(HFGWTR_EL2, LORSA_EL1, 23, 1)
129
+FIELD(HFGWTR_EL2, MAIR_EL1, 24, 1)
130
+FIELD(HFGWTR_EL2, PAR_EL1, 27, 1)
131
+FIELD(HFGWTR_EL2, SCTLR_EL1, 29, 1)
132
+FIELD(HFGWTR_EL2, SCXTNUM_EL1, 30, 1)
133
+FIELD(HFGWTR_EL2, SCXTNUM_EL0, 31, 1)
134
+FIELD(HFGWTR_EL2, TCR_EL1, 32, 1)
135
+FIELD(HFGWTR_EL2, TPIDR_EL1, 33, 1)
136
+FIELD(HFGWTR_EL2, TPIDRRO_EL0, 34, 1)
137
+FIELD(HFGWTR_EL2, TPIDR_EL0, 35, 1)
138
+FIELD(HFGWTR_EL2, TTBR0_EL1, 36, 1)
139
+FIELD(HFGWTR_EL2, TTBR1_EL1, 37, 1)
140
+FIELD(HFGWTR_EL2, VBAR_EL1, 38, 1)
141
+FIELD(HFGWTR_EL2, ICC_IGRPENN_EL1, 39, 1)
142
+FIELD(HFGWTR_EL2, ERRSELR_EL1, 41, 1)
143
+FIELD(HFGWTR_EL2, ERXCTLR_EL1, 43, 1)
144
+FIELD(HFGWTR_EL2, ERXSTATUS_EL1, 44, 1)
145
+FIELD(HFGWTR_EL2, ERXMISCN_EL1, 45, 1)
146
+FIELD(HFGWTR_EL2, ERXPFGCTL_EL1, 47, 1)
147
+FIELD(HFGWTR_EL2, ERXPFGCDN_EL1, 48, 1)
148
+FIELD(HFGWTR_EL2, ERXADDR_EL1, 49, 1)
149
+FIELD(HFGWTR_EL2, NACCDATA_EL1, 50, 1)
150
+FIELD(HFGWTR_EL2, NSMPRI_EL1, 54, 1)
151
+FIELD(HFGWTR_EL2, NTPIDR2_EL0, 55, 1)
152
+
153
+FIELD(HFGITR_EL2, ICIALLUIS, 0, 1)
154
+FIELD(HFGITR_EL2, ICIALLU, 1, 1)
155
+FIELD(HFGITR_EL2, ICIVAU, 2, 1)
156
+FIELD(HFGITR_EL2, DCIVAC, 3, 1)
157
+FIELD(HFGITR_EL2, DCISW, 4, 1)
158
+FIELD(HFGITR_EL2, DCCSW, 5, 1)
159
+FIELD(HFGITR_EL2, DCCISW, 6, 1)
160
+FIELD(HFGITR_EL2, DCCVAU, 7, 1)
161
+FIELD(HFGITR_EL2, DCCVAP, 8, 1)
162
+FIELD(HFGITR_EL2, DCCVADP, 9, 1)
163
+FIELD(HFGITR_EL2, DCCIVAC, 10, 1)
164
+FIELD(HFGITR_EL2, DCZVA, 11, 1)
165
+FIELD(HFGITR_EL2, ATS1E1R, 12, 1)
166
+FIELD(HFGITR_EL2, ATS1E1W, 13, 1)
167
+FIELD(HFGITR_EL2, ATS1E0R, 14, 1)
168
+FIELD(HFGITR_EL2, ATS1E0W, 15, 1)
169
+FIELD(HFGITR_EL2, ATS1E1RP, 16, 1)
170
+FIELD(HFGITR_EL2, ATS1E1WP, 17, 1)
171
+FIELD(HFGITR_EL2, TLBIVMALLE1OS, 18, 1)
172
+FIELD(HFGITR_EL2, TLBIVAE1OS, 19, 1)
173
+FIELD(HFGITR_EL2, TLBIASIDE1OS, 20, 1)
174
+FIELD(HFGITR_EL2, TLBIVAAE1OS, 21, 1)
175
+FIELD(HFGITR_EL2, TLBIVALE1OS, 22, 1)
176
+FIELD(HFGITR_EL2, TLBIVAALE1OS, 23, 1)
177
+FIELD(HFGITR_EL2, TLBIRVAE1OS, 24, 1)
178
+FIELD(HFGITR_EL2, TLBIRVAAE1OS, 25, 1)
179
+FIELD(HFGITR_EL2, TLBIRVALE1OS, 26, 1)
180
+FIELD(HFGITR_EL2, TLBIRVAALE1OS, 27, 1)
181
+FIELD(HFGITR_EL2, TLBIVMALLE1IS, 28, 1)
182
+FIELD(HFGITR_EL2, TLBIVAE1IS, 29, 1)
183
+FIELD(HFGITR_EL2, TLBIASIDE1IS, 30, 1)
184
+FIELD(HFGITR_EL2, TLBIVAAE1IS, 31, 1)
185
+FIELD(HFGITR_EL2, TLBIVALE1IS, 32, 1)
186
+FIELD(HFGITR_EL2, TLBIVAALE1IS, 33, 1)
187
+FIELD(HFGITR_EL2, TLBIRVAE1IS, 34, 1)
188
+FIELD(HFGITR_EL2, TLBIRVAAE1IS, 35, 1)
189
+FIELD(HFGITR_EL2, TLBIRVALE1IS, 36, 1)
190
+FIELD(HFGITR_EL2, TLBIRVAALE1IS, 37, 1)
191
+FIELD(HFGITR_EL2, TLBIRVAE1, 38, 1)
192
+FIELD(HFGITR_EL2, TLBIRVAAE1, 39, 1)
193
+FIELD(HFGITR_EL2, TLBIRVALE1, 40, 1)
194
+FIELD(HFGITR_EL2, TLBIRVAALE1, 41, 1)
195
+FIELD(HFGITR_EL2, TLBIVMALLE1, 42, 1)
196
+FIELD(HFGITR_EL2, TLBIVAE1, 43, 1)
197
+FIELD(HFGITR_EL2, TLBIASIDE1, 44, 1)
198
+FIELD(HFGITR_EL2, TLBIVAAE1, 45, 1)
199
+FIELD(HFGITR_EL2, TLBIVALE1, 46, 1)
200
+FIELD(HFGITR_EL2, TLBIVAALE1, 47, 1)
201
+FIELD(HFGITR_EL2, CFPRCTX, 48, 1)
202
+FIELD(HFGITR_EL2, DVPRCTX, 49, 1)
203
+FIELD(HFGITR_EL2, CPPRCTX, 50, 1)
204
+FIELD(HFGITR_EL2, ERET, 51, 1)
205
+FIELD(HFGITR_EL2, SVC_EL0, 52, 1)
206
+FIELD(HFGITR_EL2, SVC_EL1, 53, 1)
207
+FIELD(HFGITR_EL2, DCCVAC, 54, 1)
208
+FIELD(HFGITR_EL2, NBRBINJ, 55, 1)
209
+FIELD(HFGITR_EL2, NBRBIALL, 56, 1)
210
+
211
+FIELD(HDFGRTR_EL2, DBGBCRN_EL1, 0, 1)
212
+FIELD(HDFGRTR_EL2, DBGBVRN_EL1, 1, 1)
213
+FIELD(HDFGRTR_EL2, DBGWCRN_EL1, 2, 1)
214
+FIELD(HDFGRTR_EL2, DBGWVRN_EL1, 3, 1)
215
+FIELD(HDFGRTR_EL2, MDSCR_EL1, 4, 1)
216
+FIELD(HDFGRTR_EL2, DBGCLAIM, 5, 1)
217
+FIELD(HDFGRTR_EL2, DBGAUTHSTATUS_EL1, 6, 1)
218
+FIELD(HDFGRTR_EL2, DBGPRCR_EL1, 7, 1)
219
+/* 8: RES0: OSLAR_EL1 is WO */
220
+FIELD(HDFGRTR_EL2, OSLSR_EL1, 9, 1)
221
+FIELD(HDFGRTR_EL2, OSECCR_EL1, 10, 1)
222
+FIELD(HDFGRTR_EL2, OSDLR_EL1, 11, 1)
223
+FIELD(HDFGRTR_EL2, PMEVCNTRN_EL0, 12, 1)
224
+FIELD(HDFGRTR_EL2, PMEVTYPERN_EL0, 13, 1)
225
+FIELD(HDFGRTR_EL2, PMCCFILTR_EL0, 14, 1)
226
+FIELD(HDFGRTR_EL2, PMCCNTR_EL0, 15, 1)
227
+FIELD(HDFGRTR_EL2, PMCNTEN, 16, 1)
228
+FIELD(HDFGRTR_EL2, PMINTEN, 17, 1)
229
+FIELD(HDFGRTR_EL2, PMOVS, 18, 1)
230
+FIELD(HDFGRTR_EL2, PMSELR_EL0, 19, 1)
231
+/* 20: RES0: PMSWINC_EL0 is WO */
232
+/* 21: RES0: PMCR_EL0 is WO */
233
+FIELD(HDFGRTR_EL2, PMMIR_EL1, 22, 1)
234
+FIELD(HDFGRTR_EL2, PMBLIMITR_EL1, 23, 1)
235
+FIELD(HDFGRTR_EL2, PMBPTR_EL1, 24, 1)
236
+FIELD(HDFGRTR_EL2, PMBSR_EL1, 25, 1)
237
+FIELD(HDFGRTR_EL2, PMSCR_EL1, 26, 1)
238
+FIELD(HDFGRTR_EL2, PMSEVFR_EL1, 27, 1)
239
+FIELD(HDFGRTR_EL2, PMSFCR_EL1, 28, 1)
240
+FIELD(HDFGRTR_EL2, PMSICR_EL1, 29, 1)
241
+FIELD(HDFGRTR_EL2, PMSIDR_EL1, 30, 1)
242
+FIELD(HDFGRTR_EL2, PMSIRR_EL1, 31, 1)
243
+FIELD(HDFGRTR_EL2, PMSLATFR_EL1, 32, 1)
244
+FIELD(HDFGRTR_EL2, TRC, 33, 1)
245
+FIELD(HDFGRTR_EL2, TRCAUTHSTATUS, 34, 1)
246
+FIELD(HDFGRTR_EL2, TRCAUXCTLR, 35, 1)
247
+FIELD(HDFGRTR_EL2, TRCCLAIM, 36, 1)
248
+FIELD(HDFGRTR_EL2, TRCCNTVRn, 37, 1)
249
+/* 38, 39: RES0 */
250
+FIELD(HDFGRTR_EL2, TRCID, 40, 1)
251
+FIELD(HDFGRTR_EL2, TRCIMSPECN, 41, 1)
252
+/* 42: RES0: TRCOSLAR is WO */
253
+FIELD(HDFGRTR_EL2, TRCOSLSR, 43, 1)
254
+FIELD(HDFGRTR_EL2, TRCPRGCTLR, 44, 1)
255
+FIELD(HDFGRTR_EL2, TRCSEQSTR, 45, 1)
256
+FIELD(HDFGRTR_EL2, TRCSSCSRN, 46, 1)
257
+FIELD(HDFGRTR_EL2, TRCSTATR, 47, 1)
258
+FIELD(HDFGRTR_EL2, TRCVICTLR, 48, 1)
259
+/* 49: RES0: TRFCR_EL1 is WO */
260
+FIELD(HDFGRTR_EL2, TRBBASER_EL1, 50, 1)
261
+FIELD(HDFGRTR_EL2, TRBIDR_EL1, 51, 1)
262
+FIELD(HDFGRTR_EL2, TRBLIMITR_EL1, 52, 1)
263
+FIELD(HDFGRTR_EL2, TRBMAR_EL1, 53, 1)
264
+FIELD(HDFGRTR_EL2, TRBPTR_EL1, 54, 1)
265
+FIELD(HDFGRTR_EL2, TRBSR_EL1, 55, 1)
266
+FIELD(HDFGRTR_EL2, TRBTRG_EL1, 56, 1)
267
+FIELD(HDFGRTR_EL2, PMUSERENR_EL0, 57, 1)
268
+FIELD(HDFGRTR_EL2, PMCEIDN_EL0, 58, 1)
269
+FIELD(HDFGRTR_EL2, NBRBIDR, 59, 1)
270
+FIELD(HDFGRTR_EL2, NBRBCTL, 60, 1)
271
+FIELD(HDFGRTR_EL2, NBRBDATA, 61, 1)
272
+FIELD(HDFGRTR_EL2, NPMSNEVFR_EL1, 62, 1)
273
+FIELD(HDFGRTR_EL2, PMBIDR_EL1, 63, 1)
274
+
275
+/*
276
+ * These match HDFGRTR_EL2, but bits for RO registers are RES0.
277
+ * A few bits are for WO registers, where the HDFGRTR_EL2 bit is RES0.
278
+ */
279
+FIELD(HDFGWTR_EL2, DBGBCRN_EL1, 0, 1)
280
+FIELD(HDFGWTR_EL2, DBGBVRN_EL1, 1, 1)
281
+FIELD(HDFGWTR_EL2, DBGWCRN_EL1, 2, 1)
282
+FIELD(HDFGWTR_EL2, DBGWVRN_EL1, 3, 1)
283
+FIELD(HDFGWTR_EL2, MDSCR_EL1, 4, 1)
284
+FIELD(HDFGWTR_EL2, DBGCLAIM, 5, 1)
285
+FIELD(HDFGWTR_EL2, DBGPRCR_EL1, 7, 1)
286
+FIELD(HDFGWTR_EL2, OSLAR_EL1, 8, 1)
287
+FIELD(HDFGWTR_EL2, OSLSR_EL1, 9, 1)
288
+FIELD(HDFGWTR_EL2, OSECCR_EL1, 10, 1)
289
+FIELD(HDFGWTR_EL2, OSDLR_EL1, 11, 1)
290
+FIELD(HDFGWTR_EL2, PMEVCNTRN_EL0, 12, 1)
291
+FIELD(HDFGWTR_EL2, PMEVTYPERN_EL0, 13, 1)
292
+FIELD(HDFGWTR_EL2, PMCCFILTR_EL0, 14, 1)
293
+FIELD(HDFGWTR_EL2, PMCCNTR_EL0, 15, 1)
294
+FIELD(HDFGWTR_EL2, PMCNTEN, 16, 1)
295
+FIELD(HDFGWTR_EL2, PMINTEN, 17, 1)
296
+FIELD(HDFGWTR_EL2, PMOVS, 18, 1)
297
+FIELD(HDFGWTR_EL2, PMSELR_EL0, 19, 1)
298
+FIELD(HDFGWTR_EL2, PMSWINC_EL0, 20, 1)
299
+FIELD(HDFGWTR_EL2, PMCR_EL0, 21, 1)
300
+FIELD(HDFGWTR_EL2, PMBLIMITR_EL1, 23, 1)
301
+FIELD(HDFGWTR_EL2, PMBPTR_EL1, 24, 1)
302
+FIELD(HDFGWTR_EL2, PMBSR_EL1, 25, 1)
303
+FIELD(HDFGWTR_EL2, PMSCR_EL1, 26, 1)
304
+FIELD(HDFGWTR_EL2, PMSEVFR_EL1, 27, 1)
305
+FIELD(HDFGWTR_EL2, PMSFCR_EL1, 28, 1)
306
+FIELD(HDFGWTR_EL2, PMSICR_EL1, 29, 1)
307
+FIELD(HDFGWTR_EL2, PMSIRR_EL1, 31, 1)
308
+FIELD(HDFGWTR_EL2, PMSLATFR_EL1, 32, 1)
309
+FIELD(HDFGWTR_EL2, TRC, 33, 1)
310
+FIELD(HDFGWTR_EL2, TRCAUXCTLR, 35, 1)
311
+FIELD(HDFGWTR_EL2, TRCCLAIM, 36, 1)
312
+FIELD(HDFGWTR_EL2, TRCCNTVRn, 37, 1)
313
+FIELD(HDFGWTR_EL2, TRCIMSPECN, 41, 1)
314
+FIELD(HDFGWTR_EL2, TRCOSLAR, 42, 1)
315
+FIELD(HDFGWTR_EL2, TRCPRGCTLR, 44, 1)
316
+FIELD(HDFGWTR_EL2, TRCSEQSTR, 45, 1)
317
+FIELD(HDFGWTR_EL2, TRCSSCSRN, 46, 1)
318
+FIELD(HDFGWTR_EL2, TRCVICTLR, 48, 1)
319
+FIELD(HDFGWTR_EL2, TRFCR_EL1, 49, 1)
320
+FIELD(HDFGWTR_EL2, TRBBASER_EL1, 50, 1)
321
+FIELD(HDFGWTR_EL2, TRBLIMITR_EL1, 52, 1)
322
+FIELD(HDFGWTR_EL2, TRBMAR_EL1, 53, 1)
323
+FIELD(HDFGWTR_EL2, TRBPTR_EL1, 54, 1)
324
+FIELD(HDFGWTR_EL2, TRBSR_EL1, 55, 1)
325
+FIELD(HDFGWTR_EL2, TRBTRG_EL1, 56, 1)
326
+FIELD(HDFGWTR_EL2, PMUSERENR_EL0, 57, 1)
327
+FIELD(HDFGWTR_EL2, NBRBCTL, 60, 1)
328
+FIELD(HDFGWTR_EL2, NBRBDATA, 61, 1)
329
+FIELD(HDFGWTR_EL2, NPMSNEVFR_EL1, 62, 1)
330
+
331
typedef struct ARMCPRegInfo ARMCPRegInfo;
332
333
/*
334
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
45
index XXXXXXX..XXXXXXX 100644
335
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper-a64.c
336
--- a/target/arm/cpu.h
47
+++ b/target/arm/helper-a64.c
337
+++ b/target/arm/cpu.h
48
@@ -XXX,XX +XXX,XX @@ static inline uint32_t float_rel_to_flags(int res)
338
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
49
return flags;
339
uint64_t disr_el1;
340
uint64_t vdisr_el2;
341
uint64_t vsesr_el2;
342
+
343
+ /*
344
+ * Fine-Grained Trap registers. We store these as arrays so the
345
+ * access checking code doesn't have to manually select
346
+ * HFGRTR_EL2 vs HFDFGRTR_EL2 etc when looking up the bit to test.
347
+ * FEAT_FGT2 will add more elements to these arrays.
348
+ */
349
+ uint64_t fgt_read[2]; /* HFGRTR, HDFGRTR */
350
+ uint64_t fgt_write[2]; /* HFGWTR, HDFGWTR */
351
+ uint64_t fgt_exec[1]; /* HFGITR */
352
} cp15;
353
354
struct {
355
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
356
return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
50
}
357
}
51
358
52
-uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
359
+static inline bool isar_feature_aa64_fgt(const ARMISARegisters *id)
53
+uint64_t HELPER(vfp_cmph_a64)(uint32_t x, uint32_t y, void *fp_status)
360
+{
361
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, FGT) != 0;
362
+}
363
+
364
static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
54
{
365
{
55
return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
366
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
56
}
57
58
-uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
59
+uint64_t HELPER(vfp_cmpeh_a64)(uint32_t x, uint32_t y, void *fp_status)
60
{
61
return float_rel_to_flags(float16_compare(x, y, fp_status));
62
}
63
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
64
#define float64_three make_float64(0x4008000000000000ULL)
65
#define float64_one_point_five make_float64(0x3FF8000000000000ULL)
66
67
-float16 HELPER(recpsf_f16)(float16 a, float16 b, void *fpstp)
68
+uint32_t HELPER(recpsf_f16)(uint32_t a, uint32_t b, void *fpstp)
69
{
70
float_status *fpst = fpstp;
71
72
@@ -XXX,XX +XXX,XX @@ float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
73
return float64_muladd(a, b, float64_two, 0, fpst);
74
}
75
76
-float16 HELPER(rsqrtsf_f16)(float16 a, float16 b, void *fpstp)
77
+uint32_t HELPER(rsqrtsf_f16)(uint32_t a, uint32_t b, void *fpstp)
78
{
79
float_status *fpst = fpstp;
80
81
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(neon_addlp_u16)(uint64_t a)
82
}
83
84
/* Floating-point reciprocal exponent - see FPRecpX in ARM ARM */
85
-float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
86
+uint32_t HELPER(frecpx_f16)(uint32_t a, void *fpstp)
87
{
88
float_status *fpst = fpstp;
89
uint16_t val16, sbit;
90
@@ -XXX,XX +XXX,XX @@ void HELPER(casp_be_parallel)(CPUARMState *env, uint32_t rs, uint64_t addr,
91
#define ADVSIMD_HELPER(name, suffix) HELPER(glue(glue(advsimd_, name), suffix))
92
93
#define ADVSIMD_HALFOP(name) \
94
-float16 ADVSIMD_HELPER(name, h)(float16 a, float16 b, void *fpstp) \
95
+uint32_t ADVSIMD_HELPER(name, h)(uint32_t a, uint32_t b, void *fpstp) \
96
{ \
97
float_status *fpst = fpstp; \
98
return float16_ ## name(a, b, fpst); \
99
@@ -XXX,XX +XXX,XX @@ ADVSIMD_HALFOP(mulx)
100
ADVSIMD_TWOHALFOP(mulx)
101
102
/* fused multiply-accumulate */
103
-float16 HELPER(advsimd_muladdh)(float16 a, float16 b, float16 c, void *fpstp)
104
+uint32_t HELPER(advsimd_muladdh)(uint32_t a, uint32_t b, uint32_t c,
105
+ void *fpstp)
106
{
107
float_status *fpst = fpstp;
108
return float16_muladd(a, b, c, 0, fpst);
109
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_muladd2h)(uint32_t two_a, uint32_t two_b,
110
111
#define ADVSIMD_CMPRES(test) (test) ? 0xffff : 0
112
113
-uint32_t HELPER(advsimd_ceq_f16)(float16 a, float16 b, void *fpstp)
114
+uint32_t HELPER(advsimd_ceq_f16)(uint32_t a, uint32_t b, void *fpstp)
115
{
116
float_status *fpst = fpstp;
117
int compare = float16_compare_quiet(a, b, fpst);
118
return ADVSIMD_CMPRES(compare == float_relation_equal);
119
}
120
121
-uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
122
+uint32_t HELPER(advsimd_cge_f16)(uint32_t a, uint32_t b, void *fpstp)
123
{
124
float_status *fpst = fpstp;
125
int compare = float16_compare(a, b, fpst);
126
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_cge_f16)(float16 a, float16 b, void *fpstp)
127
compare == float_relation_equal);
128
}
129
130
-uint32_t HELPER(advsimd_cgt_f16)(float16 a, float16 b, void *fpstp)
131
+uint32_t HELPER(advsimd_cgt_f16)(uint32_t a, uint32_t b, void *fpstp)
132
{
133
float_status *fpst = fpstp;
134
int compare = float16_compare(a, b, fpst);
135
return ADVSIMD_CMPRES(compare == float_relation_greater);
136
}
137
138
-uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
139
+uint32_t HELPER(advsimd_acge_f16)(uint32_t a, uint32_t b, void *fpstp)
140
{
141
float_status *fpst = fpstp;
142
float16 f0 = float16_abs(a);
143
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acge_f16)(float16 a, float16 b, void *fpstp)
144
compare == float_relation_equal);
145
}
146
147
-uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
148
+uint32_t HELPER(advsimd_acgt_f16)(uint32_t a, uint32_t b, void *fpstp)
149
{
150
float_status *fpst = fpstp;
151
float16 f0 = float16_abs(a);
152
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_acgt_f16)(float16 a, float16 b, void *fpstp)
153
}
154
155
/* round to integral */
156
-float16 HELPER(advsimd_rinth_exact)(float16 x, void *fp_status)
157
+uint32_t HELPER(advsimd_rinth_exact)(uint32_t x, void *fp_status)
158
{
159
return float16_round_to_int(x, fp_status);
160
}
161
162
-float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
163
+uint32_t HELPER(advsimd_rinth)(uint32_t x, void *fp_status)
164
{
165
int old_flags = get_float_exception_flags(fp_status), new_flags;
166
float16 ret;
167
@@ -XXX,XX +XXX,XX @@ float16 HELPER(advsimd_rinth)(float16 x, void *fp_status)
168
* setting the mode appropriately before calling the helper.
169
*/
170
171
-uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
172
+uint32_t HELPER(advsimd_f16tosinth)(uint32_t a, void *fpstp)
173
{
174
float_status *fpst = fpstp;
175
176
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16tosinth)(float16 a, void *fpstp)
177
return float16_to_int16(a, fpst);
178
}
179
180
-uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
181
+uint32_t HELPER(advsimd_f16touinth)(uint32_t a, void *fpstp)
182
{
183
float_status *fpst = fpstp;
184
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(advsimd_f16touinth)(float16 a, void *fpstp)
186
* Square Root and Reciprocal square root
187
*/
188
189
-float16 HELPER(sqrt_f16)(float16 a, void *fpstp)
190
+uint32_t HELPER(sqrt_f16)(uint32_t a, void *fpstp)
191
{
192
float_status *s = fpstp;
193
194
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ DO_VFP_cmp(d, float64)
371
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
199
372
if (cpu_isar_feature(aa64_hcx, cpu)) {
200
/* Integer to float and float to integer conversions */
373
valid_mask |= SCR_HXEN;
201
374
}
202
-#define CONV_ITOF(name, fsz, sign) \
375
+ if (cpu_isar_feature(aa64_fgt, cpu)) {
203
- float##fsz HELPER(name)(uint32_t x, void *fpstp) \
376
+ valid_mask |= SCR_FGTEN;
204
-{ \
377
+ }
205
- float_status *fpst = fpstp; \
378
} else {
206
- return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
379
valid_mask &= ~(SCR_RW | SCR_ST);
207
+#define CONV_ITOF(name, ftype, fsz, sign) \
380
if (cpu_isar_feature(aa32_ras, cpu)) {
208
+ftype HELPER(name)(uint32_t x, void *fpstp) \
381
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo scxtnum_reginfo[] = {
209
+{ \
382
.access = PL3_RW,
210
+ float_status *fpst = fpstp; \
383
.fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
211
+ return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \
384
};
212
}
385
+
213
386
+static CPAccessResult access_fgt(CPUARMState *env, const ARMCPRegInfo *ri,
214
-#define CONV_FTOI(name, fsz, sign, round) \
387
+ bool isread)
215
-uint32_t HELPER(name)(float##fsz x, void *fpstp) \
388
+{
216
-{ \
389
+ if (arm_current_el(env) == 2 &&
217
- float_status *fpst = fpstp; \
390
+ arm_feature(env, ARM_FEATURE_EL3) && !(env->cp15.scr_el3 & SCR_FGTEN)) {
218
- if (float##fsz##_is_any_nan(x)) { \
391
+ return CP_ACCESS_TRAP_EL3;
219
- float_raise(float_flag_invalid, fpst); \
392
+ }
220
- return 0; \
393
+ return CP_ACCESS_OK;
221
- } \
394
+}
222
- return float##fsz##_to_##sign##int32##round(x, fpst); \
395
+
223
+#define CONV_FTOI(name, ftype, fsz, sign, round) \
396
+static const ARMCPRegInfo fgt_reginfo[] = {
224
+uint32_t HELPER(name)(ftype x, void *fpstp) \
397
+ { .name = "HFGRTR_EL2", .state = ARM_CP_STATE_AA64,
225
+{ \
398
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
226
+ float_status *fpst = fpstp; \
399
+ .access = PL2_RW, .accessfn = access_fgt,
227
+ if (float##fsz##_is_any_nan(x)) { \
400
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_read[FGTREG_HFGRTR]) },
228
+ float_raise(float_flag_invalid, fpst); \
401
+ { .name = "HFGWTR_EL2", .state = ARM_CP_STATE_AA64,
229
+ return 0; \
402
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 5,
230
+ } \
403
+ .access = PL2_RW, .accessfn = access_fgt,
231
+ return float##fsz##_to_##sign##int32##round(x, fpst); \
404
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_write[FGTREG_HFGWTR]) },
232
}
405
+ { .name = "HDFGRTR_EL2", .state = ARM_CP_STATE_AA64,
233
406
+ .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 1, .opc2 = 4,
234
-#define FLOAT_CONVS(name, p, fsz, sign) \
407
+ .access = PL2_RW, .accessfn = access_fgt,
235
-CONV_ITOF(vfp_##name##to##p, fsz, sign) \
408
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_read[FGTREG_HDFGRTR]) },
236
-CONV_FTOI(vfp_to##name##p, fsz, sign, ) \
409
+ { .name = "HDFGWTR_EL2", .state = ARM_CP_STATE_AA64,
237
-CONV_FTOI(vfp_to##name##z##p, fsz, sign, _round_to_zero)
410
+ .opc0 = 3, .opc1 = 4, .crn = 3, .crm = 1, .opc2 = 5,
238
+#define FLOAT_CONVS(name, p, ftype, fsz, sign) \
411
+ .access = PL2_RW, .accessfn = access_fgt,
239
+ CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \
412
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_write[FGTREG_HDFGWTR]) },
240
+ CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \
413
+ { .name = "HFGITR_EL2", .state = ARM_CP_STATE_AA64,
241
+ CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero)
414
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 6,
242
415
+ .access = PL2_RW, .accessfn = access_fgt,
243
-FLOAT_CONVS(si, h, 16, )
416
+ .fieldoffset = offsetof(CPUARMState, cp15.fgt_exec[FGTREG_HFGITR]) },
244
-FLOAT_CONVS(si, s, 32, )
417
+};
245
-FLOAT_CONVS(si, d, 64, )
418
#endif /* TARGET_AARCH64 */
246
-FLOAT_CONVS(ui, h, 16, u)
419
247
-FLOAT_CONVS(ui, s, 32, u)
420
static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
248
-FLOAT_CONVS(ui, d, 64, u)
421
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
249
+FLOAT_CONVS(si, h, uint32_t, 16, )
422
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
250
+FLOAT_CONVS(si, s, float32, 32, )
423
define_arm_cp_regs(cpu, scxtnum_reginfo);
251
+FLOAT_CONVS(si, d, float64, 64, )
252
+FLOAT_CONVS(ui, h, uint32_t, 16, u)
253
+FLOAT_CONVS(ui, s, float32, 32, u)
254
+FLOAT_CONVS(ui, d, float64, 64, u)
255
256
#undef CONV_ITOF
257
#undef CONV_FTOI
258
@@ -XXX,XX +XXX,XX @@ static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
259
return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
260
}
261
262
-float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
263
+uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
264
{
265
return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
266
}
267
268
-float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
269
+uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
270
{
271
return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
272
}
273
274
-float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
275
+uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
276
{
277
return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
278
}
279
280
-float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
281
+uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
282
{
283
return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
284
}
285
@@ -XXX,XX +XXX,XX @@ static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
286
}
424
}
287
}
425
+
288
426
+ if (cpu_isar_feature(aa64_fgt, cpu)) {
289
-uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
427
+ define_arm_cp_regs(cpu, fgt_reginfo);
290
+uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst)
428
+ }
291
{
429
#endif
292
return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
430
293
}
431
if (cpu_isar_feature(any_predinv, cpu)) {
294
295
-uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
296
+uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst)
297
{
298
return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
299
}
300
301
-uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
302
+uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst)
303
{
304
return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
305
}
306
307
-uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
308
+uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst)
309
{
310
return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
311
}
312
313
-uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
314
+uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst)
315
{
316
return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
317
}
318
319
-uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
320
+uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst)
321
{
322
return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
323
}
324
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env)
325
}
326
327
/* Half precision conversions. */
328
-float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
329
+float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_mode)
330
{
331
/* Squash FZ16 to 0 for the duration of conversion. In this case,
332
* it would affect flushing input denormals.
333
@@ -XXX,XX +XXX,XX @@ float32 HELPER(vfp_fcvt_f16_to_f32)(float16 a, void *fpstp, uint32_t ahp_mode)
334
return r;
335
}
336
337
-float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
338
+uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
339
{
340
/* Squash FZ16 to 0 for the duration of conversion. In this case,
341
* it would affect flushing output denormals.
342
@@ -XXX,XX +XXX,XX @@ float16 HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_mode)
343
return r;
344
}
345
346
-float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
347
+float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_mode)
348
{
349
/* Squash FZ16 to 0 for the duration of conversion. In this case,
350
* it would affect flushing input denormals.
351
@@ -XXX,XX +XXX,XX @@ float64 HELPER(vfp_fcvt_f16_to_f64)(float16 a, void *fpstp, uint32_t ahp_mode)
352
return r;
353
}
354
355
-float16 HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
356
+uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_mode)
357
{
358
/* Squash FZ16 to 0 for the duration of conversion. In this case,
359
* it would affect flushing output denormals.
360
@@ -XXX,XX +XXX,XX @@ static bool round_to_inf(float_status *fpst, bool sign_bit)
361
g_assert_not_reached();
362
}
363
364
-float16 HELPER(recpe_f16)(float16 input, void *fpstp)
365
+uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp)
366
{
367
float_status *fpst = fpstp;
368
float16 f16 = float16_squash_input_denormal(input, fpst);
369
@@ -XXX,XX +XXX,XX @@ static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac)
370
return extract64(estimate, 0, 8) << 44;
371
}
372
373
-float16 HELPER(rsqrte_f16)(float16 input, void *fpstp)
374
+uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp)
375
{
376
float_status *s = fpstp;
377
float16 f16 = float16_squash_input_denormal(input, s);
378
--
2.17.1

--
2.34.1

As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to address_space_access_valid().
Its callers either have an attrs value to hand, or don't care
and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-6-peter.maydell@linaro.org
---
 include/exec/memory.h | 4 +++-
 include/sysemu/dma.h | 3 ++-
 exec.c | 3 ++-
 target/s390x/diag.c | 6 ++++--
 target/s390x/excp_helper.c | 3 ++-
 target/s390x/mmu_helper.c | 3 ++-
 target/s390x/sigp.c | 3 ++-
 7 files changed, 17 insertions(+), 8 deletions(-)

Implement the machinery for fine-grained traps on normal sysregs.
Any sysreg with a fine-grained trap will set the new field to
indicate which FGT register bit it should trap on.

FGT traps only happen when an AArch64 EL2 enables them for
an AArch64 EL1. They therefore are only relevant for AArch32
cpregs when the cpreg can be accessed from EL0. The logic
in access_check_cp_reg() will check this, so it is safe to
add a .fgt marking to an ARM_CP_STATE_BOTH ARMCPRegInfo.

The DO_BIT and DO_REV_BIT macros define enum constants FGT_##bitname
which can be used to specify the FGT bit, eg
    .fgt = FGT_AFSR0_EL1
(We assume that there is no bit name duplication across the FGT
registers, for brevity's sake.)

Subsequent commits will add the .fgt fields to the relevant register
definitions and define the FGT_nnn values for them.

Note that some of the FGT traps are for instructions that we don't
handle via the cpregs mechanisms (mostly these are instruction traps).
Those we will have to handle separately.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-10-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-10-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 72 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h | 1 +
 target/arm/internals.h | 20 +++++++++++
 target/arm/translate.h | 2 ++
 target/arm/helper.c | 9 +++++
 target/arm/op_helper.c | 30 ++++++++++++++++
 target/arm/translate-a64.c | 3 +-
 target/arm/translate.c | 2 ++
 8 files changed, 138 insertions(+), 1 deletion(-)
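To make the mechanism concrete before the diffs, here is a minimal sketch of how the
pieces fit together. It is illustrative only: the reginfo shown is a simplified form of
the AFSR0_EL1 entry marked up later in this series (the real entry also has an
.accessfn), not a quote of the final code.

```c
/* DO_BIT(REG, BITNAME) turns an FGT register bit into an enum constant that
 * encodes both which trap register to consult and the bit position within it:
 */
#define DO_BIT(REG, BITNAME) \
    FGT_##BITNAME = FGT_##REG | R_##REG##_EL2_##BITNAME##_SHIFT

/* So DO_BIT(HFGRTR, AFSR0_EL1) defines
 *   FGT_AFSR0_EL1 = FGT_HFGRTR | R_HFGRTR_EL2_AFSR0_EL1_SHIFT
 * and a sysreg opts in to fine-grained trapping simply by setting .fgt:
 */
static const ARMCPRegInfo afsr0_el1_example = {
    .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
    .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0,
    .access = PL1_RW,
    .fgt = FGT_AFSR0_EL1,   /* trap controlled by HFGRTR_EL2.AFSR0_EL1 */
    .type = ARM_CP_CONST, .resetvalue = 0,
};
```

At translate time a non-zero .fgt only forces the slow path; the actual
HFGRTR_EL2/HFGWTR_EL2 bit is tested at run time in access_check_cp_reg(), as the
op_helper.c hunk below shows.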
diff --git a/include/exec/memory.h b/include/exec/memory.h
39
21
index XXXXXXX..XXXXXXX 100644
40
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
22
--- a/include/exec/memory.h
41
index XXXXXXX..XXXXXXX 100644
23
+++ b/include/exec/memory.h
42
--- a/target/arm/cpregs.h
24
@@ -XXX,XX +XXX,XX @@ static inline MemoryRegion *address_space_translate(AddressSpace *as,
43
+++ b/target/arm/cpregs.h
25
* @addr: address within that address space
44
@@ -XXX,XX +XXX,XX @@ FIELD(HDFGWTR_EL2, NBRBCTL, 60, 1)
26
* @len: length of the area to be checked
45
FIELD(HDFGWTR_EL2, NBRBDATA, 61, 1)
27
* @is_write: indicates the transfer direction
46
FIELD(HDFGWTR_EL2, NPMSNEVFR_EL1, 62, 1)
28
+ * @attrs: memory attributes
47
29
*/
48
+/* Which fine-grained trap bit register to check, if any */
30
-bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write);
49
+FIELD(FGT, TYPE, 10, 3)
31
+bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len,
50
+FIELD(FGT, REV, 9, 1) /* Is bit sense reversed? */
32
+ bool is_write, MemTxAttrs attrs);
51
+FIELD(FGT, IDX, 6, 3) /* Index within a uint64_t[] array */
33
52
+FIELD(FGT, BITPOS, 0, 6) /* Bit position within the uint64_t */
34
/* address_space_map: map a physical memory region into a host virtual address
53
+
35
*
54
+/*
36
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
55
+ * Macros to define FGT_##bitname enum constants to use in ARMCPRegInfo::fgt
37
index XXXXXXX..XXXXXXX 100644
56
+ * fields. We assume for brevity's sake that there are no duplicated
38
--- a/include/sysemu/dma.h
57
+ * bit names across the various FGT registers.
39
+++ b/include/sysemu/dma.h
58
+ */
40
@@ -XXX,XX +XXX,XX @@ static inline bool dma_memory_valid(AddressSpace *as,
59
+#define DO_BIT(REG, BITNAME) \
41
DMADirection dir)
60
+ FGT_##BITNAME = FGT_##REG | R_##REG##_EL2_##BITNAME##_SHIFT
42
{
61
+
43
return address_space_access_valid(as, addr, len,
62
+/* Some bits have reversed sense, so 0 means trap and 1 means not */
44
- dir == DMA_DIRECTION_FROM_DEVICE);
63
+#define DO_REV_BIT(REG, BITNAME) \
45
+ dir == DMA_DIRECTION_FROM_DEVICE,
64
+ FGT_##BITNAME = FGT_##REG | FGT_REV | R_##REG##_EL2_##BITNAME##_SHIFT
46
+ MEMTXATTRS_UNSPECIFIED);
65
+
66
+typedef enum FGTBit {
67
+ /*
68
+ * These bits tell us which register arrays to use:
69
+ * if FGT_R is set then reads are checked against fgt_read[];
70
+ * if FGT_W is set then writes are checked against fgt_write[];
71
+ * if FGT_EXEC is set then all accesses are checked against fgt_exec[].
72
+ *
73
+ * For almost all bits in the R/W register pairs, the bit exists in
74
+ * both registers for a RW register, in HFGRTR/HDFGRTR for a RO register
75
+ * with the corresponding HFGWTR/HDFGTWTR bit being RES0, and vice-versa
76
+ * for a WO register. There are unfortunately a couple of exceptions
77
+ * (PMCR_EL0, TRFCR_EL1) where the register being trapped is RW but
78
+ * the FGT system only allows trapping of writes, not reads.
79
+ *
80
+ * Note that we arrange these bits so that a 0 FGTBit means "no trap".
81
+ */
82
+ FGT_R = 1 << R_FGT_TYPE_SHIFT,
83
+ FGT_W = 2 << R_FGT_TYPE_SHIFT,
84
+ FGT_EXEC = 4 << R_FGT_TYPE_SHIFT,
85
+ FGT_RW = FGT_R | FGT_W,
86
+ /* Bit to identify whether trap bit is reversed sense */
87
+ FGT_REV = R_FGT_REV_MASK,
88
+
89
+ /*
90
+ * If a bit exists in HFGRTR/HDFGRTR then either the register being
91
+ * trapped is RO or the bit also exists in HFGWTR/HDFGWTR, so we either
92
+ * want to trap for both reads and writes or else it's harmless to mark
93
+ * it as trap-on-writes.
94
+ * If a bit exists only in HFGWTR/HDFGWTR then either the register being
95
+ * trapped is WO, or else it is one of the two oddball special cases
96
+ * which are RW but have only a write trap. We mark these as only
97
+ * FGT_W so we get the right behaviour for those special cases.
98
+ * (If a bit was added in future that provided only a read trap for an
99
+ * RW register we'd need to do something special to get the FGT_R bit
100
+ * only. But this seems unlikely to happen.)
101
+ *
102
+ * So for the DO_BIT/DO_REV_BIT macros: use FGT_HFGRTR/FGT_HDFGRTR if
103
+ * the bit exists in that register. Otherwise use FGT_HFGWTR/FGT_HDFGWTR.
104
+ */
105
+ FGT_HFGRTR = FGT_RW | (FGTREG_HFGRTR << R_FGT_IDX_SHIFT),
106
+ FGT_HFGWTR = FGT_W | (FGTREG_HFGWTR << R_FGT_IDX_SHIFT),
107
+ FGT_HDFGRTR = FGT_RW | (FGTREG_HDFGRTR << R_FGT_IDX_SHIFT),
108
+ FGT_HDFGWTR = FGT_W | (FGTREG_HDFGWTR << R_FGT_IDX_SHIFT),
109
+ FGT_HFGITR = FGT_EXEC | (FGTREG_HFGITR << R_FGT_IDX_SHIFT),
110
+} FGTBit;
111
+
112
+#undef DO_BIT
113
+#undef DO_REV_BIT
114
+
115
typedef struct ARMCPRegInfo ARMCPRegInfo;
116
117
/*
118
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
119
CPAccessRights access;
120
/* Security state: ARM_CP_SECSTATE_* bits/values */
121
CPSecureState secure;
122
+ /*
123
+ * Which fine-grained trap register bit to check, if any. This
124
+ * value encodes both the trap register and bit within it.
125
+ */
126
+ FGTBit fgt;
127
/*
128
* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
129
* this register was defined: can be used to hand data through to the
130
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
131
index XXXXXXX..XXXXXXX 100644
132
--- a/target/arm/cpu.h
133
+++ b/target/arm/cpu.h
134
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
135
/* Memory operations require alignment: SCTLR_ELx.A or CCR.UNALIGN_TRP */
136
FIELD(TBFLAG_ANY, ALIGN_MEM, 10, 1)
137
FIELD(TBFLAG_ANY, PSTATE__IL, 11, 1)
138
+FIELD(TBFLAG_ANY, FGT_ACTIVE, 12, 1)
139
140
/*
141
* Bit usage when in AArch32 state, both A- and M-profile.
142
diff --git a/target/arm/internals.h b/target/arm/internals.h
143
index XXXXXXX..XXXXXXX 100644
144
--- a/target/arm/internals.h
145
+++ b/target/arm/internals.h
146
@@ -XXX,XX +XXX,XX @@ static inline uint64_t arm_mdcr_el2_eff(CPUARMState *env)
147
((1 << (1 - 1)) | (1 << (2 - 1)) | \
148
(1 << (4 - 1)) | (1 << (8 - 1)) | (1 << (16 - 1)))
149
150
+/*
151
+ * Return true if it is possible to take a fine-grained-trap to EL2.
152
+ */
153
+static inline bool arm_fgt_active(CPUARMState *env, int el)
154
+{
155
+ /*
156
+ * The Arm ARM only requires the "{E2H,TGE} != {1,1}" test for traps
157
+ * that can affect EL0, but it is harmless to do the test also for
158
+ * traps on registers that are only accessible at EL1 because if the test
159
+ * returns true then we can't be executing at EL1 anyway.
160
+ * FGT traps only happen when EL2 is enabled and EL1 is AArch64;
161
+ * traps from AArch32 only happen for the EL0 is AArch32 case.
162
+ */
163
+ return cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
164
+ el < 2 && arm_is_el2_enabled(env) &&
165
+ arm_el_is_aa64(env, 1) &&
166
+ (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE) &&
167
+ (!arm_feature(env, ARM_FEATURE_EL3) || (env->cp15.scr_el3 & SCR_FGTEN));
168
+}
169
+
170
#endif
171
diff --git a/target/arm/translate.h b/target/arm/translate.h
172
index XXXXXXX..XXXXXXX 100644
173
--- a/target/arm/translate.h
174
+++ b/target/arm/translate.h
175
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
176
bool is_nonstreaming;
177
/* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
178
bool mve_no_pred;
179
+ /* True if fine-grained traps are active */
180
+ bool fgt_active;
181
/*
182
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
183
* < 0, set by the current instruction.
184
diff --git a/target/arm/helper.c b/target/arm/helper.c
185
index XXXXXXX..XXXXXXX 100644
186
--- a/target/arm/helper.c
187
+++ b/target/arm/helper.c
188
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
189
if (arm_singlestep_active(env)) {
190
DP_TBFLAG_ANY(flags, SS_ACTIVE, 1);
191
}
192
+
193
return flags;
47
}
194
}
48
195
49
static inline int dma_memory_rw_relaxed(AddressSpace *as, dma_addr_t addr,
196
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
50
diff --git a/exec.c b/exec.c
197
DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
51
index XXXXXXX..XXXXXXX 100644
198
}
52
--- a/exec.c
199
53
+++ b/exec.c
200
+ if (arm_fgt_active(env, el)) {
54
@@ -XXX,XX +XXX,XX @@ static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
201
+ DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
55
}
202
+ }
56
203
+
57
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
204
if (env->uncached_cpsr & CPSR_IL) {
58
- int len, bool is_write)
205
DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
59
+ int len, bool is_write,
206
}
60
+ MemTxAttrs attrs)
207
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
61
{
208
DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
62
FlatView *fv;
209
}
63
bool result;
210
64
diff --git a/target/s390x/diag.c b/target/s390x/diag.c
211
+ if (arm_fgt_active(env, el)) {
65
index XXXXXXX..XXXXXXX 100644
212
+ DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
66
--- a/target/s390x/diag.c
213
+ }
67
+++ b/target/s390x/diag.c
214
+
68
@@ -XXX,XX +XXX,XX @@ void handle_diag_308(CPUS390XState *env, uint64_t r1, uint64_t r3, uintptr_t ra)
215
if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
69
return;
216
/*
217
* Set MTE_ACTIVE if any access may be Checked, and leave clear
218
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
219
index XXXXXXX..XXXXXXX 100644
220
--- a/target/arm/op_helper.c
221
+++ b/target/arm/op_helper.c
222
@@ -XXX,XX +XXX,XX @@ const void *HELPER(access_check_cp_reg)(CPUARMState *env, uint32_t key,
70
}
223
}
71
if (!address_space_access_valid(&address_space_memory, addr,
224
}
72
- sizeof(IplParameterBlock), false)) {
225
73
+ sizeof(IplParameterBlock), false,
226
+ /*
74
+ MEMTXATTRS_UNSPECIFIED)) {
227
+ * Fine-grained traps also are lower priority than undef-to-EL1,
75
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
228
+ * higher priority than trap-to-EL3, and we don't care about priority
76
return;
229
+ * order with other EL2 traps because the syndrome value is the same.
77
}
230
+ */
78
@@ -XXX,XX +XXX,XX @@ out:
231
+ if (arm_fgt_active(env, arm_current_el(env))) {
79
return;
232
+ uint64_t trapword = 0;
80
}
233
+ unsigned int idx = FIELD_EX32(ri->fgt, FGT, IDX);
81
if (!address_space_access_valid(&address_space_memory, addr,
234
+ unsigned int bitpos = FIELD_EX32(ri->fgt, FGT, BITPOS);
82
- sizeof(IplParameterBlock), true)) {
235
+ bool rev = FIELD_EX32(ri->fgt, FGT, REV);
83
+ sizeof(IplParameterBlock), true,
236
+ bool trapbit;
84
+ MEMTXATTRS_UNSPECIFIED)) {
237
+
85
s390_program_interrupt(env, PGM_ADDRESSING, ILEN_AUTO, ra);
238
+ if (ri->fgt & FGT_EXEC) {
86
return;
239
+ assert(idx < ARRAY_SIZE(env->cp15.fgt_exec));
87
}
240
+ trapword = env->cp15.fgt_exec[idx];
88
diff --git a/target/s390x/excp_helper.c b/target/s390x/excp_helper.c
241
+ } else if (isread && (ri->fgt & FGT_R)) {
89
index XXXXXXX..XXXXXXX 100644
242
+ assert(idx < ARRAY_SIZE(env->cp15.fgt_read));
90
--- a/target/s390x/excp_helper.c
243
+ trapword = env->cp15.fgt_read[idx];
91
+++ b/target/s390x/excp_helper.c
244
+ } else if (!isread && (ri->fgt & FGT_W)) {
92
@@ -XXX,XX +XXX,XX @@ int s390_cpu_handle_mmu_fault(CPUState *cs, vaddr orig_vaddr, int size,
245
+ assert(idx < ARRAY_SIZE(env->cp15.fgt_write));
93
246
+ trapword = env->cp15.fgt_write[idx];
94
/* check out of RAM access */
247
+ }
95
if (!address_space_access_valid(&address_space_memory, raddr,
248
+
96
- TARGET_PAGE_SIZE, rw)) {
249
+ trapbit = extract64(trapword, bitpos, 1);
97
+ TARGET_PAGE_SIZE, rw,
250
+ if (trapbit != rev) {
98
+ MEMTXATTRS_UNSPECIFIED)) {
251
+ res = CP_ACCESS_TRAP_EL2;
99
DPRINTF("%s: raddr %" PRIx64 " > ram_size %" PRIx64 "\n", __func__,
252
+ goto fail;
100
(uint64_t)raddr, (uint64_t)ram_size);
253
+ }
101
trigger_pgm_exception(env, PGM_ADDRESSING, ILEN_AUTO);
254
+ }
102
diff --git a/target/s390x/mmu_helper.c b/target/s390x/mmu_helper.c
255
+
103
index XXXXXXX..XXXXXXX 100644
256
if (likely(res == CP_ACCESS_OK)) {
104
--- a/target/s390x/mmu_helper.c
257
return ri;
105
+++ b/target/s390x/mmu_helper.c
258
}
106
@@ -XXX,XX +XXX,XX @@ static int translate_pages(S390CPU *cpu, vaddr addr, int nr_pages,
259
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
107
return ret;
260
index XXXXXXX..XXXXXXX 100644
108
}
261
--- a/target/arm/translate-a64.c
109
if (!address_space_access_valid(&address_space_memory, pages[i],
262
+++ b/target/arm/translate-a64.c
110
- TARGET_PAGE_SIZE, is_write)) {
263
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
111
+ TARGET_PAGE_SIZE, is_write,
112
+ MEMTXATTRS_UNSPECIFIED)) {
113
trigger_access_exception(env, PGM_ADDRESSING, ILEN_AUTO, 0);
114
return -EFAULT;
115
}
116
diff --git a/target/s390x/sigp.c b/target/s390x/sigp.c
117
index XXXXXXX..XXXXXXX 100644
118
--- a/target/s390x/sigp.c
119
+++ b/target/s390x/sigp.c
120
@@ -XXX,XX +XXX,XX @@ static void sigp_set_prefix(CPUState *cs, run_on_cpu_data arg)
121
cpu_synchronize_state(cs);
122
123
if (!address_space_access_valid(&address_space_memory, addr,
124
- sizeof(struct LowCore), false)) {
125
+ sizeof(struct LowCore), false,
126
+ MEMTXATTRS_UNSPECIFIED)) {
127
set_sigp_status(si, SIGP_STAT_INVALID_PARAMETER);
128
return;
264
return;
129
}
265
}
266
267
- if (ri->accessfn) {
268
+ if (ri->accessfn || (ri->fgt && s->fgt_active)) {
269
/* Emit code to perform further access permissions checks at
270
* runtime; this may result in an exception.
271
*/
272
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
273
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
274
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
275
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
276
+ dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
277
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
278
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
279
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
280
diff --git a/target/arm/translate.c b/target/arm/translate.c
281
index XXXXXXX..XXXXXXX 100644
282
--- a/target/arm/translate.c
283
+++ b/target/arm/translate.c
284
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
285
}
286
287
if ((s->hstr_active && s->current_el == 0) || ri->accessfn ||
288
+ (ri->fgt && s->fgt_active) ||
289
(arm_dc_feature(s, ARM_FEATURE_XSCALE) && cpnum < 14)) {
290
/*
291
* Emit code to perform further access permissions checks at
292
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
293
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
294
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
295
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
296
+ dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
297
298
if (arm_feature(env, ARM_FEATURE_M)) {
299
dc->vfp_enabled = 1;
130
--
300
--
131
2.17.1
301
2.34.1
132
133
diff view generated by jsdifflib
1
Add more detail to the documentation for memory_region_init_iommu()
and other IOMMU-related functions and data structures.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20180521140402.23318-2-peter.maydell@linaro.org
---
 include/exec/memory.h | 105 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 95 insertions(+), 10 deletions(-)

Mark up the sysreg definitions for the registers trapped
by HFGRTR/HFGWTR bits 0..11.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-11-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-11-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 14 ++++++++++++++
 target/arm/helper.c | 17 +++++++++++++++
 2 files changed, 31 insertions(+)
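To illustrate the contract the memory_region_init_iommu() documentation patch spells
out, here is a minimal sketch of an IOMMUMemoryRegionClass translate method. It is a
toy example under assumed names ("ToyIOMMUState", "toy_iommu_translate" are not QEMU
API); only the shape of the returned IOMMUTLBEntry follows the documented rules.

```c
/* Sketch only: a toy IOMMU whose translate method identity-maps pages. */
typedef struct ToyIOMMUState {
    IOMMUMemoryRegion iommu_mr;
} ToyIOMMUState;

static IOMMUTLBEntry toy_iommu_translate(IOMMUMemoryRegion *iommu_mr,
                                         hwaddr addr, IOMMUAccessFlags flag)
{
    /* Even if the caller passed IOMMU_NONE we report the full translation,
     * since an IOMMU may not return different mappings for reads and writes.
     */
    IOMMUTLBEntry entry = {
        .target_as = &address_space_memory,
        .iova = addr & TARGET_PAGE_MASK,
        .translated_addr = addr & TARGET_PAGE_MASK, /* identity map */
        .addr_mask = TARGET_PAGE_SIZE - 1,
        .perm = IOMMU_RW,
    };
    return entry;
}
```

As the updated comments note, the returned entry is only guaranteed valid while the
caller holds the big QEMU lock or is inside an RCU critical section, so anything that
caches it must also register an IOMMU notifier.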
12
13
13
diff --git a/include/exec/memory.h b/include/exec/memory.h
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/include/exec/memory.h
16
--- a/target/arm/cpregs.h
16
+++ b/include/exec/memory.h
17
+++ b/target/arm/cpregs.h
17
@@ -XXX,XX +XXX,XX @@ enum IOMMUMemoryRegionAttr {
18
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
18
IOMMU_ATTR_SPAPR_TCE_FD
19
FGT_HDFGRTR = FGT_RW | (FGTREG_HDFGRTR << R_FGT_IDX_SHIFT),
20
FGT_HDFGWTR = FGT_W | (FGTREG_HDFGWTR << R_FGT_IDX_SHIFT),
21
FGT_HFGITR = FGT_EXEC | (FGTREG_HFGITR << R_FGT_IDX_SHIFT),
22
+
23
+ /* Trap bits in HFGRTR_EL2 / HFGWTR_EL2, starting from bit 0. */
24
+ DO_BIT(HFGRTR, AFSR0_EL1),
25
+ DO_BIT(HFGRTR, AFSR1_EL1),
26
+ DO_BIT(HFGRTR, AIDR_EL1),
27
+ DO_BIT(HFGRTR, AMAIR_EL1),
28
+ DO_BIT(HFGRTR, APDAKEY),
29
+ DO_BIT(HFGRTR, APDBKEY),
30
+ DO_BIT(HFGRTR, APGAKEY),
31
+ DO_BIT(HFGRTR, APIAKEY),
32
+ DO_BIT(HFGRTR, APIBKEY),
33
+ DO_BIT(HFGRTR, CCSIDR_EL1),
34
+ DO_BIT(HFGRTR, CLIDR_EL1),
35
+ DO_BIT(HFGRTR, CONTEXTIDR_EL1),
36
} FGTBit;
37
38
#undef DO_BIT
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
44
{ .name = "CONTEXTIDR_EL1", .state = ARM_CP_STATE_BOTH,
45
.opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 1,
46
.access = PL1_RW, .accessfn = access_tvm_trvm,
47
+ .fgt = FGT_CONTEXTIDR_EL1,
48
.secure = ARM_CP_SECSTATE_NS,
49
.fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[1]),
50
.resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
51
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
52
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 0,
53
.access = PL1_R,
54
.accessfn = access_tid4,
55
+ .fgt = FGT_CCSIDR_EL1,
56
.readfn = ccsidr_read, .type = ARM_CP_NO_RAW },
57
{ .name = "CSSELR", .state = ARM_CP_STATE_BOTH,
58
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0,
59
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
60
.opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 7,
61
.access = PL1_R, .type = ARM_CP_CONST,
62
.accessfn = access_aa64_tid1,
63
+ .fgt = FGT_AIDR_EL1,
64
.resetvalue = 0 },
65
/*
66
* Auxiliary fault status registers: these also are IMPDEF, and we
67
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
68
{ .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
69
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 0,
70
.access = PL1_RW, .accessfn = access_tvm_trvm,
71
+ .fgt = FGT_AFSR0_EL1,
72
.type = ARM_CP_CONST, .resetvalue = 0 },
73
{ .name = "AFSR1_EL1", .state = ARM_CP_STATE_BOTH,
74
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1,
75
.access = PL1_RW, .accessfn = access_tvm_trvm,
76
+ .fgt = FGT_AFSR1_EL1,
77
.type = ARM_CP_CONST, .resetvalue = 0 },
78
/*
79
* MAIR can just read-as-written because we don't implement caches
80
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
81
{ .name = "AMAIR0", .state = ARM_CP_STATE_BOTH,
82
.opc0 = 3, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 0,
83
.access = PL1_RW, .accessfn = access_tvm_trvm,
84
+ .fgt = FGT_AMAIR_EL1,
85
.type = ARM_CP_CONST, .resetvalue = 0 },
86
/* AMAIR1 is mapped to AMAIR_EL1[63:32] */
87
{ .name = "AMAIR1", .cp = 15, .crn = 10, .crm = 3, .opc1 = 0, .opc2 = 1,
88
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
89
{ .name = "APDAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
90
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 0,
91
.access = PL1_RW, .accessfn = access_pauth,
92
+ .fgt = FGT_APDAKEY,
93
.fieldoffset = offsetof(CPUARMState, keys.apda.lo) },
94
{ .name = "APDAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
95
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 1,
96
.access = PL1_RW, .accessfn = access_pauth,
97
+ .fgt = FGT_APDAKEY,
98
.fieldoffset = offsetof(CPUARMState, keys.apda.hi) },
99
{ .name = "APDBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
100
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 2,
101
.access = PL1_RW, .accessfn = access_pauth,
102
+ .fgt = FGT_APDBKEY,
103
.fieldoffset = offsetof(CPUARMState, keys.apdb.lo) },
104
{ .name = "APDBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
105
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 3,
106
.access = PL1_RW, .accessfn = access_pauth,
107
+ .fgt = FGT_APDBKEY,
108
.fieldoffset = offsetof(CPUARMState, keys.apdb.hi) },
109
{ .name = "APGAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
110
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 0,
111
.access = PL1_RW, .accessfn = access_pauth,
112
+ .fgt = FGT_APGAKEY,
113
.fieldoffset = offsetof(CPUARMState, keys.apga.lo) },
114
{ .name = "APGAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
115
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 1,
116
.access = PL1_RW, .accessfn = access_pauth,
117
+ .fgt = FGT_APGAKEY,
118
.fieldoffset = offsetof(CPUARMState, keys.apga.hi) },
119
{ .name = "APIAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
120
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 0,
121
.access = PL1_RW, .accessfn = access_pauth,
122
+ .fgt = FGT_APIAKEY,
123
.fieldoffset = offsetof(CPUARMState, keys.apia.lo) },
124
{ .name = "APIAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
125
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 1,
126
.access = PL1_RW, .accessfn = access_pauth,
127
+ .fgt = FGT_APIAKEY,
128
.fieldoffset = offsetof(CPUARMState, keys.apia.hi) },
129
{ .name = "APIBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
130
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 2,
131
.access = PL1_RW, .accessfn = access_pauth,
132
+ .fgt = FGT_APIBKEY,
133
.fieldoffset = offsetof(CPUARMState, keys.apib.lo) },
134
{ .name = "APIBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
135
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
136
.access = PL1_RW, .accessfn = access_pauth,
137
+ .fgt = FGT_APIBKEY,
138
.fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
19
};
139
};
20
140
21
+/**
141
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
22
+ * IOMMUMemoryRegionClass:
142
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1,
23
+ *
143
.access = PL1_R, .type = ARM_CP_CONST,
24
+ * All IOMMU implementations need to subclass TYPE_IOMMU_MEMORY_REGION
144
.accessfn = access_tid4,
25
+ * and provide an implementation of at least the @translate method here
145
+ .fgt = FGT_CLIDR_EL1,
26
+ * to handle requests to the memory region. Other methods are optional.
146
.resetvalue = cpu->clidr
27
+ *
147
};
28
+ * The IOMMU implementation must use the IOMMU notifier infrastructure
148
define_one_arm_cp_reg(cpu, &clidr);
29
+ * to report whenever mappings are changed, by calling
30
+ * memory_region_notify_iommu() (or, if necessary, by calling
31
+ * memory_region_notify_one() for each registered notifier).
32
+ */
33
typedef struct IOMMUMemoryRegionClass {
34
/* private */
35
struct DeviceClass parent_class;
36
37
/*
38
- * Return a TLB entry that contains a given address. Flag should
39
- * be the access permission of this translation operation. We can
40
- * set flag to IOMMU_NONE to mean that we don't need any
41
- * read/write permission checks, like, when for region replay.
42
+ * Return a TLB entry that contains a given address.
43
+ *
44
+ * The IOMMUAccessFlags indicated via @flag are optional and may
45
+ * be specified as IOMMU_NONE to indicate that the caller needs
46
+ * the full translation information for both reads and writes. If
47
+ * the access flags are specified then the IOMMU implementation
48
+ * may use this as an optimization, to stop doing a page table
49
+ * walk as soon as it knows that the requested permissions are not
50
+ * allowed. If IOMMU_NONE is passed then the IOMMU must do the
51
+ * full page table walk and report the permissions in the returned
52
+ * IOMMUTLBEntry. (Note that this implies that an IOMMU may not
53
+ * return different mappings for reads and writes.)
54
+ *
55
+ * The returned information remains valid while the caller is
56
+ * holding the big QEMU lock or is inside an RCU critical section;
57
+ * if the caller wishes to cache the mapping beyond that it must
58
+ * register an IOMMU notifier so it can invalidate its cached
59
+ * information when the IOMMU mapping changes.
60
+ *
61
+ * @iommu: the IOMMUMemoryRegion
62
+ * @hwaddr: address to be translated within the memory region
63
+ * @flag: requested access permissions
64
*/
65
IOMMUTLBEntry (*translate)(IOMMUMemoryRegion *iommu, hwaddr addr,
66
IOMMUAccessFlags flag);
67
- /* Returns minimum supported page size */
68
+ /* Returns minimum supported page size in bytes.
69
+ * If this method is not provided then the minimum is assumed to
70
+ * be TARGET_PAGE_SIZE.
71
+ *
72
+ * @iommu: the IOMMUMemoryRegion
73
+ */
74
uint64_t (*get_min_page_size)(IOMMUMemoryRegion *iommu);
75
- /* Called when IOMMU Notifier flag changed */
76
+ /* Called when IOMMU Notifier flag changes (ie when the set of
77
+ * events which IOMMU users are requesting notification for changes).
78
+ * Optional method -- need not be provided if the IOMMU does not
79
+ * need to know exactly which events must be notified.
80
+ *
81
+ * @iommu: the IOMMUMemoryRegion
82
+ * @old_flags: events which previously needed to be notified
83
+ * @new_flags: events which now need to be notified
84
+ */
85
void (*notify_flag_changed)(IOMMUMemoryRegion *iommu,
86
IOMMUNotifierFlag old_flags,
87
IOMMUNotifierFlag new_flags);
88
- /* Set this up to provide customized IOMMU replay function */
89
+ /* Called to handle memory_region_iommu_replay().
90
+ *
91
+ * The default implementation of memory_region_iommu_replay() is to
92
+ * call the IOMMU translate method for every page in the address space
93
+ * with flag == IOMMU_NONE and then call the notifier if translate
94
+ * returns a valid mapping. If this method is implemented then it
95
+ * overrides the default behaviour, and must provide the full semantics
96
+ * of memory_region_iommu_replay(), by calling @notifier for every
97
+ * translation present in the IOMMU.
98
+ *
99
+ * Optional method -- an IOMMU only needs to provide this method
100
+ * if the default is inefficient or produces undesirable side effects.
101
+ *
102
+ * Note: this is not related to record-and-replay functionality.
103
+ */
104
void (*replay)(IOMMUMemoryRegion *iommu, IOMMUNotifier *notifier);
105
106
- /* Get IOMMU misc attributes */
107
- int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr,
108
+ /* Get IOMMU misc attributes. This is an optional method that
109
+ * can be used to allow users of the IOMMU to get implementation-specific
110
+ * information. The IOMMU implements this method to handle calls
111
+ * by IOMMU users to memory_region_iommu_get_attr() by filling in
112
+ * the arbitrary data pointer for any IOMMUMemoryRegionAttr values that
113
+ * the IOMMU supports. If the method is unimplemented then
114
+ * memory_region_iommu_get_attr() will always return -EINVAL.
115
+ *
116
+ * @iommu: the IOMMUMemoryRegion
117
+ * @attr: attribute being queried
118
+ * @data: memory to fill in with the attribute data
119
+ *
120
+ * Returns 0 on success, or a negative errno; in particular
121
+ * returns -EINVAL for unrecognized or unimplemented attribute types.
122
+ */
123
+ int (*get_attr)(IOMMUMemoryRegion *iommu, enum IOMMUMemoryRegionAttr attr,
124
void *data);
125
} IOMMUMemoryRegionClass;
126
127
@@ -XXX,XX +XXX,XX @@ static inline void memory_region_init_reservation(MemoryRegion *mr,
128
* An IOMMU region translates addresses and forwards accesses to a target
129
* memory region.
130
*
131
+ * The IOMMU implementation must define a subclass of TYPE_IOMMU_MEMORY_REGION.
132
+ * @_iommu_mr should be a pointer to enough memory for an instance of
133
+ * that subclass, @instance_size is the size of that subclass, and
134
+ * @mrtypename is its name. This function will initialize @_iommu_mr as an
135
+ * instance of the subclass, and its methods will then be called to handle
136
+ * accesses to the memory region. See the documentation of
137
+ * #IOMMUMemoryRegionClass for further details.
138
+ *
139
* @_iommu_mr: the #IOMMUMemoryRegion to be initialized
140
* @instance_size: the IOMMUMemoryRegion subclass instance size
141
* @mrtypename: the type name of the #IOMMUMemoryRegion
142
@@ -XXX,XX +XXX,XX @@ void memory_region_register_iommu_notifier(MemoryRegion *mr,
143
* a notifier with the minimum page granularity returned by
144
* mr->iommu_ops->get_page_size().
145
*
146
+ * Note: this is not related to record-and-replay functionality.
147
+ *
148
* @iommu_mr: the memory region to observe
149
* @n: the notifier to which to replay iommu mappings
150
*/
151
@@ -XXX,XX +XXX,XX @@ void memory_region_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n);
152
* memory_region_iommu_replay_all: replay existing IOMMU translations
153
* to all the notifiers registered.
154
*
155
+ * Note: this is not related to record-and-replay functionality.
156
+ *
157
* @iommu_mr: the memory region to observe
158
*/
159
void memory_region_iommu_replay_all(IOMMUMemoryRegion *iommu_mr);
160
@@ -XXX,XX +XXX,XX @@ void memory_region_unregister_iommu_notifier(MemoryRegion *mr,
161
* memory_region_iommu_get_attr: return an IOMMU attr if get_attr() is
162
* defined on the IOMMU.
163
*
164
- * Returns 0 if succeded, error code otherwise.
165
+ * Returns 0 on success, or a negative errno otherwise. In particular,
166
+ * -EINVAL indicates that the IOMMU does not support the requested
167
+ * attribute.
168
*
169
* @iommu_mr: the memory region
170
* @attr: the requested attribute
171
--
149
--
172
2.17.1
150
2.34.1
173
174
Mark up the sysreg definitions for the registers trapped
by HFGRTR/HFGWTR bits 12..23.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-12-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-12-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 12 ++++++++++++
 target/arm/helper.c | 12 ++++++++++++
 2 files changed, 24 insertions(+)
13
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpregs.h
17
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
19
DO_BIT(HFGRTR, CCSIDR_EL1),
20
DO_BIT(HFGRTR, CLIDR_EL1),
21
DO_BIT(HFGRTR, CONTEXTIDR_EL1),
22
+ DO_BIT(HFGRTR, CPACR_EL1),
23
+ DO_BIT(HFGRTR, CSSELR_EL1),
24
+ DO_BIT(HFGRTR, CTR_EL0),
25
+ DO_BIT(HFGRTR, DCZID_EL0),
26
+ DO_BIT(HFGRTR, ESR_EL1),
27
+ DO_BIT(HFGRTR, FAR_EL1),
28
+ DO_BIT(HFGRTR, ISR_EL1),
29
+ DO_BIT(HFGRTR, LORC_EL1),
30
+ DO_BIT(HFGRTR, LOREA_EL1),
31
+ DO_BIT(HFGRTR, LORID_EL1),
32
+ DO_BIT(HFGRTR, LORN_EL1),
33
+ DO_BIT(HFGRTR, LORSA_EL1),
34
} FGTBit;
35
36
#undef DO_BIT
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/helper.c
40
+++ b/target/arm/helper.c
41
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
42
.access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0, },
43
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
44
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
45
+ .fgt = FGT_CPACR_EL1,
46
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
47
.resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read },
48
};
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
50
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 2, .opc2 = 0,
51
.access = PL1_RW,
52
.accessfn = access_tid4,
53
+ .fgt = FGT_CSSELR_EL1,
54
.writefn = csselr_write, .resetvalue = 0,
55
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s),
56
offsetof(CPUARMState, cp15.csselr_ns) } },
57
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
58
.resetfn = arm_cp_reset_ignore },
59
{ .name = "ISR_EL1", .state = ARM_CP_STATE_BOTH,
60
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 0,
61
+ .fgt = FGT_ISR_EL1,
62
.type = ARM_CP_NO_RAW, .access = PL1_R, .readfn = isr_read },
63
/* 32 bit ITLB invalidates */
64
{ .name = "ITLBIALL", .cp = 15, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 0,
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
66
{ .name = "FAR_EL1", .state = ARM_CP_STATE_AA64,
67
.opc0 = 3, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 0,
68
.access = PL1_RW, .accessfn = access_tvm_trvm,
69
+ .fgt = FGT_FAR_EL1,
70
.fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
71
.resetvalue = 0, },
72
};
73
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
74
{ .name = "ESR_EL1", .state = ARM_CP_STATE_AA64,
75
.opc0 = 3, .crn = 5, .crm = 2, .opc1 = 0, .opc2 = 0,
76
.access = PL1_RW, .accessfn = access_tvm_trvm,
77
+ .fgt = FGT_ESR_EL1,
78
.fieldoffset = offsetof(CPUARMState, cp15.esr_el[1]), .resetvalue = 0, },
79
{ .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH,
80
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0,
81
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
82
{ .name = "DCZID_EL0", .state = ARM_CP_STATE_AA64,
83
.opc0 = 3, .opc1 = 3, .opc2 = 7, .crn = 0, .crm = 0,
84
.access = PL0_R, .type = ARM_CP_NO_RAW,
85
+ .fgt = FGT_DCZID_EL0,
86
.readfn = aa64_dczid_read },
87
{ .name = "DC_ZVA", .state = ARM_CP_STATE_AA64,
88
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 1,
89
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
90
{ .name = "LORSA_EL1", .state = ARM_CP_STATE_AA64,
91
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 0,
92
.access = PL1_RW, .accessfn = access_lor_other,
93
+ .fgt = FGT_LORSA_EL1,
94
.type = ARM_CP_CONST, .resetvalue = 0 },
95
{ .name = "LOREA_EL1", .state = ARM_CP_STATE_AA64,
96
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 1,
97
.access = PL1_RW, .accessfn = access_lor_other,
98
+ .fgt = FGT_LOREA_EL1,
99
.type = ARM_CP_CONST, .resetvalue = 0 },
100
{ .name = "LORN_EL1", .state = ARM_CP_STATE_AA64,
101
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 2,
102
.access = PL1_RW, .accessfn = access_lor_other,
103
+ .fgt = FGT_LORN_EL1,
104
.type = ARM_CP_CONST, .resetvalue = 0 },
105
{ .name = "LORC_EL1", .state = ARM_CP_STATE_AA64,
106
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 3,
107
.access = PL1_RW, .accessfn = access_lor_other,
108
+ .fgt = FGT_LORC_EL1,
109
.type = ARM_CP_CONST, .resetvalue = 0 },
110
{ .name = "LORID_EL1", .state = ARM_CP_STATE_AA64,
111
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
112
.access = PL1_R, .accessfn = access_lor_ns,
113
+ .fgt = FGT_LORID_EL1,
114
.type = ARM_CP_CONST, .resetvalue = 0 },
115
};
116
117
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
118
{ .name = "CTR_EL0", .state = ARM_CP_STATE_AA64,
119
.opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 0, .crm = 0,
120
.access = PL0_R, .accessfn = ctr_el0_access,
121
+ .fgt = FGT_CTR_EL0,
122
.type = ARM_CP_CONST, .resetvalue = cpu->ctr },
123
/* TCMTR and TLBTR exist in v8 but have no 64-bit versions */
124
{ .name = "TCMTR",
125
--
126
2.34.1
diff view generated by jsdifflib
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
add MemTxAttrs as an argument to flatview_do_translate().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-13-peter.maydell@linaro.org
---
 exec.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Mark up the sysreg definitions for the registers trapped
by HFGRTR/HFGWTR bits 24..35.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Fuad Tabba <tabba@google.com>
Message-id: 20230130182459.3309057-13-peter.maydell@linaro.org
Message-id: 20230127175507.2895013-13-peter.maydell@linaro.org
---
 target/arm/cpregs.h | 12 ++++++++++++
 target/arm/helper.c | 14 ++++++++++++++
 2 files changed, 26 insertions(+)
11
13
12
diff --git a/exec.c b/exec.c
14
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/exec.c
16
--- a/target/arm/cpregs.h
15
+++ b/exec.c
17
+++ b/target/arm/cpregs.h
16
@@ -XXX,XX +XXX,XX @@ unassigned:
18
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
17
* @is_write: whether the translation operation is for write
19
DO_BIT(HFGRTR, LORID_EL1),
18
* @is_mmio: whether this can be MMIO, set true if it can
20
DO_BIT(HFGRTR, LORN_EL1),
19
* @target_as: the address space targeted by the IOMMU
21
DO_BIT(HFGRTR, LORSA_EL1),
20
+ * @attrs: memory transaction attributes
22
+ DO_BIT(HFGRTR, MAIR_EL1),
21
*
23
+ DO_BIT(HFGRTR, MIDR_EL1),
22
* This function is called from RCU critical section
24
+ DO_BIT(HFGRTR, MPIDR_EL1),
23
*/
25
+ DO_BIT(HFGRTR, PAR_EL1),
24
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
26
+ DO_BIT(HFGRTR, REVIDR_EL1),
25
hwaddr *page_mask_out,
27
+ DO_BIT(HFGRTR, SCTLR_EL1),
26
bool is_write,
28
+ DO_BIT(HFGRTR, SCXTNUM_EL1),
27
bool is_mmio,
29
+ DO_BIT(HFGRTR, SCXTNUM_EL0),
28
- AddressSpace **target_as)
30
+ DO_BIT(HFGRTR, TCR_EL1),
29
+ AddressSpace **target_as,
31
+ DO_BIT(HFGRTR, TPIDR_EL1),
30
+ MemTxAttrs attrs)
32
+ DO_BIT(HFGRTR, TPIDRRO_EL0),
31
{
33
+ DO_BIT(HFGRTR, TPIDR_EL0),
32
MemoryRegionSection *section;
34
} FGTBit;
33
IOMMUMemoryRegion *iommu_mr;
35
34
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
36
#undef DO_BIT
35
* but page mask.
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
36
*/
38
index XXXXXXX..XXXXXXX 100644
37
section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
39
--- a/target/arm/helper.c
38
- NULL, &page_mask, is_write, false, &as);
40
+++ b/target/arm/helper.c
39
+ NULL, &page_mask, is_write, false, &as,
41
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
40
+ attrs);
42
{ .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64,
41
43
.opc0 = 3, .opc1 = 0, .crn = 10, .crm = 2, .opc2 = 0,
42
/* Illegal translation */
44
.access = PL1_RW, .accessfn = access_tvm_trvm,
43
if (section.mr == &io_mem_unassigned) {
45
+ .fgt = FGT_MAIR_EL1,
44
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
46
.fieldoffset = offsetof(CPUARMState, cp15.mair_el[1]),
45
47
.resetvalue = 0 },
46
/* This can be MMIO, so setup MMIO bit. */
48
{ .name = "MAIR_EL3", .state = ARM_CP_STATE_AA64,
47
section = flatview_do_translate(fv, addr, xlat, plen, NULL,
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
48
- is_write, true, &as);
50
{ .name = "TPIDR_EL0", .state = ARM_CP_STATE_AA64,
49
+ is_write, true, &as, attrs);
51
.opc0 = 3, .opc1 = 3, .opc2 = 2, .crn = 13, .crm = 0,
50
mr = section.mr;
52
.access = PL0_RW,
51
53
+ .fgt = FGT_TPIDR_EL0,
52
if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
54
.fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[0]), .resetvalue = 0 },
55
{ .name = "TPIDRURW", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 2,
56
.access = PL0_RW,
57
+ .fgt = FGT_TPIDR_EL0,
58
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrurw_s),
59
offsetoflow32(CPUARMState, cp15.tpidrurw_ns) },
60
.resetfn = arm_cp_reset_ignore },
61
{ .name = "TPIDRRO_EL0", .state = ARM_CP_STATE_AA64,
62
.opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0,
63
.access = PL0_R | PL1_W,
64
+ .fgt = FGT_TPIDRRO_EL0,
65
.fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]),
66
.resetvalue = 0},
67
{ .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3,
68
.access = PL0_R | PL1_W,
69
+ .fgt = FGT_TPIDRRO_EL0,
70
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s),
71
offsetoflow32(CPUARMState, cp15.tpidruro_ns) },
72
.resetfn = arm_cp_reset_ignore },
73
{ .name = "TPIDR_EL1", .state = ARM_CP_STATE_AA64,
74
.opc0 = 3, .opc1 = 0, .opc2 = 4, .crn = 13, .crm = 0,
75
.access = PL1_RW,
76
+ .fgt = FGT_TPIDR_EL1,
77
.fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[1]), .resetvalue = 0 },
78
{ .name = "TPIDRPRW", .opc1 = 0, .cp = 15, .crn = 13, .crm = 0, .opc2 = 4,
79
.access = PL1_RW,
80
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
81
{ .name = "TCR_EL1", .state = ARM_CP_STATE_AA64,
82
.opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
83
.access = PL1_RW, .accessfn = access_tvm_trvm,
84
+ .fgt = FGT_TCR_EL1,
85
.writefn = vmsa_tcr_el12_write,
86
.raw_writefn = raw_write,
87
.resetvalue = 0,
88
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
89
.type = ARM_CP_ALIAS,
90
.opc0 = 3, .opc1 = 0, .crn = 7, .crm = 4, .opc2 = 0,
91
.access = PL1_RW, .resetvalue = 0,
92
+ .fgt = FGT_PAR_EL1,
93
.fieldoffset = offsetof(CPUARMState, cp15.par_el[1]),
94
.writefn = par_write },
95
#endif
96
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo scxtnum_reginfo[] = {
97
{ .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
98
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
99
.access = PL0_RW, .accessfn = access_scxtnum,
100
+ .fgt = FGT_SCXTNUM_EL0,
101
.fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
102
{ .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
103
.opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
104
.access = PL1_RW, .accessfn = access_scxtnum,
105
+ .fgt = FGT_SCXTNUM_EL1,
106
.fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
107
{ .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
108
.opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
109
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
110
{ .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH,
111
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 0,
112
.access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr,
113
+ .fgt = FGT_MIDR_EL1,
114
.fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid),
115
.readfn = midr_read },
116
/* crn = 0 op1 = 0 crm = 0 op2 = 7 : AArch32 aliases of MIDR */
117
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
118
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 6,
119
.access = PL1_R,
120
.accessfn = access_aa64_tid1,
121
+ .fgt = FGT_REVIDR_EL1,
122
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
123
};
124
ARMCPRegInfo id_v8_midr_alias_cp_reginfo = {
125
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
126
ARMCPRegInfo mpidr_cp_reginfo[] = {
127
{ .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
128
.opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
129
+ .fgt = FGT_MPIDR_EL1,
130
.access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
131
};
132
#ifdef CONFIG_USER_ONLY
133
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
134
.name = "SCTLR", .state = ARM_CP_STATE_BOTH,
135
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 0,
136
.access = PL1_RW, .accessfn = access_tvm_trvm,
137
+ .fgt = FGT_SCTLR_EL1,
138
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.sctlr_s),
139
offsetof(CPUARMState, cp15.sctlr_ns) },
140
.writefn = sctlr_write, .resetvalue = cpu->reset_sctlr,
53
--
141
--
54
2.17.1
142
2.34.1
55
56
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Mark up the sysreg definitions for the registers trapped
2
add MemTxAttrs as an argument to address_space_get_iotlb_entry().
2
by HFGRTR/HFGWTR bits 36..63.
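This group includes two reverse-polarity bits (nSMPRI_EL1 and
nTPIDR2_EL0, added via DO_REV_BIT): for those, the access is trapped
when the HFGRTR_EL2/HFGWTR_EL2 bit is 0 rather than 1. A minimal
standalone sketch of the polarity difference follows; the struct,
helper name and bit positions below are made up for illustration and
are not the real QEMU FGTBit encoding:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: a fine-grained trap bit is a position in
     * HFGRTR_EL2 plus a polarity.  "nXXX" bits trap when clear.
     */
    typedef struct {
        unsigned bitpos;
        bool reverse;          /* true for DO_REV_BIT style entries */
    } FGTBitSketch;

    static bool fgt_read_traps(uint64_t hfgrtr_el2, FGTBitSketch b)
    {
        bool set = (hfgrtr_el2 >> b.bitpos) & 1;
        return b.reverse ? !set : set;
    }

    int main(void)
    {
        /* Hypothetical bit positions, for the demo only. */
        FGTBitSketch ttbr0_el1  = { .bitpos = 1, .reverse = false };
        FGTBitSketch nsmpri_el1 = { .bitpos = 2, .reverse = true  };
        uint64_t hfgrtr = 0;   /* all bits clear */

        /* Normal bit: clear means "do not trap"; reverse bit: clear
         * means "trap".  (Both also depend on EL2 being enabled and
         * the usual HCR_EL2 routing, not shown here.)
         */
        printf("TTBR0_EL1 read traps: %d\n", fgt_read_traps(hfgrtr, ttbr0_el1));
        printf("SMPRI_EL1 read traps: %d\n", fgt_read_traps(hfgrtr, nsmpri_el1));
        return 0;
    }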
3
4
Of these, some correspond to RAS registers which we implement as
5
always-UNDEF: these don't need any extra handling for FGT because the
6
UNDEF-to-EL1 always takes priority over any theoretical
7
FGT-trap-to-EL2.
8
9
Bit 50 (NACCDATA_EL1) is for the ACCDATA_EL1 register which is part
10
of the FEAT_LS64_ACCDATA feature which we don't yet implement.
3
11
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180521140402.23318-12-peter.maydell@linaro.org
14
Tested-by: Fuad Tabba <tabba@google.com>
15
Message-id: 20230130182459.3309057-14-peter.maydell@linaro.org
16
Message-id: 20230127175507.2895013-14-peter.maydell@linaro.org
8
---
17
---
9
include/exec/memory.h | 2 +-
18
target/arm/cpregs.h | 7 +++++++
10
exec.c | 2 +-
19
hw/intc/arm_gicv3_cpuif.c | 2 ++
11
hw/virtio/vhost.c | 3 ++-
20
target/arm/helper.c | 10 ++++++++++
12
3 files changed, 4 insertions(+), 3 deletions(-)
21
3 files changed, 19 insertions(+)
13
22
14
diff --git a/include/exec/memory.h b/include/exec/memory.h
23
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/memory.h
25
--- a/target/arm/cpregs.h
17
+++ b/include/exec/memory.h
26
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache);
27
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
19
* entry. Should be called from an RCU critical section.
28
DO_BIT(HFGRTR, TPIDR_EL1),
29
DO_BIT(HFGRTR, TPIDRRO_EL0),
30
DO_BIT(HFGRTR, TPIDR_EL0),
31
+ DO_BIT(HFGRTR, TTBR0_EL1),
32
+ DO_BIT(HFGRTR, TTBR1_EL1),
33
+ DO_BIT(HFGRTR, VBAR_EL1),
34
+ DO_BIT(HFGRTR, ICC_IGRPENN_EL1),
35
+ DO_BIT(HFGRTR, ERRIDR_EL1),
36
+ DO_REV_BIT(HFGRTR, NSMPRI_EL1),
37
+ DO_REV_BIT(HFGRTR, NTPIDR2_EL0),
38
} FGTBit;
39
40
#undef DO_BIT
41
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/hw/intc/arm_gicv3_cpuif.c
44
+++ b/hw/intc/arm_gicv3_cpuif.c
45
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
46
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 6,
47
.type = ARM_CP_IO | ARM_CP_NO_RAW,
48
.access = PL1_RW, .accessfn = gicv3_fiq_access,
49
+ .fgt = FGT_ICC_IGRPENN_EL1,
50
.readfn = icc_igrpen_read,
51
.writefn = icc_igrpen_write,
52
},
53
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
54
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 7,
55
.type = ARM_CP_IO | ARM_CP_NO_RAW,
56
.access = PL1_RW, .accessfn = gicv3_irq_access,
57
+ .fgt = FGT_ICC_IGRPENN_EL1,
58
.readfn = icc_igrpen_read,
59
.writefn = icc_igrpen_write,
60
},
61
diff --git a/target/arm/helper.c b/target/arm/helper.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/helper.c
64
+++ b/target/arm/helper.c
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
66
{ .name = "TTBR0_EL1", .state = ARM_CP_STATE_BOTH,
67
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 0,
68
.access = PL1_RW, .accessfn = access_tvm_trvm,
69
+ .fgt = FGT_TTBR0_EL1,
70
.writefn = vmsa_ttbr_write, .resetvalue = 0,
71
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr0_s),
72
offsetof(CPUARMState, cp15.ttbr0_ns) } },
73
{ .name = "TTBR1_EL1", .state = ARM_CP_STATE_BOTH,
74
.opc0 = 3, .opc1 = 0, .crn = 2, .crm = 0, .opc2 = 1,
75
.access = PL1_RW, .accessfn = access_tvm_trvm,
76
+ .fgt = FGT_TTBR1_EL1,
77
.writefn = vmsa_ttbr_write, .resetvalue = 0,
78
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
79
offsetof(CPUARMState, cp15.ttbr1_ns) } },
80
@@ -XXX,XX +XXX,XX @@ static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
81
* ERRSELR_EL1
82
* may generate UNDEFINED, which is the effect we get by not
83
* listing them at all.
84
+ *
85
+ * These registers have fine-grained trap bits, but UNDEF-to-EL1
86
+ * is higher priority than FGT-to-EL2 so we do not need to list them
87
+ * in order to check for an FGT.
20
*/
88
*/
21
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
89
static const ARMCPRegInfo minimal_ras_reginfo[] = {
22
- bool is_write);
90
{ .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
23
+ bool is_write, MemTxAttrs attrs);
91
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo minimal_ras_reginfo[] = {
24
92
{ .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
25
/* address_space_translate: translate an address range into an address space
93
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
26
* into a MemoryRegion and an address range into that section. Should be
94
.access = PL1_R, .accessfn = access_terr,
27
diff --git a/exec.c b/exec.c
95
+ .fgt = FGT_ERRIDR_EL1,
28
index XXXXXXX..XXXXXXX 100644
96
.type = ARM_CP_CONST, .resetvalue = 0 },
29
--- a/exec.c
97
{ .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
30
+++ b/exec.c
98
.opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
31
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
99
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
32
100
{ .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
33
/* Called from RCU critical section */
101
.opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
34
IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
102
.access = PL0_RW, .accessfn = access_tpidr2,
35
- bool is_write)
103
+ .fgt = FGT_NTPIDR2_EL0,
36
+ bool is_write, MemTxAttrs attrs)
104
.fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
37
{
105
{ .name = "SVCR", .state = ARM_CP_STATE_AA64,
38
MemoryRegionSection section;
106
.opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 2,
39
hwaddr xlat, page_mask;
107
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
40
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
108
{ .name = "SMPRI_EL1", .state = ARM_CP_STATE_AA64,
41
index XXXXXXX..XXXXXXX 100644
109
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 4,
42
--- a/hw/virtio/vhost.c
110
.access = PL1_RW, .accessfn = access_esm,
43
+++ b/hw/virtio/vhost.c
111
+ .fgt = FGT_NSMPRI_EL1,
44
@@ -XXX,XX +XXX,XX @@ int vhost_device_iotlb_miss(struct vhost_dev *dev, uint64_t iova, int write)
112
.type = ARM_CP_CONST, .resetvalue = 0 },
45
trace_vhost_iotlb_miss(dev, 1);
113
{ .name = "SMPRIMAP_EL2", .state = ARM_CP_STATE_AA64,
46
114
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 5,
47
iotlb = address_space_get_iotlb_entry(dev->vdev->dma_as,
115
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
48
- iova, write);
116
{ .name = "VBAR", .state = ARM_CP_STATE_BOTH,
49
+ iova, write,
117
.opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0,
50
+ MEMTXATTRS_UNSPECIFIED);
118
.access = PL1_RW, .writefn = vbar_write,
51
if (iotlb.target_as != NULL) {
119
+ .fgt = FGT_VBAR_EL1,
52
ret = vhost_memory_region_lookup(dev, iotlb.translated_addr,
120
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s),
53
&uaddr, &len);
121
offsetof(CPUARMState, cp15.vbar_ns) },
122
.resetvalue = 0 },
54
--
123
--
55
2.17.1
124
2.34.1
56
57
New patch
1
Mark up the sysreg definitions for the registers trapped
by HDFGRTR/HDFGWTR bits 0..11. These cover various
debug-related registers.
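The DBGBCRN/DBGBVRN/DBGWCRN/DBGWVRN names reflect that the architecture
provides a single trap bit for each whole numbered group of registers,
not one bit per register; the per-index loop in define_debug_regs()
therefore attaches the same .fgt value to every DBGBCR<n>_EL1 and so
on. A rough standalone sketch of that "one bit guards the whole group"
idea (simplified names, not the real ARMCPRegInfo layout):

    #include <stdio.h>

    enum { FGT_DBGBCRN_EL1 = 1 };

    struct reg_sketch {
        char name[16];
        int fgt;                 /* trap bit guarding this register */
    };

    int main(void)
    {
        struct reg_sketch regs[16];

        /* Every numbered breakpoint register gets the same marking. */
        for (int i = 0; i < 16; i++) {
            snprintf(regs[i].name, sizeof(regs[i].name), "DBGBCR%d_EL1", i);
            regs[i].fgt = FGT_DBGBCRN_EL1;
        }
        printf("%s and %s are guarded by the same bit (%d)\n",
               regs[0].name, regs[15].name, regs[0].fgt);
        return 0;
    }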
1
4
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Tested-by: Fuad Tabba <tabba@google.com>
8
Message-id: 20230130182459.3309057-15-peter.maydell@linaro.org
9
Message-id: 20230127175507.2895013-15-peter.maydell@linaro.org
10
---
11
target/arm/cpregs.h | 12 ++++++++++++
12
target/arm/debug_helper.c | 11 +++++++++++
13
2 files changed, 23 insertions(+)
14
15
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpregs.h
18
+++ b/target/arm/cpregs.h
19
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
20
DO_BIT(HFGRTR, ERRIDR_EL1),
21
DO_REV_BIT(HFGRTR, NSMPRI_EL1),
22
DO_REV_BIT(HFGRTR, NTPIDR2_EL0),
23
+
24
+ /* Trap bits in HDFGRTR_EL2 / HDFGWTR_EL2, starting from bit 0. */
25
+ DO_BIT(HDFGRTR, DBGBCRN_EL1),
26
+ DO_BIT(HDFGRTR, DBGBVRN_EL1),
27
+ DO_BIT(HDFGRTR, DBGWCRN_EL1),
28
+ DO_BIT(HDFGRTR, DBGWVRN_EL1),
29
+ DO_BIT(HDFGRTR, MDSCR_EL1),
30
+ DO_BIT(HDFGRTR, DBGCLAIM),
31
+ DO_BIT(HDFGWTR, OSLAR_EL1),
32
+ DO_BIT(HDFGRTR, OSLSR_EL1),
33
+ DO_BIT(HDFGRTR, OSECCR_EL1),
34
+ DO_BIT(HDFGRTR, OSDLR_EL1),
35
} FGTBit;
36
37
#undef DO_BIT
38
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/debug_helper.c
41
+++ b/target/arm/debug_helper.c
42
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
43
{ .name = "MDSCR_EL1", .state = ARM_CP_STATE_BOTH,
44
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
45
.access = PL1_RW, .accessfn = access_tda,
46
+ .fgt = FGT_MDSCR_EL1,
47
.fieldoffset = offsetof(CPUARMState, cp15.mdscr_el1),
48
.resetvalue = 0 },
49
/*
50
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
51
{ .name = "OSECCR_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
52
.opc0 = 2, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
53
.access = PL1_RW, .accessfn = access_tda,
54
+ .fgt = FGT_OSECCR_EL1,
55
.type = ARM_CP_CONST, .resetvalue = 0 },
56
/*
57
* DBGDSCRint[15,12,5:2] map to MDSCR_EL1[15,12,5:2]. Map all bits as
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
59
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 0, .opc2 = 4,
60
.access = PL1_W, .type = ARM_CP_NO_RAW,
61
.accessfn = access_tdosa,
62
+ .fgt = FGT_OSLAR_EL1,
63
.writefn = oslar_write },
64
{ .name = "OSLSR_EL1", .state = ARM_CP_STATE_BOTH,
65
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 4,
66
.access = PL1_R, .resetvalue = 10,
67
.accessfn = access_tdosa,
68
+ .fgt = FGT_OSLSR_EL1,
69
.fieldoffset = offsetof(CPUARMState, cp15.oslsr_el1) },
70
/* Dummy OSDLR_EL1: 32-bit Linux will read this */
71
{ .name = "OSDLR_EL1", .state = ARM_CP_STATE_BOTH,
72
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 1, .crm = 3, .opc2 = 4,
73
.access = PL1_RW, .accessfn = access_tdosa,
74
+ .fgt = FGT_OSDLR_EL1,
75
.writefn = osdlr_write,
76
.fieldoffset = offsetof(CPUARMState, cp15.osdlr_el1) },
77
/*
78
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
79
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 6,
80
.type = ARM_CP_ALIAS,
81
.access = PL1_RW, .accessfn = access_tda,
82
+ .fgt = FGT_DBGCLAIM,
83
.writefn = dbgclaimset_write, .readfn = dbgclaimset_read },
84
{ .name = "DBGCLAIMCLR_EL1", .state = ARM_CP_STATE_BOTH,
85
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 6,
86
.access = PL1_RW, .accessfn = access_tda,
87
+ .fgt = FGT_DBGCLAIM,
88
.writefn = dbgclaimclr_write, .raw_writefn = raw_write,
89
.fieldoffset = offsetof(CPUARMState, cp15.dbgclaim) },
90
};
91
@@ -XXX,XX +XXX,XX @@ void define_debug_regs(ARMCPU *cpu)
92
{ .name = dbgbvr_el1_name, .state = ARM_CP_STATE_BOTH,
93
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
94
.access = PL1_RW, .accessfn = access_tda,
95
+ .fgt = FGT_DBGBVRN_EL1,
96
.fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]),
97
.writefn = dbgbvr_write, .raw_writefn = raw_write
98
},
99
{ .name = dbgbcr_el1_name, .state = ARM_CP_STATE_BOTH,
100
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5,
101
.access = PL1_RW, .accessfn = access_tda,
102
+ .fgt = FGT_DBGBCRN_EL1,
103
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
104
.writefn = dbgbcr_write, .raw_writefn = raw_write
105
},
106
@@ -XXX,XX +XXX,XX @@ void define_debug_regs(ARMCPU *cpu)
107
{ .name = dbgwvr_el1_name, .state = ARM_CP_STATE_BOTH,
108
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
109
.access = PL1_RW, .accessfn = access_tda,
110
+ .fgt = FGT_DBGWVRN_EL1,
111
.fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]),
112
.writefn = dbgwvr_write, .raw_writefn = raw_write
113
},
114
{ .name = dbgwcr_el1_name, .state = ARM_CP_STATE_BOTH,
115
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7,
116
.access = PL1_RW, .accessfn = access_tda,
117
+ .fgt = FGT_DBGWCRN_EL1,
118
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
119
.writefn = dbgwcr_write, .raw_writefn = raw_write
120
},
121
--
122
2.34.1
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Mark up the sysreg definitions for the registers trapped
2
add MemTxAttrs as an argument to flatview_extend_translation().
2
by HDFGRTR/HDFGWTR bits 12..63.
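These generally come in pairs: a read-trap bit in HDFGRTR_EL2 and a
write-trap bit at the corresponding position in HDFGWTR_EL2. Two of
the new entries (PMSWINC_EL0, which is write-only, and PMCR_EL0) take
their position from HDFGWTR_EL2 instead, hence the DO_BIT(HDFGWTR, ...)
spellings in the enum additions below. A small standalone sketch of the
read/write split (placeholder bit numbers, not the architectural ones):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: reads consult HDFGRTR_EL2, writes HDFGWTR_EL2. */
    static bool pmu_reg_traps(uint64_t hdfgrtr, uint64_t hdfgwtr,
                              unsigned bit, bool is_write)
    {
        return ((is_write ? hdfgwtr : hdfgrtr) >> bit) & 1;
    }

    int main(void)
    {
        enum { DEMO_PMCCNTR = 7, DEMO_PMSWINC = 9 };  /* made-up positions */
        uint64_t hdfgrtr = 1ULL << DEMO_PMCCNTR;      /* trap PMCCNTR reads */
        uint64_t hdfgwtr = 1ULL << DEMO_PMSWINC;      /* trap PMSWINC writes */

        printf("PMCCNTR_EL0 read traps:  %d\n",
               pmu_reg_traps(hdfgrtr, hdfgwtr, DEMO_PMCCNTR, false));
        printf("PMCCNTR_EL0 write traps: %d\n",
               pmu_reg_traps(hdfgrtr, hdfgwtr, DEMO_PMCCNTR, true));
        printf("PMSWINC_EL0 write traps: %d\n",
               pmu_reg_traps(hdfgrtr, hdfgwtr, DEMO_PMSWINC, true));
        return 0;
    }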
3
Its callers either have an attrs value to hand, or don't care
3
4
and can use MEMTXATTRS_UNSPECIFIED.
4
Bits 12..22 and bit 58 are for PMU registers.
5
6
The remaining bits in HDFGRTR/HDFGWTR are for traps on
7
registers that are part of features we don't implement:
8
9
Bits 23..32 and 63 : FEAT_SPE
10
Bits 33..48 : FEAT_ETE
11
Bits 50..56 : FEAT_TRBE
12
Bits 59..61 : FEAT_BRBE
13
Bit 62 : FEAT_SPEv1p2.
5
14
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-7-peter.maydell@linaro.org
17
Tested-by: Fuad Tabba <tabba@google.com>
18
Message-id: 20230130182459.3309057-16-peter.maydell@linaro.org
19
Message-id: 20230127175507.2895013-16-peter.maydell@linaro.org
10
---
20
---
11
exec.c | 15 ++++++++++-----
21
target/arm/cpregs.h | 12 ++++++++++++
12
1 file changed, 10 insertions(+), 5 deletions(-)
22
target/arm/helper.c | 37 +++++++++++++++++++++++++++++++++++++
13
23
2 files changed, 49 insertions(+)
14
diff --git a/exec.c b/exec.c
24
25
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
16
--- a/exec.c
27
--- a/target/arm/cpregs.h
17
+++ b/exec.c
28
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
29
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
19
30
DO_BIT(HDFGRTR, OSLSR_EL1),
20
static hwaddr
31
DO_BIT(HDFGRTR, OSECCR_EL1),
21
flatview_extend_translation(FlatView *fv, hwaddr addr,
32
DO_BIT(HDFGRTR, OSDLR_EL1),
22
- hwaddr target_len,
33
+ DO_BIT(HDFGRTR, PMEVCNTRN_EL0),
23
- MemoryRegion *mr, hwaddr base, hwaddr len,
34
+ DO_BIT(HDFGRTR, PMEVTYPERN_EL0),
24
- bool is_write)
35
+ DO_BIT(HDFGRTR, PMCCFILTR_EL0),
25
+ hwaddr target_len,
36
+ DO_BIT(HDFGRTR, PMCCNTR_EL0),
26
+ MemoryRegion *mr, hwaddr base, hwaddr len,
37
+ DO_BIT(HDFGRTR, PMCNTEN),
27
+ bool is_write, MemTxAttrs attrs)
38
+ DO_BIT(HDFGRTR, PMINTEN),
28
{
39
+ DO_BIT(HDFGRTR, PMOVS),
29
hwaddr done = 0;
40
+ DO_BIT(HDFGRTR, PMSELR_EL0),
30
hwaddr xlat;
41
+ DO_BIT(HDFGWTR, PMSWINC_EL0),
31
@@ -XXX,XX +XXX,XX @@ void *address_space_map(AddressSpace *as,
42
+ DO_BIT(HDFGWTR, PMCR_EL0),
32
43
+ DO_BIT(HDFGRTR, PMMIR_EL1),
33
memory_region_ref(mr);
44
+ DO_BIT(HDFGRTR, PMCEIDN_EL0),
34
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
45
} FGTBit;
35
- l, is_write);
46
36
+ l, is_write, attrs);
47
#undef DO_BIT
37
ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
48
diff --git a/target/arm/helper.c b/target/arm/helper.c
38
rcu_read_unlock();
49
index XXXXXXX..XXXXXXX 100644
39
50
--- a/target/arm/helper.c
40
@@ -XXX,XX +XXX,XX @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
51
+++ b/target/arm/helper.c
41
mr = cache->mrs.mr;
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
42
memory_region_ref(mr);
53
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten),
43
if (memory_access_is_direct(mr, is_write)) {
54
.writefn = pmcntenset_write,
44
+ /* We don't care about the memory attributes here as we're only
55
.accessfn = pmreg_access,
45
+ * doing this if we found actual RAM, which behaves the same
56
+ .fgt = FGT_PMCNTEN,
46
+ * regardless of attributes; so UNSPECIFIED is fine.
57
.raw_writefn = raw_write },
47
+ */
58
{ .name = "PMCNTENSET_EL0", .state = ARM_CP_STATE_AA64, .type = ARM_CP_IO,
48
l = flatview_extend_translation(cache->fv, addr, len, mr,
59
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 1,
49
- cache->xlat, l, is_write);
60
.access = PL0_RW, .accessfn = pmreg_access,
50
+ cache->xlat, l, is_write,
61
+ .fgt = FGT_PMCNTEN,
51
+ MEMTXATTRS_UNSPECIFIED);
62
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten), .resetvalue = 0,
52
cache->ptr = qemu_ram_ptr_length(mr->ram_block, cache->xlat, &l, true);
63
.writefn = pmcntenset_write, .raw_writefn = raw_write },
53
} else {
64
{ .name = "PMCNTENCLR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 2,
54
cache->ptr = NULL;
65
.access = PL0_RW,
66
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcnten),
67
.accessfn = pmreg_access,
68
+ .fgt = FGT_PMCNTEN,
69
.writefn = pmcntenclr_write,
70
.type = ARM_CP_ALIAS | ARM_CP_IO },
71
{ .name = "PMCNTENCLR_EL0", .state = ARM_CP_STATE_AA64,
72
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 2,
73
.access = PL0_RW, .accessfn = pmreg_access,
74
+ .fgt = FGT_PMCNTEN,
75
.type = ARM_CP_ALIAS | ARM_CP_IO,
76
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcnten),
77
.writefn = pmcntenclr_write },
78
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
79
.access = PL0_RW, .type = ARM_CP_IO,
80
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr),
81
.accessfn = pmreg_access,
82
+ .fgt = FGT_PMOVS,
83
.writefn = pmovsr_write,
84
.raw_writefn = raw_write },
85
{ .name = "PMOVSCLR_EL0", .state = ARM_CP_STATE_AA64,
86
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 3,
87
.access = PL0_RW, .accessfn = pmreg_access,
88
+ .fgt = FGT_PMOVS,
89
.type = ARM_CP_ALIAS | ARM_CP_IO,
90
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
91
.writefn = pmovsr_write,
92
.raw_writefn = raw_write },
93
{ .name = "PMSWINC", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 4,
94
.access = PL0_W, .accessfn = pmreg_access_swinc,
95
+ .fgt = FGT_PMSWINC_EL0,
96
.type = ARM_CP_NO_RAW | ARM_CP_IO,
97
.writefn = pmswinc_write },
98
{ .name = "PMSWINC_EL0", .state = ARM_CP_STATE_AA64,
99
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 4,
100
.access = PL0_W, .accessfn = pmreg_access_swinc,
101
+ .fgt = FGT_PMSWINC_EL0,
102
.type = ARM_CP_NO_RAW | ARM_CP_IO,
103
.writefn = pmswinc_write },
104
{ .name = "PMSELR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 5,
105
.access = PL0_RW, .type = ARM_CP_ALIAS,
106
+ .fgt = FGT_PMSELR_EL0,
107
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmselr),
108
.accessfn = pmreg_access_selr, .writefn = pmselr_write,
109
.raw_writefn = raw_write},
110
{ .name = "PMSELR_EL0", .state = ARM_CP_STATE_AA64,
111
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 5,
112
.access = PL0_RW, .accessfn = pmreg_access_selr,
113
+ .fgt = FGT_PMSELR_EL0,
114
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmselr),
115
.writefn = pmselr_write, .raw_writefn = raw_write, },
116
{ .name = "PMCCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 0,
117
.access = PL0_RW, .resetvalue = 0, .type = ARM_CP_ALIAS | ARM_CP_IO,
118
+ .fgt = FGT_PMCCNTR_EL0,
119
.readfn = pmccntr_read, .writefn = pmccntr_write32,
120
.accessfn = pmreg_access_ccntr },
121
{ .name = "PMCCNTR_EL0", .state = ARM_CP_STATE_AA64,
122
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 0,
123
.access = PL0_RW, .accessfn = pmreg_access_ccntr,
124
+ .fgt = FGT_PMCCNTR_EL0,
125
.type = ARM_CP_IO,
126
.fieldoffset = offsetof(CPUARMState, cp15.c15_ccnt),
127
.readfn = pmccntr_read, .writefn = pmccntr_write,
128
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
129
{ .name = "PMCCFILTR", .cp = 15, .opc1 = 0, .crn = 14, .crm = 15, .opc2 = 7,
130
.writefn = pmccfiltr_write_a32, .readfn = pmccfiltr_read_a32,
131
.access = PL0_RW, .accessfn = pmreg_access,
132
+ .fgt = FGT_PMCCFILTR_EL0,
133
.type = ARM_CP_ALIAS | ARM_CP_IO,
134
.resetvalue = 0, },
135
{ .name = "PMCCFILTR_EL0", .state = ARM_CP_STATE_AA64,
136
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 15, .opc2 = 7,
137
.writefn = pmccfiltr_write, .raw_writefn = raw_write,
138
.access = PL0_RW, .accessfn = pmreg_access,
139
+ .fgt = FGT_PMCCFILTR_EL0,
140
.type = ARM_CP_IO,
141
.fieldoffset = offsetof(CPUARMState, cp15.pmccfiltr_el0),
142
.resetvalue = 0, },
143
{ .name = "PMXEVTYPER", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 1,
144
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
145
.accessfn = pmreg_access,
146
+ .fgt = FGT_PMEVTYPERN_EL0,
147
.writefn = pmxevtyper_write, .readfn = pmxevtyper_read },
148
{ .name = "PMXEVTYPER_EL0", .state = ARM_CP_STATE_AA64,
149
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 1,
150
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
151
.accessfn = pmreg_access,
152
+ .fgt = FGT_PMEVTYPERN_EL0,
153
.writefn = pmxevtyper_write, .readfn = pmxevtyper_read },
154
{ .name = "PMXEVCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 2,
155
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
156
.accessfn = pmreg_access_xevcntr,
157
+ .fgt = FGT_PMEVCNTRN_EL0,
158
.writefn = pmxevcntr_write, .readfn = pmxevcntr_read },
159
{ .name = "PMXEVCNTR_EL0", .state = ARM_CP_STATE_AA64,
160
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 2,
161
.access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
162
.accessfn = pmreg_access_xevcntr,
163
+ .fgt = FGT_PMEVCNTRN_EL0,
164
.writefn = pmxevcntr_write, .readfn = pmxevcntr_read },
165
{ .name = "PMUSERENR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 0,
166
.access = PL0_R | PL1_RW, .accessfn = access_tpm,
167
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
168
.writefn = pmuserenr_write, .raw_writefn = raw_write },
169
{ .name = "PMINTENSET", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 1,
170
.access = PL1_RW, .accessfn = access_tpm,
171
+ .fgt = FGT_PMINTEN,
172
.type = ARM_CP_ALIAS | ARM_CP_IO,
173
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pminten),
174
.resetvalue = 0,
175
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
176
{ .name = "PMINTENSET_EL1", .state = ARM_CP_STATE_AA64,
177
.opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 1,
178
.access = PL1_RW, .accessfn = access_tpm,
179
+ .fgt = FGT_PMINTEN,
180
.type = ARM_CP_IO,
181
.fieldoffset = offsetof(CPUARMState, cp15.c9_pminten),
182
.writefn = pmintenset_write, .raw_writefn = raw_write,
183
.resetvalue = 0x0 },
184
{ .name = "PMINTENCLR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 2,
185
.access = PL1_RW, .accessfn = access_tpm,
186
+ .fgt = FGT_PMINTEN,
187
.type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW,
188
.fieldoffset = offsetof(CPUARMState, cp15.c9_pminten),
189
.writefn = pmintenclr_write, },
190
{ .name = "PMINTENCLR_EL1", .state = ARM_CP_STATE_AA64,
191
.opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 2,
192
.access = PL1_RW, .accessfn = access_tpm,
193
+ .fgt = FGT_PMINTEN,
194
.type = ARM_CP_ALIAS | ARM_CP_IO | ARM_CP_NO_RAW,
195
.fieldoffset = offsetof(CPUARMState, cp15.c9_pminten),
196
.writefn = pmintenclr_write },
197
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
198
/* PMOVSSET is not implemented in v7 before v7ve */
199
{ .name = "PMOVSSET", .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 3,
200
.access = PL0_RW, .accessfn = pmreg_access,
201
+ .fgt = FGT_PMOVS,
202
.type = ARM_CP_ALIAS | ARM_CP_IO,
203
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr),
204
.writefn = pmovsset_write,
205
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
206
{ .name = "PMOVSSET_EL0", .state = ARM_CP_STATE_AA64,
207
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 3,
208
.access = PL0_RW, .accessfn = pmreg_access,
209
+ .fgt = FGT_PMOVS,
210
.type = ARM_CP_ALIAS | ARM_CP_IO,
211
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
212
.writefn = pmovsset_write,
213
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
214
ARMCPRegInfo pmcr = {
215
.name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
216
.access = PL0_RW,
217
+ .fgt = FGT_PMCR_EL0,
218
.type = ARM_CP_IO | ARM_CP_ALIAS,
219
.fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmcr),
220
.accessfn = pmreg_access, .writefn = pmcr_write,
221
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
222
.name = "PMCR_EL0", .state = ARM_CP_STATE_AA64,
223
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 0,
224
.access = PL0_RW, .accessfn = pmreg_access,
225
+ .fgt = FGT_PMCR_EL0,
226
.type = ARM_CP_IO,
227
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
228
.resetvalue = cpu->isar.reset_pmcr_el0,
229
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
230
{ .name = pmevcntr_name, .cp = 15, .crn = 14,
231
.crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
232
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
233
+ .fgt = FGT_PMEVCNTRN_EL0,
234
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
235
.accessfn = pmreg_access_xevcntr },
236
{ .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
237
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
238
.opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access_xevcntr,
239
.type = ARM_CP_IO,
240
+ .fgt = FGT_PMEVCNTRN_EL0,
241
.readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
242
.raw_readfn = pmevcntr_rawread,
243
.raw_writefn = pmevcntr_rawwrite },
244
{ .name = pmevtyper_name, .cp = 15, .crn = 14,
245
.crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
246
.access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
247
+ .fgt = FGT_PMEVTYPERN_EL0,
248
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
249
.accessfn = pmreg_access },
250
{ .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64,
251
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 12 | (3 & (i >> 3)),
252
.opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
253
+ .fgt = FGT_PMEVTYPERN_EL0,
254
.type = ARM_CP_IO,
255
.readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
256
.raw_writefn = pmevtyper_rawwrite },
257
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
258
{ .name = "PMCEID2", .state = ARM_CP_STATE_AA32,
259
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4,
260
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
261
+ .fgt = FGT_PMCEIDN_EL0,
262
.resetvalue = extract64(cpu->pmceid0, 32, 32) },
263
{ .name = "PMCEID3", .state = ARM_CP_STATE_AA32,
264
.cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
265
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
266
+ .fgt = FGT_PMCEIDN_EL0,
267
.resetvalue = extract64(cpu->pmceid1, 32, 32) },
268
};
269
define_arm_cp_regs(cpu, v81_pmu_regs);
270
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
271
.name = "PMMIR_EL1", .state = ARM_CP_STATE_BOTH,
272
.opc0 = 3, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 6,
273
.access = PL1_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
274
+ .fgt = FGT_PMMIR_EL1,
275
.resetvalue = 0
276
};
277
define_one_arm_cp_reg(cpu, &v84_pmmir);
278
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
279
{ .name = "PMCEID0", .state = ARM_CP_STATE_AA32,
280
.cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 6,
281
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
282
+ .fgt = FGT_PMCEIDN_EL0,
283
.resetvalue = extract64(cpu->pmceid0, 0, 32) },
284
{ .name = "PMCEID0_EL0", .state = ARM_CP_STATE_AA64,
285
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 6,
286
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
287
+ .fgt = FGT_PMCEIDN_EL0,
288
.resetvalue = cpu->pmceid0 },
289
{ .name = "PMCEID1", .state = ARM_CP_STATE_AA32,
290
.cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 7,
291
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
292
+ .fgt = FGT_PMCEIDN_EL0,
293
.resetvalue = extract64(cpu->pmceid1, 0, 32) },
294
{ .name = "PMCEID1_EL0", .state = ARM_CP_STATE_AA64,
295
.opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
296
.access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
297
+ .fgt = FGT_PMCEIDN_EL0,
298
.resetvalue = cpu->pmceid1 },
299
};
300
#ifdef CONFIG_USER_ONLY
55
--
301
--
56
2.17.1
302
2.34.1
57
58
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Mark up the sysreg definitions for the system instructions
2
add MemTxAttrs as an argument to address_space_translate()
2
trapped by HFGITR bits 0..11. These bits cover various
3
and address_space_translate_cached(). Callers either have an
3
cache maintenance operations.
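Since the new comment in cpregs.h notes that these trap bits start from
bit 0 of HFGITR_EL2, and the enum lists them in bit order, the twelve
cache maintenance traps occupy HFGITR_EL2[11:0]. A rough usage sketch
from the hypervisor's point of view (illustrative only; double-check
the bit assignments against the Arm ARM, and enabling conditions such
as SCR_EL3.FGTEn and the usual HCR_EL2 routing are omitted):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bit positions as listed, starting from bit 0. */
    enum {
        HFGITR_ICIALLUIS, HFGITR_ICIALLU, HFGITR_ICIVAU,  HFGITR_DCIVAC,
        HFGITR_DCISW,     HFGITR_DCCSW,   HFGITR_DCCISW,  HFGITR_DCCVAU,
        HFGITR_DCCVAP,    HFGITR_DCCVADP, HFGITR_DCCIVAC, HFGITR_DCZVA,
    };

    int main(void)
    {
        /* Value an EL2 hypervisor could write to HFGITR_EL2 to trap all
         * of the above instructions when executed at EL1 (and at EL0
         * for the ones accessible there).
         */
        uint64_t hfgitr = 0;
        for (int bit = HFGITR_ICIALLUIS; bit <= HFGITR_DCZVA; bit++) {
            hfgitr |= UINT64_C(1) << bit;
        }
        printf("HFGITR_EL2 = 0x%" PRIx64 "\n", hfgitr);  /* 0xfff */
        return 0;
    }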
4
attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.
5
4
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
7
Tested-by: Fuad Tabba <tabba@google.com>
8
Message-id: 20230130182459.3309057-17-peter.maydell@linaro.org
9
Message-id: 20230127175507.2895013-17-peter.maydell@linaro.org
10
---
10
---
11
include/exec/memory.h | 4 +++-
11
target/arm/cpregs.h | 14 ++++++++++++++
12
accel/tcg/translate-all.c | 2 +-
12
target/arm/helper.c | 28 ++++++++++++++++++++++++++++
13
exec.c | 14 +++++++++-----
13
2 files changed, 42 insertions(+)
14
hw/vfio/common.c | 3 ++-
15
memory_ldst.inc.c | 18 +++++++++---------
16
target/riscv/helper.c | 2 +-
17
6 files changed, 25 insertions(+), 18 deletions(-)
18
14
19
diff --git a/include/exec/memory.h b/include/exec/memory.h
15
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
20
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/memory.h
17
--- a/target/arm/cpregs.h
22
+++ b/include/exec/memory.h
18
+++ b/target/arm/cpregs.h
23
@@ -XXX,XX +XXX,XX @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
19
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
24
* #MemoryRegion.
20
DO_BIT(HDFGWTR, PMCR_EL0),
25
* @len: pointer to length
21
DO_BIT(HDFGRTR, PMMIR_EL1),
26
* @is_write: indicates the transfer direction
22
DO_BIT(HDFGRTR, PMCEIDN_EL0),
27
+ * @attrs: memory attributes
23
+
28
*/
24
+ /* Trap bits in HFGITR_EL2, starting from bit 0 */
29
MemoryRegion *flatview_translate(FlatView *fv,
25
+ DO_BIT(HFGITR, ICIALLUIS),
30
hwaddr addr, hwaddr *xlat,
26
+ DO_BIT(HFGITR, ICIALLU),
31
@@ -XXX,XX +XXX,XX @@ MemoryRegion *flatview_translate(FlatView *fv,
27
+ DO_BIT(HFGITR, ICIVAU),
32
28
+ DO_BIT(HFGITR, DCIVAC),
33
static inline MemoryRegion *address_space_translate(AddressSpace *as,
29
+ DO_BIT(HFGITR, DCISW),
34
hwaddr addr, hwaddr *xlat,
30
+ DO_BIT(HFGITR, DCCSW),
35
- hwaddr *len, bool is_write)
31
+ DO_BIT(HFGITR, DCCISW),
36
+ hwaddr *len, bool is_write,
32
+ DO_BIT(HFGITR, DCCVAU),
37
+ MemTxAttrs attrs)
33
+ DO_BIT(HFGITR, DCCVAP),
38
{
34
+ DO_BIT(HFGITR, DCCVADP),
39
return flatview_translate(address_space_to_flatview(as),
35
+ DO_BIT(HFGITR, DCCIVAC),
40
addr, xlat, len, is_write);
36
+ DO_BIT(HFGITR, DCZVA),
41
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
37
} FGTBit;
38
39
#undef DO_BIT
40
diff --git a/target/arm/helper.c b/target/arm/helper.c
42
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/translate-all.c
42
--- a/target/arm/helper.c
44
+++ b/accel/tcg/translate-all.c
43
+++ b/target/arm/helper.c
45
@@ -XXX,XX +XXX,XX @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
46
hwaddr l = 1;
45
#ifndef CONFIG_USER_ONLY
47
46
/* Avoid overhead of an access check that always passes in user-mode */
48
rcu_read_lock();
47
.accessfn = aa64_zva_access,
49
- mr = address_space_translate(as, addr, &addr, &l, false);
48
+ .fgt = FGT_DCZVA,
50
+ mr = address_space_translate(as, addr, &addr, &l, false, attrs);
49
#endif
51
if (!(memory_region_is_ram(mr)
50
},
52
|| memory_region_is_romd(mr))) {
51
{ .name = "CURRENTEL", .state = ARM_CP_STATE_AA64,
53
rcu_read_unlock();
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
54
diff --git a/exec.c b/exec.c
53
{ .name = "IC_IALLUIS", .state = ARM_CP_STATE_AA64,
55
index XXXXXXX..XXXXXXX 100644
54
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
56
--- a/exec.c
55
.access = PL1_W, .type = ARM_CP_NOP,
57
+++ b/exec.c
56
+ .fgt = FGT_ICIALLUIS,
58
@@ -XXX,XX +XXX,XX @@ static inline void cpu_physical_memory_write_rom_internal(AddressSpace *as,
57
.accessfn = access_ticab },
59
rcu_read_lock();
58
{ .name = "IC_IALLU", .state = ARM_CP_STATE_AA64,
60
while (len > 0) {
59
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 5, .opc2 = 0,
61
l = len;
60
.access = PL1_W, .type = ARM_CP_NOP,
62
- mr = address_space_translate(as, addr, &addr1, &l, true);
61
+ .fgt = FGT_ICIALLU,
63
+ mr = address_space_translate(as, addr, &addr1, &l, true,
62
.accessfn = access_tocu },
64
+ MEMTXATTRS_UNSPECIFIED);
63
{ .name = "IC_IVAU", .state = ARM_CP_STATE_AA64,
65
64
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 5, .opc2 = 1,
66
if (!(memory_region_is_ram(mr) ||
65
.access = PL0_W, .type = ARM_CP_NOP,
67
memory_region_is_romd(mr))) {
66
+ .fgt = FGT_ICIVAU,
68
@@ -XXX,XX +XXX,XX @@ void address_space_cache_destroy(MemoryRegionCache *cache)
67
.accessfn = access_tocu },
69
*/
68
{ .name = "DC_IVAC", .state = ARM_CP_STATE_AA64,
70
static inline MemoryRegion *address_space_translate_cached(
69
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 1,
71
MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
70
.access = PL1_W, .accessfn = aa64_cacheop_poc_access,
72
- hwaddr *plen, bool is_write)
71
+ .fgt = FGT_DCIVAC,
73
+ hwaddr *plen, bool is_write, MemTxAttrs attrs)
72
.type = ARM_CP_NOP },
74
{
73
{ .name = "DC_ISW", .state = ARM_CP_STATE_AA64,
75
MemoryRegionSection section;
74
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 2,
76
MemoryRegion *mr;
75
+ .fgt = FGT_DCISW,
77
@@ -XXX,XX +XXX,XX @@ address_space_read_cached_slow(MemoryRegionCache *cache, hwaddr addr,
76
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
78
MemoryRegion *mr;
77
{ .name = "DC_CVAC", .state = ARM_CP_STATE_AA64,
79
78
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1,
80
l = len;
79
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
81
- mr = address_space_translate_cached(cache, addr, &addr1, &l, false);
80
.accessfn = aa64_cacheop_poc_access },
82
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, false,
81
{ .name = "DC_CSW", .state = ARM_CP_STATE_AA64,
83
+ MEMTXATTRS_UNSPECIFIED);
82
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
84
flatview_read_continue(cache->fv,
83
+ .fgt = FGT_DCCSW,
85
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
84
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
86
addr1, l, mr);
85
{ .name = "DC_CVAU", .state = ARM_CP_STATE_AA64,
87
@@ -XXX,XX +XXX,XX @@ address_space_write_cached_slow(MemoryRegionCache *cache, hwaddr addr,
86
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 11, .opc2 = 1,
88
MemoryRegion *mr;
87
.access = PL0_W, .type = ARM_CP_NOP,
89
88
+ .fgt = FGT_DCCVAU,
90
l = len;
89
.accessfn = access_tocu },
91
- mr = address_space_translate_cached(cache, addr, &addr1, &l, true);
90
{ .name = "DC_CIVAC", .state = ARM_CP_STATE_AA64,
92
+ mr = address_space_translate_cached(cache, addr, &addr1, &l, true,
91
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 1,
93
+ MEMTXATTRS_UNSPECIFIED);
92
.access = PL0_W, .type = ARM_CP_NOP,
94
flatview_write_continue(cache->fv,
93
+ .fgt = FGT_DCCIVAC,
95
addr, MEMTXATTRS_UNSPECIFIED, buf, len,
94
.accessfn = aa64_cacheop_poc_access },
96
addr1, l, mr);
95
{ .name = "DC_CISW", .state = ARM_CP_STATE_AA64,
97
@@ -XXX,XX +XXX,XX @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)
96
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 2,
98
97
+ .fgt = FGT_DCCISW,
99
rcu_read_lock();
98
.access = PL1_W, .accessfn = access_tsw, .type = ARM_CP_NOP },
100
mr = address_space_translate(&address_space_memory,
99
/* TLBI operations */
101
- phys_addr, &phys_addr, &l, false);
100
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
102
+ phys_addr, &phys_addr, &l, false,
101
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
103
+ MEMTXATTRS_UNSPECIFIED);
102
{ .name = "DC_CVAP", .state = ARM_CP_STATE_AA64,
104
103
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
105
res = !(memory_region_is_ram(mr) || memory_region_is_romd(mr));
104
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
106
rcu_read_unlock();
105
+ .fgt = FGT_DCCVAP,
107
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
106
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
108
index XXXXXXX..XXXXXXX 100644
107
};
109
--- a/hw/vfio/common.c
108
110
+++ b/hw/vfio/common.c
109
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
111
@@ -XXX,XX +XXX,XX @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
110
{ .name = "DC_CVADP", .state = ARM_CP_STATE_AA64,
112
*/
111
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
113
mr = address_space_translate(&address_space_memory,
112
.access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
114
iotlb->translated_addr,
113
+ .fgt = FGT_DCCVADP,
115
- &xlat, &len, writable);
114
.accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
116
+ &xlat, &len, writable,
115
};
117
+ MEMTXATTRS_UNSPECIFIED);
116
#endif /*CONFIG_USER_ONLY*/
118
if (!memory_region_is_ram(mr)) {
117
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
119
error_report("iommu map to non memory area %"HWADDR_PRIx"",
118
{ .name = "DC_IGVAC", .state = ARM_CP_STATE_AA64,
120
xlat);
119
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 3,
121
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
120
.type = ARM_CP_NOP, .access = PL1_W,
122
index XXXXXXX..XXXXXXX 100644
121
+ .fgt = FGT_DCIVAC,
123
--- a/memory_ldst.inc.c
122
.accessfn = aa64_cacheop_poc_access },
124
+++ b/memory_ldst.inc.c
123
{ .name = "DC_IGSW", .state = ARM_CP_STATE_AA64,
125
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
124
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 4,
126
bool release_lock = false;
125
+ .fgt = FGT_DCISW,
127
126
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
128
RCU_READ_LOCK();
127
{ .name = "DC_IGDVAC", .state = ARM_CP_STATE_AA64,
129
- mr = TRANSLATE(addr, &addr1, &l, false);
128
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 5,
130
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
129
.type = ARM_CP_NOP, .access = PL1_W,
131
if (l < 4 || !IS_DIRECT(mr, false)) {
130
+ .fgt = FGT_DCIVAC,
132
release_lock |= prepare_mmio_access(mr);
131
.accessfn = aa64_cacheop_poc_access },
133
132
{ .name = "DC_IGDSW", .state = ARM_CP_STATE_AA64,
134
@@ -XXX,XX +XXX,XX @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
133
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 6, .opc2 = 6,
135
bool release_lock = false;
134
+ .fgt = FGT_DCISW,
136
135
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
137
RCU_READ_LOCK();
136
{ .name = "DC_CGSW", .state = ARM_CP_STATE_AA64,
138
- mr = TRANSLATE(addr, &addr1, &l, false);
137
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 4,
139
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
138
+ .fgt = FGT_DCCSW,
140
if (l < 8 || !IS_DIRECT(mr, false)) {
139
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
141
release_lock |= prepare_mmio_access(mr);
140
{ .name = "DC_CGDSW", .state = ARM_CP_STATE_AA64,
142
141
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 6,
143
@@ -XXX,XX +XXX,XX @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
142
+ .fgt = FGT_DCCSW,
144
bool release_lock = false;
143
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
145
144
{ .name = "DC_CIGSW", .state = ARM_CP_STATE_AA64,
146
RCU_READ_LOCK();
145
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 4,
147
- mr = TRANSLATE(addr, &addr1, &l, false);
146
+ .fgt = FGT_DCCISW,
148
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
147
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
149
if (!IS_DIRECT(mr, false)) {
148
{ .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64,
150
release_lock |= prepare_mmio_access(mr);
149
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6,
151
150
+ .fgt = FGT_DCCISW,
152
@@ -XXX,XX +XXX,XX @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
151
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
153
bool release_lock = false;
152
};
154
153
155
RCU_READ_LOCK();
154
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
156
- mr = TRANSLATE(addr, &addr1, &l, false);
155
{ .name = "DC_CGVAP", .state = ARM_CP_STATE_AA64,
157
+ mr = TRANSLATE(addr, &addr1, &l, false, attrs);
156
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 3,
158
if (l < 2 || !IS_DIRECT(mr, false)) {
157
.type = ARM_CP_NOP, .access = PL0_W,
159
release_lock |= prepare_mmio_access(mr);
158
+ .fgt = FGT_DCCVAP,
160
159
.accessfn = aa64_cacheop_poc_access },
161
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
160
{ .name = "DC_CGDVAP", .state = ARM_CP_STATE_AA64,
162
bool release_lock = false;
161
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 5,
163
162
.type = ARM_CP_NOP, .access = PL0_W,
164
RCU_READ_LOCK();
163
+ .fgt = FGT_DCCVAP,
165
- mr = TRANSLATE(addr, &addr1, &l, true);
164
.accessfn = aa64_cacheop_poc_access },
166
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
165
{ .name = "DC_CGVADP", .state = ARM_CP_STATE_AA64,
167
if (l < 4 || !IS_DIRECT(mr, true)) {
166
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 3,
168
release_lock |= prepare_mmio_access(mr);
167
.type = ARM_CP_NOP, .access = PL0_W,
169
168
+ .fgt = FGT_DCCVADP,
170
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
169
.accessfn = aa64_cacheop_poc_access },
171
bool release_lock = false;
170
{ .name = "DC_CGDVADP", .state = ARM_CP_STATE_AA64,
172
171
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 5,
173
RCU_READ_LOCK();
172
.type = ARM_CP_NOP, .access = PL0_W,
174
- mr = TRANSLATE(addr, &addr1, &l, true);
173
+ .fgt = FGT_DCCVADP,
175
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
174
.accessfn = aa64_cacheop_poc_access },
176
if (l < 4 || !IS_DIRECT(mr, true)) {
175
{ .name = "DC_CIGVAC", .state = ARM_CP_STATE_AA64,
177
release_lock |= prepare_mmio_access(mr);
176
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 3,
178
177
.type = ARM_CP_NOP, .access = PL0_W,
179
@@ -XXX,XX +XXX,XX @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
178
+ .fgt = FGT_DCCIVAC,
180
bool release_lock = false;
179
.accessfn = aa64_cacheop_poc_access },
181
180
{ .name = "DC_CIGDVAC", .state = ARM_CP_STATE_AA64,
182
RCU_READ_LOCK();
181
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 14, .opc2 = 5,
183
- mr = TRANSLATE(addr, &addr1, &l, true);
182
.type = ARM_CP_NOP, .access = PL0_W,
184
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
183
+ .fgt = FGT_DCCIVAC,
185
if (!IS_DIRECT(mr, true)) {
184
.accessfn = aa64_cacheop_poc_access },
186
release_lock |= prepare_mmio_access(mr);
185
{ .name = "DC_GVA", .state = ARM_CP_STATE_AA64,
187
r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
186
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 4, .opc2 = 3,
188
@@ -XXX,XX +XXX,XX @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
187
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
189
bool release_lock = false;
188
#ifndef CONFIG_USER_ONLY
190
189
/* Avoid overhead of an access check that always passes in user-mode */
191
RCU_READ_LOCK();
190
.accessfn = aa64_zva_access,
192
- mr = TRANSLATE(addr, &addr1, &l, true);
191
+ .fgt = FGT_DCZVA,
193
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
192
#endif
194
if (l < 2 || !IS_DIRECT(mr, true)) {
193
},
195
release_lock |= prepare_mmio_access(mr);
194
{ .name = "DC_GZVA", .state = ARM_CP_STATE_AA64,
196
195
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
197
@@ -XXX,XX +XXX,XX @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
196
#ifndef CONFIG_USER_ONLY
198
bool release_lock = false;
197
/* Avoid overhead of an access check that always passes in user-mode */
199
198
.accessfn = aa64_zva_access,
200
RCU_READ_LOCK();
199
+ .fgt = FGT_DCZVA,
201
- mr = TRANSLATE(addr, &addr1, &l, true);
200
#endif
202
+ mr = TRANSLATE(addr, &addr1, &l, true, attrs);
201
},
203
if (l < 8 || !IS_DIRECT(mr, true)) {
202
};
204
release_lock |= prepare_mmio_access(mr);
205
206
diff --git a/target/riscv/helper.c b/target/riscv/helper.c
207
index XXXXXXX..XXXXXXX 100644
208
--- a/target/riscv/helper.c
209
+++ b/target/riscv/helper.c
210
@@ -XXX,XX +XXX,XX @@ restart:
211
MemoryRegion *mr;
212
hwaddr l = sizeof(target_ulong), addr1;
213
mr = address_space_translate(cs->as, pte_addr,
214
- &addr1, &l, false);
215
+ &addr1, &l, false, MEMTXATTRS_UNSPECIFIED);
216
if (memory_access_is_direct(mr, true)) {
217
target_ulong *pte_pa =
218
qemu_map_ram_ptr(mr->ram_block, addr1);
219
--
203
--
220
2.17.1
204
2.34.1
221
222
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Mark up the sysreg definitions for the system instructions
2
add MemTxAttrs as an argument to tb_invalidate_phys_addr().
2
trapped by HFGITR bits 12..17. These bits cover AT address
3
Its callers either have an attrs value to hand, or don't care
3
translation instructions.
4
and can use MEMTXATTRS_UNSPECIFIED.
5
4
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Tested-by: Fuad Tabba <tabba@google.com>
9
Message-id: 20180521140402.23318-3-peter.maydell@linaro.org
8
Message-id: 20230130182459.3309057-18-peter.maydell@linaro.org
9
Message-id: 20230127175507.2895013-18-peter.maydell@linaro.org
10
---
10
---
11
include/exec/exec-all.h | 5 +++--
11
target/arm/cpregs.h | 6 ++++++
12
accel/tcg/translate-all.c | 2 +-
12
target/arm/helper.c | 6 ++++++
13
exec.c | 2 +-
13
2 files changed, 12 insertions(+)
14
target/xtensa/op_helper.c | 3 ++-
15
4 files changed, 7 insertions(+), 5 deletions(-)
16
14
17
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
15
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/exec-all.h
17
--- a/target/arm/cpregs.h
20
+++ b/include/exec/exec-all.h
18
+++ b/target/arm/cpregs.h
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
19
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
22
void tlb_set_page(CPUState *cpu, target_ulong vaddr,
20
DO_BIT(HFGITR, DCCVADP),
23
hwaddr paddr, int prot,
21
DO_BIT(HFGITR, DCCIVAC),
24
int mmu_idx, target_ulong size);
22
DO_BIT(HFGITR, DCZVA),
25
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr);
23
+ DO_BIT(HFGITR, ATS1E1R),
26
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
24
+ DO_BIT(HFGITR, ATS1E1W),
27
void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
25
+ DO_BIT(HFGITR, ATS1E0R),
28
uintptr_t retaddr);
26
+ DO_BIT(HFGITR, ATS1E0W),
29
#else
27
+ DO_BIT(HFGITR, ATS1E1RP),
30
@@ -XXX,XX +XXX,XX @@ static inline void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *cpu,
28
+ DO_BIT(HFGITR, ATS1E1WP),
31
uint16_t idxmap)
29
} FGTBit;
32
{
30
33
}
31
#undef DO_BIT
34
-static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
32
diff --git a/target/arm/helper.c b/target/arm/helper.c
35
+static inline void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr,
36
+ MemTxAttrs attrs)
37
{
38
}
39
#endif
40
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
41
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
42
--- a/accel/tcg/translate-all.c
34
--- a/target/arm/helper.c
43
+++ b/accel/tcg/translate-all.c
35
+++ b/target/arm/helper.c
44
@@ -XXX,XX +XXX,XX @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
36
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
45
}
37
{ .name = "AT_S1E1R", .state = ARM_CP_STATE_AA64,
46
38
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 0,
47
#if !defined(CONFIG_USER_ONLY)
39
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
48
-void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
40
+ .fgt = FGT_ATS1E1R,
49
+void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
41
.writefn = ats_write64 },
50
{
42
{ .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
51
ram_addr_t ram_addr;
43
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 1,
52
MemoryRegion *mr;
44
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
53
diff --git a/exec.c b/exec.c
45
+ .fgt = FGT_ATS1E1W,
54
index XXXXXXX..XXXXXXX 100644
46
.writefn = ats_write64 },
55
--- a/exec.c
47
{ .name = "AT_S1E0R", .state = ARM_CP_STATE_AA64,
56
+++ b/exec.c
48
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 2,
57
@@ -XXX,XX +XXX,XX @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
49
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
58
if (phys != -1) {
50
+ .fgt = FGT_ATS1E0R,
59
/* Locks grabbed by tb_invalidate_phys_addr */
51
.writefn = ats_write64 },
60
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
52
{ .name = "AT_S1E0W", .state = ARM_CP_STATE_AA64,
61
- phys | (pc & ~TARGET_PAGE_MASK));
53
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 3,
62
+ phys | (pc & ~TARGET_PAGE_MASK), attrs);
54
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
63
}
55
+ .fgt = FGT_ATS1E0W,
64
}
56
.writefn = ats_write64 },
65
#endif
57
{ .name = "AT_S12E1R", .state = ARM_CP_STATE_AA64,
66
diff --git a/target/xtensa/op_helper.c b/target/xtensa/op_helper.c
58
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 4,
67
index XXXXXXX..XXXXXXX 100644
59
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
68
--- a/target/xtensa/op_helper.c
60
{ .name = "AT_S1E1RP", .state = ARM_CP_STATE_AA64,
69
+++ b/target/xtensa/op_helper.c
61
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
70
@@ -XXX,XX +XXX,XX @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
62
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
71
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
63
+ .fgt = FGT_ATS1E1RP,
72
&paddr, &page_size, &access);
64
.writefn = ats_write64 },
73
if (ret == 0) {
65
{ .name = "AT_S1E1WP", .state = ARM_CP_STATE_AA64,
74
- tb_invalidate_phys_addr(&address_space_memory, paddr);
66
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
75
+ tb_invalidate_phys_addr(&address_space_memory, paddr,
67
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
76
+ MEMTXATTRS_UNSPECIFIED);
68
+ .fgt = FGT_ATS1E1WP,
77
}
69
.writefn = ats_write64 },
78
}
70
};
79
71
80
--
72
--
81
2.17.1
73
2.34.1
82
83
1
Add entries to MAINTAINERS to cover the newer MPS2 boards and
1
Mark up the sysreg definitions for the system instructions
2
the new devices they use.
2
trapped by HFGITR bits 18..47. These bits cover TLBI
3
TLB maintenance instructions.
4
5
(If we implemented FEAT_XS we would need to trap some of the
6
instructions added by that feature using these bits; but we don't
7
yet, so will need to add the .fgt markup when we do.)
3
8
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180518153157.14899-1-peter.maydell@linaro.org
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Tested-by: Fuad Tabba <tabba@google.com>
12
Message-id: 20230130182459.3309057-19-peter.maydell@linaro.org
13
Message-id: 20230127175507.2895013-19-peter.maydell@linaro.org
6
---
14
---
7
MAINTAINERS | 9 +++++++--
15
target/arm/cpregs.h | 30 ++++++++++++++++++++++++++++++
8
1 file changed, 7 insertions(+), 2 deletions(-)
16
target/arm/helper.c | 30 ++++++++++++++++++++++++++++++
17
2 files changed, 60 insertions(+)
9
18
10
diff --git a/MAINTAINERS b/MAINTAINERS
19
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
11
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
12
--- a/MAINTAINERS
21
--- a/target/arm/cpregs.h
13
+++ b/MAINTAINERS
22
+++ b/target/arm/cpregs.h
14
@@ -XXX,XX +XXX,XX @@ F: hw/timer/cmsdk-apb-timer.c
23
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
15
F: include/hw/timer/cmsdk-apb-timer.h
24
DO_BIT(HFGITR, ATS1E0W),
16
F: hw/char/cmsdk-apb-uart.c
25
DO_BIT(HFGITR, ATS1E1RP),
17
F: include/hw/char/cmsdk-apb-uart.h
26
DO_BIT(HFGITR, ATS1E1WP),
18
+F: hw/misc/tz-ppc.c
27
+ DO_BIT(HFGITR, TLBIVMALLE1OS),
19
+F: include/hw/misc/tz-ppc.h
28
+ DO_BIT(HFGITR, TLBIVAE1OS),
20
29
+ DO_BIT(HFGITR, TLBIASIDE1OS),
21
ARM cores
30
+ DO_BIT(HFGITR, TLBIVAAE1OS),
22
M: Peter Maydell <peter.maydell@linaro.org>
31
+ DO_BIT(HFGITR, TLBIVALE1OS),
23
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
32
+ DO_BIT(HFGITR, TLBIVAALE1OS),
24
L: qemu-arm@nongnu.org
33
+ DO_BIT(HFGITR, TLBIRVAE1OS),
25
S: Maintained
34
+ DO_BIT(HFGITR, TLBIRVAAE1OS),
26
F: hw/arm/mps2.c
35
+ DO_BIT(HFGITR, TLBIRVALE1OS),
27
-F: hw/misc/mps2-scc.c
36
+ DO_BIT(HFGITR, TLBIRVAALE1OS),
28
-F: include/hw/misc/mps2-scc.h
37
+ DO_BIT(HFGITR, TLBIVMALLE1IS),
29
+F: hw/arm/mps2-tz.c
38
+ DO_BIT(HFGITR, TLBIVAE1IS),
30
+F: hw/misc/mps2-*.c
39
+ DO_BIT(HFGITR, TLBIASIDE1IS),
31
+F: include/hw/misc/mps2-*.h
40
+ DO_BIT(HFGITR, TLBIVAAE1IS),
32
+F: hw/arm/iotkit.c
41
+ DO_BIT(HFGITR, TLBIVALE1IS),
33
+F: include/hw/arm/iotkit.h
42
+ DO_BIT(HFGITR, TLBIVAALE1IS),
34
43
+ DO_BIT(HFGITR, TLBIRVAE1IS),
35
Musicpal
44
+ DO_BIT(HFGITR, TLBIRVAAE1IS),
36
M: Jan Kiszka <jan.kiszka@web.de>
45
+ DO_BIT(HFGITR, TLBIRVALE1IS),
46
+ DO_BIT(HFGITR, TLBIRVAALE1IS),
47
+ DO_BIT(HFGITR, TLBIRVAE1),
48
+ DO_BIT(HFGITR, TLBIRVAAE1),
49
+ DO_BIT(HFGITR, TLBIRVALE1),
50
+ DO_BIT(HFGITR, TLBIRVAALE1),
51
+ DO_BIT(HFGITR, TLBIVMALLE1),
52
+ DO_BIT(HFGITR, TLBIVAE1),
53
+ DO_BIT(HFGITR, TLBIASIDE1),
54
+ DO_BIT(HFGITR, TLBIVAAE1),
55
+ DO_BIT(HFGITR, TLBIVALE1),
56
+ DO_BIT(HFGITR, TLBIVAALE1),
57
} FGTBit;
58
59
#undef DO_BIT
60
diff --git a/target/arm/helper.c b/target/arm/helper.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/helper.c
63
+++ b/target/arm/helper.c
64
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
65
{ .name = "TLBI_VMALLE1IS", .state = ARM_CP_STATE_AA64,
66
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 0,
67
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
68
+ .fgt = FGT_TLBIVMALLE1IS,
69
.writefn = tlbi_aa64_vmalle1is_write },
70
{ .name = "TLBI_VAE1IS", .state = ARM_CP_STATE_AA64,
71
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 1,
72
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
73
+ .fgt = FGT_TLBIVAE1IS,
74
.writefn = tlbi_aa64_vae1is_write },
75
{ .name = "TLBI_ASIDE1IS", .state = ARM_CP_STATE_AA64,
76
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 2,
77
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
78
+ .fgt = FGT_TLBIASIDE1IS,
79
.writefn = tlbi_aa64_vmalle1is_write },
80
{ .name = "TLBI_VAAE1IS", .state = ARM_CP_STATE_AA64,
81
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
82
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
83
+ .fgt = FGT_TLBIVAAE1IS,
84
.writefn = tlbi_aa64_vae1is_write },
85
{ .name = "TLBI_VALE1IS", .state = ARM_CP_STATE_AA64,
86
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 5,
87
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
88
+ .fgt = FGT_TLBIVALE1IS,
89
.writefn = tlbi_aa64_vae1is_write },
90
{ .name = "TLBI_VAALE1IS", .state = ARM_CP_STATE_AA64,
91
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 7,
92
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
93
+ .fgt = FGT_TLBIVAALE1IS,
94
.writefn = tlbi_aa64_vae1is_write },
95
{ .name = "TLBI_VMALLE1", .state = ARM_CP_STATE_AA64,
96
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 0,
97
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
98
+ .fgt = FGT_TLBIVMALLE1,
99
.writefn = tlbi_aa64_vmalle1_write },
100
{ .name = "TLBI_VAE1", .state = ARM_CP_STATE_AA64,
101
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 1,
102
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
103
+ .fgt = FGT_TLBIVAE1,
104
.writefn = tlbi_aa64_vae1_write },
105
{ .name = "TLBI_ASIDE1", .state = ARM_CP_STATE_AA64,
106
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 2,
107
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
108
+ .fgt = FGT_TLBIASIDE1,
109
.writefn = tlbi_aa64_vmalle1_write },
110
{ .name = "TLBI_VAAE1", .state = ARM_CP_STATE_AA64,
111
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
112
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
113
+ .fgt = FGT_TLBIVAAE1,
114
.writefn = tlbi_aa64_vae1_write },
115
{ .name = "TLBI_VALE1", .state = ARM_CP_STATE_AA64,
116
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 5,
117
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
118
+ .fgt = FGT_TLBIVALE1,
119
.writefn = tlbi_aa64_vae1_write },
120
{ .name = "TLBI_VAALE1", .state = ARM_CP_STATE_AA64,
121
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 7,
122
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
123
+ .fgt = FGT_TLBIVAALE1,
124
.writefn = tlbi_aa64_vae1_write },
125
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
126
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
127
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
128
{ .name = "TLBI_RVAE1IS", .state = ARM_CP_STATE_AA64,
129
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 1,
130
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
131
+ .fgt = FGT_TLBIRVAE1IS,
132
.writefn = tlbi_aa64_rvae1is_write },
133
{ .name = "TLBI_RVAAE1IS", .state = ARM_CP_STATE_AA64,
134
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 3,
135
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
136
+ .fgt = FGT_TLBIRVAAE1IS,
137
.writefn = tlbi_aa64_rvae1is_write },
138
{ .name = "TLBI_RVALE1IS", .state = ARM_CP_STATE_AA64,
139
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 5,
140
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
141
+ .fgt = FGT_TLBIRVALE1IS,
142
.writefn = tlbi_aa64_rvae1is_write },
143
{ .name = "TLBI_RVAALE1IS", .state = ARM_CP_STATE_AA64,
144
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 2, .opc2 = 7,
145
.access = PL1_W, .accessfn = access_ttlbis, .type = ARM_CP_NO_RAW,
146
+ .fgt = FGT_TLBIRVAALE1IS,
147
.writefn = tlbi_aa64_rvae1is_write },
148
{ .name = "TLBI_RVAE1OS", .state = ARM_CP_STATE_AA64,
149
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 1,
150
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
151
+ .fgt = FGT_TLBIRVAE1OS,
152
.writefn = tlbi_aa64_rvae1is_write },
153
{ .name = "TLBI_RVAAE1OS", .state = ARM_CP_STATE_AA64,
154
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 3,
155
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
156
+ .fgt = FGT_TLBIRVAAE1OS,
157
.writefn = tlbi_aa64_rvae1is_write },
158
{ .name = "TLBI_RVALE1OS", .state = ARM_CP_STATE_AA64,
159
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 5,
160
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
161
+ .fgt = FGT_TLBIRVALE1OS,
162
.writefn = tlbi_aa64_rvae1is_write },
163
{ .name = "TLBI_RVAALE1OS", .state = ARM_CP_STATE_AA64,
164
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 5, .opc2 = 7,
165
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
166
+ .fgt = FGT_TLBIRVAALE1OS,
167
.writefn = tlbi_aa64_rvae1is_write },
168
{ .name = "TLBI_RVAE1", .state = ARM_CP_STATE_AA64,
169
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 1,
170
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
171
+ .fgt = FGT_TLBIRVAE1,
172
.writefn = tlbi_aa64_rvae1_write },
173
{ .name = "TLBI_RVAAE1", .state = ARM_CP_STATE_AA64,
174
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 3,
175
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
176
+ .fgt = FGT_TLBIRVAAE1,
177
.writefn = tlbi_aa64_rvae1_write },
178
{ .name = "TLBI_RVALE1", .state = ARM_CP_STATE_AA64,
179
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 5,
180
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
181
+ .fgt = FGT_TLBIRVALE1,
182
.writefn = tlbi_aa64_rvae1_write },
183
{ .name = "TLBI_RVAALE1", .state = ARM_CP_STATE_AA64,
184
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 6, .opc2 = 7,
185
.access = PL1_W, .accessfn = access_ttlb, .type = ARM_CP_NO_RAW,
186
+ .fgt = FGT_TLBIRVAALE1,
187
.writefn = tlbi_aa64_rvae1_write },
188
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
189
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
190
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
191
{ .name = "TLBI_VMALLE1OS", .state = ARM_CP_STATE_AA64,
192
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 0,
193
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
194
+ .fgt = FGT_TLBIVMALLE1OS,
195
.writefn = tlbi_aa64_vmalle1is_write },
196
{ .name = "TLBI_VAE1OS", .state = ARM_CP_STATE_AA64,
197
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 1,
198
+ .fgt = FGT_TLBIVAE1OS,
199
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
200
.writefn = tlbi_aa64_vae1is_write },
201
{ .name = "TLBI_ASIDE1OS", .state = ARM_CP_STATE_AA64,
202
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 2,
203
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
204
+ .fgt = FGT_TLBIASIDE1OS,
205
.writefn = tlbi_aa64_vmalle1is_write },
206
{ .name = "TLBI_VAAE1OS", .state = ARM_CP_STATE_AA64,
207
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 3,
208
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
209
+ .fgt = FGT_TLBIVAAE1OS,
210
.writefn = tlbi_aa64_vae1is_write },
211
{ .name = "TLBI_VALE1OS", .state = ARM_CP_STATE_AA64,
212
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 5,
213
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
214
+ .fgt = FGT_TLBIVALE1OS,
215
.writefn = tlbi_aa64_vae1is_write },
216
{ .name = "TLBI_VAALE1OS", .state = ARM_CP_STATE_AA64,
217
.opc0 = 1, .opc1 = 0, .crn = 8, .crm = 1, .opc2 = 7,
218
.access = PL1_W, .accessfn = access_ttlbos, .type = ARM_CP_NO_RAW,
219
+ .fgt = FGT_TLBIVAALE1OS,
220
.writefn = tlbi_aa64_vae1is_write },
221
{ .name = "TLBI_ALLE2OS", .state = ARM_CP_STATE_AA64,
222
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 1, .opc2 = 0,
37
--
223
--
38
2.17.1
224
2.34.1
39
40
1
In commit f0aff255700 we made cpacr_write() enforce that some CPACR
1
Mark up the sysreg definitions for the system instructions
2
bits are RAZ/WI and some are RAO/WI for ARMv7 cores. Unfortunately
2
trapped by HFGITR bits 48..63.
3
we forgot to also update the register's reset value. The effect
4
was that (a) a guest that read CPACR on reset would not see ones in
5
the RAO bits, and (b) if you did a migration before the guest did
6
a write to the CPACR then the migration would fail because the
7
destination would enforce the RAO bits and then complain that they
8
didn't match the zero value from the source.
9
3
10
Implement reset for the CPACR using a custom reset function
4
Some of these bits are for trapping instructions which are
11
that just calls cpacr_write(), to avoid having to duplicate
5
not in the system instruction encoding (i.e. which are
12
the logic for which bits are RAO.
6
not handled by the ARMCPRegInfo mechanism):
7
* ERET, ERETAA, ERETAB
8
* SVC
13
9
14
This bug would affect migration for TCG CPUs which are ARMv7
10
We will have to handle those separately and manually.
15
with VFP but without one of Neon or VFPv3.
16
11
17
Reported-by: Cédric Le Goater <clg@kaod.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Tested-by: Cédric Le Goater <clg@kaod.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20180522173713.26282-1-peter.maydell@linaro.org
14
Tested-by: Fuad Tabba <tabba@google.com>
15
Message-id: 20230130182459.3309057-20-peter.maydell@linaro.org
16
Message-id: 20230127175507.2895013-20-peter.maydell@linaro.org
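
To see why resetting through the write function matters, here is a
standalone sketch with a made-up RAO/WI mask (not the real CPACR layout):
a raw zero reset fails the kind of "re-apply the write function and
compare" check that migration performs, while a reset that goes through
the write function passes:

/*
 * Standalone sketch of why a register with RAO/WI ("read as one, writes
 * ignored") bits should be reset through its write function rather than
 * to a raw zero.  The bit layout is made up for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define RAO_MASK 0x00f00000u   /* illustrative RAO/WI bits */

/* Model of the write function: RAO bits always read back as ones. */
static uint32_t reg_write(uint32_t value)
{
    return value | RAO_MASK;
}

int main(void)
{
    uint32_t raw_reset = 0;                 /* old behaviour: plain zero   */
    uint32_t fn_reset  = reg_write(0);      /* new behaviour: via write fn */

    /* The migration destination enforces the RAO bits and then insists
     * the result still matches what the source sent. */
    uint32_t incoming = raw_reset;
    printf("raw reset survives migration check: %s\n",
           reg_write(incoming) == incoming ? "yes" : "no");

    incoming = fn_reset;
    printf("write-fn reset survives migration check: %s\n",
           reg_write(incoming) == incoming ? "yes" : "no");
    return 0;
}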
21
---
17
---
22
target/arm/helper.c | 10 +++++++++-
18
target/arm/cpregs.h | 4 ++++
23
1 file changed, 9 insertions(+), 1 deletion(-)
19
target/arm/helper.c | 9 +++++++++
20
2 files changed, 13 insertions(+)
24
21
22
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpregs.h
25
+++ b/target/arm/cpregs.h
26
@@ -XXX,XX +XXX,XX @@ typedef enum FGTBit {
27
DO_BIT(HFGITR, TLBIVAAE1),
28
DO_BIT(HFGITR, TLBIVALE1),
29
DO_BIT(HFGITR, TLBIVAALE1),
30
+ DO_BIT(HFGITR, CFPRCTX),
31
+ DO_BIT(HFGITR, DVPRCTX),
32
+ DO_BIT(HFGITR, CPPRCTX),
33
+ DO_BIT(HFGITR, DCCVAC),
34
} FGTBit;
35
36
#undef DO_BIT
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
37
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
39
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
40
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
41
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
30
env->cp15.cpacr_el1 = value;
42
{ .name = "DC_CVAC", .state = ARM_CP_STATE_AA64,
31
}
43
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 1,
32
44
.access = PL0_W, .type = ARM_CP_NOP,
33
+static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
45
+ .fgt = FGT_DCCVAC,
34
+{
46
.accessfn = aa64_cacheop_poc_access },
35
+ /* Call cpacr_write() so that we reset with the correct RAO bits set
47
{ .name = "DC_CSW", .state = ARM_CP_STATE_AA64,
36
+ * for our CPU features.
48
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 10, .opc2 = 2,
37
+ */
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
38
+ cpacr_write(env, ri, 0);
50
{ .name = "DC_CGVAC", .state = ARM_CP_STATE_AA64,
39
+}
51
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 3,
40
+
52
.type = ARM_CP_NOP, .access = PL0_W,
41
static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
53
+ .fgt = FGT_DCCVAC,
42
bool isread)
54
.accessfn = aa64_cacheop_poc_access },
43
{
55
{ .name = "DC_CGDVAC", .state = ARM_CP_STATE_AA64,
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
56
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 10, .opc2 = 5,
45
{ .name = "CPACR", .state = ARM_CP_STATE_BOTH, .opc0 = 3,
57
.type = ARM_CP_NOP, .access = PL0_W,
46
.crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
58
+ .fgt = FGT_DCCVAC,
47
.access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
59
.accessfn = aa64_cacheop_poc_access },
48
- .resetvalue = 0, .writefn = cpacr_write },
60
{ .name = "DC_CGVAP", .state = ARM_CP_STATE_AA64,
49
+ .resetfn = cpacr_reset, .writefn = cpacr_write },
61
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 3,
50
REGINFO_SENTINEL
62
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
63
static const ARMCPRegInfo predinv_reginfo[] = {
64
{ .name = "CFP_RCTX", .state = ARM_CP_STATE_AA64,
65
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 4,
66
+ .fgt = FGT_CFPRCTX,
67
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
68
{ .name = "DVP_RCTX", .state = ARM_CP_STATE_AA64,
69
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 5,
70
+ .fgt = FGT_DVPRCTX,
71
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
72
{ .name = "CPP_RCTX", .state = ARM_CP_STATE_AA64,
73
.opc0 = 1, .opc1 = 3, .crn = 7, .crm = 3, .opc2 = 7,
74
+ .fgt = FGT_CPPRCTX,
75
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
76
/*
77
* Note the AArch32 opcodes have a different OPC1.
78
*/
79
{ .name = "CFPRCTX", .state = ARM_CP_STATE_AA32,
80
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 4,
81
+ .fgt = FGT_CFPRCTX,
82
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
83
{ .name = "DVPRCTX", .state = ARM_CP_STATE_AA32,
84
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 5,
85
+ .fgt = FGT_DVPRCTX,
86
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
87
{ .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
88
.cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
89
+ .fgt = FGT_CPPRCTX,
90
.type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
51
};
91
};
52
92
53
--
93
--
54
2.17.1
94
2.34.1
55
56
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
Implement the HFGITR_EL2.ERET fine-grained trap. This traps
2
add MemTxAttrs as an argument to address_space_map().
2
execution from AArch64 EL1 of ERET, ERETAA and ERETAB. The trap is
3
Its callers either have an attrs value to hand, or don't care
3
reported with a syndrome value of 0x1a.
4
and can use MEMTXATTRS_UNSPECIFIED.
4
5
The trap must take precedence over a possible pointer-authentication
6
trap for ERETAA and ERETAB.
5
7
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180521140402.23318-5-peter.maydell@linaro.org
10
Tested-by: Fuad Tabba <tabba@google.com>
11
Message-id: 20230130182459.3309057-21-peter.maydell@linaro.org
12
Message-id: 20230127175507.2895013-21-peter.maydell@linaro.org
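
For reference, a standalone sketch of the syndrome construction, assuming
the usual ESR_ELx layout with EC in bits [31:26] and the IL bit at bit 25;
the low two bits distinguish ERET (0), ERETAA (2) and ERETAB (3):

/*
 * Standalone model of the ERET-trap syndrome value: EC = 0x1a, IL set,
 * plus the two "which flavour of ERET" bits.
 */
#include <stdint.h>
#include <stdio.h>

#define EC_ERETTRAP   0x1a
#define EC_SHIFT      26
#define IL_BIT        (1u << 25)

static uint32_t syn_erettrap(int eret_op)
{
    return ((uint32_t)EC_ERETTRAP << EC_SHIFT) | IL_BIT | (uint32_t)eret_op;
}

int main(void)
{
    printf("ERET   syndrome: 0x%08x\n", syn_erettrap(0));
    printf("ERETAA syndrome: 0x%08x\n", syn_erettrap(2));
    printf("ERETAB syndrome: 0x%08x\n", syn_erettrap(3));
    return 0;
}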
10
---
13
---
11
include/exec/memory.h | 3 ++-
14
target/arm/cpu.h | 1 +
12
include/sysemu/dma.h | 3 ++-
15
target/arm/syndrome.h | 10 ++++++++++
13
exec.c | 6 ++++--
16
target/arm/translate.h | 2 ++
14
target/ppc/mmu-hash64.c | 3 ++-
17
target/arm/helper.c | 3 +++
15
4 files changed, 10 insertions(+), 5 deletions(-)
18
target/arm/translate-a64.c | 10 ++++++++++
19
5 files changed, 26 insertions(+)
16
20
17
diff --git a/include/exec/memory.h b/include/exec/memory.h
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/memory.h
23
--- a/target/arm/cpu.h
20
+++ b/include/exec/memory.h
24
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
25
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
22
* @addr: address within that address space
26
FIELD(TBFLAG_A64, SVL, 24, 4)
23
* @plen: pointer to length of buffer; updated on return
27
/* Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not. */
24
* @is_write: indicates the transfer direction
28
FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1)
25
+ * @attrs: memory attributes
29
+FIELD(TBFLAG_A64, FGT_ERET, 29, 1)
26
*/
30
27
void *address_space_map(AddressSpace *as, hwaddr addr,
31
/*
28
- hwaddr *plen, bool is_write);
32
* Helpers for using the above.
29
+ hwaddr *plen, bool is_write, MemTxAttrs attrs);
33
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
30
31
/* address_space_unmap: Unmaps a memory region previously mapped by address_space_map()
32
*
33
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
34
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
35
--- a/include/sysemu/dma.h
35
--- a/target/arm/syndrome.h
36
+++ b/include/sysemu/dma.h
36
+++ b/target/arm/syndrome.h
37
@@ -XXX,XX +XXX,XX @@ static inline void *dma_memory_map(AddressSpace *as,
37
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
38
hwaddr xlen = *len;
38
EC_AA64_SMC = 0x17,
39
void *p;
39
EC_SYSTEMREGISTERTRAP = 0x18,
40
40
EC_SVEACCESSTRAP = 0x19,
41
- p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE);
41
+ EC_ERETTRAP = 0x1a,
42
+ p = address_space_map(as, addr, &xlen, dir == DMA_DIRECTION_FROM_DEVICE,
42
EC_SMETRAP = 0x1d,
43
+ MEMTXATTRS_UNSPECIFIED);
43
EC_INSNABORT = 0x20,
44
*len = xlen;
44
EC_INSNABORT_SAME_EL = 0x21,
45
return p;
45
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_sve_access_trap(void)
46
return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
46
}
47
}
47
diff --git a/exec.c b/exec.c
48
49
+/*
50
+ * eret_op is bits [1:0] of the ERET instruction, so:
51
+ * 0 for ERET, 2 for ERETAA, 3 for ERETAB.
52
+ */
53
+static inline uint32_t syn_erettrap(int eret_op)
54
+{
55
+ return (EC_ERETTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL | eret_op;
56
+}
57
+
58
static inline uint32_t syn_smetrap(SMEExceptionType etype, bool is_16bit)
59
{
60
return (EC_SMETRAP << ARM_EL_EC_SHIFT)
61
diff --git a/target/arm/translate.h b/target/arm/translate.h
48
index XXXXXXX..XXXXXXX 100644
62
index XXXXXXX..XXXXXXX 100644
49
--- a/exec.c
63
--- a/target/arm/translate.h
50
+++ b/exec.c
64
+++ b/target/arm/translate.h
51
@@ -XXX,XX +XXX,XX @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
65
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
52
void *address_space_map(AddressSpace *as,
66
bool mve_no_pred;
53
hwaddr addr,
67
/* True if fine-grained traps are active */
54
hwaddr *plen,
68
bool fgt_active;
55
- bool is_write)
69
+ /* True if fine-grained trap on ERET is enabled */
56
+ bool is_write,
70
+ bool fgt_eret;
57
+ MemTxAttrs attrs)
71
/*
58
{
72
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
59
hwaddr len = *plen;
73
* < 0, set by the current instruction.
60
hwaddr l, xlat;
74
diff --git a/target/arm/helper.c b/target/arm/helper.c
61
@@ -XXX,XX +XXX,XX @@ void *cpu_physical_memory_map(hwaddr addr,
62
hwaddr *plen,
63
int is_write)
64
{
65
- return address_space_map(&address_space_memory, addr, plen, is_write);
66
+ return address_space_map(&address_space_memory, addr, plen, is_write,
67
+ MEMTXATTRS_UNSPECIFIED);
68
}
69
70
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
71
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
72
index XXXXXXX..XXXXXXX 100644
75
index XXXXXXX..XXXXXXX 100644
73
--- a/target/ppc/mmu-hash64.c
76
--- a/target/arm/helper.c
74
+++ b/target/ppc/mmu-hash64.c
77
+++ b/target/arm/helper.c
75
@@ -XXX,XX +XXX,XX @@ const ppc_hash_pte64_t *ppc_hash64_map_hptes(PowerPCCPU *cpu,
78
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
76
return NULL;
79
80
if (arm_fgt_active(env, el)) {
81
DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
82
+ if (FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, ERET)) {
83
+ DP_TBFLAG_A64(flags, FGT_ERET, 1);
84
+ }
77
}
85
}
78
86
79
- hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false);
87
if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
80
+ hptes = address_space_map(CPU(cpu)->as, base + pte_offset, &plen, false,
88
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
81
+ MEMTXATTRS_UNSPECIFIED);
89
index XXXXXXX..XXXXXXX 100644
82
if (plen < (n * HASH_PTE_SIZE_64)) {
90
--- a/target/arm/translate-a64.c
83
hw_error("%s: Unable to map all requested HPTEs\n", __func__);
91
+++ b/target/arm/translate-a64.c
84
}
92
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
93
if (op4 != 0) {
94
goto do_unallocated;
95
}
96
+ if (s->fgt_eret) {
97
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
98
+ return;
99
+ }
100
dst = tcg_temp_new_i64();
101
tcg_gen_ld_i64(dst, cpu_env,
102
offsetof(CPUARMState, elr_el[s->current_el]));
103
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
104
if (rn != 0x1f || op4 != 0x1f) {
105
goto do_unallocated;
106
}
107
+ /* The FGT trap takes precedence over an auth trap. */
108
+ if (s->fgt_eret) {
109
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
110
+ return;
111
+ }
112
dst = tcg_temp_new_i64();
113
tcg_gen_ld_i64(dst, cpu_env,
114
offsetof(CPUARMState, elr_el[s->current_el]));
115
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
116
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
117
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
118
dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
119
+ dc->fgt_eret = EX_TBFLAG_A64(tb_flags, FGT_ERET);
120
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
121
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
122
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
85
--
123
--
86
2.17.1
124
2.34.1
87
88
1
The FRECPX instructions should (like most other floating point operations)
1
Implement the HFGITR_EL2.SVC_EL0 and SVC_EL1 fine-grained traps.
2
honour the FPCR.FZ bit which specifies whether input denormals should
2
These trap execution of the SVC instruction from AArch32 and AArch64.
3
be flushed to zero (or FZ16 for the half-precision version).
3
(As usual, AArch32 can only trap from EL0, as fine grained traps are
4
We forgot to implement this, which doesn't affect the results (since
4
disabled with an AArch32 EL1.)
5
the calculation doesn't actually care about the mantissa bits) but did
6
mean we were failing to set the FPSR.IDC bit.
7
5
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180521172712.19930-1-peter.maydell@linaro.org
8
Tested-by: Fuad Tabba <tabba@google.com>
9
Message-id: 20230130182459.3309057-22-peter.maydell@linaro.org
10
Message-id: 20230127175507.2895013-22-peter.maydell@linaro.org
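
A simplified standalone model of the input-denormal squashing for the
half-precision case (not the softfloat code itself): when flushing is
enabled, a subnormal input is replaced by a same-signed zero and the
cumulative input-denormal flag is recorded:

/*
 * Standalone sketch: squash a half-precision input denormal to zero and
 * record the input-denormal (IDC) cumulative status flag.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool idc_flag;   /* models FPSR.IDC */

static uint16_t squash_input_denormal_f16(uint16_t a, bool fz16)
{
    uint16_t exp  = (a >> 10) & 0x1f;
    uint16_t frac = a & 0x3ff;

    if (fz16 && exp == 0 && frac != 0) {
        idc_flag = true;
        return a & 0x8000;      /* keep only the sign bit */
    }
    return a;
}

int main(void)
{
    uint16_t denormal = 0x8001;  /* smallest negative subnormal */

    uint16_t r = squash_input_denormal_f16(denormal, true);
    printf("result 0x%04x, IDC=%d\n", r, idc_flag);   /* 0x8000, IDC=1 */
    return 0;
}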
11
---
11
---
12
target/arm/helper-a64.c | 6 ++++++
12
target/arm/cpu.h | 1 +
13
1 file changed, 6 insertions(+)
13
target/arm/translate.h | 2 ++
14
target/arm/helper.c | 20 ++++++++++++++++++++
15
target/arm/translate-a64.c | 9 ++++++++-
16
target/arm/translate.c | 12 +++++++++---
17
5 files changed, 40 insertions(+), 4 deletions(-)
14
18
15
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper-a64.c
21
--- a/target/arm/cpu.h
18
+++ b/target/arm/helper-a64.c
22
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ float16 HELPER(frecpx_f16)(float16 a, void *fpstp)
23
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
20
return nan;
24
FIELD(TBFLAG_ANY, ALIGN_MEM, 10, 1)
25
FIELD(TBFLAG_ANY, PSTATE__IL, 11, 1)
26
FIELD(TBFLAG_ANY, FGT_ACTIVE, 12, 1)
27
+FIELD(TBFLAG_ANY, FGT_SVC, 13, 1)
28
29
/*
30
* Bit usage when in AArch32 state, both A- and M-profile.
31
diff --git a/target/arm/translate.h b/target/arm/translate.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate.h
34
+++ b/target/arm/translate.h
35
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
36
bool fgt_active;
37
/* True if fine-grained trap on ERET is enabled */
38
bool fgt_eret;
39
+ /* True if fine-grained trap on SVC is enabled */
40
+ bool fgt_svc;
41
/*
42
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
43
* < 0, set by the current instruction.
44
diff --git a/target/arm/helper.c b/target/arm/helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/helper.c
47
+++ b/target/arm/helper.c
48
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env)
49
return arm_mmu_idx_el(env, arm_current_el(env));
50
}
51
52
+static inline bool fgt_svc(CPUARMState *env, int el)
53
+{
54
+ /*
55
+ * Assuming fine-grained-traps are active, return true if we
56
+ * should be trapping on SVC instructions. Only AArch64 can
57
+ * trap on an SVC at EL1, but we don't need to special-case this
58
+ * because if this is AArch32 EL1 then arm_fgt_active() is false.
59
+ * We also know el is 0 or 1.
60
+ */
61
+ return el == 0 ?
62
+ FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, SVC_EL0) :
63
+ FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, SVC_EL1);
64
+}
65
+
66
static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
67
ARMMMUIdx mmu_idx,
68
CPUARMTBFlags flags)
69
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
70
71
if (arm_fgt_active(env, el)) {
72
DP_TBFLAG_ANY(flags, FGT_ACTIVE, 1);
73
+ if (fgt_svc(env, el)) {
74
+ DP_TBFLAG_ANY(flags, FGT_SVC, 1);
75
+ }
21
}
76
}
22
77
23
+ a = float16_squash_input_denormal(a, fpst);
78
if (env->uncached_cpsr & CPSR_IL) {
24
+
79
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
25
val16 = float16_val(a);
80
if (FIELD_EX64(env->cp15.fgt_exec[FGTREG_HFGITR], HFGITR_EL2, ERET)) {
26
sbit = 0x8000 & val16;
81
DP_TBFLAG_A64(flags, FGT_ERET, 1);
27
exp = extract32(val16, 10, 5);
82
}
28
@@ -XXX,XX +XXX,XX @@ float32 HELPER(frecpx_f32)(float32 a, void *fpstp)
83
+ if (fgt_svc(env, el)) {
29
return nan;
84
+ DP_TBFLAG_ANY(flags, FGT_SVC, 1);
85
+ }
30
}
86
}
31
87
32
+ a = float32_squash_input_denormal(a, fpst);
88
if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
33
+
89
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
34
val32 = float32_val(a);
90
index XXXXXXX..XXXXXXX 100644
35
sbit = 0x80000000ULL & val32;
91
--- a/target/arm/translate-a64.c
36
exp = extract32(val32, 23, 8);
92
+++ b/target/arm/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ float64 HELPER(frecpx_f64)(float64 a, void *fpstp)
93
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
38
return nan;
94
int opc = extract32(insn, 21, 3);
95
int op2_ll = extract32(insn, 0, 5);
96
int imm16 = extract32(insn, 5, 16);
97
+ uint32_t syndrome;
98
99
switch (opc) {
100
case 0:
101
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
102
*/
103
switch (op2_ll) {
104
case 1: /* SVC */
105
+ syndrome = syn_aa64_svc(imm16);
106
+ if (s->fgt_svc) {
107
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
108
+ break;
109
+ }
110
gen_ss_advance(s);
111
- gen_exception_insn(s, 4, EXCP_SWI, syn_aa64_svc(imm16));
112
+ gen_exception_insn(s, 4, EXCP_SWI, syndrome);
113
break;
114
case 2: /* HVC */
115
if (s->current_el == 0) {
116
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
117
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
118
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
119
dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
120
+ dc->fgt_svc = EX_TBFLAG_ANY(tb_flags, FGT_SVC);
121
dc->fgt_eret = EX_TBFLAG_A64(tb_flags, FGT_ERET);
122
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
123
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
124
diff --git a/target/arm/translate.c b/target/arm/translate.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/translate.c
127
+++ b/target/arm/translate.c
128
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
129
(a->imm == semihost_imm)) {
130
gen_exception_internal_insn(s, EXCP_SEMIHOST);
131
} else {
132
- gen_update_pc(s, curr_insn_len(s));
133
- s->svc_imm = a->imm;
134
- s->base.is_jmp = DISAS_SWI;
135
+ if (s->fgt_svc) {
136
+ uint32_t syndrome = syn_aa32_svc(a->imm, s->thumb);
137
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syndrome, 2);
138
+ } else {
139
+ gen_update_pc(s, curr_insn_len(s));
140
+ s->svc_imm = a->imm;
141
+ s->base.is_jmp = DISAS_SWI;
142
+ }
39
}
143
}
40
144
return true;
41
+ a = float64_squash_input_denormal(a, fpst);
145
}
42
+
146
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
43
val64 = float64_val(a);
147
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
44
sbit = 0x8000000000000000ULL & val64;
148
dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
45
exp = extract64(float64_val(a), 52, 11);
149
dc->fgt_active = EX_TBFLAG_ANY(tb_flags, FGT_ACTIVE);
150
+ dc->fgt_svc = EX_TBFLAG_ANY(tb_flags, FGT_SVC);
151
152
if (arm_feature(env, ARM_FEATURE_M)) {
153
dc->vfp_enabled = 1;
46
--
154
--
47
2.17.1
155
2.34.1
48
49
1
As part of plumbing MemTxAttrs down to the IOMMU translate method,
1
FEAT_FGT also implements an extra trap bit in the MDCR_EL2 and
2
add MemTxAttrs as an argument to the MemoryRegion valid.accepts
2
MDCR_EL3 registers: bit TDCC enables trapping of use of the Debug
3
callback. We'll need this for subpage_accepts().
3
Comms Channel registers OSDTRRX_EL1, OSDTRTX_EL1, MDCCSR_EL0,
4
MDCCINT_EL0, DBGDTR_EL0, DBGDTRRX_EL0 and DBGDTRTX_EL0 (and their
5
AArch32 equivalents). This trapping is independent of whether
6
fine-grained traps are enabled or not.
4
7
5
We could take the approach we used with the read and write
8
Implement these extra traps. (We don't implement DBGDTR_EL0,
6
callbacks and add new a new _with_attrs version, but since there
9
DBGDTRRX_EL0 and DBGDTRTX_EL0.)
7
are so few implementations of the accepts hook we just change
8
them all.
9
10
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20180521140402.23318-9-peter.maydell@linaro.org
13
Tested-by: Fuad Tabba <tabba@google.com>
14
Message-id: 20230130182459.3309057-23-peter.maydell@linaro.org
15
Message-id: 20230127175507.2895013-23-peter.maydell@linaro.org
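
The shape of the new access check, as a standalone sketch that keeps only
the TDA/TDCC tests and omits the MDCR_EL2.TDE and HCR_EL2.TGE cases the
real helper also handles:

/*
 * Simplified model of the Debug Comms Channel access check: trap to EL2
 * if below EL2 and MDCR_EL2.TDA or MDCR_EL2.TDCC is set, otherwise trap
 * to EL3 if below EL3 and MDCR_EL3.TDA or MDCR_EL3.TDCC is set.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { ACCESS_OK, TRAP_EL2, TRAP_EL3 } AccessResult;

typedef struct {
    bool el2_tda, el2_tdcc;
    bool el3_tda, el3_tdcc;
} DebugTraps;

static AccessResult access_dcc(int current_el, const DebugTraps *t)
{
    if (current_el < 2 && (t->el2_tda || t->el2_tdcc)) {
        return TRAP_EL2;
    }
    if (current_el < 3 && (t->el3_tda || t->el3_tdcc)) {
        return TRAP_EL3;
    }
    return ACCESS_OK;
}

int main(void)
{
    DebugTraps t = { .el2_tdcc = true, .el3_tdcc = true };
    static const char *names[] = { "OK", "trap to EL2", "trap to EL3" };

    for (int el = 0; el <= 3; el++) {
        printf("EL%d access: %s\n", el, names[access_dcc(el, &t)]);
    }
    return 0;
}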
14
---
16
---
15
include/exec/memory.h | 3 ++-
17
target/arm/debug_helper.c | 35 +++++++++++++++++++++++++++++++----
16
exec.c | 9 ++++++---
18
1 file changed, 31 insertions(+), 4 deletions(-)
17
hw/hppa/dino.c | 3 ++-
18
hw/nvram/fw_cfg.c | 12 ++++++++----
19
hw/scsi/esp.c | 3 ++-
20
hw/xen/xen_pt_msi.c | 3 ++-
21
memory.c | 5 +++--
22
7 files changed, 25 insertions(+), 13 deletions(-)
23
19
24
diff --git a/include/exec/memory.h b/include/exec/memory.h
20
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
25
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/memory.h
22
--- a/target/arm/debug_helper.c
27
+++ b/include/exec/memory.h
23
+++ b/target/arm/debug_helper.c
28
@@ -XXX,XX +XXX,XX @@ struct MemoryRegionOps {
24
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
29
* as a machine check exception).
25
return CP_ACCESS_OK;
30
*/
31
bool (*accepts)(void *opaque, hwaddr addr,
32
- unsigned size, bool is_write);
33
+ unsigned size, bool is_write,
34
+ MemTxAttrs attrs);
35
} valid;
36
/* Internal implementation constraints: */
37
struct {
38
diff --git a/exec.c b/exec.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/exec.c
41
+++ b/exec.c
42
@@ -XXX,XX +XXX,XX @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
43
}
26
}
44
27
45
static bool notdirty_mem_accepts(void *opaque, hwaddr addr,
28
+/*
46
- unsigned size, bool is_write)
29
+ * Check for traps to Debug Comms Channel registers. If FEAT_FGT
47
+ unsigned size, bool is_write,
30
+ * is implemented then these are controlled by MDCR_EL2.TDCC for
48
+ MemTxAttrs attrs)
31
+ * EL2 and MDCR_EL3.TDCC for EL3. They are also controlled by
32
+ * the general debug access trap bits MDCR_EL2.TDA and MDCR_EL3.TDA.
33
+ */
34
+static CPAccessResult access_tdcc(CPUARMState *env, const ARMCPRegInfo *ri,
35
+ bool isread)
36
+{
37
+ int el = arm_current_el(env);
38
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
39
+ bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
40
+ (arm_hcr_el2_eff(env) & HCR_TGE);
41
+ bool mdcr_el2_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
42
+ (mdcr_el2 & MDCR_TDCC);
43
+ bool mdcr_el3_tdcc = cpu_isar_feature(aa64_fgt, env_archcpu(env)) &&
44
+ (env->cp15.mdcr_el3 & MDCR_TDCC);
45
+
46
+ if (el < 2 && (mdcr_el2_tda || mdcr_el2_tdcc)) {
47
+ return CP_ACCESS_TRAP_EL2;
48
+ }
49
+ if (el < 3 && ((env->cp15.mdcr_el3 & MDCR_TDA) || mdcr_el3_tdcc)) {
50
+ return CP_ACCESS_TRAP_EL3;
51
+ }
52
+ return CP_ACCESS_OK;
53
+}
54
+
55
static void oslar_write(CPUARMState *env, const ARMCPRegInfo *ri,
56
uint64_t value)
49
{
57
{
50
return is_write;
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
51
}
59
*/
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
60
{ .name = "MDCCSR_EL0", .state = ARM_CP_STATE_AA64,
53
}
61
.opc0 = 2, .opc1 = 3, .crn = 0, .crm = 1, .opc2 = 0,
54
62
- .access = PL0_R, .accessfn = access_tda,
55
static bool subpage_accepts(void *opaque, hwaddr addr,
63
+ .access = PL0_R, .accessfn = access_tdcc,
56
- unsigned len, bool is_write)
64
.type = ARM_CP_CONST, .resetvalue = 0 },
57
+ unsigned len, bool is_write,
65
/*
58
+ MemTxAttrs attrs)
66
* OSDTRRX_EL1/OSDTRTX_EL1 are used for save and restore of DBGDTRRX_EL0.
59
{
67
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
60
subpage_t *subpage = opaque;
68
*/
61
#if defined(DEBUG_SUBPAGE)
69
{ .name = "OSDTRRX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
62
@@ -XXX,XX +XXX,XX @@ static void readonly_mem_write(void *opaque, hwaddr addr,
70
.opc0 = 2, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 2,
63
}
71
- .access = PL1_RW, .accessfn = access_tda,
64
72
+ .access = PL1_RW, .accessfn = access_tdcc,
65
static bool readonly_mem_accepts(void *opaque, hwaddr addr,
73
.type = ARM_CP_CONST, .resetvalue = 0 },
66
- unsigned size, bool is_write)
74
{ .name = "OSDTRTX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
67
+ unsigned size, bool is_write,
75
.opc0 = 2, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
68
+ MemTxAttrs attrs)
76
- .access = PL1_RW, .accessfn = access_tda,
69
{
77
+ .access = PL1_RW, .accessfn = access_tdcc,
70
return is_write;
78
.type = ARM_CP_CONST, .resetvalue = 0 },
71
}
79
/*
72
diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
80
* OSECCR_EL1 provides a mechanism for an operating system
73
index XXXXXXX..XXXXXXX 100644
81
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
74
--- a/hw/hppa/dino.c
82
*/
75
+++ b/hw/hppa/dino.c
83
{ .name = "MDCCINT_EL1", .state = ARM_CP_STATE_BOTH,
76
@@ -XXX,XX +XXX,XX @@ static void gsc_to_pci_forwarding(DinoState *s)
84
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
77
}
85
- .access = PL1_RW, .accessfn = access_tda,
78
86
+ .access = PL1_RW, .accessfn = access_tdcc,
79
static bool dino_chip_mem_valid(void *opaque, hwaddr addr,
87
.type = ARM_CP_NOP },
80
- unsigned size, bool is_write)
88
/*
81
+ unsigned size, bool is_write,
89
* Dummy DBGCLAIM registers.
82
+ MemTxAttrs attrs)
83
{
84
switch (addr) {
85
case DINO_IAR0:
86
diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
87
index XXXXXXX..XXXXXXX 100644
88
--- a/hw/nvram/fw_cfg.c
89
+++ b/hw/nvram/fw_cfg.c
90
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_dma_mem_write(void *opaque, hwaddr addr,
91
}
92
93
static bool fw_cfg_dma_mem_valid(void *opaque, hwaddr addr,
94
- unsigned size, bool is_write)
95
+ unsigned size, bool is_write,
96
+ MemTxAttrs attrs)
97
{
98
return !is_write || ((size == 4 && (addr == 0 || addr == 4)) ||
99
(size == 8 && addr == 0));
100
}
101
102
static bool fw_cfg_data_mem_valid(void *opaque, hwaddr addr,
103
- unsigned size, bool is_write)
104
+ unsigned size, bool is_write,
105
+ MemTxAttrs attrs)
106
{
107
return addr == 0;
108
}
109
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_ctl_mem_write(void *opaque, hwaddr addr,
110
}
111
112
static bool fw_cfg_ctl_mem_valid(void *opaque, hwaddr addr,
113
- unsigned size, bool is_write)
114
+ unsigned size, bool is_write,
115
+ MemTxAttrs attrs)
116
{
117
return is_write && size == 2;
118
}
119
@@ -XXX,XX +XXX,XX @@ static void fw_cfg_comb_write(void *opaque, hwaddr addr,
120
}
121
122
static bool fw_cfg_comb_valid(void *opaque, hwaddr addr,
123
- unsigned size, bool is_write)
124
+ unsigned size, bool is_write,
125
+ MemTxAttrs attrs)
126
{
127
return (size == 1) || (is_write && size == 2);
128
}
129
diff --git a/hw/scsi/esp.c b/hw/scsi/esp.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/hw/scsi/esp.c
132
+++ b/hw/scsi/esp.c
133
@@ -XXX,XX +XXX,XX @@ void esp_reg_write(ESPState *s, uint32_t saddr, uint64_t val)
134
}
135
136
static bool esp_mem_accepts(void *opaque, hwaddr addr,
137
- unsigned size, bool is_write)
138
+ unsigned size, bool is_write,
139
+ MemTxAttrs attrs)
140
{
141
return (size == 1) || (is_write && size == 4);
142
}
143
diff --git a/hw/xen/xen_pt_msi.c b/hw/xen/xen_pt_msi.c
144
index XXXXXXX..XXXXXXX 100644
145
--- a/hw/xen/xen_pt_msi.c
146
+++ b/hw/xen/xen_pt_msi.c
147
@@ -XXX,XX +XXX,XX @@ static uint64_t pci_msix_read(void *opaque, hwaddr addr,
148
}
149
150
static bool pci_msix_accepts(void *opaque, hwaddr addr,
151
- unsigned size, bool is_write)
152
+ unsigned size, bool is_write,
153
+ MemTxAttrs attrs)
154
{
155
return !(addr & (size - 1));
156
}
157
diff --git a/memory.c b/memory.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/memory.c
160
+++ b/memory.c
161
@@ -XXX,XX +XXX,XX @@ static void unassigned_mem_write(void *opaque, hwaddr addr,
162
}
163
164
static bool unassigned_mem_accepts(void *opaque, hwaddr addr,
165
- unsigned size, bool is_write)
166
+ unsigned size, bool is_write,
167
+ MemTxAttrs attrs)
168
{
169
return false;
170
}
171
@@ -XXX,XX +XXX,XX @@ bool memory_region_access_valid(MemoryRegion *mr,
172
access_size = MAX(MIN(size, access_size_max), access_size_min);
173
for (i = 0; i < size; i += access_size) {
174
if (!mr->ops->valid.accepts(mr->opaque, addr + i, access_size,
175
- is_write)) {
176
+ is_write, attrs)) {
177
return false;
178
}
179
}
180
--
90
--
181
2.17.1
91
2.34.1
182
183
New patch
1
Update the ID registers for TCG's '-cpu max' to report the
2
presence of FEAT_FGT Fine-Grained Traps support.
1
3
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Tested-by: Fuad Tabba <tabba@google.com>
7
Message-id: 20230130182459.3309057-24-peter.maydell@linaro.org
8
Message-id: 20230127175507.2895013-24-peter.maydell@linaro.org
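
The ID register update is just a 4-bit field deposit; a standalone sketch
of the same operation follows. The offset used (bits [59:56] for
ID_AA64MMFR0_EL1.FGT) is my reading of the architecture, so treat it as
illustrative:

/*
 * Standalone sketch of depositing a feature field into a 64-bit ID
 * register, in the style of FIELD_DP64().
 */
#include <stdint.h>
#include <stdio.h>

/* Deposit a 'len'-bit value at bit 'pos' of a 64-bit register. */
static uint64_t dp64(uint64_t reg, unsigned pos, unsigned len, uint64_t val)
{
    uint64_t mask = ((1ULL << len) - 1) << pos;
    return (reg & ~mask) | ((val << pos) & mask);
}

int main(void)
{
    uint64_t id_aa64mmfr0 = 0;

    /* Advertise FEAT_FGT: field value 1 in the (assumed) FGT field. */
    id_aa64mmfr0 = dp64(id_aa64mmfr0, 56, 4, 1);

    printf("ID_AA64MMFR0 = 0x%016llx\n",
           (unsigned long long)id_aa64mmfr0);
    return 0;
}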
9
---
10
docs/system/arm/emulation.rst | 1 +
11
target/arm/cpu64.c | 1 +
12
2 files changed, 2 insertions(+)
13
14
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
15
index XXXXXXX..XXXXXXX 100644
16
--- a/docs/system/arm/emulation.rst
17
+++ b/docs/system/arm/emulation.rst
18
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
19
- FEAT_ETS (Enhanced Translation Synchronization)
20
- FEAT_EVT (Enhanced Virtualization Traps)
21
- FEAT_FCMA (Floating-point complex number instructions)
22
+- FEAT_FGT (Fine-Grained Traps)
23
- FEAT_FHM (Floating-point half-precision multiplication instructions)
24
- FEAT_FP16 (Half-precision floating-point data processing)
25
- FEAT_FRINTTS (Floating-point to integer instructions)
26
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpu64.c
29
+++ b/target/arm/cpu64.c
30
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
31
t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN16_2, 2); /* 16k stage2 supported */
32
t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN64_2, 2); /* 64k stage2 supported */
33
t = FIELD_DP64(t, ID_AA64MMFR0, TGRAN4_2, 2); /* 4k stage2 supported */
34
+ t = FIELD_DP64(t, ID_AA64MMFR0, FGT, 1); /* FEAT_FGT */
35
cpu->isar.id_aa64mmfr0 = t;
36
37
t = cpu->isar.id_aa64mmfr1;
38
--
39
2.34.1