First pullreq of the 3.1 release cycle, with lots of
Arm related patches accumulated during freeze. Most
notable here is Luc's GICv2 virtualization support and
my execute-from-MMIO patches.

I stopped looking at my to-review queue towards the
end of freeze, since 45 patches is already pushing what
I consider a reasonable sized pullreq; once this goes into
master I'll start working through it again.

thanks
-- PMM

The following changes since commit 38441756b70eec5807b5f60dad11a93a91199866:

  Update version for v3.0.0 release (2018-08-14 16:38:43 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180814

for you to fetch changes up to 054e7adf4e64e4acb3b033348ebf7cc871baa34f:

  target/arm: Fix typo in helper_sve_movz_d (2018-08-14 17:17:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement more of ARMv6-M support
 * Support direct execution from non-RAM regions;
   use this to implement execution from small (<1K) MPU regions
 * GICv2: implement the virtualization extensions
 * support a virtualization-capable GICv2 in the virt and
   xlnx-zynqmp boards
 * arm: Fix return code of arm_load_elf() so we can detect
   failure to load the file correctly
 * Implement HCR_EL2.TGE ("trap general exceptions") bit
 * Implement tailchaining for M profile cores
 * Fix bugs in SVE compare, saturating add/sub, WHILE, MOVZ

----------------------------------------------------------------
Adam Lackorzynski (1):
      arm: Fix return code of arm_load_elf

Julia Suvorova (4):
      target/arm: Forbid unprivileged mode for M Baseline
      nvic: Handle ARMv6-M SCS reserved registers
      arm: Add ARMv6-M programmer's model support
      nvic: Change NVIC to support ARMv6-M

Luc Michel (20):
      intc/arm_gic: Refactor operations on the distributor
      intc/arm_gic: Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers
      intc/arm_gic: Remove some dead code and put some functions static
      vmstate.h: Provide VMSTATE_UINT16_SUB_ARRAY
      intc/arm_gic: Add the virtualization extensions to the GIC state
      intc/arm_gic: Add virtual interface register definitions
      intc/arm_gic: Add virtualization extensions helper macros and functions
      intc/arm_gic: Refactor secure/ns access check in the CPU interface
      intc/arm_gic: Add virtualization enabled IRQ helper functions
      intc/arm_gic: Implement virtualization extensions in gic_(activate_irq|drop_prio)
      intc/arm_gic: Implement virtualization extensions in gic_acknowledge_irq
      intc/arm_gic: Implement virtualization extensions in gic_(deactivate|complete_irq)
      intc/arm_gic: Implement virtualization extensions in gic_cpu_(read|write)
      intc/arm_gic: Wire the vCPU interface
      intc/arm_gic: Implement the virtual interface registers
      intc/arm_gic: Implement gic_update_virt() function
      intc/arm_gic: Implement maintenance interrupt generation
      intc/arm_gic: Improve traces
      xlnx-zynqmp: Improve GIC wiring and MMIO mapping
      arm/virt: Add support for GICv2 virtualization extensions

Peter Maydell (16):
      accel/tcg: Pass read access type through to io_readx()
      accel/tcg: Handle get_page_addr_code() returning -1 in hashtable lookups
      accel/tcg: Handle get_page_addr_code() returning -1 in tb_check_watchpoint()
      accel/tcg: tb_gen_code(): Create single-insn TB for execution from non-RAM
      accel/tcg: Return -1 for execution from MMIO regions in get_page_addr_code()
      target/arm: Allow execution from small regions
      accel/tcg: Check whether TLB entry is RAM consistently with how we set it up
      target/arm: Mask virtual interrupts if HCR_EL2.TGE is set
      target/arm: Honour HCR_EL2.TGE and MDCR_EL2.TDE in debug register access checks
      target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions
      target/arm: Provide accessor functions for HCR_EL2.{IMO, FMO, AMO}
      target/arm: Treat SCTLR_EL1.M as if it were zero when HCR_EL2.TGE is set
      target/arm: Improve exception-taken logging
      target/arm: Initialize exc_secure correctly in do_v7m_exception_exit()
      target/arm: Restore M-profile CONTROL.SPSEL before any tailchaining
      target/arm: Implement tailchaining for M profile cores

Richard Henderson (4):
      target/arm: Fix sign of sve_cmpeq_ppzw/sve_cmpne_ppzw
      target/arm: Fix typo in do_sat_addsub_64
      target/arm: Reorganize SVE WHILE
      target/arm: Fix typo in helper_sve_movz_d

 accel/tcg/softmmu_template.h     |  11 +-
 hw/intc/gic_internal.h           | 282 +++++++++--
 include/exec/exec-all.h          |   2 -
 include/hw/arm/virt.h            |   4 +-
 include/hw/arm/xlnx-zynqmp.h     |   4 +-
 include/hw/intc/arm_gic_common.h |  43 +-
 include/hw/intc/armv7m_nvic.h    |   1 +
 include/migration/vmstate.h      |   3 +
 include/qom/cpu.h                |   6 +
 target/arm/cpu.h                 |  62 ++-
 accel/tcg/cpu-exec.c             |   3 +
 accel/tcg/cputlb.c               | 111 +----
 accel/tcg/translate-all.c        |  23 +-
 exec.c                           |   6 -
 hw/arm/boot.c                    |   8 +-
 hw/arm/virt-acpi-build.c         |   6 +-
 hw/arm/virt.c                    |  52 ++-
 hw/arm/xlnx-zynqmp.c             |  92 +++-
 hw/intc/arm_gic.c                | 987 +++++++++++++++++++++++++++++++--------
 hw/intc/arm_gic_common.c         | 154 ++++--
 hw/intc/arm_gic_kvm.c            |  31 +-
 hw/intc/arm_gicv3_cpuif.c        |  19 +-
 hw/intc/armv7m_nvic.c            |  82 +++-
 memory.c                         |   3 +-
 target/arm/cpu.c                 |   4 +
 target/arm/helper.c              | 127 +++--
 target/arm/op_helper.c           |  14 +
 target/arm/sve_helper.c          |  19 +-
 target/arm/translate-sve.c       |  51 +-
 hw/intc/trace-events             |  12 +-
 30 files changed, 1724 insertions(+), 498 deletions(-)
From: Richard Henderson <richard.henderson@linaro.org>

Used the wrong temporary in the computation of subtractive overflow.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_64(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
         /* Detect signed overflow for subtraction. */
         tcg_gen_xor_i64(t0, reg, val);
         tcg_gen_sub_i64(t1, reg, val);
-        tcg_gen_xor_i64(reg, reg, t0);
+        tcg_gen_xor_i64(reg, reg, t1);
         tcg_gen_and_i64(t0, t0, reg);

         /* Bound the result. */
--
2.18.0
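The fix follows from the standard two's-complement rule for subtraction:
for r = a - b, signed overflow occurs exactly when a and b have different
signs and r's sign differs from a's, i.e. when (a ^ b) & (a ^ r) has its
sign bit set. In the TCG sequence above, t0 holds (a ^ b) and t1 holds
the difference; the buggy line XORed reg with t0 a second time instead of
with t1. A minimal sketch of the rule in plain C (illustration only, not
part of the patch; the helper name is made up):

    #include <stdint.h>
    #include <stdbool.h>

    /* Overflow of r = a - b, by the sign-bit rule the TCG ops implement. */
    static bool sub64_overflows(int64_t a, int64_t b)
    {
        int64_t r = (int64_t)((uint64_t)a - (uint64_t)b);  /* this is t1 */
        /* (a ^ b) is t0; the second XOR must use r, not (a ^ b) again */
        return ((a ^ b) & (a ^ r)) < 0;
    }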
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.

To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.

(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
---
 target/arm/cpu.h          | 64 +++++++++++++++++++++++++++++++++++----
 hw/intc/arm_gicv3_cpuif.c | 19 ++++++------
 target/arm/helper.c       |  6 ++--
 3 files changed, 71 insertions(+), 18 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HCR_RW        (1ULL << 31)
 #define HCR_CD        (1ULL << 32)
 #define HCR_ID        (1ULL << 33)
+#define HCR_E2H       (1ULL << 34)
+/*
+ * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
+ * HCR_MASK and then clear it again if the feature bit is not set in
+ * hcr_write().
+ */
 #define HCR_MASK      ((1ULL << 34) - 1)

 #define SCR_NS                (1U << 0)
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu);
 # define TARGET_VIRT_ADDR_SPACE_BITS 32
 #endif

+/**
+ * arm_hcr_el2_imo(): Return the effective value of HCR_EL2.IMO.
+ * Depending on the values of HCR_EL2.E2H and TGE, this may be
+ * "behaves as 1 for all purposes other than direct read/write" or
+ * "behaves as 0 for all purposes other than direct read/write"
+ */
+static inline bool arm_hcr_el2_imo(CPUARMState *env)
+{
+    switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
+    case HCR_TGE:
+        return true;
+    case HCR_TGE | HCR_E2H:
+        return false;
+    default:
+        return env->cp15.hcr_el2 & HCR_IMO;
+    }
+}
+
+/**
+ * arm_hcr_el2_fmo(): Return the effective value of HCR_EL2.FMO.
+ */
+static inline bool arm_hcr_el2_fmo(CPUARMState *env)
+{
+    switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
+    case HCR_TGE:
+        return true;
+    case HCR_TGE | HCR_E2H:
+        return false;
+    default:
+        return env->cp15.hcr_el2 & HCR_FMO;
+    }
+}
+
+/**
+ * arm_hcr_el2_amo(): Return the effective value of HCR_EL2.AMO.
+ */
+static inline bool arm_hcr_el2_amo(CPUARMState *env)
+{
+    switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
+    case HCR_TGE:
+        return true;
+    case HCR_TGE | HCR_E2H:
+        return false;
+    default:
+        return env->cp15.hcr_el2 & HCR_AMO;
+    }
+}
+
 static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
                                      unsigned int target_el)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
         break;

     case EXCP_VFIQ:
-        if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
-            || (env->cp15.hcr_el2 & HCR_TGE)) {
+        if (secure || !arm_hcr_el2_fmo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
             /* VFIQs are only taken when hypervized and non-secure. */
             return false;
         }
         return !(env->daif & PSTATE_F);
     case EXCP_VIRQ:
-        if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
-            || (env->cp15.hcr_el2 & HCR_TGE)) {
+        if (secure || !arm_hcr_el2_imo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
             /* VIRQs are only taken when hypervized and non-secure. */
             return false;
         }
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
              * to the CPSR.F setting otherwise we further assess the state
              * below.
              */
-            hcr = (env->cp15.hcr_el2 & HCR_FMO);
+            hcr = arm_hcr_el2_fmo(env);
             scr = (env->cp15.scr_el3 & SCR_FIQ);

             /* When EL3 is 32-bit, the SCR.FW bit controls whether the
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
              * when setting the target EL, so it does not have a further
              * affect here.
              */
-            hcr = (env->cp15.hcr_el2 & HCR_IMO);
+            hcr = arm_hcr_el2_imo(env);
             scr = false;
             break;
         default:
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static bool icv_access(CPUARMState *env, int hcr_flags)
      * * access if NS EL1 and either IMO or FMO == 1:
      *   CTLR, DIR, PMR, RPR
      */
-    return (env->cp15.hcr_el2 & hcr_flags) && arm_current_el(env) == 1
+    bool flagmatch = ((hcr_flags & HCR_IMO) && arm_hcr_el2_imo(env)) ||
+        ((hcr_flags & HCR_FMO) && arm_hcr_el2_fmo(env));
+
+    return flagmatch && arm_current_el(env) == 1
         && !arm_is_secure_below_el3(env);
 }

@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
     /* No need to include !IsSecure in route_*_to_el2 as it's only
      * tested in cases where we know !IsSecure is true.
      */
-    route_fiq_to_el2 = env->cp15.hcr_el2 & HCR_FMO;
-    route_irq_to_el2 = env->cp15.hcr_el2 & HCR_IMO;
+    route_fiq_to_el2 = arm_hcr_el2_fmo(env);
+    route_irq_to_el2 = arm_hcr_el2_imo(env);

     switch (arm_current_el(env)) {
     case 3:
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irqfiq_access(CPUARMState *env,
     switch (el) {
     case 1:
         if (arm_is_secure_below_el3(env) ||
-            ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) == 0)) {
+            (arm_hcr_el2_imo(env) == 0 && arm_hcr_el2_fmo(env) == 0)) {
             r = CP_ACCESS_TRAP_EL3;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_dir_access(CPUARMState *env,
 static CPAccessResult gicv3_sgi_access(CPUARMState *env,
                                        const ARMCPRegInfo *ri, bool isread)
 {
-    if ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) &&
+    if ((arm_hcr_el2_imo(env) || arm_hcr_el2_fmo(env)) &&
         arm_current_el(env) == 1 && !arm_is_secure_below_el3(env)) {
         /* Takes priority over a possible EL3 trap */
         return CP_ACCESS_TRAP_EL2;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_fiq_access(CPUARMState *env,
     if (env->cp15.scr_el3 & SCR_FIQ) {
         switch (el) {
         case 1:
-            if (arm_is_secure_below_el3(env) ||
-                ((env->cp15.hcr_el2 & HCR_FMO) == 0)) {
+            if (arm_is_secure_below_el3(env) || !arm_hcr_el2_fmo(env)) {
                 r = CP_ACCESS_TRAP_EL3;
             }
             break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irq_access(CPUARMState *env,
     if (env->cp15.scr_el3 & SCR_IRQ) {
         switch (el) {
         case 1:
-            if (arm_is_secure_below_el3(env) ||
-                ((env->cp15.hcr_el2 & HCR_IMO) == 0)) {
+            if (arm_is_secure_below_el3(env) || !arm_hcr_el2_imo(env)) {
                 r = CP_ACCESS_TRAP_EL3;
             }
             break;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
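Tabulating the rule that all three accessors above encode (a summary of
the commit message, not additional code in the patch):

    HCR_EL2.TGE  HCR_EL2.E2H  effective IMO/FMO/AMO value
         0            x       the raw bit as written in HCR_EL2
         1            0       1 ("behaves as 1 other than for direct reads")
         1            1       0 ("behaves as 0 other than for direct reads")

Centralising this switch in one place is the point of the refactoring:
each callsite in the diff would otherwise have to repeat the TGE/E2H
test alongside its existing IMO/FMO check.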
Two small bugfixes, plus most of RTH's refactoring of cpregs
handling.

-- PMM

The following changes since commit 1fba9dc71a170b3a05b9d3272dd8ecfe7f26e215:

  Merge tag 'pull-request-2022-05-04' of https://gitlab.com/thuth/qemu into staging (2022-05-04 08:07:02 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220505

for you to fetch changes up to 99a50d1a67c602126fc2b3a4812d3000eba9bf34:

  target/arm: read access to performance counters from EL0 (2022-05-05 09:36:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Enable read access to performance counters from EL0
 * Enable SCTLR_EL1.BT0 for aarch64-linux-user
 * Refactoring of cpreg handling

----------------------------------------------------------------
Alex Zuepke (1):
      target/arm: read access to performance counters from EL0

Richard Henderson (22):
      target/arm: Enable SCTLR_EL1.BT0 for aarch64-linux-user
      target/arm: Split out cpregs.h
      target/arm: Reorg CPAccessResult and access_check_cp_reg
      target/arm: Replace sentinels with ARRAY_SIZE in cpregs.h
      target/arm: Make some more cpreg data static const
      target/arm: Reorg ARMCPRegInfo type field bits
      target/arm: Avoid bare abort() or assert(0)
      target/arm: Change cpreg access permissions to enum
      target/arm: Name CPState type
      target/arm: Name CPSecureState type
      target/arm: Drop always-true test in define_arm_vh_e2h_redirects_aliases
      target/arm: Store cpregs key in the hash table directly
      target/arm: Merge allocation of the cpreg and its name
      target/arm: Hoist computation of key in add_cpreg_to_hashtable
      target/arm: Consolidate cpreg updates in add_cpreg_to_hashtable
      target/arm: Use bool for is64 and ns in add_cpreg_to_hashtable
      target/arm: Hoist isbanked computation in add_cpreg_to_hashtable
      target/arm: Perform override check early in add_cpreg_to_hashtable
      target/arm: Reformat comments in add_cpreg_to_hashtable
      target/arm: Remove HOST_BIG_ENDIAN ifdef in add_cpreg_to_hashtable
      target/arm: Add isar predicates for FEAT_Debugv8p2
      target/arm: Add isar_feature_{aa64,any}_ras

 target/arm/cpregs.h               | 453 ++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.h                  | 393 +++------------------------------
 hw/arm/pxa2xx.c                   |   2 +-
 hw/arm/pxa2xx_pic.c               |   2 +-
 hw/intc/arm_gicv3_cpuif.c         |   6 +-
 hw/intc/arm_gicv3_kvm.c           |   3 +-
 target/arm/cpu.c                  |  25 +--
 target/arm/cpu64.c                |   2 +-
 target/arm/cpu_tcg.c              |   5 +-
 target/arm/gdbstub.c              |   5 +-
 target/arm/helper.c               | 358 +++++++++++++-----------------
 target/arm/hvf/hvf.c              |   2 +-
 target/arm/kvm-stub.c             |   4 +-
 target/arm/kvm.c                  |   4 +-
 target/arm/machine.c              |   4 +-
 target/arm/op_helper.c            |  57 ++---
 target/arm/translate-a64.c        |  14 +-
 target/arm/translate-neon.c       |   2 +-
 target/arm/translate.c            |  13 +-
 tests/tcg/aarch64/bti-3.c         |  42 ++++
 tests/tcg/aarch64/Makefile.target |   6 +-
 21 files changed, 738 insertions(+), 664 deletions(-)
 create mode 100644 target/arm/cpregs.h
 create mode 100644 tests/tcg/aarch64/bti-3.c

From: Richard Henderson <richard.henderson@linaro.org>

This controls whether the PACI{A,B}SP instructions trap with BTYPE=3
(indirect branch from register other than x16/x17). The linux kernel
sets this in bti_enable().

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/998
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220427042312.294300-1-richard.henderson@linaro.org
[PMM: remove stray change to makefile comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c                  |  2 ++
 tests/tcg/aarch64/bti-3.c         | 42 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target |  6 ++---
 3 files changed, 47 insertions(+), 3 deletions(-)
 create mode 100644 tests/tcg/aarch64/bti-3.c

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         /* Enable all PAC keys. */
         env->cp15.sctlr_el[1] |= (SCTLR_EnIA | SCTLR_EnIB |
                                   SCTLR_EnDA | SCTLR_EnDB);
+        /* Trap on btype=3 for PACIxSP. */
+        env->cp15.sctlr_el[1] |= SCTLR_BT0;
         /* and to the FP/Neon instructions */
         env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
         /* and to the SVE instructions */
diff --git a/tests/tcg/aarch64/bti-3.c b/tests/tcg/aarch64/bti-3.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/tests/tcg/aarch64/bti-3.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * BTI vs PACIASP
+ */
+
+#include "bti-crt.inc.c"
+
+static void skip2_sigill(int sig, siginfo_t *info, ucontext_t *uc)
+{
+    uc->uc_mcontext.pc += 8;
+    uc->uc_mcontext.pstate = 1;
+}
+
+#define BTYPE_1() \
+    asm("mov %0,#1; adr x16, 1f; br x16; 1: hint #25; mov %0,#0" \
+        : "=r"(skipped) : : "x16", "x30")
+
+#define BTYPE_2() \
+    asm("mov %0,#1; adr x16, 1f; blr x16; 1: hint #25; mov %0,#0" \
+        : "=r"(skipped) : : "x16", "x30")
+
+#define BTYPE_3() \
+    asm("mov %0,#1; adr x15, 1f; br x15; 1: hint #25; mov %0,#0" \
+        : "=r"(skipped) : : "x15", "x30")
+
+#define TEST(WHICH, EXPECT) \
+    do { WHICH(); fail += skipped ^ EXPECT; } while (0)
+
+int main()
+{
+    int fail = 0;
+    int skipped;
+
+    /* Signal-like with SA_SIGINFO. */
+    signal_info(SIGILL, skip2_sigill);
+
+    /* With SCTLR_EL1.BT0 set, PACIASP is not compatible with type=3. */
+    TEST(BTYPE_1, 0);
+    TEST(BTYPE_2, 0);
+    TEST(BTYPE_3, 1);
+
+    return fail;
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
 # BTI Tests
 # bti-1 tests the elf notes, so we require special compiler support.
 ifneq ($(CROSS_CC_HAS_ARMV8_BTI),)
-AARCH64_TESTS += bti-1
-bti-1: CFLAGS += -mbranch-protection=standard
-bti-1: LDFLAGS += -nostdlib
+AARCH64_TESTS += bti-1 bti-3
+bti-1 bti-3: CFLAGS += -mbranch-protection=standard
+bti-1 bti-3: LDFLAGS += -nostdlib
 endif
 # bti-2 tests PROT_BTI, so no special compiler support required.
 AARCH64_TESTS += bti-2
--
2.25.1
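For reference, the three macros in bti-3.c set up the three PSTATE.BTYPE
values a BTI-guarded landing pad can observe: BR via x16 gives BTYPE=1,
BLR gives BTYPE=2, and BR via any other register (x15 here) gives
BTYPE=3; "hint #25" is the PACIASP landing pad. The acceptance rule the
test relies on can be sketched as follows (illustration only, assuming
the v8.5-BTI behaviour described in the commit message; this is not QEMU
code):

    /* Is PACIASP a valid branch target for the given PSTATE.BTYPE? */
    static bool paciasp_valid_target(int btype, bool bt0)
    {
        if (btype == 3) {   /* BR from a register other than x16/x17 */
            return !bt0;    /* SCTLR_EL1.BT0 set => Branch Target exception */
        }
        return true;        /* BTYPE 0 (no indirect branch), 1, or 2 */
    }

Hence the expected values in main(): only the BTYPE_3 sequence should
fault (and be skipped by the SIGILL handler) once the patch sets
SCTLR_EL1.BT0 at reset for linux-user.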
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
448
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
28
index XXXXXXX..XXXXXXX 100644
449
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
450
--- a/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
451
+++ b/target/arm/cpu.h
31
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
452
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
32
#define HCR_RW (1ULL << 31)
453
return kvmid;
33
#define HCR_CD (1ULL << 32)
454
}
34
#define HCR_ID (1ULL << 33)
455
35
+#define HCR_E2H (1ULL << 34)
456
-/* ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
36
+/*
457
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
37
+ * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
458
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
38
+ * HCR_MASK and then clear it again if the feature bit is not set in
459
- * TCG can assume the value to be constant (ie load at translate time)
39
+ * hcr_write().
460
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
40
+ */
461
- * indicates that the TB should not be ended after a write to this register
41
#define HCR_MASK ((1ULL << 34) - 1)
462
- * (the default is that the TB ends after cp writes). OVERRIDE permits
42
463
- * a register definition to override a previous definition for the
43
#define SCR_NS (1U << 0)
464
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu);
465
- * old must have the OVERRIDE bit set.
45
# define TARGET_VIRT_ADDR_SPACE_BITS 32
466
- * ALIAS indicates that this register is an alias view of some underlying
46
#endif
467
- * state which is also visible via another register, and that the other
47
468
- * register is handling migration and reset; registers marked ALIAS will not be
48
+/**
469
- * migrated but may have their state set by syncing of register state from KVM.
49
+ * arm_hcr_el2_imo(): Return the effective value of HCR_EL2.IMO.
470
- * NO_RAW indicates that this register has no underlying state and does not
50
+ * Depending on the values of HCR_EL2.E2H and TGE, this may be
471
- * support raw access for state saving/loading; it will not be used for either
51
+ * "behaves as 1 for all purposes other than direct read/write" or
472
- * migration or KVM state synchronization. (Typically this is for "registers"
52
+ * "behaves as 0 for all purposes other than direct read/write"
473
- * which are actually used as instructions for cache maintenance and so on.)
53
+ */
474
- * IO indicates that this register does I/O and therefore its accesses
54
+static inline bool arm_hcr_el2_imo(CPUARMState *env)
475
- * need to be marked with gen_io_start() and also end the TB. In particular,
55
+{
476
- * registers which implement clocks or timers require this.
56
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
477
- * RAISES_EXC is for when the read or write hook might raise an exception;
57
+ case HCR_TGE:
478
- * the generated code will synchronize the CPU state before calling the hook
58
+ return true;
479
- * so that it is safe for the hook to call raise_exception().
59
+ case HCR_TGE | HCR_E2H:
480
- * NEWEL is for writes to registers that might change the exception
60
+ return false;
481
- * level - typically on older ARM chips. For those cases we need to
61
+ default:
482
- * re-read the new el when recomputing the translation flags.
62
+ return env->cp15.hcr_el2 & HCR_IMO;
483
- */
63
+ }
484
-#define ARM_CP_SPECIAL 0x0001
64
+}
485
-#define ARM_CP_CONST 0x0002
65
+
486
-#define ARM_CP_64BIT 0x0004
66
+/**
487
-#define ARM_CP_SUPPRESS_TB_END 0x0008
67
+ * arm_hcr_el2_fmo(): Return the effective value of HCR_EL2.FMO.
488
-#define ARM_CP_OVERRIDE 0x0010
68
+ */
489
-#define ARM_CP_ALIAS 0x0020
69
+static inline bool arm_hcr_el2_fmo(CPUARMState *env)
490
-#define ARM_CP_IO 0x0040
70
+{
491
-#define ARM_CP_NO_RAW 0x0080
71
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
492
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
72
+ case HCR_TGE:
493
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
73
+ return true;
494
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
74
+ case HCR_TGE | HCR_E2H:
495
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
75
+ return false;
496
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
76
+ default:
497
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
77
+ return env->cp15.hcr_el2 & HCR_FMO;
498
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
78
+ }
499
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
79
+}
500
-#define ARM_CP_FPU 0x1000
80
+
501
-#define ARM_CP_SVE 0x2000
81
+/**
502
-#define ARM_CP_NO_GDB 0x4000
82
+ * arm_hcr_el2_amo(): Return the effective value of HCR_EL2.AMO.
503
-#define ARM_CP_RAISES_EXC 0x8000
83
+ */
504
-#define ARM_CP_NEWEL 0x10000
84
+static inline bool arm_hcr_el2_amo(CPUARMState *env)
505
-/* Used only as a terminator for ARMCPRegInfo lists */
85
+{
506
-#define ARM_CP_SENTINEL 0xfffff
86
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
507
-/* Mask of only the flag bits in a type field */
87
+ case HCR_TGE:
508
-#define ARM_CP_FLAG_MASK 0x1f0ff
88
+ return true;
509
-
89
+ case HCR_TGE | HCR_E2H:
510
-/* Valid values for ARMCPRegInfo state field, indicating which of
90
+ return false;
511
- * the AArch32 and AArch64 execution states this register is visible in.
91
+ default:
512
- * If the reginfo doesn't explicitly specify then it is AArch32 only.
92
+ return env->cp15.hcr_el2 & HCR_AMO;
513
- * If the reginfo is declared to be visible in both states then a second
93
+ }
514
- * reginfo is synthesised for the AArch32 view of the AArch64 register,
94
+}
515
- * such that the AArch32 view is the lower 32 bits of the AArch64 one.
95
+
516
- * Note that we rely on the values of these enums as we iterate through
96
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
517
- * the various states in some places.
97
unsigned int target_el)
518
- */
519
-enum {
520
- ARM_CP_STATE_AA32 = 0,
521
- ARM_CP_STATE_AA64 = 1,
522
- ARM_CP_STATE_BOTH = 2,
523
-};
524
-
525
-/* ARM CP register secure state flags. These flags identify security state
526
- * attributes for a given CP register entry.
527
- * The existence of both or neither secure and non-secure flags indicates that
528
- * the register has both a secure and non-secure hash entry. A single one of
529
- * these flags causes the register to only be hashed for the specified
530
- * security state.
531
- * Although definitions may have any combination of the S/NS bits, each
532
- * registered entry will only have one to identify whether the entry is secure
533
- * or non-secure.
534
- */
535
-enum {
536
- ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
537
- ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
538
-};
539
-
540
-/* Return true if cptype is a valid type field. This is used to try to
541
- * catch errors where the sentinel has been accidentally left off the end
542
- * of a list of registers.
543
- */
544
-static inline bool cptype_valid(int cptype)
545
-{
546
- return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
547
- || ((cptype & ARM_CP_SPECIAL) &&
548
- ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
549
-}
550
-
551
-/* Access rights:
552
- * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
553
- * defines as PL0 (user), PL1 (fiq/irq/svc/abt/und/sys, ie privileged), and
554
- * PL2 (hyp). The other level which has Read and Write bits is Secure PL1
555
- * (ie any of the privileged modes in Secure state, or Monitor mode).
556
- * If a register is accessible in one privilege level it's always accessible
557
- * in higher privilege levels too. Since "Secure PL1" also follows this rule
558
- * (ie anything visible in PL2 is visible in S-PL1, some things are only
559
- * visible in S-PL1) but "Secure PL1" is a bit of a mouthful, we bend the
560
- * terminology a little and call this PL3.
561
- * In AArch64 things are somewhat simpler as the PLx bits line up exactly
562
- * with the ELx exception levels.
563
- *
564
- * If access permissions for a register are more complex than can be
565
- * described with these bits, then use a laxer set of restrictions, and
566
- * do the more restrictive/complex check inside a helper function.
567
- */
568
-#define PL3_R 0x80
569
-#define PL3_W 0x40
570
-#define PL2_R (0x20 | PL3_R)
571
-#define PL2_W (0x10 | PL3_W)
572
-#define PL1_R (0x08 | PL2_R)
573
-#define PL1_W (0x04 | PL2_W)
574
-#define PL0_R (0x02 | PL1_R)
575
-#define PL0_W (0x01 | PL1_W)
576
-
577
-/*
578
- * For user-mode some registers are accessible to EL0 via a kernel
579
- * trap-and-emulate ABI. In this case we define the read permissions
580
- * as actually being PL0_R. However some bits of any given register
581
- * may still be masked.
582
- */
583
-#ifdef CONFIG_USER_ONLY
584
-#define PL0U_R PL0_R
585
-#else
586
-#define PL0U_R PL1_R
587
-#endif
588
-
589
-#define PL3_RW (PL3_R | PL3_W)
590
-#define PL2_RW (PL2_R | PL2_W)
591
-#define PL1_RW (PL1_R | PL1_W)
592
-#define PL0_RW (PL0_R | PL0_W)
593
-
594
/* Return the highest implemented Exception Level */
595
static inline int arm_highest_el(CPUARMState *env)
98
{
596
{
99
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
597
@@ -XXX,XX +XXX,XX @@ static inline int arm_current_el(CPUARMState *env)
100
break;
598
}
101
599
}
102
case EXCP_VFIQ:
600
103
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
601
-typedef struct ARMCPRegInfo ARMCPRegInfo;
104
- || (env->cp15.hcr_el2 & HCR_TGE)) {
602
-
105
+ if (secure || !arm_hcr_el2_fmo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
603
-typedef enum CPAccessResult {
106
/* VFIQs are only taken when hypervized and non-secure. */
604
- /* Access is permitted */
107
return false;
605
- CP_ACCESS_OK = 0,
108
}
606
- /* Access fails due to a configurable trap or enable which would
109
return !(env->daif & PSTATE_F);
607
- * result in a categorized exception syndrome giving information about
110
case EXCP_VIRQ:
608
- * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
111
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
609
- * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
112
- || (env->cp15.hcr_el2 & HCR_TGE)) {
610
- * PL1 if in EL0, otherwise to the current EL).
113
+ if (secure || !arm_hcr_el2_imo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
611
- */
114
/* VIRQs are only taken when hypervized and non-secure. */
612
- CP_ACCESS_TRAP = 1,
115
return false;
613
- /* Access fails and results in an exception syndrome 0x0 ("uncategorized").
116
}
614
- * Note that this is not a catch-all case -- the set of cases which may
117
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
615
- * result in this failure is specifically defined by the architecture.
118
* to the CPSR.F setting otherwise we further assess the state
616
- */
119
* below.
617
- CP_ACCESS_TRAP_UNCATEGORIZED = 2,
120
*/
618
- /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
121
- hcr = (env->cp15.hcr_el2 & HCR_FMO);
619
- CP_ACCESS_TRAP_EL2 = 3,
122
+ hcr = arm_hcr_el2_fmo(env);
620
- CP_ACCESS_TRAP_EL3 = 4,
123
scr = (env->cp15.scr_el3 & SCR_FIQ);
621
- /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
124
622
- CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
125
/* When EL3 is 32-bit, the SCR.FW bit controls whether the
623
- CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
126
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
624
-} CPAccessResult;
127
* when setting the target EL, so it does not have a further
625
-
128
* affect here.
626
-/* Access functions for coprocessor registers. These cannot fail and
129
*/
627
- * may not raise exceptions.
130
- hcr = (env->cp15.hcr_el2 & HCR_IMO);
628
- */
131
+ hcr = arm_hcr_el2_imo(env);
629
-typedef uint64_t CPReadFn(CPUARMState *env, const ARMCPRegInfo *opaque);
132
scr = false;
630
-typedef void CPWriteFn(CPUARMState *env, const ARMCPRegInfo *opaque,
133
break;
631
- uint64_t value);
134
default:
632
-/* Access permission check functions for coprocessor registers. */
633
-typedef CPAccessResult CPAccessFn(CPUARMState *env,
634
- const ARMCPRegInfo *opaque,
635
- bool isread);
636
-/* Hook function for register reset */
637
-typedef void CPResetFn(CPUARMState *env, const ARMCPRegInfo *opaque);
638
-
639
-#define CP_ANY 0xff
640
-
641
-/* Definition of an ARM coprocessor register */
642
-struct ARMCPRegInfo {
643
- /* Name of register (useful mainly for debugging, need not be unique) */
644
- const char *name;
645
- /* Location of register: coprocessor number and (crn,crm,opc1,opc2)
646
- * tuple. Any of crm, opc1 and opc2 may be CP_ANY to indicate a
647
- * 'wildcard' field -- any value of that field in the MRC/MCR insn
648
- * will be decoded to this register. The register read and write
649
- * callbacks will be passed an ARMCPRegInfo with the crn/crm/opc1/opc2
650
- * used by the program, so it is possible to register a wildcard and
651
- * then behave differently on read/write if necessary.
652
- * For 64 bit registers, only crm and opc1 are relevant; crn and opc2
653
- * must both be zero.
654
- * For AArch64-visible registers, opc0 is also used.
655
- * Since there are no "coprocessors" in AArch64, cp is purely used as a
656
- * way to distinguish (for KVM's benefit) guest-visible system registers
657
- * from demuxed ones provided to preserve the "no side effects on
658
- * KVM register read/write from QEMU" semantics. cp==0x13 is guest
659
- * visible (to match KVM's encoding); cp==0 will be converted to
660
- * cp==0x13 when the ARMCPRegInfo is registered, for convenience.
661
- */
662
- uint8_t cp;
663
- uint8_t crn;
664
- uint8_t crm;
665
- uint8_t opc0;
666
- uint8_t opc1;
667
- uint8_t opc2;
668
- /* Execution state in which this register is visible: ARM_CP_STATE_* */
669
- int state;
670
- /* Register type: ARM_CP_* bits/values */
671
- int type;
672
- /* Access rights: PL*_[RW] */
673
- int access;
674
- /* Security state: ARM_CP_SECSTATE_* bits/values */
675
- int secure;
676
- /* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
677
- * this register was defined: can be used to hand data through to the
678
- * register read/write functions, since they are passed the ARMCPRegInfo*.
679
- */
680
- void *opaque;
681
- /* Value of this register, if it is ARM_CP_CONST. Otherwise, if
682
- * fieldoffset is non-zero, the reset value of the register.
683
- */
684
- uint64_t resetvalue;
685
- /* Offset of the field in CPUARMState for this register.
686
- *
687
- * This is not needed if either:
688
- * 1. type is ARM_CP_CONST or one of the ARM_CP_SPECIALs
689
- * 2. both readfn and writefn are specified
690
- */
691
- ptrdiff_t fieldoffset; /* offsetof(CPUARMState, field) */
692
-
693
- /* Offsets of the secure and non-secure fields in CPUARMState for the
694
- * register if it is banked. These fields are only used during the static
695
- * registration of a register. During hashing the bank associated
696
- * with a given security state is copied to fieldoffset which is used from
697
- * there on out.
698
- *
699
- * It is expected that register definitions use either fieldoffset or
700
- * bank_fieldoffsets in the definition but not both. It is also expected
701
- * that both bank offsets are set when defining a banked register. This
702
- * use indicates that a register is banked.
703
- */
704
- ptrdiff_t bank_fieldoffsets[2];
705
-
706
- /* Function for making any access checks for this register in addition to
707
- * those specified by the 'access' permissions bits. If NULL, no extra
708
- * checks required. The access check is performed at runtime, not at
709
- * translate time.
710
- */
711
- CPAccessFn *accessfn;
712
- /* Function for handling reads of this register. If NULL, then reads
713
- * will be done by loading from the offset into CPUARMState specified
714
- * by fieldoffset.
715
- */
716
- CPReadFn *readfn;
717
- /* Function for handling writes of this register. If NULL, then writes
718
- * will be done by writing to the offset into CPUARMState specified
719
- * by fieldoffset.
720
- */
721
- CPWriteFn *writefn;
722
- /* Function for doing a "raw" read; used when we need to copy
723
- * coprocessor state to the kernel for KVM or out for
724
- * migration. This only needs to be provided if there is also a
725
- * readfn and it has side effects (for instance clear-on-read bits).
726
- */
727
- CPReadFn *raw_readfn;
728
- /* Function for doing a "raw" write; used when we need to copy KVM
729
- * kernel coprocessor state into userspace, or for inbound
730
- * migration. This only needs to be provided if there is also a
731
- * writefn and it masks out "unwritable" bits or has write-one-to-clear
732
- * or similar behaviour.
733
- */
734
- CPWriteFn *raw_writefn;
735
- /* Function for resetting the register. If NULL, then reset will be done
736
- * by writing resetvalue to the field specified in fieldoffset. If
737
- * fieldoffset is 0 then no reset will be done.
738
- */
739
- CPResetFn *resetfn;
740
-
741
- /*
742
- * "Original" writefn and readfn.
743
- * For ARMv8.1-VHE register aliases, we overwrite the read/write
744
- * accessor functions of various EL1/EL0 to perform the runtime
745
- * check for which sysreg should actually be modified, and then
746
- * forwards the operation. Before overwriting the accessors,
747
- * the original function is copied here, so that accesses that
748
- * really do go to the EL1/EL0 version proceed normally.
749
- * (The corresponding EL2 register is linked via opaque.)
750
- */
751
- CPReadFn *orig_readfn;
752
- CPWriteFn *orig_writefn;
753
-};
754
-
755
-/* Macros which are lvalues for the field in CPUARMState for the
756
- * ARMCPRegInfo *ri.
757
- */
758
-#define CPREG_FIELD32(env, ri) \
759
- (*(uint32_t *)((char *)(env) + (ri)->fieldoffset))
760
-#define CPREG_FIELD64(env, ri) \
761
- (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
762
-
763
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
764
-
765
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
766
- const ARMCPRegInfo *regs, void *opaque);
767
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
768
- const ARMCPRegInfo *regs, void *opaque);
769
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
770
-{
771
- define_arm_cp_regs_with_opaque(cpu, regs, 0);
772
-}
773
-static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
774
-{
775
- define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
776
-}
777
-const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
778
-
779
-/*
780
- * Definition of an ARM co-processor register as viewed from
781
- * userspace. This is used for presenting sanitised versions of
782
- * registers to userspace when emulating the Linux AArch64 CPU
783
- * ID/feature ABI (advertised as HWCAP_CPUID).
784
- */
785
-typedef struct ARMCPRegUserSpaceInfo {
786
- /* Name of register */
787
- const char *name;
788
-
789
- /* Is the name actually a glob pattern */
790
- bool is_glob;
791
-
792
- /* Only some bits are exported to user space */
793
- uint64_t exported_bits;
794
-
795
- /* Fixed bits are applied after the mask */
796
- uint64_t fixed_bits;
797
-} ARMCPRegUserSpaceInfo;
798
-
799
-#define REGUSERINFO_SENTINEL { .name = NULL }
800
-
801
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
802
-
803
-/* CPWriteFn that can be used to implement writes-ignored behaviour */
804
-void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
805
- uint64_t value);
806
-/* CPReadFn that can be used for read-as-zero behaviour */
807
-uint64_t arm_cp_read_zero(CPUARMState *env, const ARMCPRegInfo *ri);
808
-
809
-/* CPResetFn that does nothing, for use if no reset is required even
810
- * if fieldoffset is non zero.
811
- */
812
-void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque);
813
-
814
-/* Return true if this reginfo struct's field in the cpu state struct
815
- * is 64 bits wide.
816
- */
817
-static inline bool cpreg_field_is_64bit(const ARMCPRegInfo *ri)
818
-{
819
- return (ri->state == ARM_CP_STATE_AA64) || (ri->type & ARM_CP_64BIT);
820
-}
821
-
822
-static inline bool cp_access_ok(int current_el,
823
- const ARMCPRegInfo *ri, int isread)
824
-{
825
- return (ri->access >> ((current_el * 2) + isread)) & 1;
826
-}
827
-
828
-/* Raw read of a coprocessor register (as needed for migration, etc) */
829
-uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
830
-
831
/**
832
* write_list_to_cpustate
833
* @cpu: ARMCPU
834
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
835
index XXXXXXX..XXXXXXX 100644
836
--- a/hw/arm/pxa2xx.c
837
+++ b/hw/arm/pxa2xx.c
838
@@ -XXX,XX +XXX,XX @@
839
#include "qemu/cutils.h"
840
#include "qemu/log.h"
841
#include "qom/object.h"
842
+#include "target/arm/cpregs.h"
843
844
static struct {
845
hwaddr io_base;
846
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
847
index XXXXXXX..XXXXXXX 100644
848
--- a/hw/arm/pxa2xx_pic.c
849
+++ b/hw/arm/pxa2xx_pic.c
850
@@ -XXX,XX +XXX,XX @@
851
#include "hw/sysbus.h"
852
#include "migration/vmstate.h"
853
#include "qom/object.h"
854
+#include "target/arm/cpregs.h"
855
856
#define ICIP    0x00    /* Interrupt Controller IRQ Pending register */
857
#define ICMR    0x04    /* Interrupt Controller Mask register */
135
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@
 #include "gicv3_internal.h"
 #include "hw/irq.h"
 #include "cpu.h"
+#include "target/arm/cpregs.h"
 
 /*
  * Special case return value from hppvi_index(); must be larger than
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@
 #include "vgic_common.h"
 #include "migration/blocker.h"
 #include "qom/object.h"
+#include "target/arm/cpregs.h"
+
 
 #ifdef DEBUG_GICV3_KVM
 #define DPRINTF(fmt, ...) \
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
 #include "kvm_arm.h"
 #include "disas/capstone.h"
 #include "fpu/softfloat.h"
+#include "cpregs.h"
 
 static void arm_cpu_set_pc(CPUState *cs, vaddr value)
 {
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
 #include "hvf_arm.h"
 #include "qapi/visitor.h"
 #include "hw/qdev-properties.h"
+#include "cpregs.h"
 
 
 #ifndef CONFIG_USER_ONLY
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
 #if !defined(CONFIG_USER_ONLY)
 #include "hw/boards.h"
 #endif
+#include "cpregs.h"
 
 /* CPU models. These are not needed for the AArch64 linux-user build. */
 #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@
  */
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include "internals.h"
 #include "exec/gdbstub.h"
+#include "internals.h"
+#include "cpregs.h"
 
 typedef struct RegisterSysregXmlParam {
     CPUState *cs;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@
 #include "exec/cpu_ldst.h"
 #include "semihosting/common-semi.h"
 #endif
+#include "cpregs.h"
 
 #define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
 #define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "internals.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
+#include "cpregs.h"
 
 #define SIGNBIT (uint32_t)0x80000000
 #define SIGNBIT64 ((uint64_t)1 << 63)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@
 #include "translate.h"
 #include "internals.h"
 #include "qemu/host-utils.h"
-
 #include "semihosting/semihost.h"
 #include "exec/gen-icount.h"
-
 #include "exec/helper-proto.h"
 #include "exec/helper-gen.h"
 #include "exec/log.h"
-
+#include "cpregs.h"
 #include "translate-a64.h"
 #include "qemu/atomic128.h"
 
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/bitops.h"
 #include "arm_ldst.h"
 #include "semihosting/semihost.h"
-
 #include "exec/helper-proto.h"
 #include "exec/helper-gen.h"
-
 #include "exec/log.h"
+#include "cpregs.h"
 
 
 #define ENABLE_ARCH_4T arm_dc_feature(s, ARM_FEATURE_V4T)
--
2.25.1

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static bool icv_access(CPUARMState *env, int hcr_flags)
  * * access if NS EL1 and either IMO or FMO == 1:
  *   CTLR, DIR, PMR, RPR
  */
-    return (env->cp15.hcr_el2 & hcr_flags) && arm_current_el(env) == 1
+    bool flagmatch = ((hcr_flags & HCR_IMO) && arm_hcr_el2_imo(env)) ||
+        ((hcr_flags & HCR_FMO) && arm_hcr_el2_fmo(env));
+
+    return flagmatch && arm_current_el(env) == 1
         && !arm_is_secure_below_el3(env);
 }
 
@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
     /* No need to include !IsSecure in route_*_to_el2 as it's only
      * tested in cases where we know !IsSecure is true.
      */
-    route_fiq_to_el2 = env->cp15.hcr_el2 & HCR_FMO;
-    route_irq_to_el2 = env->cp15.hcr_el2 & HCR_IMO;
+    route_fiq_to_el2 = arm_hcr_el2_fmo(env);
+    route_irq_to_el2 = arm_hcr_el2_imo(env);
 
     switch (arm_current_el(env)) {
     case 3:
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irqfiq_access(CPUARMState *env,
     switch (el) {
     case 1:
         if (arm_is_secure_below_el3(env) ||
-            ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) == 0)) {
+            (arm_hcr_el2_imo(env) == 0 && arm_hcr_el2_fmo(env) == 0)) {
             r = CP_ACCESS_TRAP_EL3;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_dir_access(CPUARMState *env,
 static CPAccessResult gicv3_sgi_access(CPUARMState *env,
                                        const ARMCPRegInfo *ri, bool isread)
 {
-    if ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) &&
+    if ((arm_hcr_el2_imo(env) || arm_hcr_el2_fmo(env)) &&
         arm_current_el(env) == 1 && !arm_is_secure_below_el3(env)) {
         /* Takes priority over a possible EL3 trap */
         return CP_ACCESS_TRAP_EL2;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_fiq_access(CPUARMState *env,
     if (env->cp15.scr_el3 & SCR_FIQ) {
         switch (el) {
         case 1:
-            if (arm_is_secure_below_el3(env) ||
-                ((env->cp15.hcr_el2 & HCR_FMO) == 0)) {
+            if (arm_is_secure_below_el3(env) || !arm_hcr_el2_fmo(env)) {
                 r = CP_ACCESS_TRAP_EL3;
             }
             break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irq_access(CPUARMState *env,
     if (env->cp15.scr_el3 & SCR_IRQ) {
         switch (el) {
         case 1:
-            if (arm_is_secure_below_el3(env) ||
-                ((env->cp15.hcr_el2 & HCR_IMO) == 0)) {
+            if (arm_is_secure_below_el3(env) || !arm_hcr_el2_imo(env)) {
                 r = CP_ACCESS_TRAP_EL3;
             }
             break;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
     switch (excp_idx) {
     case EXCP_IRQ:
         scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
-        hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
+        hcr = arm_hcr_el2_imo(env);
         break;
     case EXCP_FIQ:
         scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
-        hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
+        hcr = arm_hcr_el2_fmo(env);
        break;
    default:
        scr = ((env->cp15.scr_el3 & SCR_EA) == SCR_EA);
-        hcr = ((env->cp15.hcr_el2 & HCR_AMO) == HCR_AMO);
+        hcr = arm_hcr_el2_amo(env);
        break;
    };
 
--
2.18.0
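The second patch above routes every open-coded HCR_EL2.{IMO,FMO,AMO} test
through one accessor per bit. The accessor bodies are not part of this
excerpt; as a sketch of the shape, here is a minimal version that only
tests the bit (names and bit positions are illustrative, and the real
helpers may fold in further state such as HCR_EL2.TGE):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative bit values; not copied from cpu.h */
    #define EX_HCR_FMO (1ULL << 3)
    #define EX_HCR_IMO (1ULL << 4)
    #define EX_HCR_AMO (1ULL << 5)

    struct example_env {
        uint64_t hcr_el2;
    };

    /* One predicate per routing bit, so every caller asks the same
     * question the same way instead of open-coding the mask test. */
    static inline bool example_hcr_el2_imo(const struct example_env *env)
    {
        return env->hcr_el2 & EX_HCR_IMO;
    }

    static inline bool example_hcr_el2_fmo(const struct example_env *env)
    {
        return env->hcr_el2 & EX_HCR_FMO;
    }

    static inline bool example_hcr_el2_amo(const struct example_env *env)
    {
        return env->hcr_el2 & EX_HCR_AMO;
    }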
From: Luc Michel <luc.michel@greensocs.com>

Add some helper functions to gic_internal.h to get or change the state
of an IRQ. When the current CPU is not a vCPU, the call is forwarded to
the GIC distributor. Otherwise, it acts on the list register matching
the IRQ in the current CPU virtual interface.

gic_clear_active can have a side effect on the distributor, even in the
vCPU case, when the corresponding LR has the HW field set.

Use those functions in the CPU interface code path to prepare for the
vCPU interface implementation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-10-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 83 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c      | 32 +++++++---------
 2 files changed, 97 insertions(+), 18 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
 #define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
 #define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
 
+#define GICH_LR_CLEAR_PENDING(entry) \
+        ((entry) &= ~(GICH_LR_STATE_PENDING << R_GICH_LR0_State_SHIFT))
+#define GICH_LR_SET_ACTIVE(entry) \
+        ((entry) |= (GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
+#define GICH_LR_CLEAR_ACTIVE(entry) \
+        ((entry) &= ~(GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
+
 /* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
  * GICv2 and GICv2 with security extensions:
  */
@@ -XXX,XX +XXX,XX @@ static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
     g_assert_not_reached();
 }
 
+static inline bool gic_test_group(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        return GICH_LR_GROUP(*entry);
+    } else {
+        return GIC_DIST_TEST_GROUP(irq, 1 << cpu);
+    }
+}
+
+static inline void gic_clear_pending(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_CLEAR_PENDING(*entry);
+    } else {
+        /* Clear pending state for both level and edge triggered
+         * interrupts. (level triggered interrupts with an active line
+         * remain pending, see gic_test_pending)
+         */
+        GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
+                                                             : (1 << cpu));
+    }
+}
+
+static inline void gic_set_active(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_SET_ACTIVE(*entry);
+    } else {
+        GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
+    }
+}
+
+static inline void gic_clear_active(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_CLEAR_ACTIVE(*entry);
+
+        if (GICH_LR_HW(*entry)) {
+            /* Hardware interrupt. We must forward the deactivation request to
+             * the distributor.
+             */
+            int phys_irq = GICH_LR_PHYS_ID(*entry);
+            int rcpu = gic_get_vcpu_real_id(cpu);
+
+            if (phys_irq < GIC_NR_SGIS || phys_irq >= GIC_MAXIRQ) {
+                /* UNPREDICTABLE behaviour, we choose to ignore the request */
+                return;
+            }
+
+            /* This is equivalent to a NS write to DIR on the physical CPU
+             * interface. Hence group0 interrupt deactivation is ignored if
+             * the GIC is secure.
+             */
+            if (!s->security_extn || GIC_DIST_TEST_GROUP(phys_irq, 1 << rcpu)) {
+                GIC_DIST_CLEAR_ACTIVE(phys_irq, 1 << rcpu);
+            }
+        }
+    } else {
+        GIC_DIST_CLEAR_ACTIVE(irq, 1 << cpu);
+    }
+}
+
+static inline int gic_get_priority(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        return GICH_LR_PRIORITY(*entry);
+    } else {
+        return GIC_DIST_GET_PRIORITY(irq, cpu);
+    }
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
     uint16_t pending_irq = s->current_pending[cpu];
 
     if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
-        int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
+        int group = gic_test_group(s, pending_irq, cpu);
+
         /* On a GIC without the security extensions, reading this register
          * behaves in the same way as a secure access to a GIC with them.
          */
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
 
     if (gic_has_groups(s) &&
         !(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
-        GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
+        gic_test_group(s, irq, cpu)) {
         bpr = s->abpr[cpu] - 1;
         assert(bpr >= 0);
     } else {
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
      */
     mask = ~0U << ((bpr & 7) + 1);
 
-    return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
+    return gic_get_priority(s, irq, cpu) & mask;
 }
 
 static void gic_activate_irq(GICState *s, int cpu, int irq)
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
     int regno = preemption_level / 32;
     int bitno = preemption_level % 32;
 
-    if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
+    if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
         s->nsapr[regno][cpu] |= (1 << bitno);
     } else {
         s->apr[regno][cpu] |= (1 << bitno);
     }
 
     s->running_priority[cpu] = prio;
-    GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
+    gic_set_active(s, irq, cpu);
 }
 
 static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         return irq;
     }
 
-    if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
+    if (gic_get_priority(s, irq, cpu) >= s->running_priority[cpu]) {
         DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
         return 1023;
     }
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         /* Clear pending flags for both level and edge triggered interrupts.
          * Level triggered IRQs will be reasserted once they become inactive.
          */
-        GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
-                                                             : cm);
+        gic_clear_pending(s, irq, cpu);
         ret = irq;
     } else {
         if (irq < GIC_NR_SGIS) {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
             src = ctz32(s->sgi_pending[irq][cpu]);
             s->sgi_pending[irq][cpu] &= ~(1 << src);
             if (s->sgi_pending[irq][cpu] == 0) {
-                GIC_DIST_CLEAR_PENDING(irq,
-                                       GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
-                                                                : cm);
+                gic_clear_pending(s, irq, cpu);
             }
             ret = irq | ((src & 0x7) << 10);
         } else {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
              * interrupts. (level triggered interrupts with an active line
              * remain pending, see gic_test_pending)
              */
-            GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
-                                                                 : cm);
+            gic_clear_pending(s, irq, cpu);
             ret = irq;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
 
 static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
 {
-    int cm = 1 << cpu;
     int group;
 
     if (irq >= s->num_irq) {
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }
 
-    group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
+    group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
 
     if (!gic_eoi_split(s, cpu, attrs)) {
         /* This is UNPREDICTABLE; we choose to ignore it */
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }
 
-    GIC_DIST_CLEAR_ACTIVE(irq, cm);
+    gic_clear_active(s, irq, cpu);
 }
 
 static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         }
     }
 
-    group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
+    group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
 
     if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
         DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
 
     /* In GICv2 the guest can choose to split priority-drop and deactivate */
     if (!gic_eoi_split(s, cpu, attrs)) {
-        GIC_DIST_CLEAR_ACTIVE(irq, cm);
+        gic_clear_active(s, irq, cpu);
     }
     gic_update(s);
 }
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Rearrange the values of the enumerators of CPAccessResult
so that we may directly extract the target el. For the two
special cases in access_check_cp_reg, use CPAccessResult.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h    | 26 ++++++++++++--------
 target/arm/op_helper.c | 56 +++++++++++++++++++++---------------------
 2 files changed, 44 insertions(+), 38 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ static inline bool cptype_valid(int cptype)
 typedef enum CPAccessResult {
     /* Access is permitted */
     CP_ACCESS_OK = 0,
 
+    /*
+     * Combined with one of the following, the low 2 bits indicate the
+     * target exception level. If 0, the exception is taken to the usual
+     * target EL (EL1 or PL1 if in EL0, otherwise to the current EL).
+     */
+    CP_ACCESS_EL_MASK = 3,
+
     /*
      * Access fails due to a configurable trap or enable which would
      * result in a categorized exception syndrome giving information about
      * the failing instruction (ie syndrome category 0x3, 0x4, 0x5, 0x6,
-     * 0xc or 0x18). The exception is taken to the usual target EL (EL1 or
-     * PL1 if in EL0, otherwise to the current EL).
+     * 0xc or 0x18).
      */
-    CP_ACCESS_TRAP = 1,
+    CP_ACCESS_TRAP = (1 << 2),
+    CP_ACCESS_TRAP_EL2 = CP_ACCESS_TRAP | 2,
+    CP_ACCESS_TRAP_EL3 = CP_ACCESS_TRAP | 3,
 
     /*
      * Access fails and results in an exception syndrome 0x0 ("uncategorized").
      * Note that this is not a catch-all case -- the set of cases which may
      * result in this failure is specifically defined by the architecture.
      */
-    CP_ACCESS_TRAP_UNCATEGORIZED = 2,
-    /* As CP_ACCESS_TRAP, but for traps directly to EL2 or EL3 */
-    CP_ACCESS_TRAP_EL2 = 3,
-    CP_ACCESS_TRAP_EL3 = 4,
-    /* As CP_ACCESS_UNCATEGORIZED, but for traps directly to EL2 or EL3 */
-    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = 5,
-    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = 6,
+    CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
+    CP_ACCESS_TRAP_UNCATEGORIZED_EL2 = CP_ACCESS_TRAP_UNCATEGORIZED | 2,
+    CP_ACCESS_TRAP_UNCATEGORIZED_EL3 = CP_ACCESS_TRAP_UNCATEGORIZED | 3,
 } CPAccessResult;
 
 typedef struct ARMCPRegInfo ARMCPRegInfo;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
                                  uint32_t isread)
 {
     const ARMCPRegInfo *ri = rip;
+    CPAccessResult res = CP_ACCESS_OK;
     int target_el;
 
     if (arm_feature(env, ARM_FEATURE_XSCALE) && ri->cp < 14
         && extract32(env->cp15.c15_cpar, ri->cp, 1) == 0) {
-        raise_exception(env, EXCP_UDEF, syndrome, exception_target_el(env));
+        res = CP_ACCESS_TRAP;
+        goto fail;
     }
 
     /*
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
     mask &= ~((1 << 4) | (1 << 14));
 
     if (env->cp15.hstr_el2 & mask) {
-        target_el = 2;
-        goto exept;
+        res = CP_ACCESS_TRAP_EL2;
+        goto fail;
     }
 
-    if (!ri->accessfn) {
+    if (ri->accessfn) {
+        res = ri->accessfn(env, ri, isread);
+    }
+    if (likely(res == CP_ACCESS_OK)) {
         return;
     }
 
-    switch (ri->accessfn(env, ri, isread)) {
-    case CP_ACCESS_OK:
-        return;
+ fail:
+    switch (res & ~CP_ACCESS_EL_MASK) {
     case CP_ACCESS_TRAP:
-        target_el = exception_target_el(env);
-        break;
-    case CP_ACCESS_TRAP_EL2:
-        /* Requesting a trap to EL2 when we're in EL3 is
-         * a bug in the access function.
-         */
-        assert(arm_current_el(env) != 3);
-        target_el = 2;
-        break;
-    case CP_ACCESS_TRAP_EL3:
-        target_el = 3;
         break;
     case CP_ACCESS_TRAP_UNCATEGORIZED:
-        target_el = exception_target_el(env);
-        syndrome = syn_uncategorized();
-        break;
-    case CP_ACCESS_TRAP_UNCATEGORIZED_EL2:
-        target_el = 2;
-        syndrome = syn_uncategorized();
-        break;
-    case CP_ACCESS_TRAP_UNCATEGORIZED_EL3:
-        target_el = 3;
         syndrome = syn_uncategorized();
         break;
     default:
         g_assert_not_reached();
     }
 
-exept:
+    target_el = res & CP_ACCESS_EL_MASK;
+    switch (target_el) {
+    case 0:
+        target_el = exception_target_el(env);
+        break;
+    case 2:
+        assert(arm_current_el(env) != 3);
+        assert(arm_is_el2_enabled(env));
+        break;
+    case 3:
+        assert(arm_feature(env, ARM_FEATURE_EL3));
+        break;
+    default:
+        /* No "direct" traps to EL1 */
+        g_assert_not_reached();
+    }
+
     raise_exception(env, EXCP_UDEF, syndrome, target_el);
 }
 
--
2.25.1
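The point of the re-encoding above is that the target EL becomes a masked
field rather than a per-enumerator switch case. A minimal standalone
sketch using the constants introduced by the patch (the helper name is
made up for illustration):

    #include <assert.h>

    enum {
        CP_ACCESS_OK                 = 0,
        CP_ACCESS_EL_MASK            = 3,
        CP_ACCESS_TRAP               = (1 << 2),
        CP_ACCESS_TRAP_EL2           = CP_ACCESS_TRAP | 2,
        CP_ACCESS_TRAP_EL3           = CP_ACCESS_TRAP | 3,
        CP_ACCESS_TRAP_UNCATEGORIZED = (2 << 2),
    };

    /* Hypothetical helper: 0 means "take to the usual target EL" */
    static int access_result_target_el(int res)
    {
        return res & CP_ACCESS_EL_MASK;
    }

    int main(void)
    {
        assert(access_result_target_el(CP_ACCESS_TRAP) == 0);
        assert(access_result_target_el(CP_ACCESS_TRAP_EL2) == 2);
        /* Masking off the EL bits recovers the trap kind */
        assert((CP_ACCESS_TRAP_EL3 & ~CP_ACCESS_EL_MASK) == CP_ACCESS_TRAP);
        return 0;
    }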
From: Luc Michel <luc.michel@greensocs.com>

In preparation for the virtualization extensions implementation,
refactor the name of the functions and macros that act on the GIC
distributor to make that fact explicit. It will be useful to
differentiate them from the ones that will act on the virtual
interfaces.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-2-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h   |  51 ++++++------
 hw/intc/arm_gic.c        | 163 +++++++++++++++++++++------------------
 hw/intc/arm_gic_common.c |   6 +-
 hw/intc/arm_gic_kvm.c    |  23 +++---
 4 files changed, 127 insertions(+), 116 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@
 
 #define GIC_BASE_IRQ 0
 
-#define GIC_SET_ENABLED(irq, cm) s->irq_state[irq].enabled |= (cm)
-#define GIC_CLEAR_ENABLED(irq, cm) s->irq_state[irq].enabled &= ~(cm)
-#define GIC_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
-#define GIC_SET_PENDING(irq, cm) s->irq_state[irq].pending |= (cm)
-#define GIC_CLEAR_PENDING(irq, cm) s->irq_state[irq].pending &= ~(cm)
-#define GIC_SET_ACTIVE(irq, cm) s->irq_state[irq].active |= (cm)
-#define GIC_CLEAR_ACTIVE(irq, cm) s->irq_state[irq].active &= ~(cm)
-#define GIC_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
-#define GIC_SET_MODEL(irq) s->irq_state[irq].model = true
-#define GIC_CLEAR_MODEL(irq) s->irq_state[irq].model = false
-#define GIC_TEST_MODEL(irq) s->irq_state[irq].model
-#define GIC_SET_LEVEL(irq, cm) s->irq_state[irq].level |= (cm)
-#define GIC_CLEAR_LEVEL(irq, cm) s->irq_state[irq].level &= ~(cm)
-#define GIC_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
-#define GIC_SET_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = true
-#define GIC_CLEAR_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = false
-#define GIC_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
-#define GIC_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ? \
+#define GIC_DIST_SET_ENABLED(irq, cm) (s->irq_state[irq].enabled |= (cm))
+#define GIC_DIST_CLEAR_ENABLED(irq, cm) (s->irq_state[irq].enabled &= ~(cm))
+#define GIC_DIST_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
+#define GIC_DIST_SET_PENDING(irq, cm) (s->irq_state[irq].pending |= (cm))
+#define GIC_DIST_CLEAR_PENDING(irq, cm) (s->irq_state[irq].pending &= ~(cm))
+#define GIC_DIST_SET_ACTIVE(irq, cm) (s->irq_state[irq].active |= (cm))
+#define GIC_DIST_CLEAR_ACTIVE(irq, cm) (s->irq_state[irq].active &= ~(cm))
+#define GIC_DIST_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
+#define GIC_DIST_SET_MODEL(irq) (s->irq_state[irq].model = true)
+#define GIC_DIST_CLEAR_MODEL(irq) (s->irq_state[irq].model = false)
+#define GIC_DIST_TEST_MODEL(irq) (s->irq_state[irq].model)
+#define GIC_DIST_SET_LEVEL(irq, cm) (s->irq_state[irq].level |= (cm))
+#define GIC_DIST_CLEAR_LEVEL(irq, cm) (s->irq_state[irq].level &= ~(cm))
+#define GIC_DIST_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
+#define GIC_DIST_SET_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger = true)
+#define GIC_DIST_CLEAR_EDGE_TRIGGER(irq) \
+    (s->irq_state[irq].edge_trigger = false)
+#define GIC_DIST_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
+#define GIC_DIST_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ? \
                                          s->priority1[irq][cpu] : \
                                          s->priority2[(irq) - GIC_INTERNAL])
-#define GIC_TARGET(irq) s->irq_target[irq]
-#define GIC_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
-#define GIC_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
-#define GIC_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
+#define GIC_DIST_TARGET(irq) (s->irq_target[irq])
+#define GIC_DIST_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
+#define GIC_DIST_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
+#define GIC_DIST_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
 
 #define GICD_CTLR_EN_GRP0 (1U << 0)
 #define GICD_CTLR_EN_GRP1 (1U << 1)
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
 void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
 void gic_update(GICState *s);
 void gic_init_irqs_and_distributor(GICState *s);
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
-                      MemTxAttrs attrs);
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
+                           MemTxAttrs attrs);
 
 static inline bool gic_test_pending(GICState *s, int irq, int cm)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
      * GICD_ISPENDR to set the state pending.
      */
     return (s->irq_state[irq].pending & cm) ||
-        (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_LEVEL(irq, cm));
+        (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_LEVEL(irq, cm));
 }
 
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
         best_prio = 0x100;
         best_irq = 1023;
         for (irq = 0; irq < s->num_irq; irq++) {
-            if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
-                (!GIC_TEST_ACTIVE(irq, cm)) &&
-                (irq < GIC_INTERNAL || GIC_TARGET(irq) & cm)) {
-                if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
-                    best_prio = GIC_GET_PRIORITY(irq, cpu);
+            if (GIC_DIST_TEST_ENABLED(irq, cm) &&
+                gic_test_pending(s, irq, cm) &&
+                (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
+                (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
+                if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
+                    best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
                     best_irq = irq;
                 }
             }
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
     if (best_prio < s->priority_mask[cpu]) {
         s->current_pending[cpu] = best_irq;
         if (best_prio < s->running_priority[cpu]) {
-            int group = GIC_TEST_GROUP(best_irq, cm);
+            int group = GIC_DIST_TEST_GROUP(best_irq, cm);
 
             if (extract32(s->ctlr, group, 1) &&
                 extract32(s->cpu_ctlr[cpu], group, 1)) {
@@ -XXX,XX +XXX,XX @@ void gic_set_pending_private(GICState *s, int cpu, int irq)
     }
 
     DPRINTF("Set %d pending cpu %d\n", irq, cpu);
-    GIC_SET_PENDING(irq, cm);
+    GIC_DIST_SET_PENDING(irq, cm);
     gic_update(s);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
     if (level) {
-        GIC_SET_LEVEL(irq, cm);
-        if (GIC_TEST_EDGE_TRIGGER(irq) || GIC_TEST_ENABLED(irq, cm)) {
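Throughout the renamed macros above, `cm` is a bitmask of CPU interfaces,
not a CPU index, which is why callers pass `1 << cpu` (or ALL_CPU_MASK for
operations on shared interrupts). A standalone sketch of that convention
with a simplified per-IRQ state (all names here are illustrative, not
QEMU's):

    #include <stdbool.h>
    #include <stdio.h>

    #define EXAMPLE_NUM_CPUS 4
    #define EXAMPLE_ALL_CPU_MASK ((1 << EXAMPLE_NUM_CPUS) - 1)

    /* One bit per CPU interface in each field */
    struct example_irq_state {
        unsigned enabled;
        unsigned pending;
    };

    int main(void)
    {
        struct example_irq_state irq = { 0, 0 };

        irq.enabled |= 1 << 2;                /* enable for CPU 2 only */
        irq.pending |= EXAMPLE_ALL_CPU_MASK;  /* pending on every CPU */

        for (int cpu = 0; cpu < EXAMPLE_NUM_CPUS; cpu++) {
            bool deliver = (irq.enabled & (1 << cpu)) &&
                           (irq.pending & (1 << cpu));
            printf("cpu %d: %s\n", cpu, deliver ? "deliver" : "masked");
        }
        return 0;
    }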
From: Richard Henderson <richard.henderson@linaro.org>

Remove a possible source of error by removing REGINFO_SENTINEL
and using ARRAY_SIZE (conveniently hidden inside a macro) to
find the end of the set of regs being registered or modified.

The space saved by not having the extra array element reduces
the executable's .data.rel.ro section by about 9k.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h       |  53 +++++++---------
 hw/arm/pxa2xx.c           |   1 -
 hw/arm/pxa2xx_pic.c       |   1 -
 hw/intc/arm_gicv3_cpuif.c |   5 --
 hw/intc/arm_gicv3_kvm.c   |   1 -
 target/arm/cpu64.c        |   1 -
 target/arm/cpu_tcg.c      |   4 --
 target/arm/helper.c       | 111 ++++++------------------------
 8 files changed, 48 insertions(+), 129 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@
 #define ARM_CP_NO_GDB 0x4000
 #define ARM_CP_RAISES_EXC 0x8000
 #define ARM_CP_NEWEL 0x10000
-/* Used only as a terminator for ARMCPRegInfo lists */
-#define ARM_CP_SENTINEL 0xfffff
 /* Mask of only the flag bits in a type field */
 #define ARM_CP_FLAG_MASK 0x1f0ff
 
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
 };
 
-/*
- * Return true if cptype is a valid type field. This is used to try to
- * catch errors where the sentinel has been accidentally left off the end
- * of a list of registers.
- */
-static inline bool cptype_valid(int cptype)
-{
-    return ((cptype & ~ARM_CP_FLAG_MASK) == 0)
-        || ((cptype & ARM_CP_SPECIAL) &&
-            ((cptype & ~ARM_CP_FLAG_MASK) <= ARM_LAST_SPECIAL));
-}
-
 /*
  * Access rights:
  * We define bits for Read and Write access for what rev C of the v7-AR ARM ARM
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
 #define CPREG_FIELD64(env, ri) \
     (*(uint64_t *)((char *)(env) + (ri)->fieldoffset))
 
-#define REGINFO_SENTINEL { .type = ARM_CP_SENTINEL }
+void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu, const ARMCPRegInfo *reg,
+                                       void *opaque);
 
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
-                                    const ARMCPRegInfo *regs, void *opaque);
-void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
-                                       const ARMCPRegInfo *regs, void *opaque);
-static inline void define_arm_cp_regs(ARMCPU *cpu, const ARMCPRegInfo *regs)
-{
-    define_arm_cp_regs_with_opaque(cpu, regs, 0);
-}
 static inline void define_one_arm_cp_reg(ARMCPU *cpu, const ARMCPRegInfo *regs)
 {
-    define_one_arm_cp_reg_with_opaque(cpu, regs, 0);
+    define_one_arm_cp_reg_with_opaque(cpu, regs, NULL);
 }
+
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
+                                        void *opaque, size_t len);
+
+#define define_arm_cp_regs_with_opaque(CPU, REGS, OPAQUE)               \
+    do {                                                                \
+        QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0);                       \
+        define_arm_cp_regs_with_opaque_len(CPU, REGS, OPAQUE,           \
+                                           ARRAY_SIZE(REGS));           \
+    } while (0)
+
+#define define_arm_cp_regs(CPU, REGS) \
+    define_arm_cp_regs_with_opaque(CPU, REGS, NULL)
+
 const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp);
 
 /*
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCPRegUserSpaceInfo {
     uint64_t fixed_bits;
 } ARMCPRegUserSpaceInfo;
 
-#define REGUSERINFO_SENTINEL { .name = NULL }
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
+                                 const ARMCPRegUserSpaceInfo *mods,
+                                 size_t mods_len);
 
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods);
+#define modify_arm_cp_regs(REGS, MODS)                                  \
+    do {                                                                \
+        QEMU_BUILD_BUG_ON(ARRAY_SIZE(REGS) == 0);                       \
+        QEMU_BUILD_BUG_ON(ARRAY_SIZE(MODS) == 0);                       \
+        modify_arm_cp_regs_with_len(REGS, ARRAY_SIZE(REGS),             \
+                                    MODS, ARRAY_SIZE(MODS));            \
+    } while (0)
 
 /* CPWriteFn that can be used to implement writes-ignored behaviour */
 void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx.c
+++ b/hw/arm/pxa2xx.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pxa_cp_reginfo[] = {
     { .name = "PWRMODE", .cp = 14, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 0,
       .access = PL1_RW, .type = ARM_CP_IO,
       .readfn = arm_cp_read_zero, .writefn = pxa2xx_pwrmode_write },
-    REGINFO_SENTINEL
 };
 
 static void pxa2xx_setup_cp14(PXA2xxState *s)
diff --git a/hw/arm/pxa2xx_pic.c b/hw/arm/pxa2xx_pic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/pxa2xx_pic.c
+++ b/hw/arm/pxa2xx_pic.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pxa_pic_cp_reginfo[] = {
     REGINFO_FOR_PIC_CP("ICLR2", 8),
     REGINFO_FOR_PIC_CP("ICFP2", 9),
     REGINFO_FOR_PIC_CP("ICPR2", 0xa),
-    REGINFO_SENTINEL
 };
 
 static const MemoryRegionOps pxa2xx_pic_ops = {
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
       .readfn = icc_igrpen1_el3_read,
       .writefn = icc_igrpen1_el3_write,
     },
-    REGINFO_SENTINEL
 };
 
 static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_hcr_reginfo[] = {
       .readfn = ich_vmcr_read,
       .writefn = ich_vmcr_write,
     },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo gicv3_cpuif_ich_apxr1_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_ich_apxr1_reginfo[] = {
       .readfn = ich_ap_read,
       .writefn = ich_ap_write,
     },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo gicv3_cpuif_ich_apxr23_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_ich_apxr23_reginfo[] = {
       .readfn = ich_ap_read,
       .writefn = ich_ap_write,
     },
-    REGINFO_SENTINEL
 };
 
 static void gicv3_cpuif_el_change_hook(ARMCPU *cpu, void *opaque)
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
           .readfn = ich_lr_read,
           .writefn = ich_lr_write,
         },
-        REGINFO_SENTINEL
     };
     define_arm_cp_regs(cpu, lr_regset);
 }
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
      */
       .resetfn = arm_gicv3_icc_reset,
     },
-    REGINFO_SENTINEL
 };
 
 /**
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
     { .name = "L2MERRSR",
       .cp = 15, .opc1 = 3, .crm = 15,
       .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static void aarch64_a57_initfn(Object *obj)
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa8_cp_reginfo[] = {
       .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
     { .name = "L2AUXCR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 2,
       .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static void cortex_a8_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa9_cp_reginfo[] = {
       .access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
     { .name = "TLB_ATTR", .cp = 15, .crn = 15, .crm = 7, .opc1 = 5, .opc2 = 2,
       .access = PL1_RW, .resetvalue = 0, .type = ARM_CP_CONST },
-    REGINFO_SENTINEL
 };
 
 static void cortex_a9_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexa15_cp_reginfo[] = {
 #endif
     { .name = "L2ECTLR", .cp = 15, .crn = 9, .crm = 0, .opc1 = 1, .opc2 = 3,
       .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static void cortex_a7_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cortexr5_cp_reginfo[] = {
       .access = PL1_RW, .type = ARM_CP_CONST },
     { .name = "DCACHE_INVAL", .cp = 15, .opc1 = 0, .crn = 15, .crm = 5,
       .opc2 = 0, .access = PL1_W, .type = ARM_CP_NOP },
-    REGINFO_SENTINEL
 };
 
 static void cortex_r5_initfn(Object *obj)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
       .secure = ARM_CP_SECSTATE_S,
       .fieldoffset = offsetof(CPUARMState, cp15.contextidr_s),
       .resetvalue = 0, .writefn = contextidr_write, .raw_writefn = raw_write, },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo not_v8_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
     { .name = "CACHEMAINT", .cp = 15, .crn = 7, .crm = CP_ANY,
       .opc1 = 0, .opc2 = CP_ANY, .access = PL1_W,
       .type = ARM_CP_NOP | ARM_CP_OVERRIDE },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo not_v6_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
      */
     { .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
       .access = PL1_W, .type = ARM_CP_WFI },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo not_v7_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
       .opc1 = 0, .opc2 = 0, .access = PL1_RW, .type = ARM_CP_NOP },
     { .name = "NMRR", .cp = 15, .crn = 10, .crm = 2,
       .opc1 = 0, .opc2 = 1, .access = PL1_RW, .type = ARM_CP_NOP },
-    REGINFO_SENTINEL
 };
 
 static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
       .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 2, .accessfn = cpacr_access,
       .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.cpacr_el1),
       .resetfn = cpacr_reset, .writefn = cpacr_write, .readfn = cpacr_read },
-    REGINFO_SENTINEL
 };
 
 typedef struct pm_event {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
     { .name = "TLBIMVAA", .cp = 15, .opc1 = 0, .crn = 8, .crm = 7, .opc2 = 3,
       .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
       .writefn = tlbimvaa_write },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo v7mp_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7mp_cp_reginfo[] = {
     { .name = "TLBIMVAAIS", .cp = 15, .opc1 = 0, .crn = 8, .crm = 3, .opc2 = 3,
       .type = ARM_CP_NO_RAW, .access = PL1_W, .accessfn = access_ttlb,
       .writefn = tlbimvaa_is_write },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
       .writefn = pmovsset_write,
       .raw_writefn = raw_write },
-    REGINFO_SENTINEL
 };
 
 static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo t2ee_cp_reginfo[] = {
     { .name = "TEEHBR", .cp = 14, .crn = 1, .crm = 0, .opc1 = 6, .opc2 = 0,
       .access = PL0_RW, .fieldoffset = offsetof(CPUARMState, teehbr),
       .accessfn = teehbr_access, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo v6k_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
       .bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidrprw_s),
                              offsetoflow32(CPUARMState, cp15.tpidrprw_ns) },
       .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 #ifndef CONFIG_USER_ONLY
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_SEC].cval),
       .writefn = gt_sec_cval_write, .raw_writefn = raw_write,
     },
-    REGINFO_SENTINEL
 };
 
 static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .access = PL0_R, .type = ARM_CP_NO_RAW | ARM_CP_IO,
       .readfn = gt_virt_cnt_read,
     },
-    REGINFO_SENTINEL
 };
 
 #endif
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vapa_cp_reginfo[] = {
       .access = PL1_W, .accessfn = ats_access,
       .writefn = ats_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
 #endif
-    REGINFO_SENTINEL
 };
 
 /* Return basic MPU access permission bits. */
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]),
       .writefn = pmsav7_rgnr_write,
       .resetfn = arm_cp_reset_ignore },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav5_cp_reginfo[] = {
     { .name = "946_PRBS7", .cp = 15, .crn = 6, .crm = 7, .opc1 = 0,
       .opc2 = CP_ANY, .access = PL1_RW, .resetvalue = 0,
       .fieldoffset = offsetof(CPUARMState, cp15.c6_region[7]) },
-    REGINFO_SENTINEL
 };
 
 static void vmsa_ttbcr_raw_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
       .access = PL1_RW, .accessfn = access_tvm_trvm,
       .fieldoffset = offsetof(CPUARMState, cp15.far_el[1]),
       .resetvalue = 0, },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo vmsa_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
       /* No offsetoflow32 -- pass the entire TCR to writefn/raw_writefn. */
       .bank_fieldoffsets = { offsetof(CPUARMState, cp15.tcr_el[3]),
                              offsetof(CPUARMState, cp15.tcr_el[1])} },
-    REGINFO_SENTINEL
 };
 
 /* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
     { .name = "C9", .cp = 15, .crn = 9,
       .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_RW,
       .type = ARM_CP_CONST | ARM_CP_OVERRIDE, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static void xscale_cpar_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
     { .name = "XSCALE_UNLOCK_DCACHE",
       .cp = 15, .opc1 = 0, .crn = 9, .crm = 2, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NOP },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
       .access = PL1_RW,
       .type = ARM_CP_CONST | ARM_CP_NO_RAW | ARM_CP_OVERRIDE,
       .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_dirty_status_cp_reginfo[] = {
     { .name = "CDSR", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 6,
       .access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
       .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
       .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
     { .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
       .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
     { .name = "TCI_DCACHE", .cp = 15, .crn = 7, .crm = 14, .opc1 = 0, .opc2 = 3,
       .access = PL0_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
       .resetvalue = (1 << 30) },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo strongarm_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo strongarm_cp_reginfo[] = {
       .crm = CP_ANY, .opc1 = CP_ANY, .opc2 = CP_ANY,
       .access = PL1_RW, .resetvalue = 0,
       .type = ARM_CP_CONST | ARM_CP_OVERRIDE | ARM_CP_NO_RAW },
-    REGINFO_SENTINEL
 };
 
 static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lpae_cp_reginfo[] = {
       .bank_fieldoffsets = { offsetof(CPUARMState, cp15.ttbr1_s),
                              offsetof(CPUARMState, cp15.ttbr1_ns) },
       .writefn = vmsa_ttbr_write, },
-    REGINFO_SENTINEL
 };
 
 static uint64_t aa64_fpcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .access = PL1_RW, .accessfn = access_trap_aa32s_el1,
       .writefn = sdcr_write,
       .fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
-    REGINFO_SENTINEL
 };
 
 /* Used to describe the behaviour of EL2 regs when EL2 does not exist. */
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
       .type = ARM_CP_CONST,
       .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
       .access = PL2_RW, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 /* Ditto, but for registers which exist in ARMv8 but not v7 */
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
       .access = PL2_RW,
       .type = ARM_CP_CONST, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
       .cp = 15, .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
       .access = PL2_RW,
       .fieldoffset = offsetof(CPUARMState, cp15.hstr_el2) },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
       .access = PL2_RW,
       .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
       .writefn = hcr_writehigh },
-    REGINFO_SENTINEL
 };
 
 static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
       .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2,
       .access = PL2_RW, .accessfn = sel2_access,
       .fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) },
-    REGINFO_SENTINEL
 };
 
 static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
       .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 7, .opc2 = 5,
       .access = PL3_W, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae3_write },
-    REGINFO_SENTINEL
 };
 
 #ifndef CONFIG_USER_ONLY
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
       .cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
       .access = PL1_RW, .accessfn = access_tda,
       .type = ARM_CP_NOP },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
       .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
     { .name = "DBGDSAR", .cp = 14, .crm = 2, .opc1 = 0,
       .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 /* Return the exception level to which exceptions should be taken
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
               .fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
               .writefn = dbgbcr_write, .raw_writefn = raw_write
             },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, dbgregs);
     }
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
               .fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
               .writefn = dbgwcr_write, .raw_writefn = raw_write
             },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, dbgregs);
     }
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
               .type = ARM_CP_IO,
               .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
               .raw_writefn = pmevtyper_rawwrite },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, pmev_regs);
         g_free(pmevcntr_name);
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
               .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
               .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
               .resetvalue = extract64(cpu->pmceid1, 32, 32) },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, v81_pmu_regs);
     }
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = {
       .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7,
       .access = PL1_R, .accessfn = access_lor_ns,
       .type = ARM_CP_CONST, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 #ifdef TARGET_AARCH64
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pauth_reginfo[] = {
       .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
       .access = PL1_RW, .accessfn = access_pauth,
       .fieldoffset = offsetof(CPUARMState, keys.apib.hi) },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo tlbirange_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
       .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 6, .opc2 = 5,
       .access = PL3_W, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_rvae3_write },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo tlbios_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbios_reginfo[] = {
       .opc0 = 1, .opc1 = 6, .crn = 8, .crm = 1, .opc2 = 5,
       .access = PL3_W, .type = ARM_CP_NO_RAW,
       .writefn = tlbi_aa64_vae3is_write },
-    REGINFO_SENTINEL
 };
 
 static uint64_t rndr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo rndr_reginfo[] = {
       .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END | ARM_CP_IO,
       .opc0 = 3, .opc1 = 3, .crn = 2, .crm = 4, .opc2 = 1,
       .access = PL0_R, .readfn = rndr_readfn },
-    REGINFO_SENTINEL
 };
 
 #ifndef CONFIG_USER_ONLY
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpop_reg[] = {
       .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 12, .opc2 = 1,
       .access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
       .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo dcpodp_reg[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo dcpodp_reg[] = {
       .opc0 = 1, .opc1 = 3, .crn = 7, .crm = 13, .opc2 = 1,
       .access = PL0_W, .type = ARM_CP_NO_RAW | ARM_CP_SUPPRESS_TB_END,
       .accessfn = aa64_cacheop_poc_access, .writefn = dccvap_writefn },
-    REGINFO_SENTINEL
 };
 #endif /*CONFIG_USER_ONLY*/
 
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_reginfo[] = {
     { .name = "DC_CIGDSW", .state = ARM_CP_STATE_AA64,
       .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 14, .opc2 = 6,
       .type = ARM_CP_NOP, .access = PL1_W, .accessfn = access_tsw },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo mte_tco_ro_reginfo[] = {
     { .name = "TCO", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
       .type = ARM_CP_CONST, .access = PL0_RW, },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
       .accessfn = aa64_zva_access,
 #endif
     },
-    REGINFO_SENTINEL
 };
 
 #endif
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo predinv_reginfo[] = {
     { .name = "CPPRCTX", .state = ARM_CP_STATE_AA32,
       .cp = 15, .opc1 = 0, .crn = 7, .crm = 3, .opc2 = 7,
       .type = ARM_CP_NOP, .access = PL0_W, .accessfn = access_predinv },
-    REGINFO_SENTINEL
 };
 
 static uint64_t ccsidr2_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ccsidr2_reginfo[] = {
       .access = PL1_R,
       .accessfn = access_aa64_tid2,
       .readfn = ccsidr2_read, .type = ARM_CP_NO_RAW },
-    REGINFO_SENTINEL
 };
 
 static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
       .cp = 14, .crn = 2, .crm = 0, .opc1 = 7, .opc2 = 0,
       .accessfn = access_joscr_jmcr,
       .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo vhe_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
       .access = PL2_RW, .accessfn = e2h_access,
       .writefn = gt_virt_cval_write, .raw_writefn = raw_write },
 #endif
-    REGINFO_SENTINEL
 };
 
 #ifndef CONFIG_USER_ONLY
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
       .opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
       .writefn = ats_write64 },
-    REGINFO_SENTINEL
 };
 
 static const ARMCPRegInfo ats1cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1cp_reginfo[] = {
       .cp = 15, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
       .access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
       .writefn = ats_write },
-    REGINFO_SENTINEL
 };
 #endif
 
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo actlr2_hactlr2_reginfo[] = {
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 3,
       .access = PL2_RW, .type = ARM_CP_CONST,
       .resetvalue = 0 },
-    REGINFO_SENTINEL
 };
 
 void register_cp_regs_for_features(ARMCPU *cpu)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL1_R, .type = ARM_CP_CONST,
               .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar6 },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, v6_idregs);
         define_arm_cp_regs(cpu, v6_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
               .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
               .resetvalue = cpu->pmceid1 },
-            REGINFO_SENTINEL
         };
 #ifdef CONFIG_USER_ONLY
         ARMCPRegUserSpaceInfo v8_user_idregs[] = {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .exported_bits = 0x000000f0ffffffff },
             { .name = "ID_AA64ISAR*_EL1_RESERVED",
               .is_glob = true },
-            REGUSERINFO_SENTINEL
         };
         modify_arm_cp_regs(v8_idregs, v8_user_idregs);
 #endif
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL2_RW,
               .resetvalue = vmpidr_def,
               .fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, vpidr_regs);
         define_arm_cp_regs(cpu, el2_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL2_RW, .accessfn = access_el3_aa32ns,
               .type = ARM_CP_NO_RAW,
               .writefn = arm_cp_write_ignore, .readfn = mpidr_read },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, vpidr_regs);
         define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .raw_writefn = raw_write, .writefn = sctlr_write,
               .fieldoffset = offsetof(CPUARMState, cp15.sctlr_el[3]),
               .resetvalue = cpu->reset_sctlr },
-            REGINFO_SENTINEL
         };
 
         define_arm_cp_regs(cpu, el3_regs);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "DUMMY",
               .cp = 15, .crn = 0, .crm = 7, .opc1 = 0, .opc2 = CP_ANY,
               .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },
-            REGINFO_SENTINEL
         };
         ARMCPRegInfo id_v8_midr_cp_reginfo[] = {
             { .name = "MIDR_EL1", .state = ARM_CP_STATE_BOTH,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL1_R,
               .accessfn = access_aa64_tid1,
               .type = ARM_CP_CONST, .resetvalue = cpu->revidr },
-            REGINFO_SENTINEL
         };
         ARMCPRegInfo id_cp_reginfo[] = {
             /* These are common to v8 and pre-v8 */
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .access = PL1_R,
               .accessfn = access_aa32_tid1,
               .type = ARM_CP_CONST, .resetvalue = 0 },
-            REGINFO_SENTINEL
         };
         /* TLBTR is specific to VMSA */
         ARMCPRegInfo id_tlbtr_reginfo = {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "MIDR_EL1",
               .exported_bits = 0x00000000ffffffff },
             { .name = "REVIDR_EL1" },
-            REGUSERINFO_SENTINEL
         };
         modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
 #endif
         if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
             arm_feature(env, ARM_FEATURE_STRONGARM)) {
-            ARMCPRegInfo *r;
+            size_t i;
             /* Register the blanket "writes ignored" value first to cover the
              * whole space. Then update the specific ID registers to allow write
              * access, so that they ignore writes rather than causing them to
              * UNDEF.
              */
             define_one_arm_cp_reg(cpu, &crn0_wi_reginfo);
-            for (r = id_pre_v8_midr_cp_reginfo;
-                 r->type != ARM_CP_SENTINEL; r++) {
-                r->access = PL1_RW;
+            for (i = 0; i < ARRAY_SIZE(id_pre_v8_midr_cp_reginfo); ++i) {
+                id_pre_v8_midr_cp_reginfo[i].access = PL1_RW;
             }
-            for (r = id_cp_reginfo; r->type != ARM_CP_SENTINEL; r++) {
-                r->access = PL1_RW;
+            for (i = 0; i < ARRAY_SIZE(id_cp_reginfo); ++i) {
+                id_cp_reginfo[i].access = PL1_RW;
             }
             id_mpuir_reginfo.access = PL1_RW;
             id_tlbtr_reginfo.access = PL1_RW;
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "MPIDR_EL1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 5,
               .access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
-            REGINFO_SENTINEL
         };
 #ifdef CONFIG_USER_ONLY
         ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
             { .name = "MPIDR_EL1",
               .fixed_bits = 0x0000000080000000 },
-            REGUSERINFO_SENTINEL
         };
         modify_arm_cp_regs(mpidr_cp_reginfo, mpidr_user_cp_reginfo);
 #endif
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 0, .opc2 = 1,
               .access = PL3_RW, .type = ARM_CP_CONST,
               .resetvalue = 0 },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, auxcr_reginfo);
         if (cpu_isar_feature(aa32_ac2, cpu)) {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .type = ARM_CP_CONST,
               .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 3, .opc2 = 0,
               .access = PL1_R, .resetvalue = cpu->reset_cbar },
-            REGINFO_SENTINEL
         };
         /* We don't implement a r/w 64 bit CBAR currently */
         assert(arm_feature(env, ARM_FEATURE_CBAR_RO));
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
               .bank_fieldoffsets = { offsetof(CPUARMState, cp15.vbar_s),
                                      offsetof(CPUARMState, cp15.vbar_ns) },
               .resetvalue = 0 },
-            REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, vbar_cp_reginfo);
     }
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
                          r->writefn);
         }
     }
-    /* Bad type field probably means missing sentinel at end of reg list */
-    assert(cptype_valid(r->type));
+
     for (crm = crmmin; crm <= crmmax; crm++) {
         for (opc1 = opc1min; opc1 <= opc1max; opc1++) {
             for (opc2 = opc2min; opc2 <= opc2max; opc2++) {
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
842
-void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
101
index XXXXXXX..XXXXXXX 100644
843
- const ARMCPRegInfo *regs, void *opaque)
102
--- a/hw/intc/arm_gic.c
844
+/* Define a whole list of registers */
103
+++ b/hw/intc/arm_gic.c
845
+void define_arm_cp_regs_with_opaque_len(ARMCPU *cpu, const ARMCPRegInfo *regs,
104
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
846
+ void *opaque, size_t len)
105
best_prio = 0x100;
106
best_irq = 1023;
107
for (irq = 0; irq < s->num_irq; irq++) {
108
- if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
109
- (!GIC_TEST_ACTIVE(irq, cm)) &&
110
- (irq < GIC_INTERNAL || GIC_TARGET(irq) & cm)) {
111
- if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
112
- best_prio = GIC_GET_PRIORITY(irq, cpu);
113
+ if (GIC_DIST_TEST_ENABLED(irq, cm) &&
114
+ gic_test_pending(s, irq, cm) &&
115
+ (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
116
+ (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
117
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
118
+ best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
119
best_irq = irq;
120
}
121
}
122
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
123
if (best_prio < s->priority_mask[cpu]) {
124
s->current_pending[cpu] = best_irq;
125
if (best_prio < s->running_priority[cpu]) {
126
- int group = GIC_TEST_GROUP(best_irq, cm);
127
+ int group = GIC_DIST_TEST_GROUP(best_irq, cm);
128
129
if (extract32(s->ctlr, group, 1) &&
130
extract32(s->cpu_ctlr[cpu], group, 1)) {
131
@@ -XXX,XX +XXX,XX @@ void gic_set_pending_private(GICState *s, int cpu, int irq)
132
}
133
134
DPRINTF("Set %d pending cpu %d\n", irq, cpu);
135
- GIC_SET_PENDING(irq, cm);
136
+ GIC_DIST_SET_PENDING(irq, cm);
137
gic_update(s);
138
}
139
140
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
141
int cm, int target)
142
{
847
{
143
if (level) {
848
- /* Define a whole list of registers */
144
- GIC_SET_LEVEL(irq, cm);
849
- const ARMCPRegInfo *r;
145
- if (GIC_TEST_EDGE_TRIGGER(irq) || GIC_TEST_ENABLED(irq, cm)) {
850
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
146
+ GIC_DIST_SET_LEVEL(irq, cm);
851
- define_one_arm_cp_reg_with_opaque(cpu, r, opaque);
147
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq) || GIC_DIST_TEST_ENABLED(irq, cm)) {
852
+ size_t i;
148
DPRINTF("Set %d pending mask %x\n", irq, target);
853
+ for (i = 0; i < len; ++i) {
149
- GIC_SET_PENDING(irq, target);
854
+ define_one_arm_cp_reg_with_opaque(cpu, regs + i, opaque);
150
+ GIC_DIST_SET_PENDING(irq, target);
151
}
152
} else {
153
- GIC_CLEAR_LEVEL(irq, cm);
154
+ GIC_DIST_CLEAR_LEVEL(irq, cm);
155
}
855
}
156
}
856
}
157
857
158
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_generic(GICState *s, int irq, int level,
858
@@ -XXX,XX +XXX,XX @@ void define_arm_cp_regs_with_opaque(ARMCPU *cpu,
159
int cm, int target)
859
* user-space cannot alter any values and dynamic values pertaining to
860
* execution state are hidden from user space view anyway.
861
*/
862
-void modify_arm_cp_regs(ARMCPRegInfo *regs, const ARMCPRegUserSpaceInfo *mods)
863
+void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
864
+ const ARMCPRegUserSpaceInfo *mods,
865
+ size_t mods_len)
160
{
866
{
161
if (level) {
867
- const ARMCPRegUserSpaceInfo *m;
162
- GIC_SET_LEVEL(irq, cm);
868
- ARMCPRegInfo *r;
163
+ GIC_DIST_SET_LEVEL(irq, cm);
869
-
164
DPRINTF("Set %d pending mask %x\n", irq, target);
870
- for (m = mods; m->name; m++) {
165
- if (GIC_TEST_EDGE_TRIGGER(irq)) {
871
+ for (size_t mi = 0; mi < mods_len; ++mi) {
166
- GIC_SET_PENDING(irq, target);
872
+ const ARMCPRegUserSpaceInfo *m = mods + mi;
167
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq)) {
873
GPatternSpec *pat = NULL;
168
+ GIC_DIST_SET_PENDING(irq, target);
874
+
875
if (m->is_glob) {
876
pat = g_pattern_spec_new(m->name);
169
}
877
}
170
} else {
878
- for (r = regs; r->type != ARM_CP_SENTINEL; r++) {
171
- GIC_CLEAR_LEVEL(irq, cm);
879
+ for (size_t ri = 0; ri < regs_len; ++ri) {
172
+ GIC_DIST_CLEAR_LEVEL(irq, cm);
880
+ ARMCPRegInfo *r = regs + ri;
173
}
881
+
174
}
882
if (pat && g_pattern_match_string(pat, r->name)) {
175
883
r->type = ARM_CP_CONST;
176
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
884
r->access = PL0U_R;
177
/* The first external input line is internal interrupt 32. */
178
cm = ALL_CPU_MASK;
179
irq += GIC_INTERNAL;
180
- target = GIC_TARGET(irq);
181
+ target = GIC_DIST_TARGET(irq);
182
} else {
183
int cpu;
184
irq -= (s->num_irq - GIC_INTERNAL);
185
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
186
187
assert(irq >= GIC_NR_SGIS);
188
189
- if (level == GIC_TEST_LEVEL(irq, cm)) {
190
+ if (level == GIC_DIST_TEST_LEVEL(irq, cm)) {
191
return;
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
195
uint16_t pending_irq = s->current_pending[cpu];
196
197
if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
198
- int group = GIC_TEST_GROUP(pending_irq, (1 << cpu));
199
+ int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
200
/* On a GIC without the security extensions, reading this register
201
* behaves in the same way as a secure access to a GIC with them.
202
*/
203
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
204
205
if (gic_has_groups(s) &&
206
!(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
207
- GIC_TEST_GROUP(irq, (1 << cpu))) {
208
+ GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
209
bpr = s->abpr[cpu] - 1;
210
assert(bpr >= 0);
211
} else {
212
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
213
*/
214
mask = ~0U << ((bpr & 7) + 1);
215
216
- return GIC_GET_PRIORITY(irq, cpu) & mask;
217
+ return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
218
}
219
220
static void gic_activate_irq(GICState *s, int cpu, int irq)
221
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
222
int regno = preemption_level / 32;
223
int bitno = preemption_level % 32;
224
225
- if (gic_has_groups(s) && GIC_TEST_GROUP(irq, (1 << cpu))) {
226
+ if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
227
s->nsapr[regno][cpu] |= (1 << bitno);
228
} else {
229
s->apr[regno][cpu] |= (1 << bitno);
230
}
231
232
s->running_priority[cpu] = prio;
233
- GIC_SET_ACTIVE(irq, 1 << cpu);
234
+ GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
235
}
236
237
static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
238
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
239
return irq;
240
}
241
242
- if (GIC_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
243
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
244
DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
245
return 1023;
246
}
247
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
248
/* Clear pending flags for both level and edge triggered interrupts.
249
* Level triggered IRQs will be reasserted once they become inactive.
250
*/
251
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
252
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
253
+ : cm);
254
ret = irq;
255
} else {
256
if (irq < GIC_NR_SGIS) {
257
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
258
src = ctz32(s->sgi_pending[irq][cpu]);
259
s->sgi_pending[irq][cpu] &= ~(1 << src);
260
if (s->sgi_pending[irq][cpu] == 0) {
261
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
262
+ GIC_DIST_CLEAR_PENDING(irq,
263
+ GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
264
+ : cm);
265
}
266
ret = irq | ((src & 0x7) << 10);
267
} else {
268
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
269
* interrupts. (level triggered interrupts with an active line
270
* remain pending, see gic_test_pending)
271
*/
272
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
273
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
274
+ : cm);
275
ret = irq;
276
}
277
}
278
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
279
return ret;
280
}
281
282
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
283
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
284
MemTxAttrs attrs)
285
{
286
if (s->security_extn && !attrs.secure) {
287
- if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
288
+ if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
289
return; /* Ignore Non-secure access of Group0 IRQ */
290
}
291
val = 0x80 | (val >> 1); /* Non-secure view */
292
@@ -XXX,XX +XXX,XX @@ void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
293
}
294
}
295
296
-static uint32_t gic_get_priority(GICState *s, int cpu, int irq,
297
+static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
298
MemTxAttrs attrs)
299
{
300
- uint32_t prio = GIC_GET_PRIORITY(irq, cpu);
301
+ uint32_t prio = GIC_DIST_GET_PRIORITY(irq, cpu);
302
303
if (s->security_extn && !attrs.secure) {
304
- if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
305
+ if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
306
return 0; /* Non-secure access cannot read priority of Group0 IRQ */
307
}
308
prio = (prio << 1) & 0xff; /* Non-secure view */
309
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
310
return;
311
}
312
313
- group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
314
+ group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
315
316
if (!gic_eoi_split(s, cpu, attrs)) {
317
/* This is UNPREDICTABLE; we choose to ignore it */
318
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
319
return;
320
}
321
322
- GIC_CLEAR_ACTIVE(irq, cm);
323
+ GIC_DIST_CLEAR_ACTIVE(irq, cm);
324
}
325
326
void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
327
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
328
if (s->revision == REV_11MPCORE) {
329
/* Mark level triggered interrupts as pending if they are still
330
raised. */
331
- if (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_ENABLED(irq, cm)
332
- && GIC_TEST_LEVEL(irq, cm) && (GIC_TARGET(irq) & cm) != 0) {
333
+ if (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_ENABLED(irq, cm)
334
+ && GIC_DIST_TEST_LEVEL(irq, cm)
335
+ && (GIC_DIST_TARGET(irq) & cm) != 0) {
336
DPRINTF("Set %d pending mask %x\n", irq, cm);
337
- GIC_SET_PENDING(irq, cm);
338
+ GIC_DIST_SET_PENDING(irq, cm);
339
}
340
}
341
342
- group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
343
+ group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
344
345
if (s->security_extn && !attrs.secure && !group) {
346
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
347
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
348
349
/* In GICv2 the guest can choose to split priority-drop and deactivate */
350
if (!gic_eoi_split(s, cpu, attrs)) {
351
- GIC_CLEAR_ACTIVE(irq, cm);
352
+ GIC_DIST_CLEAR_ACTIVE(irq, cm);
353
}
354
gic_update(s);
355
}
356
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
357
goto bad_reg;
358
}
359
for (i = 0; i < 8; i++) {
360
- if (GIC_TEST_GROUP(irq + i, cm)) {
361
+ if (GIC_DIST_TEST_GROUP(irq + i, cm)) {
362
res |= (1 << i);
363
}
364
}
365
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
366
res = 0;
367
for (i = 0; i < 8; i++) {
368
if (s->security_extn && !attrs.secure &&
369
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
370
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
371
continue; /* Ignore Non-secure access of Group0 IRQ */
372
}
373
374
- if (GIC_TEST_ENABLED(irq + i, cm)) {
375
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
376
res |= (1 << i);
377
}
378
}
379
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
380
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
381
for (i = 0; i < 8; i++) {
382
if (s->security_extn && !attrs.secure &&
383
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
384
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
385
continue; /* Ignore Non-secure access of Group0 IRQ */
386
}
387
388
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
389
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
390
for (i = 0; i < 8; i++) {
391
if (s->security_extn && !attrs.secure &&
392
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
393
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
394
continue; /* Ignore Non-secure access of Group0 IRQ */
395
}
396
397
- if (GIC_TEST_ACTIVE(irq + i, mask)) {
398
+ if (GIC_DIST_TEST_ACTIVE(irq + i, mask)) {
399
res |= (1 << i);
400
}
401
}
402
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
403
irq = (offset - 0x400) + GIC_BASE_IRQ;
404
if (irq >= s->num_irq)
405
goto bad_reg;
406
- res = gic_get_priority(s, cpu, irq, attrs);
407
+ res = gic_dist_get_priority(s, cpu, irq, attrs);
408
} else if (offset < 0xc00) {
409
/* Interrupt CPU Target. */
410
if (s->num_cpu == 1 && s->revision != REV_11MPCORE) {
411
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
412
} else if (irq < GIC_INTERNAL) {
413
res = cm;
414
} else {
415
- res = GIC_TARGET(irq);
416
+ res = GIC_DIST_TARGET(irq);
417
}
418
}
419
} else if (offset < 0xf00) {
420
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
421
res = 0;
422
for (i = 0; i < 4; i++) {
423
if (s->security_extn && !attrs.secure &&
424
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
425
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
426
continue; /* Ignore Non-secure access of Group0 IRQ */
427
}
428
429
- if (GIC_TEST_MODEL(irq + i))
430
+ if (GIC_DIST_TEST_MODEL(irq + i)) {
431
res |= (1 << (i * 2));
432
- if (GIC_TEST_EDGE_TRIGGER(irq + i))
433
+ }
434
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
435
res |= (2 << (i * 2));
436
+ }
437
}
438
} else if (offset < 0xf10) {
439
goto bad_reg;
440
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
441
}
442
443
if (s->security_extn && !attrs.secure &&
444
- !GIC_TEST_GROUP(irq, 1 << cpu)) {
445
+ !GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
446
res = 0; /* Ignore Non-secure access of Group0 IRQ */
447
} else {
448
res = s->sgi_pending[irq][cpu];
449
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
450
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
451
if (value & (1 << i)) {
452
/* Group1 (Non-secure) */
453
- GIC_SET_GROUP(irq + i, cm);
454
+ GIC_DIST_SET_GROUP(irq + i, cm);
455
} else {
456
/* Group0 (Secure) */
457
- GIC_CLEAR_GROUP(irq + i, cm);
458
+ GIC_DIST_CLEAR_GROUP(irq + i, cm);
459
}
460
}
461
}
462
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
463
for (i = 0; i < 8; i++) {
464
if (value & (1 << i)) {
465
int mask =
466
- (irq < GIC_INTERNAL) ? (1 << cpu) : GIC_TARGET(irq + i);
467
+ (irq < GIC_INTERNAL) ? (1 << cpu)
468
+ : GIC_DIST_TARGET(irq + i);
469
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
470
471
if (s->security_extn && !attrs.secure &&
472
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
473
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
474
continue; /* Ignore Non-secure access of Group0 IRQ */
475
}
476
477
- if (!GIC_TEST_ENABLED(irq + i, cm)) {
478
+ if (!GIC_DIST_TEST_ENABLED(irq + i, cm)) {
479
DPRINTF("Enabled IRQ %d\n", irq + i);
480
trace_gic_enable_irq(irq + i);
481
}
482
- GIC_SET_ENABLED(irq + i, cm);
483
+ GIC_DIST_SET_ENABLED(irq + i, cm);
484
/* If a raised level triggered IRQ enabled then mark
485
is as pending. */
486
- if (GIC_TEST_LEVEL(irq + i, mask)
487
- && !GIC_TEST_EDGE_TRIGGER(irq + i)) {
488
+ if (GIC_DIST_TEST_LEVEL(irq + i, mask)
489
+ && !GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
490
DPRINTF("Set %d pending mask %x\n", irq + i, mask);
491
- GIC_SET_PENDING(irq + i, mask);
492
+ GIC_DIST_SET_PENDING(irq + i, mask);
493
}
494
}
495
}
496
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
497
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
498
499
if (s->security_extn && !attrs.secure &&
500
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
501
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
502
continue; /* Ignore Non-secure access of Group0 IRQ */
503
}
504
505
- if (GIC_TEST_ENABLED(irq + i, cm)) {
506
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
507
DPRINTF("Disabled IRQ %d\n", irq + i);
508
trace_gic_disable_irq(irq + i);
509
}
510
- GIC_CLEAR_ENABLED(irq + i, cm);
511
+ GIC_DIST_CLEAR_ENABLED(irq + i, cm);
512
}
513
}
514
} else if (offset < 0x280) {
515
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
516
for (i = 0; i < 8; i++) {
517
if (value & (1 << i)) {
518
if (s->security_extn && !attrs.secure &&
519
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
520
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
521
continue; /* Ignore Non-secure access of Group0 IRQ */
522
}
523
524
- GIC_SET_PENDING(irq + i, GIC_TARGET(irq + i));
525
+ GIC_DIST_SET_PENDING(irq + i, GIC_DIST_TARGET(irq + i));
526
}
527
}
528
} else if (offset < 0x300) {
529
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
530
531
for (i = 0; i < 8; i++) {
532
if (s->security_extn && !attrs.secure &&
533
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
534
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
535
continue; /* Ignore Non-secure access of Group0 IRQ */
536
}
537
538
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
539
for per-CPU interrupts. It's unclear whether this is the
540
corect behavior. */
541
if (value & (1 << i)) {
542
- GIC_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
543
+ GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
544
}
545
}
546
} else if (offset < 0x400) {
547
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
548
irq = (offset - 0x400) + GIC_BASE_IRQ;
549
if (irq >= s->num_irq)
550
goto bad_reg;
551
- gic_set_priority(s, cpu, irq, value, attrs);
552
+ gic_dist_set_priority(s, cpu, irq, value, attrs);
553
} else if (offset < 0xc00) {
554
/* Interrupt CPU Target. RAZ/WI on uniprocessor GICs, with the
555
* annoying exception of the 11MPCore's GIC.
556
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
557
value |= 0xaa;
558
for (i = 0; i < 4; i++) {
559
if (s->security_extn && !attrs.secure &&
560
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
561
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
562
continue; /* Ignore Non-secure access of Group0 IRQ */
563
}
564
565
if (s->revision == REV_11MPCORE) {
566
if (value & (1 << (i * 2))) {
567
- GIC_SET_MODEL(irq + i);
568
+ GIC_DIST_SET_MODEL(irq + i);
569
} else {
570
- GIC_CLEAR_MODEL(irq + i);
571
+ GIC_DIST_CLEAR_MODEL(irq + i);
572
}
573
}
574
if (value & (2 << (i * 2))) {
575
- GIC_SET_EDGE_TRIGGER(irq + i);
576
+ GIC_DIST_SET_EDGE_TRIGGER(irq + i);
577
} else {
578
- GIC_CLEAR_EDGE_TRIGGER(irq + i);
579
+ GIC_DIST_CLEAR_EDGE_TRIGGER(irq + i);
580
}
581
}
582
} else if (offset < 0xf10) {
583
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
584
irq = (offset - 0xf10);
585
586
if (!s->security_extn || attrs.secure ||
587
- GIC_TEST_GROUP(irq, 1 << cpu)) {
588
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
589
s->sgi_pending[irq][cpu] &= ~value;
590
if (s->sgi_pending[irq][cpu] == 0) {
591
- GIC_CLEAR_PENDING(irq, 1 << cpu);
592
+ GIC_DIST_CLEAR_PENDING(irq, 1 << cpu);
593
}
594
}
595
} else if (offset < 0xf30) {
596
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
597
irq = (offset - 0xf20);
598
599
if (!s->security_extn || attrs.secure ||
600
- GIC_TEST_GROUP(irq, 1 << cpu)) {
601
- GIC_SET_PENDING(irq, 1 << cpu);
602
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
603
+ GIC_DIST_SET_PENDING(irq, 1 << cpu);
604
s->sgi_pending[irq][cpu] |= value;
605
}
606
} else {
607
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
608
mask = ALL_CPU_MASK;
609
break;
610
}
611
- GIC_SET_PENDING(irq, mask);
612
+ GIC_DIST_SET_PENDING(irq, mask);
613
target_cpu = ctz32(mask);
614
while (target_cpu < GIC_NCPU) {
615
s->sgi_pending[irq][target_cpu] |= (1 << cpu);
616
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
617
index XXXXXXX..XXXXXXX 100644
618
--- a/hw/intc/arm_gic_common.c
619
+++ b/hw/intc/arm_gic_common.c
620
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
621
}
622
}
623
for (i = 0; i < GIC_NR_SGIS; i++) {
624
- GIC_SET_ENABLED(i, ALL_CPU_MASK);
625
- GIC_SET_EDGE_TRIGGER(i);
626
+ GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
627
+ GIC_DIST_SET_EDGE_TRIGGER(i);
628
}
629
630
for (i = 0; i < ARRAY_SIZE(s->priority2); i++) {
631
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
632
}
633
if (s->security_extn && s->irq_reset_nonsecure) {
634
for (i = 0; i < GIC_MAXIRQ; i++) {
635
- GIC_SET_GROUP(i, ALL_CPU_MASK);
636
+ GIC_DIST_SET_GROUP(i, ALL_CPU_MASK);
637
}
638
}
639
640
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
641
index XXXXXXX..XXXXXXX 100644
642
--- a/hw/intc/arm_gic_kvm.c
643
+++ b/hw/intc/arm_gic_kvm.c
644
@@ -XXX,XX +XXX,XX @@ static void translate_group(GICState *s, int irq, int cpu,
645
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
646
647
if (to_kernel) {
648
- *field = GIC_TEST_GROUP(irq, cm);
649
+ *field = GIC_DIST_TEST_GROUP(irq, cm);
650
} else {
651
if (*field & 1) {
652
- GIC_SET_GROUP(irq, cm);
653
+ GIC_DIST_SET_GROUP(irq, cm);
654
}
655
}
656
}
657
@@ -XXX,XX +XXX,XX @@ static void translate_enabled(GICState *s, int irq, int cpu,
658
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
659
660
if (to_kernel) {
661
- *field = GIC_TEST_ENABLED(irq, cm);
662
+ *field = GIC_DIST_TEST_ENABLED(irq, cm);
663
} else {
664
if (*field & 1) {
665
- GIC_SET_ENABLED(irq, cm);
666
+ GIC_DIST_SET_ENABLED(irq, cm);
667
}
668
}
669
}
670
@@ -XXX,XX +XXX,XX @@ static void translate_pending(GICState *s, int irq, int cpu,
671
*field = gic_test_pending(s, irq, cm);
672
} else {
673
if (*field & 1) {
674
- GIC_SET_PENDING(irq, cm);
675
+ GIC_DIST_SET_PENDING(irq, cm);
676
/* TODO: Capture is level-line is held high in the kernel */
677
}
678
}
679
@@ -XXX,XX +XXX,XX @@ static void translate_active(GICState *s, int irq, int cpu,
680
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
681
682
if (to_kernel) {
683
- *field = GIC_TEST_ACTIVE(irq, cm);
684
+ *field = GIC_DIST_TEST_ACTIVE(irq, cm);
685
} else {
686
if (*field & 1) {
687
- GIC_SET_ACTIVE(irq, cm);
688
+ GIC_DIST_SET_ACTIVE(irq, cm);
689
}
690
}
691
}
692
@@ -XXX,XX +XXX,XX @@ static void translate_trigger(GICState *s, int irq, int cpu,
693
uint32_t *field, bool to_kernel)
694
{
695
if (to_kernel) {
696
- *field = (GIC_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
697
+ *field = (GIC_DIST_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
698
} else {
699
if (*field & 0x2) {
700
- GIC_SET_EDGE_TRIGGER(irq);
701
+ GIC_DIST_SET_EDGE_TRIGGER(irq);
702
}
703
}
704
}
705
@@ -XXX,XX +XXX,XX @@ static void translate_priority(GICState *s, int irq, int cpu,
706
uint32_t *field, bool to_kernel)
707
{
708
if (to_kernel) {
709
- *field = GIC_GET_PRIORITY(irq, cpu) & 0xff;
710
+ *field = GIC_DIST_GET_PRIORITY(irq, cpu) & 0xff;
711
} else {
712
- gic_set_priority(s, cpu, irq, *field & 0xff, MEMTXATTRS_UNSPECIFIED);
713
+ gic_dist_set_priority(s, cpu, irq,
714
+ *field & 0xff, MEMTXATTRS_UNSPECIFIED);
715
}
716
}
717
718
--
885
--
719
2.18.0
886
2.25.1
720
887
721
888
From: Richard Henderson <richard.henderson@linaro.org>

These particular data structures are not modified at runtime.
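
As a side note for reviewers, here is a minimal standalone illustration of
why this helps (the RegInfo struct is a hypothetical stand-in for
ARMCPRegInfo, not the real definition): a function-local, non-const table
is re-initialized on the stack on every call, while a static const table is
built once at compile time and lives in read-only storage.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for a cpreg description record. */
typedef struct {
    const char *name;
    int crn, crm;
} RegInfo;

static void register_regs(void)
{
    /* Emitted once into .rodata; no per-call stack initialization. */
    static const RegInfo id_regs[] = {
        { "MIDR_EL1", 0, 0 },
        { "REVIDR_EL1", 0, 0 },
    };

    for (size_t i = 0; i < sizeof(id_regs) / sizeof(id_regs[0]); i++) {
        printf("registering %s\n", id_regs[i].name);
    }
}

int main(void)
{
    register_regs();
    return 0;
}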

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
              .resetvalue = cpu->pmceid1 },
        };
#ifdef CONFIG_USER_ONLY
-        ARMCPRegUserSpaceInfo v8_user_idregs[] = {
+        static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
            { .name = "ID_AA64PFR0_EL1",
              .exported_bits = 0x000f000f00ff0000,
              .fixed_bits = 0x0000000000000011 },
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     */
    if (arm_feature(env, ARM_FEATURE_EL3)) {
        if (arm_feature(env, ARM_FEATURE_AARCH64)) {
-            ARMCPRegInfo nsacr = {
+            static const ARMCPRegInfo nsacr = {
                .name = "NSACR", .type = ARM_CP_CONST,
                .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
                .access = PL1_RW, .accessfn = nsacr_access,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
            };
            define_one_arm_cp_reg(cpu, &nsacr);
        } else {
-            ARMCPRegInfo nsacr = {
+            static const ARMCPRegInfo nsacr = {
                .name = "NSACR",
                .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
                .access = PL3_RW | PL1_R,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
        }
    } else {
        if (arm_feature(env, ARM_FEATURE_V8)) {
-            ARMCPRegInfo nsacr = {
+            static const ARMCPRegInfo nsacr = {
                .name = "NSACR", .type = ARM_CP_CONST,
                .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 2,
                .access = PL1_R,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
          .access = PL1_R, .type = ARM_CP_CONST,
          .resetvalue = cpu->pmsav7_dregion << 8
        };
-        ARMCPRegInfo crn0_wi_reginfo = {
+        static const ARMCPRegInfo crn0_wi_reginfo = {
            .name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY,
            .opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
            .type = ARM_CP_NOP | ARM_CP_OVERRIDE
        };
#ifdef CONFIG_USER_ONLY
-        ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
+        static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
            { .name = "MIDR_EL1",
              .exported_bits = 0x00000000ffffffff },
            { .name = "REVIDR_EL1" },
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
          .access = PL1_R, .readfn = mpidr_read, .type = ARM_CP_NO_RAW },
    };
#ifdef CONFIG_USER_ONLY
-    ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
+    static const ARMCPRegUserSpaceInfo mpidr_user_cp_reginfo[] = {
        { .name = "MPIDR_EL1",
          .fixed_bits = 0x0000000080000000 },
    };
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
    }

    if (arm_feature(env, ARM_FEATURE_VBAR)) {
-        ARMCPRegInfo vbar_cp_reginfo[] = {
+        static const ARMCPRegInfo vbar_cp_reginfo[] = {
            { .name = "VBAR", .state = ARM_CP_STATE_BOTH,
              .opc0 = 3, .crn = 12, .crm = 0, .opc1 = 0, .opc2 = 0,
              .access = PL1_RW, .writefn = vbar_write,
--
2.25.1


From: Luc Michel <luc.michel@greensocs.com>

Add support for GICv2 virtualization extensions by mapping the necessary
I/O regions and connecting the maintenance IRQ lines.

Declare those additions in the device tree and in the ACPI tables.
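
For the GICv2 case, the maintenance interrupt is wired as an extra sysbus
IRQ line per CPU; hence the "i + 4 * smp_cpus" index in the hunks below.
A standalone sketch of that index arithmetic, under the assumption that
the first 4 * smp_cpus lines carry the four per-CPU GIC outputs (names
here are illustrative, not the virt board code):

#include <stdio.h>

/*
 * Assumption: sysbus output lines 0 .. 4*smp_cpus-1 carry the per-CPU
 * IRQ, FIQ, VIRQ and VFIQ outputs, so the maintenance interrupt for
 * CPU "cpu" lands on line cpu + 4 * smp_cpus, as in the patch below.
 */
static int maint_irq_line(int cpu, int smp_cpus)
{
    return cpu + 4 * smp_cpus;
}

int main(void)
{
    int smp_cpus = 2;
    for (int cpu = 0; cpu < smp_cpus; cpu++) {
        printf("cpu %d: maintenance IRQ on sysbus line %d\n",
               cpu, maint_irq_line(cpu, smp_cpus));
    }
    return 0;
}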

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-21-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/virt.h    |  4 +++-
 hw/arm/virt-acpi-build.c |  6 +++--
 hw/arm/virt.c            | 52 +++++++++++++++++++++++++++++++++-------
 3 files changed, 50 insertions(+), 12 deletions(-)

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@
 #define NUM_VIRTIO_TRANSPORTS 32
 #define NUM_SMMU_IRQS 4

-#define ARCH_GICV3_MAINT_IRQ  9
+#define ARCH_GIC_MAINT_IRQ  9

 #define ARCH_TIMER_VIRT_IRQ   11
 #define ARCH_TIMER_S_EL1_IRQ  13
@@ -XXX,XX +XXX,XX @@ enum {
    VIRT_GIC_DIST,
    VIRT_GIC_CPU,
    VIRT_GIC_V2M,
+    VIRT_GIC_HYP,
+    VIRT_GIC_VCPU,
    VIRT_GIC_ITS,
    VIRT_GIC_REDIST,
    VIRT_GIC_REDIST2,
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
        gicc->length = sizeof(*gicc);
        if (vms->gic_version == 2) {
            gicc->base_address = cpu_to_le64(memmap[VIRT_GIC_CPU].base);
+            gicc->gich_base_address = cpu_to_le64(memmap[VIRT_GIC_HYP].base);
+            gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
        }
        gicc->cpu_interface_number = cpu_to_le32(i);
        gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
        if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
            gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
        }
-        if (vms->virt && vms->gic_version == 3) {
-            gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GICV3_MAINT_IRQ));
+        if (vms->virt) {
+            gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GIC_MAINT_IRQ));
        }
    }

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
    [VIRT_GIC_DIST] =           { 0x08000000, 0x00010000 },
    [VIRT_GIC_CPU] =            { 0x08010000, 0x00010000 },
    [VIRT_GIC_V2M] =            { 0x08020000, 0x00001000 },
+    [VIRT_GIC_HYP] =            { 0x08030000, 0x00010000 },
+    [VIRT_GIC_VCPU] =           { 0x08040000, 0x00010000 },
    /* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
    [VIRT_GIC_ITS] =            { 0x08080000, 0x00020000 },
    /* This redistributor space allows up to 2*64kB*123 CPUs */
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)

        if (vms->virt) {
            qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
-                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GICV3_MAINT_IRQ,
+                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
                                   GIC_FDT_IRQ_FLAGS_LEVEL_HI);
        }
    } else {
        /* 'cortex-a15-gic' means 'GIC v2' */
        qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
                                "arm,cortex-a15-gic");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
-                                     2, vms->memmap[VIRT_GIC_DIST].base,
-                                     2, vms->memmap[VIRT_GIC_DIST].size,
-                                     2, vms->memmap[VIRT_GIC_CPU].base,
-                                     2, vms->memmap[VIRT_GIC_CPU].size);
+        if (!vms->virt) {
+            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+                                         2, vms->memmap[VIRT_GIC_DIST].base,
+                                         2, vms->memmap[VIRT_GIC_DIST].size,
+                                         2, vms->memmap[VIRT_GIC_CPU].base,
+                                         2, vms->memmap[VIRT_GIC_CPU].size);
+        } else {
+            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+                                         2, vms->memmap[VIRT_GIC_DIST].base,
+                                         2, vms->memmap[VIRT_GIC_DIST].size,
+                                         2, vms->memmap[VIRT_GIC_CPU].base,
+                                         2, vms->memmap[VIRT_GIC_CPU].size,
+                                         2, vms->memmap[VIRT_GIC_HYP].base,
+                                         2, vms->memmap[VIRT_GIC_HYP].size,
+                                         2, vms->memmap[VIRT_GIC_VCPU].base,
+                                         2, vms->memmap[VIRT_GIC_VCPU].size);
+            qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
+                                   GIC_FDT_IRQ_FLAGS_LEVEL_HI);
+        }
    }

    qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
            qdev_prop_set_uint32(gicdev, "redist-region-count[1]",
                MIN(smp_cpus - redist0_count, redist1_capacity));
        }
+    } else {
+        if (!kvm_irqchip_in_kernel()) {
+            qdev_prop_set_bit(gicdev, "has-virtualization-extensions",
+                              vms->virt);
+        }
    }
    qdev_init_nofail(gicdev);
    gicbusdev = SYS_BUS_DEVICE(gicdev);
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
        }
    } else {
        sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_CPU].base);
+        if (vms->virt) {
+            sysbus_mmio_map(gicbusdev, 2, vms->memmap[VIRT_GIC_HYP].base);
+            sysbus_mmio_map(gicbusdev, 3, vms->memmap[VIRT_GIC_VCPU].base);
+        }
    }

    /* Wire the outputs from each CPU's generic timer and the GICv3
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
                                                   ppibase + timer_irq[irq]));
        }

-        qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", 0,
-                                    qdev_get_gpio_in(gicdev, ppibase
-                                                     + ARCH_GICV3_MAINT_IRQ));
+        if (type == 3) {
+            qemu_irq irq = qdev_get_gpio_in(gicdev,
+                                            ppibase + ARCH_GIC_MAINT_IRQ);
+            qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt",
+                                        0, irq);
+        } else if (vms->virt) {
+            qemu_irq irq = qdev_get_gpio_in(gicdev,
+                                            ppibase + ARCH_GIC_MAINT_IRQ);
+            sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus, irq);
+        }
+
        qdev_connect_gpio_out_named(cpudev, "pmu-interrupt", 0,
                                    qdev_get_gpio_in(gicdev, ppibase
                                                     + VIRTUAL_PMU_IRQ));
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Instead of defining ARM_CP_FLAG_MASK to remove flags,
define ARM_CP_SPECIAL_MASK to isolate special cases.
Sort the specials to the low bits. Use an enum.

Split the large comment block so as to document each
value separately.
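
As a quick illustration of the packing this buys (a standalone sketch
using the same naming scheme; the surrounding code is hypothetical, not
the QEMU definitions): the low four bits form an enumerated special-case
index and everything above them is an independent flag, so handlers can
switch directly on type & ARM_CP_SPECIAL_MASK without first stripping
flag bits.

#include <stdio.h>

enum {
    CP_SPECIAL_MASK = 0x000f, /* low bits: enumerated special cases */
    CP_NOP          = 0x0001,
    CP_WFI          = 0x0002,

    CP_CONST        = 1 << 4, /* independent flag bits from here up */
    CP_IO           = 1 << 9,
};

static const char *describe(int type)
{
    /* The special case is isolated directly; flags do not interfere. */
    switch (type & CP_SPECIAL_MASK) {
    case 0:
        return "ordinary register";
    case CP_NOP:
        return "reads and writes ignored";
    case CP_WFI:
        return "wait for interrupt";
    default:
        return "unknown special";
    }
}

int main(void)
{
    printf("%s\n", describe(CP_CONST | CP_IO)); /* ordinary register */
    printf("%s\n", describe(CP_WFI | CP_IO));   /* wait for interrupt */
    return 0;
}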

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h        | 130 +++++++++++++++++++++++--------------
 target/arm/cpu.c           |   4 +-
 target/arm/helper.c        |   4 +-
 target/arm/translate-a64.c |   6 +-
 target/arm/translate.c     |   6 +-
 5 files changed, 92 insertions(+), 58 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@
 #define TARGET_ARM_CPREGS_H

 /*
- * ARMCPRegInfo type field bits. If the SPECIAL bit is set this is a
- * special-behaviour cp reg and bits [11..8] indicate what behaviour
- * it has. Otherwise it is a simple cp reg, where CONST indicates that
- * TCG can assume the value to be constant (ie load at translate time)
- * and 64BIT indicates a 64 bit wide coprocessor register. SUPPRESS_TB_END
- * indicates that the TB should not be ended after a write to this register
- * (the default is that the TB ends after cp writes). OVERRIDE permits
- * a register definition to override a previous definition for the
- * same (cp, is64, crn, crm, opc1, opc2) tuple: either the new or the
- * old must have the OVERRIDE bit set.
- * ALIAS indicates that this register is an alias view of some underlying
- * state which is also visible via another register, and that the other
- * register is handling migration and reset; registers marked ALIAS will not be
- * migrated but may have their state set by syncing of register state from KVM.
- * NO_RAW indicates that this register has no underlying state and does not
- * support raw access for state saving/loading; it will not be used for either
- * migration or KVM state synchronization. (Typically this is for "registers"
- * which are actually used as instructions for cache maintenance and so on.)
- * IO indicates that this register does I/O and therefore its accesses
- * need to be marked with gen_io_start() and also end the TB. In particular,
- * registers which implement clocks or timers require this.
- * RAISES_EXC is for when the read or write hook might raise an exception;
- * the generated code will synchronize the CPU state before calling the hook
- * so that it is safe for the hook to call raise_exception().
- * NEWEL is for writes to registers that might change the exception
- * level - typically on older ARM chips. For those cases we need to
- * re-read the new el when recomputing the translation flags.
+ * ARMCPRegInfo type field bits:
 */
-#define ARM_CP_SPECIAL 0x0001
-#define ARM_CP_CONST 0x0002
-#define ARM_CP_64BIT 0x0004
-#define ARM_CP_SUPPRESS_TB_END 0x0008
-#define ARM_CP_OVERRIDE 0x0010
-#define ARM_CP_ALIAS 0x0020
-#define ARM_CP_IO 0x0040
-#define ARM_CP_NO_RAW 0x0080
-#define ARM_CP_NOP (ARM_CP_SPECIAL | 0x0100)
-#define ARM_CP_WFI (ARM_CP_SPECIAL | 0x0200)
-#define ARM_CP_NZCV (ARM_CP_SPECIAL | 0x0300)
-#define ARM_CP_CURRENTEL (ARM_CP_SPECIAL | 0x0400)
-#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
-#define ARM_CP_DC_GVA (ARM_CP_SPECIAL | 0x0600)
-#define ARM_CP_DC_GZVA (ARM_CP_SPECIAL | 0x0700)
-#define ARM_LAST_SPECIAL ARM_CP_DC_GZVA
-#define ARM_CP_FPU 0x1000
-#define ARM_CP_SVE 0x2000
-#define ARM_CP_NO_GDB 0x4000
-#define ARM_CP_RAISES_EXC 0x8000
-#define ARM_CP_NEWEL 0x10000
-/* Mask of only the flag bits in a type field */
-#define ARM_CP_FLAG_MASK 0x1f0ff
+enum {
+    /*
+     * Register must be handled specially during translation.
+     * The method is one of the values below:
+     */
+    ARM_CP_SPECIAL_MASK          = 0x000f,
+    /* Special: no change to PE state: writes ignored, reads ignored. */
+    ARM_CP_NOP                   = 0x0001,
+    /* Special: sysreg is WFI, for v5 and v6. */
+    ARM_CP_WFI                   = 0x0002,
+    /* Special: sysreg is NZCV. */
+    ARM_CP_NZCV                  = 0x0003,
+    /* Special: sysreg is CURRENTEL. */
+    ARM_CP_CURRENTEL             = 0x0004,
+    /* Special: sysreg is DC ZVA or similar. */
+    ARM_CP_DC_ZVA                = 0x0005,
+    ARM_CP_DC_GVA                = 0x0006,
+    ARM_CP_DC_GZVA               = 0x0007,
+
+    /* Flag: reads produce resetvalue; writes ignored. */
+    ARM_CP_CONST                 = 1 << 4,
+    /* Flag: For ARM_CP_STATE_AA32, sysreg is 64-bit. */
+    ARM_CP_64BIT                 = 1 << 5,
+    /*
+     * Flag: TB should not be ended after a write to this register
+     * (the default is that the TB ends after cp writes).
+     */
+    ARM_CP_SUPPRESS_TB_END       = 1 << 6,
+    /*
+     * Flag: Permit a register definition to override a previous definition
+     * for the same (cp, is64, crn, crm, opc1, opc2) tuple: either the new
+     * or the old must have the ARM_CP_OVERRIDE bit set.
+     */
+    ARM_CP_OVERRIDE              = 1 << 7,
+    /*
+     * Flag: Register is an alias view of some underlying state which is also
+     * visible via another register, and that the other register is handling
+     * migration and reset; registers marked ARM_CP_ALIAS will not be migrated
+     * but may have their state set by syncing of register state from KVM.
+     */
+    ARM_CP_ALIAS                 = 1 << 8,
+    /*
+     * Flag: Register does I/O and therefore its accesses need to be marked
+     * with gen_io_start() and also end the TB. In particular, registers which
+     * implement clocks or timers require this.
+     */
+    ARM_CP_IO                    = 1 << 9,
+    /*
+     * Flag: Register has no underlying state and does not support raw access
+     * for state saving/loading; it will not be used for either migration or
+     * KVM state synchronization. Typically this is for "registers" which are
+     * actually used as instructions for cache maintenance and so on.
+     */
+    ARM_CP_NO_RAW                = 1 << 10,
+    /*
+     * Flag: The read or write hook might raise an exception; the generated
+     * code will synchronize the CPU state before calling the hook so that it
+     * is safe for the hook to call raise_exception().
+     */
+    ARM_CP_RAISES_EXC            = 1 << 11,
+    /*
+     * Flag: Writes to the sysreg might change the exception level - typically
+     * on older ARM chips. For those cases we need to re-read the new el when
+     * recomputing the translation flags.
+     */
+    ARM_CP_NEWEL                 = 1 << 12,
+    /*
+     * Flag: Access check for this sysreg is identical to accessing FPU state
+     * from an instruction: use translation fp_access_check().
+     */
+    ARM_CP_FPU                   = 1 << 13,
+    /*
+     * Flag: Access check for this sysreg is identical to accessing SVE state
+     * from an instruction: use translation sve_access_check().
+     */
+    ARM_CP_SVE                   = 1 << 14,
+    /* Flag: Do not expose in gdb sysreg xml. */
+    ARM_CP_NO_GDB                = 1 << 15,
+};

 /*
  * Valid values for ARMCPRegInfo state field, indicating which of
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cp_reg_reset(gpointer key, gpointer value, gpointer opaque)
    ARMCPRegInfo *ri = value;
    ARMCPU *cpu = opaque;

-    if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS)) {
+    if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS)) {
        return;
    }

@@ -XXX,XX +XXX,XX @@ static void cp_reg_check_reset(gpointer key, gpointer value, gpointer opaque)
    ARMCPU *cpu = opaque;
    uint64_t oldvalue, newvalue;

-    if (ri->type & (ARM_CP_SPECIAL | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
+    if (ri->type & (ARM_CP_SPECIAL_MASK | ARM_CP_ALIAS | ARM_CP_NO_RAW)) {
        return;
    }

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     * multiple times. Special registers (ie NOP/WFI) are
     * never migratable and not even raw-accessible.
     */
-    if ((r->type & ARM_CP_SPECIAL)) {
+    if (r->type & ARM_CP_SPECIAL_MASK) {
        r2->type |= ARM_CP_NO_RAW;
    }
    if (((r->crm == CP_ANY) && crm != 0) ||
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
    /* Check that the register definition has enough info to handle
     * reads and writes if they are permitted.
     */
-    if (!(r->type & (ARM_CP_SPECIAL|ARM_CP_CONST))) {
+    if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
        if (r->access & PL3_R) {
            assert((r->fieldoffset ||
                   (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1])) ||
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
    }

    /* Handle special cases first */
-    switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
+    switch (ri->type & ARM_CP_SPECIAL_MASK) {
+    case 0:
+        break;
    case ARM_CP_NOP:
        return;
    case ARM_CP_NZCV:
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
        }
        return;
    default:
-        break;
+        g_assert_not_reached();
    }
    if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
        return;
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
    }

    /* Handle special cases first */
-    switch (ri->type & ~(ARM_CP_FLAG_MASK & ~ARM_CP_SPECIAL)) {
+    switch (ri->type & ARM_CP_SPECIAL_MASK) {
+    case 0:
+        break;
    case ARM_CP_NOP:
        return;
    case ARM_CP_WFI:
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
        s->base.is_jmp = DISAS_WFI;
        return;
    default:
-        break;
+        g_assert_not_reached();
    }

    if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
--
2.25.1


From: Luc Michel <luc.michel@greensocs.com>

Add some traces to the ARM GIC to catch register accesses (distributor,
(v)cpu interface and virtual interface), and to take into account
virtualization extensions (print `vcpu` instead of `cpu` when needed).

Also add some virtualization extensions specific traces: LR updating
and maintenance IRQ generation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-19-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c    | 31 +++++++++++++++++++++++++------
 hw/intc/trace-events | 12 ++++++++++--
 2 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline void gic_update_internal(GICState *s, bool virt)
        }

        if (best_irq != 1023) {
-            trace_gic_update_bestirq(cpu, best_irq, best_prio,
-                s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
+            trace_gic_update_bestirq(virt ? "vcpu" : "cpu", cpu,
+                                     best_irq, best_prio,
+                                     s->priority_mask[cpu_iface],
+                                     s->running_priority[cpu_iface]);
        }

        irq_level = fiq_level = 0;
@@ -XXX,XX +XXX,XX @@ static void gic_update_maintenance(GICState *s)
        gic_compute_misr(s, cpu);
        maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];

+        trace_gic_update_maintenance_irq(cpu, maint_level);
        qemu_set_irq(s->maintenance_irq[cpu], maint_level);
    }
}
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
     * is in the wrong group.
     */
    irq = gic_get_current_pending_irq(s, cpu, attrs);
-    trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
+    trace_gic_acknowledge_irq(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
+                              gic_get_vcpu_real_id(cpu), irq);

    if (irq >= GIC_MAXIRQ) {
        DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_dist_read(void *opaque, hwaddr offset, uint64_t *data,
    switch (size) {
    case 1:
        *data = gic_dist_readb(opaque, offset, attrs);
-        return MEMTX_OK;
+        break;
    case 2:
        *data = gic_dist_readb(opaque, offset, attrs);
        *data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
-        return MEMTX_OK;
+        break;
    case 4:
        *data = gic_dist_readb(opaque, offset, attrs);
        *data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
        *data |= gic_dist_readb(opaque, offset + 2, attrs) << 16;
        *data |= gic_dist_readb(opaque, offset + 3, attrs) << 24;
-        return MEMTX_OK;
+        break;
    default:
        return MEMTX_ERROR;
    }
+
+    trace_gic_dist_read(offset, size, *data);
+    return MEMTX_OK;
}

static void gic_dist_writeb(void *opaque, hwaddr offset,
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
static MemTxResult gic_dist_write(void *opaque, hwaddr offset, uint64_t data,
                                  unsigned size, MemTxAttrs attrs)
{
+    trace_gic_dist_write(offset, size, data);
+
    switch (size) {
    case 1:
        gic_dist_writeb(opaque, offset, data, attrs);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
        *data = 0;
        break;
    }
+
+    trace_gic_cpu_read(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
+                       gic_get_vcpu_real_id(cpu), offset, *data);
    return MEMTX_OK;
}

static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
                                 uint32_t value, MemTxAttrs attrs)
{
+    trace_gic_cpu_write(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
+                        gic_get_vcpu_real_id(cpu), offset, value);
+
    switch (offset) {
    case 0x00: /* Control */
        gic_set_cpu_control(s, cpu, value, attrs);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
        return MEMTX_OK;
    }

+    trace_gic_hyp_read(addr, *data);
    return MEMTX_OK;
}

@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
    GICState *s = ARM_GIC(opaque);
    int vcpu = cpu + GIC_NCPU;

+    trace_gic_hyp_write(addr, value);
+
    switch (addr) {
    case A_GICH_HCR: /* Hypervisor Control */
        s->h_hcr[cpu] = value & GICH_HCR_MASK;
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
        }

        s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
+        trace_gic_lr_entry(cpu, lr_idx, s->h_lr[lr_idx][cpu]);
        break;
    }

diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -XXX,XX +XXX,XX @@ aspeed_vic_write(uint64_t offset, unsigned size, uint32_t data) "To 0x%" PRIx64
 gic_enable_irq(int irq) "irq %d enabled"
 gic_disable_irq(int irq) "irq %d disabled"
 gic_set_irq(int irq, int level, int cpumask, int target) "irq %d level %d cpumask 0x%x target 0x%x"
-gic_update_bestirq(int cpu, int irq, int prio, int priority_mask, int running_priority) "cpu %d irq %d priority %d cpu priority mask %d cpu running priority %d"
+gic_update_bestirq(const char *s, int cpu, int irq, int prio, int priority_mask, int running_priority) "%s %d irq %d priority %d cpu priority mask %d cpu running priority %d"
 gic_update_set_irq(int cpu, const char *name, int level) "cpu[%d]: %s = %d"
-gic_acknowledge_irq(int cpu, int irq) "cpu %d acknowledged irq %d"
+gic_acknowledge_irq(const char *s, int cpu, int irq) "%s %d acknowledged irq %d"
+gic_cpu_write(const char *s, int cpu, int addr, uint32_t val) "%s %d iface write at 0x%08x 0x%08" PRIx32
+gic_cpu_read(const char *s, int cpu, int addr, uint32_t val) "%s %d iface read at 0x%08x: 0x%08" PRIx32
+gic_hyp_read(int addr, uint32_t val) "hyp read at 0x%08x: 0x%08" PRIx32
+gic_hyp_write(int addr, uint32_t val) "hyp write at 0x%08x: 0x%08" PRIx32
+gic_dist_read(int addr, unsigned int size, uint32_t val) "dist read at 0x%08x size %u: 0x%08" PRIx32
+gic_dist_write(int addr, unsigned int size, uint32_t val) "dist write at 0x%08x size %u: 0x%08" PRIx32
+gic_lr_entry(int cpu, int entry, uint32_t val) "cpu %d: new lr entry %d: 0x%08" PRIx32
+gic_update_maintenance_irq(int cpu, int val) "cpu %d: maintenance = %d"

 # hw/intc/arm_gicv3_cpuif.c
 gicv3_icc_pmr_read(uint32_t cpu, uint64_t val) "GICv3 ICC_PMR read cpu 0x%x value 0x%" PRIx64
--
2.18.0

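For readers unfamiliar with the trace-events format used in the previous
patch: each declaration expands, via tracetool, into a trace_<name>()
helper that the device code calls. A rough standalone sketch of the
generated shape (illustrative only; the real generated code checks
per-event state and dispatches to the configured trace backend):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int trace_event_gic_hyp_write_enabled = 1; /* hypothetical toggle */

/* Rough shape of what tracetool would generate for:
 *   gic_hyp_write(int addr, uint32_t val) "hyp write at 0x%08x: 0x%08x"
 */
static inline void trace_gic_hyp_write(int addr, uint32_t val)
{
    if (trace_event_gic_hyp_write_enabled) {
        fprintf(stderr, "gic_hyp_write hyp write at 0x%08x: 0x%08" PRIx32 "\n",
                addr, val);
    }
}

int main(void)
{
    trace_gic_hyp_write(0x0, 0x80000000u);
    return 0;
}
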
From: Richard Henderson <richard.henderson@linaro.org>

Standardize on g_assert_not_reached() for "should not happen".
Retain abort() when preceded by fprintf or error_report.
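
For context, a standalone sketch of the difference (not code from this
series): g_assert_not_reached() reports the file and line of the
supposedly impossible path before aborting, whereas a bare abort() gives
no hint where it fired. Build against GLib, e.g.
gcc demo.c $(pkg-config --cflags --libs glib-2.0).

#include <glib.h>
#include <stdio.h>

typedef enum { MODE_READ, MODE_WRITE } Mode;

static const char *mode_name(Mode m)
{
    switch (m) {
    case MODE_READ:
        return "read";
    case MODE_WRITE:
        return "write";
    default:
        /* Logs "code should not be reached" with file:line, then aborts. */
        g_assert_not_reached();
    }
}

int main(void)
{
    printf("%s\n", mode_name(MODE_READ));
    return 0;
}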

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c         | 7 +++----
 target/arm/hvf/hvf.c        | 2 +-
 target/arm/kvm-stub.c       | 4 ++--
 target/arm/kvm.c            | 4 ++--
 target/arm/machine.c        | 4 ++--
 target/arm/translate-a64.c  | 4 ++--
 target/arm/translate-neon.c | 2 +-
 target/arm/translate.c      | 4 ++--
 8 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
        break;
    default:
        /* broken reginfo with out-of-range opc1 */
-        assert(false);
-        break;
+        g_assert_not_reached();
    }
    /* assert our permissions are not too lax (stricter is fine) */
    assert((r->access & ~mask) == 0);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
            break;
        default:
            /* Never happens, but compiler isn't smart enough to tell. */
-            abort();
+            g_assert_not_reached();
        }
    }
    *prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
            break;
        default:
            /* Never happens, but compiler isn't smart enough to tell. */
-            abort();
+            g_assert_not_reached();
        }
    }
    if (domain_prot == 3) {
diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -XXX,XX +XXX,XX @@ int hvf_vcpu_exec(CPUState *cpu)
        /* we got kicked, no exit to process */
        return 0;
    default:
-        assert(0);
+        g_assert_not_reached();
    }

    hvf_sync_vtimer(cpu);
diff --git a/target/arm/kvm-stub.c b/target/arm/kvm-stub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm-stub.c
+++ b/target/arm/kvm-stub.c
@@ -XXX,XX +XXX,XX @@

 bool write_kvmstate_to_list(ARMCPU *cpu)
 {
-    abort();
+    g_assert_not_reached();
 }

 bool write_list_to_kvmstate(ARMCPU *cpu, int level)
 {
-    abort();
+    g_assert_not_reached();
 }
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu)
            ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &r);
            break;
        default:
-            abort();
+            g_assert_not_reached();
        }
        if (ret) {
            ok = false;
@@ -XXX,XX +XXX,XX @@ bool write_list_to_kvmstate(ARMCPU *cpu, int level)
            r.addr = (uintptr_t)(cpu->cpreg_values + i);
            break;
        default:
-            abort();
+            g_assert_not_reached();
        }
        ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &r);
        if (ret) {
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
    if (kvm_enabled()) {
        if (!write_kvmstate_to_list(cpu)) {
            /* This should never fail */
-            abort();
+            g_assert_not_reached();
        }

        /*
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
    } else {
        if (!write_cpustate_to_list(cpu, false)) {
            /* This should never fail. */
-            abort();
+            g_assert_not_reached();
        }
    }

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
        gen_helper_advsimd_rinth(tcg_res, tcg_op, fpst);
        break;
    default:
-        abort();
+        g_assert_not_reached();
    }

    write_fp_sreg(s, rd, tcg_res);
@@ -XXX,XX +XXX,XX @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
        break;
    }
    default:

Improve the exception-taken logging by logging in
v7m_exception_taken() the exception we're going to take
and whether it is secure/nonsecure.

This requires us to move logging at many callsites from after the
call to before it, so that the logging appears in a sensible order.

(This will make tail-chaining produce more useful logs; for the
current callers of v7m_exception_taken() we know which exception
we're going to take, so custom log messages at the callsite sufficed;
for tail-chaining only v7m_exception_taken() knows the exception
number that we're going to tail-chain to.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
    bool push_failed = false;

    armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
+    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
+                  targets_secure ? "secure" : "nonsecure", exc);

    if (arm_feature(env, ARM_FEATURE_V8)) {
        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
             * we might now want to take a different exception which
             * targets a different security state, so try again from the top.
             */
+            qemu_log_mask(CPU_LOG_INT,
+                          "...derived exception on callee-saves register stacking");
            v7m_exception_taken(cpu, lr, true, true);
            return;
        }

        if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
            /* Vector load failed: derived exception */
+            qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
            v7m_exception_taken(cpu, lr, true, true);
            return;
        }
    }
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
    if (sfault) {
        env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        v7m_exception_taken(cpu, excret, true, false);
        qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
                      "stackframe: failed EXC_RETURN.ES validity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
        return;
    }

@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         */
        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        v7m_exception_taken(cpu, excret, true, false);
        qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                      "stackframe: failed exception return integrity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
        return;
    }

@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
        /* Take a SecureFault on the current stack */
        env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        v7m_exception_taken(cpu, excret, true, false);
        qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
                      "stackframe: failed exception return integrity "
                      "signature check\n");
+        v7m_exception_taken(cpu, excret, true, false);
        return;
    }

@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
            /* v7m_stack_read() pended a fault, so take it (as a tail
             * chained exception on the same stack frame)
             */
+            qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
            v7m_exception_taken(cpu, excret, true, false);
144
- abort();
91
return;
145
+ g_assert_not_reached();
146
}
147
}
148
149
diff --git a/target/arm/translate-neon.c b/target/arm/translate-neon.c
150
index XXXXXXX..XXXXXXX 100644
151
--- a/target/arm/translate-neon.c
152
+++ b/target/arm/translate-neon.c
153
@@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a)
92
}
154
}
93
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
155
break;
94
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
156
default:
95
env->v7m.secure);
157
- abort();
96
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
158
+ g_assert_not_reached();
97
- v7m_exception_taken(cpu, excret, true, false);
159
}
98
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
160
if ((vd + a->stride * (nregs - 1)) > 31) {
99
"stackframe: failed exception return integrity "
161
/*
100
"check\n");
162
diff --git a/target/arm/translate.c b/target/arm/translate.c
101
+ v7m_exception_taken(cpu, excret, true, false);
163
index XXXXXXX..XXXXXXX 100644
102
return;
164
--- a/target/arm/translate.c
103
}
165
+++ b/target/arm/translate.c
166
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
167
offset = 4;
168
break;
169
default:
170
- abort();
171
+ g_assert_not_reached();
172
}
173
tcg_gen_addi_i32(addr, addr, offset);
174
tmp = load_reg(s, 14);
175
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
176
offset = 0;
177
break;
178
default:
179
- abort();
180
+ g_assert_not_reached();
104
}
181
}
105
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
182
tcg_gen_addi_i32(addr, addr, offset);
106
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
183
gen_helper_set_r13_banked(cpu_env, tcg_constant_i32(mode), addr);
107
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
108
ignore_stackfaults = v7m_push_stack(cpu);
109
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
110
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
111
"failed exception return integrity check\n");
112
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
113
return;
114
}
115
116
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
117
118
ignore_stackfaults = v7m_push_stack(cpu);
119
v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
120
- qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
121
}
122
123
/* Function used to synchronize QEMU's AArch64 register set with AArch32
124
--
2.18.0

--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

The normal vector element is sign-extended before
comparing with the wide vector element.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Create a typedef as well, and use it in ARMCPRegInfo.
This won't be perfect for debugging, but it'll nicely
display the most common cases.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h | 44 +++++++++++++++++++++++---------------------
 target/arm/helper.c | 2 +-
 2 files changed, 24 insertions(+), 22 deletions(-)

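A note on the first diff below: the fix works because of C's usual
arithmetic conversions, under which a signed narrow element is
sign-extended before it is compared with the wide element. A small
stand-alone illustration (not part of the patch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t  narrow_signed   = -1;    /* all-ones byte, signed view */
        uint8_t narrow_unsigned = 0xff;  /* same bits, unsigned view */
        uint64_t wide = UINT64_MAX;      /* wide element holding -1 */

        /* The signed element converts to 0xffffffffffffffff and
         * matches; the unsigned one converts to 0xff and does not.
         * Hence the macro's TYPE argument changes to the signed
         * int8_t/int16_t/int32_t variants.
         */
        printf("%d\n", narrow_signed == wide);    /* prints 1 */
        printf("%d\n", narrow_unsigned == wide);  /* prints 0 */
        return 0;
    }
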
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/sve_helper.c
18
--- a/target/arm/cpregs.h
21
+++ b/target/arm/sve_helper.c
19
+++ b/target/arm/cpregs.h
22
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
20
@@ -XXX,XX +XXX,XX @@ enum {
23
#define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
21
* described with these bits, then use a laxer set of restrictions, and
24
DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
22
* do the more restrictive/complex check inside a helper function.
25
23
*/
26
-DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
24
-#define PL3_R 0x80
27
-DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
25
-#define PL3_W 0x40
28
-DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
26
-#define PL2_R (0x20 | PL3_R)
29
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, int8_t, uint64_t, ==)
27
-#define PL2_W (0x10 | PL3_W)
30
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, int16_t, uint64_t, ==)
28
-#define PL1_R (0x08 | PL2_R)
31
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, int32_t, uint64_t, ==)
29
-#define PL1_W (0x04 | PL2_W)
32
30
-#define PL0_R (0x02 | PL1_R)
33
-DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
31
-#define PL0_W (0x01 | PL1_W)
34
-DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
32
+typedef enum {
35
-DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
33
+ PL3_R = 0x80,
36
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, int8_t, uint64_t, !=)
34
+ PL3_W = 0x40,
37
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, int16_t, uint64_t, !=)
35
+ PL2_R = 0x20 | PL3_R,
38
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, int32_t, uint64_t, !=)
36
+ PL2_W = 0x10 | PL3_W,
39
37
+ PL1_R = 0x08 | PL2_R,
40
DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
38
+ PL1_W = 0x04 | PL2_W,
41
DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
39
+ PL0_R = 0x02 | PL1_R,
40
+ PL0_W = 0x01 | PL1_W,
41
42
-/*
43
- * For user-mode some registers are accessible to EL0 via a kernel
44
- * trap-and-emulate ABI. In this case we define the read permissions
45
- * as actually being PL0_R. However some bits of any given register
46
- * may still be masked.
47
- */
48
+ /*
49
+ * For user-mode some registers are accessible to EL0 via a kernel
50
+ * trap-and-emulate ABI. In this case we define the read permissions
51
+ * as actually being PL0_R. However some bits of any given register
52
+ * may still be masked.
53
+ */
54
#ifdef CONFIG_USER_ONLY
55
-#define PL0U_R PL0_R
56
+ PL0U_R = PL0_R,
57
#else
58
-#define PL0U_R PL1_R
59
+ PL0U_R = PL1_R,
60
#endif
61
62
-#define PL3_RW (PL3_R | PL3_W)
63
-#define PL2_RW (PL2_R | PL2_W)
64
-#define PL1_RW (PL1_R | PL1_W)
65
-#define PL0_RW (PL0_R | PL0_W)
66
+ PL3_RW = PL3_R | PL3_W,
67
+ PL2_RW = PL2_R | PL2_W,
68
+ PL1_RW = PL1_R | PL1_W,
69
+ PL0_RW = PL0_R | PL0_W,
70
+} CPAccessRights;
71
72
typedef enum CPAccessResult {
73
/* Access is permitted */
74
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
75
/* Register type: ARM_CP_* bits/values */
76
int type;
77
/* Access rights: PL*_[RW] */
78
- int access;
79
+ CPAccessRights access;
80
/* Security state: ARM_CP_SECSTATE_* bits/values */
81
int secure;
82
/*
83
diff --git a/target/arm/helper.c b/target/arm/helper.c
84
index XXXXXXX..XXXXXXX 100644
85
--- a/target/arm/helper.c
86
+++ b/target/arm/helper.c
87
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
88
* to encompass the generic architectural permission check.
89
*/
90
if (r->state != ARM_CP_STATE_AA32) {
91
- int mask = 0;
92
+ CPAccessRights mask;
93
switch (r->opc1) {
94
case 0:
95
/* min_EL EL1, but some accessible to EL0 via kernel ABI */
42
--
96
--
43
2.18.0
97
2.25.1
44
45
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
3
Give this enum a name and use in ARMCPRegInfo,
4
add_cpreg_to_hashtable and define_one_arm_cp_reg_with_opaque.
5
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
9
Message-id: 20220501055028.646596-9-richard.henderson@linaro.org
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Tested-by: Alex Bennée <alex.bennee@linaro.org>
9
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
10
Message-id: 20180801123111.3595-5-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
target/arm/sve_helper.c | 2 +-
12
target/arm/cpregs.h | 6 +++---
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
target/arm/helper.c | 6 ++++--
14
2 files changed, 7 insertions(+), 5 deletions(-)
15
15
16
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/sve_helper.c
18
--- a/target/arm/cpregs.h
19
+++ b/target/arm/sve_helper.c
19
+++ b/target/arm/cpregs.h
20
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
20
@@ -XXX,XX +XXX,XX @@ enum {
21
uint64_t *d = vd, *n = vn;
21
* Note that we rely on the values of these enums as we iterate through
22
uint8_t *pg = vg;
22
* the various states in some places.
23
for (i = 0; i < opr_sz; i += 1) {
23
*/
24
- d[i] = n[1] & -(uint64_t)(pg[H1(i)] & 1);
24
-enum {
25
+ d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
25
+typedef enum {
26
}
26
ARM_CP_STATE_AA32 = 0,
27
ARM_CP_STATE_AA64 = 1,
28
ARM_CP_STATE_BOTH = 2,
29
-};
30
+} CPState;
31
32
/*
33
* ARM CP register secure state flags. These flags identify security state
34
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
35
uint8_t opc1;
36
uint8_t opc2;
37
/* Execution state in which this register is visible: ARM_CP_STATE_* */
38
- int state;
39
+ CPState state;
40
/* Register type: ARM_CP_* bits/values */
41
int type;
42
/* Access rights: PL*_[RW] */
43
diff --git a/target/arm/helper.c b/target/arm/helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/helper.c
46
+++ b/target/arm/helper.c
47
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
27
}
48
}
28
49
50
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
51
- void *opaque, int state, int secstate,
52
+ void *opaque, CPState state, int secstate,
53
int crm, int opc1, int opc2,
54
const char *name)
55
{
56
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
57
* bits; the ARM_CP_64BIT* flag applies only to the AArch32 view of
58
* the register, if any.
59
*/
60
- int crm, opc1, opc2, state;
61
+ int crm, opc1, opc2;
62
int crmmin = (r->crm == CP_ANY) ? 0 : r->crm;
63
int crmmax = (r->crm == CP_ANY) ? 15 : r->crm;
64
int opc1min = (r->opc1 == CP_ANY) ? 0 : r->opc1;
65
int opc1max = (r->opc1 == CP_ANY) ? 7 : r->opc1;
66
int opc2min = (r->opc2 == CP_ANY) ? 0 : r->opc2;
67
int opc2max = (r->opc2 == CP_ANY) ? 7 : r->opc2;
68
+ CPState state;
69
+
70
/* 64 bit registers have only CRm and Opc1 fields */
71
assert(!((r->type & ARM_CP_64BIT) && (r->opc2 || r->crn)));
72
/* op0 only exists in the AArch64 encodings */
29
--
73
--
30
2.18.0
74
2.25.1
31
75
32
76
diff view generated by jsdifflib
1
From: Adam Lackorzynski <adam@l4re.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Use an int64_t as a return type to restore
3
Give this enum a name and use in ARMCPRegInfo and add_cpreg_to_hashtable.
4
the negative check for arm_load_as.
4
Add the enumerator ARM_CP_SECSTATE_BOTH to clarify how 0
5
is handled in define_one_arm_cp_reg_with_opaque.
5
6
6
Signed-off-by: Adam Lackorzynski <adam@l4re.org>
7
Message-id: 20180730173712.GG4987@os.inf.tu-dresden.de
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220501055028.646596-10-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
hw/arm/boot.c | 8 ++++----
12
target/arm/cpregs.h | 7 ++++---
12
1 file changed, 4 insertions(+), 4 deletions(-)
13
target/arm/helper.c | 7 +++++--
14
2 files changed, 9 insertions(+), 5 deletions(-)
13
15
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
16
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/boot.c
18
--- a/target/arm/cpregs.h
17
+++ b/hw/arm/boot.c
19
+++ b/target/arm/cpregs.h
18
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
20
@@ -XXX,XX +XXX,XX @@ typedef enum {
19
return 0;
21
* registered entry will only have one to identify whether the entry is secure
22
* or non-secure.
23
*/
24
-enum {
25
+typedef enum {
26
+ ARM_CP_SECSTATE_BOTH = 0, /* define one cpreg for each secstate */
27
ARM_CP_SECSTATE_S = (1 << 0), /* bit[0]: Secure state register */
28
ARM_CP_SECSTATE_NS = (1 << 1), /* bit[1]: Non-secure state register */
29
-};
30
+} CPSecureState;
31
32
/*
33
* Access rights:
34
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
35
/* Access rights: PL*_[RW] */
36
CPAccessRights access;
37
/* Security state: ARM_CP_SECSTATE_* bits/values */
38
- int secure;
39
+ CPSecureState secure;
40
/*
41
* The opaque pointer passed to define_arm_cp_regs_with_opaque() when
42
* this register was defined: can be used to hand data through to the
43
diff --git a/target/arm/helper.c b/target/arm/helper.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/helper.c
46
+++ b/target/arm/helper.c
47
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
20
}
48
}
21
49
22
-static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
50
static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
23
- uint64_t *lowaddr, uint64_t *highaddr,
51
- void *opaque, CPState state, int secstate,
24
- int elf_machine, AddressSpace *as)
52
+ void *opaque, CPState state,
25
+static int64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
53
+ CPSecureState secstate,
26
+ uint64_t *lowaddr, uint64_t *highaddr,
54
int crm, int opc1, int opc2,
27
+ int elf_machine, AddressSpace *as)
55
const char *name)
28
{
56
{
29
bool elf_is64;
57
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
30
union {
58
r->secure, crm, opc1, opc2,
31
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
59
r->name);
32
} elf_header;
60
break;
33
int data_swab = 0;
61
- default:
34
bool big_endian;
62
+ case ARM_CP_SECSTATE_BOTH:
35
- uint64_t ret = -1;
63
name = g_strdup_printf("%s_S", r->name);
36
+ int64_t ret = -1;
64
add_cpreg_to_hashtable(cpu, r, opaque, state,
37
Error *err = NULL;
65
ARM_CP_SECSTATE_S,
38
66
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
39
67
ARM_CP_SECSTATE_NS,
68
crm, opc1, opc2, r->name);
69
break;
70
+ default:
71
+ g_assert_not_reached();
72
}
73
} else {
74
/* AArch64 registers get mapped to non-secure instance
40
--
75
--
41
2.18.0
76
2.25.1
42
43
diff view generated by jsdifflib
1
In do_v7m_exception_exit(), we use the exc_secure variable to track
1
From: Richard Henderson <richard.henderson@linaro.org>
2
whether the exception we're returning from is secure or non-secure.
3
Unfortunately the statement initializing this was accidentally
4
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
5
which meant that we were using the wrong value for NMI handlers.
6
Move the initialization out to the right place.
7
2
3
The new_key field is always non-zero -- drop the if.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20220501055028.646596-11-richard.henderson@linaro.org
8
[PMM: reinstated dropped PL3_RW mask]
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
12
---
10
---
13
target/arm/helper.c | 2 +-
11
target/arm/helper.c | 23 +++++++++++------------
14
1 file changed, 1 insertion(+), 1 deletion(-)
12
1 file changed, 11 insertions(+), 12 deletions(-)
15
13
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
19
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
20
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
18
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
21
/* For all other purposes, treat ES as 0 (R_HXSR) */
19
22
excret &= ~R_V7M_EXCRET_ES_MASK;
20
for (i = 0; i < ARRAY_SIZE(aliases); i++) {
23
}
21
const struct E2HAlias *a = &aliases[i];
24
+ exc_secure = excret & R_V7M_EXCRET_ES_MASK;
22
- ARMCPRegInfo *src_reg, *dst_reg;
25
}
23
+ ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
26
24
+ uint32_t *new_key;
27
if (env->v7m.exception != ARMV7M_EXCP_NMI) {
25
+ bool ok;
28
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
26
29
* which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
27
if (a->feature && !a->feature(&cpu->isar)) {
30
*/
28
continue;
31
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
29
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
32
- exc_secure = excret & R_V7M_EXCRET_ES_MASK;
30
g_assert(src_reg->opaque == NULL);
33
if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
31
34
env->v7m.faultmask[exc_secure] = 0;
32
/* Create alias before redirection so we dup the right data. */
35
}
33
- if (a->new_key) {
34
- ARMCPRegInfo *new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
35
- uint32_t *new_key = g_memdup(&a->new_key, sizeof(uint32_t));
36
- bool ok;
37
+ new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
38
+ new_key = g_memdup(&a->new_key, sizeof(uint32_t));
39
40
- new_reg->name = a->new_name;
41
- new_reg->type |= ARM_CP_ALIAS;
42
- /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
43
- new_reg->access &= PL2_RW | PL3_RW;
44
+ new_reg->name = a->new_name;
45
+ new_reg->type |= ARM_CP_ALIAS;
46
+ /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
47
+ new_reg->access &= PL2_RW | PL3_RW;
48
49
- ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
50
- g_assert(ok);
51
- }
52
+ ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
53
+ g_assert(ok);
54
55
src_reg->opaque = dst_reg;
56
src_reg->orig_readfn = src_reg->readfn ?: raw_read;
36
--
57
--
37
2.18.0
58
2.25.1
38
39
diff view generated by jsdifflib
1
From: Luc Michel <luc.michel@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add the gic_update_virt() function to update the vCPU interface states
3
Cast the uint32_t key into a gpointer directly, which
4
and raise vIRQ and vFIQ as needed. This commit renames gic_update() to
4
allows us to avoid allocating storage for each key.
5
gic_update_internal() and generalizes it to handle both cases, with a
6
`virt' parameter to track whether we are updating the CPU or vCPU
7
interfaces.
8
5
9
The main difference between CPU and vCPU is the way we select the best
6
Use g_hash_table_lookup when we already have a gpointer
10
IRQ. This part has been split into the gic_get_best_(v)irq functions.
7
(e.g. for callbacks like count_cpreg), or when using
11
For the virt case, the LRs are iterated to find the best candidate.
8
get_arm_cp_reginfo would require casting away const.
12
9
13
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20180727095421.386-17-luc.michel@greensocs.com
12
Message-id: 20220501055028.646596-12-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
14
---
18
hw/intc/arm_gic.c | 175 +++++++++++++++++++++++++++++++++++-----------
15
target/arm/cpu.c | 4 ++--
19
1 file changed, 136 insertions(+), 39 deletions(-)
16
target/arm/gdbstub.c | 2 +-
17
target/arm/helper.c | 41 ++++++++++++++++++-----------------------
18
3 files changed, 21 insertions(+), 26 deletions(-)
20
19
21
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
20
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
22
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/intc/arm_gic.c
22
--- a/target/arm/cpu.c
24
+++ b/hw/intc/arm_gic.c
23
+++ b/target/arm/cpu.c
25
@@ -XXX,XX +XXX,XX @@ static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
24
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
26
return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
25
ARMCPU *cpu = ARM_CPU(obj);
27
}
26
28
27
cpu_set_cpustate_pointers(cpu);
29
+static inline void gic_get_best_irq(GICState *s, int cpu,
28
- cpu->cp_regs = g_hash_table_new_full(g_int_hash, g_int_equal,
30
+ int *best_irq, int *best_prio, int *group)
29
- g_free, cpreg_hashtable_data_destroy);
31
+{
30
+ cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
32
+ int irq;
31
+ NULL, cpreg_hashtable_data_destroy);
33
+ int cm = 1 << cpu;
32
34
+
33
QLIST_INIT(&cpu->pre_el_change_hooks);
35
+ *best_irq = 1023;
34
QLIST_INIT(&cpu->el_change_hooks);
36
+ *best_prio = 0x100;
35
diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
37
+
36
index XXXXXXX..XXXXXXX 100644
38
+ for (irq = 0; irq < s->num_irq; irq++) {
37
--- a/target/arm/gdbstub.c
39
+ if (GIC_DIST_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
38
+++ b/target/arm/gdbstub.c
40
+ (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
39
@@ -XXX,XX +XXX,XX @@ static void arm_gen_one_xml_sysreg_tag(GString *s, DynamicGDBXMLInfo *dyn_xml,
41
+ (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
40
static void arm_register_sysreg_for_xml(gpointer key, gpointer value,
42
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) < *best_prio) {
41
gpointer p)
43
+ *best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
44
+ *best_irq = irq;
45
+ }
46
+ }
47
+ }
48
+
49
+ if (*best_irq < 1023) {
50
+ *group = GIC_DIST_TEST_GROUP(*best_irq, cm);
51
+ }
52
+}
53
+
54
+static inline void gic_get_best_virq(GICState *s, int cpu,
55
+ int *best_irq, int *best_prio, int *group)
56
+{
57
+ int lr_idx = 0;
58
+
59
+ *best_irq = 1023;
60
+ *best_prio = 0x100;
61
+
62
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
63
+ uint32_t lr_entry = s->h_lr[lr_idx][cpu];
64
+ int state = GICH_LR_STATE(lr_entry);
65
+
66
+ if (state == GICH_LR_STATE_PENDING) {
67
+ int prio = GICH_LR_PRIORITY(lr_entry);
68
+
69
+ if (prio < *best_prio) {
70
+ *best_prio = prio;
71
+ *best_irq = GICH_LR_VIRT_ID(lr_entry);
72
+ *group = GICH_LR_GROUP(lr_entry);
73
+ }
74
+ }
75
+ }
76
+}
77
+
78
+/* Return true if IRQ signaling is enabled for the given cpu and at least one
79
+ * of the given groups:
80
+ * - in the non-virt case, the distributor must be enabled for one of the
81
+ * given groups
82
+ * - in the virt case, the virtual interface must be enabled.
83
+ * - in all cases, the (v)CPU interface must be enabled for one of the given
84
+ * groups.
85
+ */
86
+static inline bool gic_irq_signaling_enabled(GICState *s, int cpu, bool virt,
87
+ int group_mask)
88
+{
89
+ if (!virt && !(s->ctlr & group_mask)) {
90
+ return false;
91
+ }
92
+
93
+ if (virt && !(s->h_hcr[cpu] & R_GICH_HCR_EN_MASK)) {
94
+ return false;
95
+ }
96
+
97
+ if (!(s->cpu_ctlr[cpu] & group_mask)) {
98
+ return false;
99
+ }
100
+
101
+ return true;
102
+}
103
+
104
/* TODO: Many places that call this routine could be optimized. */
105
/* Update interrupt status after enabled or pending bits have been changed. */
106
-static void gic_update(GICState *s)
107
+static inline void gic_update_internal(GICState *s, bool virt)
108
{
42
{
109
int best_irq;
43
- uint32_t ri_key = *(uint32_t *)key;
110
int best_prio;
44
+ uint32_t ri_key = (uintptr_t)key;
111
- int irq;
45
ARMCPRegInfo *ri = value;
112
int irq_level, fiq_level;
46
RegisterSysregXmlParam *param = (RegisterSysregXmlParam *)p;
113
- int cpu;
47
GString *s = param->s;
114
- int cm;
48
diff --git a/target/arm/helper.c b/target/arm/helper.c
115
+ int cpu, cpu_iface;
49
index XXXXXXX..XXXXXXX 100644
116
+ int group = 0;
50
--- a/target/arm/helper.c
117
+ qemu_irq *irq_lines = virt ? s->parent_virq : s->parent_irq;
51
+++ b/target/arm/helper.c
118
+ qemu_irq *fiq_lines = virt ? s->parent_vfiq : s->parent_fiq;
52
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
119
53
static void add_cpreg_to_list(gpointer key, gpointer opaque)
120
for (cpu = 0; cpu < s->num_cpu; cpu++) {
54
{
121
- cm = 1 << cpu;
55
ARMCPU *cpu = opaque;
122
- s->current_pending[cpu] = 1023;
56
- uint64_t regidx;
123
- if (!(s->ctlr & (GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1))
57
- const ARMCPRegInfo *ri;
124
- || !(s->cpu_ctlr[cpu] & (GICC_CTLR_EN_GRP0 | GICC_CTLR_EN_GRP1))) {
58
-
125
- qemu_irq_lower(s->parent_irq[cpu]);
59
- regidx = *(uint32_t *)key;
126
- qemu_irq_lower(s->parent_fiq[cpu]);
60
- ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
127
+ cpu_iface = virt ? (cpu + GIC_NCPU) : cpu;
61
+ uint32_t regidx = (uintptr_t)key;
128
+
62
+ const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
129
+ s->current_pending[cpu_iface] = 1023;
63
130
+ if (!gic_irq_signaling_enabled(s, cpu, virt,
64
if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
131
+ GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1)) {
65
cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
132
+ qemu_irq_lower(irq_lines[cpu]);
66
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
133
+ qemu_irq_lower(fiq_lines[cpu]);
67
static void count_cpreg(gpointer key, gpointer opaque)
68
{
69
ARMCPU *cpu = opaque;
70
- uint64_t regidx;
71
const ARMCPRegInfo *ri;
72
73
- regidx = *(uint32_t *)key;
74
- ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
75
+ ri = g_hash_table_lookup(cpu->cp_regs, key);
76
77
if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
78
cpu->cpreg_array_len++;
79
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
80
81
static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
82
{
83
- uint64_t aidx = cpreg_to_kvm_id(*(uint32_t *)a);
84
- uint64_t bidx = cpreg_to_kvm_id(*(uint32_t *)b);
85
+ uint64_t aidx = cpreg_to_kvm_id((uintptr_t)a);
86
+ uint64_t bidx = cpreg_to_kvm_id((uintptr_t)b);
87
88
if (aidx > bidx) {
89
return 1;
90
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
91
for (i = 0; i < ARRAY_SIZE(aliases); i++) {
92
const struct E2HAlias *a = &aliases[i];
93
ARMCPRegInfo *src_reg, *dst_reg, *new_reg;
94
- uint32_t *new_key;
95
bool ok;
96
97
if (a->feature && !a->feature(&cpu->isar)) {
134
continue;
98
continue;
135
}
99
}
136
- best_prio = 0x100;
100
137
- best_irq = 1023;
101
- src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key);
138
- for (irq = 0; irq < s->num_irq; irq++) {
102
- dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key);
139
- if (GIC_DIST_TEST_ENABLED(irq, cm) &&
103
+ src_reg = g_hash_table_lookup(cpu->cp_regs,
140
- gic_test_pending(s, irq, cm) &&
104
+ (gpointer)(uintptr_t)a->src_key);
141
- (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
105
+ dst_reg = g_hash_table_lookup(cpu->cp_regs,
142
- (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
106
+ (gpointer)(uintptr_t)a->dst_key);
143
- if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
107
g_assert(src_reg != NULL);
144
- best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
108
g_assert(dst_reg != NULL);
145
- best_irq = irq;
109
146
- }
110
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
147
- }
111
148
+
112
/* Create alias before redirection so we dup the right data. */
149
+ if (virt) {
113
new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
150
+ gic_get_best_virq(s, cpu, &best_irq, &best_prio, &group);
114
- new_key = g_memdup(&a->new_key, sizeof(uint32_t));
151
+ } else {
115
152
+ gic_get_best_irq(s, cpu, &best_irq, &best_prio, &group);
116
new_reg->name = a->new_name;
117
new_reg->type |= ARM_CP_ALIAS;
118
/* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
119
new_reg->access &= PL2_RW | PL3_RW;
120
121
- ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
122
+ ok = g_hash_table_insert(cpu->cp_regs,
123
+ (gpointer)(uintptr_t)a->new_key, new_reg);
124
g_assert(ok);
125
126
src_reg->opaque = dst_reg;
127
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
128
/* Private utility function for define_one_arm_cp_reg_with_opaque():
129
* add a single reginfo struct to the hash table.
130
*/
131
- uint32_t *key = g_new(uint32_t, 1);
132
+ uint32_t key;
133
ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
134
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
135
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
136
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
137
if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
138
r2->cp = CP_REG_ARM64_SYSREG_CP;
153
}
139
}
154
140
- *key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
155
if (best_irq != 1023) {
141
- r2->opc0, opc1, opc2);
156
trace_gic_update_bestirq(cpu, best_irq, best_prio,
142
+ key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
157
- s->priority_mask[cpu], s->running_priority[cpu]);
143
+ r2->opc0, opc1, opc2);
158
+ s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
144
} else {
159
}
145
- *key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
160
146
+ key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
161
irq_level = fiq_level = 0;
162
163
- if (best_prio < s->priority_mask[cpu]) {
164
- s->current_pending[cpu] = best_irq;
165
- if (best_prio < s->running_priority[cpu]) {
166
- int group = GIC_DIST_TEST_GROUP(best_irq, cm);
167
-
168
- if (extract32(s->ctlr, group, 1) &&
169
- extract32(s->cpu_ctlr[cpu], group, 1)) {
170
- if (group == 0 && s->cpu_ctlr[cpu] & GICC_CTLR_FIQ_EN) {
171
+ if (best_prio < s->priority_mask[cpu_iface]) {
172
+ s->current_pending[cpu_iface] = best_irq;
173
+ if (best_prio < s->running_priority[cpu_iface]) {
174
+ if (gic_irq_signaling_enabled(s, cpu, virt, 1 << group)) {
175
+ if (group == 0 &&
176
+ s->cpu_ctlr[cpu_iface] & GICC_CTLR_FIQ_EN) {
177
DPRINTF("Raised pending FIQ %d (cpu %d)\n",
178
- best_irq, cpu);
179
+ best_irq, cpu_iface);
180
fiq_level = 1;
181
- trace_gic_update_set_irq(cpu, "fiq", fiq_level);
182
+ trace_gic_update_set_irq(cpu, virt ? "vfiq" : "fiq",
183
+ fiq_level);
184
} else {
185
DPRINTF("Raised pending IRQ %d (cpu %d)\n",
186
- best_irq, cpu);
187
+ best_irq, cpu_iface);
188
irq_level = 1;
189
- trace_gic_update_set_irq(cpu, "irq", irq_level);
190
+ trace_gic_update_set_irq(cpu, virt ? "virq" : "irq",
191
+ irq_level);
192
}
193
}
194
}
195
}
196
197
- qemu_set_irq(s->parent_irq[cpu], irq_level);
198
- qemu_set_irq(s->parent_fiq[cpu], fiq_level);
199
+ qemu_set_irq(irq_lines[cpu], irq_level);
200
+ qemu_set_irq(fiq_lines[cpu], fiq_level);
201
}
147
}
202
}
148
if (opaque) {
203
149
r2->opaque = opaque;
204
+static void gic_update(GICState *s)
150
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
205
+{
151
* requested.
206
+ gic_update_internal(s, false);
152
*/
207
+}
153
if (!(r->type & ARM_CP_OVERRIDE)) {
208
+
154
- ARMCPRegInfo *oldreg;
209
/* Return true if this LR is empty, i.e. the corresponding bit
155
- oldreg = g_hash_table_lookup(cpu->cp_regs, key);
210
* in ELRSR is set.
156
+ const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
211
*/
157
if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
212
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
158
fprintf(stderr, "Register redefined: cp=%d %d bit "
213
&& !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
159
"crn=%d crm=%d opc1=%d opc2=%d, "
214
}
160
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
215
161
g_assert_not_reached();
216
+static void gic_update_virt(GICState *s)
217
+{
218
+ gic_update_internal(s, true);
219
+}
220
+
221
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
222
int cm, int target)
223
{
224
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
225
}
162
}
226
}
163
}
227
164
- g_hash_table_insert(cpu->cp_regs, key, r2);
228
- gic_update(s);
165
+ g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
229
+ if (gic_is_vcpu(cpu)) {
230
+ gic_update_virt(s);
231
+ } else {
232
+ gic_update(s);
233
+ }
234
DPRINTF("ACK %d\n", irq);
235
return ret;
236
}
166
}
237
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
167
238
*/
168
239
int rcpu = gic_get_vcpu_real_id(cpu);
169
@@ -XXX,XX +XXX,XX @@ void modify_arm_cp_regs_with_len(ARMCPRegInfo *regs, size_t regs_len,
240
s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
170
241
+
171
const ARMCPRegInfo *get_arm_cp_reginfo(GHashTable *cpregs, uint32_t encoded_cp)
242
+ /* Update the virtual interface in case a maintenance interrupt should
172
{
243
+ * be raised.
173
- return g_hash_table_lookup(cpregs, &encoded_cp);
244
+ */
174
+ return g_hash_table_lookup(cpregs, (gpointer)(uintptr_t)encoded_cp);
245
+ gic_update_virt(s);
246
return;
247
}
248
249
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
250
}
251
}
252
253
+ gic_update_virt(s);
254
return;
255
}
256
257
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
258
"gic_cpu_write: Bad offset %x\n", (int)offset);
259
return MEMTX_OK;
260
}
261
- gic_update(s);
262
+
263
+ if (gic_is_vcpu(cpu)) {
264
+ gic_update_virt(s);
265
+ } else {
266
+ gic_update(s);
267
+ }
268
+
269
return MEMTX_OK;
270
}
175
}
271
176
272
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
177
void arm_cp_write_ignore(CPUARMState *env, const ARMCPRegInfo *ri,
273
return MEMTX_OK;
274
}
275
276
+ gic_update_virt(s);
277
return MEMTX_OK;
278
}
279
280
--
178
--
281
2.18.0
179
2.25.1
282
283
diff view generated by jsdifflib
1
From: Julia Suvorova <jusual@mail.ru>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Forbid stack alignment change. (CCR)
3
Simplify freeing cp_regs hash table entries by using a single
4
Reserve FAULTMASK, BASEPRI registers.
4
allocation for the entire value.
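The essence of the single allocation, as it appears in the helper.c
hunk further down, is that the name string is laid out immediately
after the struct itself, so one g_free() of r2 releases both:

    /* r2 + 1 is the first byte past the struct; the name is copied
     * there and r2->name pointed at the copy.
     */
    size_t name_len = strlen(name) + 1;
    ARMCPRegInfo *r2 = g_malloc(sizeof(*r2) + name_len);
    *r2 = *r;
    r2->name = memcpy(r2 + 1, name, name_len);
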
5
Report any fault as a HardFault. Disable MemManage, BusFault and
6
UsageFault, so they are always escalated to HardFault. (SHCSR)
7
5
8
Signed-off-by: Julia Suvorova <jusual@mail.ru>
6
This fixes a theoretical bug if we were to ever free the entire
9
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
7
hash table, because we've been installing string literal constants
10
Message-id: 20180718095628.26442-1-jusual@mail.ru
8
into the cpreg structure in define_arm_vh_e2h_redirects_aliases.
9
However, at present we only free entries created for AArch32
10
wildcard cpregs which get overwritten by more specific cpregs,
11
so this bug is never exposed.
12
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20220501055028.646596-13-richard.henderson@linaro.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
17
---
14
hw/intc/armv7m_nvic.c | 10 ++++++++++
18
target/arm/cpu.c | 16 +---------------
15
target/arm/cpu.c | 4 ++++
19
target/arm/helper.c | 10 ++++++++--
16
target/arm/helper.c | 13 +++++++++++--
20
2 files changed, 9 insertions(+), 17 deletions(-)
17
3 files changed, 25 insertions(+), 2 deletions(-)
18
21
19
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/intc/armv7m_nvic.c
22
+++ b/hw/intc/armv7m_nvic.c
23
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
24
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
25
return val;
26
case 0xd24: /* System Handler Control and State (SHCSR) */
27
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
28
+ goto bad_offset;
29
+ }
30
val = 0;
31
if (attrs.secure) {
32
if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
33
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
34
cpu->env.v7m.scr[attrs.secure] = value;
35
break;
36
case 0xd14: /* Configuration Control. */
37
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
38
+ goto bad_offset;
39
+ }
40
+
41
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
42
value &= (R_V7M_CCR_STKALIGN_MASK |
43
R_V7M_CCR_BFHFNMIGN_MASK |
44
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
45
cpu->env.v7m.ccr[attrs.secure] = value;
46
break;
47
case 0xd24: /* System Handler Control and State (SHCSR) */
48
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
49
+ goto bad_offset;
50
+ }
51
if (attrs.secure) {
52
s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
53
/* Secure HardFault active bit cannot be written */
54
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
22
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
55
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
56
--- a/target/arm/cpu.c
24
--- a/target/arm/cpu.c
57
+++ b/target/arm/cpu.c
25
+++ b/target/arm/cpu.c
58
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
26
@@ -XXX,XX +XXX,XX @@ uint64_t arm_cpu_mp_affinity(int idx, uint8_t clustersz)
59
env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_NONBASETHRDENA_MASK;
27
return (Aff1 << ARM_AFF1_SHIFT) | Aff0;
60
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_NONBASETHRDENA_MASK;
28
}
61
}
29
62
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
30
-static void cpreg_hashtable_data_destroy(gpointer data)
63
+ env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_UNALIGN_TRP_MASK;
31
-{
64
+ env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
32
- /*
65
+ }
33
- * Destroy function for cpu->cp_regs hashtable data entries.
66
34
- * We must free the name string because it was g_strdup()ed in
67
/* Unlike A/R profile, M profile defines the reset LR value */
35
- * add_cpreg_to_hashtable(). It's OK to cast away the 'const'
68
env->regs[14] = 0xffffffff;
36
- * from r->name because we know we definitely allocated it.
37
- */
38
- ARMCPRegInfo *r = data;
39
-
40
- g_free((void *)r->name);
41
- g_free(r);
42
-}
43
-
44
static void arm_cpu_initfn(Object *obj)
45
{
46
ARMCPU *cpu = ARM_CPU(obj);
47
48
cpu_set_cpustate_pointers(cpu);
49
cpu->cp_regs = g_hash_table_new_full(g_direct_hash, g_direct_equal,
50
- NULL, cpreg_hashtable_data_destroy);
51
+ NULL, g_free);
52
53
QLIST_INIT(&cpu->pre_el_change_hooks);
54
QLIST_INIT(&cpu->el_change_hooks);
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
55
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
56
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
57
--- a/target/arm/helper.c
72
+++ b/target/arm/helper.c
58
+++ b/target/arm/helper.c
73
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
59
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
74
env->v7m.primask[M_REG_NS] = val & 1;
60
* add a single reginfo struct to the hash table.
75
return;
61
*/
76
case 0x91: /* BASEPRI_NS */
62
uint32_t key;
77
- if (!env->v7m.secure) {
63
- ARMCPRegInfo *r2 = g_memdup(r, sizeof(ARMCPRegInfo));
78
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
64
+ ARMCPRegInfo *r2;
79
return;
65
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
80
}
66
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
81
env->v7m.basepri[M_REG_NS] = val & 0xff;
67
+ size_t name_len;
82
return;
68
+
83
case 0x93: /* FAULTMASK_NS */
69
+ /* Combine cpreg and name into one allocation. */
84
- if (!env->v7m.secure) {
70
+ name_len = strlen(name) + 1;
85
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
71
+ r2 = g_malloc(sizeof(*r2) + name_len);
86
return;
72
+ *r2 = *r;
87
}
73
+ r2->name = memcpy(r2 + 1, name, name_len);
88
env->v7m.faultmask[M_REG_NS] = val & 1;
74
89
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
75
- r2->name = g_strdup(name);
90
env->v7m.primask[env->v7m.secure] = val & 1;
76
/* Reset the secure state to the specific incoming state. This is
91
break;
77
* necessary as the register may have been defined with both states.
92
case 17: /* BASEPRI */
78
*/
93
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
94
+ goto bad_reg;
95
+ }
96
env->v7m.basepri[env->v7m.secure] = val & 0xff;
97
break;
98
case 18: /* BASEPRI_MAX */
99
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
100
+ goto bad_reg;
101
+ }
102
val &= 0xff;
103
if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
104
|| env->v7m.basepri[env->v7m.secure] == 0)) {
105
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
106
}
107
break;
108
case 19: /* FAULTMASK */
109
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
110
+ goto bad_reg;
111
+ }
112
env->v7m.faultmask[env->v7m.secure] = val & 1;
113
break;
114
case 20: /* CONTROL */
115
--
79
--
116
2.18.0
80
2.25.1
117
118
diff view generated by jsdifflib
1
Tailchaining is an optimization in handling of exception return
1
From: Richard Henderson <richard.henderson@linaro.org>
2
for M-profile cores: if we are about to pop the exception stack
3
for an exception return, but there is a pending exception which
4
is higher priority than the priority we are returning to, then
5
instead of unstacking and then immediately taking the exception
6
and stacking registers again, we can chain to the pending
7
exception without unstacking and stacking.
8
2
9
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
3
Move the computation of key to the top of the function.
10
exceptions; for v8M this is architecturally required. Implement it
4
Hoist the resolution of cp as well, as an input to the
11
in QEMU for all M-profile cores, since in practice v6M and v7M
5
computation of key.
12
hardware implementations generally do have it.
13
6
14
(We were already doing tailchaining for derived exceptions which
7
This will be required by a subsequent patch.
15
happened during exception return, like the validity checks and
16
stack access failures; these have always been required to be
17
tailchained for all versions of the architecture.)
18
8
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20220501055028.646596-14-richard.henderson@linaro.org
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
22
---
13
---
23
target/arm/helper.c | 16 ++++++++++++++++
14
target/arm/helper.c | 49 +++++++++++++++++++++++++--------------------
24
1 file changed, 16 insertions(+)
15
1 file changed, 27 insertions(+), 22 deletions(-)
25
16
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.c
19
--- a/target/arm/helper.c
29
+++ b/target/arm/helper.c
20
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
21
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
31
return;
22
ARMCPRegInfo *r2;
32
}
23
int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
33
24
int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
34
+ /*
25
+ int cp = r->cp;
35
+ * Tailchaining: if there is currently a pending exception that
26
size_t name_len;
36
+ * is high enough priority to preempt execution at the level we're
27
37
+ * about to return to, then just directly take that exception now,
28
+ switch (state) {
38
+ * avoiding an unstack-and-then-stack. Note that now we have
29
+ case ARM_CP_STATE_AA32:
39
+ * deactivated the previous exception by calling armv7m_nvic_complete_irq()
30
+ /* We assume it is a cp15 register if the .cp field is left unset. */
40
+ * our current execution priority is already the execution priority we are
31
+ if (cp == 0 && r->state == ARM_CP_STATE_BOTH) {
41
+ * returning to -- none of the state we would unstack or set based on
32
+ cp = 15;
42
+ * the EXCRET value affects it.
33
+ }
43
+ */
34
+ key = ENCODE_CP_REG(cp, is64, ns, r->crn, crm, opc1, opc2);
44
+ if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
35
+ break;
45
+ qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
36
+ case ARM_CP_STATE_AA64:
46
+ v7m_exception_taken(cpu, excret, true, false);
37
+ /*
47
+ return;
38
+ * To allow abbreviation of ARMCPRegInfo definitions, we treat
39
+ * cp == 0 as equivalent to the value for "standard guest-visible
40
+ * sysreg". STATE_BOTH definitions are also always "standard sysreg"
41
+ * in their AArch64 view (the .cp value may be non-zero for the
42
+ * benefit of the AArch32 view).
43
+ */
44
+ if (cp == 0 || r->state == ARM_CP_STATE_BOTH) {
45
+ cp = CP_REG_ARM64_SYSREG_CP;
46
+ }
47
+ key = ENCODE_AA64_CP_REG(cp, r->crn, crm, r->opc0, opc1, opc2);
48
+ break;
49
+ default:
50
+ g_assert_not_reached();
48
+ }
51
+ }
49
+
52
+
50
switch_v7m_security_state(env, return_to_secure);
53
/* Combine cpreg and name into one allocation. */
51
54
name_len = strlen(name) + 1;
52
{
55
r2 = g_malloc(sizeof(*r2) + name_len);
56
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
57
}
58
59
if (r->state == ARM_CP_STATE_BOTH) {
60
- /* We assume it is a cp15 register if the .cp field is left unset.
61
- */
62
- if (r2->cp == 0) {
63
- r2->cp = 15;
64
- }
65
-
66
#if HOST_BIG_ENDIAN
67
if (r2->fieldoffset) {
68
r2->fieldoffset += sizeof(uint32_t);
69
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
70
#endif
71
}
72
}
73
- if (state == ARM_CP_STATE_AA64) {
74
- /* To allow abbreviation of ARMCPRegInfo
75
- * definitions, we treat cp == 0 as equivalent to
76
- * the value for "standard guest-visible sysreg".
77
- * STATE_BOTH definitions are also always "standard
78
- * sysreg" in their AArch64 view (the .cp value may
79
- * be non-zero for the benefit of the AArch32 view).
80
- */
81
- if (r->cp == 0 || r->state == ARM_CP_STATE_BOTH) {
82
- r2->cp = CP_REG_ARM64_SYSREG_CP;
83
- }
84
- key = ENCODE_AA64_CP_REG(r2->cp, r2->crn, crm,
85
- r2->opc0, opc1, opc2);
86
- } else {
87
- key = ENCODE_CP_REG(r2->cp, is64, ns, r2->crn, crm, opc1, opc2);
88
- }
89
if (opaque) {
90
r2->opaque = opaque;
91
}
92
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
93
/* Make sure reginfo passed to helpers for wildcarded regs
94
* has the correct crm/opc1/opc2 for this reg, not CP_ANY:
95
*/
96
+ r2->cp = cp;
97
r2->crm = crm;
98
r2->opc1 = opc1;
99
r2->opc2 = opc2;
53
--
100
--
54
2.18.0
101
2.25.1
55
56
diff view generated by jsdifflib
1
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
and TDA, which we implement in the functions access_tdra(),
3
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
4
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
5
Implement this by having the access functions check MDCR_EL2.TDE
6
and HCR_EL2.TGE.
7
2
3
Put most of the value writeback to the same place,
4
and improve the comment that goes with them.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20220501055028.646596-15-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
11
---
10
---
12
target/arm/helper.c | 18 ++++++++++++------
11
target/arm/helper.c | 28 ++++++++++++----------------
13
1 file changed, 12 insertions(+), 6 deletions(-)
12
1 file changed, 12 insertions(+), 16 deletions(-)
14
13
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
18
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
20
bool isread)
19
*r2 = *r;
21
{
20
r2->name = memcpy(r2 + 1, name, name_len);
22
int el = arm_current_el(env);
21
23
+ bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
22
- /* Reset the secure state to the specific incoming state. This is
24
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
23
- * necessary as the register may have been defined with both states.
25
+ (env->cp15.hcr_el2 & HCR_TGE);
24
+ /*
26
25
+ * Update fields to match the instantiation, overwriting wildcards
27
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDOSA)
26
+ * such as CP_ANY, ARM_CP_STATE_BOTH, or ARM_CP_SECSTATE_BOTH.
28
- && !arm_is_secure_below_el3(env)) {
27
*/
29
+ if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
28
+ r2->cp = cp;
30
return CP_ACCESS_TRAP_EL2;
29
+ r2->crm = crm;
30
+ r2->opc1 = opc1;
31
+ r2->opc2 = opc2;
32
+ r2->state = state;
33
r2->secure = secstate;
34
+ if (opaque) {
35
+ r2->opaque = opaque;
36
+ }
37
38
if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
39
/* Register is banked (using both entries in array).
40
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
41
#endif
42
}
31
}
43
}
32
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
44
- if (opaque) {
33
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
45
- r2->opaque = opaque;
34
bool isread)
46
- }
35
{
47
- /* reginfo passed to helpers is correct for the actual access,
36
int el = arm_current_el(env);
48
- * and is never ARM_CP_STATE_BOTH:
37
+ bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
49
- */
38
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
50
- r2->state = state;
39
+ (env->cp15.hcr_el2 & HCR_TGE);
51
- /* Make sure reginfo passed to helpers for wildcarded regs
40
52
- * has the correct crm/opc1/opc2 for this reg, not CP_ANY:
41
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDRA)
53
- */
42
- && !arm_is_secure_below_el3(env)) {
54
- r2->cp = cp;
43
+ if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
55
- r2->crm = crm;
44
return CP_ACCESS_TRAP_EL2;
56
- r2->opc1 = opc1;
45
}
57
- r2->opc2 = opc2;
46
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
58
+
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
59
/* By convention, for wildcarded registers only the first
48
bool isread)
60
* entry is used for migration; the others are marked as
49
{
61
* ALIAS so we don't try to transfer the register
50
int el = arm_current_el(env);
51
+ bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
52
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
53
+ (env->cp15.hcr_el2 & HCR_TGE);
54
55
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDA)
56
- && !arm_is_secure_below_el3(env)) {
57
+ if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
58
return CP_ACCESS_TRAP_EL2;
59
}
60
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
61
--
62
--
62
2.18.0
63
2.25.1
63
64
diff view generated by jsdifflib
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 23 -----------------------
 1 file changed, 23 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
 
     fi->type = ARMFault_Permission;
     fi->level = 1;
-    /*
-     * Core QEMU code can't handle execution from small pages yet, so
-     * don't try it. This way we'll get an MPU exception, rather than
-     * eventually causing QEMU to exit in get_page_addr_code().
-     */
-    if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
-        qemu_log_mask(LOG_UNIMP,
-                      "MPU: No support for execution from regions "
-                      "smaller than 1K\n");
-        *prot &= ~PAGE_EXEC;
-    }
     return !(*prot & (1 << access_type));
 }
 
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
 
     fi->type = ARMFault_Permission;
     fi->level = 1;
-    /*
-     * Core QEMU code can't handle execution from small pages yet, so
-     * don't try it. This means any attempted execution will generate
-     * an MPU exception, rather than eventually causing QEMU to exit in
-     * get_page_addr_code().
-     */
-    if (*is_subpage && (*prot & PAGE_EXEC)) {
-        qemu_log_mask(LOG_UNIMP,
-                      "MPU: No support for execution from regions "
-                      "smaller than 1K\n");
-        *prot &= ~PAGE_EXEC;
-    }
     return !(*prot & (1 << access_type));
 }
 
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Bool is a more appropriate type for these variables.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
      */
     uint32_t key;
     ARMCPRegInfo *r2;
-    int is64 = (r->type & ARM_CP_64BIT) ? 1 : 0;
-    int ns = (secstate & ARM_CP_SECSTATE_NS) ? 1 : 0;
+    bool is64 = r->type & ARM_CP_64BIT;
+    bool ns = secstate & ARM_CP_SECSTATE_NS;
     int cp = r->cp;
     size_t name_len;
--
2.25.1
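For reference, a standalone sketch of the workaround that the first patch
above deletes (illustrative only; the constants are assumptions, not the
QEMU definitions):

#define TARGET_PAGE_SIZE 0x400   /* assumed 1K target page */
#define PAGE_EXEC 0x4            /* assumed QEMU-style prot bit */

static int clamp_small_region_prot(int prot, unsigned region_size)
{
    /* Old behaviour: strip execute permission from any region smaller
     * than a target page, so the guest took an MPU fault instead. */
    if (region_size < TARGET_PAGE_SIZE && (prot & PAGE_EXEC)) {
        prot &= ~PAGE_EXEC;
    }
    return prot;
}

With full small-region execution support in the core code, this clamp is no
longer needed.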
From: Richard Henderson <richard.henderson@linaro.org>

The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.

Rather than bounding in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 5 ----
 target/arm/translate-sve.c | 49 +++++++++++++++++++++++++-------------
 2 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
         return flags;
     }
 
-    /* Scale from predicate element count to bits. */
-    count <<= esz;
-    /* Bound to the bits in the predicate. */
-    count = MIN(count, oprsz * 8);
-
     /* Set all of the requested bits. */
     for (i = 0; i < count / 64; ++i) {
         d->p[i] = esz_mask;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
 
 static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
 {
-    if (!sve_access_check(s)) {
-        return true;
-    }
-
-    TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
-    TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 op0, op1, t0, t1, tmax;
     TCGv_i32 t2, t3;
     TCGv_ptr ptr;
     unsigned desc, vsz = vec_full_reg_size(s);
     TCGCond cond;
 
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    op0 = read_cpu_reg(s, a->rn, 1);
+    op1 = read_cpu_reg(s, a->rm, 1);
+
     if (!a->sf) {
         if (a->u) {
             tcg_gen_ext32u_i64(op0, op0);
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
 
     /* For the helper, compress the different conditions into a computation
      * of how many iterations for which the condition is true.
-     *
-     * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
-     * 2**64 iterations, overflowing to 0. Of course, predicate registers
-     * aren't that large, so any value >= predicate size is sufficient.
      */
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     tcg_gen_sub_i64(t0, op1, op0);
 
-    /* t0 = MIN(op1 - op0, vsz). */
-    tcg_gen_movi_i64(t1, vsz);
-    tcg_gen_umin_i64(t0, t0, t1);
+    tmax = tcg_const_i64(vsz >> a->esz);
     if (a->eq) {
         /* Equality means one more iteration. */
         tcg_gen_addi_i64(t0, t0, 1);
+
+        /* If op1 is max (un)signed integer (and the only time the addition
+         * above could overflow), then we produce an all-true predicate by
+         * setting the count to the vector length. This is because the
+         * pseudocode is described as an increment + compare loop, and the
+         * max integer would always compare true.
+         */
+        tcg_gen_movi_i64(t1, (a->sf
+                              ? (a->u ? UINT64_MAX : INT64_MAX)
+                              : (a->u ? UINT32_MAX : INT32_MAX)));
+        tcg_gen_movcond_i64(TCG_COND_EQ, t0, op1, t1, tmax, t0);
     }
 
-    /* t0 = (condition true ? t0 : 0). */
+    /* Bound to the maximum. */
+    tcg_gen_umin_i64(t0, t0, tmax);
+    tcg_temp_free_i64(tmax);
+
+    /* Set the count to zero if the condition is false. */
     cond = (a->u
             ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
             : (a->eq ? TCG_COND_LE : TCG_COND_LT));
     tcg_gen_movi_i64(t1, 0);
     tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+    tcg_temp_free_i64(t1);
 
+    /* Since we're bounded, pass as a 32-bit type. */
     t2 = tcg_temp_new_i32();
     tcg_gen_extrl_i64_i32(t2, t0);
     tcg_temp_free_i64(t0);
-    tcg_temp_free_i64(t1);
+
+    /* Scale elements to bits. */
+    tcg_gen_shli_i32(t2, t2, a->esz);
 
     desc = (vsz / 8) - 2;
     desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Computing isbanked only once makes the code a bit easier to read.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
     bool is64 = r->type & ARM_CP_64BIT;
     bool ns = secstate & ARM_CP_SECSTATE_NS;
     int cp = r->cp;
+    bool isbanked;
     size_t name_len;
 
     switch (state) {
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         r2->opaque = opaque;
     }
 
-    if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
+    isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
+    if (isbanked) {
         /* Register is banked (using both entries in array).
          * Overwriting fieldoffset as the array is only used to define
          * banked registers but later only fieldoffset is used.
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
 
     if (state == ARM_CP_STATE_AA32) {
-        if (r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1]) {
+        if (isbanked) {
             /* If the register is banked then we don't need to migrate or
              * reset the 32-bit instance in certain cases:
              *
--
2.25.1
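A standalone sketch of the WHILE count computation described in the first
commit message above, for the unsigned 64-bit case (illustrative only, not
the TCG code):

#include <stdbool.h>
#include <stdint.h>

static uint64_t while_count_u64(uint64_t op0, uint64_t op1, bool eq,
                                uint64_t max_elements)
{
    uint64_t count;

    if (!(eq ? op0 <= op1 : op0 < op1)) {
        return 0;                    /* condition false on entry */
    }
    count = op1 - op0;
    if (eq) {
        count += 1;                  /* WHILELE: one extra iteration */
        if (op1 == UINT64_MAX) {
            /* Comparing <= the maximum integer is always true, so the
             * pseudocode loop never terminates: all-true predicate. */
            count = max_elements;
        }
    }
    return count < max_elements ? count : max_elements;
}

The helper then receives this count scaled from predicate elements to
predicate bits (count << esz), which is exactly the tcg_gen_shli_i32() in
the diff.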
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
---
 target/arm/helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
     if (mmu_idx == ARMMMUIdx_S2NS) {
         return (env->cp15.hcr_el2 & HCR_VM) == 0;
     }
+
+    if (env->cp15.hcr_el2 & HCR_TGE) {
+        /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
+        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+            return true;
+        }
+    }
+
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Perform the override check early, so that it is still done
even when we decide to discard an unreachable cpreg.

Use assert not printf+abort.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         g_assert_not_reached();
     }
 
+    /* Overriding of an existing definition must be explicitly requested. */
+    if (!(r->type & ARM_CP_OVERRIDE)) {
+        const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
+        if (oldreg) {
+            assert(oldreg->type & ARM_CP_OVERRIDE);
+        }
+    }
+
     /* Combine cpreg and name into one allocation. */
     name_len = strlen(name) + 1;
     r2 = g_malloc(sizeof(*r2) + name_len);
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         assert(!raw_accessors_invalid(r2));
     }
 
-    /* Overriding of an existing definition must be explicitly
-     * requested.
-     */
-    if (!(r->type & ARM_CP_OVERRIDE)) {
-        const ARMCPRegInfo *oldreg = get_arm_cp_reginfo(cpu->cp_regs, key);
-        if (oldreg && !(oldreg->type & ARM_CP_OVERRIDE)) {
-            fprintf(stderr, "Register redefined: cp=%d %d bit "
-                    "crn=%d crm=%d opc1=%d opc2=%d, "
-                    "was %s, now %s\n", r2->cp, 32 + 32 * is64,
-                    r2->crn, r2->crm, r2->opc1, r2->opc2,
-                    oldreg->name, r2->name);
-            g_assert_not_reached();
-        }
-    }
     g_hash_table_insert(cpu->cp_regs, (gpointer)(uintptr_t)key, r2);
 }
 
--
2.25.1
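The MMU-disable rule from the first patch above, as a simplified standalone
predicate (the HCR_TGE bit position is an assumption from the architecture,
not taken from the QEMU headers):

#include <stdbool.h>
#include <stdint.h>

#define HCR_TGE (1ull << 27)   /* assumed bit position */
#define SCTLR_M (1u << 0)

static bool stage1_translation_disabled(uint64_t hcr_el2, uint32_t sctlr,
                                        bool regime_is_secure, int regime_el)
{
    if ((hcr_el2 & HCR_TGE) && !regime_is_secure && regime_el == 1) {
        /* TGE: NS EL0/1 behave as if SCTLR_EL1.M were zero. */
        return true;
    }
    return (sctlr & SCTLR_M) == 0;
}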
From: Luc Michel <luc.michel@greensocs.com>

This commit improves the way the GIC is realized and connected in the
ZynqMP SoC. The security extensions are enabled only if requested in the
machine state. The same goes for the virtualization extensions.

All the GIC to APU CPU(s) IRQ lines are now connected, including FIQ,
vIRQ and vFIQ. The missing CPU to GIC timers IRQ connections are also
added (HYP and SEC timers).

The GIC maintenance IRQs are back-wired to the correct GIC PPIs.

Finally, the MMIO mappings are reworked to take into account the ZynqMP
specifics. The GIC (v)CPU interface is aliased 16 times:
  * for the first 0x1000 bytes from 0xf9010000 to 0xf901f000
  * for the second 0x1000 bytes from 0xf9020000 to 0xf902f000
The virtual interface and virtual CPU interface are mapped only when
virtualization extensions are requested. The XlnxZynqMPGICRegion struct
has been enhanced to carry all this information.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180727095421.386-20-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-zynqmp.h | 4 +-
 hw/arm/xlnx-zynqmp.c | 92 ++++++++++++++++++++++++++++++++----
 2 files changed, 86 insertions(+), 10 deletions(-)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -XXX,XX +XXX,XX @@
 #define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
 #define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
 
-#define XLNX_ZYNQMP_GIC_REGIONS 2
+#define XLNX_ZYNQMP_GIC_REGIONS 6
 
 /* ZynqMP maps the ARM GIC regions (GICC, GICD ...) at consecutive 64k offsets
  * and under-decodes the 64k region. This mirrors the 4k regions to every 4k
@@ -XXX,XX +XXX,XX @@
  */
 
 #define XLNX_ZYNQMP_GIC_REGION_SIZE 0x1000
-#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE - 1)
+#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE)
 
 #define XLNX_ZYNQMP_MAX_LOW_RAM_SIZE 0x80000000ull
 
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -XXX,XX +XXX,XX @@
 
 #define ARM_PHYS_TIMER_PPI 30
 #define ARM_VIRT_TIMER_PPI 27
+#define ARM_HYP_TIMER_PPI 26
+#define ARM_SEC_TIMER_PPI 29
+#define GIC_MAINTENANCE_PPI 25
 
 #define GEM_REVISION 0x40070106
 
 #define GIC_BASE_ADDR 0xf9000000
 #define GIC_DIST_ADDR 0xf9010000
 #define GIC_CPU_ADDR 0xf9020000
+#define GIC_VIFACE_ADDR 0xf9040000
+#define GIC_VCPU_ADDR 0xf9060000
 
 #define SATA_INTR 133
 #define SATA_ADDR 0xFD0C0000
@@ -XXX,XX +XXX,XX @@ static const int adma_ch_intr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
 typedef struct XlnxZynqMPGICRegion {
     int region_index;
     uint32_t address;
+    uint32_t offset;
+    bool virt;
 } XlnxZynqMPGICRegion;
 
 static const XlnxZynqMPGICRegion xlnx_zynqmp_gic_regions[] = {
-    { .region_index = 0, .address = GIC_DIST_ADDR, },
-    { .region_index = 1, .address = GIC_CPU_ADDR, },
+    /* Distributor */
+    {
+        .region_index = 0,
+        .address = GIC_DIST_ADDR,
+        .offset = 0,
+        .virt = false
+    },
+
+    /* CPU interface */
+    {
+        .region_index = 1,
+        .address = GIC_CPU_ADDR,
+        .offset = 0,
+        .virt = false
+    },
+    {
+        .region_index = 1,
+        .address = GIC_CPU_ADDR + 0x10000,
+        .offset = 0x1000,
+        .virt = false
+    },
+
+    /* Virtual interface */
+    {
+        .region_index = 2,
+        .address = GIC_VIFACE_ADDR,
+        .offset = 0,
+        .virt = true
+    },
+
+    /* Virtual CPU interface */
+    {
+        .region_index = 3,
+        .address = GIC_VCPU_ADDR,
+        .offset = 0,
+        .virt = true
+    },
+    {
+        .region_index = 3,
+        .address = GIC_VCPU_ADDR + 0x10000,
+        .offset = 0x1000,
+        .virt = true
+    },
 };
 
 static inline int arm_gic_ppi_index(int cpu_nr, int ppi_index)
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", GIC_NUM_SPI_INTR + 32);
     qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
+    qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
+    qdev_prop_set_bit(DEVICE(&s->gic),
+                      "has-virtualization-extensions", s->virt);
 
     /* Realize APUs before realizing the GIC. KVM requires this. */
     for (i = 0; i < num_apus; i++) {
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < XLNX_ZYNQMP_GIC_REGIONS; i++) {
         SysBusDevice *gic = SYS_BUS_DEVICE(&s->gic);
         const XlnxZynqMPGICRegion *r = &xlnx_zynqmp_gic_regions[i];
-        MemoryRegion *mr = sysbus_mmio_get_region(gic, r->region_index);
+        MemoryRegion *mr;
         uint32_t addr = r->address;
         int j;
 
-        sysbus_mmio_map(gic, r->region_index, addr);
+        if (r->virt && !s->virt) {
+            continue;
+        }
 
+        mr = sysbus_mmio_get_region(gic, r->region_index);
         for (j = 0; j < XLNX_ZYNQMP_GIC_ALIASES; j++) {
             MemoryRegion *alias = &s->gic_mr[i][j];
 
-            addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
             memory_region_init_alias(alias, OBJECT(s), "zynqmp-gic-alias", mr,
-                                     0, XLNX_ZYNQMP_GIC_REGION_SIZE);
+                                     r->offset, XLNX_ZYNQMP_GIC_REGION_SIZE);
             memory_region_add_subregion(system_memory, addr, alias);
+
+            addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
         }
     }
 
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
         sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
                            qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
                                             ARM_CPU_IRQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_FIQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 2,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_VIRQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 3,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_VFIQ));
         irq = qdev_get_gpio_in(DEVICE(&s->gic),
                                arm_gic_ppi_index(i, ARM_PHYS_TIMER_PPI));
-        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 0, irq);
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_PHYS, irq);
         irq = qdev_get_gpio_in(DEVICE(&s->gic),
                                arm_gic_ppi_index(i, ARM_VIRT_TIMER_PPI));
-        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 1, irq);
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_VIRT, irq);
+        irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                               arm_gic_ppi_index(i, ARM_HYP_TIMER_PPI));
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_HYP, irq);
+        irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                               arm_gic_ppi_index(i, ARM_SEC_TIMER_PPI));
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_SEC, irq);
+
+        if (s->virt) {
+            irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                                   arm_gic_ppi_index(i, GIC_MAINTENANCE_PPI));
+            sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 4, irq);
+        }
     }
 
     if (s->has_rpu) {
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Put the block comments into the current coding style.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 24 +++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ CpuDefinitionInfoList *qmp_query_cpu_definitions(Error **errp)
     return cpu_list;
 }
 
+/*
+ * Private utility function for define_one_arm_cp_reg_with_opaque():
+ * add a single reginfo struct to the hash table.
+ */
 static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
                                    void *opaque, CPState state,
                                    CPSecureState secstate,
                                    int crm, int opc1, int opc2,
                                    const char *name)
 {
-    /* Private utility function for define_one_arm_cp_reg_with_opaque():
-     * add a single reginfo struct to the hash table.
-     */
     uint32_t key;
     ARMCPRegInfo *r2;
     bool is64 = r->type & ARM_CP_64BIT;
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
 
     isbanked = r->bank_fieldoffsets[0] && r->bank_fieldoffsets[1];
     if (isbanked) {
-        /* Register is banked (using both entries in array).
+        /*
+         * Register is banked (using both entries in array).
          * Overwriting fieldoffset as the array is only used to define
          * banked registers but later only fieldoffset is used.
          */
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
 
     if (state == ARM_CP_STATE_AA32) {
         if (isbanked) {
-            /* If the register is banked then we don't need to migrate or
+            /*
+             * If the register is banked then we don't need to migrate or
              * reset the 32-bit instance in certain cases:
              *
              * 1) If the register has both 32-bit and 64-bit instances then we
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
                 r2->type |= ARM_CP_ALIAS;
             }
         } else if ((secstate != r->secure) && !ns) {
-            /* The register is not banked so we only want to allow migration of
-             * the non-secure instance.
+            /*
+             * The register is not banked so we only want to allow migration
+             * of the non-secure instance.
              */
             r2->type |= ARM_CP_ALIAS;
         }
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
-    /* By convention, for wildcarded registers only the first
+    /*
+     * By convention, for wildcarded registers only the first
      * entry is used for migration; the others are marked as
      * ALIAS so we don't try to transfer the register
      * multiple times. Special registers (ie NOP/WFI) are
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
         r2->type |= ARM_CP_ALIAS | ARM_CP_NO_GDB;
     }
 
-    /* Check that raw accesses are either forbidden or handled. Note that
+    /*
+     * Check that raw accesses are either forbidden or handled. Note that
      * we can't assert this earlier because the setup of fieldoffset for
      * banked registers has to be done first.
      */
--
2.25.1
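The alias layout from the ZynqMP patch above, as a small runnable
illustration (addresses follow the commit message; this is not device
code):

#include <stdint.h>
#include <stdio.h>

#define REGION_SIZE 0x1000u
#define NUM_ALIASES (0x10000u / REGION_SIZE)   /* 16 mirrors per window */

int main(void)
{
    uint32_t addr = 0xf9020000u;   /* GIC_CPU_ADDR from the patch */

    for (unsigned j = 0; j < NUM_ALIASES; j++) {
        printf("alias %2u at 0x%08x\n", j, addr);
        addr += REGION_SIZE;
    }
    return 0;
}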
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     }
 
+    /*
+     * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
+     * We must do this before we do any kind of tailchaining, including
+     * for the derived exceptions on integrity check failures, or we will
+     * give the guest an incorrect EXCRET.SPSEL value on exception entry.
+     */
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
+
     if (sfault) {
         env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
-    /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
-     * Handler mode (and will be until we write the new XPSR.Interrupt
-     * field) this does not switch around the current stack pointer.
-     */
-    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
-
     switch_v7m_security_state(env, return_to_secure);
 
     {
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Since e03b56863d2bc, our host endian indicator is unconditionally
set, which means that we can use a normal C condition.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220501055028.646596-20-richard.henderson@linaro.org
[PMM: quote correct git hash in commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
             r2->type |= ARM_CP_ALIAS;
         }
 
-        if (r->state == ARM_CP_STATE_BOTH) {
-#if HOST_BIG_ENDIAN
-            if (r2->fieldoffset) {
-                r2->fieldoffset += sizeof(uint32_t);
-            }
-#endif
+        if (HOST_BIG_ENDIAN &&
+            r->state == ARM_CP_STATE_BOTH && r2->fieldoffset) {
+            r2->fieldoffset += sizeof(uint32_t);
         }
     }
 
--
2.25.1
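The ordering invariant from the first patch above, sketched with
hypothetical helper names (the real logic lives in QEMU's
do_v7m_exception_exit(); this is illustrative only):

static void restore_control_spsel_from_excret(void) { /* ... */ }
static void maybe_tailchain_or_raise_derived_exception(void) { /* ... */ }

static void v7m_exception_return_order(void)
{
    /* 1. Restore CONTROL.SPSEL from the EXCRET value first... */
    restore_control_spsel_from_excret();
    /* 2. ...so that any tailchained or derived exception snapshots the
     *    correct SPSEL into its new EXCRET on entry. */
    maybe_tailchain_or_raise_derived_exception();
}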
From: Luc Michel <luc.michel@greensocs.com>

Add the necessary parts of the virtualization extensions state to the
GIC state. We choose to increase the size of the CPU interfaces state to
add space for the vCPU interfaces (the GIC_NCPU_VCPU macro). This way,
we'll be able to reuse most of the CPU interface code for the vCPUs.

The only exception is the APR value, which is stored in h_apr in the
virtual interface state for vCPUs. This is due to some complications
with the GIC VMState, for which we don't want to break backward
compatibility. APRs being stored in 2D arrays, increasing the second
dimension would lead to some ugly VMState description. To avoid
that, we keep it in h_apr for vCPUs.

The vCPUs are numbered from GIC_NCPU to (GIC_NCPU * 2) - 1. The
`gic_is_vcpu` function helps to determine whether a given CPU id
corresponds to a physical CPU or a virtual one.

For the in-kernel KVM VGIC, since the exposed VGIC does not implement
the virtualization extensions, we report an error if the corresponding
property is set to true.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-6-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 5 ++
 include/hw/intc/arm_gic_common.h | 43 +++++++--
 hw/intc/arm_gic.c | 2 +-
 hw/intc/arm_gic_common.c | 148 ++++++++++++++++++++++++++-----
 hw/intc/arm_gic_kvm.c | 8 +-
 5 files changed, 173 insertions(+), 33 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
 }
 
+static inline bool gic_is_vcpu(int cpu)
+{
+    return cpu >= GIC_NCPU;
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/arm_gic_common.h
+++ b/include/hw/intc/arm_gic_common.h
@@ -XXX,XX +XXX,XX @@
 #define GIC_NR_SGIS 16
 /* Maximum number of possible CPU interfaces, determined by GIC architecture */
 #define GIC_NCPU 8
+/* Maximum number of possible CPU interfaces with their respective vCPU */
+#define GIC_NCPU_VCPU (GIC_NCPU * 2)
 
 #define MAX_NR_GROUP_PRIO 128
 #define GIC_NR_APRS (MAX_NR_GROUP_PRIO / 32)
@@ -XXX,XX +XXX,XX @@
 #define GIC_MIN_BPR 0
 #define GIC_MIN_ABPR (GIC_MIN_BPR + 1)
 
+/* Architectural maximum number of list registers in the virtual interface */
+#define GIC_MAX_LR 64
+
+/* Only 32 priority levels and 32 preemption levels in the vCPU interfaces */
+#define GIC_VIRT_MAX_GROUP_PRIO_BITS 5
+#define GIC_VIRT_MAX_NR_GROUP_PRIO (1 << GIC_VIRT_MAX_GROUP_PRIO_BITS)
+#define GIC_VIRT_NR_APRS (GIC_VIRT_MAX_NR_GROUP_PRIO / 32)
+
+#define GIC_VIRT_MIN_BPR 2
+#define GIC_VIRT_MIN_ABPR (GIC_VIRT_MIN_BPR + 1)
+
 typedef struct gic_irq_state {
     /* The enable bits are only banked for per-cpu interrupts. */
     uint8_t enabled;
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     qemu_irq parent_fiq[GIC_NCPU];
     qemu_irq parent_virq[GIC_NCPU];
     qemu_irq parent_vfiq[GIC_NCPU];
+    qemu_irq maintenance_irq[GIC_NCPU];
+
     /* GICD_CTLR; for a GIC with the security extensions the NS banked version
      * of this register is just an alias of bit 1 of the S banked version.
      */
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     /* GICC_CTLR; again, the NS banked version is just aliases of bits of
      * the S banked register, so our state only needs to store the S version.
      */
-    uint32_t cpu_ctlr[GIC_NCPU];
+    uint32_t cpu_ctlr[GIC_NCPU_VCPU];
 
     gic_irq_state irq_state[GIC_MAXIRQ];
     uint8_t irq_target[GIC_MAXIRQ];
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      */
     uint8_t sgi_pending[GIC_NR_SGIS][GIC_NCPU];
 
-    uint16_t priority_mask[GIC_NCPU];
-    uint16_t running_priority[GIC_NCPU];
-    uint16_t current_pending[GIC_NCPU];
+    uint16_t priority_mask[GIC_NCPU_VCPU];
+    uint16_t running_priority[GIC_NCPU_VCPU];
+    uint16_t current_pending[GIC_NCPU_VCPU];
 
     /* If we present the GICv2 without security extensions to a guest,
      * the guest can configure the GICC_CTLR to configure group 1 binary point
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      * For a GIC with Security Extensions we use bpr for the
      * secure copy and abpr as storage for the non-secure copy of the register.
      */
-    uint8_t bpr[GIC_NCPU];
-    uint8_t abpr[GIC_NCPU];
+    uint8_t bpr[GIC_NCPU_VCPU];
+    uint8_t abpr[GIC_NCPU_VCPU];
 
     /* The APR is implementation defined, so we choose a layout identical to
      * the KVM ABI layout for QEMU's implementation of the gic:
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     uint32_t apr[GIC_NR_APRS][GIC_NCPU];
     uint32_t nsapr[GIC_NR_APRS][GIC_NCPU];
 
+    /* Virtual interface control registers */
+    uint32_t h_hcr[GIC_NCPU];
+    uint32_t h_misr[GIC_NCPU];
+    uint32_t h_lr[GIC_MAX_LR][GIC_NCPU];
+    uint32_t h_apr[GIC_NCPU];
+
+    /* Number of LRs implemented in this GIC instance */
+    uint32_t num_lrs;
+
     uint32_t num_cpu;
 
     MemoryRegion iomem; /* Distributor */
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      */
     struct GICState *backref[GIC_NCPU];
     MemoryRegion cpuiomem[GIC_NCPU + 1]; /* CPU interfaces */
+    MemoryRegion vifaceiomem[GIC_NCPU + 1]; /* Virtual interfaces */
+    MemoryRegion vcpuiomem; /* vCPU interface */
+
     uint32_t num_irq;
     uint32_t revision;
     bool security_extn;
+    bool virt_extn;
     bool irq_reset_nonsecure; /* configure IRQs as group 1 (NS) on reset? */
     int dev_fd; /* kvm device fd if backed by kvm vgic support */
     Error *migration_blocker;
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGICCommonClass {
 } ARMGICCommonClass;
 
 void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
-                            const MemoryRegionOps *ops);
+                            const MemoryRegionOps *ops,
+                            const MemoryRegionOps *virt_ops);
 
 #endif
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
     }
 
     /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
-    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
+    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
 
     /* Extra core-specific regions for the CPU interfaces. This is
      * necessary for "franken-GIC" implementations, for example on
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -XXX,XX +XXX,XX @@ static int gic_post_load(void *opaque, int version_id)
     return 0;
 }
 
+static bool gic_virt_state_needed(void *opaque)
+{
+    GICState *s = (GICState *)opaque;
+
+    return s->virt_extn;
+}
+
 static const VMStateDescription vmstate_gic_irq_state = {
     .name = "arm_gic_irq_state",
     .version_id = 1,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic_irq_state = {
     }
 };
 
+static const VMStateDescription vmstate_gic_virt_state = {
+    .name = "arm_gic_virt_state",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = gic_virt_state_needed,
+    .fields = (VMStateField[]) {
+        /* Virtual interface */
+        VMSTATE_UINT32_ARRAY(h_hcr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_ARRAY(h_misr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_2DARRAY(h_lr, GICState, GIC_MAX_LR, GIC_NCPU),
+        VMSTATE_UINT32_ARRAY(h_apr, GICState, GIC_NCPU),
+
+        /* Virtual CPU interfaces */
+        VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, GIC_NCPU, GIC_NCPU),
+
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription vmstate_gic = {
     .name = "arm_gic",
     .version_id = 12,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic = {
     .post_load = gic_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(ctlr, GICState),
-        VMSTATE_UINT32_ARRAY(cpu_ctlr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, 0, GIC_NCPU),
         VMSTATE_STRUCT_ARRAY(irq_state, GICState, GIC_MAXIRQ, 1,
                              vmstate_gic_irq_state, gic_irq_state),
         VMSTATE_UINT8_ARRAY(irq_target, GICState, GIC_MAXIRQ),
         VMSTATE_UINT8_2DARRAY(priority1, GICState, GIC_INTERNAL, GIC_NCPU),
         VMSTATE_UINT8_ARRAY(priority2, GICState, GIC_MAXIRQ - GIC_INTERNAL),
         VMSTATE_UINT8_2DARRAY(sgi_pending, GICState, GIC_NR_SGIS, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(priority_mask, GICState, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(running_priority, GICState, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(current_pending, GICState, GIC_NCPU),
-        VMSTATE_UINT8_ARRAY(bpr, GICState, GIC_NCPU),
-        VMSTATE_UINT8_ARRAY(abpr, GICState, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, 0, GIC_NCPU),
         VMSTATE_UINT32_2DARRAY(apr, GICState, GIC_NR_APRS, GIC_NCPU),
         VMSTATE_UINT32_2DARRAY(nsapr, GICState, GIC_NR_APRS, GIC_NCPU),
         VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_gic_virt_state,
+        NULL
     }
 };
 
 void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
-                            const MemoryRegionOps *ops)
+                            const MemoryRegionOps *ops,
+                            const MemoryRegionOps *virt_ops)
 {
     SysBusDevice *sbd = SYS_BUS_DEVICE(s);
     int i = s->num_irq - GIC_INTERNAL;
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     for (i = 0; i < s->num_cpu; i++) {
         sysbus_init_irq(sbd, &s->parent_vfiq[i]);
     }
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_cpu; i++) {
+            sysbus_init_irq(sbd, &s->maintenance_irq[i]);
+        }
+    }
 
     /* Distributor */
     memory_region_init_io(&s->iomem, OBJECT(s), ops, s, "gic_dist", 0x1000);
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     memory_region_init_io(&s->cpuiomem[0], OBJECT(s), ops ? &ops[1] : NULL,
                           s, "gic_cpu", s->revision == 2 ? 0x2000 : 0x100);
     sysbus_init_mmio(sbd, &s->cpuiomem[0]);
+
+    if (s->virt_extn) {
+        memory_region_init_io(&s->vifaceiomem[0], OBJECT(s), virt_ops,
+                              s, "gic_viface", 0x1000);
+        sysbus_init_mmio(sbd, &s->vifaceiomem[0]);
+
+        memory_region_init_io(&s->vcpuiomem, OBJECT(s),
+                              virt_ops ? &virt_ops[1] : NULL,
+                              s, "gic_vcpu", 0x2000);
+        sysbus_init_mmio(sbd, &s->vcpuiomem);
+    }
 }
 
 static void arm_gic_common_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_realize(DeviceState *dev, Error **errp)
                    "the security extensions");
         return;
     }
+
+    if (s->virt_extn) {
+        if (s->revision != 2) {
+            error_setg(errp, "GIC virtualization extensions are only "
+                       "supported by revision 2");
+            return;
+        }
+
+        /* For now, set the number of implemented LRs to 4, as found in most
+         * real GICv2. This could be promoted as a QOM property if we need to
+         * emulate a variant with another num_lrs.
+         */
+        s->num_lrs = 4;
+    }
+}
+
+static inline void arm_gic_common_reset_irq_state(GICState *s, int first_cpu,
+                                                  int resetprio)
+{
+    int i, j;
+
+    for (i = first_cpu; i < first_cpu + s->num_cpu; i++) {
+        if (s->revision == REV_11MPCORE) {
+            s->priority_mask[i] = 0xf0;
+        } else {
+            s->priority_mask[i] = resetprio;
+        }
+        s->current_pending[i] = 1023;
+        s->running_priority[i] = 0x100;
+        s->cpu_ctlr[i] = 0;
+        s->bpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+        s->abpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_ABPR : GIC_MIN_ABPR;
+
+        if (!gic_is_vcpu(i)) {
+            for (j = 0; j < GIC_INTERNAL; j++) {
+                s->priority1[j][i] = resetprio;
+            }
+            for (j = 0; j < GIC_NR_SGIS; j++) {
+                s->sgi_pending[j][i] = 0;
+            }
+        }
+    }
 }
 
 static void arm_gic_common_reset(DeviceState *dev)
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
     }
 
     memset(s->irq_state, 0, GIC_MAXIRQ * sizeof(gic_irq_state));
-    for (i = 0 ; i < s->num_cpu; i++) {
-        if (s->revision == REV_11MPCORE) {
-            s->priority_mask[i] = 0xf0;
-        } else {
-            s->priority_mask[i] = resetprio;
-        }
-        s->current_pending[i] = 1023;
-        s->running_priority[i] = 0x100;
-        s->cpu_ctlr[i] = 0;
-        s->bpr[i] = GIC_MIN_BPR;
-        s->abpr[i] = GIC_MIN_ABPR;
-        for (j = 0; j < GIC_INTERNAL; j++) {
-            s->priority1[j][i] = resetprio;
-        }
-        for (j = 0; j < GIC_NR_SGIS; j++) {
-            s->sgi_pending[j][i] = 0;
-        }
+    arm_gic_common_reset_irq_state(s, 0, resetprio);
+
+    if (s->virt_extn) {
+        /* vCPU states are stored at indexes GIC_NCPU .. GIC_NCPU+num_cpu.
+         * The exposed vCPU interface does not have security extensions.
+         */
+        arm_gic_common_reset_irq_state(s, GIC_NCPU, 0);
     }
+
     for (i = 0; i < GIC_NR_SGIS; i++) {
         GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
         GIC_DIST_SET_EDGE_TRIGGER(i);
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
         }
     }
 
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_lrs; i++) {
+            for (j = 0; j < s->num_cpu; j++) {
+                s->h_lr[i][j] = 0;
+            }
+        }
+
+        for (i = 0; i < s->num_cpu; i++) {
+            s->h_hcr[i] = 0;
+            s->h_misr[i] = 0;
+        }
+    }
+
     s->ctlr = 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
     DEFINE_PROP_UINT32("revision", GICState, revision, 1),
     /* True if the GIC should implement the security extensions */
     DEFINE_PROP_BOOL("has-security-extensions", GICState, security_extn, 0),
+    /* True if the GIC should implement the virtualization extensions */
+    DEFINE_PROP_BOOL("has-virtualization-extensions", GICState, virt_extn, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (s->virt_extn) {
+        error_setg(errp, "the in-kernel VGIC does not implement the "
+                   "virtualization extensions");
+        return;
+    }
+
     if (!kvm_arm_gic_can_save_restore(s)) {
         error_setg(&s->migration_blocker, "This operating system kernel does "
                    "not support vGICv2 migration");
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
         }
     }
 
-    gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL);
+    gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL, NULL);
 
     for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
         qemu_irq irq = qdev_get_gpio_in(dev, i);
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_ssbs(const ARMISARegisters *id)
     return FIELD_EX32(id->id_pfr2, ID_PFR2, SSBS) != 0;
 }
 
+static inline bool isar_feature_aa32_debugv8p2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_dfr0, ID_DFR0, COPDBG) >= 8;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
 }
 
+static inline bool isar_feature_aa64_debugv8p2(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, DEBUGVER) >= 8;
+}
+
 static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_tts2uxn(const ARMISARegisters *id)
     return isar_feature_aa64_tts2uxn(id) || isar_feature_aa32_tts2uxn(id);
 }
 
+static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
+}
+
 /*
  * Forward to the above feature tests given an ARMCPU pointer.
  */
--
2.25.1
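The state layout chosen in the GIC patch above, as a minimal sketch:
physical CPU i lives at index i, its vCPU at index GIC_NCPU + i in the
widened per-CPU arrays (illustrative only, not the device code):

#include <stdbool.h>
#include <stdint.h>

#define GIC_NCPU      8
#define GIC_NCPU_VCPU (GIC_NCPU * 2)

static uint16_t priority_mask[GIC_NCPU_VCPU];

static uint16_t *prio_mask_for(int cpu, bool virt)
{
    /* vCPU state shadows the physical CPU state GIC_NCPU slots later. */
    return &priority_mask[virt ? GIC_NCPU + cpu : cpu];
}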
From: Luc Michel <luc.michel@greensocs.com>

Add some helper macros and functions related to the virtualization
extensions to gic_internal.h.

The GICH_LR_* macros help with extracting specific fields of a list
register value. The only tricky one is the priority field, as only the
MSBs are stored. The value must be shifted accordingly to obtain the
correct priority value.

gic_is_vcpu() and gic_get_vcpu_real_id() help with (v)CPU id manipulation
to abstract the fact that vCPU ids are in the range
[ GIC_NCPU; (GIC_NCPU + num_cpu) [.

gic_lr_* and gic_virq_is_valid() help with the list registers.
gic_get_lr_entry() returns the LR entry for a given (vCPU, irq) pair. It
is meant to be used in contexts where we know for sure that the entry
exists, so we assert that the entry is actually found, and the caller can
avoid the NULL check on the returned pointer.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-8-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 74 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c | 5 +++
 2 files changed, 79 insertions(+)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
         R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
         R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
 
+#define GICH_LR_STATE_INVALID 0
+#define GICH_LR_STATE_PENDING 1
+#define GICH_LR_STATE_ACTIVE 2
+#define GICH_LR_STATE_ACTIVE_PENDING 3
+
+#define GICH_LR_VIRT_ID(entry) (FIELD_EX32(entry, GICH_LR0, VirtualID))
+#define GICH_LR_PHYS_ID(entry) (FIELD_EX32(entry, GICH_LR0, PhysicalID))
+#define GICH_LR_CPUID(entry) (FIELD_EX32(entry, GICH_LR0, CPUID))
+#define GICH_LR_EOI(entry) (FIELD_EX32(entry, GICH_LR0, EOI))
+#define GICH_LR_PRIORITY(entry) (FIELD_EX32(entry, GICH_LR0, Priority) << 3)
+#define GICH_LR_STATE(entry) (FIELD_EX32(entry, GICH_LR0, State))
+#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
+#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
+
 /* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
  * GICv2 and GICv2 with security extensions:
  */
@@ -XXX,XX +XXX,XX @@ static inline bool gic_is_vcpu(int cpu)
     return cpu >= GIC_NCPU;
 }
 
+static inline int gic_get_vcpu_real_id(int cpu)
+{
+    return (cpu >= GIC_NCPU) ? (cpu - GIC_NCPU) : cpu;
+}
+
+/* Return true if the given vIRQ state exists in a LR and is either active or
+ * pending and active.
+ *
+ * This function is used to check that a guest's `end of interrupt' or
+ * `interrupts deactivation' request is valid, and matches with a LR of an
+ * already acknowledged vIRQ (i.e. has the active bit set in its state).
+ */
+static inline bool gic_virq_is_valid(GICState *s, int irq, int vcpu)
+{
+    int cpu = gic_get_vcpu_real_id(vcpu);
+    int lr_idx;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if ((GICH_LR_VIRT_ID(*entry) == irq) &&
+            (GICH_LR_STATE(*entry) & GICH_LR_STATE_ACTIVE)) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+/* Return a pointer on the LR entry matching the given vIRQ.
+ *
+ * This function is used to retrieve an LR for which we know for sure that the
+ * corresponding vIRQ exists in the current context (i.e. its current state is
+ * not `invalid'):
+ *   - Either the corresponding vIRQ has been validated with gic_virq_is_valid()
+ *     so it is `active' or `active and pending',
+ *   - Or it was pending and has been selected by gic_get_best_virq(). It is now
+ *     `pending', `active' or `active and pending', depending on what the guest
+ *     already did with this vIRQ.
+ *
+ * Having multiple LRs with the same VirtualID leads to UNPREDICTABLE
+ * behaviour in the GIC. We choose to return the first one that matches.
+ */
+static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
+{
+    int cpu = gic_get_vcpu_real_id(vcpu);
+    int lr_idx;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if ((GICH_LR_VIRT_ID(*entry) == irq) &&
+            (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID)) {
+            return entry;
+        }
+    }
+
+    g_assert_not_reached();
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline int gic_get_current_cpu(GICState *s)
     return 0;
 }
 
+static inline int gic_get_current_vcpu(GICState *s)
+{
+    return gic_get_current_cpu(s) + GIC_NCPU;
+}
+
 /* Return true if this GIC config has interrupt groups, which is
  * true if we're a GICv2, or a GICv1 with the security extensions.
  */
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Add the aa64 predicate for detecting RAS support from id registers.
We already have the aa32 version from the M-profile work.
Add the 'any' predicate for testing both aa64 and aa32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220501055028.646596-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
 }
 
+static inline bool isar_feature_aa64_ras(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, RAS) != 0;
+}
+
 static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_any_debugv8p2(const ARMISARegisters *id)
     return isar_feature_aa64_debugv8p2(id) || isar_feature_aa32_debugv8p2(id);
 }
 
+static inline bool isar_feature_any_ras(const ARMISARegisters *id)
+{
+    return isar_feature_aa64_ras(id) || isar_feature_aa32_ras(id);
+}
+
 /*
  * Forward to the above feature tests given an ARMCPU pointer.
  */
--
2.25.1
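A worked example of the GICH_LR priority decoding noted above: the list
register stores only the 5 most-significant bits of the 8-bit priority, so
the field is shifted left by 3 on extraction (standalone illustration):

#include <assert.h>
#include <stdint.h>

static uint8_t gich_lr_priority(uint32_t prio_field)
{
    return (uint8_t)(prio_field << 3);   /* restore the 3 dropped LSBs */
}

int main(void)
{
    assert(gich_lr_priority(0x00) == 0x00);   /* highest priority */
    assert(gich_lr_priority(0x1f) == 0xf8);   /* lowest storable value */
    return 0;
}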
From: Julia Suvorova <jusual@mail.ru>

MSR handling is the only place where CONTROL.nPRIV is modified.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             write_v7m_control_spsel_for_secstate(env,
                                                  val & R_V7M_CONTROL_SPSEL_MASK,
                                                  M_REG_NS);
-            env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
-            env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+                env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+                env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            }
             return;
         case 0x98: /* SP_NS */
         {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             !arm_v7m_is_handler_mode(env)) {
             write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
-        env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
-        env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+            env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        }
         break;
     default:
     bad_reg:
--
2.18.0

From: Alex Zuepke <alex.zuepke@tum.de>

The ARMv8 manual defines that PMUSERENR_EL0.ER enables read access
to both the PMXEVCNTR_EL0 and PMEVCNTR<n>_EL0 registers; however, we
only use it for PMXEVCNTR_EL0. Extend it to PMEVCNTR<n>_EL0 as well.

Signed-off-by: Alex Zuepke <alex.zuepke@tum.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220428132717.84190-1-alex.zuepke@tum.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
           .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
           .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
           .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
-          .accessfn = pmreg_access },
+          .accessfn = pmreg_access_xevcntr },
         { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
           .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 8 | (3 & (i >> 3)),
-          .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
+          .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access_xevcntr,
           .type = ARM_CP_IO,
           .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
           .raw_readfn = pmevcntr_rawread,
--
2.25.1
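The EL0 permission rule that the PMU patch above extends to
PMEVCNTR<n>_EL0, as a standalone sketch (assumed shape, not the QEMU
accessfn signature; bit positions follow the PMUSERENR_EL0 definition):

#include <stdbool.h>
#include <stdint.h>

#define PMUSERENR_EN (1u << 0)   /* full EL0 access */
#define PMUSERENR_ER (1u << 3)   /* EL0 event-counter read access */

static bool el0_evcntr_access_ok(uint32_t pmuserenr, bool isread)
{
    return (pmuserenr & PMUSERENR_EN) ||
           (isread && (pmuserenr & PMUSERENR_ER));
}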
Deleted patch
1
From: Julia Suvorova <jusual@mail.ru>
2
1
3
Handle SCS reserved registers listed in ARMv6-M ARM D3.6.1.
4
All reserved registers are RAZ/WI. ARM_FEATURE_M_MAIN is used for the
5
checks, because these registers are reserved in ARMv8-M Baseline too.
6
7
Signed-off-by: Julia Suvorova <jusual@mail.ru>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/intc/armv7m_nvic.c | 51 +++++++++++++++++++++++++++++++++++++++++--
12
1 file changed, 49 insertions(+), 2 deletions(-)
13
14
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/armv7m_nvic.c
17
+++ b/hw/intc/armv7m_nvic.c
18
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
19
}
20
return val;
21
case 0xd10: /* System Control. */
22
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
23
+ goto bad_offset;
24
+ }
25
return cpu->env.v7m.scr[attrs.secure];
26
case 0xd14: /* Configuration Control. */
27
/* The BFHFNMIGN bit is the only non-banked bit; we
28
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
29
}
30
return val;
31
case 0xd2c: /* Hard Fault Status. */
32
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
33
+ goto bad_offset;
34
+ }
35
return cpu->env.v7m.hfsr;
36
case 0xd30: /* Debug Fault Status. */
37
return cpu->env.v7m.dfsr;
38
case 0xd34: /* MMFAR MemManage Fault Address */
39
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
40
+ goto bad_offset;
41
+ }
42
return cpu->env.v7m.mmfar[attrs.secure];
43
case 0xd38: /* Bus Fault Address. */
44
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
45
+ goto bad_offset;
46
+ }
47
return cpu->env.v7m.bfar;
48
case 0xd3c: /* Aux Fault Status. */
49
/* TODO: Implement fault status registers. */
50
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
51
}
52
break;
53
case 0xd10: /* System Control. */
54
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
55
+ goto bad_offset;
56
+ }
57
/* We don't implement deep-sleep so these bits are RAZ/WI.
58
* The other bits in the register are banked.
59
* QEMU's implementation ignores SEVONPEND and SLEEPONEXIT, which
60
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
61
nvic_irq_update(s);
62
break;
63
case 0xd2c: /* Hard Fault Status. */
64
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
65
+ goto bad_offset;
66
+ }
67
cpu->env.v7m.hfsr &= ~value; /* W1C */
68
break;
69
case 0xd30: /* Debug Fault Status. */
70
cpu->env.v7m.dfsr &= ~value; /* W1C */
71
break;
72
case 0xd34: /* Mem Manage Address. */
73
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
74
+ goto bad_offset;
75
+ }
76
cpu->env.v7m.mmfar[attrs.secure] = value;
77
return;
78
case 0xd38: /* Bus Fault Address. */
79
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
80
+ goto bad_offset;
81
+ }
82
cpu->env.v7m.bfar = value;
83
return;
84
case 0xd3c: /* Aux Fault Status. */
85
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
86
case 0xf00: /* Software Triggered Interrupt Register */
87
{
88
int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
89
+
90
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
91
+ goto bad_offset;
92
+ }
93
+
94
if (excnum < s->num_irq) {
95
armv7m_nvic_set_pending(s, excnum, false);
96
}
97
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
98
}
99
}
100
break;
101
- case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
102
+ case 0xd18: /* System Handler Priority (SHPR1) */
103
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
104
+ val = 0;
105
+ break;
106
+ }
107
+ /* fall through */
108
+ case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
109
val = 0;
110
for (i = 0; i < size; i++) {
111
unsigned hdlidx = (offset - 0xd14) + i;
112
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
113
}
114
break;
115
case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
116
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
117
+ val = 0;
118
+ break;
119
+ }
120
/* The BFSR bits [15:8] are shared between security states
121
* and we store them in the NS copy
122
*/
123
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
124
}
125
nvic_irq_update(s);
126
return MEMTX_OK;
127
- case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
128
+ case 0xd18: /* System Handler Priority (SHPR1) */
129
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
130
+ return MEMTX_OK;
131
+ }
132
+ /* fall through */
133
+ case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
134
for (i = 0; i < size; i++) {
135
unsigned hdlidx = (offset - 0xd14) + i;
136
int newprio = extract32(value, i * 8, 8);
137
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
138
nvic_irq_update(s);
139
return MEMTX_OK;
140
case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
141
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
142
+ return MEMTX_OK;
143
+ }
144
/* All bits are W1C, so construct 32 bit value with 0s in
145
* the parts not written by the access size
146
*/
147
--
148
2.18.0
149
150
Deleted patch
1
From: Julia Suvorova <jusual@mail.ru>
2
1
3
The differences from ARMv7-M NVIC are:
4
* ARMv6-M only supports up to 32 external interrupts
5
(already a configurable feature). The ICTR is reserved.
6
* Active Bit Register is reserved.
7
* ARMv6-M supports 4 priority levels, versus 256 in ARMv7-M (see the sketch below).
8
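As an illustrative aside (not part of the patch), keeping only the top
num_prio_bits bits of a written priority is what reduces 256 levels to 4;
the MAKE_64BIT_MASK() expression in the patch computes the same mask as
this sketch:

    #include <stdint.h>

    /* With num_prio_bits = 2 (ARMv6-M), only the top two bits of the
     * written priority survive, leaving the four levels
     * 0x00/0x40/0x80/0xc0; with num_prio_bits = 8 (ARMv7-M) all 256
     * values are representable. */
    static uint8_t clamp_prio(uint8_t prio, unsigned num_prio_bits)
    {
        uint8_t mask = (uint8_t)(0xffu << (8 - num_prio_bits));
        return prio & mask;  /* e.g. clamp_prio(0x7f, 2) == 0x40 */
    }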
9
Signed-off-by: Julia Suvorova <jusual@mail.ru>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
include/hw/intc/armv7m_nvic.h | 1 +
14
hw/intc/armv7m_nvic.c | 21 ++++++++++++++++++---
15
2 files changed, 19 insertions(+), 3 deletions(-)
16
17
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/intc/armv7m_nvic.h
20
+++ b/include/hw/intc/armv7m_nvic.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
22
VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
23
/* The PRIGROUP field in AIRCR is banked */
24
uint32_t prigroup[M_REG_NUM_BANKS];
25
+ uint8_t num_prio_bits;
26
27
/* v8M NVIC_ITNS state (stored as a bool per bit) */
28
bool itns[NVIC_MAX_VECTORS];
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/intc/armv7m_nvic.c
32
+++ b/hw/intc/armv7m_nvic.c
33
@@ -XXX,XX +XXX,XX @@ static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
34
assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
35
assert(irq < s->num_irq);
36
37
+ prio &= MAKE_64BIT_MASK(8 - s->num_prio_bits, s->num_prio_bits);
38
+
39
if (secure) {
40
assert(exc_is_banked(irq));
41
s->sec_vectors[irq].prio = prio;
42
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
43
44
switch (offset) {
45
case 4: /* Interrupt Control Type. */
46
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
47
+ goto bad_offset;
48
+ }
49
return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
50
case 0xc: /* CPPWR */
51
if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
52
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
53
"Setting VECTRESET when not in DEBUG mode "
54
"is UNPREDICTABLE\n");
55
}
56
- s->prigroup[attrs.secure] = extract32(value,
57
- R_V7M_AIRCR_PRIGROUP_SHIFT,
58
- R_V7M_AIRCR_PRIGROUP_LENGTH);
59
+ if (arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
60
+ s->prigroup[attrs.secure] =
61
+ extract32(value,
62
+ R_V7M_AIRCR_PRIGROUP_SHIFT,
63
+ R_V7M_AIRCR_PRIGROUP_LENGTH);
64
+ }
65
if (attrs.secure) {
66
/* These bits are only writable by secure */
67
cpu->env.v7m.aircr = value &
68
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
69
break;
70
case 0x300 ... 0x33f: /* NVIC Active */
71
val = 0;
72
+
73
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_V7)) {
74
+ break;
75
+ }
76
+
77
startvec = 8 * (offset - 0x300) + NVIC_FIRST_IRQ; /* vector # */
78
79
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
80
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
81
/* include space for internal exception vectors */
82
s->num_irq += NVIC_FIRST_IRQ;
83
84
+ s->num_prio_bits = arm_feature(&s->cpu->env, ARM_FEATURE_V7) ? 8 : 2;
85
+
86
object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
87
"realized", &err);
88
if (err != NULL) {
89
--
90
2.18.0
91
92
Deleted patch
1
The io_readx() function needs to know whether the load it is
2
doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it
3
can pass the right value to the cpu_transaction_failed()
4
function. Plumb this information through from the softmmu
5
code.
6
1
7
In practice this currently seldom gives the wrong answer,
8
because usually instruction fetches go via get_page_addr_code().
9
However once we switch over to handling execution from non-RAM by
10
creating single-insn TBs, the path for an insn fetch to generate
11
a bus error will be through cpu_ld*_code() and io_readx(),
12
so without this change we will generate a d-side fault when we
13
should generate an i-side fault.
14
15
We also have to pass the access type via a CPU struct global
16
down to unassigned_mem_read(), for the benefit of the targets
17
which still use the cpu_unassigned_access() hook (m68k, mips,
18
sparc, xtensa).
19
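As a hypothetical sketch of why the plumbed access type matters to a
target hook (the enum here is a stand-in, not QEMU's real MMUAccessType
declaration):

    typedef enum {                 /* stand-in for MMUAccessType */
        DEMO_DATA_LOAD,
        DEMO_DATA_STORE,
        DEMO_INST_FETCH
    } DemoAccess;

    /* A transaction-failed hook can now raise the right kind of
     * fault: i-side (prefetch) abort for fetches, d-side otherwise. */
    static void demo_transaction_failed(DemoAccess access_type)
    {
        if (access_type == DEMO_INST_FETCH) {
            /* deliver an i-side (prefetch) abort */
        } else {
            /* deliver a d-side (data) abort */
        }
    }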
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
22
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
23
Tested-by: Cédric Le Goater <clg@kaod.org>
24
Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
25
---
26
accel/tcg/softmmu_template.h | 11 +++++++----
27
include/qom/cpu.h | 6 ++++++
28
accel/tcg/cputlb.c | 5 +++--
29
memory.c | 3 ++-
30
4 files changed, 18 insertions(+), 7 deletions(-)
31
32
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
33
index XXXXXXX..XXXXXXX 100644
34
--- a/accel/tcg/softmmu_template.h
35
+++ b/accel/tcg/softmmu_template.h
36
@@ -XXX,XX +XXX,XX @@ static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
37
size_t mmu_idx, size_t index,
38
target_ulong addr,
39
uintptr_t retaddr,
40
- bool recheck)
41
+ bool recheck,
42
+ MMUAccessType access_type)
43
{
44
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
45
return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
46
- DATA_SIZE);
47
+ access_type, DATA_SIZE);
48
}
49
#endif
50
51
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
52
/* ??? Note that the io helpers always read data in the target
53
byte ordering. We should push the LE/BE request down into io. */
54
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
55
- tlb_addr & TLB_RECHECK);
56
+ tlb_addr & TLB_RECHECK,
57
+ READ_ACCESS_TYPE);
58
res = TGT_LE(res);
59
return res;
60
}
61
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
62
/* ??? Note that the io helpers always read data in the target
63
byte ordering. We should push the LE/BE request down into io. */
64
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
65
- tlb_addr & TLB_RECHECK);
66
+ tlb_addr & TLB_RECHECK,
67
+ READ_ACCESS_TYPE);
68
res = TGT_BE(res);
69
return res;
70
}
71
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
72
index XXXXXXX..XXXXXXX 100644
73
--- a/include/qom/cpu.h
74
+++ b/include/qom/cpu.h
75
@@ -XXX,XX +XXX,XX @@ struct CPUState {
76
*/
77
uintptr_t mem_io_pc;
78
vaddr mem_io_vaddr;
79
+ /*
80
+ * This is only needed for the legacy cpu_unassigned_access() hook;
81
+ * when all targets using it have been converted to use
82
+ * cpu_transaction_failed() instead it can be removed.
83
+ */
84
+ MMUAccessType mem_io_access_type;
85
86
int kvm_fd;
87
struct KVMState *kvm_state;
88
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/accel/tcg/cputlb.c
91
+++ b/accel/tcg/cputlb.c
92
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
93
static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
94
int mmu_idx,
95
target_ulong addr, uintptr_t retaddr,
96
- bool recheck, int size)
97
+ bool recheck, MMUAccessType access_type, int size)
98
{
99
CPUState *cpu = ENV_GET_CPU(env);
100
hwaddr mr_offset;
101
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
102
}
103
104
cpu->mem_io_vaddr = addr;
105
+ cpu->mem_io_access_type = access_type;
106
107
if (mr->global_locking && !qemu_mutex_iothread_locked()) {
108
qemu_mutex_lock_iothread();
109
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
110
section->offset_within_address_space -
111
section->offset_within_region;
112
113
- cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
114
+ cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
115
mmu_idx, iotlbentry->attrs, r, retaddr);
116
}
117
if (locked) {
118
diff --git a/memory.c b/memory.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/memory.c
121
+++ b/memory.c
122
@@ -XXX,XX +XXX,XX @@ static uint64_t unassigned_mem_read(void *opaque, hwaddr addr,
123
printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
124
#endif
125
if (current_cpu != NULL) {
126
- cpu_unassigned_access(current_cpu, addr, false, false, 0, size);
127
+ bool is_exec = current_cpu->mem_io_access_type == MMU_INST_FETCH;
128
+ cpu_unassigned_access(current_cpu, addr, false, is_exec, 0, size);
129
}
130
return 0;
131
}
132
--
133
2.18.0
134
135
Deleted patch
1
When we support execution from non-RAM MMIO regions, get_page_addr_code()
2
will return -1 to indicate that there is no RAM at the requested address.
3
Handle this in the cpu-exec TB hashtable lookup code, treating it as
4
"no match found".
5
1
6
Note that the call to get_page_addr_code() in tb_lookup_cmp() needs
7
no changes -- a return of -1 will already correctly result in the
8
function returning false.
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Emilio G. Cota <cota@braap.org>
13
Tested-by: Cédric Le Goater <clg@kaod.org>
14
Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
15
---
16
accel/tcg/cpu-exec.c | 3 +++
17
1 file changed, 3 insertions(+)
18
19
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/accel/tcg/cpu-exec.c
22
+++ b/accel/tcg/cpu-exec.c
23
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
24
desc.trace_vcpu_dstate = *cpu->trace_dstate;
25
desc.pc = pc;
26
phys_pc = get_page_addr_code(desc.env, pc);
27
+ if (phys_pc == -1) {
28
+ return NULL;
29
+ }
30
desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
31
h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
32
return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
33
--
34
2.18.0
35
36
Deleted patch
1
When we support execution from non-RAM MMIO regions, get_page_addr_code()
2
will return -1 to indicate that there is no RAM at the requested address.
3
Handle this in tb_check_watchpoint() -- if the exception happened for a
4
PC which doesn't correspond to RAM then there is no need to invalidate
5
any TBs, because the one-instruction TB will not have been cached.
6
1
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Tested-by: Cédric Le Goater <clg@kaod.org>
10
Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
11
---
12
accel/tcg/translate-all.c | 4 +++-
13
1 file changed, 3 insertions(+), 1 deletion(-)
14
15
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/accel/tcg/translate-all.c
18
+++ b/accel/tcg/translate-all.c
19
@@ -XXX,XX +XXX,XX @@ void tb_check_watchpoint(CPUState *cpu)
20
21
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
22
addr = get_page_addr_code(env, pc);
23
- tb_invalidate_phys_range(addr, addr + 1);
24
+ if (addr != -1) {
25
+ tb_invalidate_phys_range(addr, addr + 1);
26
+ }
27
}
28
}
29
30
--
31
2.18.0
32
33
Deleted patch
1
If get_page_addr_code() returns -1, this indicates that there is no RAM
2
page we can read a full TB from. Instead we must create a TB which
3
contains a single instruction and which we do not cache, so it is
4
executed only once.
5
1
6
Since this means we can now have TBs which are not in any page list,
7
we also need to make tb_phys_invalidate() handle them (by not trying
8
to remove them from a nonexistent page list).
9
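For illustration only (constant names and bit values are made up, not
QEMU's actual cflags layout), the cflags arithmetic in the patch amounts
to:

    #include <stdint.h>

    #define DEMO_CF_COUNT_MASK 0x0000ffffu  /* requested insn count */
    #define DEMO_CF_NOCACHE    0x00010000u  /* TB must not be cached */

    static uint32_t one_insn_cflags(uint32_t cflags)
    {
        cflags &= ~DEMO_CF_COUNT_MASK;        /* drop any count */
        return cflags | DEMO_CF_NOCACHE | 1;  /* one insn, uncached */
    }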
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Emilio G. Cota <cota@braap.org>
13
Tested-by: Cédric Le Goater <clg@kaod.org>
14
Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
15
---
16
accel/tcg/translate-all.c | 19 ++++++++++++++++++-
17
1 file changed, 18 insertions(+), 1 deletion(-)
18
19
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/accel/tcg/translate-all.c
22
+++ b/accel/tcg/translate-all.c
23
@@ -XXX,XX +XXX,XX @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
24
*/
25
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
26
{
27
- if (page_addr == -1) {
28
+ if (page_addr == -1 && tb->page_addr[0] != -1) {
29
page_lock_tb(tb);
30
do_tb_phys_invalidate(tb, true);
31
page_unlock_tb(tb);
32
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
33
34
assert_memory_lock();
35
36
+ if (phys_pc == -1) {
37
+ /*
38
+ * If the TB is not associated with a physical RAM page then
39
+ * it must be a temporary one-insn TB, and we have nothing to do
40
+ * except fill in the page_addr[] fields.
41
+ */
42
+ assert(tb->cflags & CF_NOCACHE);
43
+ tb->page_addr[0] = tb->page_addr[1] = -1;
44
+ return tb;
45
+ }
46
+
47
/*
48
* Add the TB to the page list, acquiring first the pages's locks.
49
* We keep the locks held until after inserting the TB in the hash table,
50
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
51
52
phys_pc = get_page_addr_code(env, pc);
53
54
+ if (phys_pc == -1) {
55
+ /* Generate a temporary TB with 1 insn in it */
56
+ cflags &= ~CF_COUNT_MASK;
57
+ cflags |= CF_NOCACHE | 1;
58
+ }
59
+
60
buffer_overflow:
61
tb = tb_alloc(pc);
62
if (unlikely(!tb)) {
63
--
64
2.18.0
65
66
Deleted patch
1
Now that all the callers can handle get_page_addr_code() returning -1,
2
remove all the code which tries to handle execution from MMIO regions
3
or small-MMU-region RAM areas. This will mean that we can correctly
4
execute from these areas, rather than ending up either aborting QEMU
5
or delivering an incorrect guest exception.
6
1
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Tested-by: Cédric Le Goater <clg@kaod.org>
11
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
13
---
14
accel/tcg/cputlb.c | 95 +++++-----------------------------------------
15
1 file changed, 10 insertions(+), 85 deletions(-)
16
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/accel/tcg/cputlb.c
20
+++ b/accel/tcg/cputlb.c
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
22
prot, mmu_idx, size);
23
}
24
25
-static void report_bad_exec(CPUState *cpu, target_ulong addr)
26
-{
27
- /* Accidentally executing outside RAM or ROM is quite common for
28
- * several user-error situations, so report it in a way that
29
- * makes it clear that this isn't a QEMU bug and provide suggestions
30
- * about what a user could do to fix things.
31
- */
32
- error_report("Trying to execute code outside RAM or ROM at 0x"
33
- TARGET_FMT_lx, addr);
34
- error_printf("This usually means one of the following happened:\n\n"
35
- "(1) You told QEMU to execute a kernel for the wrong machine "
36
- "type, and it crashed on startup (eg trying to run a "
37
- "raspberry pi kernel on a versatilepb QEMU machine)\n"
38
- "(2) You didn't give QEMU a kernel or BIOS filename at all, "
39
- "and QEMU executed a ROM full of no-op instructions until "
40
- "it fell off the end\n"
41
- "(3) Your guest kernel has a bug and crashed by jumping "
42
- "off into nowhere\n\n"
43
- "This is almost always one of the first two, so check your "
44
- "command line and that you are using the right type of kernel "
45
- "for this machine.\n"
46
- "If you think option (3) is likely then you can try debugging "
47
- "your guest with the -d debug options; in particular "
48
- "-d guest_errors will cause the log to include a dump of the "
49
- "guest register state at this point.\n\n"
50
- "Execution cannot continue; stopping here.\n\n");
51
-
52
- /* Report also to the logs, with more detail including register dump */
53
- qemu_log_mask(LOG_GUEST_ERROR, "qemu: fatal: Trying to execute code "
54
- "outside RAM or ROM at 0x" TARGET_FMT_lx "\n", addr);
55
- log_cpu_state_mask(LOG_GUEST_ERROR, cpu, CPU_DUMP_FPU | CPU_DUMP_CCOP);
56
-}
57
-
58
static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
59
{
60
ram_addr_t ram_addr;
61
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
62
MemoryRegionSection *section;
63
CPUState *cpu = ENV_GET_CPU(env);
64
CPUIOTLBEntry *iotlbentry;
65
- hwaddr physaddr, mr_offset;
66
67
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
68
mmu_idx = cpu_mmu_index(env, true);
69
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
70
if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
71
/*
72
* This is a TLB_RECHECK access, where the MMU protection
73
- * covers a smaller range than a target page, and we must
74
- * repeat the MMU check here. This tlb_fill() call might
75
- * longjump out if this access should cause a guest exception.
76
- */
77
- int index;
78
- target_ulong tlb_addr;
79
-
80
- tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
81
-
82
- index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
83
- tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
84
- if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
85
- /* RAM access. We can't handle this, so for now just stop */
86
- cpu_abort(cpu, "Unable to handle guest executing from RAM within "
87
- "a small MPU region at 0x" TARGET_FMT_lx, addr);
88
- }
89
- /*
90
- * Fall through to handle IO accesses (which will almost certainly
91
- * also result in failure)
92
+ * covers a smaller range than a target page. Return -1 to
93
+ * indicate that we cannot simply execute from RAM here;
94
+ * we will perform the necessary repeat of the MMU check
95
+ * when the "execute a single insn" code performs the
96
+ * load of the guest insn.
97
*/
98
+ return -1;
99
}
100
101
iotlbentry = &env->iotlb[mmu_idx][index];
102
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
103
mr = section->mr;
104
if (memory_region_is_unassigned(mr)) {
105
- qemu_mutex_lock_iothread();
106
- if (memory_region_request_mmio_ptr(mr, addr)) {
107
- qemu_mutex_unlock_iothread();
108
- /* A MemoryRegion is potentially added so re-run the
109
- * get_page_addr_code.
110
- */
111
- return get_page_addr_code(env, addr);
112
- }
113
- qemu_mutex_unlock_iothread();
114
-
115
- /* Give the new-style cpu_transaction_failed() hook first chance
116
- * to handle this.
117
- * This is not the ideal place to detect and generate CPU
118
- * exceptions for instruction fetch failure (for instance
119
- * we don't know the length of the access that the CPU would
120
- * use, and it would be better to go ahead and try the access
121
- * and use the MemTXResult it produced). However it is the
122
- * simplest place we have currently available for the check.
123
+ /*
124
+ * Not guest RAM, so there is no ram_addr_t for it. Return -1,
125
+ * and we will execute a single insn from this device.
126
*/
127
- mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
128
- physaddr = mr_offset +
129
- section->offset_within_address_space -
130
- section->offset_within_region;
131
- cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
132
- iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
133
-
134
- cpu_unassigned_access(cpu, addr, false, true, 0, 4);
135
- /* The CPU's unassigned access hook might have longjumped out
136
- * with an exception. If it didn't (or there was no hook) then
137
- * we can't proceed further.
138
- */
139
- report_bad_exec(cpu, addr);
140
- exit(1);
141
+ return -1;
142
}
143
p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
144
return qemu_ram_addr_from_host_nofail(p);
145
--
146
2.18.0
147
148
Deleted patch
1
We set up TLB entries in tlb_set_page_with_attrs(), where we have
2
some logic for determining whether the TLB entry is considered
3
to be RAM-backed, and thus has a valid addend field. When we
4
look at the TLB entry in get_page_addr_code(), we use different
5
logic for determining whether to treat the page as RAM-backed
6
and use the addend field. This is confusing, and in fact buggy,
7
because the code in tlb_set_page_with_attrs() correctly decides
8
that rom_device memory regions not in romd mode are not RAM-backed,
9
but the code in get_page_addr_code() thinks they are RAM-backed.
10
This typically results in a "Bad ram pointer" assertion if the
11
guest tries to execute from such a memory region.
12
1
13
Fix this by making get_page_addr_code() just look at the
14
TLB_MMIO bit in the code_address field of the TLB, which
15
tlb_set_page_with_attrs() sets if and only if the addend
16
field is not valid for code execution.
17
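As background (a sketch with made-up bit positions, not QEMU's actual
TLB flag values): such flags live in the low bits of the TLB address
fields, below the page-size bits, so a single AND answers "is this page
directly executable RAM?":

    #include <stdint.h>

    #define DEMO_TLB_MMIO    (1u << 0)  /* illustrative */
    #define DEMO_TLB_RECHECK (1u << 1)  /* illustrative */

    /* Directly executable RAM iff neither flag is set. */
    static int executable_from_ram(uint32_t addr_code)
    {
        return (addr_code & (DEMO_TLB_MMIO | DEMO_TLB_RECHECK)) == 0;
    }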
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Message-id: 20180713150945.12348-1-peter.maydell@linaro.org
22
---
23
include/exec/exec-all.h | 2 --
24
accel/tcg/cputlb.c | 29 ++++++++---------------------
25
exec.c | 6 ------
26
3 files changed, 8 insertions(+), 29 deletions(-)
27
28
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/include/exec/exec-all.h
31
+++ b/include/exec/exec-all.h
32
@@ -XXX,XX +XXX,XX @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
33
hwaddr paddr, hwaddr xlat,
34
int prot,
35
target_ulong *address);
36
-bool memory_region_is_unassigned(MemoryRegion *mr);
37
-
38
#endif
39
40
/* vl.c */
41
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
42
index XXXXXXX..XXXXXXX 100644
43
--- a/accel/tcg/cputlb.c
44
+++ b/accel/tcg/cputlb.c
45
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
46
{
47
int mmu_idx, index;
48
void *p;
49
- MemoryRegion *mr;
50
- MemoryRegionSection *section;
51
- CPUState *cpu = ENV_GET_CPU(env);
52
- CPUIOTLBEntry *iotlbentry;
53
54
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
55
mmu_idx = cpu_mmu_index(env, true);
56
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
57
assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
58
}
59
60
- if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
61
+ if (unlikely(env->tlb_table[mmu_idx][index].addr_code &
62
+ (TLB_RECHECK | TLB_MMIO))) {
63
/*
64
- * This is a TLB_RECHECK access, where the MMU protection
65
- * covers a smaller range than a target page. Return -1 to
66
- * indicate that we cannot simply execute from RAM here;
67
- * we will perform the necessary repeat of the MMU check
68
- * when the "execute a single insn" code performs the
69
- * load of the guest insn.
70
+ * Return -1 if we can't translate and execute from an entire
71
+ * page of RAM here, which will cause us to execute by loading
72
+ * and translating one insn at a time, without caching:
73
+ * - TLB_RECHECK: means the MMU protection covers a smaller range
74
+ * than a target page, so we must redo the MMU check every insn
75
+ * - TLB_MMIO: region is not backed by RAM
76
*/
77
return -1;
78
}
79
80
- iotlbentry = &env->iotlb[mmu_idx][index];
81
- section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
82
- mr = section->mr;
83
- if (memory_region_is_unassigned(mr)) {
84
- /*
85
- * Not guest RAM, so there is no ram_addr_t for it. Return -1,
86
- * and we will execute a single insn from this device.
87
- */
88
- return -1;
89
- }
90
p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
91
return qemu_ram_addr_from_host_nofail(p);
92
}
93
diff --git a/exec.c b/exec.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/exec.c
96
+++ b/exec.c
97
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
98
}
99
}
100
101
-bool memory_region_is_unassigned(MemoryRegion *mr)
102
-{
103
- return mr != &io_mem_rom && mr != &io_mem_notdirty && !mr->rom_device
104
- && mr != &io_mem_watch;
105
-}
106
-
107
/* Called from RCU critical section */
108
static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
109
hwaddr addr,
110
--
111
2.18.0
112
113
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers in the GICv2.
4
Those registers allow setting or clearing the active state of an IRQ in the
5
distributor.
6
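As an illustrative aside, the offset decoding used in the patch follows
from each register bit covering one interrupt, so each byte covers 8
IRQs; 0x300 is the GICD_ISACTIVERn base and 0x380 the GICD_ICACTIVERn
base:

    #include <stdint.h>

    /* First IRQ covered by the byte at 'offset' (sketch only;
     * bounds checking and GIC_BASE_IRQ omitted). */
    static unsigned active_reg_first_irq(uint32_t offset)
    {
        uint32_t base = (offset < 0x380) ? 0x300 : 0x380;
        return (offset - base) * 8;
    }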
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180727095421.386-3-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/intc/arm_gic.c | 61 +++++++++++++++++++++++++++++++++++++++++++----
13
1 file changed, 57 insertions(+), 4 deletions(-)
14
15
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gic.c
18
+++ b/hw/intc/arm_gic.c
19
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
20
}
21
}
22
} else if (offset < 0x400) {
23
- /* Interrupt Active. */
24
- irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
25
+ /* Interrupt Set/Clear Active. */
26
+ if (offset < 0x380) {
27
+ irq = (offset - 0x300) * 8;
28
+ } else if (s->revision == 2) {
29
+ irq = (offset - 0x380) * 8;
30
+ } else {
31
+ goto bad_reg;
32
+ }
33
+
34
+ irq += GIC_BASE_IRQ;
35
if (irq >= s->num_irq)
36
goto bad_reg;
37
res = 0;
38
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
39
GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
40
}
41
}
42
+ } else if (offset < 0x380) {
43
+ /* Interrupt Set Active. */
44
+ if (s->revision != 2) {
45
+ goto bad_reg;
46
+ }
47
+
48
+ irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
49
+ if (irq >= s->num_irq) {
50
+ goto bad_reg;
51
+ }
52
+
53
+ /* This register is banked per-cpu for PPIs */
54
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
55
+
56
+ for (i = 0; i < 8; i++) {
57
+ if (s->security_extn && !attrs.secure &&
58
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
59
+ continue; /* Ignore Non-secure access of Group0 IRQ */
60
+ }
61
+
62
+ if (value & (1 << i)) {
63
+ GIC_DIST_SET_ACTIVE(irq + i, cm);
64
+ }
65
+ }
66
} else if (offset < 0x400) {
67
- /* Interrupt Active. */
68
- goto bad_reg;
69
+ /* Interrupt Clear Active. */
70
+ if (s->revision != 2) {
71
+ goto bad_reg;
72
+ }
73
+
74
+ irq = (offset - 0x380) * 8 + GIC_BASE_IRQ;
75
+ if (irq >= s->num_irq) {
76
+ goto bad_reg;
77
+ }
78
+
79
+ /* This register is banked per-cpu for PPIs */
80
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
81
+
82
+ for (i = 0; i < 8; i++) {
83
+ if (s->security_extn && !attrs.secure &&
84
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
85
+ continue; /* Ignore Non-secure access of Group0 IRQ */
86
+ }
87
+
88
+ if (value & (1 << i)) {
89
+ GIC_DIST_CLEAR_ACTIVE(irq + i, cm);
90
+ }
91
+ }
92
} else if (offset < 0x800) {
93
/* Interrupt Priority. */
94
irq = (offset - 0x400) + GIC_BASE_IRQ;
95
--
96
2.18.0
97
98
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Some functions are now only used in arm_gic.c, so make them static. Some of
4
them were only used by the NVIC implementation and are not used
5
anymore, so remove them.
6
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20180727095421.386-4-luc.michel@greensocs.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/intc/gic_internal.h | 4 ----
14
hw/intc/arm_gic.c | 23 ++---------------------
15
2 files changed, 2 insertions(+), 25 deletions(-)
16
17
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/gic_internal.h
20
+++ b/hw/intc/gic_internal.h
21
@@ -XXX,XX +XXX,XX @@
22
/* The special cases for the revision property: */
23
#define REV_11MPCORE 0
24
25
-void gic_set_pending_private(GICState *s, int cpu, int irq);
26
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
27
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
28
-void gic_update(GICState *s);
29
-void gic_init_irqs_and_distributor(GICState *s);
30
void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
31
MemTxAttrs attrs);
32
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/arm_gic.c
36
+++ b/hw/intc/arm_gic.c
37
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
38
39
/* TODO: Many places that call this routine could be optimized. */
40
/* Update interrupt status after enabled or pending bits have been changed. */
41
-void gic_update(GICState *s)
42
+static void gic_update(GICState *s)
43
{
44
int best_irq;
45
int best_prio;
46
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
47
}
48
}
49
50
-void gic_set_pending_private(GICState *s, int cpu, int irq)
51
-{
52
- int cm = 1 << cpu;
53
-
54
- if (gic_test_pending(s, irq, cm)) {
55
- return;
56
- }
57
-
58
- DPRINTF("Set %d pending cpu %d\n", irq, cpu);
59
- GIC_DIST_SET_PENDING(irq, cm);
60
- gic_update(s);
61
-}
62
-
63
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
64
int cm, int target)
65
{
66
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
67
GIC_DIST_CLEAR_ACTIVE(irq, cm);
68
}
69
70
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
71
+static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
72
{
73
int cm = 1 << cpu;
74
int group;
75
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
76
.endianness = DEVICE_NATIVE_ENDIAN,
77
};
78
79
-/* This function is used by nvic model */
80
-void gic_init_irqs_and_distributor(GICState *s)
81
-{
82
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
83
-}
84
-
85
static void arm_gic_realize(DeviceState *dev, Error **errp)
86
{
87
/* Device instance realize function for the GIC sysbus device */
88
--
89
2.18.0
90
91
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Provide a VMSTATE_UINT16_SUB_ARRAY macro to save a uint16_t sub-array in
4
a VMState.
5
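A hypothetical usage sketch (DemoState is made up; the macro arguments
are field, struct type, start index, element count):

    #include "migration/vmstate.h"

    typedef struct DemoState {
        uint16_t regs[8];
    } DemoState;

    /* Migrates only elements [2, 6) of 'regs'. */
    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT16_SUB_ARRAY(regs, DemoState, 2, 4),
            VMSTATE_END_OF_LIST()
        }
    };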
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20180727095421.386-5-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/migration/vmstate.h | 3 +++
13
1 file changed, 3 insertions(+)
14
15
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/migration/vmstate.h
18
+++ b/include/migration/vmstate.h
19
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
20
#define VMSTATE_UINT16_ARRAY(_f, _s, _n) \
21
VMSTATE_UINT16_ARRAY_V(_f, _s, _n, 0)
22
23
+#define VMSTATE_UINT16_SUB_ARRAY(_f, _s, _start, _num) \
24
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_uint16, uint16_t)
25
+
26
#define VMSTATE_UINT16_2DARRAY(_f, _s, _n1, _n2) \
27
VMSTATE_UINT16_2DARRAY_V(_f, _s, _n1, _n2, 0)
28
29
--
30
2.18.0
31
32
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Add the register definitions for the virtual interface of the GICv2.
4
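As a usage note (a sketch assuming the definitions from this patch are
in scope, not code from the patch itself): each FIELD() also feeds the
generic extract/deposit helpers in hw/registerfields.h, e.g.:

    #include "hw/registerfields.h"

    /* Read-modify-write of the EOICount field via the R_GICH_HCR_*
     * constants that FIELD(GICH_HCR, EOICount, 27, 5) generates. */
    static uint32_t bump_eoicount(uint32_t hcr)
    {
        uint32_t n = FIELD_EX32(hcr, GICH_HCR, EOICount);
        return FIELD_DP32(hcr, GICH_HCR, EOICount, n + 1);
    }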
5
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180727095421.386-7-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/intc/gic_internal.h | 65 ++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 65 insertions(+)
12
13
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/gic_internal.h
16
+++ b/hw/intc/gic_internal.h
17
@@ -XXX,XX +XXX,XX @@
18
#ifndef QEMU_ARM_GIC_INTERNAL_H
19
#define QEMU_ARM_GIC_INTERNAL_H
20
21
+#include "hw/registerfields.h"
22
#include "hw/intc/arm_gic.h"
23
24
#define ALL_CPU_MASK ((unsigned)(((1 << GIC_NCPU) - 1)))
25
@@ -XXX,XX +XXX,XX @@
26
#define GICC_CTLR_EOIMODE (1U << 9)
27
#define GICC_CTLR_EOIMODE_NS (1U << 10)
28
29
+REG32(GICH_HCR, 0x0)
30
+ FIELD(GICH_HCR, EN, 0, 1)
31
+ FIELD(GICH_HCR, UIE, 1, 1)
32
+ FIELD(GICH_HCR, LRENPIE, 2, 1)
33
+ FIELD(GICH_HCR, NPIE, 3, 1)
34
+ FIELD(GICH_HCR, VGRP0EIE, 4, 1)
35
+ FIELD(GICH_HCR, VGRP0DIE, 5, 1)
36
+ FIELD(GICH_HCR, VGRP1EIE, 6, 1)
37
+ FIELD(GICH_HCR, VGRP1DIE, 7, 1)
38
+ FIELD(GICH_HCR, EOICount, 27, 5)
39
+
40
+#define GICH_HCR_MASK \
41
+ (R_GICH_HCR_EN_MASK | R_GICH_HCR_UIE_MASK | \
42
+ R_GICH_HCR_LRENPIE_MASK | R_GICH_HCR_NPIE_MASK | \
43
+ R_GICH_HCR_VGRP0EIE_MASK | R_GICH_HCR_VGRP0DIE_MASK | \
44
+ R_GICH_HCR_VGRP1EIE_MASK | R_GICH_HCR_VGRP1DIE_MASK | \
45
+ R_GICH_HCR_EOICount_MASK)
46
+
47
+REG32(GICH_VTR, 0x4)
48
+ FIELD(GICH_VTR, ListRegs, 0, 6)
49
+ FIELD(GICH_VTR, PREbits, 26, 3)
50
+ FIELD(GICH_VTR, PRIbits, 29, 3)
51
+
52
+REG32(GICH_VMCR, 0x8)
53
+ FIELD(GICH_VMCR, VMCCtlr, 0, 10)
54
+ FIELD(GICH_VMCR, VMABP, 18, 3)
55
+ FIELD(GICH_VMCR, VMBP, 21, 3)
56
+ FIELD(GICH_VMCR, VMPriMask, 27, 5)
57
+
58
+REG32(GICH_MISR, 0x10)
59
+ FIELD(GICH_MISR, EOI, 0, 1)
60
+ FIELD(GICH_MISR, U, 1, 1)
61
+ FIELD(GICH_MISR, LRENP, 2, 1)
62
+ FIELD(GICH_MISR, NP, 3, 1)
63
+ FIELD(GICH_MISR, VGrp0E, 4, 1)
64
+ FIELD(GICH_MISR, VGrp0D, 5, 1)
65
+ FIELD(GICH_MISR, VGrp1E, 6, 1)
66
+ FIELD(GICH_MISR, VGrp1D, 7, 1)
67
+
68
+REG32(GICH_EISR0, 0x20)
69
+REG32(GICH_EISR1, 0x24)
70
+REG32(GICH_ELRSR0, 0x30)
71
+REG32(GICH_ELRSR1, 0x34)
72
+REG32(GICH_APR, 0xf0)
73
+
74
+REG32(GICH_LR0, 0x100)
75
+ FIELD(GICH_LR0, VirtualID, 0, 10)
76
+ FIELD(GICH_LR0, PhysicalID, 10, 10)
77
+ FIELD(GICH_LR0, CPUID, 10, 3)
78
+ FIELD(GICH_LR0, EOI, 19, 1)
79
+ FIELD(GICH_LR0, Priority, 23, 5)
80
+ FIELD(GICH_LR0, State, 28, 2)
81
+ FIELD(GICH_LR0, Grp1, 30, 1)
82
+ FIELD(GICH_LR0, HW, 31, 1)
83
+
84
+/* Last LR register */
85
+REG32(GICH_LR63, 0x1fc)
86
+
87
+#define GICH_LR_MASK \
88
+ (R_GICH_LR0_VirtualID_MASK | R_GICH_LR0_PhysicalID_MASK | \
89
+ R_GICH_LR0_CPUID_MASK | R_GICH_LR0_EOI_MASK | \
90
+ R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
91
+ R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
92
+
93
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
94
* GICv2 and GICv2 with security extensions:
95
*/
96
--
97
2.18.0
98
99
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
An access to the CPU interface is non-secure if the current GIC instance
4
implements the security extensions, and the memory access is actually
5
non-secure. Until now, this was checked with tests such as
6
if (s->security_extn && !attrs.secure) { ... }
7
in various places of the CPU interface code.
8
9
With the implementation of the virtualization extensions, those tests
10
must be updated to take into account whether we are in a vCPU interface
11
or not. This is because the exposed vCPU interface does not implement
12
security extensions.
13
14
This commit replaces all those tests with a call to the
15
gic_cpu_ns_access() function to check if the current access to the CPU
16
interface is non-secure. This function takes into account whether the
17
current CPU is a vCPU or not.
18
19
Note that this function is used only in the (v)CPU interface code path.
20
The distributor code path is left unchanged, as the distributor is not
21
exposed to vCPUs at all.
22
23
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
26
Message-id: 20180727095421.386-9-luc.michel@greensocs.com
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
---
29
hw/intc/arm_gic.c | 39 ++++++++++++++++++++++-----------------
30
1 file changed, 22 insertions(+), 17 deletions(-)
31
32
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/intc/arm_gic.c
35
+++ b/hw/intc/arm_gic.c
36
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
37
return s->revision == 2 || s->security_extn;
38
}
39
40
+static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
41
+{
42
+ return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
43
+}
44
+
45
/* TODO: Many places that call this routine could be optimized. */
46
/* Update interrupt status after enabled or pending bits have been changed. */
47
static void gic_update(GICState *s)
48
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
49
/* On a GIC without the security extensions, reading this register
50
* behaves in the same way as a secure access to a GIC with them.
51
*/
52
- bool secure = !s->security_extn || attrs.secure;
53
+ bool secure = !gic_cpu_ns_access(s, cpu, attrs);
54
55
if (group == 0 && !secure) {
56
/* Group0 interrupts hidden from Non-secure access */
57
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
58
static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
59
MemTxAttrs attrs)
60
{
61
- if (s->security_extn && !attrs.secure) {
62
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
63
if (s->priority_mask[cpu] & 0x80) {
64
/* Priority Mask in upper half */
65
pmask = 0x80 | (pmask >> 1);
66
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_priority_mask(GICState *s, int cpu, MemTxAttrs attrs)
67
{
68
uint32_t pmask = s->priority_mask[cpu];
69
70
- if (s->security_extn && !attrs.secure) {
71
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
72
if (pmask & 0x80) {
73
/* Priority Mask in upper half, return Non-secure view */
74
pmask = (pmask << 1) & 0xff;
75
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_cpu_control(GICState *s, int cpu, MemTxAttrs attrs)
76
{
77
uint32_t ret = s->cpu_ctlr[cpu];
78
79
- if (s->security_extn && !attrs.secure) {
80
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
81
/* Construct the NS banked view of GICC_CTLR from the correct
82
* bits of the S banked view. We don't need to move the bypass
83
* control bits because we don't implement that (IMPDEF) part
84
@@ -XXX,XX +XXX,XX @@ static void gic_set_cpu_control(GICState *s, int cpu, uint32_t value,
85
{
86
uint32_t mask;
87
88
- if (s->security_extn && !attrs.secure) {
89
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
90
/* The NS view can only write certain bits in the register;
91
* the rest are unchanged
92
*/
93
@@ -XXX,XX +XXX,XX @@ static uint8_t gic_get_running_priority(GICState *s, int cpu, MemTxAttrs attrs)
94
return 0xff;
95
}
96
97
- if (s->security_extn && !attrs.secure) {
98
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
99
if (s->running_priority[cpu] & 0x80) {
100
/* Running priority in upper half of range: return the Non-secure
101
* view of the priority.
102
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
103
/* Before GICv2 prio-drop and deactivate are not separable */
104
return false;
105
}
106
- if (s->security_extn && !attrs.secure) {
107
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
108
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE_NS;
109
}
110
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE;
111
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
112
return;
113
}
114
115
- if (s->security_extn && !attrs.secure && !group) {
116
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
117
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
118
return;
119
}
120
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
121
122
group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
123
124
- if (s->security_extn && !attrs.secure && !group) {
125
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
126
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
127
return;
128
}
129
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
130
*data = gic_get_priority_mask(s, cpu, attrs);
131
break;
132
case 0x08: /* Binary Point */
133
- if (s->security_extn && !attrs.secure) {
134
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
135
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
136
/* NS view of BPR when CBPR is 1 */
137
*data = MIN(s->bpr[cpu] + 1, 7);
138
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
139
* With security extensions, secure access: ABPR (alias of NS BPR)
140
* With security extensions, nonsecure access: RAZ/WI
141
*/
142
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
143
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
144
*data = 0;
145
} else {
146
*data = s->abpr[cpu];
147
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
148
149
if (regno >= GIC_NR_APRS || s->revision != 2) {
150
*data = 0;
151
- } else if (s->security_extn && !attrs.secure) {
152
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
153
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
154
*data = gic_apr_ns_view(s, regno, cpu);
155
} else {
156
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
157
int regno = (offset - 0xe0) / 4;
158
159
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
160
- (s->security_extn && !attrs.secure)) {
161
+ gic_cpu_ns_access(s, cpu, attrs)) {
162
*data = 0;
163
} else {
164
*data = s->nsapr[regno][cpu];
165
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
166
gic_set_priority_mask(s, cpu, value, attrs);
167
break;
168
case 0x08: /* Binary Point */
169
- if (s->security_extn && !attrs.secure) {
170
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
171
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
172
/* WI when CBPR is 1 */
173
return MEMTX_OK;
174
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
175
gic_complete_irq(s, cpu, value & 0x3ff, attrs);
176
return MEMTX_OK;
177
case 0x1c: /* Aliased Binary Point */
178
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
179
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
180
/* unimplemented, or NS access: RAZ/WI */
181
return MEMTX_OK;
182
} else {
183
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
184
if (regno >= GIC_NR_APRS || s->revision != 2) {
185
return MEMTX_OK;
186
}
187
- if (s->security_extn && !attrs.secure) {
188
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
189
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
190
gic_apr_write_ns_view(s, regno, cpu, value);
191
} else {
192
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
193
if (regno >= GIC_NR_APRS || s->revision != 2) {
194
return MEMTX_OK;
195
}
196
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
197
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
198
return MEMTX_OK;
199
}
200
s->nsapr[regno][cpu] = value;
201
--
202
2.18.0
203
204
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Implement virtualization extensions in gic_activate_irq() and
4
gic_drop_prio(), and in gic_get_prio_from_apr_bits(), which is called by
5
gic_drop_prio().
6
7
When the current CPU is a vCPU:
8
- Use GIC_VIRT_MIN_BPR and GIC_VIRT_NR_APRS instead of their non-virt
9
counterparts,
10
- the vCPU APR is stored in the virtual interface, in h_apr.
11
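A worked sketch of the APR bit arithmetic (illustrative only): with
min_bpr = 0, a group priority of 0x98 gives preemption level 0x4c = 76,
i.e. bit 12 of APR register 2:

    /* Map a group priority to its (register, bit) position. */
    static void apr_position(int prio, int min_bpr,
                             int *regno, int *bitno)
    {
        int level = prio >> (min_bpr + 1);
        *regno = level / 32;
        *bitno = level % 32;
    }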
12
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 20180727095421.386-11-luc.michel@greensocs.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
hw/intc/arm_gic.c | 50 +++++++++++++++++++++++++++++++++++------------
18
1 file changed, 38 insertions(+), 12 deletions(-)
19
20
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/arm_gic.c
23
+++ b/hw/intc/arm_gic.c
24
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
25
* and update the running priority.
26
*/
27
int prio = gic_get_group_priority(s, cpu, irq);
28
- int preemption_level = prio >> (GIC_MIN_BPR + 1);
29
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
30
+ int preemption_level = prio >> (min_bpr + 1);
31
int regno = preemption_level / 32;
32
int bitno = preemption_level % 32;
33
+ uint32_t *papr = NULL;
34
35
- if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
36
- s->nsapr[regno][cpu] |= (1 << bitno);
37
+ if (gic_is_vcpu(cpu)) {
38
+ assert(regno == 0);
39
+ papr = &s->h_apr[gic_get_vcpu_real_id(cpu)];
40
+ } else if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
41
+ papr = &s->nsapr[regno][cpu];
42
} else {
43
- s->apr[regno][cpu] |= (1 << bitno);
44
+ papr = &s->apr[regno][cpu];
45
}
46
47
+ *papr |= (1 << bitno);
48
+
49
s->running_priority[cpu] = prio;
50
gic_set_active(s, irq, cpu);
51
}
52
@@ -XXX,XX +XXX,XX @@ static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
53
* on the set bits in the Active Priority Registers.
54
*/
55
int i;
56
+
57
+ if (gic_is_vcpu(cpu)) {
58
+ uint32_t apr = s->h_apr[gic_get_vcpu_real_id(cpu)];
59
+ if (apr) {
60
+ return ctz32(apr) << (GIC_VIRT_MIN_BPR + 1);
61
+ } else {
62
+ return 0x100;
63
+ }
64
+ }
65
+
66
for (i = 0; i < GIC_NR_APRS; i++) {
67
uint32_t apr = s->apr[i][cpu] | s->nsapr[i][cpu];
68
if (!apr) {
69
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
70
* running priority will be wrong, so interrupts that should preempt
71
* might not do so, and interrupts that should not preempt might do so.
72
*/
73
- int i;
74
+ if (gic_is_vcpu(cpu)) {
75
+ int rcpu = gic_get_vcpu_real_id(cpu);
76
77
- for (i = 0; i < GIC_NR_APRS; i++) {
78
- uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
79
- if (!*papr) {
80
- continue;
81
+ if (s->h_apr[rcpu]) {
82
+ /* Clear lowest set bit */
83
+ s->h_apr[rcpu] &= s->h_apr[rcpu] - 1;
84
+ }
85
+ } else {
86
+ int i;
87
+
88
+ for (i = 0; i < GIC_NR_APRS; i++) {
89
+ uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
90
+ if (!*papr) {
91
+ continue;
92
+ }
93
+ /* Clear lowest set bit */
94
+ *papr &= *papr - 1;
95
+ break;
96
}
97
- /* Clear lowest set bit */
98
- *papr &= *papr - 1;
99
- break;
100
}
101
102
s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
103
--
104
2.18.0
105
106
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Implement virtualization extensions in the gic_acknowledge_irq()
4
function. This function changes the state of the highest priority IRQ
5
from pending to active.
6
7
When the current CPU is a vCPU, modifying the state of an IRQ modifies
8
the corresponding LR entry. However, if we clear the pending flag before
9
setting the active one, we lose track of the LR entry as it becomes
10
invalid. The next call to gic_get_lr_entry() will fail.
11
12
To overcome this issue, we call gic_activate_irq() before
13
gic_clear_pending(). This does not change the general behaviour of
14
gic_acknowledge_irq.
15
16
We also move the SGI case into gic_clear_pending_sgi() to enhance
17
code readability, as the virtualization extensions support adds an if-else
18
level.
19
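To illustrate the ordering constraint above (a simplified sketch, not
the patch's code): GICv2 encodes a list register's state in two bits,
00 = invalid, 01 = pending, 10 = active, 11 = pending-and-active, so
setting active before clearing pending keeps the LR valid throughout:

    enum { LR_ST_PENDING = 1, LR_ST_ACTIVE = 2 };

    static int ack_lr_state(int state)
    {
        state |= LR_ST_ACTIVE;    /* activate first: LR stays valid */
        state &= ~LR_ST_PENDING;  /* now safe to clear pending */
        return state;
    }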
20
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Message-id: 20180727095421.386-12-luc.michel@greensocs.com
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
---
25
hw/intc/arm_gic.c | 52 ++++++++++++++++++++++++++++++-----------------
26
1 file changed, 33 insertions(+), 19 deletions(-)
27
28
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/intc/arm_gic.c
31
+++ b/hw/intc/arm_gic.c
32
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
33
s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
34
}
35
36
+static inline uint32_t gic_clear_pending_sgi(GICState *s, int irq, int cpu)
37
+{
38
+ int src;
39
+ uint32_t ret;
40
+
41
+ if (!gic_is_vcpu(cpu)) {
42
+ /* Lookup the source CPU for the SGI and clear this in the
43
+ * sgi_pending map. Return the src and clear the overall pending
44
+ * state on this CPU if the SGI is not pending from any CPUs.
45
+ */
46
+ assert(s->sgi_pending[irq][cpu] != 0);
47
+ src = ctz32(s->sgi_pending[irq][cpu]);
48
+ s->sgi_pending[irq][cpu] &= ~(1 << src);
49
+ if (s->sgi_pending[irq][cpu] == 0) {
50
+ gic_clear_pending(s, irq, cpu);
51
+ }
52
+ ret = irq | ((src & 0x7) << 10);
53
+ } else {
54
+ uint32_t *lr_entry = gic_get_lr_entry(s, irq, cpu);
55
+ src = GICH_LR_CPUID(*lr_entry);
56
+
57
+ gic_clear_pending(s, irq, cpu);
58
+ ret = irq | (src << 10);
59
+ }
60
+
61
+ return ret;
62
+}
63
+
64
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
65
{
66
- int ret, irq, src;
67
- int cm = 1 << cpu;
68
+ int ret, irq;
69
70
/* gic_get_current_pending_irq() will return 1022 or 1023 appropriately
71
* for the case where this GIC supports grouping and the pending interrupt
72
* is in the wrong group.
73
*/
74
irq = gic_get_current_pending_irq(s, cpu, attrs);
75
- trace_gic_acknowledge_irq(cpu, irq);
76
+ trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
77
78
if (irq >= GIC_MAXIRQ) {
79
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
80
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
81
return 1023;
82
}
83
84
+ gic_activate_irq(s, cpu, irq);
85
+
86
if (s->revision == REV_11MPCORE) {
87
/* Clear pending flags for both level and edge triggered interrupts.
88
* Level triggered IRQs will be reasserted once they become inactive.
89
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
90
ret = irq;
91
} else {
92
if (irq < GIC_NR_SGIS) {
93
- /* Lookup the source CPU for the SGI and clear this in the
94
- * sgi_pending map. Return the src and clear the overall pending
95
- * state on this CPU if the SGI is not pending from any CPUs.
96
- */
97
- assert(s->sgi_pending[irq][cpu] != 0);
98
- src = ctz32(s->sgi_pending[irq][cpu]);
99
- s->sgi_pending[irq][cpu] &= ~(1 << src);
100
- if (s->sgi_pending[irq][cpu] == 0) {
101
- gic_clear_pending(s, irq, cpu);
102
- }
103
- ret = irq | ((src & 0x7) << 10);
104
+ ret = gic_clear_pending_sgi(s, irq, cpu);
105
} else {
106
- /* Clear pending state for both level and edge triggered
107
- * interrupts. (level triggered interrupts with an active line
108
- * remain pending, see gic_test_pending)
109
- */
110
gic_clear_pending(s, irq, cpu);
111
ret = irq;
112
}
113
}
114
115
- gic_activate_irq(s, cpu, irq);
116
gic_update(s);
117
DPRINTF("ACK %d\n", irq);
118
return ret;
119
--
120
2.18.0
121
122
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Implement virtualization extensions in the gic_deactivate_irq() and
4
gic_complete_irq() functions.
5
6
When the guest writes an invalid vIRQ to V_EOIR or V_DIR, since the
7
GICv2 specification is not entirely clear here, we adopt the behaviour
8
observed on real hardware:
9
* When V_CTRL.EOIMode is false (EOI split is disabled):
10
- In case of an invalid vIRQ write to V_EOIR:
11
-> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
12
triggers a priority drop, and increments V_HCR.EOICount.
13
-> If V_APR is already cleared, nothing happens
14
15
- An invalid vIRQ write to V_DIR is ignored.
16
17
* When V_CTRL.EOIMode is true:
18
- In case of an invalid vIRQ write to V_EOIR:
19
-> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
20
triggers a priority drop.
21
-> If V_APR is already cleared, nothing happens
22
23
- An invalid vIRQ write to V_DIR increments V_HCR.EOICount.
24
25
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
26
Message-id: 20180727095421.386-13-luc.michel@greensocs.com
27
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
29
---
30
hw/intc/arm_gic.c | 51 +++++++++++++++++++++++++++++++++++++++++++----
31
1 file changed, 47 insertions(+), 4 deletions(-)
32
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/arm_gic.c
36
+++ b/hw/intc/arm_gic.c
37
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
38
{
39
int group;
40
41
- if (irq >= s->num_irq) {
42
+ if (irq >= GIC_MAXIRQ || (!gic_is_vcpu(cpu) && irq >= s->num_irq)) {
43
/*
44
* This handles two cases:
45
* 1. If software writes the ID of a spurious interrupt [ie 1023]
46
* to the GICC_DIR, the GIC ignores that write.
47
* 2. If software writes the number of a non-existent interrupt
48
* this must be a subcase of "value written is not an active interrupt"
49
- * and so this is UNPREDICTABLE. We choose to ignore it.
50
+ * and so this is UNPREDICTABLE. We choose to ignore it. For vCPUs,
51
+ * all IRQs potentially exist, so this limit does not apply.
52
*/
53
return;
54
}
55
56
- group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
57
-
58
if (!gic_eoi_split(s, cpu, attrs)) {
59
/* This is UNPREDICTABLE; we choose to ignore it */
60
qemu_log_mask(LOG_GUEST_ERROR,
61
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
62
return;
63
}
64
65
+ if (gic_is_vcpu(cpu) && !gic_virq_is_valid(s, irq, cpu)) {
66
+ /* This vIRQ does not have an LR entry which is either active or
67
+ * pending and active. Increment EOICount and ignore the write.
68
+ */
69
+ int rcpu = gic_get_vcpu_real_id(cpu);
70
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
71
+ return;
72
+ }
73
+
74
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
75
+
76
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
77
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
78
return;
79
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
80
int group;
81
82
DPRINTF("EOI %d\n", irq);
83
+ if (gic_is_vcpu(cpu)) {
84
+ /* The call to gic_drop_prio() will clear a bit in GICH_APR iff the
85
+ * running prio is < 0x100.
86
+ */
87
+ bool prio_drop = s->running_priority[cpu] < 0x100;
88
+
89
+ if (irq >= GIC_MAXIRQ) {
90
+ /* Ignore spurious interrupt */
91
+ return;
92
+ }
93
+
94
+ gic_drop_prio(s, cpu, 0);
95
+
96
+ if (!gic_eoi_split(s, cpu, attrs)) {
97
+ bool valid = gic_virq_is_valid(s, irq, cpu);
98
+ if (prio_drop && !valid) {
99
+ /* We are in a situation where:
100
+ * - V_CTRL.EOIMode is false (no EOI split),
101
+ * - The call to gic_drop_prio() cleared a bit in GICH_APR,
102
+ * - This vIRQ does not have an LR entry which is either
103
+ * active or pending and active.
104
+ * In that case, we must increment EOICount.
105
+ */
106
+ int rcpu = gic_get_vcpu_real_id(cpu);
107
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
108
+ } else if (valid) {
109
+ gic_clear_active(s, irq, cpu);
110
+ }
111
+ }
112
+
113
+ return;
114
+ }
115
+
116
if (irq >= s->num_irq) {
117
/* This handles two cases:
118
* 1. If software writes the ID of a spurious interrupt [ie 1023]
119
--
120
2.18.0
121
122
diff view generated by jsdifflib
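The behaviour matrix in the commit message above is dense, so here is a
minimal, self-contained C sketch of the same rules for an invalid vIRQ.
All names (vgic_cpu, eoi_mode, apr_has_bits, eoi_count) are invented for
this illustration and are not the QEMU implementation:

#include <stdbool.h>

/* Illustrative state for one virtual CPU interface. */
struct vgic_cpu {
    bool eoi_mode;       /* V_CTRL.EOIMode */
    bool apr_has_bits;   /* some bit currently set in H_APR */
    unsigned eoi_count;  /* V_HCR.EOICount */
};

/* Invalid vIRQ written to V_EOIR: if H_APR has a bit set we get a
 * priority drop in both modes, but EOICount only increments when the
 * EOI split is disabled; if the APR is already clear, nothing happens.
 */
static void write_v_eoir_invalid(struct vgic_cpu *c)
{
    if (c->apr_has_bits) {
        /* the priority drop (clearing one H_APR bit) happens here */
        if (!c->eoi_mode) {
            c->eoi_count++;
        }
    }
}

/* Invalid vIRQ written to V_DIR: ignored unless the EOI split is on. */
static void write_v_dir_invalid(struct vgic_cpu *c)
{
    if (c->eoi_mode) {
        c->eoi_count++;
    }
}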
Deleted patch
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_cpu_read() and
gic_cpu_write() functions. Those are the last bits missing to fully
support virtualization extensions in the CPU interface path.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-14-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
     case 0xd0: case 0xd4: case 0xd8: case 0xdc:
     {
         int regno = (offset - 0xd0) / 4;
+        int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;

-        if (regno >= GIC_NR_APRS || s->revision != 2) {
+        if (regno >= nr_aprs || s->revision != 2) {
             *data = 0;
+        } else if (gic_is_vcpu(cpu)) {
+            *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
         } else if (gic_cpu_ns_access(s, cpu, attrs)) {
             /* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
             *data = gic_apr_ns_view(s, regno, cpu);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
         int regno = (offset - 0xe0) / 4;

         if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
-            gic_cpu_ns_access(s, cpu, attrs)) {
+            gic_cpu_ns_access(s, cpu, attrs) || gic_is_vcpu(cpu)) {
             *data = 0;
         } else {
             *data = s->nsapr[regno][cpu];
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
             s->abpr[cpu] = MAX(value & 0x7, GIC_MIN_ABPR);
         }
     } else {
-        s->bpr[cpu] = MAX(value & 0x7, GIC_MIN_BPR);
+        int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+        s->bpr[cpu] = MAX(value & 0x7, min_bpr);
     }
     break;
 case 0x10: /* End Of Interrupt */
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
     case 0xd0: case 0xd4: case 0xd8: case 0xdc:
     {
         int regno = (offset - 0xd0) / 4;
+        int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;

-        if (regno >= GIC_NR_APRS || s->revision != 2) {
+        if (regno >= nr_aprs || s->revision != 2) {
             return MEMTX_OK;
         }
-        if (gic_cpu_ns_access(s, cpu, attrs)) {
+        if (gic_is_vcpu(cpu)) {
+            s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
+        } else if (gic_cpu_ns_access(s, cpu, attrs)) {
             /* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
             gic_apr_write_ns_view(s, regno, cpu, value);
         } else {
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
         if (regno >= GIC_NR_APRS || s->revision != 2) {
             return MEMTX_OK;
         }
+        if (gic_is_vcpu(cpu)) {
+            return MEMTX_OK;
+        }
         if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
             return MEMTX_OK;
         }
--
2.18.0
Deleted patch
From: Luc Michel <luc.michel@greensocs.com>

Add the read/write functions to handle accesses to the vCPU interface.
Those accesses are forwarded to the real CPU interface, with the CPU id
being converted to the corresponding vCPU id (vCPU id = CPU id +
GIC_NCPU).

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180727095421.386-15-luc.michel@greensocs.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_do_cpu_write(void *opaque, hwaddr addr,
     return gic_cpu_write(s, id, addr, value, attrs);
 }

+static MemTxResult gic_thisvcpu_read(void *opaque, hwaddr addr, uint64_t *data,
+                                     unsigned size, MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_cpu_read(s, gic_get_current_vcpu(s), addr, data, attrs);
+}
+
+static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
+                                      uint64_t value, unsigned size,
+                                      MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
+}
+
 static const MemoryRegionOps gic_ops[2] = {
     {
         .read_with_attrs = gic_dist_read,
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
     .endianness = DEVICE_NATIVE_ENDIAN,
 };

+static const MemoryRegionOps gic_virt_ops[2] = {
+    {
+        .read_with_attrs = NULL,
+        .write_with_attrs = NULL,
+        .endianness = DEVICE_NATIVE_ENDIAN,
+    },
+    {
+        .read_with_attrs = gic_thisvcpu_read,
+        .write_with_attrs = gic_thisvcpu_write,
+        .endianness = DEVICE_NATIVE_ENDIAN,
+    }
+};
+
 static void arm_gic_realize(DeviceState *dev, Error **errp)
 {
     /* Device instance realize function for the GIC sysbus device */
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
         return;
     }

-    /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
-    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
+    /* This creates distributor, main CPU interface (s->cpuiomem[0]) and if
+     * enabled, virtualization extensions related interfaces (main virtual
+     * interface (s->vifaceiomem[0]) and virtual CPU interface).
+     */
+    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, gic_virt_ops);

     /* Extra core-specific regions for the CPU interfaces. This is
      * necessary for "franken-GIC" implementations, for example on
--
2.18.0
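The id convention described above (vCPU id = CPU id + GIC_NCPU) is worth
pinning down, since every per-interface state array is indexed with it.
A self-contained sketch, with helper names invented for the example and
GIC_NCPU assumed to be the model's fixed CPU-interface count:

#include <stdbool.h>

#define GIC_NCPU 8   /* assumed value for the example */

/* vCPU ids live directly above the real CPU ids, so a single id space
 * (and state arrays of size 2 * GIC_NCPU) covers both kinds of
 * interface. These helpers are stand-ins, not the QEMU functions. */
static inline int to_vcpu_id(int cpu)   { return cpu + GIC_NCPU; }
static inline bool id_is_vcpu(int id)   { return id >= GIC_NCPU; }
static inline int to_real_id(int id)    { return id_is_vcpu(id) ? id - GIC_NCPU : id; }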
Deleted patch
From: Luc Michel <luc.michel@greensocs.com>

Implement the read and write functions for the virtual interface of the
virtualization extensions in the GICv2.

One mirror region per CPU is also created, which maps to that specific
CPU id. This is required by the GIC architecture specification.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-16-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 235 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 233 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_update(GICState *s)
     }
 }

+/* Return true if this LR is empty, i.e. the corresponding bit
+ * in ELRSR is set.
+ */
+static inline bool gic_lr_entry_is_free(uint32_t entry)
+{
+    return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
+        && (GICH_LR_HW(entry) || !GICH_LR_EOI(entry));
+}
+
+/* Return true if this LR should trigger an EOI maintenance interrupt, i.e. the
+ * corresponding bit in EISR is set.
+ */
+static inline bool gic_lr_entry_is_eoi(uint32_t entry)
+{
+    return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
+        && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
+}
+
 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
     return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
 }

+static uint32_t gic_compute_eisr(GICState *s, int cpu, int lr_start)
+{
+    int lr_idx;
+    uint32_t ret = 0;
+
+    for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+        ret = deposit32(ret, lr_idx - lr_start, 1,
+                        gic_lr_entry_is_eoi(*entry));
+    }
+
+    return ret;
+}
+
+static uint32_t gic_compute_elrsr(GICState *s, int cpu, int lr_start)
+{
+    int lr_idx;
+    uint32_t ret = 0;
+
+    for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+        ret = deposit32(ret, lr_idx - lr_start, 1,
+                        gic_lr_entry_is_free(*entry));
+    }
+
+    return ret;
+}
+
+static void gic_vmcr_write(GICState *s, uint32_t value, MemTxAttrs attrs)
+{
+    int vcpu = gic_get_current_vcpu(s);
+    uint32_t ctlr;
+    uint32_t abpr;
+    uint32_t bpr;
+    uint32_t prio_mask;
+
+    ctlr = FIELD_EX32(value, GICH_VMCR, VMCCtlr);
+    abpr = FIELD_EX32(value, GICH_VMCR, VMABP);
+    bpr = FIELD_EX32(value, GICH_VMCR, VMBP);
+    prio_mask = FIELD_EX32(value, GICH_VMCR, VMPriMask) << 3;
+
+    gic_set_cpu_control(s, vcpu, ctlr, attrs);
+    s->abpr[vcpu] = MAX(abpr, GIC_VIRT_MIN_ABPR);
+    s->bpr[vcpu] = MAX(bpr, GIC_VIRT_MIN_BPR);
+    gic_set_priority_mask(s, vcpu, prio_mask, attrs);
+}
+
+static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
+                                uint64_t *data, MemTxAttrs attrs)
+{
+    GICState *s = ARM_GIC(opaque);
+    int vcpu = cpu + GIC_NCPU;
+
+    switch (addr) {
+    case A_GICH_HCR: /* Hypervisor Control */
+        *data = s->h_hcr[cpu];
+        break;
+
+    case A_GICH_VTR: /* VGIC Type */
+        *data = FIELD_DP32(0, GICH_VTR, ListRegs, s->num_lrs - 1);
+        *data = FIELD_DP32(*data, GICH_VTR, PREbits,
+                           GIC_VIRT_MAX_GROUP_PRIO_BITS - 1);
+        *data = FIELD_DP32(*data, GICH_VTR, PRIbits,
+                           (7 - GIC_VIRT_MIN_BPR) - 1);
+        break;
+
+    case A_GICH_VMCR: /* Virtual Machine Control */
+        *data = FIELD_DP32(0, GICH_VMCR, VMCCtlr,
+                           extract32(s->cpu_ctlr[vcpu], 0, 10));
+        *data = FIELD_DP32(*data, GICH_VMCR, VMABP, s->abpr[vcpu]);
+        *data = FIELD_DP32(*data, GICH_VMCR, VMBP, s->bpr[vcpu]);
+        *data = FIELD_DP32(*data, GICH_VMCR, VMPriMask,
+                           extract32(s->priority_mask[vcpu], 3, 5));
+        break;
+
+    case A_GICH_MISR: /* Maintenance Interrupt Status */
+        *data = s->h_misr[cpu];
+        break;
+
+    case A_GICH_EISR0: /* End of Interrupt Status 0 and 1 */
+    case A_GICH_EISR1:
+        *data = gic_compute_eisr(s, cpu, (addr - A_GICH_EISR0) * 8);
+        break;
+
+    case A_GICH_ELRSR0: /* Empty List Status 0 and 1 */
+    case A_GICH_ELRSR1:
+        *data = gic_compute_elrsr(s, cpu, (addr - A_GICH_ELRSR0) * 8);
+        break;
+
+    case A_GICH_APR: /* Active Priorities */
+        *data = s->h_apr[cpu];
+        break;
+
+    case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
+    {
+        int lr_idx = (addr - A_GICH_LR0) / 4;
+
+        if (lr_idx > s->num_lrs) {
+            *data = 0;
+        } else {
+            *data = s->h_lr[lr_idx][cpu];
+        }
+        break;
+    }
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "gic_hyp_read: Bad offset %" HWADDR_PRIx "\n", addr);
+        return MEMTX_OK;
+    }
+
+    return MEMTX_OK;
+}
+
+static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
+                                 uint64_t value, MemTxAttrs attrs)
+{
+    GICState *s = ARM_GIC(opaque);
+    int vcpu = cpu + GIC_NCPU;
+
+    switch (addr) {
+    case A_GICH_HCR: /* Hypervisor Control */
+        s->h_hcr[cpu] = value & GICH_HCR_MASK;
+        break;
+
+    case A_GICH_VMCR: /* Virtual Machine Control */
+        gic_vmcr_write(s, value, attrs);
+        break;
+
+    case A_GICH_APR: /* Active Priorities */
+        s->h_apr[cpu] = value;
+        s->running_priority[vcpu] = gic_get_prio_from_apr_bits(s, vcpu);
+        break;
+
+    case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
+    {
+        int lr_idx = (addr - A_GICH_LR0) / 4;
+
+        if (lr_idx > s->num_lrs) {
+            return MEMTX_OK;
+        }
+
+        s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
+        break;
+    }
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "gic_hyp_write: Bad offset %" HWADDR_PRIx "\n", addr);
+        return MEMTX_OK;
+    }
+
+    return MEMTX_OK;
+}
+
+static MemTxResult gic_thiscpu_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
+                                        unsigned size, MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_hyp_read(s, gic_get_current_cpu(s), addr, data, attrs);
+}
+
+static MemTxResult gic_thiscpu_hyp_write(void *opaque, hwaddr addr,
+                                         uint64_t value, unsigned size,
+                                         MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_hyp_write(s, gic_get_current_cpu(s), addr, value, attrs);
+}
+
+static MemTxResult gic_do_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
+                                   unsigned size, MemTxAttrs attrs)
+{
+    GICState **backref = (GICState **)opaque;
+    GICState *s = *backref;
+    int id = (backref - s->backref);
+
+    return gic_hyp_read(s, id, addr, data, attrs);
+}
+
+static MemTxResult gic_do_hyp_write(void *opaque, hwaddr addr,
+                                    uint64_t value, unsigned size,
+                                    MemTxAttrs attrs)
+{
+    GICState **backref = (GICState **)opaque;
+    GICState *s = *backref;
+    int id = (backref - s->backref);
+
+    return gic_hyp_write(s, id, addr, value, attrs);
+}
+
 static const MemoryRegionOps gic_ops[2] = {
     {
         .read_with_attrs = gic_dist_read,
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {

 static const MemoryRegionOps gic_virt_ops[2] = {
     {
-        .read_with_attrs = NULL,
-        .write_with_attrs = NULL,
+        .read_with_attrs = gic_thiscpu_hyp_read,
+        .write_with_attrs = gic_thiscpu_hyp_write,
         .endianness = DEVICE_NATIVE_ENDIAN,
     },
     {
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_virt_ops[2] = {
     }
 };

+static const MemoryRegionOps gic_viface_ops = {
+    .read_with_attrs = gic_do_hyp_read,
+    .write_with_attrs = gic_do_hyp_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+};
+
 static void arm_gic_realize(DeviceState *dev, Error **errp)
 {
     /* Device instance realize function for the GIC sysbus device */
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
                               &s->backref[i], "gic_cpu", 0x100);
         sysbus_init_mmio(sbd, &s->cpuiomem[i+1]);
     }
+
+    /* Extra core-specific regions for virtual interfaces. This is required by
+     * the GICv2 specification.
+     */
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_cpu; i++) {
+            memory_region_init_io(&s->vifaceiomem[i + 1], OBJECT(s),
+                                  &gic_viface_ops, &s->backref[i],
+                                  "gic_viface", 0x1000);
+            sysbus_init_mmio(sbd, &s->vifaceiomem[i + 1]);
+        }
+    }
+
 }

 static void arm_gic_class_init(ObjectClass *klass, void *data)
--
2.18.0
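The GICH_VTR read in gic_hyp_read() packs three derived values into
architecturally fixed fields. A standalone sketch of that packing; the
field positions follow the GICv2 architecture spec (PRIbits [31:29],
PREbits [28:26], ListRegs [5:0]), and the example values (4 LRs, 5 group
priority bits, minimum BPR of 2) are assumptions for illustration, not
necessarily QEMU's internal constants:

#include <stdint.h>
#include <stdio.h>

static uint32_t make_gich_vtr(unsigned num_lrs, unsigned group_prio_bits,
                              unsigned min_bpr)
{
    uint32_t vtr = 0;
    vtr |= (num_lrs - 1) & 0x3f;                 /* ListRegs: LR count - 1 */
    vtr |= ((group_prio_bits - 1) & 0x7) << 26;  /* PREbits: preempt bits - 1 */
    vtr |= (((7 - min_bpr) - 1) & 0x7) << 29;    /* PRIbits: prio bits - 1 */
    return vtr;
}

int main(void)
{
    printf("GICH_VTR = 0x%08x\n", make_gich_vtr(4, 5, 2));
    return 0;
}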
Deleted patch
From: Luc Michel <luc.michel@greensocs.com>

Implement the maintenance interrupt generation that is part of the GICv2
virtualization extensions.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-18-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 97 insertions(+)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
         && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
 }

+static inline void gic_extract_lr_info(GICState *s, int cpu,
+        int *num_eoi, int *num_valid, int *num_pending)
+{
+    int lr_idx;
+
+    *num_eoi = 0;
+    *num_valid = 0;
+    *num_pending = 0;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if (gic_lr_entry_is_eoi(*entry)) {
+            (*num_eoi)++;
+        }
+
+        if (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID) {
+            (*num_valid)++;
+        }
+
+        if (GICH_LR_STATE(*entry) == GICH_LR_STATE_PENDING) {
+            (*num_pending)++;
+        }
+    }
+}
+
+static void gic_compute_misr(GICState *s, int cpu)
+{
+    uint32_t value = 0;
+    int vcpu = cpu + GIC_NCPU;
+
+    int num_eoi, num_valid, num_pending;
+
+    gic_extract_lr_info(s, cpu, &num_eoi, &num_valid, &num_pending);
+
+    /* EOI */
+    if (num_eoi) {
+        value |= R_GICH_MISR_EOI_MASK;
+    }
+
+    /* U: true if only 0 or 1 LR entry is valid */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_UIE_MASK) && (num_valid < 2)) {
+        value |= R_GICH_MISR_U_MASK;
+    }
+
+    /* LRENP: EOICount is not 0 */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_LRENPIE_MASK) &&
+        ((s->h_hcr[cpu] & R_GICH_HCR_EOICount_MASK) != 0)) {
+        value |= R_GICH_MISR_LRENP_MASK;
+    }
+
+    /* NP: no pending interrupts */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_NPIE_MASK) && (num_pending == 0)) {
+        value |= R_GICH_MISR_NP_MASK;
+    }
+
+    /* VGrp0E: group0 virq signaling enabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0EIE_MASK) &&
+        (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
+        value |= R_GICH_MISR_VGrp0E_MASK;
+    }
+
+    /* VGrp0D: group0 virq signaling disabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0DIE_MASK) &&
+        !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
+        value |= R_GICH_MISR_VGrp0D_MASK;
+    }
+
+    /* VGrp1E: group1 virq signaling enabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1EIE_MASK) &&
+        (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
+        value |= R_GICH_MISR_VGrp1E_MASK;
+    }
+
+    /* VGrp1D: group1 virq signaling disabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1DIE_MASK) &&
+        !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
+        value |= R_GICH_MISR_VGrp1D_MASK;
+    }
+
+    s->h_misr[cpu] = value;
+}
+
+static void gic_update_maintenance(GICState *s)
+{
+    int cpu = 0;
+    int maint_level;
+
+    for (cpu = 0; cpu < s->num_cpu; cpu++) {
+        gic_compute_misr(s, cpu);
+        maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
+
+        qemu_set_irq(s->maintenance_irq[cpu], maint_level);
+    }
+}
+
 static void gic_update_virt(GICState *s)
 {
     gic_update_internal(s, true);
+    gic_update_maintenance(s);
 }

 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
--
2.18.0
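As a reading aid for the MISR logic above: every bit is an "interrupt
enable AND current condition" pair, and the maintenance line is level
triggered, following GICH_HCR.EN plus the OR of the enabled conditions.
A reduced sketch under those assumptions, with names local to this
example rather than QEMU's:

#include <stdbool.h>
#include <stdint.h>

struct lr_counts {
    int num_valid;    /* LRs whose state is not invalid */
    int num_pending;  /* LRs in the pending state */
};

/* U (underflow): UIE enabled and at most one LR holds a valid vIRQ. */
static bool misr_u(bool uie, const struct lr_counts *c)
{
    return uie && c->num_valid < 2;
}

/* NP (no pending): NPIE enabled and no LR is in the pending state. */
static bool misr_np(bool npie, const struct lr_counts *c)
{
    return npie && c->num_pending == 0;
}

/* Maintenance IRQ line: high while EN is set and any condition holds. */
static bool maint_level(bool hcr_en, uint32_t misr)
{
    return hcr_en && misr != 0;
}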
Deleted patch
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
---
 target/arm/cpu.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
         break;

     case EXCP_VFIQ:
-        if (secure || !(env->cp15.hcr_el2 & HCR_FMO)) {
+        if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
+            || (env->cp15.hcr_el2 & HCR_TGE)) {
             /* VFIQs are only taken when hypervized and non-secure. */
             return false;
         }
         return !(env->daif & PSTATE_F);
     case EXCP_VIRQ:
-        if (secure || !(env->cp15.hcr_el2 & HCR_IMO)) {
+        if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
+            || (env->cp15.hcr_el2 & HCR_TGE)) {
             /* VIRQs are only taken when hypervized and non-secure. */
             return false;
         }
--
2.18.0
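For reference, the effect of the cpu.h change can be restated as a
single predicate. A minimal sketch, not the arm_excp_unmasked() code
itself; the VFIQ case is identical with FMO and PSTATE.F in place of
IMO and PSTATE.I:

#include <stdbool.h>

/* A VIRQ can be taken only when non-secure, with HCR_EL2.IMO set and
 * HCR_EL2.TGE clear, and then only if PSTATE.I is not masking it. */
static bool virq_unmasked(bool secure, bool hcr_imo, bool hcr_tge,
                          bool pstate_i)
{
    if (secure || !hcr_imo || hcr_tge) {
        return false;      /* VIRQ cannot be taken at all */
    }
    return !pstate_i;      /* otherwise gated by PSTATE.I */
}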
Deleted patch
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.

(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
---
 target/arm/op_helper.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ static void raise_exception(CPUARMState *env, uint32_t excp,
 {
     CPUState *cs = CPU(arm_env_get_cpu(env));

+    if ((env->cp15.hcr_el2 & HCR_TGE) &&
+        target_el == 1 && !arm_is_secure(env)) {
+        /*
+         * Redirect NS EL1 exceptions to NS EL2. These are reported with
+         * their original syndrome register value, with the exception of
+         * SIMD/FP access traps, which are reported as uncategorized
+         * (see DDI0487C.a D1.10.4)
+         */
+        target_el = 2;
+        if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+            syndrome = syn_uncategorized();
+        }
+    }
+
     assert(!excp_is_internal(excp));
     cs->exception_index = excp;
     env->exception.syndrome = syndrome;
--
2.18.0
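The control flow added to raise_exception() reduces to one small rule.
A hedged sketch of that rule only, with the syndrome rewrite left as a
comment since it relies on QEMU-internal helpers (syn_uncategorized(),
EC_ADVSIMDFPACCESSTRAP); the function name here is invented:

#include <stdbool.h>

/* A synchronous exception headed for NS EL1 is rerouted to EL2 when
 * HCR_EL2.TGE is set; all other targets are left alone. */
static int tge_redirect_target_el(int target_el, bool hcr_tge, bool secure)
{
    if (hcr_tge && target_el == 1 && !secure) {
        /* at this point QEMU also downgrades AdvSIMD/FP access-trap
         * syndromes to the uncategorized encoding */
        return 2;
    }
    return target_el;
}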