First pullreq of the 3.1 release cycle, with lots of
Arm related patches accumulated during freeze. Most
notable here is Luc's GICv2 virtualization support and
my execute-from-MMIO patches.

I stopped looking at my to-review queue towards the
end of freeze, since 45 patches is already pushing what
I consider a reasonable sized pullreq; once this goes into
master I'll start working through it again.

thanks
-- PMM

The following changes since commit 38441756b70eec5807b5f60dad11a93a91199866:

  Update version for v3.0.0 release (2018-08-14 16:38:43 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180814

for you to fetch changes up to 054e7adf4e64e4acb3b033348ebf7cc871baa34f:

  target/arm: Fix typo in helper_sve_movz_d (2018-08-14 17:17:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement more of ARMv6-M support
 * Support direct execution from non-RAM regions;
   use this to implement execution from small (<1K) MPU regions
 * GICv2: implement the virtualization extensions
 * support a virtualization-capable GICv2 in the virt and
   xlnx-zynqmp boards
 * arm: Fix return code of arm_load_elf() so we can detect
   failure to load the file correctly
 * Implement HCR_EL2.TGE ("trap general exceptions") bit
 * Implement tailchaining for M profile cores
 * Fix bugs in SVE compare, saturating add/sub, WHILE, MOVZ

----------------------------------------------------------------
Adam Lackorzynski (1):
      arm: Fix return code of arm_load_elf

Julia Suvorova (4):
      target/arm: Forbid unprivileged mode for M Baseline
      nvic: Handle ARMv6-M SCS reserved registers
      arm: Add ARMv6-M programmer's model support
      nvic: Change NVIC to support ARMv6-M

Luc Michel (20):
      intc/arm_gic: Refactor operations on the distributor
      intc/arm_gic: Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers
      intc/arm_gic: Remove some dead code and put some functions static
      vmstate.h: Provide VMSTATE_UINT16_SUB_ARRAY
      intc/arm_gic: Add the virtualization extensions to the GIC state
      intc/arm_gic: Add virtual interface register definitions
      intc/arm_gic: Add virtualization extensions helper macros and functions
      intc/arm_gic: Refactor secure/ns access check in the CPU interface
      intc/arm_gic: Add virtualization enabled IRQ helper functions
      intc/arm_gic: Implement virtualization extensions in gic_(activate_irq|drop_prio)
      intc/arm_gic: Implement virtualization extensions in gic_acknowledge_irq
      intc/arm_gic: Implement virtualization extensions in gic_(deactivate|complete_irq)
      intc/arm_gic: Implement virtualization extensions in gic_cpu_(read|write)
      intc/arm_gic: Wire the vCPU interface
      intc/arm_gic: Implement the virtual interface registers
      intc/arm_gic: Implement gic_update_virt() function
      intc/arm_gic: Implement maintenance interrupt generation
      intc/arm_gic: Improve traces
      xlnx-zynqmp: Improve GIC wiring and MMIO mapping
      arm/virt: Add support for GICv2 virtualization extensions

Peter Maydell (16):
      accel/tcg: Pass read access type through to io_readx()
      accel/tcg: Handle get_page_addr_code() returning -1 in hashtable lookups
      accel/tcg: Handle get_page_addr_code() returning -1 in tb_check_watchpoint()
      accel/tcg: tb_gen_code(): Create single-insn TB for execution from non-RAM
      accel/tcg: Return -1 for execution from MMIO regions in get_page_addr_code()
      target/arm: Allow execution from small regions
      accel/tcg: Check whether TLB entry is RAM consistently with how we set it up
      target/arm: Mask virtual interrupts if HCR_EL2.TGE is set
      target/arm: Honour HCR_EL2.TGE and MDCR_EL2.TDE in debug register access checks
      target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions
      target/arm: Provide accessor functions for HCR_EL2.{IMO, FMO, AMO}
      target/arm: Treat SCTLR_EL1.M as if it were zero when HCR_EL2.TGE is set
      target/arm: Improve exception-taken logging
      target/arm: Initialize exc_secure correctly in do_v7m_exception_exit()
      target/arm: Restore M-profile CONTROL.SPSEL before any tailchaining
      target/arm: Implement tailchaining for M profile cores

Richard Henderson (4):
      target/arm: Fix sign of sve_cmpeq_ppzw/sve_cmpne_ppzw
      target/arm: Fix typo in do_sat_addsub_64
      target/arm: Reorganize SVE WHILE
      target/arm: Fix typo in helper_sve_movz_d

 accel/tcg/softmmu_template.h | 11 +-
 hw/intc/gic_internal.h | 282 +++++++++--
 include/exec/exec-all.h | 2 -
 include/hw/arm/virt.h | 4 +-
 include/hw/arm/xlnx-zynqmp.h | 4 +-
 include/hw/intc/arm_gic_common.h | 43 +-
 include/hw/intc/armv7m_nvic.h | 1 +
 include/migration/vmstate.h | 3 +
 include/qom/cpu.h | 6 +
 target/arm/cpu.h | 62 ++-
 accel/tcg/cpu-exec.c | 3 +
 accel/tcg/cputlb.c | 111 +----
 accel/tcg/translate-all.c | 23 +-
 exec.c | 6 -
 hw/arm/boot.c | 8 +-
 hw/arm/virt-acpi-build.c | 6 +-
 hw/arm/virt.c | 52 ++-
 hw/arm/xlnx-zynqmp.c | 92 +++-
 hw/intc/arm_gic.c | 987 +++++++++++++++++++++++++++++++--------
 hw/intc/arm_gic_common.c | 154 ++++--
 hw/intc/arm_gic_kvm.c | 31 +-
 hw/intc/arm_gicv3_cpuif.c | 19 +-
 hw/intc/armv7m_nvic.c | 82 +++-
 memory.c | 3 +-
 target/arm/cpu.c | 4 +
 target/arm/helper.c | 127 +++--
 target/arm/op_helper.c | 14 +
 target/arm/sve_helper.c | 19 +-
 target/arm/translate-sve.c | 51 +-
 hw/intc/trace-events | 12 +-
 30 files changed, 1724 insertions(+), 498 deletions(-)
From: Julia Suvorova <jusual@mail.ru>

MSR handling is the only place where CONTROL.nPRIV is modified.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         write_v7m_control_spsel_for_secstate(env,
                                              val & R_V7M_CONTROL_SPSEL_MASK,
                                              M_REG_NS);
-        env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
-        env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+            env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        }
         return;
     case 0x98: /* SP_NS */
     {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             !arm_v7m_is_handler_mode(env)) {
             write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
-        env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
-        env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+            env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        }
         break;
     default:
     bad_reg:
--
2.18.0
From: Julia Suvorova <jusual@mail.ru>

Handle SCS reserved registers listed in ARMv6-M ARM D3.6.1.
All reserved registers are RAZ/WI. ARM_FEATURE_M_MAIN is used for the
checks, because these registers are reserved in ARMv8-M Baseline too.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 51 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/armv7m_nvic.c
17
+++ b/hw/intc/armv7m_nvic.c
18
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
19
}
20
return val;
21
case 0xd10: /* System Control. */
22
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
23
+ goto bad_offset;
24
+ }
25
return cpu->env.v7m.scr[attrs.secure];
26
case 0xd14: /* Configuration Control. */
27
/* The BFHFNMIGN bit is the only non-banked bit; we
28
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
29
}
30
return val;
31
case 0xd2c: /* Hard Fault Status. */
32
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
33
+ goto bad_offset;
34
+ }
35
return cpu->env.v7m.hfsr;
36
case 0xd30: /* Debug Fault Status. */
37
return cpu->env.v7m.dfsr;
38
case 0xd34: /* MMFAR MemManage Fault Address */
39
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
40
+ goto bad_offset;
41
+ }
42
return cpu->env.v7m.mmfar[attrs.secure];
43
case 0xd38: /* Bus Fault Address. */
44
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
45
+ goto bad_offset;
46
+ }
47
return cpu->env.v7m.bfar;
48
case 0xd3c: /* Aux Fault Status. */
49
/* TODO: Implement fault status registers. */
50
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
51
}
52
break;
53
case 0xd10: /* System Control. */
54
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
55
+ goto bad_offset;
56
+ }
57
/* We don't implement deep-sleep so these bits are RAZ/WI.
58
* The other bits in the register are banked.
59
* QEMU's implementation ignores SEVONPEND and SLEEPONEXIT, which
60
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
61
nvic_irq_update(s);
62
break;
63
case 0xd2c: /* Hard Fault Status. */
64
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
65
+ goto bad_offset;
66
+ }
67
cpu->env.v7m.hfsr &= ~value; /* W1C */
68
break;
69
case 0xd30: /* Debug Fault Status. */
70
cpu->env.v7m.dfsr &= ~value; /* W1C */
71
break;
72
case 0xd34: /* Mem Manage Address. */
73
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
74
+ goto bad_offset;
75
+ }
76
cpu->env.v7m.mmfar[attrs.secure] = value;
77
return;
78
case 0xd38: /* Bus Fault Address. */
79
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
80
+ goto bad_offset;
81
+ }
82
cpu->env.v7m.bfar = value;
83
return;
84
case 0xd3c: /* Aux Fault Status. */
85
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
86
case 0xf00: /* Software Triggered Interrupt Register */
87
{
88
int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
89
+
90
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
91
+ goto bad_offset;
92
+ }
93
+
94
if (excnum < s->num_irq) {
95
armv7m_nvic_set_pending(s, excnum, false);
96
}
97
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
98
}
99
}
100
break;
101
- case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
102
+ case 0xd18: /* System Handler Priority (SHPR1) */
103
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
104
+ val = 0;
105
+ break;
106
+ }
107
+ /* fall through */
108
+ case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
109
val = 0;
110
for (i = 0; i < size; i++) {
111
unsigned hdlidx = (offset - 0xd14) + i;
112
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
113
}
114
break;
115
case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
116
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
117
+ val = 0;
118
+ break;
119
+ };
120
/* The BFSR bits [15:8] are shared between security states
121
* and we store them in the NS copy
122
*/
123
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
124
}
125
nvic_irq_update(s);
126
return MEMTX_OK;
127
- case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
128
+ case 0xd18: /* System Handler Priority (SHPR1) */
129
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
130
+ return MEMTX_OK;
131
+ }
132
+ /* fall through */
133
+ case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
134
for (i = 0; i < size; i++) {
135
unsigned hdlidx = (offset - 0xd14) + i;
136
int newprio = extract32(value, i * 8, 8);
137
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
138
nvic_irq_update(s);
139
return MEMTX_OK;
140
case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
141
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
142
+ return MEMTX_OK;
143
+ }
144
/* All bits are W1C, so construct 32 bit value with 0s in
145
* the parts not written by the access size
146
*/
147
--
148
2.18.0
149
150
diff view generated by jsdifflib
1
From: Julia Suvorova <jusual@mail.ru>

Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180718095628.26442-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 10 ++++++++++
 target/arm/cpu.c | 4 ++++
 target/arm/helper.c | 13 +++++++++++--
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/intc/armv7m_nvic.c
22
+++ b/hw/intc/armv7m_nvic.c
23
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
24
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
25
return val;
26
case 0xd24: /* System Handler Control and State (SHCSR) */
27
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
28
+ goto bad_offset;
29
+ }
30
val = 0;
31
if (attrs.secure) {
32
if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
33
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
34
cpu->env.v7m.scr[attrs.secure] = value;
35
break;
36
case 0xd14: /* Configuration Control. */
37
+ if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
38
+ goto bad_offset;
39
+ }
40
+
41
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
42
value &= (R_V7M_CCR_STKALIGN_MASK |
43
R_V7M_CCR_BFHFNMIGN_MASK |
44
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
45
cpu->env.v7m.ccr[attrs.secure] = value;
46
break;
47
case 0xd24: /* System Handler Control and State (SHCSR) */
48
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
49
+ goto bad_offset;
50
+ }
51
if (attrs.secure) {
52
s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
53
/* Secure HardFault active bit cannot be written */
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
54
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
15
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.c
56
--- a/target/arm/cpu.c
17
+++ b/target/arm/cpu.c
57
+++ b/target/arm/cpu.c
18
@@ -XXX,XX +XXX,XX @@ static void arm_any_initfn(Object *obj)
58
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
19
set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
59
env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_NONBASETHRDENA_MASK;
20
set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
60
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_NONBASETHRDENA_MASK;
21
set_feature(&cpu->env, ARM_FEATURE_CRC);
61
}
22
+ set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
62
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
23
cpu->midr = 0xffffffff;
63
+ env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_UNALIGN_TRP_MASK;
24
}
64
+ env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
25
#endif
65
+ }
26
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
66
67
/* Unlike A/R profile, M profile defines the reset LR value */
68
env->regs[14] = 0xffffffff;
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
27
index XXXXXXX..XXXXXXX 100644
70
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpu64.c
71
--- a/target/arm/helper.c
29
+++ b/target/arm/cpu64.c
72
+++ b/target/arm/helper.c
30
@@ -XXX,XX +XXX,XX @@ static void aarch64_any_initfn(Object *obj)
73
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
31
set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
74
env->v7m.primask[M_REG_NS] = val & 1;
32
set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
75
return;
33
set_feature(&cpu->env, ARM_FEATURE_CRC);
76
case 0x91: /* BASEPRI_NS */
34
+ set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
77
- if (!env->v7m.secure) {
35
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
78
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
36
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
79
return;
37
cpu->dcz_blocksize = 7; /* 512 bytes */
80
}
81
env->v7m.basepri[M_REG_NS] = val & 0xff;
82
return;
83
case 0x93: /* FAULTMASK_NS */
84
- if (!env->v7m.secure) {
85
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
86
return;
87
}
88
env->v7m.faultmask[M_REG_NS] = val & 1;
89
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
90
env->v7m.primask[env->v7m.secure] = val & 1;
91
break;
92
case 17: /* BASEPRI */
93
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
94
+ goto bad_reg;
95
+ }
96
env->v7m.basepri[env->v7m.secure] = val & 0xff;
97
break;
98
case 18: /* BASEPRI_MAX */
99
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
100
+ goto bad_reg;
101
+ }
102
val &= 0xff;
103
if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
104
|| env->v7m.basepri[env->v7m.secure] == 0)) {
105
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
106
}
107
break;
108
case 19: /* FAULTMASK */
109
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
110
+ goto bad_reg;
111
+ }
112
env->v7m.faultmask[env->v7m.secure] = val & 1;
113
break;
114
case 20: /* CONTROL */
--
2.18.0
From: Julia Suvorova <jusual@mail.ru>

The differences from ARMv7-M NVIC are:
  * ARMv6-M only supports up to 32 external interrupts
    (configurable feature already). The ICTR is reserved.
  * Active Bit Register is reserved.
  * ARMv6-M supports 4 priority levels against 256 in ARMv7-M.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h | 1 +
 hw/intc/armv7m_nvic.c | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
19
--- a/include/hw/intc/armv7m_nvic.h
16
+++ b/target/arm/translate.c
20
+++ b/include/hw/intc/armv7m_nvic.h
17
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
21
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
18
default_exception_el(s));
22
VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
19
break;
23
/* The PRIGROUP field in AIRCR is banked */
20
}
24
uint32_t prigroup[M_REG_NUM_BANKS];
21
- if (((insn >> 24) & 3) == 3) {
25
+ uint8_t num_prio_bits;
22
+ if ((insn & 0xfe000a00) == 0xfc000800
26
23
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
27
/* v8M NVIC_ITNS state (stored as a bool per bit) */
24
+ /* The Thumb2 and ARM encodings are identical. */
28
bool itns[NVIC_MAX_VECTORS];
25
+ if (disas_neon_insn_3same_ext(s, insn)) {
29
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
26
+ goto illegal_op;
30
index XXXXXXX..XXXXXXX 100644
31
--- a/hw/intc/armv7m_nvic.c
32
+++ b/hw/intc/armv7m_nvic.c
33
@@ -XXX,XX +XXX,XX @@ static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
34
assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
35
assert(irq < s->num_irq);
36
37
+ prio &= MAKE_64BIT_MASK(8 - s->num_prio_bits, s->num_prio_bits);
38
+
39
if (secure) {
40
assert(exc_is_banked(irq));
41
s->sec_vectors[irq].prio = prio;
42
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
43
44
switch (offset) {
45
case 4: /* Interrupt Control Type. */
46
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
47
+ goto bad_offset;
48
+ }
49
return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
50
case 0xc: /* CPPWR */
51
if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
52
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
53
"Setting VECTRESET when not in DEBUG mode "
54
"is UNPREDICTABLE\n");
55
}
56
- s->prigroup[attrs.secure] = extract32(value,
57
- R_V7M_AIRCR_PRIGROUP_SHIFT,
58
- R_V7M_AIRCR_PRIGROUP_LENGTH);
59
+ if (arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
60
+ s->prigroup[attrs.secure] =
61
+ extract32(value,
62
+ R_V7M_AIRCR_PRIGROUP_SHIFT,
63
+ R_V7M_AIRCR_PRIGROUP_LENGTH);
27
+ }
64
+ }
28
+ } else if ((insn & 0xff000a00) == 0xfe000800
65
if (attrs.secure) {
29
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
66
/* These bits are only writable by secure */
30
+ /* The Thumb2 and ARM encodings are identical. */
67
cpu->env.v7m.aircr = value &
31
+ if (disas_neon_insn_2reg_scalar_ext(s, insn)) {
68
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
32
+ goto illegal_op;
69
break;
33
+ }
70
case 0x300 ... 0x33f: /* NVIC Active */
34
+ } else if (((insn >> 24) & 3) == 3) {
71
val = 0;
35
/* Translate into the equivalent ARM encoding. */
72
+
36
insn = (insn & 0xe2ffffff) | ((insn & (1 << 28)) >> 4) | (1 << 28);
73
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_V7)) {
37
if (disas_neon_data_insn(s, insn)) {
74
+ break;
75
+ }
76
+
77
startvec = 8 * (offset - 0x300) + NVIC_FIRST_IRQ; /* vector # */
78
79
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
80
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
81
/* include space for internal exception vectors */
82
s->num_irq += NVIC_FIRST_IRQ;
83
84
+ s->num_prio_bits = arm_feature(&s->cpu->env, ARM_FEATURE_V7) ? 8 : 2;
85
+
86
object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
87
"realized", &err);
88
if (err != NULL) {
38
--
89
--
39
2.16.2
90
2.18.0
40
91
41
92
The io_readx() function needs to know whether the load it is
doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it
can pass the right value to the cpu_transaction_failed()
function. Plumb this information through from the softmmu
code.

This is currently not often going to give the wrong answer,
because usually instruction fetches go via get_page_addr_code().
However once we switch over to handling execution from non-RAM by
creating single-insn TBs, the path for an insn fetch to generate
a bus error will be through cpu_ld*_code() and io_readx(),
so without this change we will generate a d-side fault when we
should generate an i-side fault.

We also have to pass the access type via a CPU struct global
down to unassigned_mem_read(), for the benefit of the targets
which still use the cpu_unassigned_access() hook (m68k, mips,
sparc, xtensa).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
---
 accel/tcg/softmmu_template.h | 11 +++++++----
 include/qom/cpu.h | 6 ++++++
 accel/tcg/cputlb.c | 5 +++--
 memory.c | 3 ++-
 4 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.c
34
--- a/accel/tcg/softmmu_template.h
17
+++ b/target/arm/cpu.c
35
+++ b/accel/tcg/softmmu_template.h
18
@@ -XXX,XX +XXX,XX @@ static void arm_any_initfn(Object *obj)
36
@@ -XXX,XX +XXX,XX @@ static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
19
set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
37
size_t mmu_idx, size_t index,
20
set_feature(&cpu->env, ARM_FEATURE_CRC);
38
target_ulong addr,
21
set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
39
uintptr_t retaddr,
22
+ set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
40
- bool recheck)
23
cpu->midr = 0xffffffff;
41
+ bool recheck,
42
+ MMUAccessType access_type)
43
{
44
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
45
return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
46
- DATA_SIZE);
47
+ access_type, DATA_SIZE);
24
}
48
}
25
#endif
49
#endif
26
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
50
51
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
52
/* ??? Note that the io helpers always read data in the target
53
byte ordering. We should push the LE/BE request down into io. */
54
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
55
- tlb_addr & TLB_RECHECK);
56
+ tlb_addr & TLB_RECHECK,
57
+ READ_ACCESS_TYPE);
58
res = TGT_LE(res);
59
return res;
60
}
61
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
62
/* ??? Note that the io helpers always read data in the target
63
byte ordering. We should push the LE/BE request down into io. */
64
res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
65
- tlb_addr & TLB_RECHECK);
66
+ tlb_addr & TLB_RECHECK,
67
+ READ_ACCESS_TYPE);
68
res = TGT_BE(res);
69
return res;
70
}
71
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
27
index XXXXXXX..XXXXXXX 100644
72
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/cpu64.c
73
--- a/include/qom/cpu.h
29
+++ b/target/arm/cpu64.c
74
+++ b/include/qom/cpu.h
30
@@ -XXX,XX +XXX,XX @@ static void aarch64_any_initfn(Object *obj)
75
@@ -XXX,XX +XXX,XX @@ struct CPUState {
31
set_feature(&cpu->env, ARM_FEATURE_CRC);
76
*/
32
set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
77
uintptr_t mem_io_pc;
33
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
78
vaddr mem_io_vaddr;
34
+ set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
79
+ /*
35
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
80
+ * This is only needed for the legacy cpu_unassigned_access() hook;
36
cpu->dcz_blocksize = 7; /* 512 bytes */
81
+ * when all targets using it have been converted to use
82
+ * cpu_transaction_failed() instead it can be removed.
83
+ */
84
+ MMUAccessType mem_io_access_type;
85
86
int kvm_fd;
87
struct KVMState *kvm_state;
88
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/accel/tcg/cputlb.c
91
+++ b/accel/tcg/cputlb.c
92
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
93
static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
94
int mmu_idx,
95
target_ulong addr, uintptr_t retaddr,
96
- bool recheck, int size)
97
+ bool recheck, MMUAccessType access_type, int size)
98
{
99
CPUState *cpu = ENV_GET_CPU(env);
100
hwaddr mr_offset;
101
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
102
}
103
104
cpu->mem_io_vaddr = addr;
105
+ cpu->mem_io_access_type = access_type;
106
107
if (mr->global_locking && !qemu_mutex_iothread_locked()) {
108
qemu_mutex_lock_iothread();
109
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
110
section->offset_within_address_space -
111
section->offset_within_region;
112
113
- cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
114
+ cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
115
mmu_idx, iotlbentry->attrs, r, retaddr);
116
}
117
if (locked) {
118
diff --git a/memory.c b/memory.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/memory.c
121
+++ b/memory.c
122
@@ -XXX,XX +XXX,XX @@ static uint64_t unassigned_mem_read(void *opaque, hwaddr addr,
123
printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
124
#endif
125
if (current_cpu != NULL) {
126
- cpu_unassigned_access(current_cpu, addr, false, false, 0, size);
127
+ bool is_exec = current_cpu->mem_io_access_type == MMU_INST_FETCH;
128
+ cpu_unassigned_access(current_cpu, addr, false, is_exec, 0, size);
129
}
130
return 0;
37
}
131
}
38
--
132
--
39
2.16.2
133
2.18.0
40
134
41
135
When we support execution from non-RAM MMIO regions, get_page_addr_code()
will return -1 to indicate that there is no RAM at the requested address.
Handle this in the cpu-exec TB hashtable lookup code, treating it as
"no match found".

Note that the call to get_page_addr_code() in tb_lookup_cmp() needs
no changes -- a return of -1 will already correctly result in the
function returning false.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
---
 accel/tcg/cpu-exec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
18
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
19
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
19
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/misc/Makefile.objs
21
--- a/accel/tcg/cpu-exec.c
21
+++ b/hw/misc/Makefile.objs
22
+++ b/accel/tcg/cpu-exec.c
22
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_STM32F2XX_SYSCFG) += stm32f2xx_syscfg.o
23
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
23
obj-$(CONFIG_MIPS_CPS) += mips_cmgcr.o
24
desc.trace_vcpu_dstate = *cpu->trace_dstate;
24
obj-$(CONFIG_MIPS_CPS) += mips_cpc.o
25
desc.pc = pc;
25
obj-$(CONFIG_MIPS_ITU) += mips_itu.o
26
phys_pc = get_page_addr_code(desc.env, pc);
26
+obj-$(CONFIG_MPS2_FPGAIO) += mps2-fpgaio.o
27
+ if (phys_pc == -1) {
27
obj-$(CONFIG_MPS2_SCC) += mps2-scc.o
28
+ return NULL;
28
29
obj-$(CONFIG_PVPANIC) += pvpanic.o
30
diff --git a/include/hw/misc/mps2-fpgaio.h b/include/hw/misc/mps2-fpgaio.h
31
new file mode 100644
32
index XXXXXXX..XXXXXXX
33
--- /dev/null
34
+++ b/include/hw/misc/mps2-fpgaio.h
35
@@ -XXX,XX +XXX,XX @@
36
+/*
37
+ * ARM MPS2 FPGAIO emulation
38
+ *
39
+ * Copyright (c) 2018 Linaro Limited
40
+ * Written by Peter Maydell
41
+ *
42
+ * This program is free software; you can redistribute it and/or modify
43
+ * it under the terms of the GNU General Public License version 2 or
44
+ * (at your option) any later version.
45
+ */
46
+
47
+/* This is a model of the FPGAIO register block in the AN505
48
+ * FPGA image for the MPS2 dev board; it is documented in the
49
+ * application note:
50
+ * http://infocenter.arm.com/help/topic/com.arm.doc.dai0505b/index.html
51
+ *
52
+ * QEMU interface:
53
+ * + sysbus MMIO region 0: the register bank
54
+ */
55
+
56
+#ifndef MPS2_FPGAIO_H
57
+#define MPS2_FPGAIO_H
58
+
59
+#include "hw/sysbus.h"
60
+
61
+#define TYPE_MPS2_FPGAIO "mps2-fpgaio"
62
+#define MPS2_FPGAIO(obj) OBJECT_CHECK(MPS2FPGAIO, (obj), TYPE_MPS2_FPGAIO)
63
+
64
+typedef struct {
65
+ /*< private >*/
66
+ SysBusDevice parent_obj;
67
+
68
+ /*< public >*/
69
+ MemoryRegion iomem;
70
+
71
+ uint32_t led0;
72
+ uint32_t prescale;
73
+ uint32_t misc;
74
+
75
+ uint32_t prescale_clk;
76
+} MPS2FPGAIO;
77
+
78
+#endif
79
diff --git a/hw/misc/mps2-fpgaio.c b/hw/misc/mps2-fpgaio.c
80
new file mode 100644
81
index XXXXXXX..XXXXXXX
82
--- /dev/null
83
+++ b/hw/misc/mps2-fpgaio.c
84
@@ -XXX,XX +XXX,XX @@
85
+/*
86
+ * ARM MPS2 AN505 FPGAIO emulation
87
+ *
88
+ * Copyright (c) 2018 Linaro Limited
89
+ * Written by Peter Maydell
90
+ *
91
+ * This program is free software; you can redistribute it and/or modify
92
+ * it under the terms of the GNU General Public License version 2 or
93
+ * (at your option) any later version.
94
+ */
95
+
96
+/* This is a model of the "FPGA system control and I/O" block found
97
+ * in the AN505 FPGA image for the MPS2 devboard.
98
+ * It is documented in AN505:
99
+ * http://infocenter.arm.com/help/topic/com.arm.doc.dai0505b/index.html
100
+ */
101
+
102
+#include "qemu/osdep.h"
103
+#include "qemu/log.h"
104
+#include "qapi/error.h"
105
+#include "trace.h"
106
+#include "hw/sysbus.h"
107
+#include "hw/registerfields.h"
108
+#include "hw/misc/mps2-fpgaio.h"
109
+
110
+REG32(LED0, 0)
111
+REG32(BUTTON, 8)
112
+REG32(CLK1HZ, 0x10)
113
+REG32(CLK100HZ, 0x14)
114
+REG32(COUNTER, 0x18)
115
+REG32(PRESCALE, 0x1c)
116
+REG32(PSCNTR, 0x20)
117
+REG32(MISC, 0x4c)
118
+
119
+static uint64_t mps2_fpgaio_read(void *opaque, hwaddr offset, unsigned size)
120
+{
121
+ MPS2FPGAIO *s = MPS2_FPGAIO(opaque);
122
+ uint64_t r;
123
+
124
+ switch (offset) {
125
+ case A_LED0:
126
+ r = s->led0;
127
+ break;
128
+ case A_BUTTON:
129
+ /* User-pressable board buttons. We don't model that, so just return
130
+ * zeroes.
131
+ */
132
+ r = 0;
133
+ break;
134
+ case A_PRESCALE:
135
+ r = s->prescale;
136
+ break;
137
+ case A_MISC:
138
+ r = s->misc;
139
+ break;
140
+ case A_CLK1HZ:
141
+ case A_CLK100HZ:
142
+ case A_COUNTER:
143
+ case A_PSCNTR:
144
+ /* These are all upcounters of various frequencies. */
145
+ qemu_log_mask(LOG_UNIMP, "MPS2 FPGAIO: counters unimplemented\n");
146
+ r = 0;
147
+ break;
148
+ default:
149
+ qemu_log_mask(LOG_GUEST_ERROR,
150
+ "MPS2 FPGAIO read: bad offset %x\n", (int) offset);
151
+ r = 0;
152
+ break;
153
+ }
29
+ }
154
+
30
desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
155
+ trace_mps2_fpgaio_read(offset, r, size);
31
h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
156
+ return r;
32
return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
157
+}
158
+
159
+static void mps2_fpgaio_write(void *opaque, hwaddr offset, uint64_t value,
160
+ unsigned size)
161
+{
162
+ MPS2FPGAIO *s = MPS2_FPGAIO(opaque);
163
+
164
+ trace_mps2_fpgaio_write(offset, value, size);
165
+
166
+ switch (offset) {
167
+ case A_LED0:
168
+ /* LED bits [1:0] control board LEDs. We don't currently have
169
+ * a mechanism for displaying this graphically, so use a trace event.
170
+ */
171
+ trace_mps2_fpgaio_leds(value & 0x02 ? '*' : '.',
172
+ value & 0x01 ? '*' : '.');
173
+ s->led0 = value & 0x3;
174
+ break;
175
+ case A_PRESCALE:
176
+ s->prescale = value;
177
+ break;
178
+ case A_MISC:
179
+ /* These are control bits for some of the other devices on the
180
+ * board (SPI, CLCD, etc). We don't implement that yet, so just
181
+ * make the bits read as written.
182
+ */
183
+ qemu_log_mask(LOG_UNIMP,
184
+ "MPS2 FPGAIO: MISC control bits unimplemented\n");
185
+ s->misc = value;
186
+ break;
187
+ default:
188
+ qemu_log_mask(LOG_GUEST_ERROR,
189
+ "MPS2 FPGAIO write: bad offset 0x%x\n", (int) offset);
190
+ break;
191
+ }
192
+}
193
+
194
+static const MemoryRegionOps mps2_fpgaio_ops = {
195
+ .read = mps2_fpgaio_read,
196
+ .write = mps2_fpgaio_write,
197
+ .endianness = DEVICE_LITTLE_ENDIAN,
198
+};
199
+
200
+static void mps2_fpgaio_reset(DeviceState *dev)
201
+{
202
+ MPS2FPGAIO *s = MPS2_FPGAIO(dev);
203
+
204
+ trace_mps2_fpgaio_reset();
205
+ s->led0 = 0;
206
+ s->prescale = 0;
207
+ s->misc = 0;
208
+}
209
+
210
+static void mps2_fpgaio_init(Object *obj)
211
+{
212
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
213
+ MPS2FPGAIO *s = MPS2_FPGAIO(obj);
214
+
215
+ memory_region_init_io(&s->iomem, obj, &mps2_fpgaio_ops, s,
216
+ "mps2-fpgaio", 0x1000);
217
+ sysbus_init_mmio(sbd, &s->iomem);
218
+}
219
+
220
+static const VMStateDescription mps2_fpgaio_vmstate = {
221
+ .name = "mps2-fpgaio",
222
+ .version_id = 1,
223
+ .minimum_version_id = 1,
224
+ .fields = (VMStateField[]) {
225
+ VMSTATE_UINT32(led0, MPS2FPGAIO),
226
+ VMSTATE_UINT32(prescale, MPS2FPGAIO),
227
+ VMSTATE_UINT32(misc, MPS2FPGAIO),
228
+ VMSTATE_END_OF_LIST()
229
+ }
230
+};
231
+
232
+static Property mps2_fpgaio_properties[] = {
233
+ /* Frequency of the prescale counter */
234
+ DEFINE_PROP_UINT32("prescale-clk", MPS2FPGAIO, prescale_clk, 20000000),
235
+ DEFINE_PROP_END_OF_LIST(),
236
+};
237
+
238
+static void mps2_fpgaio_class_init(ObjectClass *klass, void *data)
239
+{
240
+ DeviceClass *dc = DEVICE_CLASS(klass);
241
+
242
+ dc->vmsd = &mps2_fpgaio_vmstate;
243
+ dc->reset = mps2_fpgaio_reset;
244
+ dc->props = mps2_fpgaio_properties;
245
+}
246
+
247
+static const TypeInfo mps2_fpgaio_info = {
248
+ .name = TYPE_MPS2_FPGAIO,
249
+ .parent = TYPE_SYS_BUS_DEVICE,
250
+ .instance_size = sizeof(MPS2FPGAIO),
251
+ .instance_init = mps2_fpgaio_init,
252
+ .class_init = mps2_fpgaio_class_init,
253
+};
254
+
255
+static void mps2_fpgaio_register_types(void)
256
+{
257
+ type_register_static(&mps2_fpgaio_info);
258
+}
259
+
260
+type_init(mps2_fpgaio_register_types);
261
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
262
index XXXXXXX..XXXXXXX 100644
263
--- a/default-configs/arm-softmmu.mak
264
+++ b/default-configs/arm-softmmu.mak
265
@@ -XXX,XX +XXX,XX @@ CONFIG_STM32F205_SOC=y
266
CONFIG_CMSDK_APB_TIMER=y
267
CONFIG_CMSDK_APB_UART=y
268
269
+CONFIG_MPS2_FPGAIO=y
270
CONFIG_MPS2_SCC=y
271
272
CONFIG_VERSATILE_PCI=y
273
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
274
index XXXXXXX..XXXXXXX 100644
275
--- a/hw/misc/trace-events
276
+++ b/hw/misc/trace-events
277
@@ -XXX,XX +XXX,XX @@ mps2_scc_leds(char led7, char led6, char led5, char led4, char led3, char led2,
278
mps2_scc_cfg_write(unsigned function, unsigned device, uint32_t value) "MPS2 SCC config write: function %d device %d data 0x%" PRIx32
279
mps2_scc_cfg_read(unsigned function, unsigned device, uint32_t value) "MPS2 SCC config read: function %d device %d data 0x%" PRIx32
280
281
+# hw/misc/mps2_fpgaio.c
282
+mps2_fpgaio_read(uint64_t offset, uint64_t data, unsigned size) "MPS2 FPGAIO read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
283
+mps2_fpgaio_write(uint64_t offset, uint64_t data, unsigned size) "MPS2 FPGAIO write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
284
+mps2_fpgaio_reset(void) "MPS2 FPGAIO: reset"
285
+mps2_fpgaio_leds(char led1, char led0) "MPS2 FPGAIO LEDs: %c%c"
286
+
287
# hw/misc/msf2-sysreg.c
288
msf2_sysreg_write(uint64_t offset, uint32_t val, uint32_t prev) "msf2-sysreg write: addr 0x%08" HWADDR_PRIx " data 0x%" PRIx32 " prev 0x%" PRIx32
289
msf2_sysreg_read(uint64_t offset, uint32_t val) "msf2-sysreg read: addr 0x%08" HWADDR_PRIx " data 0x%08" PRIx32
290
--
33
--
291
2.16.2
34
2.18.0
292
35
293
36
1
The function qdev_init_gpio_in_named() passes the DeviceState pointer
1
When we support execution from non-RAM MMIO regions, get_page_addr_code()
2
as the opaque data pointor for the irq handler function. Usually
2
will return -1 to indicate that there is no RAM at the requested address.
3
this is what you want, but in some cases it would be helpful to use
3
Handle this in tb_check_watchpoint() -- if the exception happened for a
4
some other data pointer.
4
PC which doesn't correspond to RAM then there is no need to invalidate
5
5
any TBs, because the one-instruction TB will not have been cached.
6
Add a new function qdev_init_gpio_in_named_with_opaque() which allows
7
the caller to specify the data pointer they want.
8
6
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20180220180325.29818-12-peter.maydell@linaro.org
9
Tested-by: Cédric Le Goater <clg@kaod.org>
10
Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
13
---
11
---
14
include/hw/qdev-core.h | 30 ++++++++++++++++++++++++++++--
12
accel/tcg/translate-all.c | 4 +++-
15
hw/core/qdev.c | 8 +++++---
13
1 file changed, 3 insertions(+), 1 deletion(-)
16
2 files changed, 33 insertions(+), 5 deletions(-)
17
14
18
diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
15
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/qdev-core.h
17
--- a/accel/tcg/translate-all.c
21
+++ b/include/hw/qdev-core.h
18
+++ b/accel/tcg/translate-all.c
22
@@ -XXX,XX +XXX,XX @@ BusState *qdev_get_child_bus(DeviceState *dev, const char *name);
19
@@ -XXX,XX +XXX,XX @@ void tb_check_watchpoint(CPUState *cpu)
23
/* GPIO inputs also double as IRQ sinks. */
20
24
void qdev_init_gpio_in(DeviceState *dev, qemu_irq_handler handler, int n);
21
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
25
void qdev_init_gpio_out(DeviceState *dev, qemu_irq *pins, int n);
22
addr = get_page_addr_code(env, pc);
26
-void qdev_init_gpio_in_named(DeviceState *dev, qemu_irq_handler handler,
23
- tb_invalidate_phys_range(addr, addr + 1);
27
- const char *name, int n);
24
+ if (addr != -1) {
28
void qdev_init_gpio_out_named(DeviceState *dev, qemu_irq *pins,
25
+ tb_invalidate_phys_range(addr, addr + 1);
29
const char *name, int n);
26
+ }
30
+/**
27
}
31
+ * qdev_init_gpio_in_named_with_opaque: create an array of input GPIO lines
32
+ * for the specified device
33
+ *
34
+ * @dev: Device to create input GPIOs for
35
+ * @handler: Function to call when GPIO line value is set
36
+ * @opaque: Opaque data pointer to pass to @handler
37
+ * @name: Name of the GPIO input (must be unique for this device)
38
+ * @n: Number of GPIO lines in this input set
39
+ */
40
+void qdev_init_gpio_in_named_with_opaque(DeviceState *dev,
41
+ qemu_irq_handler handler,
42
+ void *opaque,
43
+ const char *name, int n);
44
+
45
+/**
46
+ * qdev_init_gpio_in_named: create an array of input GPIO lines
47
+ * for the specified device
48
+ *
49
+ * Like qdev_init_gpio_in_named_with_opaque(), but the opaque pointer
50
+ * passed to the handler is @dev (which is the most commonly desired behaviour).
51
+ */
52
+static inline void qdev_init_gpio_in_named(DeviceState *dev,
53
+ qemu_irq_handler handler,
54
+ const char *name, int n)
55
+{
56
+ qdev_init_gpio_in_named_with_opaque(dev, handler, dev, name, n);
57
+}
58
59
void qdev_pass_gpios(DeviceState *dev, DeviceState *container,
60
const char *name);
61
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/hw/core/qdev.c
64
+++ b/hw/core/qdev.c
65
@@ -XXX,XX +XXX,XX @@ static NamedGPIOList *qdev_get_named_gpio_list(DeviceState *dev,
66
return ngl;
67
}
28
}
68
29
69
-void qdev_init_gpio_in_named(DeviceState *dev, qemu_irq_handler handler,
70
- const char *name, int n)
71
+void qdev_init_gpio_in_named_with_opaque(DeviceState *dev,
72
+ qemu_irq_handler handler,
73
+ void *opaque,
74
+ const char *name, int n)
75
{
76
int i;
77
NamedGPIOList *gpio_list = qdev_get_named_gpio_list(dev, name);
78
79
assert(gpio_list->num_out == 0 || !name);
80
gpio_list->in = qemu_extend_irqs(gpio_list->in, gpio_list->num_in, handler,
81
- dev, n);
82
+ opaque, n);
83
84
if (!name) {
85
name = "unnamed-gpio-in";
86
--
30
--
87
2.16.2
31
2.18.0
88
32
89
33
diff view generated by jsdifflib
1
Define a new board model for the MPS2 with an AN505 FPGA image
1
If get_page_addr_code() returns -1, this indicates that there is no RAM
2
containing a Cortex-M33. Since the FPGA images for TrustZone
2
page we can read a full TB from. Instead we must create a TB which
3
cores (AN505, and the similar AN519 for Cortex-M23) have a
3
contains a single instruction and which we do not cache, so it is
4
significantly different layout of devices to the non-TrustZone
4
executed only once.
5
images, we use a new source file rather than shoehorning them
5
6
into the existing mps2.c.
6
Since this means we can now have TBs which are not in any page list,
7
we also need to make tb_phys_invalidate() handle them (by not trying
8
to remove them from a nonexistent page list).
7
9
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180220180325.29818-20-peter.maydell@linaro.org
12
Reviewed-by: Emilio G. Cota <cota@braap.org>
13
Tested-by: Cédric Le Goater <clg@kaod.org>
14
Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
11
---
15
---
12
hw/arm/Makefile.objs | 1 +
16
accel/tcg/translate-all.c | 19 ++++++++++++++++++-
13
hw/arm/mps2-tz.c | 503 +++++++++++++++++++++++++++++++++++++++++++++++++++
17
1 file changed, 18 insertions(+), 1 deletion(-)
14
2 files changed, 504 insertions(+)
15
create mode 100644 hw/arm/mps2-tz.c
16
18
17
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
19
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
18
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/Makefile.objs
21
--- a/accel/tcg/translate-all.c
20
+++ b/hw/arm/Makefile.objs
22
+++ b/accel/tcg/translate-all.c
21
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX31) += fsl-imx31.o kzm.o
23
@@ -XXX,XX +XXX,XX @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
22
obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
24
*/
23
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
25
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
24
obj-$(CONFIG_MPS2) += mps2.o
26
{
25
+obj-$(CONFIG_MPS2) += mps2-tz.o
27
- if (page_addr == -1) {
26
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
28
+ if (page_addr == -1 && tb->page_addr[0] != -1) {
27
obj-$(CONFIG_IOTKIT) += iotkit.o
29
page_lock_tb(tb);
28
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
30
do_tb_phys_invalidate(tb, true);
29
new file mode 100644
31
page_unlock_tb(tb);
30
index XXXXXXX..XXXXXXX
32
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
31
--- /dev/null
33
32
+++ b/hw/arm/mps2-tz.c
34
assert_memory_lock();
33
@@ -XXX,XX +XXX,XX @@
35
34
+/*
36
+ if (phys_pc == -1) {
35
+ * ARM V2M MPS2 board emulation, trustzone aware FPGA images
37
+ /*
36
+ *
38
+ * If the TB is not associated with a physical RAM page then
37
+ * Copyright (c) 2017 Linaro Limited
39
+ * it must be a temporary one-insn TB, and we have nothing to do
38
+ * Written by Peter Maydell
40
+ * except fill in the page_addr[] fields.
39
+ *
41
+ */
40
+ * This program is free software; you can redistribute it and/or modify
42
+ assert(tb->cflags & CF_NOCACHE);
41
+ * it under the terms of the GNU General Public License version 2 or
43
+ tb->page_addr[0] = tb->page_addr[1] = -1;
42
+ * (at your option) any later version.
44
+ return tb;
43
+ */
44
+
45
+/* The MPS2 and MPS2+ dev boards are FPGA based (the 2+ has a bigger
46
+ * FPGA but is otherwise the same as the 2). Since the CPU itself
47
+ * and most of the devices are in the FPGA, the details of the board
48
+ * as seen by the guest depend significantly on the FPGA image.
49
+ * This source file covers the following FPGA images, for TrustZone cores:
50
+ * "mps2-an505" -- Cortex-M33 as documented in ARM Application Note AN505
51
+ *
52
+ * Links to the TRM for the board itself and to the various Application
53
+ * Notes which document the FPGA images can be found here:
54
+ * https://developer.arm.com/products/system-design/development-boards/fpga-prototyping-boards/mps2
55
+ *
56
+ * Board TRM:
57
+ * http://infocenter.arm.com/help/topic/com.arm.doc.100112_0200_06_en/versatile_express_cortex_m_prototyping_systems_v2m_mps2_and_v2m_mps2plus_technical_reference_100112_0200_06_en.pdf
58
+ * Application Note AN505:
59
+ * http://infocenter.arm.com/help/topic/com.arm.doc.dai0505b/index.html
60
+ *
61
+ * The AN505 defers to the Cortex-M33 processor ARMv8M IoT Kit FVP User Guide
62
+ * (ARM ECM0601256) for the details of some of the device layout:
63
+ * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
64
+ */
65
+
66
+#include "qemu/osdep.h"
67
+#include "qapi/error.h"
68
+#include "qemu/error-report.h"
69
+#include "hw/arm/arm.h"
70
+#include "hw/arm/armv7m.h"
71
+#include "hw/or-irq.h"
72
+#include "hw/boards.h"
73
+#include "exec/address-spaces.h"
74
+#include "sysemu/sysemu.h"
75
+#include "hw/misc/unimp.h"
76
+#include "hw/char/cmsdk-apb-uart.h"
77
+#include "hw/timer/cmsdk-apb-timer.h"
78
+#include "hw/misc/mps2-scc.h"
79
+#include "hw/misc/mps2-fpgaio.h"
80
+#include "hw/arm/iotkit.h"
81
+#include "hw/devices.h"
82
+#include "net/net.h"
83
+#include "hw/core/split-irq.h"
84
+
85
+typedef enum MPS2TZFPGAType {
86
+ FPGA_AN505,
87
+} MPS2TZFPGAType;
88
+
89
+typedef struct {
90
+ MachineClass parent;
91
+ MPS2TZFPGAType fpga_type;
92
+ uint32_t scc_id;
93
+} MPS2TZMachineClass;
94
+
95
+typedef struct {
96
+ MachineState parent;
97
+
98
+ IoTKit iotkit;
99
+ MemoryRegion psram;
100
+ MemoryRegion ssram1;
101
+ MemoryRegion ssram1_m;
102
+ MemoryRegion ssram23;
103
+ MPS2SCC scc;
104
+ MPS2FPGAIO fpgaio;
105
+ TZPPC ppc[5];
106
+ UnimplementedDeviceState ssram_mpc[3];
107
+ UnimplementedDeviceState spi[5];
108
+ UnimplementedDeviceState i2c[4];
109
+ UnimplementedDeviceState i2s_audio;
110
+ UnimplementedDeviceState gpio[5];
111
+ UnimplementedDeviceState dma[4];
112
+ UnimplementedDeviceState gfx;
113
+ CMSDKAPBUART uart[5];
114
+ SplitIRQ sec_resp_splitter;
115
+ qemu_or_irq uart_irq_orgate;
116
+} MPS2TZMachineState;
117
+
118
+#define TYPE_MPS2TZ_MACHINE "mps2tz"
119
+#define TYPE_MPS2TZ_AN505_MACHINE MACHINE_TYPE_NAME("mps2-an505")
120
+
121
+#define MPS2TZ_MACHINE(obj) \
122
+ OBJECT_CHECK(MPS2TZMachineState, obj, TYPE_MPS2TZ_MACHINE)
123
+#define MPS2TZ_MACHINE_GET_CLASS(obj) \
124
+ OBJECT_GET_CLASS(MPS2TZMachineClass, obj, TYPE_MPS2TZ_MACHINE)
125
+#define MPS2TZ_MACHINE_CLASS(klass) \
126
+ OBJECT_CLASS_CHECK(MPS2TZMachineClass, klass, TYPE_MPS2TZ_MACHINE)
127
+
128
+/* Main SYSCLK frequency in Hz */
129
+#define SYSCLK_FRQ 20000000
130
+
131
+/* Initialize the auxiliary RAM region @mr and map it into
132
+ * the memory map at @base.
133
+ */
134
+static void make_ram(MemoryRegion *mr, const char *name,
135
+ hwaddr base, hwaddr size)
136
+{
137
+ memory_region_init_ram(mr, NULL, name, size, &error_fatal);
138
+ memory_region_add_subregion(get_system_memory(), base, mr);
139
+}
140
+
141
+/* Create an alias of an entire original MemoryRegion @orig
142
+ * located at @base in the memory map.
143
+ */
144
+static void make_ram_alias(MemoryRegion *mr, const char *name,
145
+ MemoryRegion *orig, hwaddr base)
146
+{
147
+ memory_region_init_alias(mr, NULL, name, orig, 0,
148
+ memory_region_size(orig));
149
+ memory_region_add_subregion(get_system_memory(), base, mr);
150
+}
151
+
152
+static void init_sysbus_child(Object *parent, const char *childname,
153
+ void *child, size_t childsize,
154
+ const char *childtype)
155
+{
156
+ object_initialize(child, childsize, childtype);
157
+ object_property_add_child(parent, childname, OBJECT(child), &error_abort);
158
+ qdev_set_parent_bus(DEVICE(child), sysbus_get_default());
159
+
160
+}
161
+
162
+/* Most of the devices in the AN505 FPGA image sit behind
163
+ * Peripheral Protection Controllers. These data structures
164
+ * define the layout of which devices sit behind which PPCs.
165
+ * The devfn for each port is a function which creates, configures
166
+ * and initializes the device, returning the MemoryRegion which
167
+ * needs to be plugged into the downstream end of the PPC port.
168
+ */
169
+typedef MemoryRegion *MakeDevFn(MPS2TZMachineState *mms, void *opaque,
170
+ const char *name, hwaddr size);
171
+
172
+typedef struct PPCPortInfo {
173
+ const char *name;
174
+ MakeDevFn *devfn;
175
+ void *opaque;
176
+ hwaddr addr;
177
+ hwaddr size;
178
+} PPCPortInfo;
179
+
180
+typedef struct PPCInfo {
181
+ const char *name;
182
+ PPCPortInfo ports[TZ_NUM_PORTS];
183
+} PPCInfo;
184
+
185
+static MemoryRegion *make_unimp_dev(MPS2TZMachineState *mms,
186
+ void *opaque,
187
+ const char *name, hwaddr size)
188
+{
189
+ /* Initialize, configure and realize a TYPE_UNIMPLEMENTED_DEVICE,
190
+ * and return a pointer to its MemoryRegion.
191
+ */
192
+ UnimplementedDeviceState *uds = opaque;
193
+
194
+ init_sysbus_child(OBJECT(mms), name, uds,
195
+ sizeof(UnimplementedDeviceState),
196
+ TYPE_UNIMPLEMENTED_DEVICE);
197
+ qdev_prop_set_string(DEVICE(uds), "name", name);
198
+ qdev_prop_set_uint64(DEVICE(uds), "size", size);
199
+ object_property_set_bool(OBJECT(uds), true, "realized", &error_fatal);
200
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(uds), 0);
201
+}
202
+
203
+static MemoryRegion *make_uart(MPS2TZMachineState *mms, void *opaque,
204
+ const char *name, hwaddr size)
205
+{
206
+ CMSDKAPBUART *uart = opaque;
207
+ int i = uart - &mms->uart[0];
208
+ Chardev *uartchr = i < MAX_SERIAL_PORTS ? serial_hds[i] : NULL;
209
+ int rxirqno = i * 2;
210
+ int txirqno = i * 2 + 1;
211
+ int combirqno = i + 10;
212
+ SysBusDevice *s;
213
+ DeviceState *iotkitdev = DEVICE(&mms->iotkit);
214
+ DeviceState *orgate_dev = DEVICE(&mms->uart_irq_orgate);
215
+
216
+ init_sysbus_child(OBJECT(mms), name, uart,
217
+ sizeof(mms->uart[0]), TYPE_CMSDK_APB_UART);
218
+ qdev_prop_set_chr(DEVICE(uart), "chardev", uartchr);
219
+ qdev_prop_set_uint32(DEVICE(uart), "pclk-frq", SYSCLK_FRQ);
220
+ object_property_set_bool(OBJECT(uart), true, "realized", &error_fatal);
221
+ s = SYS_BUS_DEVICE(uart);
222
+ sysbus_connect_irq(s, 0, qdev_get_gpio_in_named(iotkitdev,
223
+ "EXP_IRQ", txirqno));
224
+ sysbus_connect_irq(s, 1, qdev_get_gpio_in_named(iotkitdev,
225
+ "EXP_IRQ", rxirqno));
226
+ sysbus_connect_irq(s, 2, qdev_get_gpio_in(orgate_dev, i * 2));
227
+ sysbus_connect_irq(s, 3, qdev_get_gpio_in(orgate_dev, i * 2 + 1));
228
+ sysbus_connect_irq(s, 4, qdev_get_gpio_in_named(iotkitdev,
229
+ "EXP_IRQ", combirqno));
230
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(uart), 0);
231
+}
232
+
233
+static MemoryRegion *make_scc(MPS2TZMachineState *mms, void *opaque,
234
+ const char *name, hwaddr size)
235
+{
236
+ MPS2SCC *scc = opaque;
237
+ DeviceState *sccdev;
238
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
239
+
240
+ object_initialize(scc, sizeof(mms->scc), TYPE_MPS2_SCC);
241
+ sccdev = DEVICE(scc);
242
+ qdev_set_parent_bus(sccdev, sysbus_get_default());
243
+ qdev_prop_set_uint32(sccdev, "scc-cfg4", 0x2);
244
+ qdev_prop_set_uint32(sccdev, "scc-aid", 0x02000008);
245
+ qdev_prop_set_uint32(sccdev, "scc-id", mmc->scc_id);
246
+ object_property_set_bool(OBJECT(scc), true, "realized", &error_fatal);
247
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(sccdev), 0);
248
+}
249
+
250
+static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
251
+ const char *name, hwaddr size)
252
+{
253
+ MPS2FPGAIO *fpgaio = opaque;
254
+
255
+ object_initialize(fpgaio, sizeof(mms->fpgaio), TYPE_MPS2_FPGAIO);
256
+ qdev_set_parent_bus(DEVICE(fpgaio), sysbus_get_default());
257
+ object_property_set_bool(OBJECT(fpgaio), true, "realized", &error_fatal);
258
+ return sysbus_mmio_get_region(SYS_BUS_DEVICE(fpgaio), 0);
259
+}
260
+
261
+static void mps2tz_common_init(MachineState *machine)
262
+{
263
+ MPS2TZMachineState *mms = MPS2TZ_MACHINE(machine);
264
+ MachineClass *mc = MACHINE_GET_CLASS(machine);
265
+ MemoryRegion *system_memory = get_system_memory();
266
+ DeviceState *iotkitdev;
267
+ DeviceState *dev_splitter;
268
+ int i;
269
+
270
+ if (strcmp(machine->cpu_type, mc->default_cpu_type) != 0) {
271
+ error_report("This board can only be used with CPU %s",
272
+ mc->default_cpu_type);
273
+ exit(1);
274
+ }
45
+ }
275
+
46
+
276
+ init_sysbus_child(OBJECT(machine), "iotkit", &mms->iotkit,
47
/*
277
+ sizeof(mms->iotkit), TYPE_IOTKIT);
48
* Add the TB to the page list, acquiring first the pages's locks.
278
+ iotkitdev = DEVICE(&mms->iotkit);
49
* We keep the locks held until after inserting the TB in the hash table,
279
+ object_property_set_link(OBJECT(&mms->iotkit), OBJECT(system_memory),
50
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
280
+ "memory", &error_abort);
51
281
+ qdev_prop_set_uint32(iotkitdev, "EXP_NUMIRQ", 92);
52
phys_pc = get_page_addr_code(env, pc);
282
+ qdev_prop_set_uint32(iotkitdev, "MAINCLK", SYSCLK_FRQ);
53
283
+ object_property_set_bool(OBJECT(&mms->iotkit), true, "realized",
54
+ if (phys_pc == -1) {
284
+ &error_fatal);
55
+ /* Generate a temporary TB with 1 insn in it */
285
+
56
+ cflags &= ~CF_COUNT_MASK;
286
+ /* The sec_resp_cfg output from the IoTKit must be split into multiple
57
+ cflags |= CF_NOCACHE | 1;
287
+ * lines, one for each of the PPCs we create here.
288
+ */
289
+ object_initialize(&mms->sec_resp_splitter, sizeof(mms->sec_resp_splitter),
290
+ TYPE_SPLIT_IRQ);
291
+ object_property_add_child(OBJECT(machine), "sec-resp-splitter",
292
+ OBJECT(&mms->sec_resp_splitter), &error_abort);
293
+ object_property_set_int(OBJECT(&mms->sec_resp_splitter), 5,
294
+ "num-lines", &error_fatal);
295
+ object_property_set_bool(OBJECT(&mms->sec_resp_splitter), true,
296
+ "realized", &error_fatal);
297
+ dev_splitter = DEVICE(&mms->sec_resp_splitter);
298
+ qdev_connect_gpio_out_named(iotkitdev, "sec_resp_cfg", 0,
299
+ qdev_get_gpio_in(dev_splitter, 0));
300
+
301
+ /* The IoTKit sets up much of the memory layout, including
302
+ * the aliases between secure and non-secure regions in the
303
+ * address space. The FPGA itself contains:
304
+ *
305
+ * 0x00000000..0x003fffff SSRAM1
306
+ * 0x00400000..0x007fffff alias of SSRAM1
307
+ * 0x28000000..0x283fffff 4MB SSRAM2 + SSRAM3
308
+ * 0x40100000..0x4fffffff AHB Master Expansion 1 interface devices
309
+ * 0x80000000..0x80ffffff 16MB PSRAM
310
+ */
311
+
312
+ /* The FPGA images have an odd combination of different RAMs,
313
+ * because in hardware they are different implementations and
314
+ * connected to different buses, giving varying performance/size
315
+ * tradeoffs. For QEMU they're all just RAM, though. We arbitrarily
316
+ * call the 16MB our "system memory", as it's the largest lump.
317
+ */
318
+ memory_region_allocate_system_memory(&mms->psram,
319
+ NULL, "mps.ram", 0x01000000);
320
+ memory_region_add_subregion(system_memory, 0x80000000, &mms->psram);
321
+
322
+ /* The SSRAM memories should all be behind Memory Protection Controllers,
323
+ * but we don't implement that yet.
324
+ */
325
+ make_ram(&mms->ssram1, "mps.ssram1", 0x00000000, 0x00400000);
326
+ make_ram_alias(&mms->ssram1_m, "mps.ssram1_m", &mms->ssram1, 0x00400000);
327
+
328
+ make_ram(&mms->ssram23, "mps.ssram23", 0x28000000, 0x00400000);
329
+
330
+ /* The overflow IRQs for all UARTs are ORed together.
331
+ * Tx, Rx and "combined" IRQs are sent to the NVIC separately.
332
+ * Create the OR gate for this.
333
+ */
334
+ object_initialize(&mms->uart_irq_orgate, sizeof(mms->uart_irq_orgate),
335
+ TYPE_OR_IRQ);
336
+ object_property_add_child(OBJECT(mms), "uart-irq-orgate",
337
+ OBJECT(&mms->uart_irq_orgate), &error_abort);
338
+ object_property_set_int(OBJECT(&mms->uart_irq_orgate), 10, "num-lines",
339
+ &error_fatal);
340
+ object_property_set_bool(OBJECT(&mms->uart_irq_orgate), true,
341
+ "realized", &error_fatal);
342
+ qdev_connect_gpio_out(DEVICE(&mms->uart_irq_orgate), 0,
343
+ qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 15));
344
+
345
+ /* Most of the devices in the FPGA are behind Peripheral Protection
346
+ * Controllers. The required order for initializing things is:
347
+ * + initialize the PPC
348
+ * + initialize, configure and realize downstream devices
349
+ * + connect downstream device MemoryRegions to the PPC
350
+ * + realize the PPC
351
+ * + map the PPC's MemoryRegions to the places in the address map
352
+ * where the downstream devices should appear
353
+ * + wire up the PPC's control lines to the IoTKit object
354
+ */
355
+
356
+ const PPCInfo ppcs[] = { {
357
+ .name = "apb_ppcexp0",
358
+ .ports = {
359
+ { "ssram-mpc0", make_unimp_dev, &mms->ssram_mpc[0],
360
+ 0x58007000, 0x1000 },
361
+ { "ssram-mpc1", make_unimp_dev, &mms->ssram_mpc[1],
362
+ 0x58008000, 0x1000 },
363
+ { "ssram-mpc2", make_unimp_dev, &mms->ssram_mpc[2],
364
+ 0x58009000, 0x1000 },
365
+ },
366
+ }, {
367
+ .name = "apb_ppcexp1",
368
+ .ports = {
369
+ { "spi0", make_unimp_dev, &mms->spi[0], 0x40205000, 0x1000 },
370
+ { "spi1", make_unimp_dev, &mms->spi[1], 0x40206000, 0x1000 },
371
+ { "spi2", make_unimp_dev, &mms->spi[2], 0x40209000, 0x1000 },
372
+ { "spi3", make_unimp_dev, &mms->spi[3], 0x4020a000, 0x1000 },
373
+ { "spi4", make_unimp_dev, &mms->spi[4], 0x4020b000, 0x1000 },
374
+ { "uart0", make_uart, &mms->uart[0], 0x40200000, 0x1000 },
375
+ { "uart1", make_uart, &mms->uart[1], 0x40201000, 0x1000 },
376
+ { "uart2", make_uart, &mms->uart[2], 0x40202000, 0x1000 },
377
+ { "uart3", make_uart, &mms->uart[3], 0x40203000, 0x1000 },
378
+ { "uart4", make_uart, &mms->uart[4], 0x40204000, 0x1000 },
379
+ { "i2c0", make_unimp_dev, &mms->i2c[0], 0x40207000, 0x1000 },
380
+ { "i2c1", make_unimp_dev, &mms->i2c[1], 0x40208000, 0x1000 },
381
+ { "i2c2", make_unimp_dev, &mms->i2c[2], 0x4020c000, 0x1000 },
382
+ { "i2c3", make_unimp_dev, &mms->i2c[3], 0x4020d000, 0x1000 },
383
+ },
384
+ }, {
385
+ .name = "apb_ppcexp2",
386
+ .ports = {
387
+ { "scc", make_scc, &mms->scc, 0x40300000, 0x1000 },
388
+ { "i2s-audio", make_unimp_dev, &mms->i2s_audio,
389
+ 0x40301000, 0x1000 },
390
+ { "fpgaio", make_fpgaio, &mms->fpgaio, 0x40302000, 0x1000 },
391
+ },
392
+ }, {
393
+ .name = "ahb_ppcexp0",
394
+ .ports = {
395
+ { "gfx", make_unimp_dev, &mms->gfx, 0x41000000, 0x140000 },
396
+ { "gpio0", make_unimp_dev, &mms->gpio[0], 0x40100000, 0x1000 },
397
+ { "gpio1", make_unimp_dev, &mms->gpio[1], 0x40101000, 0x1000 },
398
+ { "gpio2", make_unimp_dev, &mms->gpio[2], 0x40102000, 0x1000 },
399
+ { "gpio3", make_unimp_dev, &mms->gpio[3], 0x40103000, 0x1000 },
400
+ { "gpio4", make_unimp_dev, &mms->gpio[4], 0x40104000, 0x1000 },
401
+ },
402
+ }, {
403
+ .name = "ahb_ppcexp1",
404
+ .ports = {
405
+ { "dma0", make_unimp_dev, &mms->dma[0], 0x40110000, 0x1000 },
406
+ { "dma1", make_unimp_dev, &mms->dma[1], 0x40111000, 0x1000 },
407
+ { "dma2", make_unimp_dev, &mms->dma[2], 0x40112000, 0x1000 },
408
+ { "dma3", make_unimp_dev, &mms->dma[3], 0x40113000, 0x1000 },
409
+ },
410
+ },
411
+ };
412
+
413
+ for (i = 0; i < ARRAY_SIZE(ppcs); i++) {
414
+ const PPCInfo *ppcinfo = &ppcs[i];
415
+ TZPPC *ppc = &mms->ppc[i];
416
+ DeviceState *ppcdev;
417
+ int port;
418
+ char *gpioname;
419
+
420
+ init_sysbus_child(OBJECT(machine), ppcinfo->name, ppc,
421
+ sizeof(TZPPC), TYPE_TZ_PPC);
422
+ ppcdev = DEVICE(ppc);
423
+
424
+ for (port = 0; port < TZ_NUM_PORTS; port++) {
425
+ const PPCPortInfo *pinfo = &ppcinfo->ports[port];
426
+ MemoryRegion *mr;
427
+ char *portname;
428
+
429
+ if (!pinfo->devfn) {
430
+ continue;
431
+ }
432
+
433
+ mr = pinfo->devfn(mms, pinfo->opaque, pinfo->name, pinfo->size);
434
+ portname = g_strdup_printf("port[%d]", port);
435
+ object_property_set_link(OBJECT(ppc), OBJECT(mr),
436
+ portname, &error_fatal);
437
+ g_free(portname);
438
+ }
439
+
440
+ object_property_set_bool(OBJECT(ppc), true, "realized", &error_fatal);
441
+
442
+ for (port = 0; port < TZ_NUM_PORTS; port++) {
443
+ const PPCPortInfo *pinfo = &ppcinfo->ports[port];
444
+
445
+ if (!pinfo->devfn) {
446
+ continue;
447
+ }
448
+ sysbus_mmio_map(SYS_BUS_DEVICE(ppc), port, pinfo->addr);
449
+
450
+ gpioname = g_strdup_printf("%s_nonsec", ppcinfo->name);
451
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, port,
452
+ qdev_get_gpio_in_named(ppcdev,
453
+ "cfg_nonsec",
454
+ port));
455
+ g_free(gpioname);
456
+ gpioname = g_strdup_printf("%s_ap", ppcinfo->name);
457
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, port,
458
+ qdev_get_gpio_in_named(ppcdev,
459
+ "cfg_ap", port));
460
+ g_free(gpioname);
461
+ }
462
+
463
+ gpioname = g_strdup_printf("%s_irq_enable", ppcinfo->name);
464
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, 0,
465
+ qdev_get_gpio_in_named(ppcdev,
466
+ "irq_enable", 0));
467
+ g_free(gpioname);
468
+ gpioname = g_strdup_printf("%s_irq_clear", ppcinfo->name);
469
+ qdev_connect_gpio_out_named(iotkitdev, gpioname, 0,
470
+ qdev_get_gpio_in_named(ppcdev,
471
+ "irq_clear", 0));
472
+ g_free(gpioname);
473
+ gpioname = g_strdup_printf("%s_irq_status", ppcinfo->name);
474
+ qdev_connect_gpio_out_named(ppcdev, "irq", 0,
475
+ qdev_get_gpio_in_named(iotkitdev,
476
+ gpioname, 0));
477
+ g_free(gpioname);
478
+
479
+ qdev_connect_gpio_out(dev_splitter, i,
480
+ qdev_get_gpio_in_named(ppcdev,
481
+ "cfg_sec_resp", 0));
482
+ }
58
+ }
483
+
59
+
484
+ /* In hardware this is a LAN9220; the LAN9118 is software compatible
60
buffer_overflow:
485
+ * except that it doesn't support the checksum-offload feature.
61
tb = tb_alloc(pc);
486
+ * The ethernet controller is not behind a PPC.
62
if (unlikely(!tb)) {
487
+ */
488
+ lan9118_init(&nd_table[0], 0x42000000,
489
+ qdev_get_gpio_in_named(iotkitdev, "EXP_IRQ", 16));
490
+
491
+ create_unimplemented_device("FPGA NS PC", 0x48007000, 0x1000);
492
+
493
+ armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename, 0x400000);
494
+}
495
+
496
+static void mps2tz_class_init(ObjectClass *oc, void *data)
497
+{
498
+ MachineClass *mc = MACHINE_CLASS(oc);
499
+
500
+ mc->init = mps2tz_common_init;
501
+ mc->max_cpus = 1;
502
+}
503
+
504
+static void mps2tz_an505_class_init(ObjectClass *oc, void *data)
505
+{
506
+ MachineClass *mc = MACHINE_CLASS(oc);
507
+ MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_CLASS(oc);
508
+
509
+ mc->desc = "ARM MPS2 with AN505 FPGA image for Cortex-M33";
510
+ mmc->fpga_type = FPGA_AN505;
511
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-m33");
512
+ mmc->scc_id = 0x41040000 | (505 << 4);
513
+}
514
+
515
+static const TypeInfo mps2tz_info = {
516
+ .name = TYPE_MPS2TZ_MACHINE,
517
+ .parent = TYPE_MACHINE,
518
+ .abstract = true,
519
+ .instance_size = sizeof(MPS2TZMachineState),
520
+ .class_size = sizeof(MPS2TZMachineClass),
521
+ .class_init = mps2tz_class_init,
522
+};
523
+
524
+static const TypeInfo mps2tz_an505_info = {
525
+ .name = TYPE_MPS2TZ_AN505_MACHINE,
526
+ .parent = TYPE_MPS2TZ_MACHINE,
527
+ .class_init = mps2tz_an505_class_init,
528
+};
529
+
530
+static void mps2tz_machine_init(void)
531
+{
532
+ type_register_static(&mps2tz_info);
533
+ type_register_static(&mps2tz_an505_info);
534
+}
535
+
536
+type_init(mps2tz_machine_init);
537
--
63
--
538
2.16.2
64
2.18.0
539
65
540
66
diff view generated by jsdifflib
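Putting the translate-all.c hunks above together, the execute-from-MMIO path reduces to roughly the following (a condensed sketch only; locking, the buffer_overflow path and unrelated code are elided):

    /* tb_gen_code(): if the PC has no backing RAM page, translate exactly
     * one instruction and mark the TB as uncacheable.
     */
    phys_pc = get_page_addr_code(env, pc);
    if (phys_pc == -1) {
        cflags &= ~CF_COUNT_MASK;
        cflags |= CF_NOCACHE | 1;
    }

    /* tb_link_page(): a TB with no physical RAM page never goes onto a
     * page list; just mark its page_addr[] entries invalid.
     */
    if (phys_pc == -1) {
        assert(tb->cflags & CF_NOCACHE);
        tb->page_addr[0] = tb->page_addr[1] = -1;
        return tb;
    }

tb_phys_invalidate() is the other half of the change: it only takes the page locks when tb->page_addr[0] != -1, so these uncached TBs can be invalidated without touching any page list.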
1
Add a Cortex-M33 definition. The M33 is an M profile CPU
1
Now that all the callers can handle get_page_addr_code() returning -1,
2
which implements the ARM v8M architecture, including the
2
remove all the code which tries to handle execution from MMIO regions
3
M profile Security Extension.
3
or small-MMU-region RAM areas. This will mean that we can correctly
4
execute from these areas, rather than ending up either aborting QEMU
5
or delivering an incorrect guest exception.
4
6
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180220180325.29818-9-peter.maydell@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Tested-by: Cédric Le Goater <clg@kaod.org>
11
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
8
---
13
---
9
target/arm/cpu.c | 31 +++++++++++++++++++++++++++++++
14
accel/tcg/cputlb.c | 95 +++++-----------------------------------------
10
1 file changed, 31 insertions(+)
15
1 file changed, 10 insertions(+), 85 deletions(-)
11
16
12
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
17
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
13
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.c
19
--- a/accel/tcg/cputlb.c
15
+++ b/target/arm/cpu.c
20
+++ b/accel/tcg/cputlb.c
16
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
21
@@ -XXX,XX +XXX,XX @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
17
cpu->id_isar5 = 0x00000000;
22
prot, mmu_idx, size);
18
}
23
}
19
24
20
+static void cortex_m33_initfn(Object *obj)
25
-static void report_bad_exec(CPUState *cpu, target_ulong addr)
21
+{
26
-{
22
+ ARMCPU *cpu = ARM_CPU(obj);
27
- /* Accidentally executing outside RAM or ROM is quite common for
23
+
28
- * several user-error situations, so report it in a way that
24
+ set_feature(&cpu->env, ARM_FEATURE_V8);
29
- * makes it clear that this isn't a QEMU bug and provide suggestions
25
+ set_feature(&cpu->env, ARM_FEATURE_M);
30
- * about what a user could do to fix things.
26
+ set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
31
- */
27
+ set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
32
- error_report("Trying to execute code outside RAM or ROM at 0x"
28
+ cpu->midr = 0x410fd213; /* r0p3 */
33
- TARGET_FMT_lx, addr);
29
+ cpu->pmsav7_dregion = 16;
34
- error_printf("This usually means one of the following happened:\n\n"
30
+ cpu->sau_sregion = 8;
35
- "(1) You told QEMU to execute a kernel for the wrong machine "
31
+ cpu->id_pfr0 = 0x00000030;
36
- "type, and it crashed on startup (eg trying to run a "
32
+ cpu->id_pfr1 = 0x00000210;
37
- "raspberry pi kernel on a versatilepb QEMU machine)\n"
33
+ cpu->id_dfr0 = 0x00200000;
38
- "(2) You didn't give QEMU a kernel or BIOS filename at all, "
34
+ cpu->id_afr0 = 0x00000000;
39
- "and QEMU executed a ROM full of no-op instructions until "
35
+ cpu->id_mmfr0 = 0x00101F40;
40
- "it fell off the end\n"
36
+ cpu->id_mmfr1 = 0x00000000;
41
- "(3) Your guest kernel has a bug and crashed by jumping "
37
+ cpu->id_mmfr2 = 0x01000000;
42
- "off into nowhere\n\n"
38
+ cpu->id_mmfr3 = 0x00000000;
43
- "This is almost always one of the first two, so check your "
39
+ cpu->id_isar0 = 0x01101110;
44
- "command line and that you are using the right type of kernel "
40
+ cpu->id_isar1 = 0x02212000;
45
- "for this machine.\n"
41
+ cpu->id_isar2 = 0x20232232;
46
- "If you think option (3) is likely then you can try debugging "
42
+ cpu->id_isar3 = 0x01111131;
47
- "your guest with the -d debug options; in particular "
43
+ cpu->id_isar4 = 0x01310132;
48
- "-d guest_errors will cause the log to include a dump of the "
44
+ cpu->id_isar5 = 0x00000000;
49
- "guest register state at this point.\n\n"
45
+ cpu->clidr = 0x00000000;
50
- "Execution cannot continue; stopping here.\n\n");
46
+ cpu->ctr = 0x8000c000;
51
-
47
+}
52
- /* Report also to the logs, with more detail including register dump */
48
+
53
- qemu_log_mask(LOG_GUEST_ERROR, "qemu: fatal: Trying to execute code "
49
static void arm_v7m_class_init(ObjectClass *oc, void *data)
54
- "outside RAM or ROM at 0x" TARGET_FMT_lx "\n", addr);
55
- log_cpu_state_mask(LOG_GUEST_ERROR, cpu, CPU_DUMP_FPU | CPU_DUMP_CCOP);
56
-}
57
-
58
static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
50
{
59
{
51
CPUClass *cc = CPU_CLASS(oc);
60
ram_addr_t ram_addr;
52
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_cpus[] = {
61
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
53
.class_init = arm_v7m_class_init },
62
MemoryRegionSection *section;
54
{ .name = "cortex-m4", .initfn = cortex_m4_initfn,
63
CPUState *cpu = ENV_GET_CPU(env);
55
.class_init = arm_v7m_class_init },
64
CPUIOTLBEntry *iotlbentry;
56
+ { .name = "cortex-m33", .initfn = cortex_m33_initfn,
65
- hwaddr physaddr, mr_offset;
57
+ .class_init = arm_v7m_class_init },
66
58
{ .name = "cortex-r5", .initfn = cortex_r5_initfn },
67
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
59
{ .name = "cortex-a7", .initfn = cortex_a7_initfn },
68
mmu_idx = cpu_mmu_index(env, true);
60
{ .name = "cortex-a8", .initfn = cortex_a8_initfn },
69
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
70
if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
71
/*
72
* This is a TLB_RECHECK access, where the MMU protection
73
- * covers a smaller range than a target page, and we must
74
- * repeat the MMU check here. This tlb_fill() call might
75
- * longjump out if this access should cause a guest exception.
76
- */
77
- int index;
78
- target_ulong tlb_addr;
79
-
80
- tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
81
-
82
- index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
83
- tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
84
- if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
85
- /* RAM access. We can't handle this, so for now just stop */
86
- cpu_abort(cpu, "Unable to handle guest executing from RAM within "
87
- "a small MPU region at 0x" TARGET_FMT_lx, addr);
88
- }
89
- /*
90
- * Fall through to handle IO accesses (which will almost certainly
91
- * also result in failure)
92
+ * covers a smaller range than a target page. Return -1 to
93
+ * indicate that we cannot simply execute from RAM here;
94
+ * we will perform the necessary repeat of the MMU check
95
+ * when the "execute a single insn" code performs the
96
+ * load of the guest insn.
97
*/
98
+ return -1;
99
}
100
101
iotlbentry = &env->iotlb[mmu_idx][index];
102
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
103
mr = section->mr;
104
if (memory_region_is_unassigned(mr)) {
105
- qemu_mutex_lock_iothread();
106
- if (memory_region_request_mmio_ptr(mr, addr)) {
107
- qemu_mutex_unlock_iothread();
108
- /* A MemoryRegion is potentially added so re-run the
109
- * get_page_addr_code.
110
- */
111
- return get_page_addr_code(env, addr);
112
- }
113
- qemu_mutex_unlock_iothread();
114
-
115
- /* Give the new-style cpu_transaction_failed() hook first chance
116
- * to handle this.
117
- * This is not the ideal place to detect and generate CPU
118
- * exceptions for instruction fetch failure (for instance
119
- * we don't know the length of the access that the CPU would
120
- * use, and it would be better to go ahead and try the access
121
- * and use the MemTXResult it produced). However it is the
122
- * simplest place we have currently available for the check.
123
+ /*
124
+ * Not guest RAM, so there is no ram_addr_t for it. Return -1,
125
+ * and we will execute a single insn from this device.
126
*/
127
- mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
128
- physaddr = mr_offset +
129
- section->offset_within_address_space -
130
- section->offset_within_region;
131
- cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
132
- iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
133
-
134
- cpu_unassigned_access(cpu, addr, false, true, 0, 4);
135
- /* The CPU's unassigned access hook might have longjumped out
136
- * with an exception. If it didn't (or there was no hook) then
137
- * we can't proceed further.
138
- */
139
- report_bad_exec(cpu, addr);
140
- exit(1);
141
+ return -1;
142
}
143
p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
144
return qemu_ram_addr_from_host_nofail(p);
61
--
145
--
62
2.16.2
146
2.18.0
63
147
64
148
diff view generated by jsdifflib
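With the cortex-m33 model above in place, a board selects it in the usual way through its machine class; a hypothetical sketch in the same style as the mps2-an505 class_init earlier in the series (the my_board_* names are illustrative):

static void my_board_class_init(ObjectClass *oc, void *data)
{
    MachineClass *mc = MACHINE_CLASS(oc);

    mc->desc = "Example board with a Cortex-M33";
    mc->init = my_board_init;      /* board init function, not shown here */
    mc->max_cpus = 1;
    mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-m33");
}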
1
Add a function load_ramdisk_as() which behaves like the existing
1
Now that we have full support for small regions, including execution,
2
load_ramdisk() but allows the caller to specify the AddressSpace
2
we can remove the workarounds where we marked all small regions as
3
to use. This matches the pattern we have already for various
3
non-executable for the M-profile MPU and SAU.
4
other loader functions.
5
4
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Tested-by: Cédric Le Goater <clg@kaod.org>
9
Message-id: 20180220180325.29818-2-peter.maydell@linaro.org
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
10
---
11
---
11
include/hw/loader.h | 12 +++++++++++-
12
target/arm/helper.c | 23 -----------------------
12
hw/core/loader.c | 8 +++++++-
13
1 file changed, 23 deletions(-)
13
2 files changed, 18 insertions(+), 2 deletions(-)
14
14
15
diff --git a/include/hw/loader.h b/include/hw/loader.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/hw/loader.h
17
--- a/target/arm/helper.c
18
+++ b/include/hw/loader.h
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ int load_uimage(const char *filename, hwaddr *ep,
19
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
20
void *translate_opaque);
20
21
21
fi->type = ARMFault_Permission;
22
/**
22
fi->level = 1;
23
- * load_ramdisk:
23
- /*
24
+ * load_ramdisk_as:
24
- * Core QEMU code can't handle execution from small pages yet, so
25
* @filename: Path to the ramdisk image
25
- * don't try it. This way we'll get an MPU exception, rather than
26
* @addr: Memory address to load the ramdisk to
26
- * eventually causing QEMU to exit in get_page_addr_code().
27
* @max_sz: Maximum allowed ramdisk size (for non-u-boot ramdisks)
27
- */
28
+ * @as: The AddressSpace to load the ELF to. The value of address_space_memory
28
- if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
29
+ * is used if nothing is supplied here.
29
- qemu_log_mask(LOG_UNIMP,
30
*
30
- "MPU: No support for execution from regions "
31
* Load a ramdisk image with U-Boot header to the specified memory
31
- "smaller than 1K\n");
32
* address.
32
- *prot &= ~PAGE_EXEC;
33
*
33
- }
34
* Returns the size of the loaded image on success, -1 otherwise.
34
return !(*prot & (1 << access_type));
35
*/
36
+int load_ramdisk_as(const char *filename, hwaddr addr, uint64_t max_sz,
37
+ AddressSpace *as);
38
+
39
+/**
40
+ * load_ramdisk:
41
+ * Same as load_ramdisk_as(), but doesn't allow the caller to specify
42
+ * an AddressSpace.
43
+ */
44
int load_ramdisk(const char *filename, hwaddr addr, uint64_t max_sz);
45
46
ssize_t gunzip(void *dst, size_t dstlen, uint8_t *src, size_t srclen);
47
diff --git a/hw/core/loader.c b/hw/core/loader.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/hw/core/loader.c
50
+++ b/hw/core/loader.c
51
@@ -XXX,XX +XXX,XX @@ int load_uimage_as(const char *filename, hwaddr *ep, hwaddr *loadaddr,
52
53
/* Load a ramdisk. */
54
int load_ramdisk(const char *filename, hwaddr addr, uint64_t max_sz)
55
+{
56
+ return load_ramdisk_as(filename, addr, max_sz, NULL);
57
+}
58
+
59
+int load_ramdisk_as(const char *filename, hwaddr addr, uint64_t max_sz,
60
+ AddressSpace *as)
61
{
62
return load_uboot_image(filename, NULL, &addr, NULL, IH_TYPE_RAMDISK,
63
- NULL, NULL, NULL);
64
+ NULL, NULL, as);
65
}
35
}
66
36
67
/* Load a gzip-compressed kernel to a dynamically allocated buffer. */
37
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
38
39
fi->type = ARMFault_Permission;
40
fi->level = 1;
41
- /*
42
- * Core QEMU code can't handle execution from small pages yet, so
43
- * don't try it. This means any attempted execution will generate
44
- * an MPU exception, rather than eventually causing QEMU to exit in
45
- * get_page_addr_code().
46
- */
47
- if (*is_subpage && (*prot & PAGE_EXEC)) {
48
- qemu_log_mask(LOG_UNIMP,
49
- "MPU: No support for execution from regions "
50
- "smaller than 1K\n");
51
- *prot &= ~PAGE_EXEC;
52
- }
53
return !(*prot & (1 << access_type));
54
}
55
68
--
56
--
69
2.16.2
57
2.18.0
70
58
71
59
diff view generated by jsdifflib
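The new loader entry point is meant for board code that loads images into a specific AddressSpace rather than into the global system memory; a hedged example of the call pattern (only load_ramdisk_as() itself comes from the patch above, the board-specific names and the load address are illustrative):

static void my_board_load_initrd(MachineState *machine, AddressSpace *cpu_as)
{
    if (machine->initrd_filename) {
        if (load_ramdisk_as(machine->initrd_filename, 0x08000000,
                            machine->ram_size, cpu_as) < 0) {
            error_report("Could not load ramdisk '%s'",
                         machine->initrd_filename);
            exit(1);
        }
    }
}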
1
The or-irq.h header file is missing the customary guard against
1
We set up TLB entries in tlb_set_page_with_attrs(), where we have
2
multiple inclusion, which means compilation fails if it gets
2
some logic for determining whether the TLB entry is considered
3
included twice. Fix the omission.
3
to be RAM-backed, and thus has a valid addend field. When we
4
look at the TLB entry in get_page_addr_code(), we use different
5
logic for determining whether to treat the page as RAM-backed
6
and use the addend field. This is confusing, and in fact buggy,
7
because the code in tlb_set_page_with_attrs() correctly decides
8
that rom_device memory regions not in romd mode are not RAM-backed,
9
but the code in get_page_addr_code() thinks they are RAM-backed.
10
This typically results in "Bad ram pointer" assertion if the
11
guest tries to execute from such a memory region.
12
13
Fix this by making get_page_addr_code() just look at the
14
TLB_MMIO bit in the code_address field of the TLB, which
15
tlb_set_page_with_attrs() sets if and only if the addend
16
field is not valid for code execution.
4
17
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-11-peter.maydell@linaro.org
20
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
21
Message-id: 20180713150945.12348-1-peter.maydell@linaro.org
9
---
22
---
10
include/hw/or-irq.h | 5 +++++
23
include/exec/exec-all.h | 2 --
11
1 file changed, 5 insertions(+)
24
accel/tcg/cputlb.c | 29 ++++++++---------------------
25
exec.c | 6 ------
26
3 files changed, 8 insertions(+), 29 deletions(-)
12
27
13
diff --git a/include/hw/or-irq.h b/include/hw/or-irq.h
28
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
14
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/or-irq.h
30
--- a/include/exec/exec-all.h
16
+++ b/include/hw/or-irq.h
31
+++ b/include/exec/exec-all.h
17
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
18
* THE SOFTWARE.
33
hwaddr paddr, hwaddr xlat,
19
*/
34
int prot,
20
35
target_ulong *address);
21
+#ifndef HW_OR_IRQ_H
36
-bool memory_region_is_unassigned(MemoryRegion *mr);
22
+#define HW_OR_IRQ_H
37
-
23
+
38
#endif
24
#include "hw/irq.h"
39
25
#include "hw/sysbus.h"
40
/* vl.c */
26
#include "qom/object.h"
41
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
27
@@ -XXX,XX +XXX,XX @@ struct OrIRQState {
42
index XXXXXXX..XXXXXXX 100644
28
bool levels[MAX_OR_LINES];
43
--- a/accel/tcg/cputlb.c
29
uint16_t num_lines;
44
+++ b/accel/tcg/cputlb.c
30
};
45
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
31
+
46
{
32
+#endif
47
int mmu_idx, index;
48
void *p;
49
- MemoryRegion *mr;
50
- MemoryRegionSection *section;
51
- CPUState *cpu = ENV_GET_CPU(env);
52
- CPUIOTLBEntry *iotlbentry;
53
54
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
55
mmu_idx = cpu_mmu_index(env, true);
56
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
57
assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
58
}
59
60
- if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
61
+ if (unlikely(env->tlb_table[mmu_idx][index].addr_code &
62
+ (TLB_RECHECK | TLB_MMIO))) {
63
/*
64
- * This is a TLB_RECHECK access, where the MMU protection
65
- * covers a smaller range than a target page. Return -1 to
66
- * indicate that we cannot simply execute from RAM here;
67
- * we will perform the necessary repeat of the MMU check
68
- * when the "execute a single insn" code performs the
69
- * load of the guest insn.
70
+ * Return -1 if we can't translate and execute from an entire
71
+ * page of RAM here, which will cause us to execute by loading
72
+ * and translating one insn at a time, without caching:
73
+ * - TLB_RECHECK: means the MMU protection covers a smaller range
74
+ * than a target page, so we must redo the MMU check every insn
75
+ * - TLB_MMIO: region is not backed by RAM
76
*/
77
return -1;
78
}
79
80
- iotlbentry = &env->iotlb[mmu_idx][index];
81
- section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
82
- mr = section->mr;
83
- if (memory_region_is_unassigned(mr)) {
84
- /*
85
- * Not guest RAM, so there is no ram_addr_t for it. Return -1,
86
- * and we will execute a single insn from this device.
87
- */
88
- return -1;
89
- }
90
p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
91
return qemu_ram_addr_from_host_nofail(p);
92
}
93
diff --git a/exec.c b/exec.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/exec.c
96
+++ b/exec.c
97
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
98
}
99
}
100
101
-bool memory_region_is_unassigned(MemoryRegion *mr)
102
-{
103
- return mr != &io_mem_rom && mr != &io_mem_notdirty && !mr->rom_device
104
- && mr != &io_mem_watch;
105
-}
106
-
107
/* Called from RCU critical section */
108
static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
109
hwaddr addr,
33
--
110
--
34
2.16.2
111
2.18.0
35
112
36
113
diff view generated by jsdifflib
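The or-irq.h fix follows the customary include-guard pattern; for reference, a minimal sketch of the convention (the real patch simply wraps the header's existing contents):

#ifndef HW_OR_IRQ_H
#define HW_OR_IRQ_H

/* ... existing declarations, now safe to include more than once ... */

#endif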
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Allow the translate subroutines to return false for invalid insns.
3
In preparation for the virtualization extensions implementation,
4
refactor the names of the functions and macros that act on the GIC
5
distributor to make that fact explicit. It will be useful to
6
differentiate them from the ones that will act on the virtual
7
interfaces.
4
8
5
At present we can of course invoke an invalid insn exception from within
9
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
the translate subroutine, but in the short term this consolidates code.
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
In the long term it would allow the decodetree language to support
11
Reviewed-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
8
overlapping patterns for ISA extensions.
9
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20180227232618.2908-1-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20180727095421.386-2-luc.michel@greensocs.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
---
15
scripts/decodetree.py | 5 ++---
16
hw/intc/gic_internal.h | 51 ++++++------
16
1 file changed, 2 insertions(+), 3 deletions(-)
17
hw/intc/arm_gic.c | 163 +++++++++++++++++++++------------------
18
hw/intc/arm_gic_common.c | 6 +-
19
hw/intc/arm_gic_kvm.c | 23 +++---
20
4 files changed, 127 insertions(+), 116 deletions(-)
17
21
18
diff --git a/scripts/decodetree.py b/scripts/decodetree.py
22
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
19
index XXXXXXX..XXXXXXX 100755
23
index XXXXXXX..XXXXXXX 100644
20
--- a/scripts/decodetree.py
24
--- a/hw/intc/gic_internal.h
21
+++ b/scripts/decodetree.py
25
+++ b/hw/intc/gic_internal.h
22
@@ -XXX,XX +XXX,XX @@ class Pattern(General):
26
@@ -XXX,XX +XXX,XX @@
23
global translate_prefix
27
24
output('typedef ', self.base.base.struct_name(),
28
#define GIC_BASE_IRQ 0
25
' arg_', self.name, ';\n')
29
26
- output(translate_scope, 'void ', translate_prefix, '_', self.name,
30
-#define GIC_SET_ENABLED(irq, cm) s->irq_state[irq].enabled |= (cm)
27
+ output(translate_scope, 'bool ', translate_prefix, '_', self.name,
31
-#define GIC_CLEAR_ENABLED(irq, cm) s->irq_state[irq].enabled &= ~(cm)
28
'(DisasContext *ctx, arg_', self.name,
32
-#define GIC_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
29
' *a, ', insntype, ' insn);\n')
33
-#define GIC_SET_PENDING(irq, cm) s->irq_state[irq].pending |= (cm)
30
34
-#define GIC_CLEAR_PENDING(irq, cm) s->irq_state[irq].pending &= ~(cm)
31
@@ -XXX,XX +XXX,XX @@ class Pattern(General):
35
-#define GIC_SET_ACTIVE(irq, cm) s->irq_state[irq].active |= (cm)
32
output(ind, self.base.extract_name(), '(&u.f_', arg, ', insn);\n')
36
-#define GIC_CLEAR_ACTIVE(irq, cm) s->irq_state[irq].active &= ~(cm)
33
for n, f in self.fields.items():
37
-#define GIC_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
34
output(ind, 'u.f_', arg, '.', n, ' = ', f.str_extract(), ';\n')
38
-#define GIC_SET_MODEL(irq) s->irq_state[irq].model = true
35
- output(ind, translate_prefix, '_', self.name,
39
-#define GIC_CLEAR_MODEL(irq) s->irq_state[irq].model = false
36
+ output(ind, 'return ', translate_prefix, '_', self.name,
40
-#define GIC_TEST_MODEL(irq) s->irq_state[irq].model
37
'(ctx, &u.f_', arg, ', insn);\n')
41
-#define GIC_SET_LEVEL(irq, cm) s->irq_state[irq].level |= (cm)
38
- output(ind, 'return true;\n')
42
-#define GIC_CLEAR_LEVEL(irq, cm) s->irq_state[irq].level &= ~(cm)
39
# end Pattern
43
-#define GIC_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
40
44
-#define GIC_SET_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = true
45
-#define GIC_CLEAR_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = false
46
-#define GIC_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
47
-#define GIC_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ? \
48
+#define GIC_DIST_SET_ENABLED(irq, cm) (s->irq_state[irq].enabled |= (cm))
49
+#define GIC_DIST_CLEAR_ENABLED(irq, cm) (s->irq_state[irq].enabled &= ~(cm))
50
+#define GIC_DIST_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
51
+#define GIC_DIST_SET_PENDING(irq, cm) (s->irq_state[irq].pending |= (cm))
52
+#define GIC_DIST_CLEAR_PENDING(irq, cm) (s->irq_state[irq].pending &= ~(cm))
53
+#define GIC_DIST_SET_ACTIVE(irq, cm) (s->irq_state[irq].active |= (cm))
54
+#define GIC_DIST_CLEAR_ACTIVE(irq, cm) (s->irq_state[irq].active &= ~(cm))
55
+#define GIC_DIST_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
56
+#define GIC_DIST_SET_MODEL(irq) (s->irq_state[irq].model = true)
57
+#define GIC_DIST_CLEAR_MODEL(irq) (s->irq_state[irq].model = false)
58
+#define GIC_DIST_TEST_MODEL(irq) (s->irq_state[irq].model)
59
+#define GIC_DIST_SET_LEVEL(irq, cm) (s->irq_state[irq].level |= (cm))
60
+#define GIC_DIST_CLEAR_LEVEL(irq, cm) (s->irq_state[irq].level &= ~(cm))
61
+#define GIC_DIST_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
62
+#define GIC_DIST_SET_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger = true)
63
+#define GIC_DIST_CLEAR_EDGE_TRIGGER(irq) \
64
+ (s->irq_state[irq].edge_trigger = false)
65
+#define GIC_DIST_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
66
+#define GIC_DIST_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ? \
67
s->priority1[irq][cpu] : \
68
s->priority2[(irq) - GIC_INTERNAL])
69
-#define GIC_TARGET(irq) s->irq_target[irq]
70
-#define GIC_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
71
-#define GIC_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
72
-#define GIC_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
73
+#define GIC_DIST_TARGET(irq) (s->irq_target[irq])
74
+#define GIC_DIST_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
75
+#define GIC_DIST_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
76
+#define GIC_DIST_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
77
78
#define GICD_CTLR_EN_GRP0 (1U << 0)
79
#define GICD_CTLR_EN_GRP1 (1U << 1)
80
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
81
void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
82
void gic_update(GICState *s);
83
void gic_init_irqs_and_distributor(GICState *s);
84
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
85
- MemTxAttrs attrs);
86
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
87
+ MemTxAttrs attrs);
88
89
static inline bool gic_test_pending(GICState *s, int irq, int cm)
90
{
91
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
92
* GICD_ISPENDR to set the state pending.
93
*/
94
return (s->irq_state[irq].pending & cm) ||
95
- (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_LEVEL(irq, cm));
96
+ (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_LEVEL(irq, cm));
97
}
98
}
99
100
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/hw/intc/arm_gic.c
103
+++ b/hw/intc/arm_gic.c
104
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
105
best_prio = 0x100;
106
best_irq = 1023;
107
for (irq = 0; irq < s->num_irq; irq++) {
108
- if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
109
- (!GIC_TEST_ACTIVE(irq, cm)) &&
110
- (irq < GIC_INTERNAL || GIC_TARGET(irq) & cm)) {
111
- if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
112
- best_prio = GIC_GET_PRIORITY(irq, cpu);
113
+ if (GIC_DIST_TEST_ENABLED(irq, cm) &&
114
+ gic_test_pending(s, irq, cm) &&
115
+ (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
116
+ (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
117
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
118
+ best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
119
best_irq = irq;
120
}
121
}
122
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
123
if (best_prio < s->priority_mask[cpu]) {
124
s->current_pending[cpu] = best_irq;
125
if (best_prio < s->running_priority[cpu]) {
126
- int group = GIC_TEST_GROUP(best_irq, cm);
127
+ int group = GIC_DIST_TEST_GROUP(best_irq, cm);
128
129
if (extract32(s->ctlr, group, 1) &&
130
extract32(s->cpu_ctlr[cpu], group, 1)) {
131
@@ -XXX,XX +XXX,XX @@ void gic_set_pending_private(GICState *s, int cpu, int irq)
132
}
133
134
DPRINTF("Set %d pending cpu %d\n", irq, cpu);
135
- GIC_SET_PENDING(irq, cm);
136
+ GIC_DIST_SET_PENDING(irq, cm);
137
gic_update(s);
138
}
139
140
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
141
int cm, int target)
142
{
143
if (level) {
144
- GIC_SET_LEVEL(irq, cm);
145
- if (GIC_TEST_EDGE_TRIGGER(irq) || GIC_TEST_ENABLED(irq, cm)) {
146
+ GIC_DIST_SET_LEVEL(irq, cm);
147
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq) || GIC_DIST_TEST_ENABLED(irq, cm)) {
148
DPRINTF("Set %d pending mask %x\n", irq, target);
149
- GIC_SET_PENDING(irq, target);
150
+ GIC_DIST_SET_PENDING(irq, target);
151
}
152
} else {
153
- GIC_CLEAR_LEVEL(irq, cm);
154
+ GIC_DIST_CLEAR_LEVEL(irq, cm);
155
}
156
}
157
158
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_generic(GICState *s, int irq, int level,
159
int cm, int target)
160
{
161
if (level) {
162
- GIC_SET_LEVEL(irq, cm);
163
+ GIC_DIST_SET_LEVEL(irq, cm);
164
DPRINTF("Set %d pending mask %x\n", irq, target);
165
- if (GIC_TEST_EDGE_TRIGGER(irq)) {
166
- GIC_SET_PENDING(irq, target);
167
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq)) {
168
+ GIC_DIST_SET_PENDING(irq, target);
169
}
170
} else {
171
- GIC_CLEAR_LEVEL(irq, cm);
172
+ GIC_DIST_CLEAR_LEVEL(irq, cm);
173
}
174
}
175
176
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
177
/* The first external input line is internal interrupt 32. */
178
cm = ALL_CPU_MASK;
179
irq += GIC_INTERNAL;
180
- target = GIC_TARGET(irq);
181
+ target = GIC_DIST_TARGET(irq);
182
} else {
183
int cpu;
184
irq -= (s->num_irq - GIC_INTERNAL);
185
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
186
187
assert(irq >= GIC_NR_SGIS);
188
189
- if (level == GIC_TEST_LEVEL(irq, cm)) {
190
+ if (level == GIC_DIST_TEST_LEVEL(irq, cm)) {
191
return;
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
195
uint16_t pending_irq = s->current_pending[cpu];
196
197
if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
198
- int group = GIC_TEST_GROUP(pending_irq, (1 << cpu));
199
+ int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
200
/* On a GIC without the security extensions, reading this register
201
* behaves in the same way as a secure access to a GIC with them.
202
*/
203
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
204
205
if (gic_has_groups(s) &&
206
!(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
207
- GIC_TEST_GROUP(irq, (1 << cpu))) {
208
+ GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
209
bpr = s->abpr[cpu] - 1;
210
assert(bpr >= 0);
211
} else {
212
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
213
*/
214
mask = ~0U << ((bpr & 7) + 1);
215
216
- return GIC_GET_PRIORITY(irq, cpu) & mask;
217
+ return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
218
}
219
220
static void gic_activate_irq(GICState *s, int cpu, int irq)
221
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
222
int regno = preemption_level / 32;
223
int bitno = preemption_level % 32;
224
225
- if (gic_has_groups(s) && GIC_TEST_GROUP(irq, (1 << cpu))) {
226
+ if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
227
s->nsapr[regno][cpu] |= (1 << bitno);
228
} else {
229
s->apr[regno][cpu] |= (1 << bitno);
230
}
231
232
s->running_priority[cpu] = prio;
233
- GIC_SET_ACTIVE(irq, 1 << cpu);
234
+ GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
235
}
236
237
static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
238
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
239
return irq;
240
}
241
242
- if (GIC_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
243
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
244
DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
245
return 1023;
246
}
247
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
248
/* Clear pending flags for both level and edge triggered interrupts.
249
* Level triggered IRQs will be reasserted once they become inactive.
250
*/
251
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
252
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
253
+ : cm);
254
ret = irq;
255
} else {
256
if (irq < GIC_NR_SGIS) {
257
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
258
src = ctz32(s->sgi_pending[irq][cpu]);
259
s->sgi_pending[irq][cpu] &= ~(1 << src);
260
if (s->sgi_pending[irq][cpu] == 0) {
261
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
262
+ GIC_DIST_CLEAR_PENDING(irq,
263
+ GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
264
+ : cm);
265
}
266
ret = irq | ((src & 0x7) << 10);
267
} else {
268
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
269
* interrupts. (level triggered interrupts with an active line
270
* remain pending, see gic_test_pending)
271
*/
272
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
273
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
274
+ : cm);
275
ret = irq;
276
}
277
}
278
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
279
return ret;
280
}
281
282
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
283
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
284
MemTxAttrs attrs)
285
{
286
if (s->security_extn && !attrs.secure) {
287
- if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
288
+ if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
289
return; /* Ignore Non-secure access of Group0 IRQ */
290
}
291
val = 0x80 | (val >> 1); /* Non-secure view */
292
@@ -XXX,XX +XXX,XX @@ void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
293
}
294
}
295
296
-static uint32_t gic_get_priority(GICState *s, int cpu, int irq,
297
+static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
298
MemTxAttrs attrs)
299
{
300
- uint32_t prio = GIC_GET_PRIORITY(irq, cpu);
301
+ uint32_t prio = GIC_DIST_GET_PRIORITY(irq, cpu);
302
303
if (s->security_extn && !attrs.secure) {
304
- if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
305
+ if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
306
return 0; /* Non-secure access cannot read priority of Group0 IRQ */
307
}
308
prio = (prio << 1) & 0xff; /* Non-secure view */
309
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
310
return;
311
}
312
313
- group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
314
+ group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
315
316
if (!gic_eoi_split(s, cpu, attrs)) {
317
/* This is UNPREDICTABLE; we choose to ignore it */
318
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
319
return;
320
}
321
322
- GIC_CLEAR_ACTIVE(irq, cm);
323
+ GIC_DIST_CLEAR_ACTIVE(irq, cm);
324
}
325
326
void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
327
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
328
if (s->revision == REV_11MPCORE) {
329
/* Mark level triggered interrupts as pending if they are still
330
raised. */
331
- if (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_ENABLED(irq, cm)
332
- && GIC_TEST_LEVEL(irq, cm) && (GIC_TARGET(irq) & cm) != 0) {
333
+ if (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_ENABLED(irq, cm)
334
+ && GIC_DIST_TEST_LEVEL(irq, cm)
335
+ && (GIC_DIST_TARGET(irq) & cm) != 0) {
336
DPRINTF("Set %d pending mask %x\n", irq, cm);
337
- GIC_SET_PENDING(irq, cm);
338
+ GIC_DIST_SET_PENDING(irq, cm);
339
}
340
}
341
342
- group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
343
+ group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
344
345
if (s->security_extn && !attrs.secure && !group) {
346
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
347
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
348
349
/* In GICv2 the guest can choose to split priority-drop and deactivate */
350
if (!gic_eoi_split(s, cpu, attrs)) {
351
- GIC_CLEAR_ACTIVE(irq, cm);
352
+ GIC_DIST_CLEAR_ACTIVE(irq, cm);
353
}
354
gic_update(s);
355
}
356
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
357
goto bad_reg;
358
}
359
for (i = 0; i < 8; i++) {
360
- if (GIC_TEST_GROUP(irq + i, cm)) {
361
+ if (GIC_DIST_TEST_GROUP(irq + i, cm)) {
362
res |= (1 << i);
363
}
364
}
365
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
366
res = 0;
367
for (i = 0; i < 8; i++) {
368
if (s->security_extn && !attrs.secure &&
369
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
370
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
371
continue; /* Ignore Non-secure access of Group0 IRQ */
372
}
373
374
- if (GIC_TEST_ENABLED(irq + i, cm)) {
375
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
376
res |= (1 << i);
377
}
378
}
379
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
380
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
381
for (i = 0; i < 8; i++) {
382
if (s->security_extn && !attrs.secure &&
383
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
384
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
385
continue; /* Ignore Non-secure access of Group0 IRQ */
386
}
387
388
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
389
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
390
for (i = 0; i < 8; i++) {
391
if (s->security_extn && !attrs.secure &&
392
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
393
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
394
continue; /* Ignore Non-secure access of Group0 IRQ */
395
}
396
397
- if (GIC_TEST_ACTIVE(irq + i, mask)) {
398
+ if (GIC_DIST_TEST_ACTIVE(irq + i, mask)) {
399
res |= (1 << i);
400
}
401
}
402
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
403
irq = (offset - 0x400) + GIC_BASE_IRQ;
404
if (irq >= s->num_irq)
405
goto bad_reg;
406
- res = gic_get_priority(s, cpu, irq, attrs);
407
+ res = gic_dist_get_priority(s, cpu, irq, attrs);
408
} else if (offset < 0xc00) {
409
/* Interrupt CPU Target. */
410
if (s->num_cpu == 1 && s->revision != REV_11MPCORE) {
411
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
412
} else if (irq < GIC_INTERNAL) {
413
res = cm;
414
} else {
415
- res = GIC_TARGET(irq);
416
+ res = GIC_DIST_TARGET(irq);
417
}
418
}
419
} else if (offset < 0xf00) {
420
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
421
res = 0;
422
for (i = 0; i < 4; i++) {
423
if (s->security_extn && !attrs.secure &&
424
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
425
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
426
continue; /* Ignore Non-secure access of Group0 IRQ */
427
}
428
429
- if (GIC_TEST_MODEL(irq + i))
430
+ if (GIC_DIST_TEST_MODEL(irq + i)) {
431
res |= (1 << (i * 2));
432
- if (GIC_TEST_EDGE_TRIGGER(irq + i))
433
+ }
434
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
435
res |= (2 << (i * 2));
436
+ }
437
}
438
} else if (offset < 0xf10) {
439
goto bad_reg;
440
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
441
}
442
443
if (s->security_extn && !attrs.secure &&
444
- !GIC_TEST_GROUP(irq, 1 << cpu)) {
445
+ !GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
446
res = 0; /* Ignore Non-secure access of Group0 IRQ */
447
} else {
448
res = s->sgi_pending[irq][cpu];
449
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
450
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
451
if (value & (1 << i)) {
452
/* Group1 (Non-secure) */
453
- GIC_SET_GROUP(irq + i, cm);
454
+ GIC_DIST_SET_GROUP(irq + i, cm);
455
} else {
456
/* Group0 (Secure) */
457
- GIC_CLEAR_GROUP(irq + i, cm);
458
+ GIC_DIST_CLEAR_GROUP(irq + i, cm);
459
}
460
}
461
}
462
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
463
for (i = 0; i < 8; i++) {
464
if (value & (1 << i)) {
465
int mask =
466
- (irq < GIC_INTERNAL) ? (1 << cpu) : GIC_TARGET(irq + i);
467
+ (irq < GIC_INTERNAL) ? (1 << cpu)
468
+ : GIC_DIST_TARGET(irq + i);
469
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
470
471
if (s->security_extn && !attrs.secure &&
472
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
473
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
474
continue; /* Ignore Non-secure access of Group0 IRQ */
475
}
476
477
- if (!GIC_TEST_ENABLED(irq + i, cm)) {
478
+ if (!GIC_DIST_TEST_ENABLED(irq + i, cm)) {
479
DPRINTF("Enabled IRQ %d\n", irq + i);
480
trace_gic_enable_irq(irq + i);
481
}
482
- GIC_SET_ENABLED(irq + i, cm);
483
+ GIC_DIST_SET_ENABLED(irq + i, cm);
484
/* If a raised level triggered IRQ enabled then mark
485
it as pending. */
486
- if (GIC_TEST_LEVEL(irq + i, mask)
487
- && !GIC_TEST_EDGE_TRIGGER(irq + i)) {
488
+ if (GIC_DIST_TEST_LEVEL(irq + i, mask)
489
+ && !GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
490
DPRINTF("Set %d pending mask %x\n", irq + i, mask);
491
- GIC_SET_PENDING(irq + i, mask);
492
+ GIC_DIST_SET_PENDING(irq + i, mask);
493
}
494
}
495
}
496
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
497
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
498
499
if (s->security_extn && !attrs.secure &&
500
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
501
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
502
continue; /* Ignore Non-secure access of Group0 IRQ */
503
}
504
505
- if (GIC_TEST_ENABLED(irq + i, cm)) {
506
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
507
DPRINTF("Disabled IRQ %d\n", irq + i);
508
trace_gic_disable_irq(irq + i);
509
}
510
- GIC_CLEAR_ENABLED(irq + i, cm);
511
+ GIC_DIST_CLEAR_ENABLED(irq + i, cm);
512
}
513
}
514
} else if (offset < 0x280) {
515
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
516
for (i = 0; i < 8; i++) {
517
if (value & (1 << i)) {
518
if (s->security_extn && !attrs.secure &&
519
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
520
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
521
continue; /* Ignore Non-secure access of Group0 IRQ */
522
}
523
524
- GIC_SET_PENDING(irq + i, GIC_TARGET(irq + i));
525
+ GIC_DIST_SET_PENDING(irq + i, GIC_DIST_TARGET(irq + i));
526
}
527
}
528
} else if (offset < 0x300) {
529
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
530
531
for (i = 0; i < 8; i++) {
532
if (s->security_extn && !attrs.secure &&
533
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
534
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
535
continue; /* Ignore Non-secure access of Group0 IRQ */
536
}
537
538
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
539
for per-CPU interrupts. It's unclear whether this is the
540
correct behavior. */
541
if (value & (1 << i)) {
542
- GIC_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
543
+ GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
544
}
545
}
546
} else if (offset < 0x400) {
547
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
548
irq = (offset - 0x400) + GIC_BASE_IRQ;
549
if (irq >= s->num_irq)
550
goto bad_reg;
551
- gic_set_priority(s, cpu, irq, value, attrs);
552
+ gic_dist_set_priority(s, cpu, irq, value, attrs);
553
} else if (offset < 0xc00) {
554
/* Interrupt CPU Target. RAZ/WI on uniprocessor GICs, with the
555
* annoying exception of the 11MPCore's GIC.
556
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
557
value |= 0xaa;
558
for (i = 0; i < 4; i++) {
559
if (s->security_extn && !attrs.secure &&
560
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
561
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
562
continue; /* Ignore Non-secure access of Group0 IRQ */
563
}
564
565
if (s->revision == REV_11MPCORE) {
566
if (value & (1 << (i * 2))) {
567
- GIC_SET_MODEL(irq + i);
568
+ GIC_DIST_SET_MODEL(irq + i);
569
} else {
570
- GIC_CLEAR_MODEL(irq + i);
571
+ GIC_DIST_CLEAR_MODEL(irq + i);
572
}
573
}
574
if (value & (2 << (i * 2))) {
575
- GIC_SET_EDGE_TRIGGER(irq + i);
576
+ GIC_DIST_SET_EDGE_TRIGGER(irq + i);
577
} else {
578
- GIC_CLEAR_EDGE_TRIGGER(irq + i);
579
+ GIC_DIST_CLEAR_EDGE_TRIGGER(irq + i);
580
}
581
}
582
} else if (offset < 0xf10) {
583
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
584
irq = (offset - 0xf10);
585
586
if (!s->security_extn || attrs.secure ||
587
- GIC_TEST_GROUP(irq, 1 << cpu)) {
588
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
589
s->sgi_pending[irq][cpu] &= ~value;
590
if (s->sgi_pending[irq][cpu] == 0) {
591
- GIC_CLEAR_PENDING(irq, 1 << cpu);
592
+ GIC_DIST_CLEAR_PENDING(irq, 1 << cpu);
593
}
594
}
595
} else if (offset < 0xf30) {
596
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
597
irq = (offset - 0xf20);
598
599
if (!s->security_extn || attrs.secure ||
600
- GIC_TEST_GROUP(irq, 1 << cpu)) {
601
- GIC_SET_PENDING(irq, 1 << cpu);
602
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
603
+ GIC_DIST_SET_PENDING(irq, 1 << cpu);
604
s->sgi_pending[irq][cpu] |= value;
605
}
606
} else {
607
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
608
mask = ALL_CPU_MASK;
609
break;
610
}
611
- GIC_SET_PENDING(irq, mask);
612
+ GIC_DIST_SET_PENDING(irq, mask);
613
target_cpu = ctz32(mask);
614
while (target_cpu < GIC_NCPU) {
615
s->sgi_pending[irq][target_cpu] |= (1 << cpu);
616
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
617
index XXXXXXX..XXXXXXX 100644
618
--- a/hw/intc/arm_gic_common.c
619
+++ b/hw/intc/arm_gic_common.c
620
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
621
}
622
}
623
for (i = 0; i < GIC_NR_SGIS; i++) {
624
- GIC_SET_ENABLED(i, ALL_CPU_MASK);
625
- GIC_SET_EDGE_TRIGGER(i);
626
+ GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
627
+ GIC_DIST_SET_EDGE_TRIGGER(i);
628
}
629
630
for (i = 0; i < ARRAY_SIZE(s->priority2); i++) {
631
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
632
}
633
if (s->security_extn && s->irq_reset_nonsecure) {
634
for (i = 0; i < GIC_MAXIRQ; i++) {
635
- GIC_SET_GROUP(i, ALL_CPU_MASK);
636
+ GIC_DIST_SET_GROUP(i, ALL_CPU_MASK);
637
}
638
}
639
640
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
641
index XXXXXXX..XXXXXXX 100644
642
--- a/hw/intc/arm_gic_kvm.c
643
+++ b/hw/intc/arm_gic_kvm.c
644
@@ -XXX,XX +XXX,XX @@ static void translate_group(GICState *s, int irq, int cpu,
645
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
646
647
if (to_kernel) {
648
- *field = GIC_TEST_GROUP(irq, cm);
649
+ *field = GIC_DIST_TEST_GROUP(irq, cm);
650
} else {
651
if (*field & 1) {
652
- GIC_SET_GROUP(irq, cm);
653
+ GIC_DIST_SET_GROUP(irq, cm);
654
}
655
}
656
}
657
@@ -XXX,XX +XXX,XX @@ static void translate_enabled(GICState *s, int irq, int cpu,
658
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
659
660
if (to_kernel) {
661
- *field = GIC_TEST_ENABLED(irq, cm);
662
+ *field = GIC_DIST_TEST_ENABLED(irq, cm);
663
} else {
664
if (*field & 1) {
665
- GIC_SET_ENABLED(irq, cm);
666
+ GIC_DIST_SET_ENABLED(irq, cm);
667
}
668
}
669
}
670
@@ -XXX,XX +XXX,XX @@ static void translate_pending(GICState *s, int irq, int cpu,
671
*field = gic_test_pending(s, irq, cm);
672
} else {
673
if (*field & 1) {
674
- GIC_SET_PENDING(irq, cm);
675
+ GIC_DIST_SET_PENDING(irq, cm);
676
/* TODO: Capture if level-line is held high in the kernel */
677
}
678
}
679
@@ -XXX,XX +XXX,XX @@ static void translate_active(GICState *s, int irq, int cpu,
680
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
681
682
if (to_kernel) {
683
- *field = GIC_TEST_ACTIVE(irq, cm);
684
+ *field = GIC_DIST_TEST_ACTIVE(irq, cm);
685
} else {
686
if (*field & 1) {
687
- GIC_SET_ACTIVE(irq, cm);
688
+ GIC_DIST_SET_ACTIVE(irq, cm);
689
}
690
}
691
}
692
@@ -XXX,XX +XXX,XX @@ static void translate_trigger(GICState *s, int irq, int cpu,
693
uint32_t *field, bool to_kernel)
694
{
695
if (to_kernel) {
696
- *field = (GIC_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
697
+ *field = (GIC_DIST_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
698
} else {
699
if (*field & 0x2) {
700
- GIC_SET_EDGE_TRIGGER(irq);
701
+ GIC_DIST_SET_EDGE_TRIGGER(irq);
702
}
703
}
704
}
705
@@ -XXX,XX +XXX,XX @@ static void translate_priority(GICState *s, int irq, int cpu,
706
uint32_t *field, bool to_kernel)
707
{
708
if (to_kernel) {
709
- *field = GIC_GET_PRIORITY(irq, cpu) & 0xff;
710
+ *field = GIC_DIST_GET_PRIORITY(irq, cpu) & 0xff;
711
} else {
712
- gic_set_priority(s, cpu, irq, *field & 0xff, MEMTXATTRS_UNSPECIFIED);
713
+ gic_dist_set_priority(s, cpu, irq,
714
+ *field & 0xff, MEMTXATTRS_UNSPECIFIED);
715
}
716
}
41
717
42
--
718
--
43
2.16.2
719
2.18.0
44
720
45
721
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Not enabled anywhere yet.
3
Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers in the GICv2.
4
Those registers allow setting or clearing the active state of an IRQ in the
5
distributor.
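
As a rough illustration of the guest-visible semantics (not part of this patch;
the offsets are the architected GICv2 ones and gicd_base is an assumed pointer
to the distributor MMIO region), a bare-metal guest could set or clear an
interrupt's active bit like this:

    /* GICD_ISACTIVERn live at 0x300 + 4*n, GICD_ICACTIVERn at 0x380 + 4*n;
     * writing a 1 bit sets/clears the active state, writing 0 has no effect. */
    static void gicd_set_active(volatile uint32_t *gicd_base, int irq)
    {
        gicd_base[(0x300 / 4) + (irq / 32)] = 1u << (irq % 32);
    }

    static void gicd_clear_active(volatile uint32_t *gicd_base, int irq)
    {
        gicd_base[(0x380 / 4) + (irq / 32)] = 1u << (irq % 32);
    }
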
4
6
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180228193125.20577-11-richard.henderson@linaro.org
9
Message-id: 20180727095421.386-3-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/cpu.h | 1 +
12
hw/intc/arm_gic.c | 61 +++++++++++++++++++++++++++++++++++++++++++----
11
linux-user/elfload.c | 1 +
13
1 file changed, 57 insertions(+), 4 deletions(-)
12
2 files changed, 2 insertions(+)
13
14
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
17
--- a/hw/intc/arm_gic.c
17
+++ b/target/arm/cpu.h
18
+++ b/hw/intc/arm_gic.c
18
@@ -XXX,XX +XXX,XX @@ enum arm_features {
19
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
19
ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
20
}
20
ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
21
}
21
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
22
} else if (offset < 0x400) {
22
+ ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
23
- /* Interrupt Active. */
23
};
24
- irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
24
25
+ /* Interrupt Set/Clear Active. */
25
static inline int arm_feature(CPUARMState *env, int feature)
26
+ if (offset < 0x380) {
26
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
27
+ irq = (offset - 0x300) * 8;
27
index XXXXXXX..XXXXXXX 100644
28
+ } else if (s->revision == 2) {
28
--- a/linux-user/elfload.c
29
+ irq = (offset - 0x380) * 8;
29
+++ b/linux-user/elfload.c
30
+ } else {
30
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
31
+ goto bad_reg;
31
GET_FEATURE(ARM_FEATURE_V8_FP16,
32
+ }
32
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
33
+
33
GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
34
+ irq += GIC_BASE_IRQ;
34
+ GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
35
if (irq >= s->num_irq)
35
#undef GET_FEATURE
36
goto bad_reg;
36
37
res = 0;
37
return hwcaps;
38
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
39
GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
40
}
41
}
42
+ } else if (offset < 0x380) {
43
+ /* Interrupt Set Active. */
44
+ if (s->revision != 2) {
45
+ goto bad_reg;
46
+ }
47
+
48
+ irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
49
+ if (irq >= s->num_irq) {
50
+ goto bad_reg;
51
+ }
52
+
53
+ /* This register is banked per-cpu for PPIs */
54
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
55
+
56
+ for (i = 0; i < 8; i++) {
57
+ if (s->security_extn && !attrs.secure &&
58
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
59
+ continue; /* Ignore Non-secure access of Group0 IRQ */
60
+ }
61
+
62
+ if (value & (1 << i)) {
63
+ GIC_DIST_SET_ACTIVE(irq + i, cm);
64
+ }
65
+ }
66
} else if (offset < 0x400) {
67
- /* Interrupt Active. */
68
- goto bad_reg;
69
+ /* Interrupt Clear Active. */
70
+ if (s->revision != 2) {
71
+ goto bad_reg;
72
+ }
73
+
74
+ irq = (offset - 0x380) * 8 + GIC_BASE_IRQ;
75
+ if (irq >= s->num_irq) {
76
+ goto bad_reg;
77
+ }
78
+
79
+ /* This register is banked per-cpu for PPIs */
80
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
81
+
82
+ for (i = 0; i < 8; i++) {
83
+ if (s->security_extn && !attrs.secure &&
84
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
85
+ continue; /* Ignore Non-secure access of Group0 IRQ */
86
+ }
87
+
88
+ if (value & (1 << i)) {
89
+ GIC_DIST_CLEAR_ACTIVE(irq + i, cm);
90
+ }
91
+ }
92
} else if (offset < 0x800) {
93
/* Interrupt Priority. */
94
irq = (offset - 0x400) + GIC_BASE_IRQ;
38
--
95
--
39
2.16.2
96
2.18.0
40
97
41
98
diff view generated by jsdifflib
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Allow the guest to determine the time set from the QEMU command line.
3
Some functions are now only used in arm_gic.c, put them static. Some of
4
them where only used by the NVIC implementation and are not used
5
anymore, so remove them.
4
6
5
This includes adding a trace event to debug the new time.
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
7
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20180727095421.386-4-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
---
12
include/hw/timer/xlnx-zynqmp-rtc.h | 2 ++
13
hw/intc/gic_internal.h | 4 ----
13
hw/timer/xlnx-zynqmp-rtc.c | 58 ++++++++++++++++++++++++++++++++++++++
14
hw/intc/arm_gic.c | 23 ++---------------------
14
hw/timer/trace-events | 3 ++
15
2 files changed, 2 insertions(+), 25 deletions(-)
15
3 files changed, 63 insertions(+)
16
16
17
diff --git a/include/hw/timer/xlnx-zynqmp-rtc.h b/include/hw/timer/xlnx-zynqmp-rtc.h
17
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/timer/xlnx-zynqmp-rtc.h
19
--- a/hw/intc/gic_internal.h
20
+++ b/include/hw/timer/xlnx-zynqmp-rtc.h
20
+++ b/hw/intc/gic_internal.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPRTC {
21
@@ -XXX,XX +XXX,XX @@
22
qemu_irq irq_rtc_int;
22
/* The special cases for the revision property: */
23
qemu_irq irq_addr_error_int;
23
#define REV_11MPCORE 0
24
24
25
+ uint32_t tick_offset;
25
-void gic_set_pending_private(GICState *s, int cpu, int irq);
26
+
26
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
27
uint32_t regs[XLNX_ZYNQMP_RTC_R_MAX];
27
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
28
RegisterInfo regs_info[XLNX_ZYNQMP_RTC_R_MAX];
28
-void gic_update(GICState *s);
29
} XlnxZynqMPRTC;
29
-void gic_init_irqs_and_distributor(GICState *s);
30
diff --git a/hw/timer/xlnx-zynqmp-rtc.c b/hw/timer/xlnx-zynqmp-rtc.c
30
void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
31
MemTxAttrs attrs);
32
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
31
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/timer/xlnx-zynqmp-rtc.c
35
--- a/hw/intc/arm_gic.c
33
+++ b/hw/timer/xlnx-zynqmp-rtc.c
36
+++ b/hw/intc/arm_gic.c
34
@@ -XXX,XX +XXX,XX @@
37
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
35
#include "hw/register.h"
38
36
#include "qemu/bitops.h"
39
/* TODO: Many places that call this routine could be optimized. */
37
#include "qemu/log.h"
40
/* Update interrupt status after enabled or pending bits have been changed. */
38
+#include "hw/ptimer.h"
41
-void gic_update(GICState *s)
39
+#include "qemu/cutils.h"
42
+static void gic_update(GICState *s)
40
+#include "sysemu/sysemu.h"
43
{
41
+#include "trace.h"
44
int best_irq;
42
#include "hw/timer/xlnx-zynqmp-rtc.h"
45
int best_prio;
43
46
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
44
#ifndef XLNX_ZYNQMP_RTC_ERR_DEBUG
47
}
45
@@ -XXX,XX +XXX,XX @@ static void addr_error_int_update_irq(XlnxZynqMPRTC *s)
46
qemu_set_irq(s->irq_addr_error_int, pending);
47
}
48
}
48
49
49
+static uint32_t rtc_get_count(XlnxZynqMPRTC *s)
50
-void gic_set_pending_private(GICState *s, int cpu, int irq)
50
+{
51
-{
51
+ int64_t now = qemu_clock_get_ns(rtc_clock);
52
- int cm = 1 << cpu;
52
+ return s->tick_offset + now / NANOSECONDS_PER_SECOND;
53
-
53
+}
54
- if (gic_test_pending(s, irq, cm)) {
54
+
55
- return;
55
+static uint64_t current_time_postr(RegisterInfo *reg, uint64_t val64)
56
- }
56
+{
57
-
57
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
58
- DPRINTF("Set %d pending cpu %d\n", irq, cpu);
58
+
59
- GIC_DIST_SET_PENDING(irq, cm);
59
+ return rtc_get_count(s);
60
- gic_update(s);
60
+}
61
-}
61
+
62
-
62
static void rtc_int_status_postw(RegisterInfo *reg, uint64_t val64)
63
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
64
int cm, int target)
63
{
65
{
64
XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
66
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
65
@@ -XXX,XX +XXX,XX @@ static uint64_t addr_error_int_dis_prew(RegisterInfo *reg, uint64_t val64)
67
GIC_DIST_CLEAR_ACTIVE(irq, cm);
66
67
static const RegisterAccessInfo rtc_regs_info[] = {
68
{ .name = "SET_TIME_WRITE", .addr = A_SET_TIME_WRITE,
69
+ .unimp = MAKE_64BIT_MASK(0, 32),
70
},{ .name = "SET_TIME_READ", .addr = A_SET_TIME_READ,
71
.ro = 0xffffffff,
72
+ .post_read = current_time_postr,
73
},{ .name = "CALIB_WRITE", .addr = A_CALIB_WRITE,
74
+ .unimp = MAKE_64BIT_MASK(0, 32),
75
},{ .name = "CALIB_READ", .addr = A_CALIB_READ,
76
.ro = 0x1fffff,
77
},{ .name = "CURRENT_TIME", .addr = A_CURRENT_TIME,
78
.ro = 0xffffffff,
79
+ .post_read = current_time_postr,
80
},{ .name = "CURRENT_TICK", .addr = A_CURRENT_TICK,
81
.ro = 0xffff,
82
},{ .name = "ALARM", .addr = A_ALARM,
83
@@ -XXX,XX +XXX,XX @@ static void rtc_init(Object *obj)
84
XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(obj);
85
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
86
RegisterInfoArray *reg_array;
87
+ struct tm current_tm;
88
89
memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_RTC,
90
XLNX_ZYNQMP_RTC_R_MAX * 4);
91
@@ -XXX,XX +XXX,XX @@ static void rtc_init(Object *obj)
92
sysbus_init_mmio(sbd, &s->iomem);
93
sysbus_init_irq(sbd, &s->irq_rtc_int);
94
sysbus_init_irq(sbd, &s->irq_addr_error_int);
95
+
96
+ qemu_get_timedate(&current_tm, 0);
97
+ s->tick_offset = mktimegm(&current_tm) -
98
+ qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
99
+
100
+ trace_xlnx_zynqmp_rtc_gettime(current_tm.tm_year, current_tm.tm_mon,
101
+ current_tm.tm_mday, current_tm.tm_hour,
102
+ current_tm.tm_min, current_tm.tm_sec);
103
+}
104
+
105
+static int rtc_pre_save(void *opaque)
106
+{
107
+ XlnxZynqMPRTC *s = opaque;
108
+ int64_t now = qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
109
+
110
+ /* Add the time at migration */
111
+ s->tick_offset = s->tick_offset + now;
112
+
113
+ return 0;
114
+}
115
+
116
+static int rtc_post_load(void *opaque, int version_id)
117
+{
118
+ XlnxZynqMPRTC *s = opaque;
119
+ int64_t now = qemu_clock_get_ns(rtc_clock) / NANOSECONDS_PER_SECOND;
120
+
121
+ /* Subtract the time after migration. This combined with the pre_save
122
+ * action results in us having subtracted the time that the guest was
123
+ * stopped to the offset.
124
+ */
125
+ s->tick_offset = s->tick_offset - now;
126
+
127
+ return 0;
128
}
68
}
129
69
130
static const VMStateDescription vmstate_rtc = {
70
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
131
.name = TYPE_XLNX_ZYNQMP_RTC,
71
+static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
132
.version_id = 1,
72
{
133
.minimum_version_id = 1,
73
int cm = 1 << cpu;
134
+ .pre_save = rtc_pre_save,
74
int group;
135
+ .post_load = rtc_post_load,
75
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
136
.fields = (VMStateField[]) {
76
.endianness = DEVICE_NATIVE_ENDIAN,
137
VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPRTC, XLNX_ZYNQMP_RTC_R_MAX),
138
+ VMSTATE_UINT32(tick_offset, XlnxZynqMPRTC),
139
VMSTATE_END_OF_LIST(),
140
}
141
};
77
};
142
diff --git a/hw/timer/trace-events b/hw/timer/trace-events
78
143
index XXXXXXX..XXXXXXX 100644
79
-/* This function is used by nvic model */
144
--- a/hw/timer/trace-events
80
-void gic_init_irqs_and_distributor(GICState *s)
145
+++ b/hw/timer/trace-events
81
-{
146
@@ -XXX,XX +XXX,XX @@ systick_write(uint64_t addr, uint32_t value, unsigned size) "systick write addr
82
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
147
cmsdk_apb_timer_read(uint64_t offset, uint64_t data, unsigned size) "CMSDK APB timer read: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
83
-}
148
cmsdk_apb_timer_write(uint64_t offset, uint64_t data, unsigned size) "CMSDK APB timer write: offset 0x%" PRIx64 " data 0x%" PRIx64 " size %u"
84
-
149
cmsdk_apb_timer_reset(void) "CMSDK APB timer: reset"
85
static void arm_gic_realize(DeviceState *dev, Error **errp)
150
+
86
{
151
+# hw/timer/xlnx-zynqmp-rtc.c
87
/* Device instance realize function for the GIC sysbus device */
152
+xlnx_zynqmp_rtc_gettime(int year, int month, int day, int hour, int min, int sec) "Get time from host: %d-%d-%d %2d:%02d:%02d"
153
--
88
--
154
2.16.2
89
2.18.0
155
90
156
91
diff view generated by jsdifflib
New patch
1
From: Luc Michel <luc.michel@greensocs.com>
1
2
3
Provide a VMSTATE_UINT16_SUB_ARRAY macro to save a uint16_t sub-array in
4
a VMState.
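
For context, the macro takes the field name, the containing struct, the start
index and the element count; the GIC virtualization patch later in this series
uses it like this to migrate only the vCPU half of a per-CPU array:

    VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, GIC_NCPU, GIC_NCPU),
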
5
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20180727095421.386-5-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/migration/vmstate.h | 3 +++
13
1 file changed, 3 insertions(+)
14
15
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/migration/vmstate.h
18
+++ b/include/migration/vmstate.h
19
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
20
#define VMSTATE_UINT16_ARRAY(_f, _s, _n) \
21
VMSTATE_UINT16_ARRAY_V(_f, _s, _n, 0)
22
23
+#define VMSTATE_UINT16_SUB_ARRAY(_f, _s, _start, _num) \
24
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_uint16, uint16_t)
25
+
26
#define VMSTATE_UINT16_2DARRAY(_f, _s, _n1, _n2) \
27
VMSTATE_UINT16_2DARRAY_V(_f, _s, _n1, _n2, 0)
28
29
--
30
2.18.0
31
32
diff view generated by jsdifflib
1
The IoTKit Security Controller includes various registers
1
From: Luc Michel <luc.michel@greensocs.com>
2
that expose to software the controls for the Peripheral
2
3
Protection Controllers in the system. Implement these.
3
Add the necessary parts of the virtualization extensions state to the
4
4
GIC state. We choose to increase the size of the CPU interfaces state to
5
add space for the vCPU interfaces (the GIC_NCPU_VCPU macro). This way,
6
we'll be able to reuse most of the CPU interface code for the vCPUs.
7
8
The only exception is the APR value, which is stored in h_apr in the
9
virtual interface state for vCPUs. This is due to some complications
10
with the GIC VMState, for which we don't want to break backward
11
compatibility. APRs being stored in 2D arrays, increasing the second
12
dimension would lead to some ugly VMState description. To avoid
13
that, we keep it in h_apr for vCPUs.
14
15
The vCPUs are numbered from GIC_NCPU to (GIC_NCPU * 2) - 1. The
16
`gic_is_vcpu` function help to determine if a given CPU id correspond to
17
a physical CPU or a virtual one.
18
19
For the in-kernel KVM VGIC, since the exposed VGIC does not implement
20
the virtualization extensions, we report an error if the corresponding
21
property is set to true.
22
23
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Message-id: 20180727095421.386-6-luc.michel@greensocs.com
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180220180325.29818-17-peter.maydell@linaro.org
8
---
27
---
9
include/hw/misc/iotkit-secctl.h | 64 +++++++++-
28
hw/intc/gic_internal.h | 5 ++
10
hw/misc/iotkit-secctl.c | 270 +++++++++++++++++++++++++++++++++++++---
29
include/hw/intc/arm_gic_common.h | 43 +++++++--
11
2 files changed, 315 insertions(+), 19 deletions(-)
30
hw/intc/arm_gic.c | 2 +-
12
31
hw/intc/arm_gic_common.c | 148 ++++++++++++++++++++++++++-----
13
diff --git a/include/hw/misc/iotkit-secctl.h b/include/hw/misc/iotkit-secctl.h
32
hw/intc/arm_gic_kvm.c | 8 +-
33
5 files changed, 173 insertions(+), 33 deletions(-)
34
35
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/misc/iotkit-secctl.h
37
--- a/hw/intc/gic_internal.h
16
+++ b/include/hw/misc/iotkit-secctl.h
38
+++ b/hw/intc/gic_internal.h
39
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
40
}
41
}
42
43
+static inline bool gic_is_vcpu(int cpu)
44
+{
45
+ return cpu >= GIC_NCPU;
46
+}
47
+
48
#endif /* QEMU_ARM_GIC_INTERNAL_H */
49
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
50
index XXXXXXX..XXXXXXX 100644
51
--- a/include/hw/intc/arm_gic_common.h
52
+++ b/include/hw/intc/arm_gic_common.h
17
@@ -XXX,XX +XXX,XX @@
53
@@ -XXX,XX +XXX,XX @@
18
* QEMU interface:
54
#define GIC_NR_SGIS 16
19
* + sysbus MMIO region 0 is the "secure privilege control block" registers
55
/* Maximum number of possible CPU interfaces, determined by GIC architecture */
20
* + sysbus MMIO region 1 is the "non-secure privilege control block" registers
56
#define GIC_NCPU 8
21
+ * + named GPIO output "sec_resp_cfg" indicating whether blocked accesses
57
+/* Maximum number of possible CPU interfaces with their respective vCPU */
22
+ * should RAZ/WI or bus error
58
+#define GIC_NCPU_VCPU (GIC_NCPU * 2)
23
+ * Controlling the 2 APB PPCs in the IoTKit:
59
24
+ * + named GPIO outputs apb_ppc0_nonsec[0..2] and apb_ppc1_nonsec
60
#define MAX_NR_GROUP_PRIO 128
25
+ * + named GPIO outputs apb_ppc0_ap[0..2] and apb_ppc1_ap
61
#define GIC_NR_APRS (MAX_NR_GROUP_PRIO / 32)
26
+ * + named GPIO outputs apb_ppc{0,1}_irq_enable
27
+ * + named GPIO outputs apb_ppc{0,1}_irq_clear
28
+ * + named GPIO inputs apb_ppc{0,1}_irq_status
29
+ * Controlling each of the 4 expansion APB PPCs which a system using the IoTKit
30
+ * might provide:
31
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_nonsec[0..15]
32
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_ap[0..15]
33
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_irq_enable
34
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_irq_clear
35
+ * + named GPIO inputs apb_ppcexp{0,1,2,3}_irq_status
36
+ * Controlling each of the 4 expansion AHB PPCs which a system using the IoTKit
37
+ * might provide:
38
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_nonsec[0..15]
39
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_ap[0..15]
40
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_irq_enable
41
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_irq_clear
42
+ * + named GPIO inputs ahb_ppcexp{0,1,2,3}_irq_status
43
*/
44
45
#ifndef IOTKIT_SECCTL_H
46
@@ -XXX,XX +XXX,XX @@
62
@@ -XXX,XX +XXX,XX @@
47
#define TYPE_IOTKIT_SECCTL "iotkit-secctl"
63
#define GIC_MIN_BPR 0
48
#define IOTKIT_SECCTL(obj) OBJECT_CHECK(IoTKitSecCtl, (obj), TYPE_IOTKIT_SECCTL)
64
#define GIC_MIN_ABPR (GIC_MIN_BPR + 1)
49
65
50
-typedef struct IoTKitSecCtl {
66
+/* Architectural maximum number of list registers in the virtual interface */
51
+#define IOTS_APB_PPC0_NUM_PORTS 3
67
+#define GIC_MAX_LR 64
52
+#define IOTS_APB_PPC1_NUM_PORTS 1
68
+
53
+#define IOTS_PPC_NUM_PORTS 16
69
+/* Only 32 priority levels and 32 preemption levels in the vCPU interfaces */
54
+#define IOTS_NUM_APB_PPC 2
70
+#define GIC_VIRT_MAX_GROUP_PRIO_BITS 5
55
+#define IOTS_NUM_APB_EXP_PPC 4
71
+#define GIC_VIRT_MAX_NR_GROUP_PRIO (1 << GIC_VIRT_MAX_GROUP_PRIO_BITS)
56
+#define IOTS_NUM_AHB_EXP_PPC 4
72
+#define GIC_VIRT_NR_APRS (GIC_VIRT_MAX_NR_GROUP_PRIO / 32)
57
+
73
+
58
+typedef struct IoTKitSecCtl IoTKitSecCtl;
74
+#define GIC_VIRT_MIN_BPR 2
59
+
75
+#define GIC_VIRT_MIN_ABPR (GIC_VIRT_MIN_BPR + 1)
60
+/* State and IRQ lines relating to a PPC. For the
76
+
61
+ * PPCs in the IoTKit not all the IRQ lines are used.
77
typedef struct gic_irq_state {
62
+ */
78
/* The enable bits are only banked for per-cpu interrupts. */
63
+typedef struct IoTKitSecCtlPPC {
79
uint8_t enabled;
64
+ qemu_irq nonsec[IOTS_PPC_NUM_PORTS];
80
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
65
+ qemu_irq ap[IOTS_PPC_NUM_PORTS];
81
qemu_irq parent_fiq[GIC_NCPU];
66
+ qemu_irq irq_enable;
82
qemu_irq parent_virq[GIC_NCPU];
67
+ qemu_irq irq_clear;
83
qemu_irq parent_vfiq[GIC_NCPU];
68
+
84
+ qemu_irq maintenance_irq[GIC_NCPU];
69
+ uint32_t ns;
85
+
70
+ uint32_t sp;
86
/* GICD_CTLR; for a GIC with the security extensions the NS banked version
71
+ uint32_t nsp;
87
* of this register is just an alias of bit 1 of the S banked version.
72
+
88
*/
73
+ /* Number of ports actually present */
89
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
74
+ int numports;
90
/* GICC_CTLR; again, the NS banked version is just aliases of bits of
75
+ /* Offset of this PPC's interrupt bits in SECPPCINTSTAT */
91
* the S banked register, so our state only needs to store the S version.
76
+ int irq_bit_offset;
92
*/
77
+ IoTKitSecCtl *parent;
93
- uint32_t cpu_ctlr[GIC_NCPU];
78
+} IoTKitSecCtlPPC;
94
+ uint32_t cpu_ctlr[GIC_NCPU_VCPU];
79
+
95
80
+struct IoTKitSecCtl {
96
gic_irq_state irq_state[GIC_MAXIRQ];
81
/*< private >*/
97
uint8_t irq_target[GIC_MAXIRQ];
82
SysBusDevice parent_obj;
98
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
83
99
*/
84
/*< public >*/
100
uint8_t sgi_pending[GIC_NR_SGIS][GIC_NCPU];
85
+ qemu_irq sec_resp_cfg;
101
86
102
- uint16_t priority_mask[GIC_NCPU];
87
MemoryRegion s_regs;
103
- uint16_t running_priority[GIC_NCPU];
88
MemoryRegion ns_regs;
104
- uint16_t current_pending[GIC_NCPU];
89
-} IoTKitSecCtl;
105
+ uint16_t priority_mask[GIC_NCPU_VCPU];
90
+
106
+ uint16_t running_priority[GIC_NCPU_VCPU];
91
+ uint32_t secppcintstat;
107
+ uint16_t current_pending[GIC_NCPU_VCPU];
92
+ uint32_t secppcinten;
108
93
+ uint32_t secrespcfg;
109
/* If we present the GICv2 without security extensions to a guest,
94
+
110
* the guest can configure the GICC_CTLR to configure group 1 binary point
95
+ IoTKitSecCtlPPC apb[IOTS_NUM_APB_PPC];
111
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
96
+ IoTKitSecCtlPPC apbexp[IOTS_NUM_APB_EXP_PPC];
112
* For a GIC with Security Extensions we use use bpr for the
97
+ IoTKitSecCtlPPC ahbexp[IOTS_NUM_APB_EXP_PPC];
113
* secure copy and abpr as storage for the non-secure copy of the register.
98
+};
114
*/
115
- uint8_t bpr[GIC_NCPU];
116
- uint8_t abpr[GIC_NCPU];
117
+ uint8_t bpr[GIC_NCPU_VCPU];
118
+ uint8_t abpr[GIC_NCPU_VCPU];
119
120
/* The APR is implementation defined, so we choose a layout identical to
121
* the KVM ABI layout for QEMU's implementation of the gic:
122
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
123
uint32_t apr[GIC_NR_APRS][GIC_NCPU];
124
uint32_t nsapr[GIC_NR_APRS][GIC_NCPU];
125
126
+ /* Virtual interface control registers */
127
+ uint32_t h_hcr[GIC_NCPU];
128
+ uint32_t h_misr[GIC_NCPU];
129
+ uint32_t h_lr[GIC_MAX_LR][GIC_NCPU];
130
+ uint32_t h_apr[GIC_NCPU];
131
+
132
+ /* Number of LRs implemented in this GIC instance */
133
+ uint32_t num_lrs;
134
+
135
uint32_t num_cpu;
136
137
MemoryRegion iomem; /* Distributor */
138
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
139
*/
140
struct GICState *backref[GIC_NCPU];
141
MemoryRegion cpuiomem[GIC_NCPU + 1]; /* CPU interfaces */
142
+ MemoryRegion vifaceiomem[GIC_NCPU + 1]; /* Virtual interfaces */
143
+ MemoryRegion vcpuiomem; /* vCPU interface */
144
+
145
uint32_t num_irq;
146
uint32_t revision;
147
bool security_extn;
148
+ bool virt_extn;
149
bool irq_reset_nonsecure; /* configure IRQs as group 1 (NS) on reset? */
150
int dev_fd; /* kvm device fd if backed by kvm vgic support */
151
Error *migration_blocker;
152
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGICCommonClass {
153
} ARMGICCommonClass;
154
155
void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
156
- const MemoryRegionOps *ops);
157
+ const MemoryRegionOps *ops,
158
+ const MemoryRegionOps *virt_ops);
99
159
100
#endif
160
#endif
101
diff --git a/hw/misc/iotkit-secctl.c b/hw/misc/iotkit-secctl.c
161
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
102
index XXXXXXX..XXXXXXX 100644
162
index XXXXXXX..XXXXXXX 100644
103
--- a/hw/misc/iotkit-secctl.c
163
--- a/hw/intc/arm_gic.c
104
+++ b/hw/misc/iotkit-secctl.c
164
+++ b/hw/intc/arm_gic.c
105
@@ -XXX,XX +XXX,XX @@ static const uint8_t iotkit_secctl_ns_idregs[] = {
165
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
106
0x0d, 0xf0, 0x05, 0xb1,
166
}
167
168
/* This creates distributor and main CPU interface (s->cpuiomem[0]) */
169
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
170
+ gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
171
172
/* Extra core-specific regions for the CPU interfaces. This is
173
* necessary for "franken-GIC" implementations, for example on
174
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/hw/intc/arm_gic_common.c
177
+++ b/hw/intc/arm_gic_common.c
178
@@ -XXX,XX +XXX,XX @@ static int gic_post_load(void *opaque, int version_id)
179
return 0;
180
}
181
182
+static bool gic_virt_state_needed(void *opaque)
183
+{
184
+ GICState *s = (GICState *)opaque;
185
+
186
+ return s->virt_extn;
187
+}
188
+
189
static const VMStateDescription vmstate_gic_irq_state = {
190
.name = "arm_gic_irq_state",
191
.version_id = 1,
192
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic_irq_state = {
193
}
107
};
194
};
108
195
109
+/* The register sets for the various PPCs (AHB internal, APB internal,
196
+static const VMStateDescription vmstate_gic_virt_state = {
110
+ * AHB expansion, APB expansion) are all set up so that they are
197
+ .name = "arm_gic_virt_state",
111
+ * in 16-aligned blocks so offsets 0xN0, 0xN4, 0xN8, 0xNC are PPCs
112
+ * 0, 1, 2, 3 of that type, so we can convert a register address offset
113
+ * into an index into a PPC array easily.
114
+ */
115
+static inline int offset_to_ppc_idx(uint32_t offset)
116
+{
117
+ return extract32(offset, 2, 2);
118
+}
119
+
120
+typedef void PerPPCFunction(IoTKitSecCtlPPC *ppc);
121
+
122
+static void foreach_ppc(IoTKitSecCtl *s, PerPPCFunction *fn)
123
+{
124
+ int i;
125
+
126
+ for (i = 0; i < IOTS_NUM_APB_PPC; i++) {
127
+ fn(&s->apb[i]);
128
+ }
129
+ for (i = 0; i < IOTS_NUM_APB_EXP_PPC; i++) {
130
+ fn(&s->apbexp[i]);
131
+ }
132
+ for (i = 0; i < IOTS_NUM_AHB_EXP_PPC; i++) {
133
+ fn(&s->ahbexp[i]);
134
+ }
135
+}
136
+
137
static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
138
uint64_t *pdata,
139
unsigned size, MemTxAttrs attrs)
140
{
141
uint64_t r;
142
uint32_t offset = addr & ~0x3;
143
+ IoTKitSecCtl *s = IOTKIT_SECCTL(opaque);
144
145
switch (offset) {
146
case A_AHBNSPPC0:
147
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
148
r = 0;
149
break;
150
case A_SECRESPCFG:
151
- case A_NSCCFG:
152
- case A_SECMPCINTSTATUS:
153
+ r = s->secrespcfg;
154
+ break;
155
case A_SECPPCINTSTAT:
156
+ r = s->secppcintstat;
157
+ break;
158
case A_SECPPCINTEN:
159
- case A_SECMSCINTSTAT:
160
- case A_SECMSCINTEN:
161
- case A_BRGINTSTAT:
162
- case A_BRGINTEN:
163
+ r = s->secppcinten;
164
+ break;
165
case A_AHBNSPPCEXP0:
166
case A_AHBNSPPCEXP1:
167
case A_AHBNSPPCEXP2:
168
case A_AHBNSPPCEXP3:
169
+ r = s->ahbexp[offset_to_ppc_idx(offset)].ns;
170
+ break;
171
case A_APBNSPPC0:
172
case A_APBNSPPC1:
173
+ r = s->apb[offset_to_ppc_idx(offset)].ns;
174
+ break;
175
case A_APBNSPPCEXP0:
176
case A_APBNSPPCEXP1:
177
case A_APBNSPPCEXP2:
178
case A_APBNSPPCEXP3:
179
+ r = s->apbexp[offset_to_ppc_idx(offset)].ns;
180
+ break;
181
case A_AHBSPPPCEXP0:
182
case A_AHBSPPPCEXP1:
183
case A_AHBSPPPCEXP2:
184
case A_AHBSPPPCEXP3:
185
+ r = s->apbexp[offset_to_ppc_idx(offset)].sp;
186
+ break;
187
case A_APBSPPPC0:
188
case A_APBSPPPC1:
189
+ r = s->apb[offset_to_ppc_idx(offset)].sp;
190
+ break;
191
case A_APBSPPPCEXP0:
192
case A_APBSPPPCEXP1:
193
case A_APBSPPPCEXP2:
194
case A_APBSPPPCEXP3:
195
+ r = s->apbexp[offset_to_ppc_idx(offset)].sp;
196
+ break;
197
+ case A_NSCCFG:
198
+ case A_SECMPCINTSTATUS:
199
+ case A_SECMSCINTSTAT:
200
+ case A_SECMSCINTEN:
201
+ case A_BRGINTSTAT:
202
+ case A_BRGINTEN:
203
case A_NSMSCEXP:
204
qemu_log_mask(LOG_UNIMP,
205
"IoTKit SecCtl S block read: "
206
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
207
return MEMTX_OK;
208
}
209
210
+static void iotkit_secctl_update_ppc_ap(IoTKitSecCtlPPC *ppc)
211
+{
212
+ int i;
213
+
214
+ for (i = 0; i < ppc->numports; i++) {
215
+ bool v;
216
+
217
+ if (extract32(ppc->ns, i, 1)) {
218
+ v = extract32(ppc->nsp, i, 1);
219
+ } else {
220
+ v = extract32(ppc->sp, i, 1);
221
+ }
222
+ qemu_set_irq(ppc->ap[i], v);
223
+ }
224
+}
225
+
226
+static void iotkit_secctl_ppc_ns_write(IoTKitSecCtlPPC *ppc, uint32_t value)
227
+{
228
+ int i;
229
+
230
+ ppc->ns = value & MAKE_64BIT_MASK(0, ppc->numports);
231
+ for (i = 0; i < ppc->numports; i++) {
232
+ qemu_set_irq(ppc->nonsec[i], extract32(ppc->ns, i, 1));
233
+ }
234
+ iotkit_secctl_update_ppc_ap(ppc);
235
+}
236
+
237
+static void iotkit_secctl_ppc_sp_write(IoTKitSecCtlPPC *ppc, uint32_t value)
238
+{
239
+ ppc->sp = value & MAKE_64BIT_MASK(0, ppc->numports);
240
+ iotkit_secctl_update_ppc_ap(ppc);
241
+}
242
+
243
+static void iotkit_secctl_ppc_nsp_write(IoTKitSecCtlPPC *ppc, uint32_t value)
244
+{
245
+ ppc->nsp = value & MAKE_64BIT_MASK(0, ppc->numports);
246
+ iotkit_secctl_update_ppc_ap(ppc);
247
+}
248
+
249
+static void iotkit_secctl_ppc_update_irq_clear(IoTKitSecCtlPPC *ppc)
250
+{
251
+ uint32_t value = ppc->parent->secppcintstat;
252
+
253
+ qemu_set_irq(ppc->irq_clear, extract32(value, ppc->irq_bit_offset, 1));
254
+}
255
+
256
+static void iotkit_secctl_ppc_update_irq_enable(IoTKitSecCtlPPC *ppc)
257
+{
258
+ uint32_t value = ppc->parent->secppcinten;
259
+
260
+ qemu_set_irq(ppc->irq_enable, extract32(value, ppc->irq_bit_offset, 1));
261
+}
262
+
263
static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
264
uint64_t value,
265
unsigned size, MemTxAttrs attrs)
266
{
267
+ IoTKitSecCtl *s = IOTKIT_SECCTL(opaque);
268
uint32_t offset = addr;
269
+ IoTKitSecCtlPPC *ppc;
270
271
trace_iotkit_secctl_s_write(offset, value, size);
272
273
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
274
275
switch (offset) {
276
case A_SECRESPCFG:
277
- case A_NSCCFG:
278
+ value &= 1;
279
+ s->secrespcfg = value;
280
+ qemu_set_irq(s->sec_resp_cfg, s->secrespcfg);
281
+ break;
282
case A_SECPPCINTCLR:
283
+ value &= 0x00f000f3;
284
+ foreach_ppc(s, iotkit_secctl_ppc_update_irq_clear);
285
+ break;
286
case A_SECPPCINTEN:
287
- case A_SECMSCINTCLR:
288
- case A_SECMSCINTEN:
289
- case A_BRGINTCLR:
290
- case A_BRGINTEN:
291
+ s->secppcinten = value & 0x00f000f3;
292
+ foreach_ppc(s, iotkit_secctl_ppc_update_irq_enable);
293
+ break;
294
case A_AHBNSPPCEXP0:
295
case A_AHBNSPPCEXP1:
296
case A_AHBNSPPCEXP2:
297
case A_AHBNSPPCEXP3:
298
+ ppc = &s->ahbexp[offset_to_ppc_idx(offset)];
299
+ iotkit_secctl_ppc_ns_write(ppc, value);
300
+ break;
301
case A_APBNSPPC0:
302
case A_APBNSPPC1:
303
+ ppc = &s->apb[offset_to_ppc_idx(offset)];
304
+ iotkit_secctl_ppc_ns_write(ppc, value);
305
+ break;
306
case A_APBNSPPCEXP0:
307
case A_APBNSPPCEXP1:
308
case A_APBNSPPCEXP2:
309
case A_APBNSPPCEXP3:
310
+ ppc = &s->apbexp[offset_to_ppc_idx(offset)];
311
+ iotkit_secctl_ppc_ns_write(ppc, value);
312
+ break;
313
case A_AHBSPPPCEXP0:
314
case A_AHBSPPPCEXP1:
315
case A_AHBSPPPCEXP2:
316
case A_AHBSPPPCEXP3:
317
+ ppc = &s->ahbexp[offset_to_ppc_idx(offset)];
318
+ iotkit_secctl_ppc_sp_write(ppc, value);
319
+ break;
320
case A_APBSPPPC0:
321
case A_APBSPPPC1:
322
+ ppc = &s->apb[offset_to_ppc_idx(offset)];
323
+ iotkit_secctl_ppc_sp_write(ppc, value);
324
+ break;
325
case A_APBSPPPCEXP0:
326
case A_APBSPPPCEXP1:
327
case A_APBSPPPCEXP2:
328
case A_APBSPPPCEXP3:
329
+ ppc = &s->apbexp[offset_to_ppc_idx(offset)];
330
+ iotkit_secctl_ppc_sp_write(ppc, value);
331
+ break;
332
+ case A_NSCCFG:
333
+ case A_SECMSCINTCLR:
334
+ case A_SECMSCINTEN:
335
+ case A_BRGINTCLR:
336
+ case A_BRGINTEN:
337
qemu_log_mask(LOG_UNIMP,
338
"IoTKit SecCtl S block write: "
339
"unimplemented offset 0x%x\n", offset);
340
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_ns_read(void *opaque, hwaddr addr,
341
uint64_t *pdata,
342
unsigned size, MemTxAttrs attrs)
343
{
344
+ IoTKitSecCtl *s = IOTKIT_SECCTL(opaque);
345
uint64_t r;
346
uint32_t offset = addr & ~0x3;
347
348
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_ns_read(void *opaque, hwaddr addr,
349
case A_AHBNSPPPCEXP1:
350
case A_AHBNSPPPCEXP2:
351
case A_AHBNSPPPCEXP3:
352
+ r = s->ahbexp[offset_to_ppc_idx(offset)].nsp;
353
+ break;
354
case A_APBNSPPPC0:
355
case A_APBNSPPPC1:
356
+ r = s->apb[offset_to_ppc_idx(offset)].nsp;
357
+ break;
358
case A_APBNSPPPCEXP0:
359
case A_APBNSPPPCEXP1:
360
case A_APBNSPPPCEXP2:
361
case A_APBNSPPPCEXP3:
362
- qemu_log_mask(LOG_UNIMP,
363
- "IoTKit SecCtl NS block read: "
364
- "unimplemented offset 0x%x\n", offset);
365
+ r = s->apbexp[offset_to_ppc_idx(offset)].nsp;
366
break;
367
case A_PID4:
368
case A_PID5:
369
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_ns_write(void *opaque, hwaddr addr,
370
uint64_t value,
371
unsigned size, MemTxAttrs attrs)
372
{
373
+ IoTKitSecCtl *s = IOTKIT_SECCTL(opaque);
374
uint32_t offset = addr;
375
+ IoTKitSecCtlPPC *ppc;
376
377
trace_iotkit_secctl_ns_write(offset, value, size);
378
379
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_ns_write(void *opaque, hwaddr addr,
380
case A_AHBNSPPPCEXP1:
381
case A_AHBNSPPPCEXP2:
382
case A_AHBNSPPPCEXP3:
383
+ ppc = &s->ahbexp[offset_to_ppc_idx(offset)];
384
+ iotkit_secctl_ppc_nsp_write(ppc, value);
385
+ break;
386
case A_APBNSPPPC0:
387
case A_APBNSPPPC1:
388
+ ppc = &s->apb[offset_to_ppc_idx(offset)];
389
+ iotkit_secctl_ppc_nsp_write(ppc, value);
390
+ break;
391
case A_APBNSPPPCEXP0:
392
case A_APBNSPPPCEXP1:
393
case A_APBNSPPPCEXP2:
394
case A_APBNSPPPCEXP3:
395
- qemu_log_mask(LOG_UNIMP,
396
- "IoTKit SecCtl NS block write: "
397
- "unimplemented offset 0x%x\n", offset);
398
+ ppc = &s->apbexp[offset_to_ppc_idx(offset)];
399
+ iotkit_secctl_ppc_nsp_write(ppc, value);
400
break;
401
case A_AHBNSPPPC0:
402
case A_PID4:
403
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps iotkit_secctl_ns_ops = {
404
.impl.max_access_size = 4,
405
};
406
407
+static void iotkit_secctl_reset_ppc(IoTKitSecCtlPPC *ppc)
408
+{
409
+ ppc->ns = 0;
410
+ ppc->sp = 0;
411
+ ppc->nsp = 0;
412
+}
413
+
414
static void iotkit_secctl_reset(DeviceState *dev)
415
{
416
+ IoTKitSecCtl *s = IOTKIT_SECCTL(dev);
417
418
+ s->secppcintstat = 0;
419
+ s->secppcinten = 0;
420
+ s->secrespcfg = 0;
421
+
422
+ foreach_ppc(s, iotkit_secctl_reset_ppc);
423
+}
424
+
425
+static void iotkit_secctl_ppc_irqstatus(void *opaque, int n, int level)
426
+{
427
+ IoTKitSecCtlPPC *ppc = opaque;
428
+ IoTKitSecCtl *s = IOTKIT_SECCTL(ppc->parent);
429
+ int irqbit = ppc->irq_bit_offset + n;
430
+
431
+ s->secppcintstat = deposit32(s->secppcintstat, irqbit, 1, level);
432
+}
433
+
434
+static void iotkit_secctl_init_ppc(IoTKitSecCtl *s,
435
+ IoTKitSecCtlPPC *ppc,
436
+ const char *name,
437
+ int numports,
438
+ int irq_bit_offset)
439
+{
440
+ char *gpioname;
441
+ DeviceState *dev = DEVICE(s);
442
+
443
+ ppc->numports = numports;
444
+ ppc->irq_bit_offset = irq_bit_offset;
445
+ ppc->parent = s;
446
+
447
+ gpioname = g_strdup_printf("%s_nonsec", name);
448
+ qdev_init_gpio_out_named(dev, ppc->nonsec, gpioname, numports);
449
+ g_free(gpioname);
450
+ gpioname = g_strdup_printf("%s_ap", name);
451
+ qdev_init_gpio_out_named(dev, ppc->ap, gpioname, numports);
452
+ g_free(gpioname);
453
+ gpioname = g_strdup_printf("%s_irq_enable", name);
454
+ qdev_init_gpio_out_named(dev, &ppc->irq_enable, gpioname, 1);
455
+ g_free(gpioname);
456
+ gpioname = g_strdup_printf("%s_irq_clear", name);
457
+ qdev_init_gpio_out_named(dev, &ppc->irq_clear, gpioname, 1);
458
+ g_free(gpioname);
459
+ gpioname = g_strdup_printf("%s_irq_status", name);
460
+ qdev_init_gpio_in_named_with_opaque(dev, iotkit_secctl_ppc_irqstatus,
461
+ ppc, gpioname, 1);
462
+ g_free(gpioname);
463
}
464
465
static void iotkit_secctl_init(Object *obj)
466
{
467
IoTKitSecCtl *s = IOTKIT_SECCTL(obj);
468
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
469
+ DeviceState *dev = DEVICE(obj);
470
+ int i;
471
+
472
+ iotkit_secctl_init_ppc(s, &s->apb[0], "apb_ppc0",
473
+ IOTS_APB_PPC0_NUM_PORTS, 0);
474
+ iotkit_secctl_init_ppc(s, &s->apb[1], "apb_ppc1",
475
+ IOTS_APB_PPC1_NUM_PORTS, 1);
476
+
477
+ for (i = 0; i < IOTS_NUM_APB_EXP_PPC; i++) {
478
+ IoTKitSecCtlPPC *ppc = &s->apbexp[i];
479
+ char *ppcname = g_strdup_printf("apb_ppcexp%d", i);
480
+ iotkit_secctl_init_ppc(s, ppc, ppcname, IOTS_PPC_NUM_PORTS, 4 + i);
481
+ g_free(ppcname);
482
+ }
483
+ for (i = 0; i < IOTS_NUM_AHB_EXP_PPC; i++) {
484
+ IoTKitSecCtlPPC *ppc = &s->ahbexp[i];
485
+ char *ppcname = g_strdup_printf("ahb_ppcexp%d", i);
486
+ iotkit_secctl_init_ppc(s, ppc, ppcname, IOTS_PPC_NUM_PORTS, 20 + i);
487
+ g_free(ppcname);
488
+ }
489
+
490
+ qdev_init_gpio_out_named(dev, &s->sec_resp_cfg, "sec_resp_cfg", 1);
491
492
memory_region_init_io(&s->s_regs, obj, &iotkit_secctl_s_ops,
493
s, "iotkit-secctl-s-regs", 0x1000);
494
@@ -XXX,XX +XXX,XX @@ static void iotkit_secctl_init(Object *obj)
495
sysbus_init_mmio(sbd, &s->ns_regs);
496
}
497
498
+static const VMStateDescription iotkit_secctl_ppc_vmstate = {
499
+ .name = "iotkit-secctl-ppc",
500
+ .version_id = 1,
198
+ .version_id = 1,
501
+ .minimum_version_id = 1,
199
+ .minimum_version_id = 1,
200
+ .needed = gic_virt_state_needed,
502
+ .fields = (VMStateField[]) {
201
+ .fields = (VMStateField[]) {
503
+ VMSTATE_UINT32(ns, IoTKitSecCtlPPC),
202
+ /* Virtual interface */
504
+ VMSTATE_UINT32(sp, IoTKitSecCtlPPC),
203
+ VMSTATE_UINT32_ARRAY(h_hcr, GICState, GIC_NCPU),
505
+ VMSTATE_UINT32(nsp, IoTKitSecCtlPPC),
204
+ VMSTATE_UINT32_ARRAY(h_misr, GICState, GIC_NCPU),
205
+ VMSTATE_UINT32_2DARRAY(h_lr, GICState, GIC_MAX_LR, GIC_NCPU),
206
+ VMSTATE_UINT32_ARRAY(h_apr, GICState, GIC_NCPU),
207
+
208
+ /* Virtual CPU interfaces */
209
+ VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, GIC_NCPU, GIC_NCPU),
210
+ VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, GIC_NCPU, GIC_NCPU),
211
+ VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, GIC_NCPU, GIC_NCPU),
212
+ VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, GIC_NCPU, GIC_NCPU),
213
+ VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, GIC_NCPU, GIC_NCPU),
214
+ VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, GIC_NCPU, GIC_NCPU),
215
+
506
+ VMSTATE_END_OF_LIST()
216
+ VMSTATE_END_OF_LIST()
507
+ }
217
+ }
508
+};
218
+};
509
+
219
+
510
static const VMStateDescription iotkit_secctl_vmstate = {
220
static const VMStateDescription vmstate_gic = {
511
.name = "iotkit-secctl",
221
.name = "arm_gic",
512
.version_id = 1,
222
.version_id = 12,
513
.minimum_version_id = 1,
223
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic = {
224
.post_load = gic_post_load,
514
.fields = (VMStateField[]) {
225
.fields = (VMStateField[]) {
515
+ VMSTATE_UINT32(secppcintstat, IoTKitSecCtl),
226
VMSTATE_UINT32(ctlr, GICState),
516
+ VMSTATE_UINT32(secppcinten, IoTKitSecCtl),
227
- VMSTATE_UINT32_ARRAY(cpu_ctlr, GICState, GIC_NCPU),
517
+ VMSTATE_UINT32(secrespcfg, IoTKitSecCtl),
228
+ VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, 0, GIC_NCPU),
518
+ VMSTATE_STRUCT_ARRAY(apb, IoTKitSecCtl, IOTS_NUM_APB_PPC, 1,
229
VMSTATE_STRUCT_ARRAY(irq_state, GICState, GIC_MAXIRQ, 1,
519
+ iotkit_secctl_ppc_vmstate, IoTKitSecCtlPPC),
230
vmstate_gic_irq_state, gic_irq_state),
520
+ VMSTATE_STRUCT_ARRAY(apbexp, IoTKitSecCtl, IOTS_NUM_APB_EXP_PPC, 1,
231
VMSTATE_UINT8_ARRAY(irq_target, GICState, GIC_MAXIRQ),
521
+ iotkit_secctl_ppc_vmstate, IoTKitSecCtlPPC),
232
VMSTATE_UINT8_2DARRAY(priority1, GICState, GIC_INTERNAL, GIC_NCPU),
522
+ VMSTATE_STRUCT_ARRAY(ahbexp, IoTKitSecCtl, IOTS_NUM_AHB_EXP_PPC, 1,
233
VMSTATE_UINT8_ARRAY(priority2, GICState, GIC_MAXIRQ - GIC_INTERNAL),
523
+ iotkit_secctl_ppc_vmstate, IoTKitSecCtlPPC),
234
VMSTATE_UINT8_2DARRAY(sgi_pending, GICState, GIC_NR_SGIS, GIC_NCPU),
235
- VMSTATE_UINT16_ARRAY(priority_mask, GICState, GIC_NCPU),
236
- VMSTATE_UINT16_ARRAY(running_priority, GICState, GIC_NCPU),
237
- VMSTATE_UINT16_ARRAY(current_pending, GICState, GIC_NCPU),
238
- VMSTATE_UINT8_ARRAY(bpr, GICState, GIC_NCPU),
239
- VMSTATE_UINT8_ARRAY(abpr, GICState, GIC_NCPU),
240
+ VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, 0, GIC_NCPU),
241
+ VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, 0, GIC_NCPU),
242
+ VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, 0, GIC_NCPU),
243
+ VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, 0, GIC_NCPU),
244
+ VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, 0, GIC_NCPU),
245
VMSTATE_UINT32_2DARRAY(apr, GICState, GIC_NR_APRS, GIC_NCPU),
246
VMSTATE_UINT32_2DARRAY(nsapr, GICState, GIC_NR_APRS, GIC_NCPU),
524
VMSTATE_END_OF_LIST()
247
VMSTATE_END_OF_LIST()
248
+ },
249
+ .subsections = (const VMStateDescription * []) {
250
+ &vmstate_gic_virt_state,
251
+ NULL
525
}
252
}
526
};
253
};
254
255
void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
256
- const MemoryRegionOps *ops)
257
+ const MemoryRegionOps *ops,
258
+ const MemoryRegionOps *virt_ops)
259
{
260
SysBusDevice *sbd = SYS_BUS_DEVICE(s);
261
int i = s->num_irq - GIC_INTERNAL;
262
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
263
for (i = 0; i < s->num_cpu; i++) {
264
sysbus_init_irq(sbd, &s->parent_vfiq[i]);
265
}
266
+ if (s->virt_extn) {
267
+ for (i = 0; i < s->num_cpu; i++) {
268
+ sysbus_init_irq(sbd, &s->maintenance_irq[i]);
269
+ }
270
+ }
271
272
/* Distributor */
273
memory_region_init_io(&s->iomem, OBJECT(s), ops, s, "gic_dist", 0x1000);
274
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
275
memory_region_init_io(&s->cpuiomem[0], OBJECT(s), ops ? &ops[1] : NULL,
276
s, "gic_cpu", s->revision == 2 ? 0x2000 : 0x100);
277
sysbus_init_mmio(sbd, &s->cpuiomem[0]);
278
+
279
+ if (s->virt_extn) {
280
+ memory_region_init_io(&s->vifaceiomem[0], OBJECT(s), virt_ops,
281
+ s, "gic_viface", 0x1000);
282
+ sysbus_init_mmio(sbd, &s->vifaceiomem[0]);
283
+
284
+ memory_region_init_io(&s->vcpuiomem, OBJECT(s),
285
+ virt_ops ? &virt_ops[1] : NULL,
286
+ s, "gic_vcpu", 0x2000);
287
+ sysbus_init_mmio(sbd, &s->vcpuiomem);
288
+ }
289
}
290
291
static void arm_gic_common_realize(DeviceState *dev, Error **errp)
292
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_realize(DeviceState *dev, Error **errp)
293
"the security extensions");
294
return;
295
}
296
+
297
+ if (s->virt_extn) {
298
+ if (s->revision != 2) {
299
+ error_setg(errp, "GIC virtualization extensions are only "
300
+ "supported by revision 2");
301
+ return;
302
+ }
303
+
304
+ /* For now, set the number of implemented LRs to 4, as found in most
305
+ * real GICv2. This could be promoted as a QOM property if we need to
306
+ * emulate a variant with another num_lrs.
307
+ */
308
+ s->num_lrs = 4;
309
+ }
310
+}
311
+
312
+static inline void arm_gic_common_reset_irq_state(GICState *s, int first_cpu,
313
+ int resetprio)
314
+{
315
+ int i, j;
316
+
317
+ for (i = first_cpu; i < first_cpu + s->num_cpu; i++) {
318
+ if (s->revision == REV_11MPCORE) {
319
+ s->priority_mask[i] = 0xf0;
320
+ } else {
321
+ s->priority_mask[i] = resetprio;
322
+ }
323
+ s->current_pending[i] = 1023;
324
+ s->running_priority[i] = 0x100;
325
+ s->cpu_ctlr[i] = 0;
326
+ s->bpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
327
+ s->abpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_ABPR : GIC_MIN_ABPR;
328
+
329
+ if (!gic_is_vcpu(i)) {
330
+ for (j = 0; j < GIC_INTERNAL; j++) {
331
+ s->priority1[j][i] = resetprio;
332
+ }
333
+ for (j = 0; j < GIC_NR_SGIS; j++) {
334
+ s->sgi_pending[j][i] = 0;
335
+ }
336
+ }
337
+ }
338
}
339
340
static void arm_gic_common_reset(DeviceState *dev)
341
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
342
}
343
344
memset(s->irq_state, 0, GIC_MAXIRQ * sizeof(gic_irq_state));
345
- for (i = 0 ; i < s->num_cpu; i++) {
346
- if (s->revision == REV_11MPCORE) {
347
- s->priority_mask[i] = 0xf0;
348
- } else {
349
- s->priority_mask[i] = resetprio;
350
- }
351
- s->current_pending[i] = 1023;
352
- s->running_priority[i] = 0x100;
353
- s->cpu_ctlr[i] = 0;
354
- s->bpr[i] = GIC_MIN_BPR;
355
- s->abpr[i] = GIC_MIN_ABPR;
356
- for (j = 0; j < GIC_INTERNAL; j++) {
357
- s->priority1[j][i] = resetprio;
358
- }
359
- for (j = 0; j < GIC_NR_SGIS; j++) {
360
- s->sgi_pending[j][i] = 0;
361
- }
362
+ arm_gic_common_reset_irq_state(s, 0, resetprio);
363
+
364
+ if (s->virt_extn) {
365
+ /* vCPU states are stored at indexes GIC_NCPU .. GIC_NCPU+num_cpu.
366
+ * The exposed vCPU interface does not have security extensions.
367
+ */
368
+ arm_gic_common_reset_irq_state(s, GIC_NCPU, 0);
369
}
370
+
371
for (i = 0; i < GIC_NR_SGIS; i++) {
372
GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
373
GIC_DIST_SET_EDGE_TRIGGER(i);
374
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
375
}
376
}
377
378
+ if (s->virt_extn) {
379
+ for (i = 0; i < s->num_lrs; i++) {
380
+ for (j = 0; j < s->num_cpu; j++) {
381
+ s->h_lr[i][j] = 0;
382
+ }
383
+ }
384
+
385
+ for (i = 0; i < s->num_cpu; i++) {
386
+ s->h_hcr[i] = 0;
387
+ s->h_misr[i] = 0;
388
+ }
389
+ }
390
+
391
s->ctlr = 0;
392
}
393
394
@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
395
DEFINE_PROP_UINT32("revision", GICState, revision, 1),
396
/* True if the GIC should implement the security extensions */
397
DEFINE_PROP_BOOL("has-security-extensions", GICState, security_extn, 0),
398
+ /* True if the GIC should implement the virtualization extensions */
399
+ DEFINE_PROP_BOOL("has-virtualization-extensions", GICState, virt_extn, 0),
400
DEFINE_PROP_END_OF_LIST(),
401
};
402
403
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
404
index XXXXXXX..XXXXXXX 100644
405
--- a/hw/intc/arm_gic_kvm.c
406
+++ b/hw/intc/arm_gic_kvm.c
407
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
408
return;
409
}
410
411
+ if (s->virt_extn) {
412
+ error_setg(errp, "the in-kernel VGIC does not implement the "
413
+ "virtualization extensions");
414
+ return;
415
+ }
416
+
417
if (!kvm_arm_gic_can_save_restore(s)) {
418
error_setg(&s->migration_blocker, "This operating system kernel does "
419
"not support vGICv2 migration");
420
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
421
}
422
}
423
424
- gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL);
425
+ gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL, NULL);
426
427
for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
428
qemu_irq irq = qdev_get_gpio_in(dev, i);
527
--
429
--
528
2.16.2
430
2.18.0
529
431
530
432
1
From: Luc Michel <luc.michel@greensocs.com>
1
2
3
Add the register definitions for the virtual interface of the GICv2.
4
5
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180727095421.386-7-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/intc/gic_internal.h | 65 ++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 65 insertions(+)
12
13
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/gic_internal.h
16
+++ b/hw/intc/gic_internal.h
17
@@ -XXX,XX +XXX,XX @@
18
#ifndef QEMU_ARM_GIC_INTERNAL_H
19
#define QEMU_ARM_GIC_INTERNAL_H
20
21
+#include "hw/registerfields.h"
22
#include "hw/intc/arm_gic.h"
23
24
#define ALL_CPU_MASK ((unsigned)(((1 << GIC_NCPU) - 1)))
25
@@ -XXX,XX +XXX,XX @@
26
#define GICC_CTLR_EOIMODE (1U << 9)
27
#define GICC_CTLR_EOIMODE_NS (1U << 10)
28
29
+REG32(GICH_HCR, 0x0)
30
+ FIELD(GICH_HCR, EN, 0, 1)
31
+ FIELD(GICH_HCR, UIE, 1, 1)
32
+ FIELD(GICH_HCR, LRENPIE, 2, 1)
33
+ FIELD(GICH_HCR, NPIE, 3, 1)
34
+ FIELD(GICH_HCR, VGRP0EIE, 4, 1)
35
+ FIELD(GICH_HCR, VGRP0DIE, 5, 1)
36
+ FIELD(GICH_HCR, VGRP1EIE, 6, 1)
37
+ FIELD(GICH_HCR, VGRP1DIE, 7, 1)
38
+ FIELD(GICH_HCR, EOICount, 27, 5)
39
+
40
+#define GICH_HCR_MASK \
41
+ (R_GICH_HCR_EN_MASK | R_GICH_HCR_UIE_MASK | \
42
+ R_GICH_HCR_LRENPIE_MASK | R_GICH_HCR_NPIE_MASK | \
43
+ R_GICH_HCR_VGRP0EIE_MASK | R_GICH_HCR_VGRP0DIE_MASK | \
44
+ R_GICH_HCR_VGRP1EIE_MASK | R_GICH_HCR_VGRP1DIE_MASK | \
45
+ R_GICH_HCR_EOICount_MASK)
46
+
47
+REG32(GICH_VTR, 0x4)
48
+ FIELD(GICH_VTR, ListRegs, 0, 6)
49
+ FIELD(GICH_VTR, PREbits, 26, 3)
50
+ FIELD(GICH_VTR, PRIbits, 29, 3)
51
+
52
+REG32(GICH_VMCR, 0x8)
53
+ FIELD(GICH_VMCR, VMCCtlr, 0, 10)
54
+ FIELD(GICH_VMCR, VMABP, 18, 3)
55
+ FIELD(GICH_VMCR, VMBP, 21, 3)
56
+ FIELD(GICH_VMCR, VMPriMask, 27, 5)
57
+
58
+REG32(GICH_MISR, 0x10)
59
+ FIELD(GICH_MISR, EOI, 0, 1)
60
+ FIELD(GICH_MISR, U, 1, 1)
61
+ FIELD(GICH_MISR, LRENP, 2, 1)
62
+ FIELD(GICH_MISR, NP, 3, 1)
63
+ FIELD(GICH_MISR, VGrp0E, 4, 1)
64
+ FIELD(GICH_MISR, VGrp0D, 5, 1)
65
+ FIELD(GICH_MISR, VGrp1E, 6, 1)
66
+ FIELD(GICH_MISR, VGrp1D, 7, 1)
67
+
68
+REG32(GICH_EISR0, 0x20)
69
+REG32(GICH_EISR1, 0x24)
70
+REG32(GICH_ELRSR0, 0x30)
71
+REG32(GICH_ELRSR1, 0x34)
72
+REG32(GICH_APR, 0xf0)
73
+
74
+REG32(GICH_LR0, 0x100)
75
+ FIELD(GICH_LR0, VirtualID, 0, 10)
76
+ FIELD(GICH_LR0, PhysicalID, 10, 10)
77
+ FIELD(GICH_LR0, CPUID, 10, 3)
78
+ FIELD(GICH_LR0, EOI, 19, 1)
79
+ FIELD(GICH_LR0, Priority, 23, 5)
80
+ FIELD(GICH_LR0, State, 28, 2)
81
+ FIELD(GICH_LR0, Grp1, 30, 1)
82
+ FIELD(GICH_LR0, HW, 31, 1)
83
+
84
+/* Last LR register */
85
+REG32(GICH_LR63, 0x1fc)
86
+
87
+#define GICH_LR_MASK \
88
+ (R_GICH_LR0_VirtualID_MASK | R_GICH_LR0_PhysicalID_MASK | \
89
+ R_GICH_LR0_CPUID_MASK | R_GICH_LR0_EOI_MASK | \
90
+ R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
91
+ R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
92
+
93
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
94
* GICv2 and GICv2 with security extensions:
95
*/
96
--
97
2.18.0
98
99
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Add some helper macros and functions related to the virtualization
4
extensions to gic_internal.h.
5
6
The GICH_LR_* macros help extract specific fields of a list register
7
value. The only tricky one is the priority field, as only the MSBs are
8
stored. The value must be shifted accordingly to obtain the correct
9
priority value.
10
11
gic_is_vcpu() and gic_get_vcpu_real_id() help with (v)CPU id manipulation
12
to abstract the fact that vCPU ids are in the range
13
[GIC_NCPU, GIC_NCPU + num_cpu).
14
15
gic_lr_* and gic_virq_is_valid() help with the list registers.
16
gic_get_lr_entry() returns the LR entry for a given (vCPU, irq) pair. It
17
is meant to be used in contexts where we know for sure that the entry
18
exists, so we assert that the entry is actually found, and the caller can
19
avoid the NULL check on the returned pointer.
20
21
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
23
Message-id: 20180727095421.386-8-luc.michel@greensocs.com
5
Message-id: 20180228193125.20577-6-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
24
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
25
---
8
target/arm/helper.h | 9 +++++
26
hw/intc/gic_internal.h | 74 ++++++++++++++++++++++++++++++++++++++++++
9
target/arm/translate-a64.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++
27
hw/intc/arm_gic.c | 5 +++
10
target/arm/vec_helper.c | 74 +++++++++++++++++++++++++++++++++++++++++
28
2 files changed, 79 insertions(+)
11
3 files changed, 166 insertions(+)
12
29
13
diff --git a/target/arm/helper.h b/target/arm/helper.h
30
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
32
--- a/hw/intc/gic_internal.h
16
+++ b/target/arm/helper.h
33
+++ b/hw/intc/gic_internal.h
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(dc_zva, void, env, i64)
34
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
18
DEF_HELPER_FLAGS_2(neon_pmull_64_lo, TCG_CALL_NO_RWG_SE, i64, i64, i64)
35
R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
19
DEF_HELPER_FLAGS_2(neon_pmull_64_hi, TCG_CALL_NO_RWG_SE, i64, i64, i64)
36
R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
20
37
21
+DEF_HELPER_FLAGS_5(gvec_qrdmlah_s16, TCG_CALL_NO_RWG,
38
+#define GICH_LR_STATE_INVALID 0
22
+ void, ptr, ptr, ptr, ptr, i32)
39
+#define GICH_LR_STATE_PENDING 1
23
+DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s16, TCG_CALL_NO_RWG,
40
+#define GICH_LR_STATE_ACTIVE 2
24
+ void, ptr, ptr, ptr, ptr, i32)
41
+#define GICH_LR_STATE_ACTIVE_PENDING 3
25
+DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
28
+ void, ptr, ptr, ptr, ptr, i32)
29
+
42
+
30
#ifdef TARGET_AARCH64
43
+#define GICH_LR_VIRT_ID(entry) (FIELD_EX32(entry, GICH_LR0, VirtualID))
31
#include "helper-a64.h"
44
+#define GICH_LR_PHYS_ID(entry) (FIELD_EX32(entry, GICH_LR0, PhysicalID))
32
#endif
45
+#define GICH_LR_CPUID(entry) (FIELD_EX32(entry, GICH_LR0, CPUID))
33
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
46
+#define GICH_LR_EOI(entry) (FIELD_EX32(entry, GICH_LR0, EOI))
34
index XXXXXXX..XXXXXXX 100644
47
+#define GICH_LR_PRIORITY(entry) (FIELD_EX32(entry, GICH_LR0, Priority) << 3)
35
--- a/target/arm/translate-a64.c
48
+#define GICH_LR_STATE(entry) (FIELD_EX32(entry, GICH_LR0, State))
36
+++ b/target/arm/translate-a64.c
49
+#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
37
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3(DisasContext *s, bool is_q, int rd,
50
+#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
38
vec_full_reg_size(s), gvec_op);
51
+
52
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
53
* GICv2 and GICv2 with security extensions:
54
*/
55
@@ -XXX,XX +XXX,XX @@ static inline bool gic_is_vcpu(int cpu)
56
return cpu >= GIC_NCPU;
39
}
57
}
40
58
41
+/* Expand a 3-operand + env pointer operation using
59
+static inline int gic_get_vcpu_real_id(int cpu)
42
+ * an out-of-line helper.
43
+ */
44
+static void gen_gvec_op3_env(DisasContext *s, bool is_q, int rd,
45
+ int rn, int rm, gen_helper_gvec_3_ptr *fn)
46
+{
60
+{
47
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
61
+ return (cpu >= GIC_NCPU) ? (cpu - GIC_NCPU) : cpu;
48
+ vec_full_reg_offset(s, rn),
49
+ vec_full_reg_offset(s, rm), cpu_env,
50
+ is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
51
+}
62
+}
52
+
63
+
53
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
64
+/* Return true if the given vIRQ state exists in a LR and is either active or
54
* than the 32 bit equivalent.
65
+ * pending and active.
55
*/
66
+ *
56
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
67
+ * This function is used to check that a guest's `end of interrupt' or
57
clear_vec_high(s, is_q, rd);
68
+ * `interrupts deactivation' request is valid, and matches with a LR of an
58
}
69
+ * already acknowledged vIRQ (i.e. has the active bit set in its state).
59
60
+/* AdvSIMD three same extra
61
+ * 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
62
+ * +---+---+---+-----------+------+---+------+---+--------+---+----+----+
63
+ * | 0 | Q | U | 0 1 1 1 0 | size | 0 | Rm | 1 | opcode | 1 | Rn | Rd |
64
+ * +---+---+---+-----------+------+---+------+---+--------+---+----+----+
65
+ */
70
+ */
66
+static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
71
+static inline bool gic_virq_is_valid(GICState *s, int irq, int vcpu)
67
+{
72
+{
68
+ int rd = extract32(insn, 0, 5);
73
+ int cpu = gic_get_vcpu_real_id(vcpu);
69
+ int rn = extract32(insn, 5, 5);
74
+ int lr_idx;
70
+ int opcode = extract32(insn, 11, 4);
71
+ int rm = extract32(insn, 16, 5);
72
+ int size = extract32(insn, 22, 2);
73
+ bool u = extract32(insn, 29, 1);
74
+ bool is_q = extract32(insn, 30, 1);
75
+ int feature;
76
+
75
+
77
+ switch (u * 16 + opcode) {
76
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
78
+ case 0x10: /* SQRDMLAH (vector) */
77
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
79
+ case 0x11: /* SQRDMLSH (vector) */
78
+
80
+ if (size != 1 && size != 2) {
79
+ if ((GICH_LR_VIRT_ID(*entry) == irq) &&
81
+ unallocated_encoding(s);
80
+ (GICH_LR_STATE(*entry) & GICH_LR_STATE_ACTIVE)) {
82
+ return;
81
+ return true;
83
+ }
82
+ }
84
+ feature = ARM_FEATURE_V8_RDM;
85
+ break;
86
+ default:
87
+ unallocated_encoding(s);
88
+ return;
89
+ }
90
+ if (!arm_dc_feature(s, feature)) {
91
+ unallocated_encoding(s);
92
+ return;
93
+ }
94
+ if (!fp_access_check(s)) {
95
+ return;
96
+ }
83
+ }
97
+
84
+
98
+ switch (opcode) {
85
+ return false;
99
+ case 0x0: /* SQRDMLAH (vector) */
100
+ switch (size) {
101
+ case 1:
102
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlah_s16);
103
+ break;
104
+ case 2:
105
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlah_s32);
106
+ break;
107
+ default:
108
+ g_assert_not_reached();
109
+ }
110
+ return;
111
+
112
+ case 0x1: /* SQRDMLSH (vector) */
113
+ switch (size) {
114
+ case 1:
115
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlsh_s16);
116
+ break;
117
+ case 2:
118
+ gen_gvec_op3_env(s, is_q, rd, rn, rm, gen_helper_gvec_qrdmlsh_s32);
119
+ break;
120
+ default:
121
+ g_assert_not_reached();
122
+ }
123
+ return;
124
+
125
+ default:
126
+ g_assert_not_reached();
127
+ }
128
+}
86
+}
129
+
87
+
130
static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
88
+/* Return a pointer on the LR entry matching the given vIRQ.
131
int size, int rn, int rd)
89
+ *
132
{
90
+ * This function is used to retrieve an LR for which we know for sure that the
133
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
91
+ * corresponding vIRQ exists in the current context (i.e. its current state is
134
static const AArch64DecodeTable data_proc_simd[] = {
92
+ * not `invalid'):
135
/* pattern , mask , fn */
93
+ * - Either the corresponding vIRQ has been validated with gic_virq_is_valid()
136
{ 0x0e200400, 0x9f200400, disas_simd_three_reg_same },
94
+ * so it is `active' or `active and pending',
137
+ { 0x0e008400, 0x9f208400, disas_simd_three_reg_same_extra },
95
+ * - Or it was pending and has been selected by gic_get_best_virq(). It is now
138
{ 0x0e200000, 0x9f200c00, disas_simd_three_reg_diff },
96
+ * `pending', `active' or `active and pending', depending on what the guest
139
{ 0x0e200800, 0x9f3e0c00, disas_simd_two_reg_misc },
97
+ * already did with this vIRQ.
140
{ 0x0e300800, 0x9f3e0c00, disas_simd_across_lanes },
98
+ *
141
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
99
+ * Having multiple LRs with the same VirtualID leads to UNPREDICTABLE
142
index XXXXXXX..XXXXXXX 100644
100
+ * behaviour in the GIC. We choose to return the first one that matches.
143
--- a/target/arm/vec_helper.c
101
+ */
144
+++ b/target/arm/vec_helper.c
102
+static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
145
@@ -XXX,XX +XXX,XX @@
146
147
#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
148
149
+static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
150
+{
103
+{
151
+ uint64_t *d = vd + opr_sz;
104
+ int cpu = gic_get_vcpu_real_id(vcpu);
152
+ uintptr_t i;
105
+ int lr_idx;
153
+
106
+
154
+ for (i = opr_sz; i < max_sz; i += 8) {
107
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
155
+ *d++ = 0;
108
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
109
+
110
+ if ((GICH_LR_VIRT_ID(*entry) == irq) &&
111
+ (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID)) {
112
+ return entry;
113
+ }
156
+ }
114
+ }
115
+
116
+ g_assert_not_reached();
157
+}
117
+}
158
+
118
+
159
/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
119
#endif /* QEMU_ARM_GIC_INTERNAL_H */
160
static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
120
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
161
int16_t src2, int16_t src3)
121
index XXXXXXX..XXXXXXX 100644
162
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
122
--- a/hw/intc/arm_gic.c
163
return deposit32(e1, 16, 16, e2);
123
+++ b/hw/intc/arm_gic.c
124
@@ -XXX,XX +XXX,XX @@ static inline int gic_get_current_cpu(GICState *s)
125
return 0;
164
}
126
}
165
127
166
+void HELPER(gvec_qrdmlah_s16)(void *vd, void *vn, void *vm,
128
+static inline int gic_get_current_vcpu(GICState *s)
167
+ void *ve, uint32_t desc)
168
+{
129
+{
169
+ uintptr_t opr_sz = simd_oprsz(desc);
130
+ return gic_get_current_cpu(s) + GIC_NCPU;
170
+ int16_t *d = vd;
171
+ int16_t *n = vn;
172
+ int16_t *m = vm;
173
+ CPUARMState *env = ve;
174
+ uintptr_t i;
175
+
176
+ for (i = 0; i < opr_sz / 2; ++i) {
177
+ d[i] = inl_qrdmlah_s16(env, n[i], m[i], d[i]);
178
+ }
179
+ clear_tail(d, opr_sz, simd_maxsz(desc));
180
+}
131
+}
181
+
132
+
182
/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
133
/* Return true if this GIC config has interrupt groups, which is
183
static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
134
* true if we're a GICv2, or a GICv1 with the security extensions.
184
int16_t src2, int16_t src3)
135
*/
185
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
186
return deposit32(e1, 16, 16, e2);
187
}
188
189
+void HELPER(gvec_qrdmlsh_s16)(void *vd, void *vn, void *vm,
190
+ void *ve, uint32_t desc)
191
+{
192
+ uintptr_t opr_sz = simd_oprsz(desc);
193
+ int16_t *d = vd;
194
+ int16_t *n = vn;
195
+ int16_t *m = vm;
196
+ CPUARMState *env = ve;
197
+ uintptr_t i;
198
+
199
+ for (i = 0; i < opr_sz / 2; ++i) {
200
+ d[i] = inl_qrdmlsh_s16(env, n[i], m[i], d[i]);
201
+ }
202
+ clear_tail(d, opr_sz, simd_maxsz(desc));
203
+}
204
+
205
/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
206
uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
207
int32_t src2, int32_t src3)
208
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
209
return ret;
210
}
211
212
+void HELPER(gvec_qrdmlah_s32)(void *vd, void *vn, void *vm,
213
+ void *ve, uint32_t desc)
214
+{
215
+ uintptr_t opr_sz = simd_oprsz(desc);
216
+ int32_t *d = vd;
217
+ int32_t *n = vn;
218
+ int32_t *m = vm;
219
+ CPUARMState *env = ve;
220
+ uintptr_t i;
221
+
222
+ for (i = 0; i < opr_sz / 4; ++i) {
223
+ d[i] = helper_neon_qrdmlah_s32(env, n[i], m[i], d[i]);
224
+ }
225
+ clear_tail(d, opr_sz, simd_maxsz(desc));
226
+}
227
+
228
/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
229
uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
230
int32_t src2, int32_t src3)
231
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
232
}
233
return ret;
234
}
235
+
236
+void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
237
+ void *ve, uint32_t desc)
238
+{
239
+ uintptr_t opr_sz = simd_oprsz(desc);
240
+ int32_t *d = vd;
241
+ int32_t *n = vn;
242
+ int32_t *m = vm;
243
+ CPUARMState *env = ve;
244
+ uintptr_t i;
245
+
246
+ for (i = 0; i < opr_sz / 4; ++i) {
247
+ d[i] = helper_neon_qrdmlsh_s32(env, n[i], m[i], d[i]);
248
+ }
249
+ clear_tail(d, opr_sz, simd_maxsz(desc));
250
+}
251
--
136
--
252
2.16.2
137
2.18.0
253
138
254
139
1
1
From: Luc Michel <luc.michel@greensocs.com>
2
3
An access to the CPU interface is non-secure if the current GIC instance
4
implements the security extensions, and the memory access is actually
5
non-secure. Until now, this was checked with tests such as
6
if (s->security_extn && !attrs.secure) { ... }
7
in various places of the CPU interface code.
8
9
With the implementation of the virtualization extensions, those tests
10
must be updated to take into account whether we are in a vCPU interface
11
or not. This is because the exposed vCPU interface does not implement
12
security extensions.
13
14
This commit replaces all those tests with a call to the
15
gic_cpu_ns_access() function to check if the current access to the CPU
16
interface is non-secure. This function takes into account whether the
17
current CPU is a vCPU or not.
18
19
Note that this function is used only in the (v)CPU interface code path.
20
The distributor code path is left unchanged, as the distributor is not
21
exposed to vCPUs at all.
22
23
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
26
Message-id: 20180727095421.386-9-luc.michel@greensocs.com
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
---
29
hw/intc/arm_gic.c | 39 ++++++++++++++++++++++-----------------
30
1 file changed, 22 insertions(+), 17 deletions(-)
31
32
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/intc/arm_gic.c
35
+++ b/hw/intc/arm_gic.c
36
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
37
return s->revision == 2 || s->security_extn;
38
}
39
40
+static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
41
+{
42
+ return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
43
+}
44
+
45
/* TODO: Many places that call this routine could be optimized. */
46
/* Update interrupt status after enabled or pending bits have been changed. */
47
static void gic_update(GICState *s)
48
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
49
/* On a GIC without the security extensions, reading this register
50
* behaves in the same way as a secure access to a GIC with them.
51
*/
52
- bool secure = !s->security_extn || attrs.secure;
53
+ bool secure = !gic_cpu_ns_access(s, cpu, attrs);
54
55
if (group == 0 && !secure) {
56
/* Group0 interrupts hidden from Non-secure access */
57
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
58
static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
59
MemTxAttrs attrs)
60
{
61
- if (s->security_extn && !attrs.secure) {
62
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
63
if (s->priority_mask[cpu] & 0x80) {
64
/* Priority Mask in upper half */
65
pmask = 0x80 | (pmask >> 1);
66
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_priority_mask(GICState *s, int cpu, MemTxAttrs attrs)
67
{
68
uint32_t pmask = s->priority_mask[cpu];
69
70
- if (s->security_extn && !attrs.secure) {
71
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
72
if (pmask & 0x80) {
73
/* Priority Mask in upper half, return Non-secure view */
74
pmask = (pmask << 1) & 0xff;
75
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_cpu_control(GICState *s, int cpu, MemTxAttrs attrs)
76
{
77
uint32_t ret = s->cpu_ctlr[cpu];
78
79
- if (s->security_extn && !attrs.secure) {
80
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
81
/* Construct the NS banked view of GICC_CTLR from the correct
82
* bits of the S banked view. We don't need to move the bypass
83
* control bits because we don't implement that (IMPDEF) part
84
@@ -XXX,XX +XXX,XX @@ static void gic_set_cpu_control(GICState *s, int cpu, uint32_t value,
85
{
86
uint32_t mask;
87
88
- if (s->security_extn && !attrs.secure) {
89
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
90
/* The NS view can only write certain bits in the register;
91
* the rest are unchanged
92
*/
93
@@ -XXX,XX +XXX,XX @@ static uint8_t gic_get_running_priority(GICState *s, int cpu, MemTxAttrs attrs)
94
return 0xff;
95
}
96
97
- if (s->security_extn && !attrs.secure) {
98
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
99
if (s->running_priority[cpu] & 0x80) {
100
/* Running priority in upper half of range: return the Non-secure
101
* view of the priority.
102
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
103
/* Before GICv2 prio-drop and deactivate are not separable */
104
return false;
105
}
106
- if (s->security_extn && !attrs.secure) {
107
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
108
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE_NS;
109
}
110
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE;
111
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
112
return;
113
}
114
115
- if (s->security_extn && !attrs.secure && !group) {
116
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
117
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
118
return;
119
}
120
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
121
122
group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
123
124
- if (s->security_extn && !attrs.secure && !group) {
125
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
126
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
127
return;
128
}
129
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
130
*data = gic_get_priority_mask(s, cpu, attrs);
131
break;
132
case 0x08: /* Binary Point */
133
- if (s->security_extn && !attrs.secure) {
134
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
135
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
136
/* NS view of BPR when CBPR is 1 */
137
*data = MIN(s->bpr[cpu] + 1, 7);
138
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
139
* With security extensions, secure access: ABPR (alias of NS BPR)
140
* With security extensions, nonsecure access: RAZ/WI
141
*/
142
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
143
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
144
*data = 0;
145
} else {
146
*data = s->abpr[cpu];
147
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
148
149
if (regno >= GIC_NR_APRS || s->revision != 2) {
150
*data = 0;
151
- } else if (s->security_extn && !attrs.secure) {
152
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
153
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
154
*data = gic_apr_ns_view(s, regno, cpu);
155
} else {
156
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
157
int regno = (offset - 0xe0) / 4;
158
159
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
160
- (s->security_extn && !attrs.secure)) {
161
+ gic_cpu_ns_access(s, cpu, attrs)) {
162
*data = 0;
163
} else {
164
*data = s->nsapr[regno][cpu];
165
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
166
gic_set_priority_mask(s, cpu, value, attrs);
167
break;
168
case 0x08: /* Binary Point */
169
- if (s->security_extn && !attrs.secure) {
170
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
171
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
172
/* WI when CBPR is 1 */
173
return MEMTX_OK;
174
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
175
gic_complete_irq(s, cpu, value & 0x3ff, attrs);
176
return MEMTX_OK;
177
case 0x1c: /* Aliased Binary Point */
178
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
179
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
180
/* unimplemented, or NS access: RAZ/WI */
181
return MEMTX_OK;
182
} else {
183
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
184
if (regno >= GIC_NR_APRS || s->revision != 2) {
185
return MEMTX_OK;
186
}
187
- if (s->security_extn && !attrs.secure) {
188
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
189
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
190
gic_apr_write_ns_view(s, regno, cpu, value);
191
} else {
192
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
193
if (regno >= GIC_NR_APRS || s->revision != 2) {
194
return MEMTX_OK;
195
}
196
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
197
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
198
return MEMTX_OK;
199
}
200
s->nsapr[regno][cpu] = value;
201
--
202
2.18.0
203
204
1
1
From: Luc Michel <luc.michel@greensocs.com>
2
3
Add some helper functions to gic_internal.h to get or change the state
4
of an IRQ. When the current CPU is not a vCPU, the call is forwarded to
5
the GIC distributor. Otherwise, it acts on the list register matching
6
the IRQ in the current CPU's virtual interface.
7
8
gic_clear_active() can have a side effect on the distributor, even in the
9
vCPU case, when the corresponding LR has the HW field set.
10
11
Use those functions in the CPU interface code path to prepare for the
12
vCPU interface implementation.
13
14
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
17
Message-id: 20180727095421.386-10-luc.michel@greensocs.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
hw/intc/gic_internal.h | 83 ++++++++++++++++++++++++++++++++++++++++++
21
hw/intc/arm_gic.c | 32 +++++++---------
22
2 files changed, 97 insertions(+), 18 deletions(-)
23
24
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/intc/gic_internal.h
27
+++ b/hw/intc/gic_internal.h
28
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
29
#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
30
#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
31
32
+#define GICH_LR_CLEAR_PENDING(entry) \
33
+ ((entry) &= ~(GICH_LR_STATE_PENDING << R_GICH_LR0_State_SHIFT))
34
+#define GICH_LR_SET_ACTIVE(entry) \
35
+ ((entry) |= (GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
36
+#define GICH_LR_CLEAR_ACTIVE(entry) \
37
+ ((entry) &= ~(GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
38
+
39
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
40
* GICv2 and GICv2 with security extensions:
41
*/
42
@@ -XXX,XX +XXX,XX @@ static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
43
g_assert_not_reached();
44
}
45
46
+static inline bool gic_test_group(GICState *s, int irq, int cpu)
47
+{
48
+ if (gic_is_vcpu(cpu)) {
49
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
50
+ return GICH_LR_GROUP(*entry);
51
+ } else {
52
+ return GIC_DIST_TEST_GROUP(irq, 1 << cpu);
53
+ }
54
+}
55
+
56
+static inline void gic_clear_pending(GICState *s, int irq, int cpu)
57
+{
58
+ if (gic_is_vcpu(cpu)) {
59
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
60
+ GICH_LR_CLEAR_PENDING(*entry);
61
+ } else {
62
+ /* Clear pending state for both level and edge triggered
63
+ * interrupts. (level triggered interrupts with an active line
64
+ * remain pending, see gic_test_pending)
65
+ */
66
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
67
+ : (1 << cpu));
68
+ }
69
+}
70
+
71
+static inline void gic_set_active(GICState *s, int irq, int cpu)
72
+{
73
+ if (gic_is_vcpu(cpu)) {
74
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
75
+ GICH_LR_SET_ACTIVE(*entry);
76
+ } else {
77
+ GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
78
+ }
79
+}
80
+
81
+static inline void gic_clear_active(GICState *s, int irq, int cpu)
82
+{
83
+ if (gic_is_vcpu(cpu)) {
84
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
85
+ GICH_LR_CLEAR_ACTIVE(*entry);
86
+
87
+ if (GICH_LR_HW(*entry)) {
88
+ /* Hardware interrupt. We must forward the deactivation request to
89
+ * the distributor.
90
+ */
91
+ int phys_irq = GICH_LR_PHYS_ID(*entry);
92
+ int rcpu = gic_get_vcpu_real_id(cpu);
93
+
94
+ if (phys_irq < GIC_NR_SGIS || phys_irq >= GIC_MAXIRQ) {
95
+ /* UNPREDICTABLE behaviour, we choose to ignore the request */
96
+ return;
97
+ }
98
+
99
+ /* This is equivalent to a NS write to DIR on the physical CPU
100
+ * interface. Hence group0 interrupt deactivation is ignored if
101
+ * the GIC is secure.
102
+ */
103
+ if (!s->security_extn || GIC_DIST_TEST_GROUP(phys_irq, 1 << rcpu)) {
104
+ GIC_DIST_CLEAR_ACTIVE(phys_irq, 1 << rcpu);
105
+ }
106
+ }
107
+ } else {
108
+ GIC_DIST_CLEAR_ACTIVE(irq, 1 << cpu);
109
+ }
110
+}
111
+
112
+static inline int gic_get_priority(GICState *s, int irq, int cpu)
113
+{
114
+ if (gic_is_vcpu(cpu)) {
115
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
116
+ return GICH_LR_PRIORITY(*entry);
117
+ } else {
118
+ return GIC_DIST_GET_PRIORITY(irq, cpu);
119
+ }
120
+}
121
+
122
#endif /* QEMU_ARM_GIC_INTERNAL_H */
123
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/hw/intc/arm_gic.c
126
+++ b/hw/intc/arm_gic.c
127
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
128
uint16_t pending_irq = s->current_pending[cpu];
129
130
if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
131
- int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
132
+ int group = gic_test_group(s, pending_irq, cpu);
133
+
134
/* On a GIC without the security extensions, reading this register
135
* behaves in the same way as a secure access to a GIC with them.
136
*/
137
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
138
139
if (gic_has_groups(s) &&
140
!(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
141
- GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
142
+ gic_test_group(s, irq, cpu)) {
143
bpr = s->abpr[cpu] - 1;
144
assert(bpr >= 0);
145
} else {
146
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
147
*/
148
mask = ~0U << ((bpr & 7) + 1);
149
150
- return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
151
+ return gic_get_priority(s, irq, cpu) & mask;
152
}
153
154
static void gic_activate_irq(GICState *s, int cpu, int irq)
155
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
156
int regno = preemption_level / 32;
157
int bitno = preemption_level % 32;
158
159
- if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
160
+ if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
161
s->nsapr[regno][cpu] |= (1 << bitno);
162
} else {
163
s->apr[regno][cpu] |= (1 << bitno);
164
}
165
166
s->running_priority[cpu] = prio;
167
- GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
168
+ gic_set_active(s, irq, cpu);
169
}
170
171
static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
172
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
173
return irq;
174
}
175
176
- if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
177
+ if (gic_get_priority(s, irq, cpu) >= s->running_priority[cpu]) {
178
DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
179
return 1023;
180
}
181
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
182
/* Clear pending flags for both level and edge triggered interrupts.
183
* Level triggered IRQs will be reasserted once they become inactive.
184
*/
185
- GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
186
- : cm);
187
+ gic_clear_pending(s, irq, cpu);
188
ret = irq;
189
} else {
190
if (irq < GIC_NR_SGIS) {
191
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
192
src = ctz32(s->sgi_pending[irq][cpu]);
193
s->sgi_pending[irq][cpu] &= ~(1 << src);
194
if (s->sgi_pending[irq][cpu] == 0) {
195
- GIC_DIST_CLEAR_PENDING(irq,
196
- GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
197
- : cm);
198
+ gic_clear_pending(s, irq, cpu);
199
}
200
ret = irq | ((src & 0x7) << 10);
201
} else {
202
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
203
* interrupts. (level triggered interrupts with an active line
204
* remain pending, see gic_test_pending)
205
*/
206
- GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
207
- : cm);
208
+ gic_clear_pending(s, irq, cpu);
209
ret = irq;
210
}
211
}
212
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
213
214
static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
215
{
216
- int cm = 1 << cpu;
217
int group;
218
219
if (irq >= s->num_irq) {
220
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
221
return;
222
}
223
224
- group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
225
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
226
227
if (!gic_eoi_split(s, cpu, attrs)) {
228
/* This is UNPREDICTABLE; we choose to ignore it */
229
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
230
return;
231
}
232
233
- GIC_DIST_CLEAR_ACTIVE(irq, cm);
234
+ gic_clear_active(s, irq, cpu);
235
}
236
237
static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
238
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
239
}
240
}
241
242
- group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
243
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
244
245
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
246
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
247
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
248
249
/* In GICv2 the guest can choose to split priority-drop and deactivate */
250
if (!gic_eoi_split(s, cpu, attrs)) {
251
- GIC_DIST_CLEAR_ACTIVE(irq, cm);
252
+ gic_clear_active(s, irq, cpu);
253
}
254
gic_update(s);
255
}
256
--
257
2.18.0
258
259
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
The integer size check was already outside of the opcode switch;
3
Implement virtualization extensions in gic_activate_irq() and
4
move the floating-point size check outside as well. Unify the
4
gic_drop_prio(), and in gic_get_prio_from_apr_bits(), which is called by
5
size vs index adjustment between fp and integer paths.
5
gic_drop_prio().
6
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
When the current CPU is a vCPU:
8
- Use GIC_VIRT_MIN_BPR and GIC_VIRT_NR_APRS instead of their non-virt
9
counterparts,
10
- The vCPU APR is stored in the virtual interface, in h_apr.
11
12
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180228193125.20577-4-richard.henderson@linaro.org
14
Message-id: 20180727095421.386-11-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
16
---
12
target/arm/translate-a64.c | 65 +++++++++++++++++++++++-----------------------
17
hw/intc/arm_gic.c | 50 +++++++++++++++++++++++++++++++++++------------
13
1 file changed, 32 insertions(+), 33 deletions(-)
18
1 file changed, 38 insertions(+), 12 deletions(-)
14
19
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
20
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
22
--- a/hw/intc/arm_gic.c
18
+++ b/target/arm/translate-a64.c
23
+++ b/hw/intc/arm_gic.c
19
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
24
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
20
case 0x05: /* FMLS */
25
* and update the running priority.
21
case 0x09: /* FMUL */
26
*/
22
case 0x19: /* FMULX */
27
int prio = gic_get_group_priority(s, cpu, irq);
23
- if (size == 1) {
28
- int preemption_level = prio >> (GIC_MIN_BPR + 1);
24
- unallocated_encoding(s);
29
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
25
- return;
30
+ int preemption_level = prio >> (min_bpr + 1);
26
- }
31
int regno = preemption_level / 32;
27
is_fp = true;
32
int bitno = preemption_level % 32;
28
break;
33
+ uint32_t *papr = NULL;
29
default:
34
30
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
35
- if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
31
if (is_fp) {
36
- s->nsapr[regno][cpu] |= (1 << bitno);
32
/* convert insn encoded size to TCGMemOp size */
37
+ if (gic_is_vcpu(cpu)) {
33
switch (size) {
38
+ assert(regno == 0);
34
- case 2: /* single precision */
39
+ papr = &s->h_apr[gic_get_vcpu_real_id(cpu)];
35
- size = MO_32;
40
+ } else if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
36
- index = h << 1 | l;
41
+ papr = &s->nsapr[regno][cpu];
37
- rm |= (m << 4);
42
} else {
38
- break;
43
- s->apr[regno][cpu] |= (1 << bitno);
39
- case 3: /* double precision */
44
+ papr = &s->apr[regno][cpu];
40
- size = MO_64;
45
}
41
- if (l || !is_q) {
46
42
+ case 0: /* half-precision */
47
+ *papr |= (1 << bitno);
43
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
48
+
44
unallocated_encoding(s);
49
s->running_priority[cpu] = prio;
45
return;
50
gic_set_active(s, irq, cpu);
46
}
51
}
47
- index = h;
52
@@ -XXX,XX +XXX,XX @@ static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
48
- rm |= (m << 4);
53
* on the set bits in the Active Priority Registers.
49
- break;
54
*/
50
- case 0: /* half precision */
55
int i;
51
size = MO_16;
56
+
52
- index = h << 2 | l << 1 | m;
57
+ if (gic_is_vcpu(cpu)) {
53
- is_fp16 = true;
58
+ uint32_t apr = s->h_apr[gic_get_vcpu_real_id(cpu)];
54
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
59
+ if (apr) {
55
- break;
60
+ return ctz32(apr) << (GIC_VIRT_MIN_BPR + 1);
56
- }
61
+ } else {
57
- /* fallthru */
62
+ return 0x100;
58
- default: /* unallocated */
59
- unallocated_encoding(s);
60
- return;
61
- }
62
- } else {
63
- switch (size) {
64
- case 1:
65
- index = h << 2 | l << 1 | m;
66
break;
67
- case 2:
68
- index = h << 1 | l;
69
- rm |= (m << 4);
70
+ case MO_32: /* single precision */
71
+ case MO_64: /* double precision */
72
break;
73
default:
74
unallocated_encoding(s);
75
return;
76
}
77
+ } else {
78
+ switch (size) {
79
+ case MO_8:
80
+ case MO_64:
81
+ unallocated_encoding(s);
82
+ return;
83
+ }
63
+ }
84
+ }
64
+ }
85
+
65
+
86
+ /* Given TCGMemOp size, adjust register and indexing. */
66
for (i = 0; i < GIC_NR_APRS; i++) {
87
+ switch (size) {
67
uint32_t apr = s->apr[i][cpu] | s->nsapr[i][cpu];
88
+ case MO_16:
68
if (!apr) {
89
+ index = h << 2 | l << 1 | m;
69
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
90
+ break;
70
* running priority will be wrong, so interrupts that should preempt
91
+ case MO_32:
71
* might not do so, and interrupts that should not preempt might do so.
92
+ index = h << 1 | l;
72
*/
93
+ rm |= m << 4;
73
- int i;
94
+ break;
74
+ if (gic_is_vcpu(cpu)) {
95
+ case MO_64:
75
+ int rcpu = gic_get_vcpu_real_id(cpu);
96
+ if (l || !is_q) {
76
97
+ unallocated_encoding(s);
77
- for (i = 0; i < GIC_NR_APRS; i++) {
98
+ return;
78
- uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
79
- if (!*papr) {
80
- continue;
81
+ if (s->h_apr[rcpu]) {
82
+ /* Clear lowest set bit */
83
+ s->h_apr[rcpu] &= s->h_apr[rcpu] - 1;
99
+ }
84
+ }
100
+ index = h;
85
+ } else {
101
+ rm |= m << 4;
86
+ int i;
102
+ break;
87
+
103
+ default:
88
+ for (i = 0; i < GIC_NR_APRS; i++) {
104
+ g_assert_not_reached();
89
+ uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
90
+ if (!*papr) {
91
+ continue;
92
+ }
93
+ /* Clear lowest set bit */
94
+ *papr &= *papr - 1;
95
+ break;
96
}
97
- /* Clear lowest set bit */
98
- *papr &= *papr - 1;
99
- break;
105
}
100
}
106
101
107
if (!fp_access_check(s)) {
102
s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
108
--
103
--
109
2.16.2
104
2.18.0
110
105
111
106
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Implement virtualization extensions in the gic_acknowledge_irq()
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
function. This function changes the state of the highest priority IRQ
5
Message-id: 20180228193125.20577-5-richard.henderson@linaro.org
5
from pending to active.
6
7
When the current CPU is a vCPU, modifying the state of an IRQ modifies
8
the corresponding LR entry. However, if we clear the pending flag before
9
setting the active one, we lose track of the LR entry as it becomes
10
invalid. The next call to gic_get_lr_entry() will fail.
11
12
To overcome this issue, we call gic_activate_irq() before
13
gic_clear_pending(). This does not change the general behaviour of
14
gic_acknowledge_irq().
15
16
We also move the SGI case into gic_clear_pending_sgi() to enhance
17
code readability, as the virtualization extensions support adds an if-else
18
level.
19
20
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
22
Message-id: 20180727095421.386-12-luc.michel@greensocs.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
24
---
8
target/arm/Makefile.objs | 2 +-
25
hw/intc/arm_gic.c | 52 ++++++++++++++++++++++++++++++-----------------
9
target/arm/helper.h | 4 ++
26
1 file changed, 33 insertions(+), 19 deletions(-)
10
target/arm/translate-a64.c | 84 ++++++++++++++++++++++++++++++++++
11
target/arm/vec_helper.c | 109 +++++++++++++++++++++++++++++++++++++++++++++
12
4 files changed, 198 insertions(+), 1 deletion(-)
13
create mode 100644 target/arm/vec_helper.c
14
27
15
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
28
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/Makefile.objs
30
--- a/hw/intc/arm_gic.c
18
+++ b/target/arm/Makefile.objs
31
+++ b/hw/intc/arm_gic.c
19
@@ -XXX,XX +XXX,XX @@ obj-$(call land,$(CONFIG_KVM),$(call lnot,$(TARGET_AARCH64))) += kvm32.o
32
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
20
obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o
33
s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
21
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
22
obj-y += translate.o op_helper.o helper.o cpu.o
23
-obj-y += neon_helper.o iwmmxt_helper.o
24
+obj-y += neon_helper.o iwmmxt_helper.o vec_helper.o
25
obj-y += gdbstub.o
26
obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
27
obj-y += crypto_helper.o
28
diff --git a/target/arm/helper.h b/target/arm/helper.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/helper.h
31
+++ b/target/arm/helper.h
32
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_1(neon_rbit_u8, TCG_CALL_NO_RWG_SE, i32, i32)
33
34
DEF_HELPER_3(neon_qdmulh_s16, i32, env, i32, i32)
35
DEF_HELPER_3(neon_qrdmulh_s16, i32, env, i32, i32)
36
+DEF_HELPER_4(neon_qrdmlah_s16, i32, env, i32, i32, i32)
37
+DEF_HELPER_4(neon_qrdmlsh_s16, i32, env, i32, i32, i32)
38
DEF_HELPER_3(neon_qdmulh_s32, i32, env, i32, i32)
39
DEF_HELPER_3(neon_qrdmulh_s32, i32, env, i32, i32)
40
+DEF_HELPER_4(neon_qrdmlah_s32, i32, env, s32, s32, s32)
41
+DEF_HELPER_4(neon_qrdmlsh_s32, i32, env, s32, s32, s32)
42
43
DEF_HELPER_1(neon_narrow_u8, i32, i64)
44
DEF_HELPER_1(neon_narrow_u16, i32, i64)
45
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/translate-a64.c
48
+++ b/target/arm/translate-a64.c
49
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
50
tcg_temp_free_ptr(fpst);
51
}
34
}
52
35
53
+/* AdvSIMD scalar three same extra
36
+static inline uint32_t gic_clear_pending_sgi(GICState *s, int irq, int cpu)
54
+ * 31 30 29 28 24 23 22 21 20 16 15 14 11 10 9 5 4 0
55
+ * +-----+---+-----------+------+---+------+---+--------+---+----+----+
56
+ * | 0 1 | U | 1 1 1 1 0 | size | 0 | Rm | 1 | opcode | 1 | Rn | Rd |
57
+ * +-----+---+-----------+------+---+------+---+--------+---+----+----+
58
+ */
59
+static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
60
+ uint32_t insn)
61
+{
37
+{
62
+ int rd = extract32(insn, 0, 5);
38
+ int src;
63
+ int rn = extract32(insn, 5, 5);
39
+ uint32_t ret;
64
+ int opcode = extract32(insn, 11, 4);
65
+ int rm = extract32(insn, 16, 5);
66
+ int size = extract32(insn, 22, 2);
67
+ bool u = extract32(insn, 29, 1);
68
+ TCGv_i32 ele1, ele2, ele3;
69
+ TCGv_i64 res;
70
+ int feature;
71
+
40
+
72
+ switch (u * 16 + opcode) {
41
+ if (!gic_is_vcpu(cpu)) {
73
+ case 0x10: /* SQRDMLAH (vector) */
42
+ /* Lookup the source CPU for the SGI and clear this in the
74
+ case 0x11: /* SQRDMLSH (vector) */
43
+ * sgi_pending map. Return the src and clear the overall pending
75
+ if (size != 1 && size != 2) {
44
+ * state on this CPU if the SGI is not pending from any CPUs.
76
+ unallocated_encoding(s);
45
+ */
77
+ return;
46
+ assert(s->sgi_pending[irq][cpu] != 0);
47
+ src = ctz32(s->sgi_pending[irq][cpu]);
48
+ s->sgi_pending[irq][cpu] &= ~(1 << src);
49
+ if (s->sgi_pending[irq][cpu] == 0) {
50
+ gic_clear_pending(s, irq, cpu);
78
+ }
51
+ }
79
+ feature = ARM_FEATURE_V8_RDM;
52
+ ret = irq | ((src & 0x7) << 10);
80
+ break;
53
+ } else {
81
+ default:
54
+ uint32_t *lr_entry = gic_get_lr_entry(s, irq, cpu);
82
+ unallocated_encoding(s);
55
+ src = GICH_LR_CPUID(*lr_entry);
83
+ return;
56
+
84
+ }
57
+ gic_clear_pending(s, irq, cpu);
85
+ if (!arm_dc_feature(s, feature)) {
58
+ ret = irq | (src << 10);
86
+ unallocated_encoding(s);
87
+ return;
88
+ }
89
+ if (!fp_access_check(s)) {
90
+ return;
91
+ }
59
+ }
92
+
60
+
93
+ /* Do a single operation on the lowest element in the vector.
94
+ * We use the standard Neon helpers and rely on 0 OP 0 == 0
95
+ * with no side effects for all these operations.
96
+ * OPTME: special-purpose helpers would avoid doing some
97
+ * unnecessary work in the helper for the 16 bit cases.
98
+ */
99
+ ele1 = tcg_temp_new_i32();
100
+ ele2 = tcg_temp_new_i32();
101
+ ele3 = tcg_temp_new_i32();
102
+
103
+ read_vec_element_i32(s, ele1, rn, 0, size);
104
+ read_vec_element_i32(s, ele2, rm, 0, size);
105
+ read_vec_element_i32(s, ele3, rd, 0, size);
106
+
107
+ switch (opcode) {
108
+ case 0x0: /* SQRDMLAH */
109
+ if (size == 1) {
110
+ gen_helper_neon_qrdmlah_s16(ele3, cpu_env, ele1, ele2, ele3);
111
+ } else {
112
+ gen_helper_neon_qrdmlah_s32(ele3, cpu_env, ele1, ele2, ele3);
113
+ }
114
+ break;
115
+ case 0x1: /* SQRDMLSH */
116
+ if (size == 1) {
117
+ gen_helper_neon_qrdmlsh_s16(ele3, cpu_env, ele1, ele2, ele3);
118
+ } else {
119
+ gen_helper_neon_qrdmlsh_s32(ele3, cpu_env, ele1, ele2, ele3);
120
+ }
121
+ break;
122
+ default:
123
+ g_assert_not_reached();
124
+ }
125
+ tcg_temp_free_i32(ele1);
126
+ tcg_temp_free_i32(ele2);
127
+
128
+ res = tcg_temp_new_i64();
129
+ tcg_gen_extu_i32_i64(res, ele3);
130
+ tcg_temp_free_i32(ele3);
131
+
132
+ write_fp_dreg(s, rd, res);
133
+ tcg_temp_free_i64(res);
134
+}
135
+
136
static void handle_2misc_64(DisasContext *s, int opcode, bool u,
137
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn,
138
TCGv_i32 tcg_rmode, TCGv_ptr tcg_fpstatus)
139
@@ -XXX,XX +XXX,XX @@ static const AArch64DecodeTable data_proc_simd[] = {
140
{ 0x0e000800, 0xbf208c00, disas_simd_zip_trn },
141
{ 0x2e000000, 0xbf208400, disas_simd_ext },
142
{ 0x5e200400, 0xdf200400, disas_simd_scalar_three_reg_same },
143
+ { 0x5e008400, 0xdf208400, disas_simd_scalar_three_reg_same_extra },
144
{ 0x5e200000, 0xdf200c00, disas_simd_scalar_three_reg_diff },
145
{ 0x5e200800, 0xdf3e0c00, disas_simd_scalar_two_reg_misc },
146
{ 0x5e300800, 0xdf3e0c00, disas_simd_scalar_pairwise },
147
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
148
new file mode 100644
149
index XXXXXXX..XXXXXXX
150
--- /dev/null
151
+++ b/target/arm/vec_helper.c
152
@@ -XXX,XX +XXX,XX @@
153
+/*
154
+ * ARM AdvSIMD / SVE Vector Operations
155
+ *
156
+ * Copyright (c) 2018 Linaro
157
+ *
158
+ * This library is free software; you can redistribute it and/or
159
+ * modify it under the terms of the GNU Lesser General Public
160
+ * License as published by the Free Software Foundation; either
161
+ * version 2 of the License, or (at your option) any later version.
162
+ *
163
+ * This library is distributed in the hope that it will be useful,
164
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
165
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
166
+ * Lesser General Public License for more details.
167
+ *
168
+ * You should have received a copy of the GNU Lesser General Public
169
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
170
+ */
171
+
172
+#include "qemu/osdep.h"
173
+#include "cpu.h"
174
+#include "exec/exec-all.h"
175
+#include "exec/helper-proto.h"
176
+#include "tcg/tcg-gvec-desc.h"
177
+
178
+
179
+#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
180
+
181
+/* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
182
+static uint16_t inl_qrdmlah_s16(CPUARMState *env, int16_t src1,
183
+ int16_t src2, int16_t src3)
184
+{
185
+ /* Simplify:
186
+ * = ((a3 << 16) + ((e1 * e2) << 1) + (1 << 15)) >> 16
187
+ * = ((a3 << 15) + (e1 * e2) + (1 << 14)) >> 15
188
+ */
189
+ int32_t ret = (int32_t)src1 * src2;
190
+ ret = ((int32_t)src3 << 15) + ret + (1 << 14);
191
+ ret >>= 15;
192
+ if (ret != (int16_t)ret) {
193
+ SET_QC();
194
+ ret = (ret < 0 ? -0x8000 : 0x7fff);
195
+ }
196
+ return ret;
61
+ return ret;
197
+}
62
+}
198
+
63
+
199
+uint32_t HELPER(neon_qrdmlah_s16)(CPUARMState *env, uint32_t src1,
64
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
200
+ uint32_t src2, uint32_t src3)
65
{
201
+{
66
- int ret, irq, src;
202
+ uint16_t e1 = inl_qrdmlah_s16(env, src1, src2, src3);
67
- int cm = 1 << cpu;
203
+ uint16_t e2 = inl_qrdmlah_s16(env, src1 >> 16, src2 >> 16, src3 >> 16);
68
+ int ret, irq;
204
+ return deposit32(e1, 16, 16, e2);
69
205
+}
70
/* gic_get_current_pending_irq() will return 1022 or 1023 appropriately
71
* for the case where this GIC supports grouping and the pending interrupt
72
* is in the wrong group.
73
*/
74
irq = gic_get_current_pending_irq(s, cpu, attrs);
75
- trace_gic_acknowledge_irq(cpu, irq);
76
+ trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
77
78
if (irq >= GIC_MAXIRQ) {
79
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
80
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
81
return 1023;
82
}
83
84
+ gic_activate_irq(s, cpu, irq);
206
+
85
+
207
+/* Signed saturating rounding doubling multiply-subtract high half, 16-bit */
86
if (s->revision == REV_11MPCORE) {
208
+static uint16_t inl_qrdmlsh_s16(CPUARMState *env, int16_t src1,
87
/* Clear pending flags for both level and edge triggered interrupts.
209
+ int16_t src2, int16_t src3)
88
* Level triggered IRQs will be reasserted once they become inactive.
210
+{
89
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
211
+ /* Similarly, using subtraction:
90
ret = irq;
212
+ * = ((a3 << 16) - ((e1 * e2) << 1) + (1 << 15)) >> 16
91
} else {
213
+ * = ((a3 << 15) - (e1 * e2) + (1 << 14)) >> 15
92
if (irq < GIC_NR_SGIS) {
214
+ */
93
- /* Lookup the source CPU for the SGI and clear this in the
215
+ int32_t ret = (int32_t)src1 * src2;
94
- * sgi_pending map. Return the src and clear the overall pending
216
+ ret = ((int32_t)src3 << 15) - ret + (1 << 14);
95
- * state on this CPU if the SGI is not pending from any CPUs.
217
+ ret >>= 15;
96
- */
218
+ if (ret != (int16_t)ret) {
97
- assert(s->sgi_pending[irq][cpu] != 0);
219
+ SET_QC();
98
- src = ctz32(s->sgi_pending[irq][cpu]);
220
+ ret = (ret < 0 ? -0x8000 : 0x7fff);
99
- s->sgi_pending[irq][cpu] &= ~(1 << src);
221
+ }
100
- if (s->sgi_pending[irq][cpu] == 0) {
222
+ return ret;
101
- gic_clear_pending(s, irq, cpu);
223
+}
102
- }
224
+
103
- ret = irq | ((src & 0x7) << 10);
225
+uint32_t HELPER(neon_qrdmlsh_s16)(CPUARMState *env, uint32_t src1,
104
+ ret = gic_clear_pending_sgi(s, irq, cpu);
226
+ uint32_t src2, uint32_t src3)
105
} else {
227
+{
106
- /* Clear pending state for both level and edge triggered
228
+ uint16_t e1 = inl_qrdmlsh_s16(env, src1, src2, src3);
107
- * interrupts. (level triggered interrupts with an active line
229
+ uint16_t e2 = inl_qrdmlsh_s16(env, src1 >> 16, src2 >> 16, src3 >> 16);
108
- * remain pending, see gic_test_pending)
230
+ return deposit32(e1, 16, 16, e2);
109
- */
231
+}
110
gic_clear_pending(s, irq, cpu);
232
+
111
ret = irq;
233
+/* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
112
}
234
+uint32_t HELPER(neon_qrdmlah_s32)(CPUARMState *env, int32_t src1,
113
}
235
+ int32_t src2, int32_t src3)
114
236
+{
115
- gic_activate_irq(s, cpu, irq);
237
+ /* Simplify similarly to int_qrdmlah_s16 above. */
116
gic_update(s);
238
+ int64_t ret = (int64_t)src1 * src2;
117
DPRINTF("ACK %d\n", irq);
239
+ ret = ((int64_t)src3 << 31) + ret + (1 << 30);
118
return ret;
240
+ ret >>= 31;
241
+ if (ret != (int32_t)ret) {
242
+ SET_QC();
243
+ ret = (ret < 0 ? INT32_MIN : INT32_MAX);
244
+ }
245
+ return ret;
246
+}
247
+
248
+/* Signed saturating rounding doubling multiply-subtract high half, 32-bit */
249
+uint32_t HELPER(neon_qrdmlsh_s32)(CPUARMState *env, int32_t src1,
250
+ int32_t src2, int32_t src3)
251
+{
252
+ /* Simplify similarly to int_qrdmlsh_s16 above. */
253
+ int64_t ret = (int64_t)src1 * src2;
254
+ ret = ((int64_t)src3 << 31) - ret + (1 << 30);
255
+ ret >>= 31;
256
+ if (ret != (int32_t)ret) {
257
+ SET_QC();
258
+ ret = (ret < 0 ? INT32_MIN : INT32_MAX);
259
+ }
260
+ return ret;
261
+}
262
--
119
--
263
2.16.2
120
2.18.0
264
121
265
122
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Implement virtualization extensions in the gic_deactivate_irq() and
4
Message-id: 20180228193125.20577-13-richard.henderson@linaro.org
4
gic_complete_irq() functions.
5
6
Since the GICv2 specification is not entirely clear about what should
7
happen when the guest writes an invalid vIRQ to V_EOIR or V_DIR, we
8
adopt the behaviour observed on real hardware:
9
* When V_CTRL.EOIMode is false (EOI split is disabled):
10
- In case of an invalid vIRQ write to V_EOIR:
11
-> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
12
triggers a priority drop, and increments V_HCR.EOICount.
13
-> If V_APR is already cleared, nothing happens
14
15
- An invalid vIRQ write to V_DIR is ignored.
16
17
* When V_CTRL.EOIMode is true:
18
- In case of an invalid vIRQ write to V_EOIR:
19
-> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
20
triggers a priority drop.
21
-> If V_APR is already cleared, nothing happens
22
23
- An invalid vIRQ write to V_DIR increments V_HCR.EOICount.
24
25
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
26
Message-id: 20180727095421.386-13-luc.michel@greensocs.com
27
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
[PMM: renamed e1/e2/e3/e4 to use the same naming as the version
7
of the pseudocode in the Arm ARM]
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
---
29
---
10
target/arm/helper.h | 11 ++++
30
hw/intc/arm_gic.c | 51 +++++++++++++++++++++++++++++++++++++++++++----
11
target/arm/translate-a64.c | 94 +++++++++++++++++++++++++---
31
1 file changed, 47 insertions(+), 4 deletions(-)
12
target/arm/vec_helper.c | 149 +++++++++++++++++++++++++++++++++++++++++++++
13
3 files changed, 246 insertions(+), 8 deletions(-)
14
32
15
diff --git a/target/arm/helper.h b/target/arm/helper.h
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
34
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.h
35
--- a/hw/intc/arm_gic.c
18
+++ b/target/arm/helper.h
36
+++ b/hw/intc/arm_gic.c
19
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
37
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
20
DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
38
{
21
void, ptr, ptr, ptr, ptr, i32)
39
int group;
22
40
23
+DEF_HELPER_FLAGS_5(gvec_fcmlah, TCG_CALL_NO_RWG,
41
- if (irq >= s->num_irq) {
24
+ void, ptr, ptr, ptr, ptr, i32)
42
+ if (irq >= GIC_MAXIRQ || (!gic_is_vcpu(cpu) && irq >= s->num_irq)) {
25
+DEF_HELPER_FLAGS_5(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
43
/*
26
+ void, ptr, ptr, ptr, ptr, i32)
44
* This handles two cases:
27
+DEF_HELPER_FLAGS_5(gvec_fcmlas, TCG_CALL_NO_RWG,
45
* 1. If software writes the ID of a spurious interrupt [ie 1023]
28
+ void, ptr, ptr, ptr, ptr, i32)
46
* to the GICC_DIR, the GIC ignores that write.
29
+DEF_HELPER_FLAGS_5(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
47
* 2. If software writes the number of a non-existent interrupt
30
+ void, ptr, ptr, ptr, ptr, i32)
48
* this must be a subcase of "value written is not an active interrupt"
31
+DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
49
- * and so this is UNPREDICTABLE. We choose to ignore it.
32
+ void, ptr, ptr, ptr, ptr, i32)
50
+ * and so this is UNPREDICTABLE. We choose to ignore it. For vCPUs,
33
+
51
+ * all IRQs potentially exist, so this limit does not apply.
34
#ifdef TARGET_AARCH64
52
*/
35
#include "helper-a64.h"
36
#endif
37
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/translate-a64.c
40
+++ b/target/arm/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
42
}
43
feature = ARM_FEATURE_V8_RDM;
44
break;
45
+ case 0x8: /* FCMLA, #0 */
46
+ case 0x9: /* FCMLA, #90 */
47
+ case 0xa: /* FCMLA, #180 */
48
+ case 0xb: /* FCMLA, #270 */
49
case 0xc: /* FCADD, #90 */
50
case 0xe: /* FCADD, #270 */
51
if (size == 0
52
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
53
}
54
return;
55
56
+ case 0x8: /* FCMLA, #0 */
57
+ case 0x9: /* FCMLA, #90 */
58
+ case 0xa: /* FCMLA, #180 */
59
+ case 0xb: /* FCMLA, #270 */
60
+ rot = extract32(opcode, 0, 2);
61
+ switch (size) {
62
+ case 1:
63
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, true, rot,
64
+ gen_helper_gvec_fcmlah);
65
+ break;
66
+ case 2:
67
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
68
+ gen_helper_gvec_fcmlas);
69
+ break;
70
+ case 3:
71
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
72
+ gen_helper_gvec_fcmlad);
73
+ break;
74
+ default:
75
+ g_assert_not_reached();
76
+ }
77
+ return;
78
+
79
case 0xc: /* FCADD, #90 */
80
case 0xe: /* FCADD, #270 */
81
rot = extract32(opcode, 1, 1);
82
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
83
int rn = extract32(insn, 5, 5);
84
int rd = extract32(insn, 0, 5);
85
bool is_long = false;
86
- bool is_fp = false;
87
+ int is_fp = 0;
88
bool is_fp16 = false;
89
int index;
90
TCGv_ptr fpst;
91
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
92
case 0x05: /* FMLS */
93
case 0x09: /* FMUL */
94
case 0x19: /* FMULX */
95
- is_fp = true;
96
+ is_fp = 1;
97
break;
98
case 0x1d: /* SQRDMLAH */
99
case 0x1f: /* SQRDMLSH */
100
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
101
return;
102
}
103
break;
104
+ case 0x11: /* FCMLA #0 */
105
+ case 0x13: /* FCMLA #90 */
106
+ case 0x15: /* FCMLA #180 */
107
+ case 0x17: /* FCMLA #270 */
108
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
109
+ unallocated_encoding(s);
110
+ return;
111
+ }
112
+ is_fp = 2;
113
+ break;
114
default:
115
unallocated_encoding(s);
116
return;
53
return;
117
}
54
}
118
55
119
- if (is_fp) {
56
- group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
120
+ switch (is_fp) {
57
-
121
+ case 1: /* normal fp */
58
if (!gic_eoi_split(s, cpu, attrs)) {
122
/* convert insn encoded size to TCGMemOp size */
59
/* This is UNPREDICTABLE; we choose to ignore it */
123
switch (size) {
60
qemu_log_mask(LOG_GUEST_ERROR,
124
case 0: /* half-precision */
61
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
125
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
62
return;
126
- unallocated_encoding(s);
127
- return;
128
- }
129
size = MO_16;
130
+ is_fp16 = true;
131
break;
132
case MO_32: /* single precision */
133
case MO_64: /* double precision */
134
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
135
unallocated_encoding(s);
136
return;
137
}
138
- } else {
139
+ break;
140
+
141
+ case 2: /* complex fp */
142
+ /* Each indexable element is a complex pair. */
143
+ size <<= 1;
144
+ switch (size) {
145
+ case MO_32:
146
+ if (h && !is_q) {
147
+ unallocated_encoding(s);
148
+ return;
149
+ }
150
+ is_fp16 = true;
151
+ break;
152
+ case MO_64:
153
+ break;
154
+ default:
155
+ unallocated_encoding(s);
156
+ return;
157
+ }
158
+ break;
159
+
160
+ default: /* integer */
161
switch (size) {
162
case MO_8:
163
case MO_64:
164
unallocated_encoding(s);
165
return;
166
}
167
+ break;
168
+ }
169
+ if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
170
+ unallocated_encoding(s);
171
+ return;
172
}
63
}
173
64
174
/* Given TCGMemOp size, adjust register and indexing. */
65
+ if (gic_is_vcpu(cpu) && !gic_virq_is_valid(s, irq, cpu)) {
175
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
66
+ /* This vIRQ does not have an LR entry which is either active or
176
fpst = NULL;
67
+ * pending and active. Increment EOICount and ignore the write.
177
}
68
+ */
178
69
+ int rcpu = gic_get_vcpu_real_id(cpu);
179
+ switch (16 * u + opcode) {
70
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
180
+ case 0x11: /* FCMLA #0 */
181
+ case 0x13: /* FCMLA #90 */
182
+ case 0x15: /* FCMLA #180 */
183
+ case 0x17: /* FCMLA #270 */
184
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
185
+ vec_full_reg_offset(s, rn),
186
+ vec_reg_offset(s, rm, index, size), fpst,
187
+ is_q ? 16 : 8, vec_full_reg_size(s),
188
+ extract32(insn, 13, 2), /* rot */
189
+ size == MO_64
190
+ ? gen_helper_gvec_fcmlas_idx
191
+ : gen_helper_gvec_fcmlah_idx);
192
+ tcg_temp_free_ptr(fpst);
193
+ return;
71
+ return;
194
+ }
72
+ }
195
+
73
+
196
if (size == 3) {
74
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
197
TCGv_i64 tcg_idx = tcg_temp_new_i64();
198
int pass;
199
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/target/arm/vec_helper.c
202
+++ b/target/arm/vec_helper.c
203
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
204
}
205
clear_tail(d, opr_sz, simd_maxsz(desc));
206
}
207
+
75
+
208
+void HELPER(gvec_fcmlah)(void *vd, void *vn, void *vm,
76
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
209
+ void *vfpst, uint32_t desc)
77
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
210
+{
78
return;
211
+ uintptr_t opr_sz = simd_oprsz(desc);
79
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
212
+ float16 *d = vd;
80
int group;
213
+ float16 *n = vn;
81
214
+ float16 *m = vm;
82
DPRINTF("EOI %d\n", irq);
215
+ float_status *fpst = vfpst;
83
+ if (gic_is_vcpu(cpu)) {
216
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
84
+ /* The call to gic_prio_drop() will clear a bit in GICH_APR iff the
217
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
85
+ * running prio is < 0x100.
218
+ uint32_t neg_real = flip ^ neg_imag;
86
+ */
219
+ uintptr_t i;
87
+ bool prio_drop = s->running_priority[cpu] < 0x100;
220
+
88
+
221
+ /* Shift boolean to the sign bit so we can xor to negate. */
89
+ if (irq >= GIC_MAXIRQ) {
222
+ neg_real <<= 15;
90
+ /* Ignore spurious interrupt */
223
+ neg_imag <<= 15;
91
+ return;
92
+ }
224
+
93
+
225
+ for (i = 0; i < opr_sz / 2; i += 2) {
94
+ gic_drop_prio(s, cpu, 0);
226
+ float16 e2 = n[H2(i + flip)];
227
+ float16 e1 = m[H2(i + flip)] ^ neg_real;
228
+ float16 e4 = e2;
229
+ float16 e3 = m[H2(i + 1 - flip)] ^ neg_imag;
230
+
95
+
231
+ d[H2(i)] = float16_muladd(e2, e1, d[H2(i)], 0, fpst);
96
+ if (!gic_eoi_split(s, cpu, attrs)) {
232
+ d[H2(i + 1)] = float16_muladd(e4, e3, d[H2(i + 1)], 0, fpst);
97
+ bool valid = gic_virq_is_valid(s, irq, cpu);
98
+ if (prio_drop && !valid) {
99
+ /* We are in a situation where:
100
+ * - V_CTRL.EOIMode is false (no EOI split),
101
+ * - The call to gic_drop_prio() cleared a bit in GICH_APR,
102
+ * - This vIRQ does not have an LR entry which is either
103
+ * active or pending and active.
104
+ * In that case, we must increment EOICount.
105
+ */
106
+ int rcpu = gic_get_vcpu_real_id(cpu);
107
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
108
+ } else if (valid) {
109
+ gic_clear_active(s, irq, cpu);
110
+ }
111
+ }
112
+
113
+ return;
233
+ }
114
+ }
234
+ clear_tail(d, opr_sz, simd_maxsz(desc));
235
+}
236
+
115
+
237
+void HELPER(gvec_fcmlah_idx)(void *vd, void *vn, void *vm,
116
if (irq >= s->num_irq) {
238
+ void *vfpst, uint32_t desc)
117
/* This handles two cases:
239
+{
118
* 1. If software writes the ID of a spurious interrupt [ie 1023]
240
+ uintptr_t opr_sz = simd_oprsz(desc);
241
+ float16 *d = vd;
242
+ float16 *n = vn;
243
+ float16 *m = vm;
244
+ float_status *fpst = vfpst;
245
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
246
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
247
+ uint32_t neg_real = flip ^ neg_imag;
248
+ uintptr_t i;
249
+ float16 e1 = m[H2(flip)];
250
+ float16 e3 = m[H2(1 - flip)];
251
+
252
+ /* Shift boolean to the sign bit so we can xor to negate. */
253
+ neg_real <<= 15;
254
+ neg_imag <<= 15;
255
+ e1 ^= neg_real;
256
+ e3 ^= neg_imag;
257
+
258
+ for (i = 0; i < opr_sz / 2; i += 2) {
259
+ float16 e2 = n[H2(i + flip)];
260
+ float16 e4 = e2;
261
+
262
+ d[H2(i)] = float16_muladd(e2, e1, d[H2(i)], 0, fpst);
263
+ d[H2(i + 1)] = float16_muladd(e4, e3, d[H2(i + 1)], 0, fpst);
264
+ }
265
+ clear_tail(d, opr_sz, simd_maxsz(desc));
266
+}
267
+
268
+void HELPER(gvec_fcmlas)(void *vd, void *vn, void *vm,
269
+ void *vfpst, uint32_t desc)
270
+{
271
+ uintptr_t opr_sz = simd_oprsz(desc);
272
+ float32 *d = vd;
273
+ float32 *n = vn;
274
+ float32 *m = vm;
275
+ float_status *fpst = vfpst;
276
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
277
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
278
+ uint32_t neg_real = flip ^ neg_imag;
279
+ uintptr_t i;
280
+
281
+ /* Shift boolean to the sign bit so we can xor to negate. */
282
+ neg_real <<= 31;
283
+ neg_imag <<= 31;
284
+
285
+ for (i = 0; i < opr_sz / 4; i += 2) {
286
+ float32 e2 = n[H4(i + flip)];
287
+ float32 e1 = m[H4(i + flip)] ^ neg_real;
288
+ float32 e4 = e2;
289
+ float32 e3 = m[H4(i + 1 - flip)] ^ neg_imag;
290
+
291
+ d[H4(i)] = float32_muladd(e2, e1, d[H4(i)], 0, fpst);
292
+ d[H4(i + 1)] = float32_muladd(e4, e3, d[H4(i + 1)], 0, fpst);
293
+ }
294
+ clear_tail(d, opr_sz, simd_maxsz(desc));
295
+}
296
+
297
+void HELPER(gvec_fcmlas_idx)(void *vd, void *vn, void *vm,
298
+ void *vfpst, uint32_t desc)
299
+{
300
+ uintptr_t opr_sz = simd_oprsz(desc);
301
+ float32 *d = vd;
302
+ float32 *n = vn;
303
+ float32 *m = vm;
304
+ float_status *fpst = vfpst;
305
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
306
+ uint32_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
307
+ uint32_t neg_real = flip ^ neg_imag;
308
+ uintptr_t i;
309
+ float32 e1 = m[H4(flip)];
310
+ float32 e3 = m[H4(1 - flip)];
311
+
312
+ /* Shift boolean to the sign bit so we can xor to negate. */
313
+ neg_real <<= 31;
314
+ neg_imag <<= 31;
315
+ e1 ^= neg_real;
316
+ e3 ^= neg_imag;
317
+
318
+ for (i = 0; i < opr_sz / 4; i += 2) {
319
+ float32 e2 = n[H4(i + flip)];
320
+ float32 e4 = e2;
321
+
322
+ d[H4(i)] = float32_muladd(e2, e1, d[H4(i)], 0, fpst);
323
+ d[H4(i + 1)] = float32_muladd(e4, e3, d[H4(i + 1)], 0, fpst);
324
+ }
325
+ clear_tail(d, opr_sz, simd_maxsz(desc));
326
+}
327
+
328
+void HELPER(gvec_fcmlad)(void *vd, void *vn, void *vm,
329
+ void *vfpst, uint32_t desc)
330
+{
331
+ uintptr_t opr_sz = simd_oprsz(desc);
332
+ float64 *d = vd;
333
+ float64 *n = vn;
334
+ float64 *m = vm;
335
+ float_status *fpst = vfpst;
336
+ intptr_t flip = extract32(desc, SIMD_DATA_SHIFT, 1);
337
+ uint64_t neg_imag = extract32(desc, SIMD_DATA_SHIFT + 1, 1);
338
+ uint64_t neg_real = flip ^ neg_imag;
339
+ uintptr_t i;
340
+
341
+ /* Shift boolean to the sign bit so we can xor to negate. */
342
+ neg_real <<= 63;
343
+ neg_imag <<= 63;
344
+
345
+ for (i = 0; i < opr_sz / 8; i += 2) {
346
+ float64 e2 = n[i + flip];
347
+ float64 e1 = m[i + flip] ^ neg_real;
348
+ float64 e4 = e2;
349
+ float64 e3 = m[i + 1 - flip] ^ neg_imag;
350
+
351
+ d[i] = float64_muladd(e2, e1, d[i], 0, fpst);
352
+ d[i + 1] = float64_muladd(e4, e3, d[i + 1], 0, fpst);
353
+ }
354
+ clear_tail(d, opr_sz, simd_maxsz(desc));
355
+}
356
--
119
--
357
2.16.2
120
2.18.0
358
121
359
122
diff view generated by jsdifflib
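The gvec_fcmla* helpers above avoid calling a floating-point negate by xoring the IEEE sign bit of one operand: neg_real/neg_imag are shifted up to the sign-bit position and xored into the raw half/single/double-precision bit patterns. A minimal sketch of that trick for float32, in plain C with a hypothetical xor_negate helper (not QEMU's softfloat types):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Negate a float by flipping bit 31 of its bit pattern, the same way the
 * FCMLA helpers apply their neg_real/neg_imag masks to raw operand bits. */
static float xor_negate(float x, uint32_t neg_mask)
{
    uint32_t bits;

    memcpy(&bits, &x, sizeof(bits));
    bits ^= neg_mask;
    memcpy(&x, &bits, sizeof(x));
    return x;
}

int main(void)
{
    uint32_t neg_real = 1u << 31;   /* "negate" mask: only the sign bit set */
    uint32_t no_neg = 0;            /* "don't negate" mask */

    printf("%f %f\n", xor_negate(1.5f, neg_real), xor_negate(1.5f, no_neg));
    /* prints: -1.500000 1.500000 */
    return 0;
}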
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Include the U bit in the switches rather than testing separately.
3
Implement virtualization extensions in the gic_cpu_read() and
4
gic_cpu_write() functions. Those are the last bits missing to fully
5
support virtualization extensions in the CPU interface path.
4
6
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180228193125.20577-3-richard.henderson@linaro.org
9
Message-id: 20180727095421.386-14-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
target/arm/translate-a64.c | 129 +++++++++++++++++++++------------------------
12
hw/intc/arm_gic.c | 20 +++++++++++++++-----
11
1 file changed, 61 insertions(+), 68 deletions(-)
13
1 file changed, 15 insertions(+), 5 deletions(-)
12
14
13
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate-a64.c
17
--- a/hw/intc/arm_gic.c
16
+++ b/target/arm/translate-a64.c
18
+++ b/hw/intc/arm_gic.c
17
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
19
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
18
int index;
20
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
19
TCGv_ptr fpst;
21
{
20
22
int regno = (offset - 0xd0) / 4;
21
- switch (opcode) {
23
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;
22
- case 0x0: /* MLA */
24
23
- case 0x4: /* MLS */
25
- if (regno >= GIC_NR_APRS || s->revision != 2) {
24
- if (!u || is_scalar) {
26
+ if (regno >= nr_aprs || s->revision != 2) {
25
+ switch (16 * u + opcode) {
27
*data = 0;
26
+ case 0x08: /* MUL */
28
+ } else if (gic_is_vcpu(cpu)) {
27
+ case 0x10: /* MLA */
29
+ *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
28
+ case 0x14: /* MLS */
30
} else if (gic_cpu_ns_access(s, cpu, attrs)) {
29
+ if (is_scalar) {
31
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
30
unallocated_encoding(s);
32
*data = gic_apr_ns_view(s, regno, cpu);
31
return;
33
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
34
int regno = (offset - 0xe0) / 4;
35
36
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
37
- gic_cpu_ns_access(s, cpu, attrs)) {
38
+ gic_cpu_ns_access(s, cpu, attrs) || gic_is_vcpu(cpu)) {
39
*data = 0;
40
} else {
41
*data = s->nsapr[regno][cpu];
42
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
43
s->abpr[cpu] = MAX(value & 0x7, GIC_MIN_ABPR);
44
}
45
} else {
46
- s->bpr[cpu] = MAX(value & 0x7, GIC_MIN_BPR);
47
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
48
+ s->bpr[cpu] = MAX(value & 0x7, min_bpr);
32
}
49
}
33
break;
50
break;
34
- case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
51
case 0x10: /* End Of Interrupt */
35
- case 0x6: /* SMLSL, SMLSL2, UMLSL, UMLSL2 */
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
36
- case 0xa: /* SMULL, SMULL2, UMULL, UMULL2 */
53
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
37
+ case 0x02: /* SMLAL, SMLAL2 */
54
{
38
+ case 0x12: /* UMLAL, UMLAL2 */
55
int regno = (offset - 0xd0) / 4;
39
+ case 0x06: /* SMLSL, SMLSL2 */
56
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;
40
+ case 0x16: /* UMLSL, UMLSL2 */
57
41
+ case 0x0a: /* SMULL, SMULL2 */
58
- if (regno >= GIC_NR_APRS || s->revision != 2) {
42
+ case 0x1a: /* UMULL, UMULL2 */
59
+ if (regno >= nr_aprs || s->revision != 2) {
43
if (is_scalar) {
60
return MEMTX_OK;
44
unallocated_encoding(s);
45
return;
46
}
61
}
47
is_long = true;
62
- if (gic_cpu_ns_access(s, cpu, attrs)) {
48
break;
63
+ if (gic_is_vcpu(cpu)) {
49
- case 0x3: /* SQDMLAL, SQDMLAL2 */
64
+ s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
50
- case 0x7: /* SQDMLSL, SQDMLSL2 */
65
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
51
- case 0xb: /* SQDMULL, SQDMULL2 */
66
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
52
+ case 0x03: /* SQDMLAL, SQDMLAL2 */
67
gic_apr_write_ns_view(s, regno, cpu, value);
53
+ case 0x07: /* SQDMLSL, SQDMLSL2 */
68
} else {
54
+ case 0x0b: /* SQDMULL, SQDMULL2 */
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
55
is_long = true;
70
if (regno >= GIC_NR_APRS || s->revision != 2) {
56
- /* fall through */
71
return MEMTX_OK;
57
- case 0xc: /* SQDMULH */
72
}
58
- case 0xd: /* SQRDMULH */
73
+ if (gic_is_vcpu(cpu)) {
59
- if (u) {
74
+ return MEMTX_OK;
60
- unallocated_encoding(s);
75
+ }
61
- return;
76
if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
62
- }
77
return MEMTX_OK;
63
break;
78
}
64
- case 0x8: /* MUL */
65
- if (u || is_scalar) {
66
- unallocated_encoding(s);
67
- return;
68
- }
69
+ case 0x0c: /* SQDMULH */
70
+ case 0x0d: /* SQRDMULH */
71
break;
72
- case 0x1: /* FMLA */
73
- case 0x5: /* FMLS */
74
- if (u) {
75
- unallocated_encoding(s);
76
- return;
77
- }
78
- /* fall through */
79
- case 0x9: /* FMUL, FMULX */
80
+ case 0x01: /* FMLA */
81
+ case 0x05: /* FMLS */
82
+ case 0x09: /* FMUL */
83
+ case 0x19: /* FMULX */
84
if (size == 1) {
85
unallocated_encoding(s);
86
return;
87
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
88
89
read_vec_element(s, tcg_op, rn, pass, MO_64);
90
91
- switch (opcode) {
92
- case 0x5: /* FMLS */
93
+ switch (16 * u + opcode) {
94
+ case 0x05: /* FMLS */
95
/* As usual for ARM, separate negation for fused multiply-add */
96
gen_helper_vfp_negd(tcg_op, tcg_op);
97
/* fall through */
98
- case 0x1: /* FMLA */
99
+ case 0x01: /* FMLA */
100
read_vec_element(s, tcg_res, rd, pass, MO_64);
101
gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
102
break;
103
- case 0x9: /* FMUL, FMULX */
104
- if (u) {
105
- gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
106
- } else {
107
- gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
108
- }
109
+ case 0x09: /* FMUL */
110
+ gen_helper_vfp_muld(tcg_res, tcg_op, tcg_idx, fpst);
111
+ break;
112
+ case 0x19: /* FMULX */
113
+ gen_helper_vfp_mulxd(tcg_res, tcg_op, tcg_idx, fpst);
114
break;
115
default:
116
g_assert_not_reached();
117
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
118
119
read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
120
121
- switch (opcode) {
122
- case 0x0: /* MLA */
123
- case 0x4: /* MLS */
124
- case 0x8: /* MUL */
125
+ switch (16 * u + opcode) {
126
+ case 0x08: /* MUL */
127
+ case 0x10: /* MLA */
128
+ case 0x14: /* MLS */
129
{
130
static NeonGenTwoOpFn * const fns[2][2] = {
131
{ gen_helper_neon_add_u16, gen_helper_neon_sub_u16 },
132
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
133
genfn(tcg_res, tcg_op, tcg_res);
134
break;
135
}
136
- case 0x5: /* FMLS */
137
- case 0x1: /* FMLA */
138
+ case 0x05: /* FMLS */
139
+ case 0x01: /* FMLA */
140
read_vec_element_i32(s, tcg_res, rd, pass,
141
is_scalar ? size : MO_32);
142
switch (size) {
143
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
144
g_assert_not_reached();
145
}
146
break;
147
- case 0x9: /* FMUL, FMULX */
148
+ case 0x09: /* FMUL */
149
switch (size) {
150
case 1:
151
- if (u) {
152
- if (is_scalar) {
153
- gen_helper_advsimd_mulxh(tcg_res, tcg_op,
154
- tcg_idx, fpst);
155
- } else {
156
- gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
157
- tcg_idx, fpst);
158
- }
159
+ if (is_scalar) {
160
+ gen_helper_advsimd_mulh(tcg_res, tcg_op,
161
+ tcg_idx, fpst);
162
} else {
163
- if (is_scalar) {
164
- gen_helper_advsimd_mulh(tcg_res, tcg_op,
165
- tcg_idx, fpst);
166
- } else {
167
- gen_helper_advsimd_mul2h(tcg_res, tcg_op,
168
- tcg_idx, fpst);
169
- }
170
+ gen_helper_advsimd_mul2h(tcg_res, tcg_op,
171
+ tcg_idx, fpst);
172
}
173
break;
174
case 2:
175
- if (u) {
176
- gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
177
- } else {
178
- gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
179
- }
180
+ gen_helper_vfp_muls(tcg_res, tcg_op, tcg_idx, fpst);
181
break;
182
default:
183
g_assert_not_reached();
184
}
185
break;
186
- case 0xc: /* SQDMULH */
187
+ case 0x19: /* FMULX */
188
+ switch (size) {
189
+ case 1:
190
+ if (is_scalar) {
191
+ gen_helper_advsimd_mulxh(tcg_res, tcg_op,
192
+ tcg_idx, fpst);
193
+ } else {
194
+ gen_helper_advsimd_mulx2h(tcg_res, tcg_op,
195
+ tcg_idx, fpst);
196
+ }
197
+ break;
198
+ case 2:
199
+ gen_helper_vfp_mulxs(tcg_res, tcg_op, tcg_idx, fpst);
200
+ break;
201
+ default:
202
+ g_assert_not_reached();
203
+ }
204
+ break;
205
+ case 0x0c: /* SQDMULH */
206
if (size == 1) {
207
gen_helper_neon_qdmulh_s16(tcg_res, cpu_env,
208
tcg_op, tcg_idx);
209
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
210
tcg_op, tcg_idx);
211
}
212
break;
213
- case 0xd: /* SQRDMULH */
214
+ case 0x0d: /* SQRDMULH */
215
if (size == 1) {
216
gen_helper_neon_qrdmulh_s16(tcg_res, cpu_env,
217
tcg_op, tcg_idx);
218
--
79
--
219
2.16.2
80
2.18.0
220
81
221
82
diff view generated by jsdifflib
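The translate-a64.c change above folds the U bit into the switch key as "16 * u + opcode", so for example FMUL (u=0, opcode 0x9) and FMULX (u=1, opcode 0x9) get their own case labels instead of a nested "if (u)" inside a shared case. A toy decoder sketch of the same idea (standalone C, not QEMU's decoder):

#include <stdio.h>

/* Toy decoder illustrating the "16 * u + opcode" trick: the flag bit is
 * merged into the switch key so each variant has its own case label. */
static const char *decode(int u, int opcode)
{
    switch (16 * u + opcode) {
    case 0x09:                /* u = 0, opcode = 0x9 */
        return "FMUL";
    case 0x19:                /* u = 1, opcode = 0x9 */
        return "FMULX";
    default:
        return "unallocated";
    }
}

int main(void)
{
    printf("%s %s\n", decode(0, 9), decode(1, 9));  /* FMUL FMULX */
    return 0;
}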
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Initial commit of the ZynqMP RTC device.
3
Add the read/write functions to handle accesses to the vCPU interface.
4
Those accesses are forwarded to the real CPU interface, with the CPU id
5
being converted to the corresponding vCPU id (vCPU id = CPU id +
6
GIC_NCPU).
4
7
5
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
8
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20180727095421.386-15-luc.michel@greensocs.com
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
12
---
9
hw/timer/Makefile.objs | 1 +
13
hw/intc/arm_gic.c | 37 +++++++++++++++++++++++++++++++++++--
10
include/hw/timer/xlnx-zynqmp-rtc.h | 84 +++++++++++++++
14
1 file changed, 35 insertions(+), 2 deletions(-)
11
hw/timer/xlnx-zynqmp-rtc.c | 214 +++++++++++++++++++++++++++++++++++++
12
3 files changed, 299 insertions(+)
13
create mode 100644 include/hw/timer/xlnx-zynqmp-rtc.h
14
create mode 100644 hw/timer/xlnx-zynqmp-rtc.c
15
15
16
diff --git a/hw/timer/Makefile.objs b/hw/timer/Makefile.objs
16
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/timer/Makefile.objs
18
--- a/hw/intc/arm_gic.c
19
+++ b/hw/timer/Makefile.objs
19
+++ b/hw/intc/arm_gic.c
20
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_IMX) += imx_epit.o
20
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_do_cpu_write(void *opaque, hwaddr addr,
21
common-obj-$(CONFIG_IMX) += imx_gpt.o
21
return gic_cpu_write(s, id, addr, value, attrs);
22
common-obj-$(CONFIG_LM32) += lm32_timer.o
22
}
23
common-obj-$(CONFIG_MILKYMIST) += milkymist-sysctl.o
23
24
+common-obj-$(CONFIG_XLNX_ZYNQMP) += xlnx-zynqmp-rtc.o
24
+static MemTxResult gic_thisvcpu_read(void *opaque, hwaddr addr, uint64_t *data,
25
25
+ unsigned size, MemTxAttrs attrs)
26
obj-$(CONFIG_ALTERA_TIMER) += altera_timer.o
26
+{
27
obj-$(CONFIG_EXYNOS4) += exynos4210_mct.o
27
+ GICState *s = (GICState *)opaque;
28
diff --git a/include/hw/timer/xlnx-zynqmp-rtc.h b/include/hw/timer/xlnx-zynqmp-rtc.h
29
new file mode 100644
30
index XXXXXXX..XXXXXXX
31
--- /dev/null
32
+++ b/include/hw/timer/xlnx-zynqmp-rtc.h
33
@@ -XXX,XX +XXX,XX @@
34
+/*
35
+ * QEMU model of the Xilinx ZynqMP Real Time Clock (RTC).
36
+ *
37
+ * Copyright (c) 2017 Xilinx Inc.
38
+ *
39
+ * Written-by: Alistair Francis <alistair.francis@xilinx.com>
40
+ *
41
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
42
+ * of this software and associated documentation files (the "Software"), to deal
43
+ * in the Software without restriction, including without limitation the rights
44
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
45
+ * copies of the Software, and to permit persons to whom the Software is
46
+ * furnished to do so, subject to the following conditions:
47
+ *
48
+ * The above copyright notice and this permission notice shall be included in
49
+ * all copies or substantial portions of the Software.
50
+ *
51
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
52
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
53
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
54
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
55
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
56
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
57
+ * THE SOFTWARE.
58
+ */
59
+
28
+
60
+#include "hw/register.h"
29
+ return gic_cpu_read(s, gic_get_current_vcpu(s), addr, data, attrs);
61
+
62
+#define TYPE_XLNX_ZYNQMP_RTC "xlnx-zynmp.rtc"
63
+
64
+#define XLNX_ZYNQMP_RTC(obj) \
65
+ OBJECT_CHECK(XlnxZynqMPRTC, (obj), TYPE_XLNX_ZYNQMP_RTC)
66
+
67
+REG32(SET_TIME_WRITE, 0x0)
68
+REG32(SET_TIME_READ, 0x4)
69
+REG32(CALIB_WRITE, 0x8)
70
+ FIELD(CALIB_WRITE, FRACTION_EN, 20, 1)
71
+ FIELD(CALIB_WRITE, FRACTION_DATA, 16, 4)
72
+ FIELD(CALIB_WRITE, MAX_TICK, 0, 16)
73
+REG32(CALIB_READ, 0xc)
74
+ FIELD(CALIB_READ, FRACTION_EN, 20, 1)
75
+ FIELD(CALIB_READ, FRACTION_DATA, 16, 4)
76
+ FIELD(CALIB_READ, MAX_TICK, 0, 16)
77
+REG32(CURRENT_TIME, 0x10)
78
+REG32(CURRENT_TICK, 0x14)
79
+ FIELD(CURRENT_TICK, VALUE, 0, 16)
80
+REG32(ALARM, 0x18)
81
+REG32(RTC_INT_STATUS, 0x20)
82
+ FIELD(RTC_INT_STATUS, ALARM, 1, 1)
83
+ FIELD(RTC_INT_STATUS, SECONDS, 0, 1)
84
+REG32(RTC_INT_MASK, 0x24)
85
+ FIELD(RTC_INT_MASK, ALARM, 1, 1)
86
+ FIELD(RTC_INT_MASK, SECONDS, 0, 1)
87
+REG32(RTC_INT_EN, 0x28)
88
+ FIELD(RTC_INT_EN, ALARM, 1, 1)
89
+ FIELD(RTC_INT_EN, SECONDS, 0, 1)
90
+REG32(RTC_INT_DIS, 0x2c)
91
+ FIELD(RTC_INT_DIS, ALARM, 1, 1)
92
+ FIELD(RTC_INT_DIS, SECONDS, 0, 1)
93
+REG32(ADDR_ERROR, 0x30)
94
+ FIELD(ADDR_ERROR, STATUS, 0, 1)
95
+REG32(ADDR_ERROR_INT_MASK, 0x34)
96
+ FIELD(ADDR_ERROR_INT_MASK, MASK, 0, 1)
97
+REG32(ADDR_ERROR_INT_EN, 0x38)
98
+ FIELD(ADDR_ERROR_INT_EN, MASK, 0, 1)
99
+REG32(ADDR_ERROR_INT_DIS, 0x3c)
100
+ FIELD(ADDR_ERROR_INT_DIS, MASK, 0, 1)
101
+REG32(CONTROL, 0x40)
102
+ FIELD(CONTROL, BATTERY_DISABLE, 31, 1)
103
+ FIELD(CONTROL, OSC_CNTRL, 24, 4)
104
+ FIELD(CONTROL, SLVERR_ENABLE, 0, 1)
105
+REG32(SAFETY_CHK, 0x50)
106
+
107
+#define XLNX_ZYNQMP_RTC_R_MAX (R_SAFETY_CHK + 1)
108
+
109
+typedef struct XlnxZynqMPRTC {
110
+ SysBusDevice parent_obj;
111
+ MemoryRegion iomem;
112
+ qemu_irq irq_rtc_int;
113
+ qemu_irq irq_addr_error_int;
114
+
115
+ uint32_t regs[XLNX_ZYNQMP_RTC_R_MAX];
116
+ RegisterInfo regs_info[XLNX_ZYNQMP_RTC_R_MAX];
117
+} XlnxZynqMPRTC;
118
diff --git a/hw/timer/xlnx-zynqmp-rtc.c b/hw/timer/xlnx-zynqmp-rtc.c
119
new file mode 100644
120
index XXXXXXX..XXXXXXX
121
--- /dev/null
122
+++ b/hw/timer/xlnx-zynqmp-rtc.c
123
@@ -XXX,XX +XXX,XX @@
124
+/*
125
+ * QEMU model of the Xilinx ZynqMP Real Time Clock (RTC).
126
+ *
127
+ * Copyright (c) 2017 Xilinx Inc.
128
+ *
129
+ * Written-by: Alistair Francis <alistair.francis@xilinx.com>
130
+ *
131
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
132
+ * of this software and associated documentation files (the "Software"), to deal
133
+ * in the Software without restriction, including without limitation the rights
134
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
135
+ * copies of the Software, and to permit persons to whom the Software is
136
+ * furnished to do so, subject to the following conditions:
137
+ *
138
+ * The above copyright notice and this permission notice shall be included in
139
+ * all copies or substantial portions of the Software.
140
+ *
141
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
142
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
143
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
144
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
145
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
146
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
147
+ * THE SOFTWARE.
148
+ */
149
+
150
+#include "qemu/osdep.h"
151
+#include "hw/sysbus.h"
152
+#include "hw/register.h"
153
+#include "qemu/bitops.h"
154
+#include "qemu/log.h"
155
+#include "hw/timer/xlnx-zynqmp-rtc.h"
156
+
157
+#ifndef XLNX_ZYNQMP_RTC_ERR_DEBUG
158
+#define XLNX_ZYNQMP_RTC_ERR_DEBUG 0
159
+#endif
160
+
161
+static void rtc_int_update_irq(XlnxZynqMPRTC *s)
162
+{
163
+ bool pending = s->regs[R_RTC_INT_STATUS] & ~s->regs[R_RTC_INT_MASK];
164
+ qemu_set_irq(s->irq_rtc_int, pending);
165
+}
30
+}
166
+
31
+
167
+static void addr_error_int_update_irq(XlnxZynqMPRTC *s)
32
+static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
33
+ uint64_t value, unsigned size,
34
+ MemTxAttrs attrs)
168
+{
35
+{
169
+ bool pending = s->regs[R_ADDR_ERROR] & ~s->regs[R_ADDR_ERROR_INT_MASK];
36
+ GICState *s = (GICState *)opaque;
170
+ qemu_set_irq(s->irq_addr_error_int, pending);
37
+
38
+ return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
171
+}
39
+}
172
+
40
+
173
+static void rtc_int_status_postw(RegisterInfo *reg, uint64_t val64)
41
static const MemoryRegionOps gic_ops[2] = {
174
+{
42
{
175
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
43
.read_with_attrs = gic_dist_read,
176
+ rtc_int_update_irq(s);
44
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
177
+}
45
.endianness = DEVICE_NATIVE_ENDIAN,
178
+
46
};
179
+static uint64_t rtc_int_en_prew(RegisterInfo *reg, uint64_t val64)
47
180
+{
48
+static const MemoryRegionOps gic_virt_ops[2] = {
181
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
49
+ {
182
+
50
+ .read_with_attrs = NULL,
183
+ s->regs[R_RTC_INT_MASK] &= (uint32_t) ~val64;
51
+ .write_with_attrs = NULL,
184
+ rtc_int_update_irq(s);
52
+ .endianness = DEVICE_NATIVE_ENDIAN,
185
+ return 0;
53
+ },
186
+}
54
+ {
187
+
55
+ .read_with_attrs = gic_thisvcpu_read,
188
+static uint64_t rtc_int_dis_prew(RegisterInfo *reg, uint64_t val64)
56
+ .write_with_attrs = gic_thisvcpu_write,
189
+{
57
+ .endianness = DEVICE_NATIVE_ENDIAN,
190
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
191
+
192
+ s->regs[R_RTC_INT_MASK] |= (uint32_t) val64;
193
+ rtc_int_update_irq(s);
194
+ return 0;
195
+}
196
+
197
+static void addr_error_postw(RegisterInfo *reg, uint64_t val64)
198
+{
199
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
200
+ addr_error_int_update_irq(s);
201
+}
202
+
203
+static uint64_t addr_error_int_en_prew(RegisterInfo *reg, uint64_t val64)
204
+{
205
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
206
+
207
+ s->regs[R_ADDR_ERROR_INT_MASK] &= (uint32_t) ~val64;
208
+ addr_error_int_update_irq(s);
209
+ return 0;
210
+}
211
+
212
+static uint64_t addr_error_int_dis_prew(RegisterInfo *reg, uint64_t val64)
213
+{
214
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(reg->opaque);
215
+
216
+ s->regs[R_ADDR_ERROR_INT_MASK] |= (uint32_t) val64;
217
+ addr_error_int_update_irq(s);
218
+ return 0;
219
+}
220
+
221
+static const RegisterAccessInfo rtc_regs_info[] = {
222
+ { .name = "SET_TIME_WRITE", .addr = A_SET_TIME_WRITE,
223
+ },{ .name = "SET_TIME_READ", .addr = A_SET_TIME_READ,
224
+ .ro = 0xffffffff,
225
+ },{ .name = "CALIB_WRITE", .addr = A_CALIB_WRITE,
226
+ },{ .name = "CALIB_READ", .addr = A_CALIB_READ,
227
+ .ro = 0x1fffff,
228
+ },{ .name = "CURRENT_TIME", .addr = A_CURRENT_TIME,
229
+ .ro = 0xffffffff,
230
+ },{ .name = "CURRENT_TICK", .addr = A_CURRENT_TICK,
231
+ .ro = 0xffff,
232
+ },{ .name = "ALARM", .addr = A_ALARM,
233
+ },{ .name = "RTC_INT_STATUS", .addr = A_RTC_INT_STATUS,
234
+ .w1c = 0x3,
235
+ .post_write = rtc_int_status_postw,
236
+ },{ .name = "RTC_INT_MASK", .addr = A_RTC_INT_MASK,
237
+ .reset = 0x3,
238
+ .ro = 0x3,
239
+ },{ .name = "RTC_INT_EN", .addr = A_RTC_INT_EN,
240
+ .pre_write = rtc_int_en_prew,
241
+ },{ .name = "RTC_INT_DIS", .addr = A_RTC_INT_DIS,
242
+ .pre_write = rtc_int_dis_prew,
243
+ },{ .name = "ADDR_ERROR", .addr = A_ADDR_ERROR,
244
+ .w1c = 0x1,
245
+ .post_write = addr_error_postw,
246
+ },{ .name = "ADDR_ERROR_INT_MASK", .addr = A_ADDR_ERROR_INT_MASK,
247
+ .reset = 0x1,
248
+ .ro = 0x1,
249
+ },{ .name = "ADDR_ERROR_INT_EN", .addr = A_ADDR_ERROR_INT_EN,
250
+ .pre_write = addr_error_int_en_prew,
251
+ },{ .name = "ADDR_ERROR_INT_DIS", .addr = A_ADDR_ERROR_INT_DIS,
252
+ .pre_write = addr_error_int_dis_prew,
253
+ },{ .name = "CONTROL", .addr = A_CONTROL,
254
+ .reset = 0x1000000,
255
+ .rsvd = 0x70fffffe,
256
+ },{ .name = "SAFETY_CHK", .addr = A_SAFETY_CHK,
257
+ }
58
+ }
258
+};
59
+};
259
+
60
+
260
+static void rtc_reset(DeviceState *dev)
61
static void arm_gic_realize(DeviceState *dev, Error **errp)
261
+{
62
{
262
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(dev);
63
/* Device instance realize function for the GIC sysbus device */
263
+ unsigned int i;
64
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
264
+
65
return;
265
+ for (i = 0; i < ARRAY_SIZE(s->regs_info); ++i) {
66
}
266
+ register_reset(&s->regs_info[i]);
67
267
+ }
68
- /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
268
+
69
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
269
+ rtc_int_update_irq(s);
70
+ /* This creates distributor, main CPU interface (s->cpuiomem[0]) and if
270
+ addr_error_int_update_irq(s);
71
+ * enabled, virtualization extensions related interfaces (main virtual
271
+}
72
+ * interface (s->vifaceiomem[0]) and virtual CPU interface).
272
+
73
+ */
273
+static const MemoryRegionOps rtc_ops = {
74
+ gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, gic_virt_ops);
274
+ .read = register_read_memory,
75
275
+ .write = register_write_memory,
76
/* Extra core-specific regions for the CPU interfaces. This is
276
+ .endianness = DEVICE_LITTLE_ENDIAN,
77
* necessary for "franken-GIC" implementations, for example on
277
+ .valid = {
278
+ .min_access_size = 4,
279
+ .max_access_size = 4,
280
+ },
281
+};
282
+
283
+static void rtc_init(Object *obj)
284
+{
285
+ XlnxZynqMPRTC *s = XLNX_ZYNQMP_RTC(obj);
286
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
287
+ RegisterInfoArray *reg_array;
288
+
289
+ memory_region_init(&s->iomem, obj, TYPE_XLNX_ZYNQMP_RTC,
290
+ XLNX_ZYNQMP_RTC_R_MAX * 4);
291
+ reg_array =
292
+ register_init_block32(DEVICE(obj), rtc_regs_info,
293
+ ARRAY_SIZE(rtc_regs_info),
294
+ s->regs_info, s->regs,
295
+ &rtc_ops,
296
+ XLNX_ZYNQMP_RTC_ERR_DEBUG,
297
+ XLNX_ZYNQMP_RTC_R_MAX * 4);
298
+ memory_region_add_subregion(&s->iomem,
299
+ 0x0,
300
+ &reg_array->mem);
301
+ sysbus_init_mmio(sbd, &s->iomem);
302
+ sysbus_init_irq(sbd, &s->irq_rtc_int);
303
+ sysbus_init_irq(sbd, &s->irq_addr_error_int);
304
+}
305
+
306
+static const VMStateDescription vmstate_rtc = {
307
+ .name = TYPE_XLNX_ZYNQMP_RTC,
308
+ .version_id = 1,
309
+ .minimum_version_id = 1,
310
+ .fields = (VMStateField[]) {
311
+ VMSTATE_UINT32_ARRAY(regs, XlnxZynqMPRTC, XLNX_ZYNQMP_RTC_R_MAX),
312
+ VMSTATE_END_OF_LIST(),
313
+ }
314
+};
315
+
316
+static void rtc_class_init(ObjectClass *klass, void *data)
317
+{
318
+ DeviceClass *dc = DEVICE_CLASS(klass);
319
+
320
+ dc->reset = rtc_reset;
321
+ dc->vmsd = &vmstate_rtc;
322
+}
323
+
324
+static const TypeInfo rtc_info = {
325
+ .name = TYPE_XLNX_ZYNQMP_RTC,
326
+ .parent = TYPE_SYS_BUS_DEVICE,
327
+ .instance_size = sizeof(XlnxZynqMPRTC),
328
+ .class_init = rtc_class_init,
329
+ .instance_init = rtc_init,
330
+};
331
+
332
+static void rtc_register_types(void)
333
+{
334
+ type_register_static(&rtc_info);
335
+}
336
+
337
+type_init(rtc_register_types)
338
--
78
--
339
2.16.2
79
2.18.0
340
80
341
81
diff view generated by jsdifflib
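The ZynqMP RTC model above uses the common status/mask/enable/disable register pattern: RTC_INT_EN clears bits in RTC_INT_MASK, RTC_INT_DIS sets them, and the interrupt line is asserted whenever status & ~mask is non-zero. A minimal sketch of that pattern with simplified names (standalone C, not the QEMU register API):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified interrupt controller state: STATUS latches events,
 * MASK hides them; EN clears mask bits, DIS sets them. */
struct intc {
    uint32_t status;
    uint32_t mask;
};

static bool irq_pending(const struct intc *s)
{
    return (s->status & ~s->mask) != 0;
}

static void int_en_write(struct intc *s, uint32_t val)  { s->mask &= ~val; }
static void int_dis_write(struct intc *s, uint32_t val) { s->mask |= val; }

int main(void)
{
    struct intc s = { .status = 0, .mask = 0x3 };  /* both sources masked at reset */

    s.status |= 0x1;                 /* "seconds" event latched */
    printf("%d\n", irq_pending(&s)); /* 0: still masked */
    int_en_write(&s, 0x1);           /* enable the seconds interrupt */
    printf("%d\n", irq_pending(&s)); /* 1: now pending */
    int_dis_write(&s, 0x1);          /* disable it again */
    printf("%d\n", irq_pending(&s)); /* 0 */
    return 0;
}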
1
The Arm IoT Kit includes a "security controller" which is largely a
1
From: Luc Michel <luc.michel@greensocs.com>
2
collection of registers for controlling the PPCs and other bits of
2
3
glue in the system. This commit provides the initial skeleton of the
3
Implement the read and write functions for the virtual interface of the
4
device, implementing just the ID registers, and a couple of read-only
4
virtualization extensions in the GICv2.
5
read-as-zero registers.
5
6
6
One mirror region per CPU is also created, which maps to that specific
7
CPU id. This is required by the GIC architecture specification.
8
9
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 20180727095421.386-16-luc.michel@greensocs.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180220180325.29818-16-peter.maydell@linaro.org
10
---
13
---
11
hw/misc/Makefile.objs | 1 +
14
hw/intc/arm_gic.c | 235 +++++++++++++++++++++++++++++++++++++++++++++-
12
include/hw/misc/iotkit-secctl.h | 39 ++++
15
1 file changed, 233 insertions(+), 2 deletions(-)
13
hw/misc/iotkit-secctl.c | 448 ++++++++++++++++++++++++++++++++++++++++
16
14
default-configs/arm-softmmu.mak | 1 +
17
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
15
hw/misc/trace-events | 7 +
16
5 files changed, 496 insertions(+)
17
create mode 100644 include/hw/misc/iotkit-secctl.h
18
create mode 100644 hw/misc/iotkit-secctl.c
19
20
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/misc/Makefile.objs
19
--- a/hw/intc/arm_gic.c
23
+++ b/hw/misc/Makefile.objs
20
+++ b/hw/intc/arm_gic.c
24
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MPS2_FPGAIO) += mps2-fpgaio.o
21
@@ -XXX,XX +XXX,XX @@ static void gic_update(GICState *s)
25
obj-$(CONFIG_MPS2_SCC) += mps2-scc.o
22
}
26
23
}
27
obj-$(CONFIG_TZ_PPC) += tz-ppc.o
24
28
+obj-$(CONFIG_IOTKIT_SECCTL) += iotkit-secctl.o
25
+/* Return true if this LR is empty, i.e. the corresponding bit
29
26
+ * in ELRSR is set.
30
obj-$(CONFIG_PVPANIC) += pvpanic.o
31
obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
32
diff --git a/include/hw/misc/iotkit-secctl.h b/include/hw/misc/iotkit-secctl.h
33
new file mode 100644
34
index XXXXXXX..XXXXXXX
35
--- /dev/null
36
+++ b/include/hw/misc/iotkit-secctl.h
37
@@ -XXX,XX +XXX,XX @@
38
+/*
39
+ * ARM IoT Kit security controller
40
+ *
41
+ * Copyright (c) 2018 Linaro Limited
42
+ * Written by Peter Maydell
43
+ *
44
+ * This program is free software; you can redistribute it and/or modify
45
+ * it under the terms of the GNU General Public License version 2 or
46
+ * (at your option) any later version.
47
+ */
27
+ */
48
+
28
+static inline bool gic_lr_entry_is_free(uint32_t entry)
49
+/* This is a model of the security controller which is part of the
29
+{
50
+ * Arm IoT Kit and documented in
30
+ return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
51
+ * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
31
+ && (GICH_LR_HW(entry) || !GICH_LR_EOI(entry));
52
+ *
32
+}
53
+ * QEMU interface:
33
+
54
+ * + sysbus MMIO region 0 is the "secure privilege control block" registers
34
+/* Return true if this LR should trigger an EOI maintenance interrupt, i.e. the
55
+ * + sysbus MMIO region 1 is the "non-secure privilege control block" registers
35
+ * corresponding bit in EISR is set.
56
+ */
36
+ */
57
+
37
+static inline bool gic_lr_entry_is_eoi(uint32_t entry)
58
+#ifndef IOTKIT_SECCTL_H
38
+{
59
+#define IOTKIT_SECCTL_H
39
+ return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
60
+
40
+ && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
61
+#include "hw/sysbus.h"
41
+}
62
+
42
+
63
+#define TYPE_IOTKIT_SECCTL "iotkit-secctl"
43
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
64
+#define IOTKIT_SECCTL(obj) OBJECT_CHECK(IoTKitSecCtl, (obj), TYPE_IOTKIT_SECCTL)
44
int cm, int target)
65
+
45
{
66
+typedef struct IoTKitSecCtl {
46
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
67
+ /*< private >*/
47
return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
68
+ SysBusDevice parent_obj;
48
}
69
+
49
70
+ /*< public >*/
50
+static uint32_t gic_compute_eisr(GICState *s, int cpu, int lr_start)
71
+
51
+{
72
+ MemoryRegion s_regs;
52
+ int lr_idx;
73
+ MemoryRegion ns_regs;
53
+ uint32_t ret = 0;
74
+} IoTKitSecCtl;
54
+
75
+
55
+ for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
76
+#endif
56
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
77
diff --git a/hw/misc/iotkit-secctl.c b/hw/misc/iotkit-secctl.c
57
+ ret = deposit32(ret, lr_idx - lr_start, 1,
78
new file mode 100644
58
+ gic_lr_entry_is_eoi(*entry));
79
index XXXXXXX..XXXXXXX
59
+ }
80
--- /dev/null
60
+
81
+++ b/hw/misc/iotkit-secctl.c
61
+ return ret;
82
@@ -XXX,XX +XXX,XX @@
62
+}
83
+/*
63
+
84
+ * Arm IoT Kit security controller
64
+static uint32_t gic_compute_elrsr(GICState *s, int cpu, int lr_start)
85
+ *
65
+{
86
+ * Copyright (c) 2018 Linaro Limited
66
+ int lr_idx;
87
+ * Written by Peter Maydell
67
+ uint32_t ret = 0;
88
+ *
68
+
89
+ * This program is free software; you can redistribute it and/or modify
69
+ for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
90
+ * it under the terms of the GNU General Public License version 2 or
70
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
91
+ * (at your option) any later version.
71
+ ret = deposit32(ret, lr_idx - lr_start, 1,
92
+ */
72
+ gic_lr_entry_is_free(*entry));
93
+
73
+ }
94
+#include "qemu/osdep.h"
74
+
95
+#include "qemu/log.h"
75
+ return ret;
96
+#include "qapi/error.h"
76
+}
97
+#include "trace.h"
77
+
98
+#include "hw/sysbus.h"
78
+static void gic_vmcr_write(GICState *s, uint32_t value, MemTxAttrs attrs)
99
+#include "hw/registerfields.h"
79
+{
100
+#include "hw/misc/iotkit-secctl.h"
80
+ int vcpu = gic_get_current_vcpu(s);
101
+
81
+ uint32_t ctlr;
102
+/* Registers in the secure privilege control block */
82
+ uint32_t abpr;
103
+REG32(SECRESPCFG, 0x10)
83
+ uint32_t bpr;
104
+REG32(NSCCFG, 0x14)
84
+ uint32_t prio_mask;
105
+REG32(SECMPCINTSTATUS, 0x1c)
85
+
106
+REG32(SECPPCINTSTAT, 0x20)
86
+ ctlr = FIELD_EX32(value, GICH_VMCR, VMCCtlr);
107
+REG32(SECPPCINTCLR, 0x24)
87
+ abpr = FIELD_EX32(value, GICH_VMCR, VMABP);
108
+REG32(SECPPCINTEN, 0x28)
88
+ bpr = FIELD_EX32(value, GICH_VMCR, VMBP);
109
+REG32(SECMSCINTSTAT, 0x30)
89
+ prio_mask = FIELD_EX32(value, GICH_VMCR, VMPriMask) << 3;
110
+REG32(SECMSCINTCLR, 0x34)
90
+
111
+REG32(SECMSCINTEN, 0x38)
91
+ gic_set_cpu_control(s, vcpu, ctlr, attrs);
112
+REG32(BRGINTSTAT, 0x40)
92
+ s->abpr[vcpu] = MAX(abpr, GIC_VIRT_MIN_ABPR);
113
+REG32(BRGINTCLR, 0x44)
93
+ s->bpr[vcpu] = MAX(bpr, GIC_VIRT_MIN_BPR);
114
+REG32(BRGINTEN, 0x48)
94
+ gic_set_priority_mask(s, vcpu, prio_mask, attrs);
115
+REG32(AHBNSPPC0, 0x50)
95
+}
116
+REG32(AHBNSPPCEXP0, 0x60)
96
+
117
+REG32(AHBNSPPCEXP1, 0x64)
97
+static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
118
+REG32(AHBNSPPCEXP2, 0x68)
98
+ uint64_t *data, MemTxAttrs attrs)
119
+REG32(AHBNSPPCEXP3, 0x6c)
99
+{
120
+REG32(APBNSPPC0, 0x70)
100
+ GICState *s = ARM_GIC(opaque);
121
+REG32(APBNSPPC1, 0x74)
101
+ int vcpu = cpu + GIC_NCPU;
122
+REG32(APBNSPPCEXP0, 0x80)
102
+
123
+REG32(APBNSPPCEXP1, 0x84)
103
+ switch (addr) {
124
+REG32(APBNSPPCEXP2, 0x88)
104
+ case A_GICH_HCR: /* Hypervisor Control */
125
+REG32(APBNSPPCEXP3, 0x8c)
105
+ *data = s->h_hcr[cpu];
126
+REG32(AHBSPPPC0, 0x90)
106
+ break;
127
+REG32(AHBSPPPCEXP0, 0xa0)
107
+
128
+REG32(AHBSPPPCEXP1, 0xa4)
108
+ case A_GICH_VTR: /* VGIC Type */
129
+REG32(AHBSPPPCEXP2, 0xa8)
109
+ *data = FIELD_DP32(0, GICH_VTR, ListRegs, s->num_lrs - 1);
130
+REG32(AHBSPPPCEXP3, 0xac)
110
+ *data = FIELD_DP32(*data, GICH_VTR, PREbits,
131
+REG32(APBSPPPC0, 0xb0)
111
+ GIC_VIRT_MAX_GROUP_PRIO_BITS - 1);
132
+REG32(APBSPPPC1, 0xb4)
112
+ *data = FIELD_DP32(*data, GICH_VTR, PRIbits,
133
+REG32(APBSPPPCEXP0, 0xc0)
113
+ (7 - GIC_VIRT_MIN_BPR) - 1);
134
+REG32(APBSPPPCEXP1, 0xc4)
114
+ break;
135
+REG32(APBSPPPCEXP2, 0xc8)
115
+
136
+REG32(APBSPPPCEXP3, 0xcc)
116
+ case A_GICH_VMCR: /* Virtual Machine Control */
137
+REG32(NSMSCEXP, 0xd0)
117
+ *data = FIELD_DP32(0, GICH_VMCR, VMCCtlr,
138
+REG32(PID4, 0xfd0)
118
+ extract32(s->cpu_ctlr[vcpu], 0, 10));
139
+REG32(PID5, 0xfd4)
119
+ *data = FIELD_DP32(*data, GICH_VMCR, VMABP, s->abpr[vcpu]);
140
+REG32(PID6, 0xfd8)
120
+ *data = FIELD_DP32(*data, GICH_VMCR, VMBP, s->bpr[vcpu]);
141
+REG32(PID7, 0xfdc)
121
+ *data = FIELD_DP32(*data, GICH_VMCR, VMPriMask,
142
+REG32(PID0, 0xfe0)
122
+ extract32(s->priority_mask[vcpu], 3, 5));
143
+REG32(PID1, 0xfe4)
123
+ break;
144
+REG32(PID2, 0xfe8)
124
+
145
+REG32(PID3, 0xfec)
125
+ case A_GICH_MISR: /* Maintenance Interrupt Status */
146
+REG32(CID0, 0xff0)
126
+ *data = s->h_misr[cpu];
147
+REG32(CID1, 0xff4)
127
+ break;
148
+REG32(CID2, 0xff8)
128
+
149
+REG32(CID3, 0xffc)
129
+ case A_GICH_EISR0: /* End of Interrupt Status 0 and 1 */
150
+
130
+ case A_GICH_EISR1:
151
+/* Registers in the non-secure privilege control block */
131
+ *data = gic_compute_eisr(s, cpu, (addr - A_GICH_EISR0) * 8);
152
+REG32(AHBNSPPPC0, 0x90)
132
+ break;
153
+REG32(AHBNSPPPCEXP0, 0xa0)
133
+
154
+REG32(AHBNSPPPCEXP1, 0xa4)
134
+ case A_GICH_ELRSR0: /* Empty List Status 0 and 1 */
155
+REG32(AHBNSPPPCEXP2, 0xa8)
135
+ case A_GICH_ELRSR1:
156
+REG32(AHBNSPPPCEXP3, 0xac)
136
+ *data = gic_compute_elrsr(s, cpu, (addr - A_GICH_ELRSR0) * 8);
157
+REG32(APBNSPPPC0, 0xb0)
137
+ break;
158
+REG32(APBNSPPPC1, 0xb4)
138
+
159
+REG32(APBNSPPPCEXP0, 0xc0)
139
+ case A_GICH_APR: /* Active Priorities */
160
+REG32(APBNSPPPCEXP1, 0xc4)
140
+ *data = s->h_apr[cpu];
161
+REG32(APBNSPPPCEXP2, 0xc8)
141
+ break;
162
+REG32(APBNSPPPCEXP3, 0xcc)
142
+
163
+/* PID and CID registers are also present in the NS block */
143
+ case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
164
+
144
+ {
165
+static const uint8_t iotkit_secctl_s_idregs[] = {
145
+ int lr_idx = (addr - A_GICH_LR0) / 4;
166
+ 0x04, 0x00, 0x00, 0x00,
146
+
167
+ 0x52, 0xb8, 0x0b, 0x00,
147
+ if (lr_idx > s->num_lrs) {
168
+ 0x0d, 0xf0, 0x05, 0xb1,
148
+ *data = 0;
169
+};
149
+ } else {
170
+
150
+ *data = s->h_lr[lr_idx][cpu];
171
+static const uint8_t iotkit_secctl_ns_idregs[] = {
151
+ }
172
+ 0x04, 0x00, 0x00, 0x00,
152
+ break;
173
+ 0x53, 0xb8, 0x0b, 0x00,
153
+ }
174
+ 0x0d, 0xf0, 0x05, 0xb1,
154
+
175
+};
176
+
177
+static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
178
+ uint64_t *pdata,
179
+ unsigned size, MemTxAttrs attrs)
180
+{
181
+ uint64_t r;
182
+ uint32_t offset = addr & ~0x3;
183
+
184
+ switch (offset) {
185
+ case A_AHBNSPPC0:
186
+ case A_AHBSPPPC0:
187
+ r = 0;
188
+ break;
189
+ case A_SECRESPCFG:
190
+ case A_NSCCFG:
191
+ case A_SECMPCINTSTATUS:
192
+ case A_SECPPCINTSTAT:
193
+ case A_SECPPCINTEN:
194
+ case A_SECMSCINTSTAT:
195
+ case A_SECMSCINTEN:
196
+ case A_BRGINTSTAT:
197
+ case A_BRGINTEN:
198
+ case A_AHBNSPPCEXP0:
199
+ case A_AHBNSPPCEXP1:
200
+ case A_AHBNSPPCEXP2:
201
+ case A_AHBNSPPCEXP3:
202
+ case A_APBNSPPC0:
203
+ case A_APBNSPPC1:
204
+ case A_APBNSPPCEXP0:
205
+ case A_APBNSPPCEXP1:
206
+ case A_APBNSPPCEXP2:
207
+ case A_APBNSPPCEXP3:
208
+ case A_AHBSPPPCEXP0:
209
+ case A_AHBSPPPCEXP1:
210
+ case A_AHBSPPPCEXP2:
211
+ case A_AHBSPPPCEXP3:
212
+ case A_APBSPPPC0:
213
+ case A_APBSPPPC1:
214
+ case A_APBSPPPCEXP0:
215
+ case A_APBSPPPCEXP1:
216
+ case A_APBSPPPCEXP2:
217
+ case A_APBSPPPCEXP3:
218
+ case A_NSMSCEXP:
219
+ qemu_log_mask(LOG_UNIMP,
220
+ "IoTKit SecCtl S block read: "
221
+ "unimplemented offset 0x%x\n", offset);
222
+ r = 0;
223
+ break;
224
+ case A_PID4:
225
+ case A_PID5:
226
+ case A_PID6:
227
+ case A_PID7:
228
+ case A_PID0:
229
+ case A_PID1:
230
+ case A_PID2:
231
+ case A_PID3:
232
+ case A_CID0:
233
+ case A_CID1:
234
+ case A_CID2:
235
+ case A_CID3:
236
+ r = iotkit_secctl_s_idregs[(offset - A_PID4) / 4];
237
+ break;
238
+ case A_SECPPCINTCLR:
239
+ case A_SECMSCINTCLR:
240
+ case A_BRGINTCLR:
241
+ qemu_log_mask(LOG_GUEST_ERROR,
242
+ "IotKit SecCtl S block read: write-only offset 0x%x\n",
243
+ offset);
244
+ r = 0;
245
+ break;
246
+ default:
155
+ default:
247
+ qemu_log_mask(LOG_GUEST_ERROR,
156
+ qemu_log_mask(LOG_GUEST_ERROR,
248
+ "IotKit SecCtl S block read: bad offset 0x%x\n", offset);
157
+ "gic_hyp_read: Bad offset %" HWADDR_PRIx "\n", addr);
249
+ r = 0;
158
+ return MEMTX_OK;
250
+ break;
159
+ }
251
+ }
160
+
252
+
253
+ if (size != 4) {
254
+ /* None of our registers are access-sensitive, so just pull the right
255
+ * byte out of the word read result.
256
+ */
257
+ r = extract32(r, (addr & 3) * 8, size * 8);
258
+ }
259
+
260
+ trace_iotkit_secctl_s_read(offset, r, size);
261
+ *pdata = r;
262
+ return MEMTX_OK;
161
+ return MEMTX_OK;
263
+}
162
+}
264
+
163
+
265
+static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
164
+static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
266
+ uint64_t value,
165
+ uint64_t value, MemTxAttrs attrs)
267
+ unsigned size, MemTxAttrs attrs)
166
+{
268
+{
167
+ GICState *s = ARM_GIC(opaque);
269
+ uint32_t offset = addr;
168
+ int vcpu = cpu + GIC_NCPU;
270
+
169
+
271
+ trace_iotkit_secctl_s_write(offset, value, size);
170
+ switch (addr) {
272
+
171
+ case A_GICH_HCR: /* Hypervisor Control */
273
+ if (size != 4) {
172
+ s->h_hcr[cpu] = value & GICH_HCR_MASK;
274
+ /* Byte and halfword writes are ignored */
173
+ break;
275
+ qemu_log_mask(LOG_GUEST_ERROR,
174
+
276
+ "IotKit SecCtl S block write: bad size, ignored\n");
175
+ case A_GICH_VMCR: /* Virtual Machine Control */
277
+ return MEMTX_OK;
176
+ gic_vmcr_write(s, value, attrs);
278
+ }
177
+ break;
279
+
178
+
280
+ switch (offset) {
179
+ case A_GICH_APR: /* Active Priorities */
281
+ case A_SECRESPCFG:
180
+ s->h_apr[cpu] = value;
282
+ case A_NSCCFG:
181
+ s->running_priority[vcpu] = gic_get_prio_from_apr_bits(s, vcpu);
283
+ case A_SECPPCINTCLR:
182
+ break;
284
+ case A_SECPPCINTEN:
183
+
285
+ case A_SECMSCINTCLR:
184
+ case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
286
+ case A_SECMSCINTEN:
185
+ {
287
+ case A_BRGINTCLR:
186
+ int lr_idx = (addr - A_GICH_LR0) / 4;
288
+ case A_BRGINTEN:
187
+
289
+ case A_AHBNSPPCEXP0:
188
+ if (lr_idx > s->num_lrs) {
290
+ case A_AHBNSPPCEXP1:
189
+ return MEMTX_OK;
291
+ case A_AHBNSPPCEXP2:
190
+ }
292
+ case A_AHBNSPPCEXP3:
191
+
293
+ case A_APBNSPPC0:
192
+ s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
294
+ case A_APBNSPPC1:
193
+ break;
295
+ case A_APBNSPPCEXP0:
194
+ }
296
+ case A_APBNSPPCEXP1:
195
+
297
+ case A_APBNSPPCEXP2:
298
+ case A_APBNSPPCEXP3:
299
+ case A_AHBSPPPCEXP0:
300
+ case A_AHBSPPPCEXP1:
301
+ case A_AHBSPPPCEXP2:
302
+ case A_AHBSPPPCEXP3:
303
+ case A_APBSPPPC0:
304
+ case A_APBSPPPC1:
305
+ case A_APBSPPPCEXP0:
306
+ case A_APBSPPPCEXP1:
307
+ case A_APBSPPPCEXP2:
308
+ case A_APBSPPPCEXP3:
309
+ qemu_log_mask(LOG_UNIMP,
310
+ "IoTKit SecCtl S block write: "
311
+ "unimplemented offset 0x%x\n", offset);
312
+ break;
313
+ case A_SECMPCINTSTATUS:
314
+ case A_SECPPCINTSTAT:
315
+ case A_SECMSCINTSTAT:
316
+ case A_BRGINTSTAT:
317
+ case A_AHBNSPPC0:
318
+ case A_AHBSPPPC0:
319
+ case A_NSMSCEXP:
320
+ case A_PID4:
321
+ case A_PID5:
322
+ case A_PID6:
323
+ case A_PID7:
324
+ case A_PID0:
325
+ case A_PID1:
326
+ case A_PID2:
327
+ case A_PID3:
328
+ case A_CID0:
329
+ case A_CID1:
330
+ case A_CID2:
331
+ case A_CID3:
332
+ qemu_log_mask(LOG_GUEST_ERROR,
333
+ "IoTKit SecCtl S block write: "
334
+ "read-only offset 0x%x\n", offset);
335
+ break;
336
+ default:
196
+ default:
337
+ qemu_log_mask(LOG_GUEST_ERROR,
197
+ qemu_log_mask(LOG_GUEST_ERROR,
338
+ "IotKit SecCtl S block write: bad offset 0x%x\n",
198
+ "gic_hyp_write: Bad offset %" HWADDR_PRIx "\n", addr);
339
+ offset);
199
+ return MEMTX_OK;
340
+ break;
341
+ }
200
+ }
342
+
201
+
343
+ return MEMTX_OK;
202
+ return MEMTX_OK;
344
+}
203
+}
345
+
204
+
346
+static MemTxResult iotkit_secctl_ns_read(void *opaque, hwaddr addr,
205
+static MemTxResult gic_thiscpu_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
347
+ uint64_t *pdata,
206
+ unsigned size, MemTxAttrs attrs)
348
+ unsigned size, MemTxAttrs attrs)
207
+{
349
+{
208
+ GICState *s = (GICState *)opaque;
350
+ uint64_t r;
209
+
351
+ uint32_t offset = addr & ~0x3;
210
+ return gic_hyp_read(s, gic_get_current_cpu(s), addr, data, attrs);
352
+
211
+}
353
+ switch (offset) {
212
+
354
+ case A_AHBNSPPPC0:
213
+static MemTxResult gic_thiscpu_hyp_write(void *opaque, hwaddr addr,
355
+ r = 0;
214
+ uint64_t value, unsigned size,
356
+ break;
215
+ MemTxAttrs attrs)
357
+ case A_AHBNSPPPCEXP0:
216
+{
358
+ case A_AHBNSPPPCEXP1:
217
+ GICState *s = (GICState *)opaque;
359
+ case A_AHBNSPPPCEXP2:
218
+
360
+ case A_AHBNSPPPCEXP3:
219
+ return gic_hyp_write(s, gic_get_current_cpu(s), addr, value, attrs);
361
+ case A_APBNSPPPC0:
220
+}
362
+ case A_APBNSPPPC1:
221
+
363
+ case A_APBNSPPPCEXP0:
222
+static MemTxResult gic_do_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
364
+ case A_APBNSPPPCEXP1:
223
+ unsigned size, MemTxAttrs attrs)
365
+ case A_APBNSPPPCEXP2:
224
+{
366
+ case A_APBNSPPPCEXP3:
225
+ GICState **backref = (GICState **)opaque;
367
+ qemu_log_mask(LOG_UNIMP,
226
+ GICState *s = *backref;
368
+ "IoTKit SecCtl NS block read: "
227
+ int id = (backref - s->backref);
369
+ "unimplemented offset 0x%x\n", offset);
228
+
370
+ break;
229
+ return gic_hyp_read(s, id, addr, data, attrs);
371
+ case A_PID4:
230
+}
372
+ case A_PID5:
231
+
373
+ case A_PID6:
232
+static MemTxResult gic_do_hyp_write(void *opaque, hwaddr addr,
374
+ case A_PID7:
233
+ uint64_t value, unsigned size,
375
+ case A_PID0:
234
+ MemTxAttrs attrs)
376
+ case A_PID1:
235
+{
377
+ case A_PID2:
236
+ GICState **backref = (GICState **)opaque;
378
+ case A_PID3:
237
+ GICState *s = *backref;
379
+ case A_CID0:
238
+ int id = (backref - s->backref);
380
+ case A_CID1:
239
+
381
+ case A_CID2:
240
+ return gic_hyp_write(s, id + GIC_NCPU, addr, value, attrs);
382
+ case A_CID3:
241
+
383
+ r = iotkit_secctl_ns_idregs[(offset - A_PID4) / 4];
242
+}
384
+ break;
243
+
385
+ default:
244
static const MemoryRegionOps gic_ops[2] = {
386
+ qemu_log_mask(LOG_GUEST_ERROR,
245
{
387
+ "IotKit SecCtl NS block write: bad offset 0x%x\n",
246
.read_with_attrs = gic_dist_read,
388
+ offset);
247
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
389
+ r = 0;
248
390
+ break;
249
static const MemoryRegionOps gic_virt_ops[2] = {
391
+ }
250
{
392
+
251
- .read_with_attrs = NULL,
393
+ if (size != 4) {
252
- .write_with_attrs = NULL,
394
+ /* None of our registers are access-sensitive, so just pull the right
253
+ .read_with_attrs = gic_thiscpu_hyp_read,
395
+ * byte out of the word read result.
254
+ .write_with_attrs = gic_thiscpu_hyp_write,
396
+ */
255
.endianness = DEVICE_NATIVE_ENDIAN,
397
+ r = extract32(r, (addr & 3) * 8, size * 8);
256
},
398
+ }
257
{
399
+
258
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_virt_ops[2] = {
400
+ trace_iotkit_secctl_ns_read(offset, r, size);
259
}
401
+ *pdata = r;
260
};
402
+ return MEMTX_OK;
261
403
+}
262
+static const MemoryRegionOps gic_viface_ops = {
404
+
263
+ .read_with_attrs = gic_do_hyp_read,
405
+static MemTxResult iotkit_secctl_ns_write(void *opaque, hwaddr addr,
264
+ .write_with_attrs = gic_do_hyp_write,
406
+ uint64_t value,
265
+ .endianness = DEVICE_NATIVE_ENDIAN,
407
+ unsigned size, MemTxAttrs attrs)
408
+{
409
+ uint32_t offset = addr;
410
+
411
+ trace_iotkit_secctl_ns_write(offset, value, size);
412
+
413
+ if (size != 4) {
414
+ /* Byte and halfword writes are ignored */
415
+ qemu_log_mask(LOG_GUEST_ERROR,
416
+ "IotKit SecCtl NS block write: bad size, ignored\n");
417
+ return MEMTX_OK;
418
+ }
419
+
420
+ switch (offset) {
421
+ case A_AHBNSPPPCEXP0:
422
+ case A_AHBNSPPPCEXP1:
423
+ case A_AHBNSPPPCEXP2:
424
+ case A_AHBNSPPPCEXP3:
425
+ case A_APBNSPPPC0:
426
+ case A_APBNSPPPC1:
427
+ case A_APBNSPPPCEXP0:
428
+ case A_APBNSPPPCEXP1:
429
+ case A_APBNSPPPCEXP2:
430
+ case A_APBNSPPPCEXP3:
431
+ qemu_log_mask(LOG_UNIMP,
432
+ "IoTKit SecCtl NS block write: "
433
+ "unimplemented offset 0x%x\n", offset);
434
+ break;
435
+ case A_AHBNSPPPC0:
436
+ case A_PID4:
437
+ case A_PID5:
438
+ case A_PID6:
439
+ case A_PID7:
440
+ case A_PID0:
441
+ case A_PID1:
442
+ case A_PID2:
443
+ case A_PID3:
444
+ case A_CID0:
445
+ case A_CID1:
446
+ case A_CID2:
447
+ case A_CID3:
448
+ qemu_log_mask(LOG_GUEST_ERROR,
449
+ "IoTKit SecCtl NS block write: "
450
+ "read-only offset 0x%x\n", offset);
451
+ break;
452
+ default:
453
+ qemu_log_mask(LOG_GUEST_ERROR,
454
+ "IotKit SecCtl NS block write: bad offset 0x%x\n",
455
+ offset);
456
+ break;
457
+ }
458
+
459
+ return MEMTX_OK;
460
+}
461
+
462
+static const MemoryRegionOps iotkit_secctl_s_ops = {
463
+ .read_with_attrs = iotkit_secctl_s_read,
464
+ .write_with_attrs = iotkit_secctl_s_write,
465
+ .endianness = DEVICE_LITTLE_ENDIAN,
466
+ .valid.min_access_size = 1,
467
+ .valid.max_access_size = 4,
468
+ .impl.min_access_size = 1,
469
+ .impl.max_access_size = 4,
470
+};
266
+};
471
+
267
+
472
+static const MemoryRegionOps iotkit_secctl_ns_ops = {
268
static void arm_gic_realize(DeviceState *dev, Error **errp)
473
+ .read_with_attrs = iotkit_secctl_ns_read,
269
{
474
+ .write_with_attrs = iotkit_secctl_ns_write,
270
/* Device instance realize function for the GIC sysbus device */
475
+ .endianness = DEVICE_LITTLE_ENDIAN,
271
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
476
+ .valid.min_access_size = 1,
272
&s->backref[i], "gic_cpu", 0x100);
477
+ .valid.max_access_size = 4,
273
sysbus_init_mmio(sbd, &s->cpuiomem[i+1]);
478
+ .impl.min_access_size = 1,
274
}
479
+ .impl.max_access_size = 4,
275
+
480
+};
276
+ /* Extra core-specific regions for virtual interfaces. This is required by
481
+
277
+ * the GICv2 specification.
482
+static void iotkit_secctl_reset(DeviceState *dev)
278
+ */
483
+{
279
+ if (s->virt_extn) {
484
+
280
+ for (i = 0; i < s->num_cpu; i++) {
485
+}
281
+ memory_region_init_io(&s->vifaceiomem[i + 1], OBJECT(s),
486
+
282
+ &gic_viface_ops, &s->backref[i],
487
+static void iotkit_secctl_init(Object *obj)
283
+ "gic_viface", 0x1000);
488
+{
284
+ sysbus_init_mmio(sbd, &s->vifaceiomem[i + 1]);
489
+ IoTKitSecCtl *s = IOTKIT_SECCTL(obj);
285
+ }
490
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
286
+ }
491
+
287
+
492
+ memory_region_init_io(&s->s_regs, obj, &iotkit_secctl_s_ops,
288
}
493
+ s, "iotkit-secctl-s-regs", 0x1000);
289
494
+ memory_region_init_io(&s->ns_regs, obj, &iotkit_secctl_ns_ops,
290
static void arm_gic_class_init(ObjectClass *klass, void *data)
495
+ s, "iotkit-secctl-ns-regs", 0x1000);
496
+ sysbus_init_mmio(sbd, &s->s_regs);
497
+ sysbus_init_mmio(sbd, &s->ns_regs);
498
+}
499
+
500
+static const VMStateDescription iotkit_secctl_vmstate = {
501
+ .name = "iotkit-secctl",
502
+ .version_id = 1,
503
+ .minimum_version_id = 1,
504
+ .fields = (VMStateField[]) {
505
+ VMSTATE_END_OF_LIST()
506
+ }
507
+};
508
+
509
+static void iotkit_secctl_class_init(ObjectClass *klass, void *data)
510
+{
511
+ DeviceClass *dc = DEVICE_CLASS(klass);
512
+
513
+ dc->vmsd = &iotkit_secctl_vmstate;
514
+ dc->reset = iotkit_secctl_reset;
515
+}
516
+
517
+static const TypeInfo iotkit_secctl_info = {
518
+ .name = TYPE_IOTKIT_SECCTL,
519
+ .parent = TYPE_SYS_BUS_DEVICE,
520
+ .instance_size = sizeof(IoTKitSecCtl),
521
+ .instance_init = iotkit_secctl_init,
522
+ .class_init = iotkit_secctl_class_init,
523
+};
524
+
525
+static void iotkit_secctl_register_types(void)
526
+{
527
+ type_register_static(&iotkit_secctl_info);
528
+}
529
+
530
+type_init(iotkit_secctl_register_types);
531
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
532
index XXXXXXX..XXXXXXX 100644
533
--- a/default-configs/arm-softmmu.mak
534
+++ b/default-configs/arm-softmmu.mak
535
@@ -XXX,XX +XXX,XX @@ CONFIG_MPS2_FPGAIO=y
536
CONFIG_MPS2_SCC=y
537
538
CONFIG_TZ_PPC=y
539
+CONFIG_IOTKIT_SECCTL=y
540
541
CONFIG_VERSATILE_PCI=y
542
CONFIG_VERSATILE_I2C=y
543
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
544
index XXXXXXX..XXXXXXX 100644
545
--- a/hw/misc/trace-events
546
+++ b/hw/misc/trace-events
547
@@ -XXX,XX +XXX,XX @@ tz_ppc_irq_clear(int level) "TZ PPC: int_clear = %d"
548
tz_ppc_update_irq(int level) "TZ PPC: setting irq line to %d"
549
tz_ppc_read_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " read (secure %d user %d) blocked"
550
tz_ppc_write_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " write (secure %d user %d) blocked"
551
+
552
+# hw/misc/iotkit-secctl.c
553
+iotkit_secctl_s_read(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl S regs read: offset 0x%x data 0x%" PRIx64 " size %u"
554
+iotkit_secctl_s_write(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl S regs write: offset 0x%x data 0x%" PRIx64 " size %u"
555
+iotkit_secctl_ns_read(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl NS regs read: offset 0x%x data 0x%" PRIx64 " size %u"
556
+iotkit_secctl_ns_write(uint32_t offset, uint64_t data, unsigned size) "IoTKit SecCtl NS regs write: offset 0x%x data 0x%" PRIx64 " size %u"
557
+iotkit_secctl_reset(void) "IoTKit SecCtl: reset"
558
--
2.16.2

--
2.18.0
New patch
1
1
From: Luc Michel <luc.michel@greensocs.com>
2
3
Add the gic_update_virt() function to update the vCPU interface states
4
and raise vIRQ and vFIQ as needed. This commit renames gic_update() to
5
gic_update_internal() and generalizes it to handle both cases, with a
6
`virt' parameter to track whether we are updating the CPU or vCPU
7
interfaces.
8
9
The main difference between CPU and vCPU is the way we select the best
10
IRQ. This part has been split into the gic_get_best_(v)irq functions.
11
For the virt case, the LRs are iterated to find the best candidate.
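
As background for the selection rule used here: GIC priorities compare "lower value wins", which is why both helpers start from best_prio = 0x100 (worse than any real 8-bit priority) and best_irq = 1023 (the spurious interrupt ID) and keep the candidate with the smallest priority value. A minimal standalone sketch of that rule, with made-up candidate values purely for illustration (not QEMU code):

#include <stdio.h>

/* Illustrates the "lowest numeric priority wins" convention that
 * gic_get_best_irq()/gic_get_best_virq() rely on; values are invented. */
int main(void)
{
    struct { int irq; int prio; } pending[] = {
        { 27, 0xa0 }, { 34, 0x40 }, { 50, 0x80 },
    };
    int best_irq = 1023;   /* spurious ID: nothing selected yet */
    int best_prio = 0x100; /* worse than any real 8-bit priority */
    unsigned i;

    for (i = 0; i < sizeof(pending) / sizeof(pending[0]); i++) {
        if (pending[i].prio < best_prio) { /* lower value = higher priority */
            best_prio = pending[i].prio;
            best_irq = pending[i].irq;
        }
    }
    printf("best irq %d, priority 0x%x\n", best_irq, best_prio); /* irq 34 */
    return 0;
}
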
12
13
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Message-id: 20180727095421.386-17-luc.michel@greensocs.com
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
hw/intc/arm_gic.c | 175 +++++++++++++++++++++++++++++++++++-----------
19
1 file changed, 136 insertions(+), 39 deletions(-)
20
21
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/intc/arm_gic.c
24
+++ b/hw/intc/arm_gic.c
25
@@ -XXX,XX +XXX,XX @@ static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
26
return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
27
}
28
29
+static inline void gic_get_best_irq(GICState *s, int cpu,
30
+ int *best_irq, int *best_prio, int *group)
31
+{
32
+ int irq;
33
+ int cm = 1 << cpu;
34
+
35
+ *best_irq = 1023;
36
+ *best_prio = 0x100;
37
+
38
+ for (irq = 0; irq < s->num_irq; irq++) {
39
+ if (GIC_DIST_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
40
+ (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
41
+ (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
42
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) < *best_prio) {
43
+ *best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
44
+ *best_irq = irq;
45
+ }
46
+ }
47
+ }
48
+
49
+ if (*best_irq < 1023) {
50
+ *group = GIC_DIST_TEST_GROUP(*best_irq, cm);
51
+ }
52
+}
53
+
54
+static inline void gic_get_best_virq(GICState *s, int cpu,
55
+ int *best_irq, int *best_prio, int *group)
56
+{
57
+ int lr_idx = 0;
58
+
59
+ *best_irq = 1023;
60
+ *best_prio = 0x100;
61
+
62
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
63
+ uint32_t lr_entry = s->h_lr[lr_idx][cpu];
64
+ int state = GICH_LR_STATE(lr_entry);
65
+
66
+ if (state == GICH_LR_STATE_PENDING) {
67
+ int prio = GICH_LR_PRIORITY(lr_entry);
68
+
69
+ if (prio < *best_prio) {
70
+ *best_prio = prio;
71
+ *best_irq = GICH_LR_VIRT_ID(lr_entry);
72
+ *group = GICH_LR_GROUP(lr_entry);
73
+ }
74
+ }
75
+ }
76
+}
77
+
78
+/* Return true if IRQ signaling is enabled for the given cpu and at least one
79
+ * of the given groups:
80
+ * - in the non-virt case, the distributor must be enabled for one of the
81
+ * given groups
82
+ * - in the virt case, the virtual interface must be enabled.
83
+ * - in all cases, the (v)CPU interface must be enabled for one of the given
84
+ * groups.
85
+ */
86
+static inline bool gic_irq_signaling_enabled(GICState *s, int cpu, bool virt,
87
+ int group_mask)
88
+{
89
+ if (!virt && !(s->ctlr & group_mask)) {
90
+ return false;
91
+ }
92
+
93
+ if (virt && !(s->h_hcr[cpu] & R_GICH_HCR_EN_MASK)) {
94
+ return false;
95
+ }
96
+
97
+ if (!(s->cpu_ctlr[cpu] & group_mask)) {
98
+ return false;
99
+ }
100
+
101
+ return true;
102
+}
103
+
104
/* TODO: Many places that call this routine could be optimized. */
105
/* Update interrupt status after enabled or pending bits have been changed. */
106
-static void gic_update(GICState *s)
107
+static inline void gic_update_internal(GICState *s, bool virt)
108
{
109
int best_irq;
110
int best_prio;
111
- int irq;
112
int irq_level, fiq_level;
113
- int cpu;
114
- int cm;
115
+ int cpu, cpu_iface;
116
+ int group = 0;
117
+ qemu_irq *irq_lines = virt ? s->parent_virq : s->parent_irq;
118
+ qemu_irq *fiq_lines = virt ? s->parent_vfiq : s->parent_fiq;
119
120
for (cpu = 0; cpu < s->num_cpu; cpu++) {
121
- cm = 1 << cpu;
122
- s->current_pending[cpu] = 1023;
123
- if (!(s->ctlr & (GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1))
124
- || !(s->cpu_ctlr[cpu] & (GICC_CTLR_EN_GRP0 | GICC_CTLR_EN_GRP1))) {
125
- qemu_irq_lower(s->parent_irq[cpu]);
126
- qemu_irq_lower(s->parent_fiq[cpu]);
127
+ cpu_iface = virt ? (cpu + GIC_NCPU) : cpu;
128
+
129
+ s->current_pending[cpu_iface] = 1023;
130
+ if (!gic_irq_signaling_enabled(s, cpu, virt,
131
+ GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1)) {
132
+ qemu_irq_lower(irq_lines[cpu]);
133
+ qemu_irq_lower(fiq_lines[cpu]);
134
continue;
135
}
136
- best_prio = 0x100;
137
- best_irq = 1023;
138
- for (irq = 0; irq < s->num_irq; irq++) {
139
- if (GIC_DIST_TEST_ENABLED(irq, cm) &&
140
- gic_test_pending(s, irq, cm) &&
141
- (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
142
- (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
143
- if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
144
- best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
145
- best_irq = irq;
146
- }
147
- }
148
+
149
+ if (virt) {
150
+ gic_get_best_virq(s, cpu, &best_irq, &best_prio, &group);
151
+ } else {
152
+ gic_get_best_irq(s, cpu, &best_irq, &best_prio, &group);
153
}
154
155
if (best_irq != 1023) {
156
trace_gic_update_bestirq(cpu, best_irq, best_prio,
157
- s->priority_mask[cpu], s->running_priority[cpu]);
158
+ s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
159
}
160
161
irq_level = fiq_level = 0;
162
163
- if (best_prio < s->priority_mask[cpu]) {
164
- s->current_pending[cpu] = best_irq;
165
- if (best_prio < s->running_priority[cpu]) {
166
- int group = GIC_DIST_TEST_GROUP(best_irq, cm);
167
-
168
- if (extract32(s->ctlr, group, 1) &&
169
- extract32(s->cpu_ctlr[cpu], group, 1)) {
170
- if (group == 0 && s->cpu_ctlr[cpu] & GICC_CTLR_FIQ_EN) {
171
+ if (best_prio < s->priority_mask[cpu_iface]) {
172
+ s->current_pending[cpu_iface] = best_irq;
173
+ if (best_prio < s->running_priority[cpu_iface]) {
174
+ if (gic_irq_signaling_enabled(s, cpu, virt, 1 << group)) {
175
+ if (group == 0 &&
176
+ s->cpu_ctlr[cpu_iface] & GICC_CTLR_FIQ_EN) {
177
DPRINTF("Raised pending FIQ %d (cpu %d)\n",
178
- best_irq, cpu);
179
+ best_irq, cpu_iface);
180
fiq_level = 1;
181
- trace_gic_update_set_irq(cpu, "fiq", fiq_level);
182
+ trace_gic_update_set_irq(cpu, virt ? "vfiq" : "fiq",
183
+ fiq_level);
184
} else {
185
DPRINTF("Raised pending IRQ %d (cpu %d)\n",
186
- best_irq, cpu);
187
+ best_irq, cpu_iface);
188
irq_level = 1;
189
- trace_gic_update_set_irq(cpu, "irq", irq_level);
190
+ trace_gic_update_set_irq(cpu, virt ? "virq" : "irq",
191
+ irq_level);
192
}
193
}
194
}
195
}
196
197
- qemu_set_irq(s->parent_irq[cpu], irq_level);
198
- qemu_set_irq(s->parent_fiq[cpu], fiq_level);
199
+ qemu_set_irq(irq_lines[cpu], irq_level);
200
+ qemu_set_irq(fiq_lines[cpu], fiq_level);
201
}
202
}
203
204
+static void gic_update(GICState *s)
205
+{
206
+ gic_update_internal(s, false);
207
+}
208
+
209
/* Return true if this LR is empty, i.e. the corresponding bit
210
* in ELRSR is set.
211
*/
212
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
213
&& !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
214
}
215
216
+static void gic_update_virt(GICState *s)
217
+{
218
+ gic_update_internal(s, true);
219
+}
220
+
221
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
222
int cm, int target)
223
{
224
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
225
}
226
}
227
228
- gic_update(s);
229
+ if (gic_is_vcpu(cpu)) {
230
+ gic_update_virt(s);
231
+ } else {
232
+ gic_update(s);
233
+ }
234
DPRINTF("ACK %d\n", irq);
235
return ret;
236
}
237
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
238
*/
239
int rcpu = gic_get_vcpu_real_id(cpu);
240
s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
241
+
242
+ /* Update the virtual interface in case a maintenance interrupt should
243
+ * be raised.
244
+ */
245
+ gic_update_virt(s);
246
return;
247
}
248
249
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
250
}
251
}
252
253
+ gic_update_virt(s);
254
return;
255
}
256
257
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
258
"gic_cpu_write: Bad offset %x\n", (int)offset);
259
return MEMTX_OK;
260
}
261
- gic_update(s);
262
+
263
+ if (gic_is_vcpu(cpu)) {
264
+ gic_update_virt(s);
265
+ } else {
266
+ gic_update(s);
267
+ }
268
+
269
return MEMTX_OK;
270
}
271
272
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
273
return MEMTX_OK;
274
}
275
276
+ gic_update_virt(s);
277
return MEMTX_OK;
278
}
279
280
--
2.18.0
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
3
Implement the maintenance interrupt generation that is part of the GICv2
4
virtualization extensions.
5
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Message-id: 20180228193125.20577-14-richard.henderson@linaro.org
8
Message-id: 20180727095421.386-18-luc.michel@greensocs.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
target/arm/translate.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++
11
hw/intc/arm_gic.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
9
1 file changed, 68 insertions(+)
12
1 file changed, 97 insertions(+)
10
13
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
16
--- a/hw/intc/arm_gic.c
14
+++ b/target/arm/translate.c
17
+++ b/hw/intc/arm_gic.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
18
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
16
return 0;
19
&& !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
17
}
20
}
18
21
19
+/* Advanced SIMD three registers of the same length extension.
22
+static inline void gic_extract_lr_info(GICState *s, int cpu,
20
+ * 31 25 23 22 20 16 12 11 10 9 8 3 0
23
+ int *num_eoi, int *num_valid, int *num_pending)
21
+ * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
22
+ * | 1 1 1 1 1 1 0 | op1 | D | op2 | Vn | Vd | 1 | o3 | 0 | o4 | N Q M U | Vm |
23
+ * +---------------+-----+---+-----+----+----+---+----+---+----+---------+----+
24
+ */
25
+static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
26
+{
24
+{
27
+ gen_helper_gvec_3_ptr *fn_gvec_ptr;
25
+ int lr_idx;
28
+ int rd, rn, rm, rot, size, opr_sz;
29
+ TCGv_ptr fpst;
30
+ bool q;
31
+
26
+
32
+ q = extract32(insn, 6, 1);
27
+ *num_eoi = 0;
33
+ VFP_DREG_D(rd, insn);
28
+ *num_valid = 0;
34
+ VFP_DREG_N(rn, insn);
29
+ *num_pending = 0;
35
+ VFP_DREG_M(rm, insn);
30
+
36
+ if ((rd | rn | rm) & q) {
31
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
37
+ return 1;
32
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
33
+
34
+ if (gic_lr_entry_is_eoi(*entry)) {
35
+ (*num_eoi)++;
36
+ }
37
+
38
+ if (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID) {
39
+ (*num_valid)++;
40
+ }
41
+
42
+ if (GICH_LR_STATE(*entry) == GICH_LR_STATE_PENDING) {
43
+ (*num_pending)++;
44
+ }
45
+ }
46
+}
47
+
48
+static void gic_compute_misr(GICState *s, int cpu)
49
+{
50
+ uint32_t value = 0;
51
+ int vcpu = cpu + GIC_NCPU;
52
+
53
+ int num_eoi, num_valid, num_pending;
54
+
55
+ gic_extract_lr_info(s, cpu, &num_eoi, &num_valid, &num_pending);
56
+
57
+ /* EOI */
58
+ if (num_eoi) {
59
+ value |= R_GICH_MISR_EOI_MASK;
38
+ }
60
+ }
39
+
61
+
40
+ if ((insn & 0xfe200f10) == 0xfc200800) {
62
+ /* U: true if only 0 or 1 LR entry is valid */
41
+ /* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
63
+ if ((s->h_hcr[cpu] & R_GICH_HCR_UIE_MASK) && (num_valid < 2)) {
42
+ size = extract32(insn, 20, 1);
64
+ value |= R_GICH_MISR_U_MASK;
43
+ rot = extract32(insn, 23, 2);
44
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
45
+ || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
46
+ return 1;
47
+ }
48
+ fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
49
+ } else if ((insn & 0xfea00f10) == 0xfc800800) {
50
+ /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
51
+ size = extract32(insn, 20, 1);
52
+ rot = extract32(insn, 24, 1);
53
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
54
+ || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
55
+ return 1;
56
+ }
57
+ fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
58
+ } else {
59
+ return 1;
60
+ }
65
+ }
61
+
66
+
62
+ if (s->fp_excp_el) {
67
+ /* LRENP: EOICount is not 0 */
63
+ gen_exception_insn(s, 4, EXCP_UDEF,
68
+ if ((s->h_hcr[cpu] & R_GICH_HCR_LRENPIE_MASK) &&
64
+ syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
69
+ ((s->h_hcr[cpu] & R_GICH_HCR_EOICount_MASK) != 0)) {
65
+ return 0;
70
+ value |= R_GICH_MISR_LRENP_MASK;
66
+ }
67
+ if (!s->vfp_enabled) {
68
+ return 1;
69
+ }
71
+ }
70
+
72
+
71
+ opr_sz = (1 + q) * 8;
73
+ /* NP: no pending interrupts */
72
+ fpst = get_fpstatus_ptr(1);
74
+ if ((s->h_hcr[cpu] & R_GICH_HCR_NPIE_MASK) && (num_pending == 0)) {
73
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
75
+ value |= R_GICH_MISR_NP_MASK;
74
+ vfp_reg_offset(1, rn),
76
+ }
75
+ vfp_reg_offset(1, rm), fpst,
77
+
76
+ opr_sz, opr_sz, rot, fn_gvec_ptr);
78
+ /* VGrp0E: group0 virq signaling enabled */
77
+ tcg_temp_free_ptr(fpst);
79
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0EIE_MASK) &&
78
+ return 0;
80
+ (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
81
+ value |= R_GICH_MISR_VGrp0E_MASK;
82
+ }
83
+
84
+ /* VGrp0D: group0 virq signaling disabled */
85
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0DIE_MASK) &&
86
+ !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
87
+ value |= R_GICH_MISR_VGrp0D_MASK;
88
+ }
89
+
90
+ /* VGrp1E: group1 virq signaling enabled */
91
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1EIE_MASK) &&
92
+ (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
93
+ value |= R_GICH_MISR_VGrp1E_MASK;
94
+ }
95
+
96
+ /* VGrp1D: group1 virq signaling disabled */
97
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1DIE_MASK) &&
98
+ !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
99
+ value |= R_GICH_MISR_VGrp1D_MASK;
100
+ }
101
+
102
+ s->h_misr[cpu] = value;
79
+}
103
+}
80
+
104
+
81
static int disas_coproc_insn(DisasContext *s, uint32_t insn)
105
+static void gic_update_maintenance(GICState *s)
106
+{
107
+ int cpu = 0;
108
+ int maint_level;
109
+
110
+ for (cpu = 0; cpu < s->num_cpu; cpu++) {
111
+ gic_compute_misr(s, cpu);
112
+ maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
113
+
114
+ qemu_set_irq(s->maintenance_irq[cpu], maint_level);
115
+ }
116
+}
117
+
118
static void gic_update_virt(GICState *s)
82
{
119
{
83
int cpnum, is64, crn, crm, opc1, opc2, isread, rt, rt2;
120
gic_update_internal(s, true);
84
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
121
+ gic_update_maintenance(s);
85
}
122
}
86
}
123
87
}
124
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
88
+ } else if ((insn & 0x0e000a00) == 0x0c000800
89
+ && arm_dc_feature(s, ARM_FEATURE_V8)) {
90
+ if (disas_neon_insn_3same_ext(s, insn)) {
91
+ goto illegal_op;
92
+ }
93
+ return;
94
} else if ((insn & 0x0fe00000) == 0x0c400000) {
95
/* Coprocessor double register transfer. */
96
ARCH(5TE);
97
--
125
--
98
2.16.2
126
2.18.0
99
127
100
128
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
3
Add some traces to the ARM GIC to catch register accesses (distributor,
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
(v)cpu interface and virtual interface), and to take into account
5
Message-id: 20180228193125.20577-7-richard.henderson@linaro.org
5
virtualization extensions (print `vcpu` instead of `cpu` when needed).
6
7
Also add some virtualization extensions specific traces: LR updating
8
and maintenance IRQ generation.
9
10
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20180727095421.386-19-luc.michel@greensocs.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
15
---
8
target/arm/translate-a64.c | 29 +++++++++++++++++++++++++++++
16
hw/intc/arm_gic.c | 31 +++++++++++++++++++++++++------
9
1 file changed, 29 insertions(+)
17
hw/intc/trace-events | 12 ++++++++++--
18
2 files changed, 35 insertions(+), 8 deletions(-)
10
19
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
20
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
12
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
22
--- a/hw/intc/arm_gic.c
14
+++ b/target/arm/translate-a64.c
23
+++ b/hw/intc/arm_gic.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
24
@@ -XXX,XX +XXX,XX @@ static inline void gic_update_internal(GICState *s, bool virt)
16
case 0x19: /* FMULX */
25
}
17
is_fp = true;
26
18
break;
27
if (best_irq != 1023) {
19
+ case 0x1d: /* SQRDMLAH */
28
- trace_gic_update_bestirq(cpu, best_irq, best_prio,
20
+ case 0x1f: /* SQRDMLSH */
29
- s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
21
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
30
+ trace_gic_update_bestirq(virt ? "vcpu" : "cpu", cpu,
22
+ unallocated_encoding(s);
31
+ best_irq, best_prio,
23
+ return;
32
+ s->priority_mask[cpu_iface],
24
+ }
33
+ s->running_priority[cpu_iface]);
34
}
35
36
irq_level = fiq_level = 0;
37
@@ -XXX,XX +XXX,XX @@ static void gic_update_maintenance(GICState *s)
38
gic_compute_misr(s, cpu);
39
maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
40
41
+ trace_gic_update_maintenance_irq(cpu, maint_level);
42
qemu_set_irq(s->maintenance_irq[cpu], maint_level);
43
}
44
}
45
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
46
* is in the wrong group.
47
*/
48
irq = gic_get_current_pending_irq(s, cpu, attrs);
49
- trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
50
+ trace_gic_acknowledge_irq(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
51
+ gic_get_vcpu_real_id(cpu), irq);
52
53
if (irq >= GIC_MAXIRQ) {
54
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
55
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_dist_read(void *opaque, hwaddr offset, uint64_t *data,
56
switch (size) {
57
case 1:
58
*data = gic_dist_readb(opaque, offset, attrs);
59
- return MEMTX_OK;
60
+ break;
61
case 2:
62
*data = gic_dist_readb(opaque, offset, attrs);
63
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
64
- return MEMTX_OK;
65
+ break;
66
case 4:
67
*data = gic_dist_readb(opaque, offset, attrs);
68
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
69
*data |= gic_dist_readb(opaque, offset + 2, attrs) << 16;
70
*data |= gic_dist_readb(opaque, offset + 3, attrs) << 24;
71
- return MEMTX_OK;
25
+ break;
72
+ break;
26
default:
73
default:
27
unallocated_encoding(s);
74
return MEMTX_ERROR;
28
return;
75
}
29
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
76
+
30
tcg_op, tcg_idx);
77
+ trace_gic_dist_read(offset, size, *data);
31
}
78
+ return MEMTX_OK;
32
break;
79
}
33
+ case 0x1d: /* SQRDMLAH */
80
34
+ read_vec_element_i32(s, tcg_res, rd, pass,
81
static void gic_dist_writeb(void *opaque, hwaddr offset,
35
+ is_scalar ? size : MO_32);
82
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
36
+ if (size == 1) {
83
static MemTxResult gic_dist_write(void *opaque, hwaddr offset, uint64_t data,
37
+ gen_helper_neon_qrdmlah_s16(tcg_res, cpu_env,
84
unsigned size, MemTxAttrs attrs)
38
+ tcg_op, tcg_idx, tcg_res);
85
{
39
+ } else {
86
+ trace_gic_dist_write(offset, size, data);
40
+ gen_helper_neon_qrdmlah_s32(tcg_res, cpu_env,
87
+
41
+ tcg_op, tcg_idx, tcg_res);
88
switch (size) {
42
+ }
89
case 1:
43
+ break;
90
gic_dist_writeb(opaque, offset, data, attrs);
44
+ case 0x1f: /* SQRDMLSH */
91
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
45
+ read_vec_element_i32(s, tcg_res, rd, pass,
92
*data = 0;
46
+ is_scalar ? size : MO_32);
93
break;
47
+ if (size == 1) {
94
}
48
+ gen_helper_neon_qrdmlsh_s16(tcg_res, cpu_env,
95
+
49
+ tcg_op, tcg_idx, tcg_res);
96
+ trace_gic_cpu_read(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
50
+ } else {
97
+ gic_get_vcpu_real_id(cpu), offset, *data);
51
+ gen_helper_neon_qrdmlsh_s32(tcg_res, cpu_env,
98
return MEMTX_OK;
52
+ tcg_op, tcg_idx, tcg_res);
99
}
53
+ }
100
54
+ break;
101
static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
55
default:
102
uint32_t value, MemTxAttrs attrs)
56
g_assert_not_reached();
103
{
57
}
104
+ trace_gic_cpu_write(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
105
+ gic_get_vcpu_real_id(cpu), offset, value);
106
+
107
switch (offset) {
108
case 0x00: /* Control */
109
gic_set_cpu_control(s, cpu, value, attrs);
110
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
111
return MEMTX_OK;
112
}
113
114
+ trace_gic_hyp_read(addr, *data);
115
return MEMTX_OK;
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
119
GICState *s = ARM_GIC(opaque);
120
int vcpu = cpu + GIC_NCPU;
121
122
+ trace_gic_hyp_write(addr, value);
123
+
124
switch (addr) {
125
case A_GICH_HCR: /* Hypervisor Control */
126
s->h_hcr[cpu] = value & GICH_HCR_MASK;
127
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
128
}
129
130
s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
131
+ trace_gic_lr_entry(cpu, lr_idx, s->h_lr[lr_idx][cpu]);
132
break;
133
}
134
135
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/intc/trace-events
138
+++ b/hw/intc/trace-events
139
@@ -XXX,XX +XXX,XX @@ aspeed_vic_write(uint64_t offset, unsigned size, uint32_t data) "To 0x%" PRIx64
140
gic_enable_irq(int irq) "irq %d enabled"
141
gic_disable_irq(int irq) "irq %d disabled"
142
gic_set_irq(int irq, int level, int cpumask, int target) "irq %d level %d cpumask 0x%x target 0x%x"
143
-gic_update_bestirq(int cpu, int irq, int prio, int priority_mask, int running_priority) "cpu %d irq %d priority %d cpu priority mask %d cpu running priority %d"
144
+gic_update_bestirq(const char *s, int cpu, int irq, int prio, int priority_mask, int running_priority) "%s %d irq %d priority %d cpu priority mask %d cpu running priority %d"
145
gic_update_set_irq(int cpu, const char *name, int level) "cpu[%d]: %s = %d"
146
-gic_acknowledge_irq(int cpu, int irq) "cpu %d acknowledged irq %d"
147
+gic_acknowledge_irq(const char *s, int cpu, int irq) "%s %d acknowledged irq %d"
148
+gic_cpu_write(const char *s, int cpu, int addr, uint32_t val) "%s %d iface write at 0x%08x 0x%08" PRIx32
149
+gic_cpu_read(const char *s, int cpu, int addr, uint32_t val) "%s %d iface read at 0x%08x: 0x%08" PRIx32
150
+gic_hyp_read(int addr, uint32_t val) "hyp read at 0x%08x: 0x%08" PRIx32
151
+gic_hyp_write(int addr, uint32_t val) "hyp write at 0x%08x: 0x%08" PRIx32
152
+gic_dist_read(int addr, unsigned int size, uint32_t val) "dist read at 0x%08x size %u: 0x%08" PRIx32
153
+gic_dist_write(int addr, unsigned int size, uint32_t val) "dist write at 0x%08x size %u: 0x%08" PRIx32
154
+gic_lr_entry(int cpu, int entry, uint32_t val) "cpu %d: new lr entry %d: 0x%08" PRIx32
155
+gic_update_maintenance_irq(int cpu, int val) "cpu %d: maintenance = %d"
156
157
# hw/intc/arm_gicv3_cpuif.c
158
gicv3_icc_pmr_read(uint32_t cpu, uint64_t val) "GICv3 ICC_PMR read cpu 0x%x value 0x%" PRIx64
58
--
159
--
59
2.16.2
160
2.18.0
60
161
61
162
1
From: Alistair Francis <alistair.francis@xilinx.com>
1
From: Luc Michel <luc.michel@greensocs.com>
2
2
3
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
3
This commit improves the way the GIC is realized and connected in the
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
ZynqMP SoC. The security extensions are enabled only if requested in the
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
machine state. The same goes for the virtualization extensions.
6
7
All the GIC to APU CPU(s) IRQ lines are now connected, including FIQ,
8
vIRQ and vFIQ. The missing CPU to GIC timers IRQ connections are also
9
added (HYP and SEC timers).
10
11
The GIC maintenance IRQs are back-wired to the correct GIC PPIs.
12
13
Finally, the MMIO mappings are reworked to take into account the ZynqMP
14
specifics. The GIC (v)CPU interface is aliased 16 times:
15
* for the first 0x1000 bytes from 0xf9010000 to 0xf901f000
16
* for the second 0x1000 bytes from 0xf9020000 to 0xf902f000
17
Mappings of the virtual interface and virtual CPU interface are mapped
18
only when virtualization extensions are requested. The
19
XlnxZynqMPGICRegion struct has been enhanced to be able to catch all
20
this information.
21
22
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
23
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
24
Message-id: 20180727095421.386-20-luc.michel@greensocs.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
25
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
26
---
8
include/hw/arm/xlnx-zynqmp.h | 2 ++
27
include/hw/arm/xlnx-zynqmp.h | 4 +-
9
hw/arm/xlnx-zynqmp.c | 14 ++++++++++++++
28
hw/arm/xlnx-zynqmp.c | 92 ++++++++++++++++++++++++++++++++----
10
2 files changed, 16 insertions(+)
29
2 files changed, 86 insertions(+), 10 deletions(-)
11
30
12
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
31
diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
13
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/arm/xlnx-zynqmp.h
33
--- a/include/hw/arm/xlnx-zynqmp.h
15
+++ b/include/hw/arm/xlnx-zynqmp.h
34
+++ b/include/hw/arm/xlnx-zynqmp.h
16
@@ -XXX,XX +XXX,XX @@
35
@@ -XXX,XX +XXX,XX @@
17
#include "hw/dma/xlnx_dpdma.h"
36
#define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
18
#include "hw/display/xlnx_dp.h"
37
#define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
19
#include "hw/intc/xlnx-zynqmp-ipi.h"
38
20
+#include "hw/timer/xlnx-zynqmp-rtc.h"
39
-#define XLNX_ZYNQMP_GIC_REGIONS 2
21
40
+#define XLNX_ZYNQMP_GIC_REGIONS 6
22
#define TYPE_XLNX_ZYNQMP "xlnx,zynqmp"
41
23
#define XLNX_ZYNQMP(obj) OBJECT_CHECK(XlnxZynqMPState, (obj), \
42
/* ZynqMP maps the ARM GIC regions (GICC, GICD ...) at consecutive 64k offsets
24
@@ -XXX,XX +XXX,XX @@ typedef struct XlnxZynqMPState {
43
* and under-decodes the 64k region. This mirrors the 4k regions to every 4k
25
XlnxDPState dp;
44
@@ -XXX,XX +XXX,XX @@
26
XlnxDPDMAState dpdma;
45
*/
27
XlnxZynqMPIPI ipi;
46
28
+ XlnxZynqMPRTC rtc;
47
#define XLNX_ZYNQMP_GIC_REGION_SIZE 0x1000
29
48
-#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE - 1)
30
char *boot_cpu;
49
+#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE)
31
ARMCPU *boot_cpu_ptr;
50
51
#define XLNX_ZYNQMP_MAX_LOW_RAM_SIZE 0x80000000ull
52
32
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
53
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
33
index XXXXXXX..XXXXXXX 100644
54
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/xlnx-zynqmp.c
55
--- a/hw/arm/xlnx-zynqmp.c
35
+++ b/hw/arm/xlnx-zynqmp.c
56
+++ b/hw/arm/xlnx-zynqmp.c
36
@@ -XXX,XX +XXX,XX @@
57
@@ -XXX,XX +XXX,XX @@
37
#define IPI_ADDR 0xFF300000
58
38
#define IPI_IRQ 64
59
#define ARM_PHYS_TIMER_PPI 30
39
60
#define ARM_VIRT_TIMER_PPI 27
40
+#define RTC_ADDR 0xffa60000
61
+#define ARM_HYP_TIMER_PPI 26
41
+#define RTC_IRQ 26
62
+#define ARM_SEC_TIMER_PPI 29
42
+
63
+#define GIC_MAINTENANCE_PPI 25
43
#define SDHCI_CAPABILITIES 0x280737ec6481 /* Datasheet: UG1085 (v1.7) */
64
44
65
#define GEM_REVISION 0x40070106
45
static const uint64_t gem_addr[XLNX_ZYNQMP_NUM_GEMS] = {
66
46
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_init(Object *obj)
67
#define GIC_BASE_ADDR 0xf9000000
47
68
#define GIC_DIST_ADDR 0xf9010000
48
object_initialize(&s->ipi, sizeof(s->ipi), TYPE_XLNX_ZYNQMP_IPI);
69
#define GIC_CPU_ADDR 0xf9020000
49
qdev_set_parent_bus(DEVICE(&s->ipi), sysbus_get_default());
70
+#define GIC_VIFACE_ADDR 0xf9040000
50
+
71
+#define GIC_VCPU_ADDR 0xf9060000
51
+ object_initialize(&s->rtc, sizeof(s->rtc), TYPE_XLNX_ZYNQMP_RTC);
72
52
+ qdev_set_parent_bus(DEVICE(&s->rtc), sysbus_get_default());
73
#define SATA_INTR 133
53
}
74
#define SATA_ADDR 0xFD0C0000
54
75
@@ -XXX,XX +XXX,XX @@ static const int adma_ch_intr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
55
static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
76
typedef struct XlnxZynqMPGICRegion {
77
int region_index;
78
uint32_t address;
79
+ uint32_t offset;
80
+ bool virt;
81
} XlnxZynqMPGICRegion;
82
83
static const XlnxZynqMPGICRegion xlnx_zynqmp_gic_regions[] = {
84
- { .region_index = 0, .address = GIC_DIST_ADDR, },
85
- { .region_index = 1, .address = GIC_CPU_ADDR, },
86
+ /* Distributor */
87
+ {
88
+ .region_index = 0,
89
+ .address = GIC_DIST_ADDR,
90
+ .offset = 0,
91
+ .virt = false
92
+ },
93
+
94
+ /* CPU interface */
95
+ {
96
+ .region_index = 1,
97
+ .address = GIC_CPU_ADDR,
98
+ .offset = 0,
99
+ .virt = false
100
+ },
101
+ {
102
+ .region_index = 1,
103
+ .address = GIC_CPU_ADDR + 0x10000,
104
+ .offset = 0x1000,
105
+ .virt = false
106
+ },
107
+
108
+ /* Virtual interface */
109
+ {
110
+ .region_index = 2,
111
+ .address = GIC_VIFACE_ADDR,
112
+ .offset = 0,
113
+ .virt = true
114
+ },
115
+
116
+ /* Virtual CPU interface */
117
+ {
118
+ .region_index = 3,
119
+ .address = GIC_VCPU_ADDR,
120
+ .offset = 0,
121
+ .virt = true
122
+ },
123
+ {
124
+ .region_index = 3,
125
+ .address = GIC_VCPU_ADDR + 0x10000,
126
+ .offset = 0x1000,
127
+ .virt = true
128
+ },
129
};
130
131
static inline int arm_gic_ppi_index(int cpu_nr, int ppi_index)
56
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
132
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
133
qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", GIC_NUM_SPI_INTR + 32);
134
qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
135
qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
136
+ qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
137
+ qdev_prop_set_bit(DEVICE(&s->gic),
138
+ "has-virtualization-extensions", s->virt);
139
140
/* Realize APUs before realizing the GIC. KVM requires this. */
141
for (i = 0; i < num_apus; i++) {
142
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
143
for (i = 0; i < XLNX_ZYNQMP_GIC_REGIONS; i++) {
144
SysBusDevice *gic = SYS_BUS_DEVICE(&s->gic);
145
const XlnxZynqMPGICRegion *r = &xlnx_zynqmp_gic_regions[i];
146
- MemoryRegion *mr = sysbus_mmio_get_region(gic, r->region_index);
147
+ MemoryRegion *mr;
148
uint32_t addr = r->address;
149
int j;
150
151
- sysbus_mmio_map(gic, r->region_index, addr);
152
+ if (r->virt && !s->virt) {
153
+ continue;
154
+ }
155
156
+ mr = sysbus_mmio_get_region(gic, r->region_index);
157
for (j = 0; j < XLNX_ZYNQMP_GIC_ALIASES; j++) {
158
MemoryRegion *alias = &s->gic_mr[i][j];
159
160
- addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
161
memory_region_init_alias(alias, OBJECT(s), "zynqmp-gic-alias", mr,
162
- 0, XLNX_ZYNQMP_GIC_REGION_SIZE);
163
+ r->offset, XLNX_ZYNQMP_GIC_REGION_SIZE);
164
memory_region_add_subregion(system_memory, addr, alias);
165
+
166
+ addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
167
}
57
}
168
}
58
sysbus_mmio_map(SYS_BUS_DEVICE(&s->ipi), 0, IPI_ADDR);
169
59
sysbus_connect_irq(SYS_BUS_DEVICE(&s->ipi), 0, gic_spi[IPI_IRQ]);
170
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
60
+
171
sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
61
+ object_property_set_bool(OBJECT(&s->rtc), true, "realized", &err);
172
qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
62
+ if (err) {
173
ARM_CPU_IRQ));
63
+ error_propagate(errp, err);
174
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus,
64
+ return;
175
+ qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
65
+ }
176
+ ARM_CPU_FIQ));
66
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->rtc), 0, RTC_ADDR);
177
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 2,
67
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->rtc), 0, gic_spi[RTC_IRQ]);
178
+ qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
68
}
179
+ ARM_CPU_VIRQ));
69
180
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 3,
70
static Property xlnx_zynqmp_props[] = {
181
+ qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
182
+ ARM_CPU_VFIQ));
183
irq = qdev_get_gpio_in(DEVICE(&s->gic),
184
arm_gic_ppi_index(i, ARM_PHYS_TIMER_PPI));
185
- qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 0, irq);
186
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_PHYS, irq);
187
irq = qdev_get_gpio_in(DEVICE(&s->gic),
188
arm_gic_ppi_index(i, ARM_VIRT_TIMER_PPI));
189
- qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 1, irq);
190
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_VIRT, irq);
191
+ irq = qdev_get_gpio_in(DEVICE(&s->gic),
192
+ arm_gic_ppi_index(i, ARM_HYP_TIMER_PPI));
193
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_HYP, irq);
194
+ irq = qdev_get_gpio_in(DEVICE(&s->gic),
195
+ arm_gic_ppi_index(i, ARM_SEC_TIMER_PPI));
196
+ qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_SEC, irq);
197
+
198
+ if (s->virt) {
199
+ irq = qdev_get_gpio_in(DEVICE(&s->gic),
200
+ arm_gic_ppi_index(i, GIC_MAINTENANCE_PPI));
201
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 4, irq);
202
+ }
203
}
204
205
if (s->has_rpu) {
71
--
206
--
72
2.16.2
207
2.18.0
73
208
74
209
1
Add remaining easy registers to iotkit-secctl:
1
From: Luc Michel <luc.michel@greensocs.com>
2
* NSCCFG just routes its two bits out to external GPIO lines
3
* BRGINSTAT/BRGINTCLR/BRGINTEN can be dummies, because QEMU's
4
bus fabric can never report errors
5
2
3
Add support for GICv2 virtualization extensions by mapping the necessary
4
I/O regions and connecting the maintenance IRQ lines.
5
6
Declare those additions in the device tree and in the ACPI tables.
7
8
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20180727095421.386-21-luc.michel@greensocs.com
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180220180325.29818-18-peter.maydell@linaro.org
8
---
12
---
9
include/hw/misc/iotkit-secctl.h | 4 ++++
13
include/hw/arm/virt.h | 4 +++-
10
hw/misc/iotkit-secctl.c | 32 ++++++++++++++++++++++++++------
14
hw/arm/virt-acpi-build.c | 6 +++--
11
2 files changed, 30 insertions(+), 6 deletions(-)
15
hw/arm/virt.c | 52 +++++++++++++++++++++++++++++++++-------
16
3 files changed, 50 insertions(+), 12 deletions(-)
12
17
13
diff --git a/include/hw/misc/iotkit-secctl.h b/include/hw/misc/iotkit-secctl.h
18
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
14
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/misc/iotkit-secctl.h
20
--- a/include/hw/arm/virt.h
16
+++ b/include/hw/misc/iotkit-secctl.h
21
+++ b/include/hw/arm/virt.h
17
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@
18
* + sysbus MMIO region 1 is the "non-secure privilege control block" registers
23
#define NUM_VIRTIO_TRANSPORTS 32
19
* + named GPIO output "sec_resp_cfg" indicating whether blocked accesses
24
#define NUM_SMMU_IRQS 4
20
* should RAZ/WI or bus error
25
21
+ * + named GPIO output "nsc_cfg" whose value tracks the NSCCFG register value
26
-#define ARCH_GICV3_MAINT_IRQ 9
22
* Controlling the 2 APB PPCs in the IoTKit:
27
+#define ARCH_GIC_MAINT_IRQ 9
23
* + named GPIO outputs apb_ppc0_nonsec[0..2] and apb_ppc1_nonsec
28
24
* + named GPIO outputs apb_ppc0_ap[0..2] and apb_ppc1_ap
29
#define ARCH_TIMER_VIRT_IRQ 11
25
@@ -XXX,XX +XXX,XX @@ struct IoTKitSecCtl {
30
#define ARCH_TIMER_S_EL1_IRQ 13
26
31
@@ -XXX,XX +XXX,XX @@ enum {
27
/*< public >*/
32
VIRT_GIC_DIST,
28
qemu_irq sec_resp_cfg;
33
VIRT_GIC_CPU,
29
+ qemu_irq nsc_cfg_irq;
34
VIRT_GIC_V2M,
30
35
+ VIRT_GIC_HYP,
31
MemoryRegion s_regs;
36
+ VIRT_GIC_VCPU,
32
MemoryRegion ns_regs;
37
VIRT_GIC_ITS,
33
@@ -XXX,XX +XXX,XX @@ struct IoTKitSecCtl {
38
VIRT_GIC_REDIST,
34
uint32_t secppcintstat;
39
VIRT_GIC_REDIST2,
35
uint32_t secppcinten;
40
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
36
uint32_t secrespcfg;
37
+ uint32_t nsccfg;
38
+ uint32_t brginten;
39
40
IoTKitSecCtlPPC apb[IOTS_NUM_APB_PPC];
41
IoTKitSecCtlPPC apbexp[IOTS_NUM_APB_EXP_PPC];
42
diff --git a/hw/misc/iotkit-secctl.c b/hw/misc/iotkit-secctl.c
43
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/misc/iotkit-secctl.c
42
--- a/hw/arm/virt-acpi-build.c
45
+++ b/hw/misc/iotkit-secctl.c
43
+++ b/hw/arm/virt-acpi-build.c
46
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
44
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
47
case A_SECRESPCFG:
45
gicc->length = sizeof(*gicc);
48
r = s->secrespcfg;
46
if (vms->gic_version == 2) {
49
break;
47
gicc->base_address = cpu_to_le64(memmap[VIRT_GIC_CPU].base);
50
+ case A_NSCCFG:
48
+ gicc->gich_base_address = cpu_to_le64(memmap[VIRT_GIC_HYP].base);
51
+ r = s->nsccfg;
49
+ gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
52
+ break;
50
}
53
case A_SECPPCINTSTAT:
51
gicc->cpu_interface_number = cpu_to_le32(i);
54
r = s->secppcintstat;
52
gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
55
break;
53
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
56
case A_SECPPCINTEN:
54
if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
57
r = s->secppcinten;
55
gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
58
break;
56
}
59
+ case A_BRGINTSTAT:
57
- if (vms->virt && vms->gic_version == 3) {
60
+ /* QEMU's bus fabric can never report errors as it doesn't buffer
58
- gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GICV3_MAINT_IRQ));
61
+ * writes, so we never report bridge interrupts.
59
+ if (vms->virt) {
62
+ */
60
+ gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GIC_MAINT_IRQ));
63
+ r = 0;
61
}
64
+ break;
65
+ case A_BRGINTEN:
66
+ r = s->brginten;
67
+ break;
68
case A_AHBNSPPCEXP0:
69
case A_AHBNSPPCEXP1:
70
case A_AHBNSPPCEXP2:
71
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_read(void *opaque, hwaddr addr,
72
case A_APBSPPPCEXP3:
73
r = s->apbexp[offset_to_ppc_idx(offset)].sp;
74
break;
75
- case A_NSCCFG:
76
case A_SECMPCINTSTATUS:
77
case A_SECMSCINTSTAT:
78
case A_SECMSCINTEN:
79
- case A_BRGINTSTAT:
80
- case A_BRGINTEN:
81
case A_NSMSCEXP:
82
qemu_log_mask(LOG_UNIMP,
83
"IoTKit SecCtl S block read: "
84
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
85
}
62
}
86
63
87
switch (offset) {
64
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
88
+ case A_NSCCFG:
65
index XXXXXXX..XXXXXXX 100644
89
+ s->nsccfg = value & 3;
66
--- a/hw/arm/virt.c
90
+ qemu_set_irq(s->nsc_cfg_irq, s->nsccfg);
67
+++ b/hw/arm/virt.c
91
+ break;
68
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
92
case A_SECRESPCFG:
69
[VIRT_GIC_DIST] = { 0x08000000, 0x00010000 },
93
value &= 1;
70
[VIRT_GIC_CPU] = { 0x08010000, 0x00010000 },
94
s->secrespcfg = value;
71
[VIRT_GIC_V2M] = { 0x08020000, 0x00001000 },
95
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
72
+ [VIRT_GIC_HYP] = { 0x08030000, 0x00010000 },
96
s->secppcinten = value & 0x00f000f3;
73
+ [VIRT_GIC_VCPU] = { 0x08040000, 0x00010000 },
97
foreach_ppc(s, iotkit_secctl_ppc_update_irq_enable);
74
/* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
98
break;
75
[VIRT_GIC_ITS] = { 0x08080000, 0x00020000 },
99
+ case A_BRGINTCLR:
76
/* This redistributor space allows up to 2*64kB*123 CPUs */
100
+ break;
77
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)
101
+ case A_BRGINTEN:
78
102
+ s->brginten = value & 0xffff0000;
79
if (vms->virt) {
103
+ break;
80
qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
104
case A_AHBNSPPCEXP0:
81
- GIC_FDT_IRQ_TYPE_PPI, ARCH_GICV3_MAINT_IRQ,
105
case A_AHBNSPPCEXP1:
82
+ GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
106
case A_AHBNSPPCEXP2:
83
GIC_FDT_IRQ_FLAGS_LEVEL_HI);
107
@@ -XXX,XX +XXX,XX @@ static MemTxResult iotkit_secctl_s_write(void *opaque, hwaddr addr,
84
}
108
ppc = &s->apbexp[offset_to_ppc_idx(offset)];
85
} else {
109
iotkit_secctl_ppc_sp_write(ppc, value);
86
/* 'cortex-a15-gic' means 'GIC v2' */
110
break;
87
qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
111
- case A_NSCCFG:
88
"arm,cortex-a15-gic");
112
case A_SECMSCINTCLR:
89
- qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
113
case A_SECMSCINTEN:
90
- 2, vms->memmap[VIRT_GIC_DIST].base,
114
- case A_BRGINTCLR:
91
- 2, vms->memmap[VIRT_GIC_DIST].size,
115
- case A_BRGINTEN:
92
- 2, vms->memmap[VIRT_GIC_CPU].base,
116
qemu_log_mask(LOG_UNIMP,
93
- 2, vms->memmap[VIRT_GIC_CPU].size);
117
"IoTKit SecCtl S block write: "
94
+ if (!vms->virt) {
118
"unimplemented offset 0x%x\n", offset);
95
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
119
@@ -XXX,XX +XXX,XX @@ static void iotkit_secctl_reset(DeviceState *dev)
96
+ 2, vms->memmap[VIRT_GIC_DIST].base,
120
s->secppcintstat = 0;
97
+ 2, vms->memmap[VIRT_GIC_DIST].size,
121
s->secppcinten = 0;
98
+ 2, vms->memmap[VIRT_GIC_CPU].base,
122
s->secrespcfg = 0;
99
+ 2, vms->memmap[VIRT_GIC_CPU].size);
123
+ s->nsccfg = 0;
100
+ } else {
124
+ s->brginten = 0;
101
+ qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
125
102
+ 2, vms->memmap[VIRT_GIC_DIST].base,
126
foreach_ppc(s, iotkit_secctl_reset_ppc);
103
+ 2, vms->memmap[VIRT_GIC_DIST].size,
127
}
104
+ 2, vms->memmap[VIRT_GIC_CPU].base,
128
@@ -XXX,XX +XXX,XX @@ static void iotkit_secctl_init(Object *obj)
105
+ 2, vms->memmap[VIRT_GIC_CPU].size,
106
+ 2, vms->memmap[VIRT_GIC_HYP].base,
107
+ 2, vms->memmap[VIRT_GIC_HYP].size,
108
+ 2, vms->memmap[VIRT_GIC_VCPU].base,
109
+ 2, vms->memmap[VIRT_GIC_VCPU].size);
110
+ qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
111
+ GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
112
+ GIC_FDT_IRQ_FLAGS_LEVEL_HI);
113
+ }
129
}
114
}
130
115
131
qdev_init_gpio_out_named(dev, &s->sec_resp_cfg, "sec_resp_cfg", 1);
116
qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
132
+ qdev_init_gpio_out_named(dev, &s->nsc_cfg_irq, "nsc_cfg", 1);
117
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
133
118
qdev_prop_set_uint32(gicdev, "redist-region-count[1]",
134
memory_region_init_io(&s->s_regs, obj, &iotkit_secctl_s_ops,
119
MIN(smp_cpus - redist0_count, redist1_capacity));
135
s, "iotkit-secctl-s-regs", 0x1000);
120
}
136
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription iotkit_secctl_vmstate = {
121
+ } else {
137
VMSTATE_UINT32(secppcintstat, IoTKitSecCtl),
122
+ if (!kvm_irqchip_in_kernel()) {
138
VMSTATE_UINT32(secppcinten, IoTKitSecCtl),
123
+ qdev_prop_set_bit(gicdev, "has-virtualization-extensions",
139
VMSTATE_UINT32(secrespcfg, IoTKitSecCtl),
124
+ vms->virt);
140
+ VMSTATE_UINT32(nsccfg, IoTKitSecCtl),
125
+ }
141
+ VMSTATE_UINT32(brginten, IoTKitSecCtl),
126
}
142
VMSTATE_STRUCT_ARRAY(apb, IoTKitSecCtl, IOTS_NUM_APB_PPC, 1,
127
qdev_init_nofail(gicdev);
143
iotkit_secctl_ppc_vmstate, IoTKitSecCtlPPC),
128
gicbusdev = SYS_BUS_DEVICE(gicdev);
144
VMSTATE_STRUCT_ARRAY(apbexp, IoTKitSecCtl, IOTS_NUM_APB_EXP_PPC, 1,
129
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
130
}
131
} else {
132
sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_CPU].base);
133
+ if (vms->virt) {
134
+ sysbus_mmio_map(gicbusdev, 2, vms->memmap[VIRT_GIC_HYP].base);
135
+ sysbus_mmio_map(gicbusdev, 3, vms->memmap[VIRT_GIC_VCPU].base);
136
+ }
137
}
138
139
/* Wire the outputs from each CPU's generic timer and the GICv3
140
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
141
ppibase + timer_irq[irq]));
142
}
143
144
- qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", 0,
145
- qdev_get_gpio_in(gicdev, ppibase
146
- + ARCH_GICV3_MAINT_IRQ));
147
+ if (type == 3) {
148
+ qemu_irq irq = qdev_get_gpio_in(gicdev,
149
+ ppibase + ARCH_GIC_MAINT_IRQ);
150
+ qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt",
151
+ 0, irq);
152
+ } else if (vms->virt) {
153
+ qemu_irq irq = qdev_get_gpio_in(gicdev,
154
+ ppibase + ARCH_GIC_MAINT_IRQ);
155
+ sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus, irq);
156
+ }
157
+
158
qdev_connect_gpio_out_named(cpudev, "pmu-interrupt", 0,
159
qdev_get_gpio_in(gicdev, ppibase
160
+ VIRTUAL_PMU_IRQ));
145
--
161
--
146
2.16.2
162
2.18.0
147
163
148
164
1
Instead of loading kernels, device trees, and the like to
1
From: Adam Lackorzynski <adam@l4re.org>
2
the system address space, use the CPU's address space. This
3
is important if we're trying to load the file to memory or
4
via an alias memory region that is provided by an SoC
5
object and thus not mapped into the system address space.
6
2
3
Use an int64_t as a return type to restore
4
the negative check for arm_load_as.
5
6
Signed-off-by: Adam Lackorzynski <adam@l4re.org>
7
Message-id: 20180730173712.GG4987@os.inf.tu-dresden.de
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180220180325.29818-3-peter.maydell@linaro.org
11
---
10
---
12
hw/arm/boot.c | 119 +++++++++++++++++++++++++++++++++++++---------------------
11
hw/arm/boot.c | 8 ++++----
13
1 file changed, 76 insertions(+), 43 deletions(-)
12
1 file changed, 4 insertions(+), 4 deletions(-)
14
13
15
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/arm/boot.c
16
--- a/hw/arm/boot.c
18
+++ b/hw/arm/boot.c
17
+++ b/hw/arm/boot.c
19
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
20
#define ARM64_TEXT_OFFSET_OFFSET 8
19
return 0;
21
#define ARM64_MAGIC_OFFSET 56
22
23
+static AddressSpace *arm_boot_address_space(ARMCPU *cpu,
24
+ const struct arm_boot_info *info)
25
+{
26
+ /* Return the address space to use for bootloader reads and writes.
27
+ * We prefer the secure address space if the CPU has it and we're
28
+ * going to boot the guest into it.
29
+ */
30
+ int asidx;
31
+ CPUState *cs = CPU(cpu);
32
+
33
+ if (arm_feature(&cpu->env, ARM_FEATURE_EL3) && info->secure_boot) {
34
+ asidx = ARMASIdx_S;
35
+ } else {
36
+ asidx = ARMASIdx_NS;
37
+ }
38
+
39
+ return cpu_get_address_space(cs, asidx);
40
+}
41
+
42
typedef enum {
43
FIXUP_NONE = 0, /* do nothing */
44
FIXUP_TERMINATOR, /* end of insns */
45
@@ -XXX,XX +XXX,XX @@ static const ARMInsnFixup smpboot[] = {
46
};
47
48
static void write_bootloader(const char *name, hwaddr addr,
49
- const ARMInsnFixup *insns, uint32_t *fixupcontext)
50
+ const ARMInsnFixup *insns, uint32_t *fixupcontext,
51
+ AddressSpace *as)
52
{
53
/* Fix up the specified bootloader fragment and write it into
54
* guest memory using rom_add_blob_fixed(). fixupcontext is
55
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
56
code[i] = tswap32(insn);
57
}
58
59
- rom_add_blob_fixed(name, code, len * sizeof(uint32_t), addr);
60
+ rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
61
62
g_free(code);
63
}
20
}
64
@@ -XXX,XX +XXX,XX @@ static void default_write_secondary(ARMCPU *cpu,
21
65
const struct arm_boot_info *info)
22
-static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
66
{
23
- uint64_t *lowaddr, uint64_t *highaddr,
67
uint32_t fixupcontext[FIXUP_MAX];
24
- int elf_machine, AddressSpace *as)
68
+ AddressSpace *as = arm_boot_address_space(cpu, info);
25
+static int64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
69
26
+ uint64_t *lowaddr, uint64_t *highaddr,
70
fixupcontext[FIXUP_GIC_CPU_IF] = info->gic_cpu_if_addr;
27
+ int elf_machine, AddressSpace *as)
71
fixupcontext[FIXUP_BOOTREG] = info->smp_bootreg_addr;
72
@@ -XXX,XX +XXX,XX @@ static void default_write_secondary(ARMCPU *cpu,
73
}
74
75
write_bootloader("smpboot", info->smp_loader_start,
76
- smpboot, fixupcontext);
77
+ smpboot, fixupcontext, as);
78
}
79
80
void arm_write_secure_board_setup_dummy_smc(ARMCPU *cpu,
81
const struct arm_boot_info *info,
82
hwaddr mvbar_addr)
83
{
84
+ AddressSpace *as = arm_boot_address_space(cpu, info);
85
int n;
86
uint32_t mvbar_blob[] = {
87
/* mvbar_addr: secure monitor vectors
88
@@ -XXX,XX +XXX,XX @@ void arm_write_secure_board_setup_dummy_smc(ARMCPU *cpu,
89
for (n = 0; n < ARRAY_SIZE(mvbar_blob); n++) {
90
mvbar_blob[n] = tswap32(mvbar_blob[n]);
91
}
92
- rom_add_blob_fixed("board-setup-mvbar", mvbar_blob, sizeof(mvbar_blob),
93
- mvbar_addr);
94
+ rom_add_blob_fixed_as("board-setup-mvbar", mvbar_blob, sizeof(mvbar_blob),
95
+ mvbar_addr, as);
96
97
for (n = 0; n < ARRAY_SIZE(board_setup_blob); n++) {
98
board_setup_blob[n] = tswap32(board_setup_blob[n]);
99
}
100
- rom_add_blob_fixed("board-setup", board_setup_blob,
101
- sizeof(board_setup_blob), info->board_setup_addr);
102
+ rom_add_blob_fixed_as("board-setup", board_setup_blob,
103
+ sizeof(board_setup_blob), info->board_setup_addr, as);
104
}
105
106
static void default_reset_secondary(ARMCPU *cpu,
107
const struct arm_boot_info *info)
108
{
109
+ AddressSpace *as = arm_boot_address_space(cpu, info);
110
CPUState *cs = CPU(cpu);
111
112
- address_space_stl_notdirty(&address_space_memory, info->smp_bootreg_addr,
113
+ address_space_stl_notdirty(as, info->smp_bootreg_addr,
114
0, MEMTXATTRS_UNSPECIFIED, NULL);
115
cpu_set_pc(cs, info->smp_loader_start);
116
}
117
@@ -XXX,XX +XXX,XX @@ static inline bool have_dtb(const struct arm_boot_info *info)
118
}
119
120
#define WRITE_WORD(p, value) do { \
121
- address_space_stl_notdirty(&address_space_memory, p, value, \
122
+ address_space_stl_notdirty(as, p, value, \
123
MEMTXATTRS_UNSPECIFIED, NULL); \
124
p += 4; \
125
} while (0)
126
127
-static void set_kernel_args(const struct arm_boot_info *info)
128
+static void set_kernel_args(const struct arm_boot_info *info, AddressSpace *as)
129
{
130
int initrd_size = info->initrd_size;
131
hwaddr base = info->loader_start;
132
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args(const struct arm_boot_info *info)
133
int cmdline_size;
134
135
cmdline_size = strlen(info->kernel_cmdline);
136
- cpu_physical_memory_write(p + 8, info->kernel_cmdline,
137
- cmdline_size + 1);
138
+ address_space_write(as, p + 8, MEMTXATTRS_UNSPECIFIED,
139
+ (const uint8_t *)info->kernel_cmdline,
140
+ cmdline_size + 1);
141
cmdline_size = (cmdline_size >> 2) + 1;
142
WRITE_WORD(p, cmdline_size + 2);
143
WRITE_WORD(p, 0x54410009);
144
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args(const struct arm_boot_info *info)
145
atag_board_len = (info->atag_board(info, atag_board_buf) + 3) & ~3;
146
WRITE_WORD(p, (atag_board_len + 8) >> 2);
147
WRITE_WORD(p, 0x414f4d50);
148
- cpu_physical_memory_write(p, atag_board_buf, atag_board_len);
149
+ address_space_write(as, p, MEMTXATTRS_UNSPECIFIED,
150
+ atag_board_buf, atag_board_len);
151
p += atag_board_len;
152
}
153
/* ATAG_END */
154
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args(const struct arm_boot_info *info)
155
WRITE_WORD(p, 0);
156
}
157
158
-static void set_kernel_args_old(const struct arm_boot_info *info)
159
+static void set_kernel_args_old(const struct arm_boot_info *info,
160
+ AddressSpace *as)
161
{
162
hwaddr p;
163
const char *s;
164
@@ -XXX,XX +XXX,XX @@ static void set_kernel_args_old(const struct arm_boot_info *info)
165
}
166
s = info->kernel_cmdline;
167
if (s) {
168
- cpu_physical_memory_write(p, s, strlen(s) + 1);
169
+ address_space_write(as, p, MEMTXATTRS_UNSPECIFIED,
170
+ (const uint8_t *)s, strlen(s) + 1);
171
} else {
172
WRITE_WORD(p, 0);
173
}
174
@@ -XXX,XX +XXX,XX @@ static void fdt_add_psci_node(void *fdt)
175
* @addr: the address to load the image at
176
* @binfo: struct describing the boot environment
177
* @addr_limit: upper limit of the available memory area at @addr
178
+ * @as: address space to load image to
179
*
180
* Load a device tree supplied by the machine or by the user with the
181
* '-dtb' command line option, and put it at offset @addr in target
182
@@ -XXX,XX +XXX,XX @@ static void fdt_add_psci_node(void *fdt)
183
* Note: Must not be called unless have_dtb(binfo) is true.
184
*/
185
static int load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
186
- hwaddr addr_limit)
187
+ hwaddr addr_limit, AddressSpace *as)
188
{
189
void *fdt = NULL;
190
int size, rc;
191
@@ -XXX,XX +XXX,XX @@ static int load_dtb(hwaddr addr, const struct arm_boot_info *binfo,
192
/* Put the DTB into the memory map as a ROM image: this will ensure
193
* the DTB is copied again upon reset, even if addr points into RAM.
194
*/
195
- rom_add_blob_fixed("dtb", fdt, size, addr);
196
+ rom_add_blob_fixed_as("dtb", fdt, size, addr, as);
197
198
g_free(fdt);
199
200
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
201
}
202
203
if (cs == first_cpu) {
204
+ AddressSpace *as = arm_boot_address_space(cpu, info);
205
+
206
cpu_set_pc(cs, info->loader_start);
207
208
if (!have_dtb(info)) {
209
if (old_param) {
210
- set_kernel_args_old(info);
211
+ set_kernel_args_old(info, as);
212
} else {
213
- set_kernel_args(info);
214
+ set_kernel_args(info, as);
215
}
216
}
217
} else {
218
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
219
220
static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
221
uint64_t *lowaddr, uint64_t *highaddr,
222
- int elf_machine)
223
+ int elf_machine, AddressSpace *as)
224
{
28
{
225
bool elf_is64;
29
bool elf_is64;
226
union {
30
union {
227
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
31
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
228
}
32
} elf_header;
229
}
33
int data_swab = 0;
230
34
bool big_endian;
231
- ret = load_elf(info->kernel_filename, NULL, NULL,
35
- uint64_t ret = -1;
232
- pentry, lowaddr, highaddr, big_endian, elf_machine,
36
+ int64_t ret = -1;
233
- 1, data_swab);
37
Error *err = NULL;
234
+ ret = load_elf_as(info->kernel_filename, NULL, NULL,
38
235
+ pentry, lowaddr, highaddr, big_endian, elf_machine,
39
236
+ 1, data_swab, as);
237
if (ret <= 0) {
238
/* The header loaded but the image didn't */
239
exit(1);
240
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
241
}
242
243
static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
244
- hwaddr *entry)
245
+ hwaddr *entry, AddressSpace *as)
246
{
247
hwaddr kernel_load_offset = KERNEL64_LOAD_ADDR;
248
uint8_t *buffer;
249
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
250
}
251
252
*entry = mem_base + kernel_load_offset;
253
- rom_add_blob_fixed(filename, buffer, size, *entry);
254
+ rom_add_blob_fixed_as(filename, buffer, size, *entry, as);
255
256
g_free(buffer);
257
258
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
259
ARMCPU *cpu = n->cpu;
260
struct arm_boot_info *info =
261
container_of(n, struct arm_boot_info, load_kernel_notifier);
262
+ AddressSpace *as = arm_boot_address_space(cpu, info);
263
264
/* The board code is not supposed to set secure_board_setup unless
265
* running its code in secure mode is actually possible, and KVM
266
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
267
* the kernel is supposed to be loaded by the bootloader), copy the
268
* DTB to the base of RAM for the bootloader to pick up.
269
*/
270
- if (load_dtb(info->loader_start, info, 0) < 0) {
271
+ if (load_dtb(info->loader_start, info, 0, as) < 0) {
272
exit(1);
273
}
274
}
275
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
276
277
/* Assume that raw images are linux kernels, and ELF images are not. */
278
kernel_size = arm_load_elf(info, &elf_entry, &elf_low_addr,
279
- &elf_high_addr, elf_machine);
280
+ &elf_high_addr, elf_machine, as);
281
if (kernel_size > 0 && have_dtb(info)) {
282
/* If there is still some room left at the base of RAM, try and put
283
* the DTB there like we do for images loaded with -bios or -pflash.
284
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
285
if (elf_low_addr < info->loader_start) {
286
elf_low_addr = 0;
287
}
288
- if (load_dtb(info->loader_start, info, elf_low_addr) < 0) {
289
+ if (load_dtb(info->loader_start, info, elf_low_addr, as) < 0) {
290
exit(1);
291
}
292
}
293
}
294
entry = elf_entry;
295
if (kernel_size < 0) {
296
- kernel_size = load_uimage(info->kernel_filename, &entry, NULL,
297
- &is_linux, NULL, NULL);
298
+ kernel_size = load_uimage_as(info->kernel_filename, &entry, NULL,
299
+ &is_linux, NULL, NULL, as);
300
}
301
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64) && kernel_size < 0) {
302
kernel_size = load_aarch64_image(info->kernel_filename,
303
- info->loader_start, &entry);
304
+ info->loader_start, &entry, as);
305
is_linux = 1;
306
} else if (kernel_size < 0) {
307
/* 32-bit ARM */
308
entry = info->loader_start + KERNEL_LOAD_ADDR;
309
- kernel_size = load_image_targphys(info->kernel_filename, entry,
310
- info->ram_size - KERNEL_LOAD_ADDR);
311
+ kernel_size = load_image_targphys_as(info->kernel_filename, entry,
312
+ info->ram_size - KERNEL_LOAD_ADDR,
313
+ as);
314
is_linux = 1;
315
}
316
if (kernel_size < 0) {
317
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
318
uint32_t fixupcontext[FIXUP_MAX];
319
320
if (info->initrd_filename) {
321
- initrd_size = load_ramdisk(info->initrd_filename,
322
- info->initrd_start,
323
- info->ram_size -
324
- info->initrd_start);
325
+ initrd_size = load_ramdisk_as(info->initrd_filename,
326
+ info->initrd_start,
327
+ info->ram_size - info->initrd_start,
328
+ as);
329
if (initrd_size < 0) {
330
- initrd_size = load_image_targphys(info->initrd_filename,
331
- info->initrd_start,
332
- info->ram_size -
333
- info->initrd_start);
334
+ initrd_size = load_image_targphys_as(info->initrd_filename,
335
+ info->initrd_start,
336
+ info->ram_size -
337
+ info->initrd_start,
338
+ as);
339
}
340
if (initrd_size < 0) {
341
error_report("could not load initrd '%s'",
342
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
343
344
/* Place the DTB after the initrd in memory with alignment. */
345
dtb_start = QEMU_ALIGN_UP(info->initrd_start + initrd_size, align);
346
- if (load_dtb(dtb_start, info, 0) < 0) {
347
+ if (load_dtb(dtb_start, info, 0, as) < 0) {
348
exit(1);
349
}
350
fixupcontext[FIXUP_ARGPTR] = dtb_start;
351
@@ -XXX,XX +XXX,XX @@ static void arm_load_kernel_notify(Notifier *notifier, void *data)
352
fixupcontext[FIXUP_ENTRYPOINT] = entry;
353
354
write_bootloader("bootloader", info->loader_start,
355
- primary_loader, fixupcontext);
356
+ primary_loader, fixupcontext, as);
357
358
if (info->nb_cpus > 1) {
359
info->write_secondary_boot(cpu, info);
360
--
40
--
361
2.16.2
41
2.18.0
362
42
363
43
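
The int64_t change in the patch above matters because load_elf_as() reports
failure as a negative value: with an unsigned return type, the "ret <= 0"
test in arm_load_elf() and the caller's "kernel_size < 0" test can never see
that failure. A minimal standalone illustration of the C semantics involved
(not QEMU code, just a sketch of why the signed type restores the check):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A loader that fails returns a negative value such as -1. */
        uint64_t u = (uint64_t)-1;  /* old return type: wraps to 2^64 - 1 */
        int64_t  s = -1;            /* patched return type: stays negative */

        printf("uint64_t ret <= 0 ? %s\n", (u <= 0) ? "yes" : "no"); /* no  */
        printf("int64_t  ret <= 0 ? %s\n", (s <= 0) ? "yes" : "no"); /* yes */
        return 0;
    }
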
1
The Cortex-M33 allows the system to specify the reset value of the
1
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
2
secure Vector Table Offset Register (VTOR) by asserting config
2
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
3
signals. In particular, guest images for the MPS2 AN505 board rely
3
Implement this in arm_excp_unmasked().
4
on the MPS2's initial VTOR being correct for that board.
5
Implement a QEMU property so board and SoC code can set the reset
6
value to the correct value.
7
4
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20180220180325.29818-7-peter.maydell@linaro.org
7
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
11
---
8
---
12
target/arm/cpu.h | 3 +++
9
target/arm/cpu.h | 6 ++++--
13
target/arm/cpu.c | 18 ++++++++++++++----
10
1 file changed, 4 insertions(+), 2 deletions(-)
14
2 files changed, 17 insertions(+), 4 deletions(-)
15
11
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
14
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
15
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
16
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
21
*/
17
break;
22
uint32_t psci_conduit;
18
23
19
case EXCP_VFIQ:
24
+ /* For v8M, initial value of the Secure VTOR */
20
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)) {
25
+ uint32_t init_svtor;
21
+ if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
26
+
22
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
27
/* [QEMU_]KVM_ARM_TARGET_* constant for this CPU, or
23
/* VFIQs are only taken when hypervized and non-secure. */
28
* QEMU_KVM_ARM_TARGET_NONE if the kernel doesn't support this CPU type.
24
return false;
29
*/
30
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/cpu.c
33
+++ b/target/arm/cpu.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
35
uint32_t initial_msp; /* Loaded from 0x0 */
36
uint32_t initial_pc; /* Loaded from 0x4 */
37
uint8_t *rom;
38
+ uint32_t vecbase;
39
40
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
41
env->v7m.secure = true;
42
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
43
/* Unlike A/R profile, M profile defines the reset LR value */
44
env->regs[14] = 0xffffffff;
45
46
- /* Load the initial SP and PC from the vector table at address 0 */
47
- rom = rom_ptr(0);
48
+ env->v7m.vecbase[M_REG_S] = cpu->init_svtor & 0xffffff80;
49
+
50
+ /* Load the initial SP and PC from offset 0 and 4 in the vector table */
51
+ vecbase = env->v7m.vecbase[env->v7m.secure];
52
+ rom = rom_ptr(vecbase);
53
if (rom) {
54
/* Address zero is covered by ROM which hasn't yet been
55
* copied into physical memory.
56
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
57
* it got copied into memory. In the latter case, rom_ptr
58
* will return a NULL pointer and we should use ldl_phys instead.
59
*/
60
- initial_msp = ldl_phys(s->as, 0);
61
- initial_pc = ldl_phys(s->as, 4);
62
+ initial_msp = ldl_phys(s->as, vecbase);
63
+ initial_pc = ldl_phys(s->as, vecbase + 4);
64
}
25
}
65
26
return !(env->daif & PSTATE_F);
66
env->regs[13] = initial_msp & 0xFFFFFFFC;
27
case EXCP_VIRQ:
67
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_pmsav7_dregion_property =
28
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)) {
68
pmsav7_dregion,
29
+ if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
69
qdev_prop_uint32, uint32_t);
30
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
70
31
/* VIRQs are only taken when hypervized and non-secure. */
71
+/* M profile: initial value of the Secure VTOR */
32
return false;
72
+static Property arm_cpu_initsvtor_property =
33
}
73
+ DEFINE_PROP_UINT32("init-svtor", ARMCPU, init_svtor, 0);
74
+
75
static void arm_cpu_post_init(Object *obj)
76
{
77
ARMCPU *cpu = ARM_CPU(obj);
78
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
79
qdev_prop_allow_set_link_before_realize,
80
OBJ_PROP_LINK_UNREF_ON_RELEASE,
81
&error_abort);
82
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_initsvtor_property,
83
+ &error_abort);
84
}
85
86
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property,
87
--
34
--
88
2.16.2
35
2.18.0
89
36
90
37
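
As a usage note for the init-svtor property defined above: the consumer is
the SoC or board model that knows what value the hardware's config signals
would give the reset Secure VTOR. A sketch of the pattern, lifted from the
IoT Kit realize code later in this series (the 0x10000000 value is the IoT
Kit default; s, err and errp are the enclosing realize function's locals):

    /* Forwarded down to the CPU before the armv7m container is realized */
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-svtor", 0x10000000);
    object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
    if (err) {
        error_propagate(errp, err);
        return;
    }
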
1
Move the definition of the struct for the unimplemented-device
1
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
2
from unimp.c to unimp.h, so that users can embed the struct
2
and TDA, which we implement in the functions access_tdra(),
3
in their own device structs if they prefer.
3
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
4
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
5
Implement this by having the access functions check MDCR_EL2.TDE
6
and HCR_EL2.TGE.
4
7
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-10-peter.maydell@linaro.org
10
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
9
---
11
---
10
include/hw/misc/unimp.h | 10 ++++++++++
12
target/arm/helper.c | 18 ++++++++++++------
11
hw/misc/unimp.c | 10 ----------
13
1 file changed, 12 insertions(+), 6 deletions(-)
12
2 files changed, 10 insertions(+), 10 deletions(-)
13
14
14
diff --git a/include/hw/misc/unimp.h b/include/hw/misc/unimp.h
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/misc/unimp.h
17
--- a/target/arm/helper.c
17
+++ b/include/hw/misc/unimp.h
18
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
19
20
bool isread)
20
#define TYPE_UNIMPLEMENTED_DEVICE "unimplemented-device"
21
22
+#define UNIMPLEMENTED_DEVICE(obj) \
23
+ OBJECT_CHECK(UnimplementedDeviceState, (obj), TYPE_UNIMPLEMENTED_DEVICE)
24
+
25
+typedef struct {
26
+ SysBusDevice parent_obj;
27
+ MemoryRegion iomem;
28
+ char *name;
29
+ uint64_t size;
30
+} UnimplementedDeviceState;
31
+
32
/**
33
* create_unimplemented_device: create and map a dummy device
34
* @name: name of the device for debug logging
35
diff --git a/hw/misc/unimp.c b/hw/misc/unimp.c
36
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/misc/unimp.c
38
+++ b/hw/misc/unimp.c
39
@@ -XXX,XX +XXX,XX @@
40
#include "qemu/log.h"
41
#include "qapi/error.h"
42
43
-#define UNIMPLEMENTED_DEVICE(obj) \
44
- OBJECT_CHECK(UnimplementedDeviceState, (obj), TYPE_UNIMPLEMENTED_DEVICE)
45
-
46
-typedef struct {
47
- SysBusDevice parent_obj;
48
- MemoryRegion iomem;
49
- char *name;
50
- uint64_t size;
51
-} UnimplementedDeviceState;
52
-
53
static uint64_t unimp_read(void *opaque, hwaddr offset, unsigned size)
54
{
21
{
55
UnimplementedDeviceState *s = UNIMPLEMENTED_DEVICE(opaque);
22
int el = arm_current_el(env);
23
+ bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
24
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
25
+ (env->cp15.hcr_el2 & HCR_TGE);
26
27
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDOSA)
28
- && !arm_is_secure_below_el3(env)) {
29
+ if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
30
return CP_ACCESS_TRAP_EL2;
31
}
32
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
33
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
34
bool isread)
35
{
36
int el = arm_current_el(env);
37
+ bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
38
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
39
+ (env->cp15.hcr_el2 & HCR_TGE);
40
41
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDRA)
42
- && !arm_is_secure_below_el3(env)) {
43
+ if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
44
return CP_ACCESS_TRAP_EL2;
45
}
46
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
47
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
48
bool isread)
49
{
50
int el = arm_current_el(env);
51
+ bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
52
+ (env->cp15.mdcr_el2 & MDCR_TDE) ||
53
+ (env->cp15.hcr_el2 & HCR_TGE);
54
55
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDA)
56
- && !arm_is_secure_below_el3(env)) {
57
+ if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
58
return CP_ACCESS_TRAP_EL2;
59
}
60
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
56
--
61
--
57
2.16.2
62
2.18.0
58
63
59
64
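
The point of exporting UnimplementedDeviceState in the patch above is that
an SoC model can now embed the stub device in its own state struct rather
than calling create_unimplemented_device(). A sketch of that pattern,
following what the IoT Kit code later in this series does for its dual
timer (init_sysbus_child() is the IoT Kit's local helper; s, obj and err
are the enclosing functions' locals):

    /* In the SoC state struct: */
    UnimplementedDeviceState dualtimer;

    /* In instance_init: create the child in place. */
    init_sysbus_child(obj, "dualtimer", &s->dualtimer, sizeof(s->dualtimer),
                      TYPE_UNIMPLEMENTED_DEVICE);

    /* In realize: give the stub a name and a size, then realize it and map
     * its MMIO region wherever the real device would live.
     */
    qdev_prop_set_string(DEVICE(&s->dualtimer), "name", "Dual timer");
    qdev_prop_set_uint64(DEVICE(&s->dualtimer), "size", 0x1000);
    object_property_set_bool(OBJECT(&s->dualtimer), true, "realized", &err);
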
1
Model the Arm IoT Kit documented in
1
When we raise a synchronous exception, if HCR_EL2.TGE is set then
2
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
2
exceptions targeting NS EL1 must be redirected to EL2. Implement
3
this in raise_exception() -- all synchronous exceptions go through
4
this function.
3
5
4
The Arm IoT Kit is a subsystem which includes a CPU and some devices,
6
(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
5
and is intended to be extended by adding extra devices to form a
7
already honours HCR_EL2.TGE when it determines the target EL
6
complete system. It is used in the MPS2 board's AN505 image for the
8
in arm_phys_excp_target_el().)
7
Cortex-M33.
8
9
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20180220180325.29818-19-peter.maydell@linaro.org
12
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
12
---
13
---
13
hw/arm/Makefile.objs | 1 +
14
target/arm/op_helper.c | 14 ++++++++++++++
14
include/hw/arm/iotkit.h | 109 ++++++++
15
1 file changed, 14 insertions(+)
15
hw/arm/iotkit.c | 598 ++++++++++++++++++++++++++++++++++++++++
16
default-configs/arm-softmmu.mak | 1 +
17
4 files changed, 709 insertions(+)
18
create mode 100644 include/hw/arm/iotkit.h
19
create mode 100644 hw/arm/iotkit.c
20
16
21
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
17
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
22
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/arm/Makefile.objs
19
--- a/target/arm/op_helper.c
24
+++ b/hw/arm/Makefile.objs
20
+++ b/target/arm/op_helper.c
25
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
21
@@ -XXX,XX +XXX,XX @@ static void raise_exception(CPUARMState *env, uint32_t excp,
26
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
22
{
27
obj-$(CONFIG_MPS2) += mps2.o
23
CPUState *cs = CPU(arm_env_get_cpu(env));
28
obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
24
29
+obj-$(CONFIG_IOTKIT) += iotkit.o
25
+ if ((env->cp15.hcr_el2 & HCR_TGE) &&
30
diff --git a/include/hw/arm/iotkit.h b/include/hw/arm/iotkit.h
26
+ target_el == 1 && !arm_is_secure(env)) {
31
new file mode 100644
27
+ /*
32
index XXXXXXX..XXXXXXX
28
+ * Redirect NS EL1 exceptions to NS EL2. These are reported with
33
--- /dev/null
29
+ * their original syndrome register value, with the exception of
34
+++ b/include/hw/arm/iotkit.h
30
+ * SIMD/FP access traps, which are reported as uncategorized
35
@@ -XXX,XX +XXX,XX @@
31
+ * (see DDI0478C.a D1.10.4)
36
+/*
32
+ */
37
+ * ARM IoT Kit
33
+ target_el = 2;
38
+ *
34
+ if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
39
+ * Copyright (c) 2018 Linaro Limited
35
+ syndrome = syn_uncategorized();
40
+ * Written by Peter Maydell
41
+ *
42
+ * This program is free software; you can redistribute it and/or modify
43
+ * it under the terms of the GNU General Public License version 2 or
44
+ * (at your option) any later version.
45
+ */
46
+
47
+/* This is a model of the Arm IoT Kit which is documented in
48
+ * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ecm0601256/index.html
49
+ * It contains:
50
+ * a Cortex-M33
51
+ * the IDAU
52
+ * some timers and watchdogs
53
+ * two peripheral protection controllers
54
+ * a memory protection controller
55
+ * a security controller
56
+ * a bus fabric which arranges that some parts of the address
57
+ * space are secure and non-secure aliases of each other
58
+ *
59
+ * QEMU interface:
60
+ * + QOM property "memory" is a MemoryRegion containing the devices provided
61
+ * by the board model.
62
+ * + QOM property "MAINCLK" is the frequency of the main system clock
63
+ * + QOM property "EXP_NUMIRQ" sets the number of expansion interrupts
64
+ * + Named GPIO inputs "EXP_IRQ" 0..n are the expansion interrupts, which
65
+ * are wired to the NVIC lines 32 .. n+32
66
+ * Controlling up to 4 AHB expansion PPBs which a system using the IoTKit
67
+ * might provide:
68
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_nonsec[0..15]
69
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_ap[0..15]
70
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_irq_enable
71
+ * + named GPIO outputs apb_ppcexp{0,1,2,3}_irq_clear
72
+ * + named GPIO inputs apb_ppcexp{0,1,2,3}_irq_status
73
+ * Controlling each of the 4 expansion AHB PPCs which a system using the IoTKit
74
+ * might provide:
75
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_nonsec[0..15]
76
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_ap[0..15]
77
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_irq_enable
78
+ * + named GPIO outputs ahb_ppcexp{0,1,2,3}_irq_clear
79
+ * + named GPIO inputs ahb_ppcexp{0,1,2,3}_irq_status
80
+ */
81
+
82
+#ifndef IOTKIT_H
83
+#define IOTKIT_H
84
+
85
+#include "hw/sysbus.h"
86
+#include "hw/arm/armv7m.h"
87
+#include "hw/misc/iotkit-secctl.h"
88
+#include "hw/misc/tz-ppc.h"
89
+#include "hw/timer/cmsdk-apb-timer.h"
90
+#include "hw/misc/unimp.h"
91
+#include "hw/or-irq.h"
92
+#include "hw/core/split-irq.h"
93
+
94
+#define TYPE_IOTKIT "iotkit"
95
+#define IOTKIT(obj) OBJECT_CHECK(IoTKit, (obj), TYPE_IOTKIT)
96
+
97
+/* We have an IRQ splitter and an OR gate input for each external PPC
98
+ * and the 2 internal PPCs
99
+ */
100
+#define NUM_EXTERNAL_PPCS (IOTS_NUM_AHB_EXP_PPC + IOTS_NUM_APB_EXP_PPC)
101
+#define NUM_PPCS (NUM_EXTERNAL_PPCS + 2)
102
+
103
+typedef struct IoTKit {
104
+ /*< private >*/
105
+ SysBusDevice parent_obj;
106
+
107
+ /*< public >*/
108
+ ARMv7MState armv7m;
109
+ IoTKitSecCtl secctl;
110
+ TZPPC apb_ppc0;
111
+ TZPPC apb_ppc1;
112
+ CMSDKAPBTIMER timer0;
113
+ CMSDKAPBTIMER timer1;
114
+ qemu_or_irq ppc_irq_orgate;
115
+ SplitIRQ sec_resp_splitter;
116
+ SplitIRQ ppc_irq_splitter[NUM_PPCS];
117
+
118
+ UnimplementedDeviceState dualtimer;
119
+ UnimplementedDeviceState s32ktimer;
120
+
121
+ MemoryRegion container;
122
+ MemoryRegion alias1;
123
+ MemoryRegion alias2;
124
+ MemoryRegion alias3;
125
+ MemoryRegion sram0;
126
+
127
+ qemu_irq *exp_irqs;
128
+ qemu_irq ppc0_irq;
129
+ qemu_irq ppc1_irq;
130
+ qemu_irq sec_resp_cfg;
131
+ qemu_irq sec_resp_cfg_in;
132
+ qemu_irq nsc_cfg_in;
133
+
134
+ qemu_irq irq_status_in[NUM_EXTERNAL_PPCS];
135
+
136
+ uint32_t nsccfg;
137
+
138
+ /* Properties */
139
+ MemoryRegion *board_memory;
140
+ uint32_t exp_numirq;
141
+ uint32_t mainclk_frq;
142
+} IoTKit;
143
+
144
+#endif
145
diff --git a/hw/arm/iotkit.c b/hw/arm/iotkit.c
146
new file mode 100644
147
index XXXXXXX..XXXXXXX
148
--- /dev/null
149
+++ b/hw/arm/iotkit.c
150
@@ -XXX,XX +XXX,XX @@
151
+/*
152
+ * Arm IoT Kit
153
+ *
154
+ * Copyright (c) 2018 Linaro Limited
155
+ * Written by Peter Maydell
156
+ *
157
+ * This program is free software; you can redistribute it and/or modify
158
+ * it under the terms of the GNU General Public License version 2 or
159
+ * (at your option) any later version.
160
+ */
161
+
162
+#include "qemu/osdep.h"
163
+#include "qemu/log.h"
164
+#include "qapi/error.h"
165
+#include "trace.h"
166
+#include "hw/sysbus.h"
167
+#include "hw/registerfields.h"
168
+#include "hw/arm/iotkit.h"
169
+#include "hw/misc/unimp.h"
170
+#include "hw/arm/arm.h"
171
+
172
+/* Create an alias region of @size bytes starting at @base
173
+ * which mirrors the memory starting at @orig.
174
+ */
175
+static void make_alias(IoTKit *s, MemoryRegion *mr, const char *name,
176
+ hwaddr base, hwaddr size, hwaddr orig)
177
+{
178
+ memory_region_init_alias(mr, NULL, name, &s->container, orig, size);
179
+ /* The alias is even lower priority than unimplemented_device regions */
180
+ memory_region_add_subregion_overlap(&s->container, base, mr, -1500);
181
+}
182
+
183
+static void init_sysbus_child(Object *parent, const char *childname,
184
+ void *child, size_t childsize,
185
+ const char *childtype)
186
+{
187
+ object_initialize(child, childsize, childtype);
188
+ object_property_add_child(parent, childname, OBJECT(child), &error_abort);
189
+ qdev_set_parent_bus(DEVICE(child), sysbus_get_default());
190
+}
191
+
192
+static void irq_status_forwarder(void *opaque, int n, int level)
193
+{
194
+ qemu_irq destirq = opaque;
195
+
196
+ qemu_set_irq(destirq, level);
197
+}
198
+
199
+static void nsccfg_handler(void *opaque, int n, int level)
200
+{
201
+ IoTKit *s = IOTKIT(opaque);
202
+
203
+ s->nsccfg = level;
204
+}
205
+
206
+static void iotkit_forward_ppc(IoTKit *s, const char *ppcname, int ppcnum)
207
+{
208
+ /* Each of the 4 AHB and 4 APB PPCs that might be present in a
209
+ * system using the IoTKit has a collection of control lines which
210
+ * are provided by the security controller and which we want to
211
+ * expose as control lines on the IoTKit device itself, so the
212
+ * code using the IoTKit can wire them up to the PPCs.
213
+ */
214
+ SplitIRQ *splitter = &s->ppc_irq_splitter[ppcnum];
215
+ DeviceState *iotkitdev = DEVICE(s);
216
+ DeviceState *dev_secctl = DEVICE(&s->secctl);
217
+ DeviceState *dev_splitter = DEVICE(splitter);
218
+ char *name;
219
+
220
+ name = g_strdup_printf("%s_nonsec", ppcname);
221
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
222
+ g_free(name);
223
+ name = g_strdup_printf("%s_ap", ppcname);
224
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
225
+ g_free(name);
226
+ name = g_strdup_printf("%s_irq_enable", ppcname);
227
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
228
+ g_free(name);
229
+ name = g_strdup_printf("%s_irq_clear", ppcname);
230
+ qdev_pass_gpios(dev_secctl, iotkitdev, name);
231
+ g_free(name);
232
+
233
+ /* irq_status is a little more tricky, because we need to
234
+ * split it so we can send it both to the security controller
235
+ * and to our OR gate for the NVIC interrupt line.
236
+ * Connect up the splitter's outputs, and create a GPIO input
237
+ * which will pass the line state to the input splitter.
238
+ */
239
+ name = g_strdup_printf("%s_irq_status", ppcname);
240
+ qdev_connect_gpio_out(dev_splitter, 0,
241
+ qdev_get_gpio_in_named(dev_secctl,
242
+ name, 0));
243
+ qdev_connect_gpio_out(dev_splitter, 1,
244
+ qdev_get_gpio_in(DEVICE(&s->ppc_irq_orgate), ppcnum));
245
+ s->irq_status_in[ppcnum] = qdev_get_gpio_in(dev_splitter, 0);
246
+ qdev_init_gpio_in_named_with_opaque(iotkitdev, irq_status_forwarder,
247
+ s->irq_status_in[ppcnum], name, 1);
248
+ g_free(name);
249
+}
250
+
251
+static void iotkit_forward_sec_resp_cfg(IoTKit *s)
252
+{
253
+ /* Forward the 3rd output from the splitter device as a
254
+ * named GPIO output of the iotkit object.
255
+ */
256
+ DeviceState *dev = DEVICE(s);
257
+ DeviceState *dev_splitter = DEVICE(&s->sec_resp_splitter);
258
+
259
+ qdev_init_gpio_out_named(dev, &s->sec_resp_cfg, "sec_resp_cfg", 1);
260
+ s->sec_resp_cfg_in = qemu_allocate_irq(irq_status_forwarder,
261
+ s->sec_resp_cfg, 1);
262
+ qdev_connect_gpio_out(dev_splitter, 2, s->sec_resp_cfg_in);
263
+}
264
+
265
+static void iotkit_init(Object *obj)
266
+{
267
+ IoTKit *s = IOTKIT(obj);
268
+ int i;
269
+
270
+ memory_region_init(&s->container, obj, "iotkit-container", UINT64_MAX);
271
+
272
+ init_sysbus_child(obj, "armv7m", &s->armv7m, sizeof(s->armv7m),
273
+ TYPE_ARMV7M);
274
+ qdev_prop_set_string(DEVICE(&s->armv7m), "cpu-type",
275
+ ARM_CPU_TYPE_NAME("cortex-m33"));
276
+
277
+ init_sysbus_child(obj, "secctl", &s->secctl, sizeof(s->secctl),
278
+ TYPE_IOTKIT_SECCTL);
279
+ init_sysbus_child(obj, "apb-ppc0", &s->apb_ppc0, sizeof(s->apb_ppc0),
280
+ TYPE_TZ_PPC);
281
+ init_sysbus_child(obj, "apb-ppc1", &s->apb_ppc1, sizeof(s->apb_ppc1),
282
+ TYPE_TZ_PPC);
283
+ init_sysbus_child(obj, "timer0", &s->timer0, sizeof(s->timer0),
284
+ TYPE_CMSDK_APB_TIMER);
285
+ init_sysbus_child(obj, "timer1", &s->timer1, sizeof(s->timer1),
286
+ TYPE_CMSDK_APB_TIMER);
287
+ init_sysbus_child(obj, "dualtimer", &s->dualtimer, sizeof(s->dualtimer),
288
+ TYPE_UNIMPLEMENTED_DEVICE);
289
+ object_initialize(&s->ppc_irq_orgate, sizeof(s->ppc_irq_orgate),
290
+ TYPE_OR_IRQ);
291
+ object_property_add_child(obj, "ppc-irq-orgate",
292
+ OBJECT(&s->ppc_irq_orgate), &error_abort);
293
+ object_initialize(&s->sec_resp_splitter, sizeof(s->sec_resp_splitter),
294
+ TYPE_SPLIT_IRQ);
295
+ object_property_add_child(obj, "sec-resp-splitter",
296
+ OBJECT(&s->sec_resp_splitter), &error_abort);
297
+ for (i = 0; i < ARRAY_SIZE(s->ppc_irq_splitter); i++) {
298
+ char *name = g_strdup_printf("ppc-irq-splitter-%d", i);
299
+ SplitIRQ *splitter = &s->ppc_irq_splitter[i];
300
+
301
+ object_initialize(splitter, sizeof(*splitter), TYPE_SPLIT_IRQ);
302
+ object_property_add_child(obj, name, OBJECT(splitter), &error_abort);
303
+ }
304
+ init_sysbus_child(obj, "s32ktimer", &s->s32ktimer, sizeof(s->s32ktimer),
305
+ TYPE_UNIMPLEMENTED_DEVICE);
306
+}
307
+
308
+static void iotkit_exp_irq(void *opaque, int n, int level)
309
+{
310
+ IoTKit *s = IOTKIT(opaque);
311
+
312
+ qemu_set_irq(s->exp_irqs[n], level);
313
+}
314
+
315
+static void iotkit_realize(DeviceState *dev, Error **errp)
316
+{
317
+ IoTKit *s = IOTKIT(dev);
318
+ int i;
319
+ MemoryRegion *mr;
320
+ Error *err = NULL;
321
+ SysBusDevice *sbd_apb_ppc0;
322
+ SysBusDevice *sbd_secctl;
323
+ DeviceState *dev_apb_ppc0;
324
+ DeviceState *dev_apb_ppc1;
325
+ DeviceState *dev_secctl;
326
+ DeviceState *dev_splitter;
327
+
328
+ if (!s->board_memory) {
329
+ error_setg(errp, "memory property was not set");
330
+ return;
331
+ }
332
+
333
+ if (!s->mainclk_frq) {
334
+ error_setg(errp, "MAINCLK property was not set");
335
+ return;
336
+ }
337
+
338
+ /* Handling of which devices should be available only to secure
339
+ * code is usually done differently for M profile than for A profile.
340
+ * Instead of putting some devices only into the secure address space,
341
+ * devices exist in both address spaces but with hard-wired security
342
+ * permissions that will cause the CPU to fault for non-secure accesses.
343
+ *
344
+ * The IoTKit has an IDAU (Implementation Defined Access Unit),
345
+ * which specifies hard-wired security permissions for different
346
+ * areas of the physical address space. For the IoTKit IDAU, the
347
+ * top 4 bits of the physical address are the IDAU region ID, and
348
+ * if bit 28 (ie the lowest bit of the ID) is 0 then this is an NS
349
+ * region, otherwise it is an S region.
350
+ *
351
+ * The various devices and RAMs are generally all mapped twice,
352
+ * once into a region that the IDAU defines as secure and once
353
+ * into a non-secure region. They sit behind either a Memory
354
+ * Protection Controller (for RAM) or a Peripheral Protection
355
+ * Controller (for devices), which allow a more fine grained
356
+ * configuration of whether non-secure accesses are permitted.
357
+ *
358
+ * (The other place that guest software can configure security
359
+ * permissions is in the architected SAU (Security Attribution
360
+ * Unit), which is entirely inside the CPU. The IDAU can upgrade
361
+ * the security attributes for a region to more restrictive than
362
+ * the SAU specifies, but cannot downgrade them.)
363
+ *
364
+ * 0x10000000..0x1fffffff alias of 0x00000000..0x0fffffff
365
+ * 0x20000000..0x2007ffff 32KB FPGA block RAM
366
+ * 0x30000000..0x3fffffff alias of 0x20000000..0x2fffffff
367
+ * 0x40000000..0x4000ffff base peripheral region 1
368
+ * 0x40010000..0x4001ffff CPU peripherals (none for IoTKit)
369
+ * 0x40020000..0x4002ffff system control element peripherals
370
+ * 0x40080000..0x400fffff base peripheral region 2
371
+ * 0x50000000..0x5fffffff alias of 0x40000000..0x4fffffff
372
+ */
373
+
374
+ memory_region_add_subregion_overlap(&s->container, 0, s->board_memory, -1);
375
+
376
+ qdev_prop_set_uint32(DEVICE(&s->armv7m), "num-irq", s->exp_numirq + 32);
377
+ /* In real hardware the initial Secure VTOR is set from the INITSVTOR0
378
+ * register in the IoT Kit System Control Register block, and the
379
+ * initial value of that is in turn specifiable by the FPGA that
380
+ * instantiates the IoT Kit. In QEMU we don't implement this wrinkle,
381
+ * and simply set the CPU's init-svtor to the IoT Kit default value.
382
+ */
383
+ qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-svtor", 0x10000000);
384
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(&s->container),
385
+ "memory", &err);
386
+ if (err) {
387
+ error_propagate(errp, err);
388
+ return;
389
+ }
390
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(s), "idau", &err);
391
+ if (err) {
392
+ error_propagate(errp, err);
393
+ return;
394
+ }
395
+ object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
396
+ if (err) {
397
+ error_propagate(errp, err);
398
+ return;
399
+ }
400
+
401
+ /* Connect our EXP_IRQ GPIOs to the NVIC's lines 32 and up. */
402
+ s->exp_irqs = g_new(qemu_irq, s->exp_numirq);
403
+ for (i = 0; i < s->exp_numirq; i++) {
404
+ s->exp_irqs[i] = qdev_get_gpio_in(DEVICE(&s->armv7m), i + 32);
405
+ }
406
+ qdev_init_gpio_in_named(dev, iotkit_exp_irq, "EXP_IRQ", s->exp_numirq);
407
+
408
+ /* Set up the big aliases first */
409
+ make_alias(s, &s->alias1, "alias 1", 0x10000000, 0x10000000, 0x00000000);
410
+ make_alias(s, &s->alias2, "alias 2", 0x30000000, 0x10000000, 0x20000000);
411
+ /* The 0x50000000..0x5fffffff region is not a pure alias: it has
412
+ * a few extra devices that only appear there (generally the
413
+ * control interfaces for the protection controllers).
414
+ * We implement this by mapping those devices over the top of this
415
+ * alias MR at a higher priority.
416
+ */
417
+ make_alias(s, &s->alias3, "alias 3", 0x50000000, 0x10000000, 0x40000000);
418
+
419
+ /* This RAM should be behind a Memory Protection Controller, but we
420
+ * don't implement that yet.
421
+ */
422
+ memory_region_init_ram(&s->sram0, NULL, "iotkit.sram0", 0x00008000, &err);
423
+ if (err) {
424
+ error_propagate(errp, err);
425
+ return;
426
+ }
427
+ memory_region_add_subregion(&s->container, 0x20000000, &s->sram0);
428
+
429
+ /* Security controller */
430
+ object_property_set_bool(OBJECT(&s->secctl), true, "realized", &err);
431
+ if (err) {
432
+ error_propagate(errp, err);
433
+ return;
434
+ }
435
+ sbd_secctl = SYS_BUS_DEVICE(&s->secctl);
436
+ dev_secctl = DEVICE(&s->secctl);
437
+ sysbus_mmio_map(sbd_secctl, 0, 0x50080000);
438
+ sysbus_mmio_map(sbd_secctl, 1, 0x40080000);
439
+
440
+ s->nsc_cfg_in = qemu_allocate_irq(nsccfg_handler, s, 1);
441
+ qdev_connect_gpio_out_named(dev_secctl, "nsc_cfg", 0, s->nsc_cfg_in);
442
+
443
+ /* The sec_resp_cfg output from the security controller must be split into
444
+ * multiple lines, one for each of the PPCs within the IoTKit and one
445
+ * that will be an output from the IoTKit to the system.
446
+ */
447
+ object_property_set_int(OBJECT(&s->sec_resp_splitter), 3,
448
+ "num-lines", &err);
449
+ if (err) {
450
+ error_propagate(errp, err);
451
+ return;
452
+ }
453
+ object_property_set_bool(OBJECT(&s->sec_resp_splitter), true,
454
+ "realized", &err);
455
+ if (err) {
456
+ error_propagate(errp, err);
457
+ return;
458
+ }
459
+ dev_splitter = DEVICE(&s->sec_resp_splitter);
460
+ qdev_connect_gpio_out_named(dev_secctl, "sec_resp_cfg", 0,
461
+ qdev_get_gpio_in(dev_splitter, 0));
462
+
463
+ /* Devices behind APB PPC0:
464
+ * 0x40000000: timer0
465
+ * 0x40001000: timer1
466
+ * 0x40002000: dual timer
467
+ * We must configure and realize each downstream device and connect
468
+ * it to the appropriate PPC port; then we can realize the PPC and
469
+ * map its upstream ends to the right place in the container.
470
+ */
471
+ qdev_prop_set_uint32(DEVICE(&s->timer0), "pclk-frq", s->mainclk_frq);
472
+ object_property_set_bool(OBJECT(&s->timer0), true, "realized", &err);
473
+ if (err) {
474
+ error_propagate(errp, err);
475
+ return;
476
+ }
477
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer0), 0,
478
+ qdev_get_gpio_in(DEVICE(&s->armv7m), 3));
479
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->timer0), 0);
480
+ object_property_set_link(OBJECT(&s->apb_ppc0), OBJECT(mr), "port[0]", &err);
481
+ if (err) {
482
+ error_propagate(errp, err);
483
+ return;
484
+ }
485
+
486
+ qdev_prop_set_uint32(DEVICE(&s->timer1), "pclk-frq", s->mainclk_frq);
487
+ object_property_set_bool(OBJECT(&s->timer1), true, "realized", &err);
488
+ if (err) {
489
+ error_propagate(errp, err);
490
+ return;
491
+ }
492
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->timer1), 0,
493
+ qdev_get_gpio_in(DEVICE(&s->armv7m), 3));
494
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->timer1), 0);
495
+ object_property_set_link(OBJECT(&s->apb_ppc0), OBJECT(mr), "port[1]", &err);
496
+ if (err) {
497
+ error_propagate(errp, err);
498
+ return;
499
+ }
500
+
501
+ qdev_prop_set_string(DEVICE(&s->dualtimer), "name", "Dual timer");
502
+ qdev_prop_set_uint64(DEVICE(&s->dualtimer), "size", 0x1000);
503
+ object_property_set_bool(OBJECT(&s->dualtimer), true, "realized", &err);
504
+ if (err) {
505
+ error_propagate(errp, err);
506
+ return;
507
+ }
508
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->dualtimer), 0);
509
+ object_property_set_link(OBJECT(&s->apb_ppc0), OBJECT(mr), "port[2]", &err);
510
+ if (err) {
511
+ error_propagate(errp, err);
512
+ return;
513
+ }
514
+
515
+ object_property_set_bool(OBJECT(&s->apb_ppc0), true, "realized", &err);
516
+ if (err) {
517
+ error_propagate(errp, err);
518
+ return;
519
+ }
520
+
521
+ sbd_apb_ppc0 = SYS_BUS_DEVICE(&s->apb_ppc0);
522
+ dev_apb_ppc0 = DEVICE(&s->apb_ppc0);
523
+
524
+ mr = sysbus_mmio_get_region(sbd_apb_ppc0, 0);
525
+ memory_region_add_subregion(&s->container, 0x40000000, mr);
526
+ mr = sysbus_mmio_get_region(sbd_apb_ppc0, 1);
527
+ memory_region_add_subregion(&s->container, 0x40001000, mr);
528
+ mr = sysbus_mmio_get_region(sbd_apb_ppc0, 2);
529
+ memory_region_add_subregion(&s->container, 0x40002000, mr);
530
+ for (i = 0; i < IOTS_APB_PPC0_NUM_PORTS; i++) {
531
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_nonsec", i,
532
+ qdev_get_gpio_in_named(dev_apb_ppc0,
533
+ "cfg_nonsec", i));
534
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_ap", i,
535
+ qdev_get_gpio_in_named(dev_apb_ppc0,
536
+ "cfg_ap", i));
537
+ }
538
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_irq_enable", 0,
539
+ qdev_get_gpio_in_named(dev_apb_ppc0,
540
+ "irq_enable", 0));
541
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc0_irq_clear", 0,
542
+ qdev_get_gpio_in_named(dev_apb_ppc0,
543
+ "irq_clear", 0));
544
+ qdev_connect_gpio_out(dev_splitter, 0,
545
+ qdev_get_gpio_in_named(dev_apb_ppc0,
546
+ "cfg_sec_resp", 0));
547
+
548
+ /* All the PPC irq lines (from the 2 internal PPCs and the 8 external
549
+ * ones) are sent individually to the security controller, and also
550
+ * ORed together to give a single combined PPC interrupt to the NVIC.
551
+ */
552
+ object_property_set_int(OBJECT(&s->ppc_irq_orgate),
553
+ NUM_PPCS, "num-lines", &err);
554
+ if (err) {
555
+ error_propagate(errp, err);
556
+ return;
557
+ }
558
+ object_property_set_bool(OBJECT(&s->ppc_irq_orgate), true,
559
+ "realized", &err);
560
+ if (err) {
561
+ error_propagate(errp, err);
562
+ return;
563
+ }
564
+ qdev_connect_gpio_out(DEVICE(&s->ppc_irq_orgate), 0,
565
+ qdev_get_gpio_in(DEVICE(&s->armv7m), 10));
566
+
567
+ /* 0x40010000 .. 0x4001ffff: private CPU region: unused in IoTKit */
568
+
569
+ /* 0x40020000 .. 0x4002ffff : IoTKit system control peripheral region */
570
+ /* Devices behind APB PPC1:
571
+ * 0x4002f000: S32K timer
572
+ */
573
+ qdev_prop_set_string(DEVICE(&s->s32ktimer), "name", "S32KTIMER");
574
+ qdev_prop_set_uint64(DEVICE(&s->s32ktimer), "size", 0x1000);
575
+ object_property_set_bool(OBJECT(&s->s32ktimer), true, "realized", &err);
576
+ if (err) {
577
+ error_propagate(errp, err);
578
+ return;
579
+ }
580
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->s32ktimer), 0);
581
+ object_property_set_link(OBJECT(&s->apb_ppc1), OBJECT(mr), "port[0]", &err);
582
+ if (err) {
583
+ error_propagate(errp, err);
584
+ return;
585
+ }
586
+
587
+ object_property_set_bool(OBJECT(&s->apb_ppc1), true, "realized", &err);
588
+ if (err) {
589
+ error_propagate(errp, err);
590
+ return;
591
+ }
592
+ mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->apb_ppc1), 0);
593
+ memory_region_add_subregion(&s->container, 0x4002f000, mr);
594
+
595
+ dev_apb_ppc1 = DEVICE(&s->apb_ppc1);
596
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_nonsec", 0,
597
+ qdev_get_gpio_in_named(dev_apb_ppc1,
598
+ "cfg_nonsec", 0));
599
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_ap", 0,
600
+ qdev_get_gpio_in_named(dev_apb_ppc1,
601
+ "cfg_ap", 0));
602
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_irq_enable", 0,
603
+ qdev_get_gpio_in_named(dev_apb_ppc1,
604
+ "irq_enable", 0));
605
+ qdev_connect_gpio_out_named(dev_secctl, "apb_ppc1_irq_clear", 0,
606
+ qdev_get_gpio_in_named(dev_apb_ppc1,
607
+ "irq_clear", 0));
608
+ qdev_connect_gpio_out(dev_splitter, 1,
609
+ qdev_get_gpio_in_named(dev_apb_ppc1,
610
+ "cfg_sec_resp", 0));
611
+
612
+ /* Using create_unimplemented_device() maps the stub into the
613
+ * system address space rather than into our container, but the
614
+ * overall effect to the guest is the same.
615
+ */
616
+ create_unimplemented_device("SYSINFO", 0x40020000, 0x1000);
617
+
618
+ create_unimplemented_device("SYSCONTROL", 0x50021000, 0x1000);
619
+ create_unimplemented_device("S32KWATCHDOG", 0x5002e000, 0x1000);
620
+
621
+ /* 0x40080000 .. 0x4008ffff : IoTKit second Base peripheral region */
622
+
623
+ create_unimplemented_device("NS watchdog", 0x40081000, 0x1000);
624
+ create_unimplemented_device("S watchdog", 0x50081000, 0x1000);
625
+
626
+ create_unimplemented_device("SRAM0 MPC", 0x50083000, 0x1000);
627
+
628
+ for (i = 0; i < ARRAY_SIZE(s->ppc_irq_splitter); i++) {
629
+ Object *splitter = OBJECT(&s->ppc_irq_splitter[i]);
630
+
631
+ object_property_set_int(splitter, 2, "num-lines", &err);
632
+ if (err) {
633
+ error_propagate(errp, err);
634
+ return;
635
+ }
636
+ object_property_set_bool(splitter, true, "realized", &err);
637
+ if (err) {
638
+ error_propagate(errp, err);
639
+ return;
640
+ }
36
+ }
641
+ }
37
+ }
642
+
38
+
643
+ for (i = 0; i < IOTS_NUM_AHB_EXP_PPC; i++) {
39
assert(!excp_is_internal(excp));
644
+ char *ppcname = g_strdup_printf("ahb_ppcexp%d", i);
40
cs->exception_index = excp;
645
+
41
env->exception.syndrome = syndrome;
646
+ iotkit_forward_ppc(s, ppcname, i);
647
+ g_free(ppcname);
648
+ }
649
+
650
+ for (i = 0; i < IOTS_NUM_APB_EXP_PPC; i++) {
651
+ char *ppcname = g_strdup_printf("apb_ppcexp%d", i);
652
+
653
+ iotkit_forward_ppc(s, ppcname, i + IOTS_NUM_AHB_EXP_PPC);
654
+ g_free(ppcname);
655
+ }
656
+
657
+ for (i = NUM_EXTERNAL_PPCS; i < NUM_PPCS; i++) {
658
+ /* Wire up IRQ splitter for internal PPCs */
659
+ DeviceState *devs = DEVICE(&s->ppc_irq_splitter[i]);
660
+ char *gpioname = g_strdup_printf("apb_ppc%d_irq_status",
661
+ i - NUM_EXTERNAL_PPCS);
662
+ TZPPC *ppc = (i == NUM_EXTERNAL_PPCS) ? &s->apb_ppc0 : &s->apb_ppc1;
663
+
664
+ qdev_connect_gpio_out(devs, 0,
665
+ qdev_get_gpio_in_named(dev_secctl, gpioname, 0));
666
+ qdev_connect_gpio_out(devs, 1,
667
+ qdev_get_gpio_in(DEVICE(&s->ppc_irq_orgate), i));
668
+ qdev_connect_gpio_out_named(DEVICE(ppc), "irq", 0,
669
+ qdev_get_gpio_in(devs, 0));
670
+ }
671
+
672
+ iotkit_forward_sec_resp_cfg(s);
673
+
674
+ system_clock_scale = NANOSECONDS_PER_SECOND / s->mainclk_frq;
675
+}
676
+
677
+static void iotkit_idau_check(IDAUInterface *ii, uint32_t address,
678
+ int *iregion, bool *exempt, bool *ns, bool *nsc)
679
+{
680
+ /* For IoTKit systems the IDAU responses are simple logical functions
681
+ * of the address bits. The NSC attribute is guest-adjustable via the
682
+ * NSCCFG register in the security controller.
683
+ */
684
+ IoTKit *s = IOTKIT(ii);
685
+ int region = extract32(address, 28, 4);
686
+
687
+ *ns = !(region & 1);
688
+ *nsc = (region == 1 && (s->nsccfg & 1)) || (region == 3 && (s->nsccfg & 2));
689
+ /* 0xe0000000..0xe00fffff and 0xf0000000..0xf00fffff are exempt */
690
+ *exempt = (address & 0xeff00000) == 0xe0000000;
691
+ *iregion = region;
692
+}
693
+
694
+static const VMStateDescription iotkit_vmstate = {
695
+ .name = "iotkit",
696
+ .version_id = 1,
697
+ .minimum_version_id = 1,
698
+ .fields = (VMStateField[]) {
699
+ VMSTATE_UINT32(nsccfg, IoTKit),
700
+ VMSTATE_END_OF_LIST()
701
+ }
702
+};
703
+
704
+static Property iotkit_properties[] = {
705
+ DEFINE_PROP_LINK("memory", IoTKit, board_memory, TYPE_MEMORY_REGION,
706
+ MemoryRegion *),
707
+ DEFINE_PROP_UINT32("EXP_NUMIRQ", IoTKit, exp_numirq, 64),
708
+ DEFINE_PROP_UINT32("MAINCLK", IoTKit, mainclk_frq, 0),
709
+ DEFINE_PROP_END_OF_LIST()
710
+};
711
+
712
+static void iotkit_reset(DeviceState *dev)
713
+{
714
+ IoTKit *s = IOTKIT(dev);
715
+
716
+ s->nsccfg = 0;
717
+}
718
+
719
+static void iotkit_class_init(ObjectClass *klass, void *data)
720
+{
721
+ DeviceClass *dc = DEVICE_CLASS(klass);
722
+ IDAUInterfaceClass *iic = IDAU_INTERFACE_CLASS(klass);
723
+
724
+ dc->realize = iotkit_realize;
725
+ dc->vmsd = &iotkit_vmstate;
726
+ dc->props = iotkit_properties;
727
+ dc->reset = iotkit_reset;
728
+ iic->check = iotkit_idau_check;
729
+}
730
+
731
+static const TypeInfo iotkit_info = {
732
+ .name = TYPE_IOTKIT,
733
+ .parent = TYPE_SYS_BUS_DEVICE,
734
+ .instance_size = sizeof(IoTKit),
735
+ .instance_init = iotkit_init,
736
+ .class_init = iotkit_class_init,
737
+ .interfaces = (InterfaceInfo[]) {
738
+ { TYPE_IDAU_INTERFACE },
739
+ { }
740
+ }
741
+};
742
+
743
+static void iotkit_register_types(void)
744
+{
745
+ type_register_static(&iotkit_info);
746
+}
747
+
748
+type_init(iotkit_register_types);
749
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
750
index XXXXXXX..XXXXXXX 100644
751
--- a/default-configs/arm-softmmu.mak
752
+++ b/default-configs/arm-softmmu.mak
753
@@ -XXX,XX +XXX,XX @@ CONFIG_MPS2_FPGAIO=y
754
CONFIG_MPS2_SCC=y
755
756
CONFIG_TZ_PPC=y
757
+CONFIG_IOTKIT=y
758
CONFIG_IOTKIT_SECCTL=y
759
760
CONFIG_VERSATILE_PCI=y
761
--
42
--
762
2.16.2
43
2.18.0
763
44
764
45
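
The IoT Kit model above is intended to be wrapped by board code (the MPS2
AN505 board model elsewhere in this series does that). Going only by the
QEMU interface comment in iotkit.h, a board has to hand it a "memory" link,
a MAINCLK frequency, and optionally EXP_NUMIRQ/EXP_IRQ wiring. A hedged
sketch of the minimum wiring; mms, machine and SYSCLK_FRQ are illustrative
stand-ins, not the actual AN505 code, and 64 is just the property default:

    object_initialize(&mms->iotkit, sizeof(mms->iotkit), TYPE_IOTKIT);
    object_property_add_child(OBJECT(machine), "iotkit", OBJECT(&mms->iotkit),
                              &error_abort);
    qdev_set_parent_bus(DEVICE(&mms->iotkit), sysbus_get_default());
    /* MemoryRegion containing the devices provided by the board model */
    object_property_set_link(OBJECT(&mms->iotkit), OBJECT(get_system_memory()),
                             "memory", &error_abort);
    qdev_prop_set_uint32(DEVICE(&mms->iotkit), "EXP_NUMIRQ", 64);
    qdev_prop_set_uint32(DEVICE(&mms->iotkit), "MAINCLK", SYSCLK_FRQ);
    object_property_set_bool(OBJECT(&mms->iotkit), true, "realized",
                             &error_fatal);
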
1
Add a model of the TrustZone peripheral protection controller (PPC),
1
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
2
which is used to gate transactions to non-TZ-aware peripherals so
2
1 for all purposes other than direct reads" if HCR_EL2.TGE
3
that secure software can configure them to not be accessible to
3
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
4
non-secure software.
4
purposes other than direct reads" if HCR_EL2.TGE is set
5
and HRC_EL2.E2H is 1.
6
7
To avoid having to check E2H and TGE everywhere where we test IMO and
8
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo()and
9
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
10
case will never be true, but we include the logic to save effort when
11
we eventually do get to that.
12
13
(Note that in several of these callsites the change doesn't
14
actually make a difference as either the callsite is handling
15
TGE specially anyway, or the CPU can't get into that situation
16
with TGE set; we change everywhere for consistency.)
5
17
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-15-peter.maydell@linaro.org
20
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
9
---
21
---
10
hw/misc/Makefile.objs | 2 +
22
target/arm/cpu.h | 64 +++++++++++++++++++++++++++++++++++----
11
include/hw/misc/tz-ppc.h | 101 ++++++++++++++
23
hw/intc/arm_gicv3_cpuif.c | 19 ++++++------
12
hw/misc/tz-ppc.c | 302 ++++++++++++++++++++++++++++++++++++++++
24
target/arm/helper.c | 6 ++--
13
default-configs/arm-softmmu.mak | 2 +
25
3 files changed, 71 insertions(+), 18 deletions(-)
14
hw/misc/trace-events | 11 ++
26
15
5 files changed, 418 insertions(+)
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
create mode 100644 include/hw/misc/tz-ppc.h
17
create mode 100644 hw/misc/tz-ppc.c
18
19
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
20
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/misc/Makefile.objs
29
--- a/target/arm/cpu.h
22
+++ b/hw/misc/Makefile.objs
30
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_MIPS_ITU) += mips_itu.o
31
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
24
obj-$(CONFIG_MPS2_FPGAIO) += mps2-fpgaio.o
32
#define HCR_RW (1ULL << 31)
25
obj-$(CONFIG_MPS2_SCC) += mps2-scc.o
33
#define HCR_CD (1ULL << 32)
26
34
#define HCR_ID (1ULL << 33)
27
+obj-$(CONFIG_TZ_PPC) += tz-ppc.o
35
+#define HCR_E2H (1ULL << 34)
28
+
29
obj-$(CONFIG_PVPANIC) += pvpanic.o
30
obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
31
obj-$(CONFIG_AUX) += auxbus.o
32
diff --git a/include/hw/misc/tz-ppc.h b/include/hw/misc/tz-ppc.h
33
new file mode 100644
34
index XXXXXXX..XXXXXXX
35
--- /dev/null
36
+++ b/include/hw/misc/tz-ppc.h
37
@@ -XXX,XX +XXX,XX @@
38
+/*
36
+/*
39
+ * ARM TrustZone peripheral protection controller emulation
37
+ * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
40
+ *
38
+ * HCR_MASK and then clear it again if the feature bit is not set in
41
+ * Copyright (c) 2018 Linaro Limited
39
+ * hcr_write().
42
+ * Written by Peter Maydell
40
+ */
43
+ *
41
#define HCR_MASK ((1ULL << 34) - 1)
44
+ * This program is free software; you can redistribute it and/or modify
42
45
+ * it under the terms of the GNU General Public License version 2 or
43
#define SCR_NS (1U << 0)
46
+ * (at your option) any later version.
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu);
47
+ */
45
# define TARGET_VIRT_ADDR_SPACE_BITS 32
48
+
46
#endif
49
+/* This is a model of the TrustZone peripheral protection controller (PPC).
47
50
+ * It is documented in the ARM CoreLink SIE-200 System IP for Embedded TRM
48
+/**
51
+ * (DDI 0571G):
49
+ * arm_hcr_el2_imo(): Return the effective value of HCR_EL2.IMO.
52
+ * https://developer.arm.com/products/architecture/m-profile/docs/ddi0571/g
50
+ * Depending on the values of HCR_EL2.E2H and TGE, this may be
53
+ *
51
+ * "behaves as 1 for all purposes other than direct read/write" or
54
+ * The PPC sits in front of peripherals and allows secure software to
52
+ * "behaves as 0 for all purposes other than direct read/write"
55
+ * configure it to either pass through or reject transactions.
53
+ */
56
+ * Rejected transactions may be configured to either be aborted, or to
54
+static inline bool arm_hcr_el2_imo(CPUARMState *env)
57
+ * behave as RAZ/WI. An interrupt can be signalled for a rejected transaction.
58
+ *
59
+ * The PPC has no register interface -- it is configured purely by a
60
+ * collection of input signals from other hardware in the system. Typically
61
+ * they are either hardwired or exposed in an ad-hoc register interface by
62
+ * the SoC that uses the PPC.
63
+ *
64
+ * This QEMU model can be used to model either the AHB5 or APB4 TZ PPC,
65
+ * since the only difference between them is that the AHB version has a
66
+ * "default" port which has no security checks applied. In QEMU the default
67
+ * port can be emulated simply by wiring its downstream devices directly
68
+ * into the parent address space, since the PPC does not need to intercept
69
+ * transactions there.
70
+ *
71
+ * In the hardware, selection of which downstream port to use is done by
72
+ * the user's decode logic asserting one of the hsel[] signals. In QEMU,
73
+ * we provide 16 MMIO regions, one per port, and the user maps these into
74
+ * the desired addresses to implement the address decode.
75
+ *
76
+ * QEMU interface:
77
+ * + sysbus MMIO regions 0..15: MemoryRegions defining the upstream end
78
+ * of each of the 16 ports of the PPC
79
+ * + Property "port[0..15]": MemoryRegion defining the downstream device(s)
80
+ * for each of the 16 ports of the PPC
81
+ * + Named GPIO inputs "cfg_nonsec[0..15]": set to 1 if the port should be
82
+ * accessible to NonSecure transactions
83
+ * + Named GPIO inputs "cfg_ap[0..15]": set to 1 if the port should be
84
+ * accessible to non-privileged transactions
85
+ * + Named GPIO input "cfg_sec_resp": set to 1 if a rejected transaction should
86
+ * result in a transaction error, or 0 for the transaction to RAZ/WI
87
+ * + Named GPIO input "irq_enable": set to 1 to enable interrupts
88
+ * + Named GPIO input "irq_clear": set to 1 to clear a pending interrupt
89
+ * + Named GPIO output "irq": set for a transaction-failed interrupt
90
+ * + Property "NONSEC_MASK": if a bit is set in this mask then accesses to
91
+ * the associated port do not have the TZ security check performed. (This
92
+ * corresponds to the hardware allowing this to be set as a Verilog
93
+ * parameter.)
94
+ */
95
+
96
+#ifndef TZ_PPC_H
97
+#define TZ_PPC_H
98
+
99
+#include "hw/sysbus.h"
100
+
101
+#define TYPE_TZ_PPC "tz-ppc"
102
+#define TZ_PPC(obj) OBJECT_CHECK(TZPPC, (obj), TYPE_TZ_PPC)
103
+
104
+#define TZ_NUM_PORTS 16
105
+
106
+typedef struct TZPPC TZPPC;
107
+
108
+typedef struct TZPPCPort {
109
+ TZPPC *ppc;
110
+ MemoryRegion upstream;
111
+ AddressSpace downstream_as;
112
+ MemoryRegion *downstream;
113
+} TZPPCPort;
114
+
115
+struct TZPPC {
116
+ /*< private >*/
117
+ SysBusDevice parent_obj;
118
+
119
+ /*< public >*/
120
+
121
+ /* State: these just track the values of our input signals */
122
+ bool cfg_nonsec[TZ_NUM_PORTS];
123
+ bool cfg_ap[TZ_NUM_PORTS];
124
+ bool cfg_sec_resp;
125
+ bool irq_enable;
126
+ bool irq_clear;
127
+ /* State: are we asserting irq ? */
128
+ bool irq_status;
129
+
130
+ qemu_irq irq;
131
+
132
+ /* Properties */
133
+ uint32_t nonsec_mask;
134
+
135
+ TZPPCPort port[TZ_NUM_PORTS];
136
+};
137
+
138
+#endif
139
diff --git a/hw/misc/tz-ppc.c b/hw/misc/tz-ppc.c
140
new file mode 100644
141
index XXXXXXX..XXXXXXX
142
--- /dev/null
143
+++ b/hw/misc/tz-ppc.c
144
@@ -XXX,XX +XXX,XX @@
145
+/*
146
+ * ARM TrustZone peripheral protection controller emulation
147
+ *
148
+ * Copyright (c) 2018 Linaro Limited
149
+ * Written by Peter Maydell
150
+ *
151
+ * This program is free software; you can redistribute it and/or modify
152
+ * it under the terms of the GNU General Public License version 2 or
153
+ * (at your option) any later version.
154
+ */
155
+
156
+#include "qemu/osdep.h"
157
+#include "qemu/log.h"
158
+#include "qapi/error.h"
159
+#include "trace.h"
160
+#include "hw/sysbus.h"
161
+#include "hw/registerfields.h"
162
+#include "hw/misc/tz-ppc.h"
163
+
164
+static void tz_ppc_update_irq(TZPPC *s)
165
+{
55
+{
166
+ bool level = s->irq_status && s->irq_enable;
56
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
167
+
57
+ case HCR_TGE:
168
+ trace_tz_ppc_update_irq(level);
58
+ return true;
169
+ qemu_set_irq(s->irq, level);
59
+ case HCR_TGE | HCR_E2H:
170
+}
60
+ return false;
171
+
61
+ default:
172
+static void tz_ppc_cfg_nonsec(void *opaque, int n, int level)
62
+ return env->cp15.hcr_el2 & HCR_IMO;
173
+{
174
+ TZPPC *s = TZ_PPC(opaque);
175
+
176
+ assert(n < TZ_NUM_PORTS);
177
+ trace_tz_ppc_cfg_nonsec(n, level);
178
+ s->cfg_nonsec[n] = level;
179
+}
180
+
181
+static void tz_ppc_cfg_ap(void *opaque, int n, int level)
182
+{
183
+ TZPPC *s = TZ_PPC(opaque);
184
+
185
+ assert(n < TZ_NUM_PORTS);
186
+ trace_tz_ppc_cfg_ap(n, level);
187
+ s->cfg_ap[n] = level;
188
+}
189
+
190
+static void tz_ppc_cfg_sec_resp(void *opaque, int n, int level)
191
+{
192
+ TZPPC *s = TZ_PPC(opaque);
193
+
194
+ trace_tz_ppc_cfg_sec_resp(level);
195
+ s->cfg_sec_resp = level;
196
+}
197
+
198
+static void tz_ppc_irq_enable(void *opaque, int n, int level)
199
+{
200
+ TZPPC *s = TZ_PPC(opaque);
201
+
202
+ trace_tz_ppc_irq_enable(level);
203
+ s->irq_enable = level;
204
+ tz_ppc_update_irq(s);
205
+}
206
+
207
+static void tz_ppc_irq_clear(void *opaque, int n, int level)
208
+{
209
+ TZPPC *s = TZ_PPC(opaque);
210
+
211
+ trace_tz_ppc_irq_clear(level);
212
+
213
+ s->irq_clear = level;
214
+ if (level) {
215
+ s->irq_status = false;
216
+ tz_ppc_update_irq(s);
217
+ }
63
+ }
218
+}
64
+}
219
+
65
+
220
+static bool tz_ppc_check(TZPPC *s, int n, MemTxAttrs attrs)
66
+/**
67
+ * arm_hcr_el2_fmo(): Return the effective value of HCR_EL2.FMO.
68
+ */
69
+static inline bool arm_hcr_el2_fmo(CPUARMState *env)
221
+{
70
+{
222
+ /* Check whether to allow an access to port n; return true if
71
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
223
+ * the check passes, and false if the transaction must be blocked.
72
+ case HCR_TGE:
224
+ * If the latter, the caller must check cfg_sec_resp to determine
73
+ return true;
225
+ * whether to abort or RAZ/WI the transaction.
74
+ case HCR_TGE | HCR_E2H:
226
+ * The checks are:
227
+ * + nonsec_mask suppresses any check of the secure attribute
228
+ * + otherwise, block if cfg_nonsec is 1 and transaction is secure,
229
+ * or if cfg_nonsec is 0 and transaction is non-secure
230
+ * + block if transaction is usermode and cfg_ap is 0
231
+ */
232
+ if ((attrs.secure == s->cfg_nonsec[n] && !(s->nonsec_mask & (1 << n))) ||
233
+ (attrs.user && !s->cfg_ap[n])) {
234
+ /* Block the transaction. */
235
+ if (!s->irq_clear) {
236
+ /* Note that holding irq_clear high suppresses interrupts */
237
+ s->irq_status = true;
238
+ tz_ppc_update_irq(s);
239
+ }
240
+ return false;
75
+ return false;
241
+ }
242
+ return true;
243
+}
244
+
245
+static MemTxResult tz_ppc_read(void *opaque, hwaddr addr, uint64_t *pdata,
246
+ unsigned size, MemTxAttrs attrs)
247
+{
248
+ TZPPCPort *p = opaque;
249
+ TZPPC *s = p->ppc;
250
+ int n = p - s->port;
251
+ AddressSpace *as = &p->downstream_as;
252
+ uint64_t data;
253
+ MemTxResult res;
254
+
255
+ if (!tz_ppc_check(s, n, attrs)) {
256
+ trace_tz_ppc_read_blocked(n, addr, attrs.secure, attrs.user);
257
+ if (s->cfg_sec_resp) {
258
+ return MEMTX_ERROR;
259
+ } else {
260
+ *pdata = 0;
261
+ return MEMTX_OK;
262
+ }
263
+ }
264
+
265
+ switch (size) {
266
+ case 1:
267
+ data = address_space_ldub(as, addr, attrs, &res);
268
+ break;
269
+ case 2:
270
+ data = address_space_lduw_le(as, addr, attrs, &res);
271
+ break;
272
+ case 4:
273
+ data = address_space_ldl_le(as, addr, attrs, &res);
274
+ break;
275
+ case 8:
276
+ data = address_space_ldq_le(as, addr, attrs, &res);
277
+ break;
278
+ default:
76
+ default:
279
+ g_assert_not_reached();
77
+ return env->cp15.hcr_el2 & HCR_FMO;
280
+ }
281
+ *pdata = data;
282
+ return res;
283
+}
284
+
285
+static MemTxResult tz_ppc_write(void *opaque, hwaddr addr, uint64_t val,
286
+ unsigned size, MemTxAttrs attrs)
287
+{
288
+ TZPPCPort *p = opaque;
289
+ TZPPC *s = p->ppc;
290
+ AddressSpace *as = &p->downstream_as;
291
+ int n = p - s->port;
292
+ MemTxResult res;
293
+
294
+ if (!tz_ppc_check(s, n, attrs)) {
295
+ trace_tz_ppc_write_blocked(n, addr, attrs.secure, attrs.user);
296
+ if (s->cfg_sec_resp) {
297
+ return MEMTX_ERROR;
298
+ } else {
299
+ return MEMTX_OK;
300
+ }
301
+ }
302
+
303
+ switch (size) {
304
+ case 1:
305
+ address_space_stb(as, addr, val, attrs, &res);
306
+ break;
307
+ case 2:
308
+ address_space_stw_le(as, addr, val, attrs, &res);
309
+ break;
310
+ case 4:
311
+ address_space_stl_le(as, addr, val, attrs, &res);
312
+ break;
313
+ case 8:
314
+ address_space_stq_le(as, addr, val, attrs, &res);
315
+ break;
316
+ default:
317
+ g_assert_not_reached();
318
+ }
319
+ return res;
320
+}
321
+
322
+static const MemoryRegionOps tz_ppc_ops = {
323
+ .read_with_attrs = tz_ppc_read,
324
+ .write_with_attrs = tz_ppc_write,
325
+ .endianness = DEVICE_LITTLE_ENDIAN,
326
+};
327
+
328
+static void tz_ppc_reset(DeviceState *dev)
329
+{
330
+ TZPPC *s = TZ_PPC(dev);
331
+
332
+ trace_tz_ppc_reset();
333
+ s->cfg_sec_resp = false;
334
+ memset(s->cfg_nonsec, 0, sizeof(s->cfg_nonsec));
335
+ memset(s->cfg_ap, 0, sizeof(s->cfg_ap));
336
+}
337
+
338
+static void tz_ppc_init(Object *obj)
339
+{
340
+ DeviceState *dev = DEVICE(obj);
341
+ TZPPC *s = TZ_PPC(obj);
342
+
343
+ qdev_init_gpio_in_named(dev, tz_ppc_cfg_nonsec, "cfg_nonsec", TZ_NUM_PORTS);
344
+ qdev_init_gpio_in_named(dev, tz_ppc_cfg_ap, "cfg_ap", TZ_NUM_PORTS);
345
+ qdev_init_gpio_in_named(dev, tz_ppc_cfg_sec_resp, "cfg_sec_resp", 1);
346
+ qdev_init_gpio_in_named(dev, tz_ppc_irq_enable, "irq_enable", 1);
347
+ qdev_init_gpio_in_named(dev, tz_ppc_irq_clear, "irq_clear", 1);
348
+ qdev_init_gpio_out_named(dev, &s->irq, "irq", 1);
349
+}
350
+
351
+static void tz_ppc_realize(DeviceState *dev, Error **errp)
352
+{
353
+ Object *obj = OBJECT(dev);
354
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
355
+ TZPPC *s = TZ_PPC(dev);
356
+ int i;
357
+
358
+ /* We can't create the upstream end of the port until realize,
359
+ * as we don't know the size of the MR used as the downstream until then.
360
+ */
361
+ for (i = 0; i < TZ_NUM_PORTS; i++) {
362
+ TZPPCPort *port = &s->port[i];
363
+ char *name;
364
+ uint64_t size;
365
+
366
+ if (!port->downstream) {
367
+ continue;
368
+ }
369
+
370
+ name = g_strdup_printf("tz-ppc-port[%d]", i);
371
+
372
+ port->ppc = s;
373
+ address_space_init(&port->downstream_as, port->downstream, name);
374
+
375
+ size = memory_region_size(port->downstream);
376
+ memory_region_init_io(&port->upstream, obj, &tz_ppc_ops,
377
+ port, name, size);
378
+ sysbus_init_mmio(sbd, &port->upstream);
379
+ g_free(name);
380
+ }
78
+ }
381
+}
79
+}
382
+
80
+
383
+static const VMStateDescription tz_ppc_vmstate = {
81
+/**
384
+ .name = "tz-ppc",
82
+ * arm_hcr_el2_amo(): Return the effective value of HCR_EL2.AMO.
385
+ .version_id = 1,
83
+ */
386
+ .minimum_version_id = 1,
84
+static inline bool arm_hcr_el2_amo(CPUARMState *env)
387
+ .fields = (VMStateField[]) {
85
+{
388
+ VMSTATE_BOOL_ARRAY(cfg_nonsec, TZPPC, 16),
86
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
389
+ VMSTATE_BOOL_ARRAY(cfg_ap, TZPPC, 16),
87
+ case HCR_TGE:
390
+ VMSTATE_BOOL(cfg_sec_resp, TZPPC),
88
+ return true;
391
+ VMSTATE_BOOL(irq_enable, TZPPC),
89
+ case HCR_TGE | HCR_E2H:
392
+ VMSTATE_BOOL(irq_clear, TZPPC),
90
+ return false;
393
+ VMSTATE_BOOL(irq_status, TZPPC),
91
+ default:
394
+ VMSTATE_END_OF_LIST()
92
+ return env->cp15.hcr_el2 & HCR_AMO;
395
+ }
93
+ }
396
+};
397
+
398
+#define DEFINE_PORT(N) \
399
+ DEFINE_PROP_LINK("port[" #N "]", TZPPC, port[N].downstream, \
400
+ TYPE_MEMORY_REGION, MemoryRegion *)
401
+
402
+static Property tz_ppc_properties[] = {
403
+ DEFINE_PROP_UINT32("NONSEC_MASK", TZPPC, nonsec_mask, 0),
404
+ DEFINE_PORT(0),
405
+ DEFINE_PORT(1),
406
+ DEFINE_PORT(2),
407
+ DEFINE_PORT(3),
408
+ DEFINE_PORT(4),
409
+ DEFINE_PORT(5),
410
+ DEFINE_PORT(6),
411
+ DEFINE_PORT(7),
412
+ DEFINE_PORT(8),
413
+ DEFINE_PORT(9),
414
+ DEFINE_PORT(10),
415
+ DEFINE_PORT(11),
416
+ DEFINE_PORT(12),
417
+ DEFINE_PORT(13),
418
+ DEFINE_PORT(14),
419
+ DEFINE_PORT(15),
420
+ DEFINE_PROP_END_OF_LIST(),
421
+};
422
+
423
+static void tz_ppc_class_init(ObjectClass *klass, void *data)
424
+{
425
+ DeviceClass *dc = DEVICE_CLASS(klass);
426
+
427
+ dc->realize = tz_ppc_realize;
428
+ dc->vmsd = &tz_ppc_vmstate;
429
+ dc->reset = tz_ppc_reset;
430
+ dc->props = tz_ppc_properties;
431
+}
94
+}
432
+
95
+
433
+static const TypeInfo tz_ppc_info = {
96
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
434
+ .name = TYPE_TZ_PPC,
97
unsigned int target_el)
435
+ .parent = TYPE_SYS_BUS_DEVICE,
98
{
436
+ .instance_size = sizeof(TZPPC),
99
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
437
+ .instance_init = tz_ppc_init,
100
break;
438
+ .class_init = tz_ppc_class_init,
101
439
+};
102
case EXCP_VFIQ:
440
+
103
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
441
+static void tz_ppc_register_types(void)
104
- || (env->cp15.hcr_el2 & HCR_TGE)) {
442
+{
105
+ if (secure || !arm_hcr_el2_fmo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
443
+ type_register_static(&tz_ppc_info);
106
/* VFIQs are only taken when hypervized and non-secure. */
444
+}
107
return false;
445
+
108
}
446
+type_init(tz_ppc_register_types);
109
return !(env->daif & PSTATE_F);
447
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
110
case EXCP_VIRQ:
111
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
112
- || (env->cp15.hcr_el2 & HCR_TGE)) {
113
+ if (secure || !arm_hcr_el2_imo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
114
/* VIRQs are only taken when hypervized and non-secure. */
115
return false;
116
}
117
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
118
* to the CPSR.F setting otherwise we further assess the state
119
* below.
120
*/
121
- hcr = (env->cp15.hcr_el2 & HCR_FMO);
122
+ hcr = arm_hcr_el2_fmo(env);
123
scr = (env->cp15.scr_el3 & SCR_FIQ);
124
125
/* When EL3 is 32-bit, the SCR.FW bit controls whether the
126
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
127
* when setting the target EL, so it does not have a further
128
* effect here.
129
*/
130
- hcr = (env->cp15.hcr_el2 & HCR_IMO);
131
+ hcr = arm_hcr_el2_imo(env);
132
scr = false;
133
break;
134
default:
135
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
448
index XXXXXXX..XXXXXXX 100644
136
index XXXXXXX..XXXXXXX 100644
449
--- a/default-configs/arm-softmmu.mak
137
--- a/hw/intc/arm_gicv3_cpuif.c
450
+++ b/default-configs/arm-softmmu.mak
138
+++ b/hw/intc/arm_gicv3_cpuif.c
451
@@ -XXX,XX +XXX,XX @@ CONFIG_CMSDK_APB_UART=y
139
@@ -XXX,XX +XXX,XX @@ static bool icv_access(CPUARMState *env, int hcr_flags)
452
CONFIG_MPS2_FPGAIO=y
140
* * access if NS EL1 and either IMO or FMO == 1:
453
CONFIG_MPS2_SCC=y
141
* CTLR, DIR, PMR, RPR
454
142
*/
455
+CONFIG_TZ_PPC=y
143
- return (env->cp15.hcr_el2 & hcr_flags) && arm_current_el(env) == 1
456
+
144
+ bool flagmatch = ((hcr_flags & HCR_IMO) && arm_hcr_el2_imo(env)) ||
457
CONFIG_VERSATILE_PCI=y
145
+ ((hcr_flags & HCR_FMO) && arm_hcr_el2_fmo(env));
458
CONFIG_VERSATILE_I2C=y
146
+
459
147
+ return flagmatch && arm_current_el(env) == 1
460
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
148
&& !arm_is_secure_below_el3(env);
149
}
150
151
@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
152
/* No need to include !IsSecure in route_*_to_el2 as it's only
153
* tested in cases where we know !IsSecure is true.
154
*/
155
- route_fiq_to_el2 = env->cp15.hcr_el2 & HCR_FMO;
156
- route_irq_to_el2 = env->cp15.hcr_el2 & HCR_IMO;
157
+ route_fiq_to_el2 = arm_hcr_el2_fmo(env);
158
+ route_irq_to_el2 = arm_hcr_el2_imo(env);
159
160
switch (arm_current_el(env)) {
161
case 3:
162
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irqfiq_access(CPUARMState *env,
163
switch (el) {
164
case 1:
165
if (arm_is_secure_below_el3(env) ||
166
- ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) == 0)) {
167
+ (arm_hcr_el2_imo(env) == 0 && arm_hcr_el2_fmo(env) == 0)) {
168
r = CP_ACCESS_TRAP_EL3;
169
}
170
break;
171
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_dir_access(CPUARMState *env,
172
static CPAccessResult gicv3_sgi_access(CPUARMState *env,
173
const ARMCPRegInfo *ri, bool isread)
174
{
175
- if ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) &&
176
+ if ((arm_hcr_el2_imo(env) || arm_hcr_el2_fmo(env)) &&
177
arm_current_el(env) == 1 && !arm_is_secure_below_el3(env)) {
178
/* Takes priority over a possible EL3 trap */
179
return CP_ACCESS_TRAP_EL2;
180
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_fiq_access(CPUARMState *env,
181
if (env->cp15.scr_el3 & SCR_FIQ) {
182
switch (el) {
183
case 1:
184
- if (arm_is_secure_below_el3(env) ||
185
- ((env->cp15.hcr_el2 & HCR_FMO) == 0)) {
186
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_fmo(env)) {
187
r = CP_ACCESS_TRAP_EL3;
188
}
189
break;
190
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irq_access(CPUARMState *env,
191
if (env->cp15.scr_el3 & SCR_IRQ) {
192
switch (el) {
193
case 1:
194
- if (arm_is_secure_below_el3(env) ||
195
- ((env->cp15.hcr_el2 & HCR_IMO) == 0)) {
196
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_imo(env)) {
197
r = CP_ACCESS_TRAP_EL3;
198
}
199
break;
200
diff --git a/target/arm/helper.c b/target/arm/helper.c
461
index XXXXXXX..XXXXXXX 100644
201
index XXXXXXX..XXXXXXX 100644
462
--- a/hw/misc/trace-events
202
--- a/target/arm/helper.c
463
+++ b/hw/misc/trace-events
203
+++ b/target/arm/helper.c
464
@@ -XXX,XX +XXX,XX @@ mos6522_get_next_irq_time(uint16_t latch, int64_t d, int64_t delta) "latch=%d co
204
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
465
mos6522_set_sr_int(void) "set sr_int"
205
switch (excp_idx) {
466
mos6522_write(uint64_t addr, uint64_t val) "reg=0x%"PRIx64 " val=0x%"PRIx64
206
case EXCP_IRQ:
467
mos6522_read(uint64_t addr, unsigned val) "reg=0x%"PRIx64 " val=0x%x"
207
scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
468
+
208
- hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
469
+# hw/misc/tz-ppc.c
209
+ hcr = arm_hcr_el2_imo(env);
470
+tz_ppc_reset(void) "TZ PPC: reset"
210
break;
471
+tz_ppc_cfg_nonsec(int n, int level) "TZ PPC: cfg_nonsec[%d] = %d"
211
case EXCP_FIQ:
472
+tz_ppc_cfg_ap(int n, int level) "TZ PPC: cfg_ap[%d] = %d"
212
scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
473
+tz_ppc_cfg_sec_resp(int level) "TZ PPC: cfg_sec_resp = %d"
213
- hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
474
+tz_ppc_irq_enable(int level) "TZ PPC: int_enable = %d"
214
+ hcr = arm_hcr_el2_fmo(env);
475
+tz_ppc_irq_clear(int level) "TZ PPC: int_clear = %d"
215
break;
476
+tz_ppc_update_irq(int level) "TZ PPC: setting irq line to %d"
216
default:
477
+tz_ppc_read_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " read (secure %d user %d) blocked"
217
scr = ((env->cp15.scr_el3 & SCR_EA) == SCR_EA);
478
+tz_ppc_write_blocked(int n, hwaddr offset, bool secure, bool user) "TZ PPC: port %d offset 0x%" HWADDR_PRIx " write (secure %d user %d) blocked"
218
- hcr = ((env->cp15.hcr_el2 & HCR_AMO) == HCR_AMO);
219
+ hcr = arm_hcr_el2_amo(env);
220
break;
221
};
222
479
--
223
--
480
2.16.2
224
2.18.0
481
225
482
226
diff view generated by jsdifflib
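To illustrate how the TZ PPC described in the tz-ppc.h comment above is
meant to be wired up (an editorial sketch only -- the peripheral, address
and variable names are invented, this is not code from the series): the
board hangs the peripheral's MMIO region off a "port[N]" link property,
realizes the PPC, and then maps the corresponding upstream sysbus MMIO
region at the guest-visible address.

    /* Hypothetical board code: route one UART through PPC port 0.
     * Assumes s->ppc is a TZPPC embedded in the board state which has
     * already been object_initialize()d and parented onto the default
     * sysbus with qdev_set_parent_bus().
     */
    DeviceState *ppcdev = DEVICE(&s->ppc);
    MemoryRegion *uart_mr = sysbus_mmio_get_region(uart_sbd, 0);

    /* Downstream end of port 0: the peripheral's own MMIO region */
    object_property_set_link(OBJECT(&s->ppc), OBJECT(uart_mr),
                             "port[0]", &error_fatal);
    qdev_init_nofail(ppcdev);

    /* Upstream end of port 0: map it where the guest sees the UART */
    memory_region_add_subregion(system_memory, 0x40004000,
                                sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->ppc), 0));

    /* The cfg_nonsec/cfg_ap/cfg_sec_resp inputs are then driven by
     * whatever models the board's security configuration logic.
     */
    qemu_irq ns_cfg = qdev_get_gpio_in_named(ppcdev, "cfg_nonsec", 0);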
1
Create an "idau" property on the armv7m container object which
1
One of the required effects of setting HCR_EL2.TGE is that when
2
we can forward to the CPU object. Annoyingly, we can't use
2
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
3
object_property_add_alias() because the CPU object we want to
3
all purposes except direct reads. That is, it effectively disables
4
forward to doesn't exist until the armv7m container is realized.
4
the MMU for the NS EL0/EL1 translation regime.
5
5
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180220180325.29818-6-peter.maydell@linaro.org
8
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
9
---
9
---
10
include/hw/arm/armv7m.h | 3 +++
10
target/arm/helper.c | 8 ++++++++
11
hw/arm/armv7m.c | 9 +++++++++
11
1 file changed, 8 insertions(+)
12
2 files changed, 12 insertions(+)
13
12
14
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/arm/armv7m.h
15
--- a/target/arm/helper.c
17
+++ b/include/hw/arm/armv7m.h
16
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
19
18
if (mmu_idx == ARMMMUIdx_S2NS) {
20
#include "hw/sysbus.h"
19
return (env->cp15.hcr_el2 & HCR_VM) == 0;
21
#include "hw/intc/armv7m_nvic.h"
20
}
22
+#include "target/arm/idau.h"
21
+
23
22
+ if (env->cp15.hcr_el2 & HCR_TGE) {
24
#define TYPE_BITBAND "ARM,bitband-memory"
23
+ /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
25
#define BITBAND(obj) OBJECT_CHECK(BitBandState, (obj), TYPE_BITBAND)
24
+ if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
26
@@ -XXX,XX +XXX,XX @@ typedef struct {
25
+ return true;
27
* + Property "memory": MemoryRegion defining the physical address space
28
* that CPU accesses see. (The NVIC, bitbanding and other CPU-internal
29
* devices will be automatically layered on top of this view.)
30
+ * + Property "idau": IDAU interface (forwarded to CPU object)
31
*/
32
typedef struct ARMv7MState {
33
/*< private >*/
34
@@ -XXX,XX +XXX,XX @@ typedef struct ARMv7MState {
35
char *cpu_type;
36
/* MemoryRegion the board provides to us (with its devices, RAM, etc) */
37
MemoryRegion *board_memory;
38
+ Object *idau;
39
} ARMv7MState;
40
41
#endif
42
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/arm/armv7m.c
45
+++ b/hw/arm/armv7m.c
46
@@ -XXX,XX +XXX,XX @@
47
#include "sysemu/qtest.h"
48
#include "qemu/error-report.h"
49
#include "exec/address-spaces.h"
50
+#include "target/arm/idau.h"
51
52
/* Bitbanded IO. Each word corresponds to a single bit. */
53
54
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
55
56
object_property_set_link(OBJECT(s->cpu), OBJECT(&s->container), "memory",
57
&error_abort);
58
+ if (object_property_find(OBJECT(s->cpu), "idau", NULL)) {
59
+ object_property_set_link(OBJECT(s->cpu), s->idau, "idau", &err);
60
+ if (err != NULL) {
61
+ error_propagate(errp, err);
62
+ return;
63
+ }
26
+ }
64
+ }
27
+ }
65
object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
28
+
66
if (err != NULL) {
29
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
67
error_propagate(errp, err);
30
}
68
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
69
DEFINE_PROP_STRING("cpu-type", ARMv7MState, cpu_type),
70
DEFINE_PROP_LINK("memory", ARMv7MState, board_memory, TYPE_MEMORY_REGION,
71
MemoryRegion *),
72
+ DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
73
DEFINE_PROP_END_OF_LIST(),
74
};
75
31
76
--
32
--
77
2.16.2
33
2.18.0
78
34
79
35
diff view generated by jsdifflib
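For context on how the new "idau" link property is intended to be used
(editorial sketch, not part of the series): an SoC object which itself
implements TYPE_IDAU_INTERFACE would pass itself to the armv7m container
before realizing it, along these lines:

    /* Hypothetical SoC code; s->armv7m is the ARMv7MState container and
     * the SoC object implements the IDAU interface.
     */
    object_property_set_link(OBJECT(&s->armv7m), OBJECT(s), "idau",
                             &error_abort);
    object_property_set_bool(OBJECT(&s->armv7m), true, "realized",
                             &error_fatal);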
1
In v8M, the Implementation Defined Attribution Unit (IDAU) is
1
Improve the exception-taken logging by logging in
2
a small piece of hardware typically implemented in the SoC
2
v7m_exception_taken() the exception we're going to take
3
which provides board or SoC specific security attribution
3
and whether it is secure/nonsecure.
4
information for each address that the CPU performs MPU/SAU
5
checks on. For QEMU, we model this with a QOM interface which
6
is implemented by the board or SoC object and connected to
7
the CPU using a link property.
8
4
9
This commit defines the new interface class, adds the link
5
This requires us to move logging at many callsites from after the
10
property to the CPU object, and makes the SAU checking
6
call to before it, so that the logging appears in a sensible order.
11
code call the IDAU interface if one is present.
7
8
(This will make tail-chaining produce more useful logs; for the
9
current callers of v7m_exception_taken() we know which exception
10
we're going to take, so custom log messages at the callsite sufficed;
11
for tail-chaining only v7m_exception_taken() knows the exception
12
number that we're going to tail-chain to.)
12
13
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20180220180325.29818-5-peter.maydell@linaro.org
16
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
17
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
16
---
18
---
17
target/arm/cpu.h | 3 +++
19
target/arm/helper.c | 17 +++++++++++------
18
target/arm/idau.h | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++
20
1 file changed, 11 insertions(+), 6 deletions(-)
19
target/arm/cpu.c | 15 +++++++++++++
20
target/arm/helper.c | 28 +++++++++++++++++++++---
21
4 files changed, 104 insertions(+), 3 deletions(-)
22
create mode 100644 target/arm/idau.h
23
21
24
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/cpu.h
27
+++ b/target/arm/cpu.h
28
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
29
/* MemoryRegion to use for secure physical accesses */
30
MemoryRegion *secure_memory;
31
32
+ /* For v8M, pointer to the IDAU interface provided by board/SoC */
33
+ Object *idau;
34
+
35
/* 'compatible' string for this CPU for Linux device trees */
36
const char *dtb_compatible;
37
38
diff --git a/target/arm/idau.h b/target/arm/idau.h
39
new file mode 100644
40
index XXXXXXX..XXXXXXX
41
--- /dev/null
42
+++ b/target/arm/idau.h
43
@@ -XXX,XX +XXX,XX @@
44
+/*
45
+ * QEMU ARM CPU -- interface for the Arm v8M IDAU
46
+ *
47
+ * Copyright (c) 2018 Linaro Ltd
48
+ *
49
+ * This program is free software; you can redistribute it and/or
50
+ * modify it under the terms of the GNU General Public License
51
+ * as published by the Free Software Foundation; either version 2
52
+ * of the License, or (at your option) any later version.
53
+ *
54
+ * This program is distributed in the hope that it will be useful,
55
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
56
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
57
+ * GNU General Public License for more details.
58
+ *
59
+ * You should have received a copy of the GNU General Public License
60
+ * along with this program; if not, see
61
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
62
+ *
63
+ * In the v8M architecture, the IDAU is a small piece of hardware
64
+ * typically implemented in the SoC which provides board or SoC
65
+ * specific security attribution information for each address that
66
+ * the CPU performs MPU/SAU checks on. For QEMU, we model this with a
67
+ * QOM interface which is implemented by the board or SoC object and
68
+ * connected to the CPU using a link property.
69
+ */
70
+
71
+#ifndef TARGET_ARM_IDAU_H
72
+#define TARGET_ARM_IDAU_H
73
+
74
+#include "qom/object.h"
75
+
76
+#define TYPE_IDAU_INTERFACE "idau-interface"
77
+#define IDAU_INTERFACE(obj) \
78
+ INTERFACE_CHECK(IDAUInterface, (obj), TYPE_IDAU_INTERFACE)
79
+#define IDAU_INTERFACE_CLASS(class) \
80
+ OBJECT_CLASS_CHECK(IDAUInterfaceClass, (class), TYPE_IDAU_INTERFACE)
81
+#define IDAU_INTERFACE_GET_CLASS(obj) \
82
+ OBJECT_GET_CLASS(IDAUInterfaceClass, (obj), TYPE_IDAU_INTERFACE)
83
+
84
+typedef struct IDAUInterface {
85
+ Object parent;
86
+} IDAUInterface;
87
+
88
+#define IREGION_NOTVALID -1
89
+
90
+typedef struct IDAUInterfaceClass {
91
+ InterfaceClass parent;
92
+
93
+ /* Check the specified address and return the IDAU security information
94
+ * for it by filling in iregion, exempt, ns and nsc:
95
+ * iregion: IDAU region number, or IREGION_NOTVALID if not valid
96
+ * exempt: true if address is exempt from security attribution
97
+ * ns: true if the address is NonSecure
98
+ * nsc: true if the address is NonSecure-callable
99
+ */
100
+ void (*check)(IDAUInterface *ii, uint32_t address, int *iregion,
101
+ bool *exempt, bool *ns, bool *nsc);
102
+} IDAUInterfaceClass;
103
+
104
+#endif
105
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/target/arm/cpu.c
108
+++ b/target/arm/cpu.c
109
@@ -XXX,XX +XXX,XX @@
110
*/
111
112
#include "qemu/osdep.h"
113
+#include "target/arm/idau.h"
114
#include "qemu/error-report.h"
115
#include "qapi/error.h"
116
#include "cpu.h"
117
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
118
}
119
}
120
121
+ if (arm_feature(&cpu->env, ARM_FEATURE_M_SECURITY)) {
122
+ object_property_add_link(obj, "idau", TYPE_IDAU_INTERFACE, &cpu->idau,
123
+ qdev_prop_allow_set_link_before_realize,
124
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
125
+ &error_abort);
126
+ }
127
+
128
qdev_property_add_static(DEVICE(obj), &arm_cpu_cfgend_property,
129
&error_abort);
130
}
131
@@ -XXX,XX +XXX,XX @@ static const TypeInfo arm_cpu_type_info = {
132
.class_init = arm_cpu_class_init,
133
};
134
135
+static const TypeInfo idau_interface_type_info = {
136
+ .name = TYPE_IDAU_INTERFACE,
137
+ .parent = TYPE_INTERFACE,
138
+ .class_size = sizeof(IDAUInterfaceClass),
139
+};
140
+
141
static void arm_cpu_register_types(void)
142
{
143
const ARMCPUInfo *info = arm_cpus;
144
145
type_register_static(&arm_cpu_type_info);
146
+ type_register_static(&idau_interface_type_info);
147
148
while (info->name) {
149
cpu_register(info);
150
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
151
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
152
--- a/target/arm/helper.c
24
--- a/target/arm/helper.c
153
+++ b/target/arm/helper.c
25
+++ b/target/arm/helper.c
154
@@ -XXX,XX +XXX,XX @@
26
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
155
#include "qemu/osdep.h"
27
bool push_failed = false;
156
+#include "target/arm/idau.h"
28
157
#include "trace.h"
29
armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
158
#include "cpu.h"
30
+ qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
159
#include "internals.h"
31
+ targets_secure ? "secure" : "nonsecure", exc);
160
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
32
161
*/
33
if (arm_feature(env, ARM_FEATURE_V8)) {
162
ARMCPU *cpu = arm_env_get_cpu(env);
34
if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
163
int r;
35
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
164
+ bool idau_exempt = false, idau_ns = true, idau_nsc = true;
36
* we might now want to take a different exception which
165
+ int idau_region = IREGION_NOTVALID;
37
* targets a different security state, so try again from the top.
166
38
*/
167
- /* TODO: implement IDAU */
39
+ qemu_log_mask(CPU_LOG_INT,
168
+ if (cpu->idau) {
40
+ "...derived exception on callee-saves register stacking");
169
+ IDAUInterfaceClass *iic = IDAU_INTERFACE_GET_CLASS(cpu->idau);
41
v7m_exception_taken(cpu, lr, true, true);
170
+ IDAUInterface *ii = IDAU_INTERFACE(cpu->idau);
171
+
172
+ iic->check(ii, address, &idau_region, &idau_exempt, &idau_ns,
173
+ &idau_nsc);
174
+ }
175
176
if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) {
177
/* 0xf0000000..0xffffffff is always S for insn fetches */
178
return;
42
return;
179
}
43
}
180
44
181
- if (v8m_is_sau_exempt(env, address, access_type)) {
45
if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
182
+ if (idau_exempt || v8m_is_sau_exempt(env, address, access_type)) {
46
/* Vector load failed: derived exception */
183
sattrs->ns = !regime_is_secure(env, mmu_idx);
47
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
48
v7m_exception_taken(cpu, lr, true, true);
184
return;
49
return;
185
}
50
}
186
51
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
187
+ if (idau_region != IREGION_NOTVALID) {
52
if (sfault) {
188
+ sattrs->irvalid = true;
53
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
189
+ sattrs->iregion = idau_region;
54
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
190
+ }
55
- v7m_exception_taken(cpu, excret, true, false);
191
+
56
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
192
switch (env->sau.ctrl & 3) {
57
"stackframe: failed EXC_RETURN.ES validity check\n");
193
case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */
58
+ v7m_exception_taken(cpu, excret, true, false);
194
break;
59
return;
195
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
60
}
61
62
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
63
*/
64
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
65
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
66
- v7m_exception_taken(cpu, excret, true, false);
67
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
68
"stackframe: failed exception return integrity check\n");
69
+ v7m_exception_taken(cpu, excret, true, false);
70
return;
71
}
72
73
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
74
/* Take a SecureFault on the current stack */
75
env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
76
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
77
- v7m_exception_taken(cpu, excret, true, false);
78
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
79
"stackframe: failed exception return integrity "
80
"signature check\n");
81
+ v7m_exception_taken(cpu, excret, true, false);
82
return;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
86
/* v7m_stack_read() pended a fault, so take it (as a tail
87
* chained exception on the same stack frame)
88
*/
89
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
90
v7m_exception_taken(cpu, excret, true, false);
91
return;
92
}
93
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
94
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
95
env->v7m.secure);
96
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
97
- v7m_exception_taken(cpu, excret, true, false);
98
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
99
"stackframe: failed exception return integrity "
100
"check\n");
101
+ v7m_exception_taken(cpu, excret, true, false);
102
return;
196
}
103
}
197
}
104
}
198
105
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
199
- /* TODO when we support the IDAU then it may override the result here */
106
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
200
+ /* The IDAU will override the SAU lookup results if it specifies
107
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
201
+ * higher security than the SAU does.
108
ignore_stackfaults = v7m_push_stack(cpu);
202
+ */
109
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
203
+ if (!idau_ns) {
110
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
204
+ if (sattrs->ns || (!idau_nsc && sattrs->nsc)) {
111
"failed exception return integrity check\n");
205
+ sattrs->ns = false;
112
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
206
+ sattrs->nsc = idau_nsc;
113
return;
207
+ }
208
+ }
209
break;
210
}
114
}
115
116
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
117
118
ignore_stackfaults = v7m_push_stack(cpu);
119
v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
120
- qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
211
}
121
}
122
123
/* Function used to synchronize QEMU's AArch64 register set with AArch32
212
--
124
--
213
2.16.2
125
2.18.0
214
126
215
127
diff view generated by jsdifflib
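As a sketch of what the board/SoC side of this interface can look like
(entirely hypothetical -- the address rule below is invented for the
example): the implementing type lists TYPE_IDAU_INTERFACE in its
.interfaces array and fills in the check method from its class_init.
Note that, as the helper.c change above implements, the IDAU result can
only make an address more secure than the SAU says, never less.

    #include "qemu/osdep.h"
    #include "target/arm/idau.h"

    /* Toy rule: insist that the bottom 256MB is Secure; report
     * everything else as NonSecure, which leaves the SAU result
     * standing for those addresses.
     */
    static void example_idau_check(IDAUInterface *ii, uint32_t address,
                                   int *iregion, bool *exempt, bool *ns,
                                   bool *nsc)
    {
        *iregion = IREGION_NOTVALID;   /* no IDAU region number */
        *exempt = false;
        *ns = (address >= 0x10000000);
        *nsc = false;
    }

    static void example_soc_class_init(ObjectClass *oc, void *data)
    {
        IDAUInterfaceClass *iic = IDAU_INTERFACE_CLASS(oc);

        iic->check = example_idau_check;
    }

    /* ...and the SoC's TypeInfo would include:
     *   .interfaces = (InterfaceInfo[]) { { TYPE_IDAU_INTERFACE }, { } },
     */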
1
Instead of loading guest images to the system address space, use the
1
In do_v7m_exception_exit(), we use the exc_secure variable to track
2
CPU's address space. This is important if we're trying to load the
2
whether the exception we're returning from is secure or non-secure.
3
file to memory or via an alias memory region that is provided by an
3
Unfortunately the statement initializing this was accidentally
4
SoC object and thus not mapped into the system address space.
4
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
5
which meant that we were using the wrong value for NMI handlers.
6
Move the initialization out to the right place.
5
7
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
9
Message-id: 20180220180325.29818-4-peter.maydell@linaro.org
10
---
12
---
11
hw/arm/armv7m.c | 17 ++++++++++++++---
13
target/arm/helper.c | 2 +-
12
1 file changed, 14 insertions(+), 3 deletions(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
13
15
14
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/armv7m.c
18
--- a/target/arm/helper.c
17
+++ b/hw/arm/armv7m.c
19
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
20
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
19
uint64_t entry;
21
/* For all other purposes, treat ES as 0 (R_HXSR) */
20
uint64_t lowaddr;
22
excret &= ~R_V7M_EXCRET_ES_MASK;
21
int big_endian;
23
}
22
+ AddressSpace *as;
24
+ exc_secure = excret & R_V7M_EXCRET_ES_MASK;
23
+ int asidx;
24
+ CPUState *cs = CPU(cpu);
25
26
#ifdef TARGET_WORDS_BIGENDIAN
27
big_endian = 1;
28
@@ -XXX,XX +XXX,XX @@ void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
29
exit(1);
30
}
25
}
31
26
32
+ if (arm_feature(&cpu->env, ARM_FEATURE_EL3)) {
27
if (env->v7m.exception != ARMV7M_EXCP_NMI) {
33
+ asidx = ARMASIdx_S;
28
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
34
+ } else {
29
* which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
35
+ asidx = ARMASIdx_NS;
30
*/
36
+ }
31
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
37
+ as = cpu_get_address_space(cs, asidx);
32
- exc_secure = excret & R_V7M_EXCRET_ES_MASK;
38
+
33
if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
39
if (kernel_filename) {
34
env->v7m.faultmask[exc_secure] = 0;
40
- image_size = load_elf(kernel_filename, NULL, NULL, &entry, &lowaddr,
35
}
41
- NULL, big_endian, EM_ARM, 1, 0);
42
+ image_size = load_elf_as(kernel_filename, NULL, NULL, &entry, &lowaddr,
43
+ NULL, big_endian, EM_ARM, 1, 0, as);
44
if (image_size < 0) {
45
- image_size = load_image_targphys(kernel_filename, 0, mem_size);
46
+ image_size = load_image_targphys_as(kernel_filename, 0,
47
+ mem_size, as);
48
lowaddr = 0;
49
}
50
if (image_size < 0) {
51
--
36
--
52
2.16.2
37
2.18.0
53
38
54
39
diff view generated by jsdifflib
1
Create an "init-svtor" property on the armv7m container
1
On exception return for M-profile, we must restore the CONTROL.SPSEL
2
object which we can forward to the CPU object.
2
bit from the EXCRET value before we do any kind of tailchaining,
3
including for the derived exceptions on integrity check failures.
4
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
5
exception entry for the tailchained exception.
3
6
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180220180325.29818-8-peter.maydell@linaro.org
9
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
7
---
10
---
8
include/hw/arm/armv7m.h | 2 ++
11
target/arm/helper.c | 16 ++++++++++------
9
hw/arm/armv7m.c | 9 +++++++++
12
1 file changed, 10 insertions(+), 6 deletions(-)
10
2 files changed, 11 insertions(+)
11
13
12
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/arm/armv7m.h
16
--- a/target/arm/helper.c
15
+++ b/include/hw/arm/armv7m.h
17
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@ typedef struct {
18
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
17
* that CPU accesses see. (The NVIC, bitbanding and other CPU-internal
18
* devices will be automatically layered on top of this view.)
19
* + Property "idau": IDAU interface (forwarded to CPU object)
20
+ * + Property "init-svtor": secure VTOR reset value (forwarded to CPU object)
21
*/
22
typedef struct ARMv7MState {
23
/*< private >*/
24
@@ -XXX,XX +XXX,XX @@ typedef struct ARMv7MState {
25
/* MemoryRegion the board provides to us (with its devices, RAM, etc) */
26
MemoryRegion *board_memory;
27
Object *idau;
28
+ uint32_t init_svtor;
29
} ARMv7MState;
30
31
#endif
32
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/armv7m.c
35
+++ b/hw/arm/armv7m.c
36
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
37
return;
38
}
19
}
39
}
20
}
40
+ if (object_property_find(OBJECT(s->cpu), "init-svtor", NULL)) {
21
41
+ object_property_set_uint(OBJECT(s->cpu), s->init_svtor,
22
+ /*
42
+ "init-svtor", &err);
23
+ * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
43
+ if (err != NULL) {
24
+ * Handler mode (and will be until we write the new XPSR.Interrupt
44
+ error_propagate(errp, err);
25
+ * field) this does not switch around the current stack pointer.
45
+ return;
26
+ * We must do this before we do any kind of tailchaining, including
46
+ }
27
+ * for the derived exceptions on integrity check failures, or we will
47
+ }
28
+ * give the guest an incorrect EXCRET.SPSEL value on exception entry.
48
object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
29
+ */
49
if (err != NULL) {
30
+ write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
50
error_propagate(errp, err);
31
+
51
@@ -XXX,XX +XXX,XX @@ static Property armv7m_properties[] = {
32
if (sfault) {
52
DEFINE_PROP_LINK("memory", ARMv7MState, board_memory, TYPE_MEMORY_REGION,
33
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
53
MemoryRegion *),
34
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
54
DEFINE_PROP_LINK("idau", ARMv7MState, idau, TYPE_IDAU_INTERFACE, Object *),
35
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
55
+ DEFINE_PROP_UINT32("init-svtor", ARMv7MState, init_svtor, 0),
36
return;
56
DEFINE_PROP_END_OF_LIST(),
37
}
57
};
38
58
39
- /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
40
- * Handler mode (and will be until we write the new XPSR.Interrupt
41
- * field) this does not switch around the current stack pointer.
42
- */
43
- write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
44
-
45
switch_v7m_security_state(env, return_to_secure);
46
47
{
59
--
48
--
60
2.16.2
49
2.18.0
61
50
62
51
diff view generated by jsdifflib
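For reference (not part of the series), a board or SoC model would
consume the new init-svtor property by setting it on the armv7m
container before realize; the value here is invented for the example:

    /* Hypothetical SoC code: secure vector table in ROM at 0x10000000 */
    qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-svtor", 0x10000000);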
1
In some board or SoC models it is necessary to split a qemu_irq line
1
Tailchaining is an optimization in handling of exception return
2
so that one input can feed multiple outputs. We currently have
2
for M-profile cores: if we are about to pop the exception stack
3
qemu_irq_split() for this, but that has several deficiencies:
3
for an exception return, but there is a pending exception which
4
* it can only handle splitting a line into two
4
is higher priority than the priority we are returning to, then
5
* it unavoidably leaks memory, so it can't be used
5
instead of unstacking and then immediately taking the exception
6
in a device that can be deleted
6
and stacking registers again, we can chain to the pending
7
exception without unstacking and stacking.
7
8
8
Implement a qdev device that encapsulates splitting of IRQs, with a
9
For v6M and v7M it is IMPDEF whether tailchaining happens for pending
9
configurable number of outputs. (This is in some ways the inverse of
10
exceptions; for v8M this is architecturally required. Implement it
10
the TYPE_OR_IRQ device.)
11
in QEMU for all M-profile cores, since in practice v6M and v7M
12
hardware implementations generally do have it.
13
14
(We were already doing tailchaining for derived exceptions which
15
happened during exception return, like the validity checks and
16
stack access failures; these have always been required to be
17
tailchained for all versions of the architecture.)
11
18
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20180220180325.29818-13-peter.maydell@linaro.org
21
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
15
---
22
---
16
hw/core/Makefile.objs | 1 +
23
target/arm/helper.c | 16 ++++++++++++++++
17
include/hw/core/split-irq.h | 57 +++++++++++++++++++++++++++++
24
1 file changed, 16 insertions(+)
18
include/hw/irq.h | 4 +-
19
hw/core/split-irq.c | 89 +++++++++++++++++++++++++++++++++++++++++++++
20
4 files changed, 150 insertions(+), 1 deletion(-)
21
create mode 100644 include/hw/core/split-irq.h
22
create mode 100644 hw/core/split-irq.c
23
25
24
diff --git a/hw/core/Makefile.objs b/hw/core/Makefile.objs
26
diff --git a/target/arm/helper.c b/target/arm/helper.c
25
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/core/Makefile.objs
28
--- a/target/arm/helper.c
27
+++ b/hw/core/Makefile.objs
29
+++ b/target/arm/helper.c
28
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_FITLOADER) += loader-fit.o
30
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
29
common-obj-$(CONFIG_SOFTMMU) += qdev-properties-system.o
31
return;
30
common-obj-$(CONFIG_SOFTMMU) += register.o
32
}
31
common-obj-$(CONFIG_SOFTMMU) += or-irq.o
33
32
+common-obj-$(CONFIG_SOFTMMU) += split-irq.o
34
+ /*
33
common-obj-$(CONFIG_PLATFORM_BUS) += platform-bus.o
35
+ * Tailchaining: if there is currently a pending exception that
34
36
+ * is high enough priority to preempt execution at the level we're
35
obj-$(CONFIG_SOFTMMU) += generic-loader.o
37
+ * about to return to, then just directly take that exception now,
36
diff --git a/include/hw/core/split-irq.h b/include/hw/core/split-irq.h
38
+ * avoiding an unstack-and-then-stack. Note that now we have
37
new file mode 100644
39
+ * deactivated the previous exception by calling armv7m_nvic_complete_irq()
38
index XXXXXXX..XXXXXXX
40
+ * our current execution priority is already the execution priority we are
39
--- /dev/null
41
+ * returning to -- none of the state we would unstack or set based on
40
+++ b/include/hw/core/split-irq.h
42
+ * the EXCRET value affects it.
41
@@ -XXX,XX +XXX,XX @@
43
+ */
42
+/*
44
+ if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
43
+ * IRQ splitter device.
45
+ qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
44
+ *
46
+ v7m_exception_taken(cpu, excret, true, false);
45
+ * Copyright (c) 2018 Linaro Limited.
46
+ * Written by Peter Maydell
47
+ *
48
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
49
+ * of this software and associated documentation files (the "Software"), to deal
50
+ * in the Software without restriction, including without limitation the rights
51
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
52
+ * copies of the Software, and to permit persons to whom the Software is
53
+ * furnished to do so, subject to the following conditions:
54
+ *
55
+ * The above copyright notice and this permission notice shall be included in
56
+ * all copies or substantial portions of the Software.
57
+ *
58
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
59
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
60
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
61
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
62
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
63
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
64
+ * THE SOFTWARE.
65
+ */
66
+
67
+/* This is a simple device which has one GPIO input line and multiple
68
+ * GPIO output lines. Any change on the input line is forwarded to all
69
+ * of the outputs.
70
+ *
71
+ * QEMU interface:
72
+ * + one unnamed GPIO input: the input line
73
+ * + N unnamed GPIO outputs: the output lines
74
+ * + QOM property "num-lines": sets the number of output lines
75
+ */
76
+#ifndef HW_SPLIT_IRQ_H
77
+#define HW_SPLIT_IRQ_H
78
+
79
+#include "hw/irq.h"
80
+#include "hw/sysbus.h"
81
+#include "qom/object.h"
82
+
83
+#define TYPE_SPLIT_IRQ "split-irq"
84
+
85
+#define MAX_SPLIT_LINES 16
86
+
87
+typedef struct SplitIRQ SplitIRQ;
88
+
89
+#define SPLIT_IRQ(obj) OBJECT_CHECK(SplitIRQ, (obj), TYPE_SPLIT_IRQ)
90
+
91
+struct SplitIRQ {
92
+ DeviceState parent_obj;
93
+
94
+ qemu_irq out_irq[MAX_SPLIT_LINES];
95
+ uint16_t num_lines;
96
+};
97
+
98
+#endif
99
diff --git a/include/hw/irq.h b/include/hw/irq.h
100
index XXXXXXX..XXXXXXX 100644
101
--- a/include/hw/irq.h
102
+++ b/include/hw/irq.h
103
@@ -XXX,XX +XXX,XX @@ void qemu_free_irq(qemu_irq irq);
104
/* Returns a new IRQ with opposite polarity. */
105
qemu_irq qemu_irq_invert(qemu_irq irq);
106
107
-/* Returns a new IRQ which feeds into both the passed IRQs */
108
+/* Returns a new IRQ which feeds into both the passed IRQs.
109
+ * It's probably better to use the TYPE_SPLIT_IRQ device instead.
110
+ */
111
qemu_irq qemu_irq_split(qemu_irq irq1, qemu_irq irq2);
112
113
/* Returns a new IRQ set which connects 1:1 to another IRQ set, which
114
diff --git a/hw/core/split-irq.c b/hw/core/split-irq.c
115
new file mode 100644
116
index XXXXXXX..XXXXXXX
117
--- /dev/null
118
+++ b/hw/core/split-irq.c
119
@@ -XXX,XX +XXX,XX @@
120
+/*
121
+ * IRQ splitter device.
122
+ *
123
+ * Copyright (c) 2018 Linaro Limited.
124
+ * Written by Peter Maydell
125
+ *
126
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
127
+ * of this software and associated documentation files (the "Software"), to deal
128
+ * in the Software without restriction, including without limitation the rights
129
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
130
+ * copies of the Software, and to permit persons to whom the Software is
131
+ * furnished to do so, subject to the following conditions:
132
+ *
133
+ * The above copyright notice and this permission notice shall be included in
134
+ * all copies or substantial portions of the Software.
135
+ *
136
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
137
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
138
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
139
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
140
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
141
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
142
+ * THE SOFTWARE.
143
+ */
144
+
145
+#include "qemu/osdep.h"
146
+#include "hw/core/split-irq.h"
147
+#include "qapi/error.h"
148
+
149
+static void split_irq_handler(void *opaque, int n, int level)
150
+{
151
+ SplitIRQ *s = SPLIT_IRQ(opaque);
152
+ int i;
153
+
154
+ for (i = 0; i < s->num_lines; i++) {
155
+ qemu_set_irq(s->out_irq[i], level);
156
+ }
157
+}
158
+
159
+static void split_irq_init(Object *obj)
160
+{
161
+ qdev_init_gpio_in(DEVICE(obj), split_irq_handler, 1);
162
+}
163
+
164
+static void split_irq_realize(DeviceState *dev, Error **errp)
165
+{
166
+ SplitIRQ *s = SPLIT_IRQ(dev);
167
+
168
+ if (s->num_lines < 1 || s->num_lines >= MAX_SPLIT_LINES) {
169
+ error_setg(errp,
170
+ "IRQ splitter number of lines %d is not between 1 and %d",
171
+ s->num_lines, MAX_SPLIT_LINES);
172
+ return;
47
+ return;
173
+ }
48
+ }
174
+
49
+
175
+ qdev_init_gpio_out(dev, s->out_irq, s->num_lines);
50
switch_v7m_security_state(env, return_to_secure);
176
+}
51
177
+
52
{
178
+static Property split_irq_properties[] = {
179
+ DEFINE_PROP_UINT16("num-lines", SplitIRQ, num_lines, 1),
180
+ DEFINE_PROP_END_OF_LIST(),
181
+};
182
+
183
+static void split_irq_class_init(ObjectClass *klass, void *data)
184
+{
185
+ DeviceClass *dc = DEVICE_CLASS(klass);
186
+
187
+ /* No state to reset or migrate */
188
+ dc->props = split_irq_properties;
189
+ dc->realize = split_irq_realize;
190
+
191
+ /* Reason: Needs to be wired up to work */
192
+ dc->user_creatable = false;
193
+}
194
+
195
+static const TypeInfo split_irq_type_info = {
196
+ .name = TYPE_SPLIT_IRQ,
197
+ .parent = TYPE_DEVICE,
198
+ .instance_size = sizeof(SplitIRQ),
199
+ .instance_init = split_irq_init,
200
+ .class_init = split_irq_class_init,
201
+};
202
+
203
+static void split_irq_register_types(void)
204
+{
205
+ type_register_static(&split_irq_type_info);
206
+}
207
+
208
+type_init(split_irq_register_types)
209
--
53
--
210
2.16.2
54
2.18.0
211
55
212
56
diff view generated by jsdifflib
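A usage sketch for the new splitter (device names and IRQ numbers are
invented; this is not code from the series): to fan one interrupt line
out to two consumers, a board creates a TYPE_SPLIT_IRQ with "num-lines"
set to 2, connects its outputs, and then feeds the source device's
output into the splitter's single input:

    DeviceState *splitter = qdev_create(NULL, TYPE_SPLIT_IRQ);

    qdev_prop_set_uint16(splitter, "num-lines", 2);
    qdev_init_nofail(splitter);

    /* Outputs: one line to each of the two (hypothetical) consumers */
    qdev_connect_gpio_out(splitter, 0, qdev_get_gpio_in(dev_a, 3));
    qdev_connect_gpio_out(splitter, 1, qdev_get_gpio_in(dev_b, 7));

    /* Input: the source's IRQ output now drives both consumers */
    sysbus_connect_irq(SYS_BUS_DEVICE(src), 0, qdev_get_gpio_in(splitter, 0));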
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
3
The normal vector element is sign-extended before
4
comparing with the wide vector element.
5
6
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20180228193125.20577-9-richard.henderson@linaro.org
8
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
9
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Tested-by: Alex Bennée <alex.bennee@linaro.org>
11
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
12
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
14
---
8
target/arm/translate.c | 46 ++++++++++++++++++++++++++++++++++++++++++----
15
target/arm/sve_helper.c | 12 ++++++------
9
1 file changed, 42 insertions(+), 4 deletions(-)
16
1 file changed, 6 insertions(+), 6 deletions(-)
10
17
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
12
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
20
--- a/target/arm/sve_helper.c
14
+++ b/target/arm/translate.c
21
+++ b/target/arm/sve_helper.c
15
@@ -XXX,XX +XXX,XX @@ static const char *regnames[] =
22
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
16
{ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
23
#define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
17
"r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
24
DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
18
25
19
+/* Function prototypes for gen_ functions calling Neon helpers. */
26
-DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
20
+typedef void NeonGenThreeOpEnvFn(TCGv_i32, TCGv_env, TCGv_i32,
27
-DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
21
+ TCGv_i32, TCGv_i32);
28
-DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
22
+
29
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, int8_t, uint64_t, ==)
23
/* initialize TCG globals. */
30
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, int16_t, uint64_t, ==)
24
void arm_translate_init(void)
31
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, int32_t, uint64_t, ==)
25
{
32
26
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
33
-DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
27
}
34
-DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
28
neon_store_reg64(cpu_V0, rd + pass);
35
-DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
29
}
36
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, int8_t, uint64_t, !=)
30
-
37
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, int16_t, uint64_t, !=)
31
-
38
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, int32_t, uint64_t, !=)
32
break;
39
33
- default: /* 14 and 15 are RESERVED */
40
DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
34
- return 1;
41
DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
35
+ case 14: /* VQRDMLAH scalar */
36
+ case 15: /* VQRDMLSH scalar */
37
+ {
38
+ NeonGenThreeOpEnvFn *fn;
39
+
40
+ if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
41
+ return 1;
42
+ }
43
+ if (u && ((rd | rn) & 1)) {
44
+ return 1;
45
+ }
46
+ if (op == 14) {
47
+ if (size == 1) {
48
+ fn = gen_helper_neon_qrdmlah_s16;
49
+ } else {
50
+ fn = gen_helper_neon_qrdmlah_s32;
51
+ }
52
+ } else {
53
+ if (size == 1) {
54
+ fn = gen_helper_neon_qrdmlsh_s16;
55
+ } else {
56
+ fn = gen_helper_neon_qrdmlsh_s32;
57
+ }
58
+ }
59
+
60
+ tmp2 = neon_get_scalar(size, rm);
61
+ for (pass = 0; pass < (u ? 4 : 2); pass++) {
62
+ tmp = neon_load_reg(rn, pass);
63
+ tmp3 = neon_load_reg(rd, pass);
64
+ fn(tmp, cpu_env, tmp, tmp2, tmp3);
65
+ tcg_temp_free_i32(tmp3);
66
+ neon_store_reg(rd, pass, tmp);
67
+ }
68
+ tcg_temp_free_i32(tmp2);
69
+ }
70
+ break;
71
+ default:
72
+ g_assert_not_reached();
73
}
74
}
75
} else { /* size == 3 */
76
--
42
--
77
2.16.2
43
2.18.0
78
44
79
45
diff view generated by jsdifflib
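To make the sign-extension point in the DO_CMP_PPZW fix above concrete
(a standalone illustration, not part of the patch): with the narrow
element type left unsigned, a narrow element holding -1 is zero-extended
by C's usual conversions and never compares equal to a wide element
holding -1; making the narrow type signed sign-extends it first, which
is what the commit message describes as the required behaviour.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t wide = UINT64_MAX;   /* wide vector element holding -1 */
        uint8_t narrow_u = 0xff;      /* old element type: zero-extends */
        int8_t narrow_s = -1;         /* fixed element type: sign-extends */

        printf("%d\n", narrow_u == wide);  /* 0: 255 != 0xffff...ffff */
        printf("%d\n", narrow_s == wide);  /* 1: -1 widens to 0xffff...ffff */
        return 0;
    }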
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 86 +++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 67 insertions(+), 19 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

Used the wrong temporary in the computation of subtractive overflow.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

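The corrected xor/sub/xor/and sequence in the do_sat_addsub_64 hunk
below is the usual bit trick for detecting signed overflow of a
subtraction: overflow is only possible when the operands have different
signs, and it occurred when the result's sign differs from the
minuend's. A standalone sketch of the same test (illustrative only; the
function name is made up):

    #include <stdbool.h>
    #include <stdint.h>

    static bool sub64_overflows(int64_t a, int64_t b)
    {
        uint64_t ua = a, ub = b;
        uint64_t res = ua - ub;          /* wrapping subtraction */

        /* Sign bit of (a ^ b) & (a ^ res):
         * (a ^ b)   -- operands have different signs,
         * (a ^ res) -- result sign differs from a's sign.
         */
        return (int64_t)((ua ^ ub) & (ua ^ res)) < 0;
    }

    /* e.g. sub64_overflows(INT64_MIN, 1) is true,
     *      sub64_overflows(0, 1) is false.        */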
diff --git a/target/arm/translate.c b/target/arm/translate.c
16
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
18
--- a/target/arm/translate-sve.c
14
+++ b/target/arm/translate.c
19
+++ b/target/arm/translate-sve.c
15
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_64(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
16
#include "disas/disas.h"
21
/* Detect signed overflow for subtraction. */
17
#include "exec/exec-all.h"
22
tcg_gen_xor_i64(t0, reg, val);
18
#include "tcg-op.h"
23
tcg_gen_sub_i64(t1, reg, val);
19
+#include "tcg-op-gvec.h"
24
- tcg_gen_xor_i64(reg, reg, t0);
20
#include "qemu/log.h"
25
+ tcg_gen_xor_i64(reg, reg, t1);
21
#include "qemu/bitops.h"
26
tcg_gen_and_i64(t0, t0, reg);
22
#include "arm_ldst.h"
27
23
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
28
/* Bound the result. */
24
#define NEON_3R_VPMAX 20
25
#define NEON_3R_VPMIN 21
26
#define NEON_3R_VQDMULH_VQRDMULH 22
27
-#define NEON_3R_VPADD 23
28
+#define NEON_3R_VPADD_VQRDMLAH 23
29
#define NEON_3R_SHA 24 /* SHA1C,SHA1P,SHA1M,SHA1SU0,SHA256H{2},SHA256SU1 */
30
-#define NEON_3R_VFM 25 /* VFMA, VFMS : float fused multiply-add */
31
+#define NEON_3R_VFM_VQRDMLSH 25 /* VFMA, VFMS, VQRDMLSH */
32
#define NEON_3R_FLOAT_ARITH 26 /* float VADD, VSUB, VPADD, VABD */
33
#define NEON_3R_FLOAT_MULTIPLY 27 /* float VMLA, VMLS, VMUL */
34
#define NEON_3R_FLOAT_CMP 28 /* float VCEQ, VCGE, VCGT */
35
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_3r_sizes[] = {
36
[NEON_3R_VPMAX] = 0x7,
37
[NEON_3R_VPMIN] = 0x7,
38
[NEON_3R_VQDMULH_VQRDMULH] = 0x6,
39
- [NEON_3R_VPADD] = 0x7,
40
+ [NEON_3R_VPADD_VQRDMLAH] = 0x7,
41
[NEON_3R_SHA] = 0xf, /* size field encodes op type */
42
- [NEON_3R_VFM] = 0x5, /* size bit 1 encodes op */
43
+ [NEON_3R_VFM_VQRDMLSH] = 0x7, /* For VFM, size bit 1 encodes op */
44
[NEON_3R_FLOAT_ARITH] = 0x5, /* size bit 1 encodes op */
45
[NEON_3R_FLOAT_MULTIPLY] = 0x5, /* size bit 1 encodes op */
46
[NEON_3R_FLOAT_CMP] = 0x5, /* size bit 1 encodes op */
47
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
48
[NEON_2RM_VCVT_UF] = 0x4,
49
};
50
51
+
52
+/* Expand v8.1 simd helper. */
53
+static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
54
+ int q, int rd, int rn, int rm)
55
+{
56
+ if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
57
+ int opr_sz = (1 + q) * 8;
58
+ tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
59
+ vfp_reg_offset(1, rn),
60
+ vfp_reg_offset(1, rm), cpu_env,
61
+ opr_sz, opr_sz, 0, fn);
62
+ return 0;
63
+ }
64
+ return 1;
65
+}
66
+
67
/* Translate a NEON data processing instruction. Return nonzero if the
68
instruction is invalid.
69
We process data in a mixture of 32-bit and 64-bit chunks.
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
71
if (q && ((rd | rn | rm) & 1)) {
72
return 1;
73
}
74
- /*
75
- * The SHA-1/SHA-256 3-register instructions require special treatment
76
- * here, as their size field is overloaded as an op type selector, and
77
- * they all consume their input in a single pass.
78
- */
79
- if (op == NEON_3R_SHA) {
80
+ switch (op) {
81
+ case NEON_3R_SHA:
82
+ /* The SHA-1/SHA-256 3-register instructions require special
83
+ * treatment here, as their size field is overloaded as an
84
+ * op type selector, and they all consume their input in a
85
+ * single pass.
86
+ */
87
if (!q) {
88
return 1;
89
}
90
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
91
tcg_temp_free_ptr(ptr2);
92
tcg_temp_free_ptr(ptr3);
93
return 0;
94
+
95
+ case NEON_3R_VPADD_VQRDMLAH:
96
+ if (!u) {
97
+ break; /* VPADD */
98
+ }
99
+ /* VQRDMLAH */
100
+ switch (size) {
101
+ case 1:
102
+ return do_v81_helper(s, gen_helper_gvec_qrdmlah_s16,
103
+ q, rd, rn, rm);
104
+ case 2:
105
+ return do_v81_helper(s, gen_helper_gvec_qrdmlah_s32,
106
+ q, rd, rn, rm);
107
+ }
108
+ return 1;
109
+
110
+ case NEON_3R_VFM_VQRDMLSH:
111
+ if (!u) {
112
+ /* VFM, VFMS */
113
+ if (size == 1) {
114
+ return 1;
115
+ }
116
+ break;
117
+ }
118
+ /* VQRDMLSH */
119
+ switch (size) {
120
+ case 1:
121
+ return do_v81_helper(s, gen_helper_gvec_qrdmlsh_s16,
122
+ q, rd, rn, rm);
123
+ case 2:
124
+ return do_v81_helper(s, gen_helper_gvec_qrdmlsh_s32,
125
+ q, rd, rn, rm);
126
+ }
127
+ return 1;
128
}
129
if (size == 3 && op != NEON_3R_LOGIC) {
130
/* 64-bit element instructions. */
131
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
132
rm = rtmp;
133
}
134
break;
135
- case NEON_3R_VPADD:
136
- if (u) {
137
- return 1;
138
- }
139
- /* Fall through */
140
+ case NEON_3R_VPADD_VQRDMLAH:
141
case NEON_3R_VPMAX:
142
case NEON_3R_VPMIN:
143
pairwise = 1;
144
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
145
return 1;
146
}
147
break;
148
- case NEON_3R_VFM:
149
- if (!arm_dc_feature(s, ARM_FEATURE_VFP4) || u) {
150
+ case NEON_3R_VFM_VQRDMLSH:
151
+ if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
152
return 1;
153
}
154
break;
155
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
156
}
157
}
158
break;
159
- case NEON_3R_VPADD:
160
+ case NEON_3R_VPADD_VQRDMLAH:
161
switch (size) {
162
case 0: gen_helper_neon_padd_u8(tmp, tmp, tmp2); break;
163
case 1: gen_helper_neon_padd_u16(tmp, tmp, tmp2); break;
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
165
}
166
}
167
break;
168
- case NEON_3R_VFM:
169
+ case NEON_3R_VFM_VQRDMLSH:
170
{
171
/* VFMA, VFMS: fused multiply-add */
172
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
--
2.16.2

--
2.18.0
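For reference, the operation the new VQRDMLAH vector paths above
dispatch to is, per 16-bit element, a rounding doubling
multiply-accumulate with saturation. A scalar model of that arithmetic
(my own sketch, not the QEMU helper; the sat flag stands in for
FPSCR.QC, and the code assumes arithmetic right shift of negative
values as QEMU does):

    #include <stdint.h>

    static int16_t qrdmlah16(int16_t acc, int16_t a, int16_t b, int *sat)
    {
        /* acc + round(2 * a * b / 2^16), computed as one widened
         * expression with everything scaled down by a factor of two. */
        int32_t r = ((int32_t)acc << 15) + (int32_t)a * b + (1 << 14);

        r >>= 15;
        if (r != (int16_t)r) {
            *sat = 1;                    /* cumulative saturation */
            r = r < 0 ? INT16_MIN : INT16_MAX;
        }
        return (int16_t)r;
    }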
From: Richard Henderson <richard.henderson@linaro.org>

Not enabled anywhere yet.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 1 +
 linux-user/elfload.c | 1 +
 2 files changed, 2 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.

Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 5 ----
 target/arm/translate-sve.c | 49 +++++++++++++++++++++++++-------------
 2 files changed, 32 insertions(+), 22 deletions(-)

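The translate-sve.c change below computes how many leading predicate
elements are true before handing the helper a bit count. A rough
standalone model of that count for the unsigned WHILE forms
(illustrative only, names made up; the real code also handles the
signed and 32-bit variants):

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t while_count(uint64_t op0, uint64_t op1,
                                bool inclusive, uint64_t elements)
    {
        /* Condition false from the start: all-false predicate. */
        if (op0 > op1 || (!inclusive && op0 == op1)) {
            return 0;
        }
        /* op1 at the maximum integer: the increment-and-compare loop
         * of the pseudocode never fails, so the predicate is all-true. */
        if (inclusive && op1 == UINT64_MAX) {
            return elements;
        }
        uint64_t count = (op1 - op0) + (inclusive ? 1 : 0);
        return count < elements ? count : elements;
    }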
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
16
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
24
--- a/target/arm/sve_helper.c
18
+++ b/target/arm/cpu.h
25
+++ b/target/arm/sve_helper.c
19
@@ -XXX,XX +XXX,XX @@ enum arm_features {
26
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
20
ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
27
return flags;
21
ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
28
}
22
ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
29
23
+ ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
30
- /* Scale from predicate element count to bits. */
24
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
31
- count <<= esz;
25
};
32
- /* Bound to the bits in the predicate. */
26
33
- count = MIN(count, oprsz * 8);
27
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
34
-
35
/* Set all of the requested bits. */
36
for (i = 0; i < count / 64; ++i) {
37
d->p[i] = esz_mask;
38
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
28
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
29
--- a/linux-user/elfload.c
40
--- a/target/arm/translate-sve.c
30
+++ b/linux-user/elfload.c
41
+++ b/target/arm/translate-sve.c
31
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
42
@@ -XXX,XX +XXX,XX @@ static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
32
GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
43
33
GET_FEATURE(ARM_FEATURE_V8_FP16,
44
static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
34
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
45
{
35
+ GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
46
- if (!sve_access_check(s)) {
36
#undef GET_FEATURE
47
- return true;
37
48
- }
38
return hwcaps;
49
-
50
- TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
51
- TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
52
- TCGv_i64 t0 = tcg_temp_new_i64();
53
- TCGv_i64 t1 = tcg_temp_new_i64();
54
+ TCGv_i64 op0, op1, t0, t1, tmax;
55
TCGv_i32 t2, t3;
56
TCGv_ptr ptr;
57
unsigned desc, vsz = vec_full_reg_size(s);
58
TCGCond cond;
59
60
+ if (!sve_access_check(s)) {
61
+ return true;
62
+ }
63
+
64
+ op0 = read_cpu_reg(s, a->rn, 1);
65
+ op1 = read_cpu_reg(s, a->rm, 1);
66
+
67
if (!a->sf) {
68
if (a->u) {
69
tcg_gen_ext32u_i64(op0, op0);
70
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
71
72
/* For the helper, compress the different conditions into a computation
73
* of how many iterations for which the condition is true.
74
- *
75
- * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
76
- * 2**64 iterations, overflowing to 0. Of course, predicate registers
77
- * aren't that large, so any value >= predicate size is sufficient.
78
*/
79
+ t0 = tcg_temp_new_i64();
80
+ t1 = tcg_temp_new_i64();
81
tcg_gen_sub_i64(t0, op1, op0);
82
83
- /* t0 = MIN(op1 - op0, vsz). */
84
- tcg_gen_movi_i64(t1, vsz);
85
- tcg_gen_umin_i64(t0, t0, t1);
86
+ tmax = tcg_const_i64(vsz >> a->esz);
87
if (a->eq) {
88
/* Equality means one more iteration. */
89
tcg_gen_addi_i64(t0, t0, 1);
90
+
91
+ /* If op1 is max (un)signed integer (and the only time the addition
92
+ * above could overflow), then we produce an all-true predicate by
93
+ * setting the count to the vector length. This is because the
94
+ * pseudocode is described as an increment + compare loop, and the
95
+ * max integer would always compare true.
96
+ */
97
+ tcg_gen_movi_i64(t1, (a->sf
98
+ ? (a->u ? UINT64_MAX : INT64_MAX)
99
+ : (a->u ? UINT32_MAX : INT32_MAX)));
100
+ tcg_gen_movcond_i64(TCG_COND_EQ, t0, op1, t1, tmax, t0);
101
}
102
103
- /* t0 = (condition true ? t0 : 0). */
104
+ /* Bound to the maximum. */
105
+ tcg_gen_umin_i64(t0, t0, tmax);
106
+ tcg_temp_free_i64(tmax);
107
+
108
+ /* Set the count to zero if the condition is false. */
109
cond = (a->u
110
? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
111
: (a->eq ? TCG_COND_LE : TCG_COND_LT));
112
tcg_gen_movi_i64(t1, 0);
113
tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
114
+ tcg_temp_free_i64(t1);
115
116
+ /* Since we're bounded, pass as a 32-bit type. */
117
t2 = tcg_temp_new_i32();
118
tcg_gen_extrl_i64_i32(t2, t0);
119
tcg_temp_free_i64(t0);
120
- tcg_temp_free_i64(t1);
121
+
122
+ /* Scale elements to bits. */
123
+ tcg_gen_shli_i32(t2, t2, a->esz);
124
125
desc = (vsz / 8) - 2;
126
desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
--
2.16.2

--
2.18.0
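The elfload.c hunk above exports the new feature to Linux guest
binaries through AT_HWCAP; a guest-side program would typically probe
it as below (a sketch: HWCAP_ASIMDRDM comes from the AArch64 kernel
headers and may be absent from older ones, hence the #ifdef):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

    #ifdef HWCAP_ASIMDRDM
        printf("SQRDMLAH/SQRDMLSH: %s\n",
               (hwcap & HWCAP_ASIMDRDM) ? "present" : "absent");
    #else
        puts("HWCAP_ASIMDRDM not defined by these kernel headers");
    #endif
        return 0;
    }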
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180228193125.20577-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h | 7 ++++
 target/arm/translate-a64.c | 48 ++++++++++++++++++++++-
 target/arm/vec_helper.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 151 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

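The gvec_fcadd* helpers added below flip the sign of either the real
or the imaginary addend per rotation; in complex-number terms the
second operand is added after a rotation by 90 or 270 degrees. A small
model of one element pair (my own sketch, not QEMU code):

    #include <complex.h>
    #include <stdio.h>

    /* FCADD #90 adds i*m, FCADD #270 adds -i*m. */
    static double complex fcadd(double complex n, double complex m, int rot270)
    {
        return rot270 ? n - I * m : n + I * m;
    }

    int main(void)
    {
        double complex r = fcadd(1.0 + 2.0 * I, 3.0 + 4.0 * I, 0);
        /* (1+2i) + i*(3+4i) = -3+5i, matching d[re] = n[re] - m[im],
         * d[im] = n[im] + m[re] in the helpers with neg_imag set.   */
        printf("%g%+gi\n", creal(r), cimag(r));
        return 0;
    }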
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.h
18
--- a/target/arm/sve_helper.c
16
+++ b/target/arm/helper.h
19
+++ b/target/arm/sve_helper.c
17
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
20
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
18
DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
21
uint64_t *d = vd, *n = vn;
19
void, ptr, ptr, ptr, ptr, i32)
22
uint8_t *pg = vg;
20
23
for (i = 0; i < opr_sz; i += 1) {
21
+DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
24
- d[i] = n[1] & -(uint64_t)(pg[H1(i)] & 1);
22
+ void, ptr, ptr, ptr, ptr, i32)
25
+ d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
23
+DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
26
}
24
+ void, ptr, ptr, ptr, ptr, i32)
25
+DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
26
+ void, ptr, ptr, ptr, ptr, i32)
27
+
28
#ifdef TARGET_AARCH64
29
#include "helper-a64.h"
30
#endif
31
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate-a64.c
34
+++ b/target/arm/translate-a64.c
35
@@ -XXX,XX +XXX,XX @@ static void gen_gvec_op3_env(DisasContext *s, bool is_q, int rd,
36
is_q ? 16 : 8, vec_full_reg_size(s), 0, fn);
37
}
27
}
38
28
39
+/* Expand a 3-operand + fpstatus pointer + simd data value operation using
40
+ * an out-of-line helper.
41
+ */
42
+static void gen_gvec_op3_fpst(DisasContext *s, bool is_q, int rd, int rn,
43
+ int rm, bool is_fp16, int data,
44
+ gen_helper_gvec_3_ptr *fn)
45
+{
46
+ TCGv_ptr fpst = get_fpstatus_ptr(is_fp16);
47
+ tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
48
+ vec_full_reg_offset(s, rn),
49
+ vec_full_reg_offset(s, rm), fpst,
50
+ is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
51
+ tcg_temp_free_ptr(fpst);
52
+}
53
+
54
/* Set ZF and NF based on a 64 bit result. This is alas fiddlier
55
* than the 32 bit equivalent.
56
*/
57
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
58
int size = extract32(insn, 22, 2);
59
bool u = extract32(insn, 29, 1);
60
bool is_q = extract32(insn, 30, 1);
61
- int feature;
62
+ int feature, rot;
63
64
switch (u * 16 + opcode) {
65
case 0x10: /* SQRDMLAH (vector) */
66
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
67
}
68
feature = ARM_FEATURE_V8_RDM;
69
break;
70
+ case 0xc: /* FCADD, #90 */
71
+ case 0xe: /* FCADD, #270 */
72
+ if (size == 0
73
+ || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
74
+ || (size == 3 && !is_q)) {
75
+ unallocated_encoding(s);
76
+ return;
77
+ }
78
+ feature = ARM_FEATURE_V8_FCMA;
79
+ break;
80
default:
81
unallocated_encoding(s);
82
return;
83
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
84
}
85
return;
86
87
+ case 0xc: /* FCADD, #90 */
88
+ case 0xe: /* FCADD, #270 */
89
+ rot = extract32(opcode, 1, 1);
90
+ switch (size) {
91
+ case 1:
92
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, size == 1, rot,
93
+ gen_helper_gvec_fcaddh);
94
+ break;
95
+ case 2:
96
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, size == 1, rot,
97
+ gen_helper_gvec_fcadds);
98
+ break;
99
+ case 3:
100
+ gen_gvec_op3_fpst(s, is_q, rd, rn, rm, size == 1, rot,
101
+ gen_helper_gvec_fcaddd);
102
+ break;
103
+ default:
104
+ g_assert_not_reached();
105
+ }
106
+ return;
107
+
108
default:
109
g_assert_not_reached();
110
}
111
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
112
index XXXXXXX..XXXXXXX 100644
113
--- a/target/arm/vec_helper.c
114
+++ b/target/arm/vec_helper.c
115
@@ -XXX,XX +XXX,XX @@
116
#include "exec/exec-all.h"
117
#include "exec/helper-proto.h"
118
#include "tcg/tcg-gvec-desc.h"
119
+#include "fpu/softfloat.h"
120
121
122
+/* Note that vector data is stored in host-endian 64-bit chunks,
123
+ so addressing units smaller than that needs a host-endian fixup. */
124
+#ifdef HOST_WORDS_BIGENDIAN
125
+#define H1(x) ((x) ^ 7)
126
+#define H2(x) ((x) ^ 3)
127
+#define H4(x) ((x) ^ 1)
128
+#else
129
+#define H1(x) (x)
130
+#define H2(x) (x)
131
+#define H4(x) (x)
132
+#endif
133
+
134
#define SET_QC() env->vfp.xregs[ARM_VFP_FPSCR] |= CPSR_Q
135
136
static void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
137
@@ -XXX,XX +XXX,XX @@ void HELPER(gvec_qrdmlsh_s32)(void *vd, void *vn, void *vm,
138
}
139
clear_tail(d, opr_sz, simd_maxsz(desc));
140
}
141
+
142
+void HELPER(gvec_fcaddh)(void *vd, void *vn, void *vm,
143
+ void *vfpst, uint32_t desc)
144
+{
145
+ uintptr_t opr_sz = simd_oprsz(desc);
146
+ float16 *d = vd;
147
+ float16 *n = vn;
148
+ float16 *m = vm;
149
+ float_status *fpst = vfpst;
150
+ uint32_t neg_real = extract32(desc, SIMD_DATA_SHIFT, 1);
151
+ uint32_t neg_imag = neg_real ^ 1;
152
+ uintptr_t i;
153
+
154
+ /* Shift boolean to the sign bit so we can xor to negate. */
155
+ neg_real <<= 15;
156
+ neg_imag <<= 15;
157
+
158
+ for (i = 0; i < opr_sz / 2; i += 2) {
159
+ float16 e0 = n[H2(i)];
160
+ float16 e1 = m[H2(i + 1)] ^ neg_imag;
161
+ float16 e2 = n[H2(i + 1)];
162
+ float16 e3 = m[H2(i)] ^ neg_real;
163
+
164
+ d[H2(i)] = float16_add(e0, e1, fpst);
165
+ d[H2(i + 1)] = float16_add(e2, e3, fpst);
166
+ }
167
+ clear_tail(d, opr_sz, simd_maxsz(desc));
168
+}
169
+
170
+void HELPER(gvec_fcadds)(void *vd, void *vn, void *vm,
171
+ void *vfpst, uint32_t desc)
172
+{
173
+ uintptr_t opr_sz = simd_oprsz(desc);
174
+ float32 *d = vd;
175
+ float32 *n = vn;
176
+ float32 *m = vm;
177
+ float_status *fpst = vfpst;
178
+ uint32_t neg_real = extract32(desc, SIMD_DATA_SHIFT, 1);
179
+ uint32_t neg_imag = neg_real ^ 1;
180
+ uintptr_t i;
181
+
182
+ /* Shift boolean to the sign bit so we can xor to negate. */
183
+ neg_real <<= 31;
184
+ neg_imag <<= 31;
185
+
186
+ for (i = 0; i < opr_sz / 4; i += 2) {
187
+ float32 e0 = n[H4(i)];
188
+ float32 e1 = m[H4(i + 1)] ^ neg_imag;
189
+ float32 e2 = n[H4(i + 1)];
190
+ float32 e3 = m[H4(i)] ^ neg_real;
191
+
192
+ d[H4(i)] = float32_add(e0, e1, fpst);
193
+ d[H4(i + 1)] = float32_add(e2, e3, fpst);
194
+ }
195
+ clear_tail(d, opr_sz, simd_maxsz(desc));
196
+}
197
+
198
+void HELPER(gvec_fcaddd)(void *vd, void *vn, void *vm,
199
+ void *vfpst, uint32_t desc)
200
+{
201
+ uintptr_t opr_sz = simd_oprsz(desc);
202
+ float64 *d = vd;
203
+ float64 *n = vn;
204
+ float64 *m = vm;
205
+ float_status *fpst = vfpst;
206
+ uint64_t neg_real = extract64(desc, SIMD_DATA_SHIFT, 1);
207
+ uint64_t neg_imag = neg_real ^ 1;
208
+ uintptr_t i;
209
+
210
+ /* Shift boolean to the sign bit so we can xor to negate. */
211
+ neg_real <<= 63;
212
+ neg_imag <<= 63;
213
+
214
+ for (i = 0; i < opr_sz / 8; i += 2) {
215
+ float64 e0 = n[i];
216
+ float64 e1 = m[i + 1] ^ neg_imag;
217
+ float64 e2 = n[i + 1];
218
+ float64 e3 = m[i] ^ neg_real;
219
+
220
+ d[i] = float64_add(e0, e1, fpst);
221
+ d[i + 1] = float64_add(e2, e3, fpst);
222
+ }
223
+ clear_tail(d, opr_sz, simd_maxsz(desc));
224
+}
--
2.16.2

--
2.18.0
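The one-character sve_movz_d fix above (n[1] -> n[i]) leaves the
per-element masking idiom itself unchanged: negating the 0/1 predicate
bit yields an all-zeroes or all-ones 64-bit mask. That idiom in
isolation (illustrative only):

    #include <assert.h>
    #include <stdint.h>

    static uint64_t movz_elem(uint64_t n, unsigned pbit)
    {
        /* pbit & 1 is 0 or 1; negating it gives 0 or UINT64_MAX. */
        return n & -(uint64_t)(pbit & 1);
    }

    int main(void)
    {
        assert(movz_elem(0x1234, 1) == 0x1234);
        assert(movz_elem(0x1234, 0) == 0);
        return 0;
    }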