Some arm patches; my to-review queue is by no means empty, but
this is a big enough set of patches to be getting on with...

-- PMM

The following changes since commit cb9c6a8e5ad6a1f0ce164d352e3102df46986e22:

  .gitlab-ci.d/windows: Work-around timeout and OpenGL problems of the MSYS2 jobs (2023-01-04 18:58:33 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230105

for you to fetch changes up to 93c9678de9dc7d2e68f9e8477da072bac30ef132:

  hw/net: Fix read of uninitialized memory in imx_fec. (2023-01-05 15:33:00 +0000)

----------------------------------------------------------------
target-arm queue:
 * Implement AArch32 ARMv8-R support
 * Add Cortex-R52 CPU
 * fix handling of HLT semihosting in system mode
 * hw/timer/imx_epit: cleanup and fix bug in compare handling
 * target/arm: Coding style fixes
 * target/arm: Clean up includes
 * nseries: minor code cleanups
 * target/arm: align exposed ID registers with Linux
 * hw/arm/smmu-common: remove unnecessary inlines
 * i.MX7D: Handle GPT timers
 * i.MX7D: Connect IRQs to GPIO devices
 * i.MX6UL: Add a specific GPT timer instance
 * hw/net: Fix read of uninitialized memory in imx_fec

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: fix handling of HLT semihosting in system mode

Axel Heider (8):
      hw/timer/imx_epit: improve comments
      hw/timer/imx_epit: cleanup CR defines
      hw/timer/imx_epit: define SR_OCIF
      hw/timer/imx_epit: update interrupt state on CR write access
      hw/timer/imx_epit: hard reset initializes CR with 0
      hw/timer/imx_epit: factor out register write handlers
      hw/timer/imx_epit: remove explicit fields cnt and freq
      hw/timer/imx_epit: fix compare timer handling

Claudio Fontana (1):
      target/arm: cleanup cpu includes

Fabiano Rosas (5):
      target/arm: Fix checkpatch comment style warnings in helper.c
      target/arm: Fix checkpatch space errors in helper.c
      target/arm: Fix checkpatch brace errors in helper.c
      target/arm: Remove unused includes from m_helper.c
      target/arm: Remove unused includes from helper.c

Jean-Christophe Dubois (4):
      i.MX7D: Connect GPT timers to IRQ
      i.MX7D: Compute clock frequency for the fixed frequency clocks.
      i.MX6UL: Add a specific GPT timer instance for the i.MX6UL
      i.MX7D: Connect IRQs to GPIO devices.

Peter Maydell (1):
      target/arm: Set lg_page_size to 0 if either S1 or S2 asks for it

Philippe Mathieu-Daudé (5):
      hw/input/tsc2xxx: Constify set_transform()'s MouseTransformInfo arg
      hw/arm/nseries: Constify various read-only arrays
      hw/arm/nseries: Silent -Wmissing-field-initializers warning
      hw/arm/smmu-common: Reduce smmu_inv_notifiers_mr() scope
      hw/arm/smmu-common: Avoid using inlined functions with external linkage

Stephen Longfield (1):
      hw/net: Fix read of uninitialized memory in imx_fec.

Tobias Röhmel (7):
      target/arm: Don't add all MIDR aliases for cores that implement PMSA
      target/arm: Make RVBAR available for all ARMv8 CPUs
      target/arm: Make stage_2_format for cache attributes optional
      target/arm: Enable TTBCR_EAE for ARMv8-R AArch32
      target/arm: Add PMSAv8r registers
      target/arm: Add PMSAv8r functionality
      target/arm: Add ARM Cortex-R52 CPU

Zhuojia Shen (1):
      target/arm: align exposed ID registers with Linux

 include/hw/arm/fsl-imx7.h         |  20 +
 include/hw/arm/smmu-common.h      |   3 -
 include/hw/input/tsc2xxx.h        |   4 +-
 include/hw/timer/imx_epit.h       |   8 +-
 include/hw/timer/imx_gpt.h        |   1 +
 target/arm/cpu.h                  |   6 +
 target/arm/internals.h            |   4 +
 hw/arm/fsl-imx6ul.c               |   2 +-
 hw/arm/fsl-imx7.c                 |  41 +-
 hw/arm/nseries.c                  |  28 +-
 hw/arm/smmu-common.c              |  15 +-
 hw/input/tsc2005.c                |   2 +-
 hw/input/tsc210x.c                |   3 +-
 hw/misc/imx6ul_ccm.c              |   6 -
 hw/misc/imx7_ccm.c                |  49 ++-
 hw/net/imx_fec.c                  |   8 +-
 hw/timer/imx_epit.c               | 376 +++++++++-------
 hw/timer/imx_gpt.c                |  25 ++
 target/arm/cpu.c                  |  35 +-
 target/arm/cpu64.c                |   6 -
 target/arm/cpu_tcg.c              |  42 ++
 target/arm/debug_helper.c         |   3 +
 target/arm/helper.c               | 871 +++++++++++++++++++++++++++++---------
 target/arm/m_helper.c             |  16 -
 target/arm/machine.c              |  28 ++
 target/arm/ptw.c                  | 152 +++++--
 target/arm/tlb_helper.c           |   4 +
 target/arm/translate.c            |   2 +-
 tests/tcg/aarch64/sysregs.c       |  24 +-
 tests/tcg/aarch64/Makefile.target |   7 +-
 30 files changed, 1330 insertions(+), 461 deletions(-)
In get_phys_addr_twostage() we set the lg_page_size of the result to
the maximum of the stage 1 and stage 2 page sizes.  This works for
the case where we do want to create a TLB entry, because we know the
common TLB code only creates entries of the TARGET_PAGE_SIZE and
asking for a size larger than that only means that invalidations
invalidate the whole larger area.  However, if lg_page_size is
smaller than TARGET_PAGE_SIZE this effectively means "don't create a
TLB entry"; in this case if either S1 or S2 said "this covers less
than a page and can't go in a TLB" then the final result also should
be marked that way.  Set the resulting page size to 0 if either
stage asked for a less-than-a-page entry, and expand the comment
to explain what's going on.

This has no effect for VMSA because currently the VMSA lookup always
returns results that cover at least TARGET_PAGE_SIZE; however when we
add v8R support it will reuse this code path, and for v8R the S1 and
S2 results can be smaller than TARGET_PAGE_SIZE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221212142708.610090-1-peter.maydell@linaro.org
---
 target/arm/ptw.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
     }
 
     /*
-     * Use the maximum of the S1 & S2 page size, so that invalidation
-     * of pages > TARGET_PAGE_SIZE works correctly.
+     * If either S1 or S2 returned a result smaller than TARGET_PAGE_SIZE,
+     * this means "don't put this in the TLB"; in this case, return a
+     * result with lg_page_size == 0 to achieve that. Otherwise,
+     * use the maximum of the S1 & S2 page size, so that invalidation
+     * of pages > TARGET_PAGE_SIZE works correctly. (This works even though
+     * we know the combined result permissions etc only cover the minimum
+     * of the S1 and S2 page size, because we know that the common TLB code
+     * never actually creates TLB entries bigger than TARGET_PAGE_SIZE,
+     * and passing a larger page size value only affects invalidations.)
      */
-    if (result->f.lg_page_size < s1_lgpgsz) {
+    if (result->f.lg_page_size < TARGET_PAGE_BITS ||
+        s1_lgpgsz < TARGET_PAGE_BITS) {
+        result->f.lg_page_size = 0;
+    } else if (result->f.lg_page_size < s1_lgpgsz) {
         result->f.lg_page_size = s1_lgpgsz;
     }
-- 
2.25.1
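To make the rule in the commit message above concrete, here is a minimal
standalone C sketch of the page-size combination logic it describes. It is
illustrative only: the function name combine_lg_page_size() and the local
TARGET_PAGE_BITS value of 12 are assumptions for the example, not QEMU API.

#include <stdint.h>

#define TARGET_PAGE_BITS 12   /* assumed 4K target pages for this sketch */

/*
 * If either stage reported a translation smaller than a target page,
 * the combined result must not be cached in the TLB, which is
 * signalled by lg_page_size == 0.  Otherwise keep the larger of the
 * two sizes so that invalidations of big pages cover the whole range.
 */
static uint8_t combine_lg_page_size(uint8_t s1_lg, uint8_t s2_lg)
{
    if (s1_lg < TARGET_PAGE_BITS || s2_lg < TARGET_PAGE_BITS) {
        return 0;               /* "don't put this in the TLB" */
    }
    return s1_lg > s2_lg ? s1_lg : s2_lg;
}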
1
When escalating to HardFault, we must go into Lockup if we
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
can't take the synchronous HardFault because the current
3
execution priority is already at or below the priority of
4
HardFault. In v7M HF is always priority -1 so a simple < 0
5
comparison sufficed; in v8M the priority of HardFault can
6
vary depending on whether it is a Secure or NonSecure
7
HardFault, so we must check against the priority of the
8
HardFault exception vector we're about to use.
9
2
3
Cores with PMSA have the MPUIR register which has the
4
same encoding as the MIDR alias with opc2=4. So we only
5
add that alias if we are not realizing a core that
6
implements PMSA.
7
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20221206102504.165775-2-tobias.roehmel@rwth-aachen.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1505240046-11454-13-git-send-email-peter.maydell@linaro.org
13
---
13
---
14
hw/intc/armv7m_nvic.c | 23 ++++++++++++-----------
14
target/arm/helper.c | 13 +++++++++----
15
1 file changed, 12 insertions(+), 11 deletions(-)
15
1 file changed, 9 insertions(+), 4 deletions(-)
16
16
17
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/armv7m_nvic.c
19
--- a/target/arm/helper.c
20
+++ b/hw/intc/armv7m_nvic.c
20
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
21
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
22
.access = PL1_R, .type = ARM_CP_NO_RAW, .resetvalue = cpu->midr,
23
.fieldoffset = offsetof(CPUARMState, cp15.c0_cpuid),
24
.readfn = midr_read },
25
- /* crn = 0 op1 = 0 crm = 0 op2 = 4,7 : AArch32 aliases of MIDR */
26
- { .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
27
- .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
28
- .access = PL1_R, .resetvalue = cpu->midr },
29
+ /* crn = 0 op1 = 0 crm = 0 op2 = 7 : AArch32 aliases of MIDR */
30
{ .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
31
.cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 7,
32
.access = PL1_R, .resetvalue = cpu->midr },
33
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
34
.accessfn = access_aa64_tid1,
35
.type = ARM_CP_CONST, .resetvalue = cpu->revidr },
36
};
37
+ ARMCPRegInfo id_v8_midr_alias_cp_reginfo = {
38
+ .name = "MIDR", .type = ARM_CP_ALIAS | ARM_CP_CONST,
39
+ .cp = 15, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 4,
40
+ .access = PL1_R, .resetvalue = cpu->midr
41
+ };
42
ARMCPRegInfo id_cp_reginfo[] = {
43
/* These are common to v8 and pre-v8 */
44
{ .name = "CTR",
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
22
}
46
}
23
47
if (arm_feature(env, ARM_FEATURE_V8)) {
24
if (escalate) {
48
define_arm_cp_regs(cpu, id_v8_midr_cp_reginfo);
25
- if (running < 0) {
49
+ if (!arm_feature(env, ARM_FEATURE_PMSA)) {
26
- /* We want to escalate to HardFault but we can't take a
50
+ define_one_arm_cp_reg(cpu, &id_v8_midr_alias_cp_reginfo);
27
- * synchronous HardFault at this point either. This is a
28
- * Lockup condition due to a guest bug. We don't model
29
- * Lockup, so report via cpu_abort() instead.
30
- */
31
- cpu_abort(&s->cpu->parent_obj,
32
- "Lockup: can't escalate %d to HardFault "
33
- "(current priority %d)\n", irq, running);
34
- }
35
36
- /* We can do the escalation, so we take HardFault instead.
37
+ /* We need to escalate this exception to a synchronous HardFault.
38
* If BFHFNMINS is set then we escalate to the banked HF for
39
* the target security state of the original exception; otherwise
40
* we take a Secure HardFault.
41
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
42
} else {
43
vec = &s->vectors[irq];
44
}
45
+ if (running <= vec->prio) {
46
+ /* We want to escalate to HardFault but we can't take the
47
+ * synchronous HardFault at this point either. This is a
48
+ * Lockup condition due to a guest bug. We don't model
49
+ * Lockup, so report via cpu_abort() instead.
50
+ */
51
+ cpu_abort(&s->cpu->parent_obj,
52
+ "Lockup: can't escalate %d to HardFault "
53
+ "(current priority %d)\n", irq, running);
54
+ }
51
+ }
55
+
52
} else {
56
/* HF may be banked but there is only one shared HFSR */
53
define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo);
57
s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
58
}
54
}
59
--
55
--
60
2.7.4
56
2.25.1
61
57
62
58
diff view generated by jsdifflib
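The encoding conflict described above can be summarised with a tiny
standalone sketch. This is a hypothetical helper, not QEMU code: the cp15
c0_c0_4 encoding is MPUIR on PMSA cores, so only VMSA cores get a MIDR
alias there, while the c0_c0_7 alias is always available.

#include <stdbool.h>

/* Returns whether the opc2=4 MIDR alias should be registered. */
static bool toy_register_midr_alias_opc2_4(bool core_has_pmsa)
{
    /* On PMSA cores this encoding is reserved for MPUIR instead. */
    return !core_has_pmsa;
}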
1
Update armv7m_nvic_acknowledge_irq() and armv7m_nvic_complete_irq()
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
to handle banked exceptions:
3
* acknowledge needs to use the correct vector, which may be
4
in sec_vectors[]
5
* acknowledge needs to return to its caller whether the
6
exception should be taken to secure or non-secure state
7
* complete needs its caller to tell it whether the exception
8
being completed is a secure one or not
9
2
3
RVBAR shadows RVBAR_ELx where x is the highest exception
4
level if the highest EL is not EL3. This patch also allows
5
ARMv8 CPUs to change the reset address with
6
the rvbar property.
7
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20221206102504.165775-3-tobias.roehmel@rwth-aachen.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1505240046-11454-20-git-send-email-peter.maydell@linaro.org
13
---
12
---
14
target/arm/cpu.h | 15 +++++++++++++--
13
target/arm/cpu.c | 6 +++++-
15
hw/intc/armv7m_nvic.c | 26 ++++++++++++++++++++------
14
target/arm/helper.c | 21 ++++++++++++++-------
16
target/arm/helper.c | 8 +++++---
15
2 files changed, 19 insertions(+), 8 deletions(-)
17
hw/intc/trace-events | 4 ++--
18
4 files changed, 40 insertions(+), 13 deletions(-)
19
16
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.c
23
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.c
24
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
21
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
25
* of architecturally banked exceptions.
22
env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
26
*/
23
CPACR, CP11, 3);
27
void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
24
#endif
28
-void armv7m_nvic_acknowledge_irq(void *opaque);
25
+ if (arm_feature(env, ARM_FEATURE_V8)) {
29
+/**
26
+ env->cp15.rvbar = cpu->rvbar_prop;
30
+ * armv7m_nvic_acknowledge_irq: make highest priority pending exception active
27
+ env->regs[15] = cpu->rvbar_prop;
31
+ * @opaque: the NVIC
28
+ }
32
+ *
29
}
33
+ * Move the current highest priority pending exception from the pending
30
34
+ * state to the active state, and update v7m.exception to indicate that
31
#if defined(CONFIG_USER_ONLY)
35
+ * it is the exception currently being handled.
32
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
36
+ *
33
qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_hivecs_property);
37
+ * Returns: true if exception should be taken to Secure state, false for NS
34
}
38
+ */
35
39
+bool armv7m_nvic_acknowledge_irq(void *opaque);
36
- if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
40
/**
37
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
41
* armv7m_nvic_complete_irq: complete specified interrupt or exception
38
object_property_add_uint64_ptr(obj, "rvbar",
42
* @opaque: the NVIC
39
&cpu->rvbar_prop,
43
* @irq: the exception number to complete
40
OBJ_PROP_FLAG_READWRITE);
44
+ * @secure: true if this exception was secure
45
*
46
* Returns: -1 if the irq was not active
47
* 1 if completing this irq brought us back to base (no active irqs)
48
* 0 if there is still an irq active after this one was completed
49
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
50
*/
51
-int armv7m_nvic_complete_irq(void *opaque, int irq);
52
+int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
53
/**
54
* armv7m_nvic_raw_execution_priority: return the raw execution priority
55
* @opaque: the NVIC
56
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
57
index XXXXXXX..XXXXXXX 100644
58
--- a/hw/intc/armv7m_nvic.c
59
+++ b/hw/intc/armv7m_nvic.c
60
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
61
}
62
63
/* Make pending IRQ active. */
64
-void armv7m_nvic_acknowledge_irq(void *opaque)
65
+bool armv7m_nvic_acknowledge_irq(void *opaque)
66
{
67
NVICState *s = (NVICState *)opaque;
68
CPUARMState *env = &s->cpu->env;
69
const int pending = s->vectpending;
70
const int running = nvic_exec_prio(s);
71
VecInfo *vec;
72
+ bool targets_secure;
73
74
assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);
75
76
- vec = &s->vectors[pending];
77
+ if (s->vectpending_is_s_banked) {
78
+ vec = &s->sec_vectors[pending];
79
+ targets_secure = true;
80
+ } else {
81
+ vec = &s->vectors[pending];
82
+ targets_secure = !exc_is_banked(s->vectpending) &&
83
+ exc_targets_secure(s, s->vectpending);
84
+ }
85
86
assert(vec->enabled);
87
assert(vec->pending);
88
89
assert(s->vectpending_prio < running);
90
91
- trace_nvic_acknowledge_irq(pending, s->vectpending_prio);
92
+ trace_nvic_acknowledge_irq(pending, s->vectpending_prio, targets_secure);
93
94
vec->active = 1;
95
vec->pending = 0;
96
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
97
env->v7m.exception = s->vectpending;
98
99
nvic_irq_update(s);
100
+
101
+ return targets_secure;
102
}
103
104
-int armv7m_nvic_complete_irq(void *opaque, int irq)
105
+int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
106
{
107
NVICState *s = (NVICState *)opaque;
108
VecInfo *vec;
109
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq)
110
111
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
112
113
- vec = &s->vectors[irq];
114
+ if (secure && exc_is_banked(irq)) {
115
+ vec = &s->sec_vectors[irq];
116
+ } else {
117
+ vec = &s->vectors[irq];
118
+ }
119
120
- trace_nvic_complete_irq(irq);
121
+ trace_nvic_complete_irq(irq, secure);
122
123
if (!vec->active) {
124
/* Tell the caller this was an illegal exception return */
125
diff --git a/target/arm/helper.c b/target/arm/helper.c
41
diff --git a/target/arm/helper.c b/target/arm/helper.c
126
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
127
--- a/target/arm/helper.c
43
--- a/target/arm/helper.c
128
+++ b/target/arm/helper.c
44
+++ b/target/arm/helper.c
129
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
45
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
130
bool return_to_sp_process = false;
46
if (!arm_feature(env, ARM_FEATURE_EL3) &&
131
bool return_to_handler = false;
47
!arm_feature(env, ARM_FEATURE_EL2)) {
132
bool rettobase = false;
48
ARMCPRegInfo rvbar = {
133
+ bool exc_secure = false;
49
- .name = "RVBAR_EL1", .state = ARM_CP_STATE_AA64,
134
50
+ .name = "RVBAR_EL1", .state = ARM_CP_STATE_BOTH,
135
/* We can only get here from an EXCP_EXCEPTION_EXIT, and
51
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
136
* gen_bx_excret() enforces the architectural rule
52
.access = PL1_R,
137
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
53
.fieldoffset = offsetof(CPUARMState, cp15.rvbar),
138
* which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
54
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
139
*/
55
}
140
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
56
/* RVBAR_EL2 is only implemented if EL2 is the highest EL */
141
- int es = excret & R_V7M_EXCRET_ES_MASK;
57
if (!arm_feature(env, ARM_FEATURE_EL3)) {
142
+ exc_secure = excret & R_V7M_EXCRET_ES_MASK;
58
- ARMCPRegInfo rvbar = {
143
if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
59
- .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
144
- env->v7m.faultmask[es] = 0;
60
- .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
145
+ env->v7m.faultmask[exc_secure] = 0;
61
- .access = PL2_R,
146
}
62
- .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
147
} else {
63
+ ARMCPRegInfo rvbar[] = {
148
env->v7m.faultmask[M_REG_NS] = 0;
64
+ {
65
+ .name = "RVBAR_EL2", .state = ARM_CP_STATE_AA64,
66
+ .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 1,
67
+ .access = PL2_R,
68
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
69
+ },
70
+ { .name = "RVBAR", .type = ARM_CP_ALIAS,
71
+ .cp = 15, .opc1 = 0, .crn = 12, .crm = 0, .opc2 = 1,
72
+ .access = PL2_R,
73
+ .fieldoffset = offsetof(CPUARMState, cp15.rvbar),
74
+ },
75
};
76
- define_one_arm_cp_reg(cpu, &rvbar);
77
+ define_arm_cp_regs(cpu, rvbar);
149
}
78
}
150
}
79
}
151
80
152
- switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception)) {
153
+ switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
154
+ exc_secure)) {
155
case -1:
156
/* attempt to exit an exception that isn't active */
157
ufault = true;
158
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
159
index XXXXXXX..XXXXXXX 100644
160
--- a/hw/intc/trace-events
161
+++ b/hw/intc/trace-events
162
@@ -XXX,XX +XXX,XX @@ nvic_escalate_disabled(int irq) "NVIC escalating irq %d to HardFault: disabled"
163
nvic_set_pending(int irq, bool secure, int en, int prio) "NVIC set pending irq %d secure-bank %d (enabled: %d priority %d)"
164
nvic_clear_pending(int irq, bool secure, int en, int prio) "NVIC clear pending irq %d secure-bank %d (enabled: %d priority %d)"
165
nvic_set_pending_level(int irq) "NVIC set pending: irq %d higher prio than vectpending: setting irq line to 1"
166
-nvic_acknowledge_irq(int irq, int prio) "NVIC acknowledge IRQ: %d now active (prio %d)"
167
-nvic_complete_irq(int irq) "NVIC complete IRQ %d"
168
+nvic_acknowledge_irq(int irq, int prio, bool targets_secure) "NVIC acknowledge IRQ: %d now active (prio %d targets_secure %d)"
169
+nvic_complete_irq(int irq, bool secure) "NVIC complete IRQ %d (secure %d)"
170
nvic_set_irq_level(int irq, int level) "NVIC external irq %d level set to %d"
171
nvic_sysreg_read(uint64_t addr, uint32_t value, unsigned size) "NVIC sysreg read addr 0x%" PRIx64 " data 0x%" PRIx32 " size %u"
172
nvic_sysreg_write(uint64_t addr, uint32_t value, unsigned size) "NVIC sysreg write addr 0x%" PRIx64 " data 0x%" PRIx32 " size %u"
173
--
81
--
174
2.7.4
82
2.25.1
175
83
176
84
diff view generated by jsdifflib
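A toy model of the reset behaviour described in the commit message above,
using assumed names rather than QEMU structures: the board-settable "rvbar"
property provides the reset address, and after reset both the RVBAR value
and the initial PC reflect it.

#include <stdint.h>
#include <stdbool.h>

struct toy_cpu {
    uint64_t rvbar_prop;   /* board-configurable reset address */
    uint64_t rvbar;        /* value read back through RVBAR/RVBAR_ELx */
    uint64_t pc;
};

static void toy_cpu_reset(struct toy_cpu *cpu, bool is_v8)
{
    if (is_v8) {
        cpu->rvbar = cpu->rvbar_prop;   /* RVBAR shadows the reset address */
        cpu->pc = cpu->rvbar_prop;      /* execution starts at RVBAR */
    }
}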
1
Handle banking of SHCSR: some register bits are banked between
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
Secure and Non-Secure, and some are only accessible to Secure.
3
2
3
The v8R PMSAv8 has a two-stage MPU translation process, but, unlike
4
VMSAv8, the stage 2 attributes are in the same format as the stage 1
5
attributes (8-bit MAIR format). Rather than converting the MAIR
6
format to the format used for VMSA stage 2 (bits [5:2] of a VMSA
7
stage 2 descriptor) and then converting back to do the attribute
8
combination, allow combined_attrs_nofwb() to accept s2 attributes
9
that are already in the MAIR format.
10
11
We move the assert() to combined_attrs_fwb(), because that function
12
really does require a VMSA stage 2 attribute format. (We will never
13
get there for v8R, because PMSAv8 does not implement FEAT_S2FWB.)
14
15
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Message-id: 20221206102504.165775-4-tobias.roehmel@rwth-aachen.de
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1505240046-11454-19-git-send-email-peter.maydell@linaro.org
7
---
19
---
8
hw/intc/armv7m_nvic.c | 221 ++++++++++++++++++++++++++++++++++++++------------
20
target/arm/ptw.c | 10 ++++++++--
9
1 file changed, 169 insertions(+), 52 deletions(-)
21
1 file changed, 8 insertions(+), 2 deletions(-)
10
22
11
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
23
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
12
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/intc/armv7m_nvic.c
25
--- a/target/arm/ptw.c
14
+++ b/hw/intc/armv7m_nvic.c
26
+++ b/target/arm/ptw.c
15
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
27
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_nofwb(uint64_t hcr,
16
val = cpu->env.v7m.ccr[attrs.secure];
28
{
17
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
29
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
18
return val;
30
19
- case 0xd24: /* System Handler Status. */
31
- s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
20
+ case 0xd24: /* System Handler Control and State (SHCSR) */
32
+ if (s2.is_s2_format) {
21
val = 0;
33
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
22
- if (s->vectors[ARMV7M_EXCP_MEM].active) {
34
+ } else {
23
- val |= (1 << 0);
35
+ s2_mair_attrs = s2.attrs;
24
- }
36
+ }
25
- if (s->vectors[ARMV7M_EXCP_BUS].active) {
37
26
- val |= (1 << 1);
38
s1lo = extract32(s1.attrs, 0, 4);
27
- }
39
s2lo = extract32(s2_mair_attrs, 0, 4);
28
- if (s->vectors[ARMV7M_EXCP_USAGE].active) {
40
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
29
- val |= (1 << 3);
41
*/
30
+ if (attrs.secure) {
42
static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
31
+ if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
43
{
32
+ val |= (1 << 0);
44
+ assert(s2.is_s2_format && !s1.is_s2_format);
33
+ }
34
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].active) {
35
+ val |= (1 << 2);
36
+ }
37
+ if (s->sec_vectors[ARMV7M_EXCP_USAGE].active) {
38
+ val |= (1 << 3);
39
+ }
40
+ if (s->sec_vectors[ARMV7M_EXCP_SVC].active) {
41
+ val |= (1 << 7);
42
+ }
43
+ if (s->sec_vectors[ARMV7M_EXCP_PENDSV].active) {
44
+ val |= (1 << 10);
45
+ }
46
+ if (s->sec_vectors[ARMV7M_EXCP_SYSTICK].active) {
47
+ val |= (1 << 11);
48
+ }
49
+ if (s->sec_vectors[ARMV7M_EXCP_USAGE].pending) {
50
+ val |= (1 << 12);
51
+ }
52
+ if (s->sec_vectors[ARMV7M_EXCP_MEM].pending) {
53
+ val |= (1 << 13);
54
+ }
55
+ if (s->sec_vectors[ARMV7M_EXCP_SVC].pending) {
56
+ val |= (1 << 15);
57
+ }
58
+ if (s->sec_vectors[ARMV7M_EXCP_MEM].enabled) {
59
+ val |= (1 << 16);
60
+ }
61
+ if (s->sec_vectors[ARMV7M_EXCP_USAGE].enabled) {
62
+ val |= (1 << 18);
63
+ }
64
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].pending) {
65
+ val |= (1 << 21);
66
+ }
67
+ /* SecureFault is not banked but is always RAZ/WI to NS */
68
+ if (s->vectors[ARMV7M_EXCP_SECURE].active) {
69
+ val |= (1 << 4);
70
+ }
71
+ if (s->vectors[ARMV7M_EXCP_SECURE].enabled) {
72
+ val |= (1 << 19);
73
+ }
74
+ if (s->vectors[ARMV7M_EXCP_SECURE].pending) {
75
+ val |= (1 << 20);
76
+ }
77
+ } else {
78
+ if (s->vectors[ARMV7M_EXCP_MEM].active) {
79
+ val |= (1 << 0);
80
+ }
81
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
82
+ /* HARDFAULTACT, HARDFAULTPENDED not present in v7M */
83
+ if (s->vectors[ARMV7M_EXCP_HARD].active) {
84
+ val |= (1 << 2);
85
+ }
86
+ if (s->vectors[ARMV7M_EXCP_HARD].pending) {
87
+ val |= (1 << 21);
88
+ }
89
+ }
90
+ if (s->vectors[ARMV7M_EXCP_USAGE].active) {
91
+ val |= (1 << 3);
92
+ }
93
+ if (s->vectors[ARMV7M_EXCP_SVC].active) {
94
+ val |= (1 << 7);
95
+ }
96
+ if (s->vectors[ARMV7M_EXCP_PENDSV].active) {
97
+ val |= (1 << 10);
98
+ }
99
+ if (s->vectors[ARMV7M_EXCP_SYSTICK].active) {
100
+ val |= (1 << 11);
101
+ }
102
+ if (s->vectors[ARMV7M_EXCP_USAGE].pending) {
103
+ val |= (1 << 12);
104
+ }
105
+ if (s->vectors[ARMV7M_EXCP_MEM].pending) {
106
+ val |= (1 << 13);
107
+ }
108
+ if (s->vectors[ARMV7M_EXCP_SVC].pending) {
109
+ val |= (1 << 15);
110
+ }
111
+ if (s->vectors[ARMV7M_EXCP_MEM].enabled) {
112
+ val |= (1 << 16);
113
+ }
114
+ if (s->vectors[ARMV7M_EXCP_USAGE].enabled) {
115
+ val |= (1 << 18);
116
+ }
117
}
118
- if (s->vectors[ARMV7M_EXCP_SVC].active) {
119
- val |= (1 << 7);
120
+ if (attrs.secure || (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
121
+ if (s->vectors[ARMV7M_EXCP_BUS].active) {
122
+ val |= (1 << 1);
123
+ }
124
+ if (s->vectors[ARMV7M_EXCP_BUS].pending) {
125
+ val |= (1 << 14);
126
+ }
127
+ if (s->vectors[ARMV7M_EXCP_BUS].enabled) {
128
+ val |= (1 << 17);
129
+ }
130
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8) &&
131
+ s->vectors[ARMV7M_EXCP_NMI].active) {
132
+ /* NMIACT is not present in v7M */
133
+ val |= (1 << 5);
134
+ }
135
}
136
+
45
+
137
+ /* TODO: this is RAZ/WI from NS if DEMCR.SDME is set */
46
switch (s2.attrs) {
138
if (s->vectors[ARMV7M_EXCP_DEBUG].active) {
47
case 7:
139
val |= (1 << 8);
48
/* Use stage 1 attributes */
140
}
49
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
141
- if (s->vectors[ARMV7M_EXCP_PENDSV].active) {
50
ARMCacheAttrs ret;
142
- val |= (1 << 10);
51
bool tagged = false;
143
- }
52
144
- if (s->vectors[ARMV7M_EXCP_SYSTICK].active) {
53
- assert(s2.is_s2_format && !s1.is_s2_format);
145
- val |= (1 << 11);
54
+ assert(!s1.is_s2_format);
146
- }
55
ret.is_s2_format = false;
147
- if (s->vectors[ARMV7M_EXCP_USAGE].pending) {
56
148
- val |= (1 << 12);
57
if (s1.attrs == 0xf0) {
149
- }
150
- if (s->vectors[ARMV7M_EXCP_MEM].pending) {
151
- val |= (1 << 13);
152
- }
153
- if (s->vectors[ARMV7M_EXCP_BUS].pending) {
154
- val |= (1 << 14);
155
- }
156
- if (s->vectors[ARMV7M_EXCP_SVC].pending) {
157
- val |= (1 << 15);
158
- }
159
- if (s->vectors[ARMV7M_EXCP_MEM].enabled) {
160
- val |= (1 << 16);
161
- }
162
- if (s->vectors[ARMV7M_EXCP_BUS].enabled) {
163
- val |= (1 << 17);
164
- }
165
- if (s->vectors[ARMV7M_EXCP_USAGE].enabled) {
166
- val |= (1 << 18);
167
- }
168
return val;
169
case 0xd28: /* Configurable Fault Status. */
170
/* The BFSR bits [15:8] are shared between security states
171
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
172
173
cpu->env.v7m.ccr[attrs.secure] = value;
174
break;
175
- case 0xd24: /* System Handler Control. */
176
- s->vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
177
- s->vectors[ARMV7M_EXCP_BUS].active = (value & (1 << 1)) != 0;
178
- s->vectors[ARMV7M_EXCP_USAGE].active = (value & (1 << 3)) != 0;
179
- s->vectors[ARMV7M_EXCP_SVC].active = (value & (1 << 7)) != 0;
180
+ case 0xd24: /* System Handler Control and State (SHCSR) */
181
+ if (attrs.secure) {
182
+ s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
183
+ /* Secure HardFault active bit cannot be written */
184
+ s->sec_vectors[ARMV7M_EXCP_USAGE].active = (value & (1 << 3)) != 0;
185
+ s->sec_vectors[ARMV7M_EXCP_SVC].active = (value & (1 << 7)) != 0;
186
+ s->sec_vectors[ARMV7M_EXCP_PENDSV].active =
187
+ (value & (1 << 10)) != 0;
188
+ s->sec_vectors[ARMV7M_EXCP_SYSTICK].active =
189
+ (value & (1 << 11)) != 0;
190
+ s->sec_vectors[ARMV7M_EXCP_USAGE].pending =
191
+ (value & (1 << 12)) != 0;
192
+ s->sec_vectors[ARMV7M_EXCP_MEM].pending = (value & (1 << 13)) != 0;
193
+ s->sec_vectors[ARMV7M_EXCP_SVC].pending = (value & (1 << 15)) != 0;
194
+ s->sec_vectors[ARMV7M_EXCP_MEM].enabled = (value & (1 << 16)) != 0;
195
+ s->sec_vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
196
+ s->sec_vectors[ARMV7M_EXCP_USAGE].enabled =
197
+ (value & (1 << 18)) != 0;
198
+ /* SecureFault not banked, but RAZ/WI to NS */
199
+ s->vectors[ARMV7M_EXCP_SECURE].active = (value & (1 << 4)) != 0;
200
+ s->vectors[ARMV7M_EXCP_SECURE].enabled = (value & (1 << 19)) != 0;
201
+ s->vectors[ARMV7M_EXCP_SECURE].pending = (value & (1 << 20)) != 0;
202
+ } else {
203
+ s->vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
204
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
205
+ /* HARDFAULTPENDED is not present in v7M */
206
+ s->vectors[ARMV7M_EXCP_HARD].pending = (value & (1 << 21)) != 0;
207
+ }
208
+ s->vectors[ARMV7M_EXCP_USAGE].active = (value & (1 << 3)) != 0;
209
+ s->vectors[ARMV7M_EXCP_SVC].active = (value & (1 << 7)) != 0;
210
+ s->vectors[ARMV7M_EXCP_PENDSV].active = (value & (1 << 10)) != 0;
211
+ s->vectors[ARMV7M_EXCP_SYSTICK].active = (value & (1 << 11)) != 0;
212
+ s->vectors[ARMV7M_EXCP_USAGE].pending = (value & (1 << 12)) != 0;
213
+ s->vectors[ARMV7M_EXCP_MEM].pending = (value & (1 << 13)) != 0;
214
+ s->vectors[ARMV7M_EXCP_SVC].pending = (value & (1 << 15)) != 0;
215
+ s->vectors[ARMV7M_EXCP_MEM].enabled = (value & (1 << 16)) != 0;
216
+ s->vectors[ARMV7M_EXCP_USAGE].enabled = (value & (1 << 18)) != 0;
217
+ }
218
+ if (attrs.secure || (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
219
+ s->vectors[ARMV7M_EXCP_BUS].active = (value & (1 << 1)) != 0;
220
+ s->vectors[ARMV7M_EXCP_BUS].pending = (value & (1 << 14)) != 0;
221
+ s->vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
222
+ }
223
+ /* NMIACT can only be written if the write is of a zero, with
224
+ * BFHFNMINS 1, and by the CPU in secure state via the NS alias.
225
+ */
226
+ if (!attrs.secure && cpu->env.v7m.secure &&
227
+ (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
228
+ (value & (1 << 5)) == 0) {
229
+ s->vectors[ARMV7M_EXCP_NMI].active = 0;
230
+ }
231
+ /* HARDFAULTACT can only be written if the write is of a zero
232
+ * to the non-secure HardFault state by the CPU in secure state.
233
+ * The only case where we can be targeting the non-secure HF state
234
+ * when in secure state is if this is a write via the NS alias
235
+ * and BFHFNMINS is 1.
236
+ */
237
+ if (!attrs.secure && cpu->env.v7m.secure &&
238
+ (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
239
+ (value & (1 << 2)) == 0) {
240
+ s->vectors[ARMV7M_EXCP_HARD].active = 0;
241
+ }
242
+
243
+ /* TODO: this is RAZ/WI from NS if DEMCR.SDME is set */
244
s->vectors[ARMV7M_EXCP_DEBUG].active = (value & (1 << 8)) != 0;
245
- s->vectors[ARMV7M_EXCP_PENDSV].active = (value & (1 << 10)) != 0;
246
- s->vectors[ARMV7M_EXCP_SYSTICK].active = (value & (1 << 11)) != 0;
247
- s->vectors[ARMV7M_EXCP_USAGE].pending = (value & (1 << 12)) != 0;
248
- s->vectors[ARMV7M_EXCP_MEM].pending = (value & (1 << 13)) != 0;
249
- s->vectors[ARMV7M_EXCP_BUS].pending = (value & (1 << 14)) != 0;
250
- s->vectors[ARMV7M_EXCP_SVC].pending = (value & (1 << 15)) != 0;
251
- s->vectors[ARMV7M_EXCP_MEM].enabled = (value & (1 << 16)) != 0;
252
- s->vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
253
- s->vectors[ARMV7M_EXCP_USAGE].enabled = (value & (1 << 18)) != 0;
254
nvic_irq_update(s);
255
break;
256
case 0xd28: /* Configurable Fault Status. */
257
--
58
--
258
2.7.4
59
2.25.1
259
60
260
61
diff view generated by jsdifflib
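A small sketch of the dispatch described above, with hypothetical helper
names rather than the QEMU functions: VMSA stage 2 descriptors carry 4-bit
attributes that must be expanded to the 8-bit MAIR encoding, while PMSAv8
(v8R) stage 2 already uses MAIR format and can be used as-is. The expansion
body here is a placeholder, not the real conversion.

#include <stdbool.h>
#include <stdint.h>

static uint8_t toy_expand_s2_attrs(uint8_t s2attrs4)
{
    /* placeholder expansion; the real conversion is more involved */
    return (uint8_t)((s2attrs4 << 4) | s2attrs4);
}

static uint8_t toy_s2_mair_attrs(bool is_s2_format, uint8_t attrs)
{
    /* Only VMSA-format stage 2 attributes need converting to MAIR form. */
    return is_s2_format ? toy_expand_s2_attrs(attrs) : attrs;
}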
1
Now that we have a banked FAULTMASK register and banked exceptions,
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
we can implement the correct check in cpu_mmu_index() for whether
3
the MPU_CTRL.HFNMIENA bit's effect should apply. This bit causes
4
handlers which have requested a negative execution priority to run
5
with the MPU disabled. In v8M the test has to check this for the
6
current security state and so takes account of banking.
7
2
3
ARMv8-R AArch32 CPUs behave as if TTBCR.EAE is always 1 even
4
tough they don't have the TTBCR register.
5
See ARM Architecture Reference Manual Supplement - ARMv8, for the ARMv8-R
6
AArch32 architecture profile Version:A.c section C1.2.
7
8
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20221206102504.165775-5-tobias.roehmel@rwth-aachen.de
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 1505240046-11454-17-git-send-email-peter.maydell@linaro.org
11
---
12
---
12
target/arm/cpu.h | 21 ++++++++++++++++-----
13
target/arm/internals.h | 4 ++++
13
hw/intc/armv7m_nvic.c | 29 +++++++++++++++++++++++++++++
14
target/arm/debug_helper.c | 3 +++
14
2 files changed, 45 insertions(+), 5 deletions(-)
15
target/arm/tlb_helper.c | 4 ++++
16
3 files changed, 11 insertions(+)
15
17
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
20
--- a/target/arm/internals.h
19
+++ b/target/arm/cpu.h
21
+++ b/target/arm/internals.h
20
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq);
22
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu);
21
* (v8M ARM ARM I_PKLD.)
23
static inline bool extended_addresses_enabled(CPUARMState *env)
22
*/
24
{
23
int armv7m_nvic_raw_execution_priority(void *opaque);
25
uint64_t tcr = env->cp15.tcr_el[arm_is_secure(env) ? 3 : 1];
24
+/**
26
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
25
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
27
+ arm_feature(env, ARM_FEATURE_V8)) {
26
+ * priority is negative for the specified security state.
27
+ * @opaque: the NVIC
28
+ * @secure: the security state to test
29
+ * This corresponds to the pseudocode IsReqExecPriNeg().
30
+ */
31
+#ifndef CONFIG_USER_ONLY
32
+bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
33
+#else
34
+static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
35
+{
36
+ return false;
37
+}
38
+#endif
39
40
/* Interface for defining coprocessor registers.
41
* Registers are defined in tables of arm_cp_reginfo structs
42
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
43
if (arm_feature(env, ARM_FEATURE_M)) {
44
ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
45
46
- /* Execution priority is negative if FAULTMASK is set or
47
- * we're in a HardFault or NMI handler.
48
- */
49
- if ((env->v7m.exception > 0 && env->v7m.exception <= 3)
50
- || env->v7m.faultmask[env->v7m.secure]) {
51
+ if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
52
mmu_idx = ARMMMUIdx_MNegPri;
53
}
54
55
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
56
index XXXXXXX..XXXXXXX 100644
57
--- a/hw/intc/armv7m_nvic.c
58
+++ b/hw/intc/armv7m_nvic.c
59
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
60
return MIN(running, s->exception_prio);
61
}
62
63
+bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
64
+{
65
+ /* Return true if the requested execution priority is negative
66
+ * for the specified security state, ie that security state
67
+ * has an active NMI or HardFault or has set its FAULTMASK.
68
+ * Note that this is not the same as whether the execution
69
+ * priority is actually negative (for instance AIRCR.PRIS may
70
+ * mean we don't allow FAULTMASK_NS to actually make the execution
71
+ * priority negative). Compare pseudocode IsReqExcPriNeg().
72
+ */
73
+ NVICState *s = opaque;
74
+
75
+ if (s->cpu->env.v7m.faultmask[secure]) {
76
+ return true;
28
+ return true;
77
+ }
29
+ }
78
+
30
return arm_el_is_aa64(env, 1) ||
79
+ if (secure ? s->sec_vectors[ARMV7M_EXCP_HARD].active :
31
(arm_feature(env, ARM_FEATURE_LPAE) && (tcr & TTBCR_EAE));
80
+ s->vectors[ARMV7M_EXCP_HARD].active) {
32
}
33
diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/debug_helper.c
36
+++ b/target/arm/debug_helper.c
37
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_debug_exception_fsr(CPUARMState *env)
38
39
if (target_el == 2 || arm_el_is_aa64(env, target_el)) {
40
using_lpae = true;
41
+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&
42
+ arm_feature(env, ARM_FEATURE_V8)) {
43
+ using_lpae = true;
44
} else {
45
if (arm_feature(env, ARM_FEATURE_LPAE) &&
46
(env->cp15.tcr_el[target_el] & TTBCR_EAE)) {
47
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/tlb_helper.c
50
+++ b/target/arm/tlb_helper.c
51
@@ -XXX,XX +XXX,XX @@ bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
52
if (el == 2 || arm_el_is_aa64(env, el)) {
53
return true;
54
}
55
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
56
+ arm_feature(env, ARM_FEATURE_V8)) {
81
+ return true;
57
+ return true;
82
+ }
58
+ }
83
+
59
if (arm_feature(env, ARM_FEATURE_LPAE)
84
+ if (s->vectors[ARMV7M_EXCP_NMI].active &&
60
&& (regime_tcr(env, mmu_idx) & TTBCR_EAE)) {
85
+ exc_targets_secure(s, ARMV7M_EXCP_NMI) == secure) {
61
return true;
86
+ return true;
87
+ }
88
+
89
+ return false;
90
+}
91
+
92
bool armv7m_nvic_can_take_pending_exception(void *opaque)
93
{
94
NVICState *s = opaque;
95
--
62
--
96
2.7.4
63
2.25.1
97
64
98
65
diff view generated by jsdifflib
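A minimal model of the check added above, under assumed helper names: a CPU
that is both PMSA and v8 (i.e. ARMv8-R AArch32) always uses the
long-descriptor (LPAE) translation table format, even though it has no
TTBCR register to set EAE in.

#include <stdbool.h>
#include <stdint.h>

#define TOY_TTBCR_EAE (1u << 31)   /* EAE is bit 31 of TTBCR */

static bool toy_uses_lpae_format(bool has_pmsa, bool is_v8, uint32_t ttbcr)
{
    if (has_pmsa && is_v8) {
        return true;   /* ARMv8-R AArch32 behaves as if EAE == 1 */
    }
    return (ttbcr & TOY_TTBCR_EAE) != 0;
}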
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Smartfusion2 SoC has hardened Microcontroller subsystem
3
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
4
and flash based FPGA fabric. This patch adds support for
4
Message-id: 20221206102504.165775-6-tobias.roehmel@rwth-aachen.de
5
Microcontroller subsystem in the SoC.
6
7
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
8
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20170920201737.25723-5-f4bug@amsat.org
11
[PMD: drop cpu_model to directly use cpu type, check m3clk non null]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
6
---
14
hw/arm/Makefile.objs | 1 +
7
target/arm/cpu.h | 6 +
15
include/hw/arm/msf2-soc.h | 67 +++++++++++
8
target/arm/cpu.c | 28 +++-
16
hw/arm/msf2-soc.c | 238 ++++++++++++++++++++++++++++++++++++++++
9
target/arm/helper.c | 302 +++++++++++++++++++++++++++++++++++++++++++
17
default-configs/arm-softmmu.mak | 1 +
10
target/arm/machine.c | 28 ++++
18
4 files changed, 307 insertions(+)
11
4 files changed, 360 insertions(+), 4 deletions(-)
19
create mode 100644 include/hw/arm/msf2-soc.h
20
create mode 100644 hw/arm/msf2-soc.c
21
12
22
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/arm/Makefile.objs
15
--- a/target/arm/cpu.h
25
+++ b/hw/arm/Makefile.objs
16
+++ b/target/arm/cpu.h
26
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX31) += fsl-imx31.o kzm.o
17
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
27
obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
18
};
28
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
19
uint64_t sctlr_el[4];
29
obj-$(CONFIG_MPS2) += mps2.o
20
};
30
+obj-$(CONFIG_MSF2) += msf2-soc.o
21
+ uint64_t vsctlr; /* Virtualization System control register. */
31
diff --git a/include/hw/arm/msf2-soc.h b/include/hw/arm/msf2-soc.h
22
uint64_t cpacr_el1; /* Architectural feature access control register */
32
new file mode 100644
23
uint64_t cptr_el[4]; /* ARMv8 feature trap registers */
33
index XXXXXXX..XXXXXXX
24
uint32_t c1_xscaleauxcr; /* XScale auxiliary control register. */
34
--- /dev/null
25
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
35
+++ b/include/hw/arm/msf2-soc.h
26
*/
36
@@ -XXX,XX +XXX,XX @@
27
uint32_t *rbar[M_REG_NUM_BANKS];
37
+/*
28
uint32_t *rlar[M_REG_NUM_BANKS];
38
+ * Microsemi Smartfusion2 SoC
29
+ uint32_t *hprbar;
39
+ *
30
+ uint32_t *hprlar;
40
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
31
uint32_t mair0[M_REG_NUM_BANKS];
41
+ *
32
uint32_t mair1[M_REG_NUM_BANKS];
42
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
33
+ uint32_t hprselr;
43
+ * of this software and associated documentation files (the "Software"), to deal
34
} pmsav8;
44
+ * in the Software without restriction, including without limitation the rights
35
45
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
36
/* v8M SAU */
46
+ * copies of the Software, and to permit persons to whom the Software is
37
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
47
+ * furnished to do so, subject to the following conditions:
38
bool has_mpu;
48
+ *
39
/* PMSAv7 MPU number of supported regions */
49
+ * The above copyright notice and this permission notice shall be included in
40
uint32_t pmsav7_dregion;
50
+ * all copies or substantial portions of the Software.
41
+ /* PMSAv8 MPU number of supported hyp regions */
51
+ *
42
+ uint32_t pmsav8r_hdregion;
52
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
43
/* v8M SAU number of supported regions */
53
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
44
uint32_t sau_sregion;
54
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
45
55
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
56
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
47
index XXXXXXX..XXXXXXX 100644
57
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
48
--- a/target/arm/cpu.c
58
+ * THE SOFTWARE.
49
+++ b/target/arm/cpu.c
59
+ */
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset_hold(Object *obj)
60
+
51
sizeof(*env->pmsav7.dracr) * cpu->pmsav7_dregion);
61
+#ifndef HW_ARM_MSF2_SOC_H
52
}
62
+#define HW_ARM_MSF2_SOC_H
53
}
63
+
54
+
64
+#include "hw/arm/armv7m.h"
55
+ if (cpu->pmsav8r_hdregion > 0) {
65
+#include "hw/timer/mss-timer.h"
56
+ memset(env->pmsav8.hprbar, 0,
66
+#include "hw/misc/msf2-sysreg.h"
57
+ sizeof(*env->pmsav8.hprbar) * cpu->pmsav8r_hdregion);
67
+#include "hw/ssi/mss-spi.h"
58
+ memset(env->pmsav8.hprlar, 0,
68
+
59
+ sizeof(*env->pmsav8.hprlar) * cpu->pmsav8r_hdregion);
69
+#define TYPE_MSF2_SOC "msf2-soc"
60
+ }
70
+#define MSF2_SOC(obj) OBJECT_CHECK(MSF2State, (obj), TYPE_MSF2_SOC)
61
+
71
+
62
env->pmsav7.rnr[M_REG_NS] = 0;
72
+#define MSF2_NUM_SPIS 2
63
env->pmsav7.rnr[M_REG_S] = 0;
73
+#define MSF2_NUM_UARTS 2
64
env->pmsav8.mair0[M_REG_NS] = 0;
74
+
65
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
75
+/*
66
/* MPU can be configured out of a PMSA CPU either by setting has-mpu
76
+ * System timer consists of two programmable 32-bit
67
* to false or by setting pmsav7-dregion to 0.
77
+ * decrementing counters that generate individual interrupts to
68
*/
78
+ * the Cortex-M3 processor
69
- if (!cpu->has_mpu) {
79
+ */
70
- cpu->pmsav7_dregion = 0;
80
+#define MSF2_NUM_TIMERS 2
71
- }
81
+
72
- if (cpu->pmsav7_dregion == 0) {
82
+typedef struct MSF2State {
73
+ if (!cpu->has_mpu || cpu->pmsav7_dregion == 0) {
83
+ /*< private >*/
74
cpu->has_mpu = false;
84
+ SysBusDevice parent_obj;
75
+ cpu->pmsav7_dregion = 0;
85
+ /*< public >*/
76
+ cpu->pmsav8r_hdregion = 0;
86
+
77
}
87
+ ARMv7MState armv7m;
78
88
+
79
if (arm_feature(env, ARM_FEATURE_PMSA) &&
89
+ char *cpu_type;
80
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
90
+ char *part_name;
81
env->pmsav7.dracr = g_new0(uint32_t, nr);
91
+ uint64_t envm_size;
82
}
92
+ uint64_t esram_size;
83
}
93
+
84
+
94
+ uint32_t m3clk;
85
+ if (cpu->pmsav8r_hdregion > 0xff) {
95
+ uint8_t apb0div;
86
+ error_setg(errp, "PMSAv8 MPU EL2 #regions invalid %" PRIu32,
96
+ uint8_t apb1div;
87
+ cpu->pmsav8r_hdregion);
97
+
88
+ return;
98
+ MSF2SysregState sysreg;
89
+ }
99
+ MSSTimerState timer;
90
+
100
+ MSSSpiState spi[MSF2_NUM_SPIS];
91
+ if (cpu->pmsav8r_hdregion) {
101
+} MSF2State;
92
+ env->pmsav8.hprbar = g_new0(uint32_t,
102
+
93
+ cpu->pmsav8r_hdregion);
103
+#endif
94
+ env->pmsav8.hprlar = g_new0(uint32_t,
104
diff --git a/hw/arm/msf2-soc.c b/hw/arm/msf2-soc.c
95
+ cpu->pmsav8r_hdregion);
105
new file mode 100644
96
+ }
106
index XXXXXXX..XXXXXXX
97
}
107
--- /dev/null
98
108
+++ b/hw/arm/msf2-soc.c
99
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
109
@@ -XXX,XX +XXX,XX @@
100
diff --git a/target/arm/helper.c b/target/arm/helper.c
110
+/*
101
index XXXXXXX..XXXXXXX 100644
111
+ * SmartFusion2 SoC emulation.
102
--- a/target/arm/helper.c
112
+ *
103
+++ b/target/arm/helper.c
113
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
104
@@ -XXX,XX +XXX,XX @@ static void pmsav7_rgnr_write(CPUARMState *env, const ARMCPRegInfo *ri,
114
+ *
105
raw_write(env, ri, value);
115
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
106
}
116
+ * of this software and associated documentation files (the "Software"), to deal
107
117
+ * in the Software without restriction, including without limitation the rights
108
+static void prbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
118
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
109
+ uint64_t value)
119
+ * copies of the Software, and to permit persons to whom the Software is
110
+{
120
+ * furnished to do so, subject to the following conditions:
111
+ ARMCPU *cpu = env_archcpu(env);
121
+ *
112
+
122
+ * The above copyright notice and this permission notice shall be included in
113
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
123
+ * all copies or substantial portions of the Software.
114
+ env->pmsav8.rbar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]] = value;
124
+ *
115
+}
125
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
116
+
126
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
117
+static uint64_t prbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
127
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
118
+{
128
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
119
+ return env->pmsav8.rbar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]];
129
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
120
+}
130
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
121
+
131
+ * THE SOFTWARE.
122
+static void prlar_write(CPUARMState *env, const ARMCPRegInfo *ri,
132
+ */
123
+ uint64_t value)
133
+
124
+{
134
+#include "qemu/osdep.h"
125
+ ARMCPU *cpu = env_archcpu(env);
135
+#include "qapi/error.h"
126
+
136
+#include "qemu-common.h"
127
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
137
+#include "hw/arm/arm.h"
128
+ env->pmsav8.rlar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]] = value;
138
+#include "exec/address-spaces.h"
129
+}
139
+#include "hw/char/serial.h"
130
+
140
+#include "hw/boards.h"
131
+static uint64_t prlar_read(CPUARMState *env, const ARMCPRegInfo *ri)
141
+#include "sysemu/block-backend.h"
132
+{
142
+#include "qemu/cutils.h"
133
+ return env->pmsav8.rlar[M_REG_NS][env->pmsav7.rnr[M_REG_NS]];
143
+#include "hw/arm/msf2-soc.h"
134
+}
144
+#include "hw/misc/unimp.h"
135
+
145
+
136
+static void prselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
146
+#define MSF2_TIMER_BASE 0x40004000
137
+ uint64_t value)
147
+#define MSF2_SYSREG_BASE 0x40038000
138
+{
148
+
139
+ ARMCPU *cpu = env_archcpu(env);
149
+#define ENVM_BASE_ADDRESS 0x60000000
140
+
150
+
151
+#define SRAM_BASE_ADDRESS 0x20000000
152
+
153
+#define MSF2_ENVM_MAX_SIZE (512 * K_BYTE)
154
+
155
+/*
156
+ * eSRAM max size is 80k without SECDED (single error correction and
157
+ * dual error detection) feature and 64k with SECDED.
158
+ * We do not support SECDED now.
159
+ */
160
+#define MSF2_ESRAM_MAX_SIZE (80 * K_BYTE)
161
+
162
+static const uint32_t spi_addr[MSF2_NUM_SPIS] = { 0x40001000, 0x40011000 };
163
+static const uint32_t uart_addr[MSF2_NUM_UARTS] = { 0x40000000, 0x40010000 };
164
+
165
+static const int spi_irq[MSF2_NUM_SPIS] = { 2, 3 };
166
+static const int uart_irq[MSF2_NUM_UARTS] = { 10, 11 };
167
+static const int timer_irq[MSF2_NUM_TIMERS] = { 14, 15 };
168
+
169
+static void m2sxxx_soc_initfn(Object *obj)
170
+{
171
+ MSF2State *s = MSF2_SOC(obj);
172
+ int i;
173
+
174
+ object_initialize(&s->armv7m, sizeof(s->armv7m), TYPE_ARMV7M);
175
+ qdev_set_parent_bus(DEVICE(&s->armv7m), sysbus_get_default());
176
+
177
+ object_initialize(&s->sysreg, sizeof(s->sysreg), TYPE_MSF2_SYSREG);
178
+ qdev_set_parent_bus(DEVICE(&s->sysreg), sysbus_get_default());
179
+
180
+ object_initialize(&s->timer, sizeof(s->timer), TYPE_MSS_TIMER);
181
+ qdev_set_parent_bus(DEVICE(&s->timer), sysbus_get_default());
182
+
183
+ for (i = 0; i < MSF2_NUM_SPIS; i++) {
184
+ object_initialize(&s->spi[i], sizeof(s->spi[i]),
185
+ TYPE_MSS_SPI);
186
+ qdev_set_parent_bus(DEVICE(&s->spi[i]), sysbus_get_default());
187
+ }
188
+}
189
+
190
+static void m2sxxx_soc_realize(DeviceState *dev_soc, Error **errp)
191
+{
192
+ MSF2State *s = MSF2_SOC(dev_soc);
193
+ DeviceState *dev, *armv7m;
194
+ SysBusDevice *busdev;
195
+ Error *err = NULL;
196
+ int i;
197
+
198
+ MemoryRegion *system_memory = get_system_memory();
199
+ MemoryRegion *nvm = g_new(MemoryRegion, 1);
200
+ MemoryRegion *nvm_alias = g_new(MemoryRegion, 1);
201
+ MemoryRegion *sram = g_new(MemoryRegion, 1);
202
+
203
+ memory_region_init_rom(nvm, NULL, "MSF2.eNVM", s->envm_size,
204
+ &error_fatal);
205
+ /*
141
+ /*
206
+ * On power-on, the eNVM region 0x60000000 is automatically
142
+ * Ignore writes that would select an unimplemented region.
207
+ * remapped to the Cortex-M3 processor executable region
143
+ * This is architecturally UNPREDICTABLE.
208
+ * start address (0x0). We do not support remapping other eNVM,
209
+ * eSRAM and DDR regions by the guest (via Sysreg) currently.
210
+ */
144
+ */
211
+ memory_region_init_alias(nvm_alias, NULL, "MSF2.eNVM",
145
+ if (value >= cpu->pmsav7_dregion) {
212
+ nvm, 0, s->envm_size);
213
+
214
+ memory_region_add_subregion(system_memory, ENVM_BASE_ADDRESS, nvm);
215
+ memory_region_add_subregion(system_memory, 0, nvm_alias);
216
+
217
+ memory_region_init_ram(sram, NULL, "MSF2.eSRAM", s->esram_size,
218
+ &error_fatal);
219
+ memory_region_add_subregion(system_memory, SRAM_BASE_ADDRESS, sram);
220
+
221
+ armv7m = DEVICE(&s->armv7m);
222
+ qdev_prop_set_uint32(armv7m, "num-irq", 81);
223
+ qdev_prop_set_string(armv7m, "cpu-type", s->cpu_type);
224
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(get_system_memory()),
225
+ "memory", &error_abort);
226
+ object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
227
+ if (err != NULL) {
228
+ error_propagate(errp, err);
229
+ return;
146
+ return;
230
+ }
147
+ }
231
+
148
+
232
+ if (!s->m3clk) {
149
+ env->pmsav7.rnr[M_REG_NS] = value;
233
+ error_setg(errp, "Invalid m3clk value");
150
+}
234
+ error_append_hint(errp, "m3clk can not be zero\n");
151
+
152
+static void hprbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
153
+ uint64_t value)
154
+{
155
+ ARMCPU *cpu = env_archcpu(env);
156
+
157
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
158
+ env->pmsav8.hprbar[env->pmsav8.hprselr] = value;
159
+}
160
+
161
+static uint64_t hprbar_read(CPUARMState *env, const ARMCPRegInfo *ri)
162
+{
163
+ return env->pmsav8.hprbar[env->pmsav8.hprselr];
164
+}
165
+
166
+static void hprlar_write(CPUARMState *env, const ARMCPRegInfo *ri,
167
+ uint64_t value)
168
+{
169
+ ARMCPU *cpu = env_archcpu(env);
170
+
171
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
172
+ env->pmsav8.hprlar[env->pmsav8.hprselr] = value;
173
+}
174
+
175
+static uint64_t hprlar_read(CPUARMState *env, const ARMCPRegInfo *ri)
176
+{
177
+ return env->pmsav8.hprlar[env->pmsav8.hprselr];
178
+}
179
+
180
+static void hprenr_write(CPUARMState *env, const ARMCPRegInfo *ri,
181
+ uint64_t value)
182
+{
183
+ uint32_t n;
184
+ uint32_t bit;
185
+ ARMCPU *cpu = env_archcpu(env);
186
+
187
+ /* Ignore writes to unimplemented regions */
188
+ int rmax = MIN(cpu->pmsav8r_hdregion, 32);
189
+ value &= MAKE_64BIT_MASK(0, rmax);
190
+
191
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
192
+
193
+ /* Register alias is only valid for first 32 indexes */
194
+ for (n = 0; n < rmax; ++n) {
195
+ bit = extract32(value, n, 1);
196
+ env->pmsav8.hprlar[n] = deposit32(
197
+ env->pmsav8.hprlar[n], 0, 1, bit);
198
+ }
199
+}
200
+
201
+static uint64_t hprenr_read(CPUARMState *env, const ARMCPRegInfo *ri)
202
+{
203
+ uint32_t n;
204
+ uint32_t result = 0x0;
205
+ ARMCPU *cpu = env_archcpu(env);
206
+
207
+ /* Register alias is only valid for first 32 indexes */
208
+ for (n = 0; n < MIN(cpu->pmsav8r_hdregion, 32); ++n) {
209
+ if (env->pmsav8.hprlar[n] & 0x1) {
210
+ result |= (0x1 << n);
211
+ }
212
+ }
213
+ return result;
214
+}
215
+
216
+static void hprselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
217
+ uint64_t value)
218
+{
219
+ ARMCPU *cpu = env_archcpu(env);
220
+
221
+ /*
222
+ * Ignore writes that would select an unimplemented region.
223
+ * This is architecturally UNPREDICTABLE.
224
+ */
225
+ if (value >= cpu->pmsav8r_hdregion) {
235
+ return;
226
+ return;
236
+ }
227
+ }
237
+ system_clock_scale = NANOSECONDS_PER_SECOND / s->m3clk;
228
+
238
+
229
+ env->pmsav8.hprselr = value;
239
+ for (i = 0; i < MSF2_NUM_UARTS; i++) {
230
+}
240
+ if (serial_hds[i]) {
231
+
241
+ serial_mm_init(get_system_memory(), uart_addr[i], 2,
232
+static void pmsav8r_regn_write(CPUARMState *env, const ARMCPRegInfo *ri,
242
+ qdev_get_gpio_in(armv7m, uart_irq[i]),
233
+ uint64_t value)
243
+ 115200, serial_hds[i], DEVICE_NATIVE_ENDIAN);
234
+{
244
+ }
235
+ ARMCPU *cpu = env_archcpu(env);
245
+ }
236
+ uint8_t index = (extract32(ri->opc0, 0, 1) << 4) |
246
+
237
+ (extract32(ri->crm, 0, 3) << 1) | extract32(ri->opc2, 2, 1);
247
+ dev = DEVICE(&s->timer);
238
+
248
+ /* APB0 clock is the timer input clock */
239
+ tlb_flush(CPU(cpu)); /* Mappings may have changed - purge! */
249
+ qdev_prop_set_uint32(dev, "clock-frequency", s->m3clk / s->apb0div);
240
+
250
+ object_property_set_bool(OBJECT(&s->timer), true, "realized", &err);
241
+ if (ri->opc1 & 4) {
251
+ if (err != NULL) {
242
+ if (index >= cpu->pmsav8r_hdregion) {
252
+ error_propagate(errp, err);
253
+ return;
254
+ }
255
+ busdev = SYS_BUS_DEVICE(dev);
256
+ sysbus_mmio_map(busdev, 0, MSF2_TIMER_BASE);
257
+ sysbus_connect_irq(busdev, 0,
258
+ qdev_get_gpio_in(armv7m, timer_irq[0]));
259
+ sysbus_connect_irq(busdev, 1,
260
+ qdev_get_gpio_in(armv7m, timer_irq[1]));
261
+
262
+ dev = DEVICE(&s->sysreg);
263
+ qdev_prop_set_uint32(dev, "apb0divisor", s->apb0div);
264
+ qdev_prop_set_uint32(dev, "apb1divisor", s->apb1div);
265
+ object_property_set_bool(OBJECT(&s->sysreg), true, "realized", &err);
266
+ if (err != NULL) {
267
+ error_propagate(errp, err);
268
+ return;
269
+ }
270
+ busdev = SYS_BUS_DEVICE(dev);
271
+ sysbus_mmio_map(busdev, 0, MSF2_SYSREG_BASE);
272
+
273
+ for (i = 0; i < MSF2_NUM_SPIS; i++) {
274
+ gchar *bus_name;
275
+
276
+ object_property_set_bool(OBJECT(&s->spi[i]), true, "realized", &err);
277
+ if (err != NULL) {
278
+ error_propagate(errp, err);
279
+ return;
243
+ return;
280
+ }
244
+ }
281
+
245
+ if (ri->opc2 & 0x1) {
282
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->spi[i]), 0, spi_addr[i]);
246
+ env->pmsav8.hprlar[index] = value;
283
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->spi[i]), 0,
247
+ } else {
284
+ qdev_get_gpio_in(armv7m, spi_irq[i]));
248
+ env->pmsav8.hprbar[index] = value;
285
+
249
+ }
286
+ /* Alias controller SPI bus to the SoC itself */
250
+ } else {
287
+ bus_name = g_strdup_printf("spi%d", i);
251
+ if (index >= cpu->pmsav7_dregion) {
288
+ object_property_add_alias(OBJECT(s), bus_name,
252
+ return;
289
+ OBJECT(&s->spi[i]), "spi",
253
+ }
290
+ &error_abort);
254
+ if (ri->opc2 & 0x1) {
291
+ g_free(bus_name);
255
+ env->pmsav8.rlar[M_REG_NS][index] = value;
292
+ }
256
+ } else {
293
+
257
+ env->pmsav8.rbar[M_REG_NS][index] = value;
294
+ /* Below devices are not modelled yet. */
258
+ }
295
+ create_unimplemented_device("i2c_0", 0x40002000, 0x1000);
259
+ }
296
+ create_unimplemented_device("dma", 0x40003000, 0x1000);
260
+}
297
+ create_unimplemented_device("watchdog", 0x40005000, 0x1000);
261
+
298
+ create_unimplemented_device("i2c_1", 0x40012000, 0x1000);
262
+static uint64_t pmsav8r_regn_read(CPUARMState *env, const ARMCPRegInfo *ri)
299
+ create_unimplemented_device("gpio", 0x40013000, 0x1000);
263
+{
300
+ create_unimplemented_device("hs-dma", 0x40014000, 0x1000);
264
+ ARMCPU *cpu = env_archcpu(env);
301
+ create_unimplemented_device("can", 0x40015000, 0x1000);
265
+ uint8_t index = (extract32(ri->opc0, 0, 1) << 4) |
302
+ create_unimplemented_device("rtc", 0x40017000, 0x1000);
266
+ (extract32(ri->crm, 0, 3) << 1) | extract32(ri->opc2, 2, 1);
303
+ create_unimplemented_device("apb_config", 0x40020000, 0x10000);
267
+
304
+ create_unimplemented_device("emac", 0x40041000, 0x1000);
268
+ if (ri->opc1 & 4) {
305
+ create_unimplemented_device("usb", 0x40043000, 0x1000);
269
+ if (index >= cpu->pmsav8r_hdregion) {
306
+}
270
+ return 0x0;
307
+
271
+ }
308
+static Property m2sxxx_soc_properties[] = {
272
+ if (ri->opc2 & 0x1) {
309
+ /*
273
+ return env->pmsav8.hprlar[index];
310
+ * part name specifies the type of SmartFusion2 device variant (this
274
+ } else {
311
+ * property is for information purposes only.
275
+ return env->pmsav8.hprbar[index];
312
+ */
276
+ }
313
+ DEFINE_PROP_STRING("cpu-type", MSF2State, cpu_type),
277
+ } else {
314
+ DEFINE_PROP_STRING("part-name", MSF2State, part_name),
278
+ if (index >= cpu->pmsav7_dregion) {
315
+ DEFINE_PROP_UINT64("eNVM-size", MSF2State, envm_size, MSF2_ENVM_MAX_SIZE),
279
+ return 0x0;
316
+ DEFINE_PROP_UINT64("eSRAM-size", MSF2State, esram_size,
280
+ }
317
+ MSF2_ESRAM_MAX_SIZE),
281
+ if (ri->opc2 & 0x1) {
318
+ /* Libero GUI shows 100 MHz as default for clocks */
282
+ return env->pmsav8.rlar[M_REG_NS][index];
319
+ DEFINE_PROP_UINT32("m3clk", MSF2State, m3clk, 100 * 1000000),
283
+ } else {
320
+ /* default divisors in Libero GUI */
284
+ return env->pmsav8.rbar[M_REG_NS][index];
321
+ DEFINE_PROP_UINT8("apb0div", MSF2State, apb0div, 2),
285
+ }
322
+ DEFINE_PROP_UINT8("apb1div", MSF2State, apb1div, 2),
286
+ }
323
+ DEFINE_PROP_END_OF_LIST(),
287
+}
288
+
289
+static const ARMCPRegInfo pmsav8r_cp_reginfo[] = {
290
+ { .name = "PRBAR",
291
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 3, .opc2 = 0,
292
+ .access = PL1_RW, .type = ARM_CP_NO_RAW,
293
+ .accessfn = access_tvm_trvm,
294
+ .readfn = prbar_read, .writefn = prbar_write },
295
+ { .name = "PRLAR",
296
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 3, .opc2 = 1,
297
+ .access = PL1_RW, .type = ARM_CP_NO_RAW,
298
+ .accessfn = access_tvm_trvm,
299
+ .readfn = prlar_read, .writefn = prlar_write },
300
+ { .name = "PRSELR", .resetvalue = 0,
301
+ .cp = 15, .opc1 = 0, .crn = 6, .crm = 2, .opc2 = 1,
302
+ .access = PL1_RW, .accessfn = access_tvm_trvm,
303
+ .writefn = prselr_write,
304
+ .fieldoffset = offsetof(CPUARMState, pmsav7.rnr[M_REG_NS]) },
305
+ { .name = "HPRBAR", .resetvalue = 0,
306
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 3, .opc2 = 0,
307
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
308
+ .readfn = hprbar_read, .writefn = hprbar_write },
309
+ { .name = "HPRLAR",
310
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 3, .opc2 = 1,
311
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
312
+ .readfn = hprlar_read, .writefn = hprlar_write },
313
+ { .name = "HPRSELR", .resetvalue = 0,
314
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 2, .opc2 = 1,
315
+ .access = PL2_RW,
316
+ .writefn = hprselr_write,
317
+ .fieldoffset = offsetof(CPUARMState, pmsav8.hprselr) },
318
+ { .name = "HPRENR",
319
+ .cp = 15, .opc1 = 4, .crn = 6, .crm = 1, .opc2 = 1,
320
+ .access = PL2_RW, .type = ARM_CP_NO_RAW,
321
+ .readfn = hprenr_read, .writefn = hprenr_write },
324
+};
322
+};
325
+
323
+
326
+static void m2sxxx_soc_class_init(ObjectClass *klass, void *data)
324
static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
327
+{
325
/* Reset for all these registers is handled in arm_cpu_reset(),
328
+ DeviceClass *dc = DEVICE_CLASS(klass);
326
* because the PMSAv7 is also used by M-profile CPUs, which do
329
+
327
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
330
+ dc->realize = m2sxxx_soc_realize;
328
.access = PL1_R, .type = ARM_CP_CONST,
331
+ dc->props = m2sxxx_soc_properties;
329
.resetvalue = cpu->pmsav7_dregion << 8
332
+}
330
};
333
+
331
+ /* HMPUIR is specific to PMSA V8 */
334
+static const TypeInfo m2sxxx_soc_info = {
332
+ ARMCPRegInfo id_hmpuir_reginfo = {
335
+ .name = TYPE_MSF2_SOC,
333
+ .name = "HMPUIR",
336
+ .parent = TYPE_SYS_BUS_DEVICE,
334
+ .cp = 15, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 4,
337
+ .instance_size = sizeof(MSF2State),
335
+ .access = PL2_R, .type = ARM_CP_CONST,
338
+ .instance_init = m2sxxx_soc_initfn,
336
+ .resetvalue = cpu->pmsav8r_hdregion
339
+ .class_init = m2sxxx_soc_class_init,
337
+ };
338
static const ARMCPRegInfo crn0_wi_reginfo = {
339
.name = "CRN0_WI", .cp = 15, .crn = 0, .crm = CP_ANY,
340
.opc1 = CP_ANY, .opc2 = CP_ANY, .access = PL1_W,
341
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
342
define_arm_cp_regs(cpu, id_cp_reginfo);
343
if (!arm_feature(env, ARM_FEATURE_PMSA)) {
344
define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo);
345
+ } else if (arm_feature(env, ARM_FEATURE_PMSA) &&
346
+ arm_feature(env, ARM_FEATURE_V8)) {
347
+ uint32_t i = 0;
348
+ char *tmp_string;
349
+
350
+ define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
351
+ define_one_arm_cp_reg(cpu, &id_hmpuir_reginfo);
352
+ define_arm_cp_regs(cpu, pmsav8r_cp_reginfo);
353
+
354
+ /* Register alias is only valid for first 32 indexes */
355
+ for (i = 0; i < MIN(cpu->pmsav7_dregion, 32); ++i) {
356
+ uint8_t crm = 0b1000 | extract32(i, 1, 3);
357
+ uint8_t opc1 = extract32(i, 4, 1);
358
+ uint8_t opc2 = extract32(i, 0, 1) << 2;
359
+
360
+ tmp_string = g_strdup_printf("PRBAR%u", i);
361
+ ARMCPRegInfo tmp_prbarn_reginfo = {
362
+ .name = tmp_string, .type = ARM_CP_ALIAS | ARM_CP_NO_RAW,
363
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
364
+ .access = PL1_RW, .resetvalue = 0,
365
+ .accessfn = access_tvm_trvm,
366
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
367
+ };
368
+ define_one_arm_cp_reg(cpu, &tmp_prbarn_reginfo);
369
+ g_free(tmp_string);
370
+
371
+ opc2 = extract32(i, 0, 1) << 2 | 0x1;
372
+ tmp_string = g_strdup_printf("PRLAR%u", i);
373
+ ARMCPRegInfo tmp_prlarn_reginfo = {
374
+ .name = tmp_string, .type = ARM_CP_ALIAS | ARM_CP_NO_RAW,
375
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
376
+ .access = PL1_RW, .resetvalue = 0,
377
+ .accessfn = access_tvm_trvm,
378
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
379
+ };
380
+ define_one_arm_cp_reg(cpu, &tmp_prlarn_reginfo);
381
+ g_free(tmp_string);
382
+ }
383
+
384
+ /* Register alias is only valid for first 32 indexes */
385
+ for (i = 0; i < MIN(cpu->pmsav8r_hdregion, 32); ++i) {
386
+ uint8_t crm = 0b1000 | extract32(i, 1, 3);
387
+ uint8_t opc1 = 0b100 | extract32(i, 4, 1);
388
+ uint8_t opc2 = extract32(i, 0, 1) << 2;
389
+
390
+ tmp_string = g_strdup_printf("HPRBAR%u", i);
391
+ ARMCPRegInfo tmp_hprbarn_reginfo = {
392
+ .name = tmp_string,
393
+ .type = ARM_CP_NO_RAW,
394
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
395
+ .access = PL2_RW, .resetvalue = 0,
396
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
397
+ };
398
+ define_one_arm_cp_reg(cpu, &tmp_hprbarn_reginfo);
399
+ g_free(tmp_string);
400
+
401
+ opc2 = extract32(i, 0, 1) << 2 | 0x1;
402
+ tmp_string = g_strdup_printf("HPRLAR%u", i);
403
+ ARMCPRegInfo tmp_hprlarn_reginfo = {
404
+ .name = tmp_string,
405
+ .type = ARM_CP_NO_RAW,
406
+ .cp = 15, .opc1 = opc1, .crn = 6, .crm = crm, .opc2 = opc2,
407
+ .access = PL2_RW, .resetvalue = 0,
408
+ .writefn = pmsav8r_regn_write, .readfn = pmsav8r_regn_read
409
+ };
410
+ define_one_arm_cp_reg(cpu, &tmp_hprlarn_reginfo);
411
+ g_free(tmp_string);
412
+ }
413
} else if (arm_feature(env, ARM_FEATURE_V7)) {
414
define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
415
}
416
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
417
sctlr.type |= ARM_CP_SUPPRESS_TB_END;
418
}
419
define_one_arm_cp_reg(cpu, &sctlr);
420
+
421
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
422
+ arm_feature(env, ARM_FEATURE_V8)) {
423
+ ARMCPRegInfo vsctlr = {
424
+ .name = "VSCTLR", .state = ARM_CP_STATE_AA32,
425
+ .cp = 15, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
426
+ .access = PL2_RW, .resetvalue = 0x0,
427
+ .fieldoffset = offsetoflow32(CPUARMState, cp15.vsctlr),
428
+ };
429
+ define_one_arm_cp_reg(cpu, &vsctlr);
430
+ }
431
}
432
433
if (cpu_isar_feature(aa64_lor, cpu)) {
434
diff --git a/target/arm/machine.c b/target/arm/machine.c
435
index XXXXXXX..XXXXXXX 100644
436
--- a/target/arm/machine.c
437
+++ b/target/arm/machine.c
438
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_needed(void *opaque)
439
arm_feature(env, ARM_FEATURE_V8);
440
}
441
442
+static bool pmsav8r_needed(void *opaque)
443
+{
444
+ ARMCPU *cpu = opaque;
445
+ CPUARMState *env = &cpu->env;
446
+
447
+ return arm_feature(env, ARM_FEATURE_PMSA) &&
448
+ arm_feature(env, ARM_FEATURE_V8) &&
449
+ !arm_feature(env, ARM_FEATURE_M);
450
+}
451
+
452
+static const VMStateDescription vmstate_pmsav8r = {
453
+ .name = "cpu/pmsav8/pmsav8r",
454
+ .version_id = 1,
455
+ .minimum_version_id = 1,
456
+ .needed = pmsav8r_needed,
457
+ .fields = (VMStateField[]) {
458
+ VMSTATE_VARRAY_UINT32(env.pmsav8.hprbar, ARMCPU,
459
+ pmsav8r_hdregion, 0, vmstate_info_uint32, uint32_t),
460
+ VMSTATE_VARRAY_UINT32(env.pmsav8.hprlar, ARMCPU,
461
+ pmsav8r_hdregion, 0, vmstate_info_uint32, uint32_t),
462
+ VMSTATE_END_OF_LIST()
463
+ },
340
+};
464
+};
341
+
465
+
342
+static void m2sxxx_soc_types(void)
466
static const VMStateDescription vmstate_pmsav8 = {
343
+{
467
.name = "cpu/pmsav8",
344
+ type_register_static(&m2sxxx_soc_info);
468
.version_id = 1,
345
+}
469
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_pmsav8 = {
346
+
470
VMSTATE_UINT32(env.pmsav8.mair0[M_REG_NS], ARMCPU),
347
+type_init(m2sxxx_soc_types)
471
VMSTATE_UINT32(env.pmsav8.mair1[M_REG_NS], ARMCPU),
348
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
472
VMSTATE_END_OF_LIST()
349
index XXXXXXX..XXXXXXX 100644
473
+ },
350
--- a/default-configs/arm-softmmu.mak
474
+ .subsections = (const VMStateDescription * []) {
351
+++ b/default-configs/arm-softmmu.mak
475
+ &vmstate_pmsav8r,
352
@@ -XXX,XX +XXX,XX @@ CONFIG_ACPI=y
476
+ NULL
353
CONFIG_SMBIOS=y
477
}
354
CONFIG_ASPEED_SOC=y
478
};
355
CONFIG_GPIO_KEY=y
479
356
+CONFIG_MSF2=y
357
--
480
--
358
2.7.4
481
2.25.1
359
482
360
483
diff view generated by jsdifflib
1
Don't use old_mmio in the memory region ops struct.
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Add PMSAv8r translation.
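
As a rough sketch of the granule handling this patch adds (illustrative only, not QEMU code: the helper name and the r_profile flag are invented here), M-profile RBAR/RLAR use a 32-byte granule while the R-profile PRBAR/PRLAR handled below use a 64-byte one, which is where the 0x1f versus 0x3f masks in the hunk come from:

    #include <stdint.h>
    #include <stdbool.h>

    /* Base reads back with its low bits as zero, limit with them as one. */
    static void region_range(uint32_t rbar, uint32_t rlar, bool r_profile,
                             uint32_t *base, uint32_t *limit)
    {
        uint32_t mask = r_profile ? 0x3f : 0x1f;  /* 64- vs 32-byte granule */

        *base = rbar & ~mask;
        *limit = rlar | mask;
    }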
4
5
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221206102504.165775-7-tobias.roehmel@rwth-aachen.de
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-6-git-send-email-peter.maydell@linaro.org
6
---
9
---
7
hw/i2c/omap_i2c.c | 44 ++++++++++++++++++++++++++++++++------------
10
target/arm/ptw.c | 126 ++++++++++++++++++++++++++++++++++++++---------
8
1 file changed, 32 insertions(+), 12 deletions(-)
11
1 file changed, 104 insertions(+), 22 deletions(-)
9
12
10
diff --git a/hw/i2c/omap_i2c.c b/hw/i2c/omap_i2c.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
11
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/i2c/omap_i2c.c
15
--- a/target/arm/ptw.c
13
+++ b/hw/i2c/omap_i2c.c
16
+++ b/target/arm/ptw.c
14
@@ -XXX,XX +XXX,XX @@ static void omap_i2c_writeb(void *opaque, hwaddr addr,
17
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx,
15
}
18
19
if (arm_feature(env, ARM_FEATURE_M)) {
20
return env->v7m.mpu_ctrl[is_secure] & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
21
- } else {
22
- return regime_sctlr(env, mmu_idx) & SCTLR_BR;
23
}
24
+
25
+ if (mmu_idx == ARMMMUIdx_Stage2) {
26
+ return false;
27
+ }
28
+
29
+ return regime_sctlr(env, mmu_idx) & SCTLR_BR;
16
}
30
}
17
31
18
+static uint64_t omap_i2c_readfn(void *opaque, hwaddr addr,
32
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
19
+ unsigned size)
33
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
34
return !(result->f.prot & (1 << access_type));
35
}
36
37
+static uint32_t *regime_rbar(CPUARMState *env, ARMMMUIdx mmu_idx,
38
+ uint32_t secure)
20
+{
39
+{
21
+ switch (size) {
40
+ if (regime_el(env, mmu_idx) == 2) {
22
+ case 2:
41
+ return env->pmsav8.hprbar;
23
+ return omap_i2c_read(opaque, addr);
42
+ } else {
24
+ default:
43
+ return env->pmsav8.rbar[secure];
25
+ return omap_badwidth_read16(opaque, addr);
26
+ }
44
+ }
27
+}
45
+}
28
+
46
+
29
+static void omap_i2c_writefn(void *opaque, hwaddr addr,
47
+static uint32_t *regime_rlar(CPUARMState *env, ARMMMUIdx mmu_idx,
30
+ uint64_t value, unsigned size)
48
+ uint32_t secure)
31
+{
49
+{
32
+ switch (size) {
50
+ if (regime_el(env, mmu_idx) == 2) {
33
+ case 1:
51
+ return env->pmsav8.hprlar;
34
+ /* Only the last fifo write can be 8 bit. */
52
+ } else {
35
+ omap_i2c_writeb(opaque, addr, value);
53
+ return env->pmsav8.rlar[secure];
36
+ break;
37
+ case 2:
38
+ omap_i2c_write(opaque, addr, value);
39
+ break;
40
+ default:
41
+ omap_badwidth_write16(opaque, addr, value);
42
+ break;
43
+ }
54
+ }
44
+}
55
+}
45
+
56
+
46
static const MemoryRegionOps omap_i2c_ops = {
57
bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
47
- .old_mmio = {
58
MMUAccessType access_type, ARMMMUIdx mmu_idx,
48
- .read = {
59
bool secure, GetPhysAddrResult *result,
49
- omap_badwidth_read16,
60
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
50
- omap_i2c_read,
61
bool hit = false;
51
- omap_badwidth_read16,
62
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
52
- },
63
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
53
- .write = {
64
+ int region_counter;
54
- omap_i2c_writeb, /* Only the last fifo write can be 8 bit. */
65
+
55
- omap_i2c_write,
66
+ if (regime_el(env, mmu_idx) == 2) {
56
- omap_badwidth_write16,
67
+ region_counter = cpu->pmsav8r_hdregion;
57
- },
68
+ } else {
58
- },
69
+ region_counter = cpu->pmsav7_dregion;
59
+ .read = omap_i2c_readfn,
70
+ }
60
+ .write = omap_i2c_writefn,
71
61
+ .valid.min_access_size = 1,
72
result->f.lg_page_size = TARGET_PAGE_BITS;
62
+ .valid.max_access_size = 4,
73
result->f.phys_addr = address;
63
.endianness = DEVICE_NATIVE_ENDIAN,
74
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
64
};
75
*mregion = -1;
65
76
}
77
78
+ if (mmu_idx == ARMMMUIdx_Stage2) {
79
+ fi->stage2 = true;
80
+ }
81
+
82
/*
83
* Unlike the ARM ARM pseudocode, we don't need to check whether this
84
* was an exception vector read from the vector table (which is always
85
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
86
hit = true;
87
}
88
89
- for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
90
+ uint32_t bitmask;
91
+ if (arm_feature(env, ARM_FEATURE_M)) {
92
+ bitmask = 0x1f;
93
+ } else {
94
+ bitmask = 0x3f;
95
+ fi->level = 0;
96
+ }
97
+
98
+ for (n = region_counter - 1; n >= 0; n--) {
99
/* region search */
100
/*
101
- * Note that the base address is bits [31:5] from the register
102
- * with bits [4:0] all zeroes, but the limit address is bits
103
- * [31:5] from the register with bits [4:0] all ones.
104
+ * Note that the base address is bits [31:x] from the register
105
+ * with bits [x-1:0] all zeroes, but the limit address is bits
106
+ * [31:x] from the register with bits [x:0] all ones. Where x is
107
+ * 5 for Cortex-M and 6 for Cortex-R
108
*/
109
- uint32_t base = env->pmsav8.rbar[secure][n] & ~0x1f;
110
- uint32_t limit = env->pmsav8.rlar[secure][n] | 0x1f;
111
+ uint32_t base = regime_rbar(env, mmu_idx, secure)[n] & ~bitmask;
112
+ uint32_t limit = regime_rlar(env, mmu_idx, secure)[n] | bitmask;
113
114
- if (!(env->pmsav8.rlar[secure][n] & 0x1)) {
115
+ if (!(regime_rlar(env, mmu_idx, secure)[n] & 0x1)) {
116
/* Region disabled */
117
continue;
118
}
119
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
120
* PMSAv7 where highest-numbered-region wins)
121
*/
122
fi->type = ARMFault_Permission;
123
- fi->level = 1;
124
+ if (arm_feature(env, ARM_FEATURE_M)) {
125
+ fi->level = 1;
126
+ }
127
return true;
128
}
129
130
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
131
}
132
133
if (!hit) {
134
- /* background fault */
135
- fi->type = ARMFault_Background;
136
+ if (arm_feature(env, ARM_FEATURE_M)) {
137
+ fi->type = ARMFault_Background;
138
+ } else {
139
+ fi->type = ARMFault_Permission;
140
+ }
141
return true;
142
}
143
144
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
145
/* hit using the background region */
146
get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
147
} else {
148
- uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
149
- uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
150
+ uint32_t matched_rbar = regime_rbar(env, mmu_idx, secure)[matchregion];
151
+ uint32_t matched_rlar = regime_rlar(env, mmu_idx, secure)[matchregion];
152
+ uint32_t ap = extract32(matched_rbar, 1, 2);
153
+ uint32_t xn = extract32(matched_rbar, 0, 1);
154
bool pxn = false;
155
156
if (arm_feature(env, ARM_FEATURE_V8_1M)) {
157
- pxn = extract32(env->pmsav8.rlar[secure][matchregion], 4, 1);
158
+ pxn = extract32(matched_rlar, 4, 1);
159
}
160
161
if (m_is_system_region(env, address)) {
162
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
163
xn = 1;
164
}
165
166
- result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
167
+ if (regime_el(env, mmu_idx) == 2) {
168
+ result->f.prot = simple_ap_to_rw_prot_is_user(ap,
169
+ mmu_idx != ARMMMUIdx_E2);
170
+ } else {
171
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
172
+ }
173
+
174
+ if (!arm_feature(env, ARM_FEATURE_M)) {
175
+ uint8_t attrindx = extract32(matched_rlar, 1, 3);
176
+ uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
177
+ uint8_t sh = extract32(matched_rlar, 3, 2);
178
+
179
+ if (regime_sctlr(env, mmu_idx) & SCTLR_WXN &&
180
+ result->f.prot & PAGE_WRITE && mmu_idx != ARMMMUIdx_Stage2) {
181
+ xn = 0x1;
182
+ }
183
+
184
+ if ((regime_el(env, mmu_idx) == 1) &&
185
+ regime_sctlr(env, mmu_idx) & SCTLR_UWXN && ap == 0x1) {
186
+ pxn = 0x1;
187
+ }
188
+
189
+ result->cacheattrs.is_s2_format = false;
190
+ result->cacheattrs.attrs = extract64(mair, attrindx * 8, 8);
191
+ result->cacheattrs.shareability = sh;
192
+ }
193
+
194
if (result->f.prot && !xn && !(pxn && !is_user)) {
195
result->f.prot |= PAGE_EXEC;
196
}
197
- /*
198
- * We don't need to look the attribute up in the MAIR0/MAIR1
199
- * registers because that only tells us about cacheability.
200
- */
201
+
202
if (mregion) {
203
*mregion = matchregion;
204
}
205
}
206
207
fi->type = ARMFault_Permission;
208
- fi->level = 1;
209
+ if (arm_feature(env, ARM_FEATURE_M)) {
210
+ fi->level = 1;
211
+ }
212
return !(result->f.prot & (1 << access_type));
213
}
214
215
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
216
cacheattrs1 = result->cacheattrs;
217
memset(result, 0, sizeof(*result));
218
219
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
220
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
221
+ ret = get_phys_addr_pmsav8(env, ipa, access_type,
222
+ ptw->in_mmu_idx, is_secure, result, fi);
223
+ } else {
224
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
225
+ is_el0, result, fi);
226
+ }
227
fi->s2addr = ipa;
228
229
/* Combine the S1 and S2 perms. */
66
--
230
--
67
2.7.4
231
2.25.1
68
232
69
233
diff view generated by jsdifflib
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2
2
3
Emulated Emcraft's Smartfusion2 System On Module starter
3
All constants are taken from the ARM Cortex-R52 Processor TRM Revision: r1p3
4
kit.
5
4
6
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
5
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20170920201737.25723-6-f4bug@amsat.org
7
Message-id: 20221206102504.165775-8-tobias.roehmel@rwth-aachen.de
9
[PMD: drop cpu_model to directly use cpu type]
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
9
---
12
hw/arm/Makefile.objs | 2 +-
10
target/arm/cpu_tcg.c | 42 ++++++++++++++++++++++++++++++++++++++++++
13
hw/arm/msf2-som.c | 105 +++++++++++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 42 insertions(+)
14
2 files changed, 106 insertions(+), 1 deletion(-)
15
create mode 100644 hw/arm/msf2-som.c
16
12
17
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
13
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
18
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/Makefile.objs
15
--- a/target/arm/cpu_tcg.c
20
+++ b/hw/arm/Makefile.objs
16
+++ b/target/arm/cpu_tcg.c
21
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX31) += fsl-imx31.o kzm.o
17
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
22
obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
18
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
23
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
19
}
24
obj-$(CONFIG_MPS2) += mps2.o
20
25
-obj-$(CONFIG_MSF2) += msf2-soc.o
21
+static void cortex_r52_initfn(Object *obj)
26
+obj-$(CONFIG_MSF2) += msf2-soc.o msf2-som.o
22
+{
27
diff --git a/hw/arm/msf2-som.c b/hw/arm/msf2-som.c
23
+ ARMCPU *cpu = ARM_CPU(obj);
28
new file mode 100644
29
index XXXXXXX..XXXXXXX
30
--- /dev/null
31
+++ b/hw/arm/msf2-som.c
32
@@ -XXX,XX +XXX,XX @@
33
+/*
34
+ * SmartFusion2 SOM starter kit(from Emcraft) emulation.
35
+ *
36
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
37
+ *
38
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
39
+ * of this software and associated documentation files (the "Software"), to deal
40
+ * in the Software without restriction, including without limitation the rights
41
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
42
+ * copies of the Software, and to permit persons to whom the Software is
43
+ * furnished to do so, subject to the following conditions:
44
+ *
45
+ * The above copyright notice and this permission notice shall be included in
46
+ * all copies or substantial portions of the Software.
47
+ *
48
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
49
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
50
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
51
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
52
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
53
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
54
+ * THE SOFTWARE.
55
+ */
56
+
24
+
57
+#include "qemu/osdep.h"
25
+ set_feature(&cpu->env, ARM_FEATURE_V8);
58
+#include "qapi/error.h"
26
+ set_feature(&cpu->env, ARM_FEATURE_EL2);
59
+#include "qemu/error-report.h"
27
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
60
+#include "hw/boards.h"
28
+ set_feature(&cpu->env, ARM_FEATURE_NEON);
61
+#include "hw/arm/arm.h"
29
+ set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
62
+#include "exec/address-spaces.h"
30
+ cpu->midr = 0x411fd133; /* r1p3 */
63
+#include "qemu/cutils.h"
31
+ cpu->revidr = 0x00000000;
64
+#include "hw/arm/msf2-soc.h"
32
+ cpu->reset_fpsid = 0x41034023;
65
+#include "cpu.h"
33
+ cpu->isar.mvfr0 = 0x10110222;
34
+ cpu->isar.mvfr1 = 0x12111111;
35
+ cpu->isar.mvfr2 = 0x00000043;
36
+ cpu->ctr = 0x8144c004;
37
+ cpu->reset_sctlr = 0x30c50838;
38
+ cpu->isar.id_pfr0 = 0x00000131;
39
+ cpu->isar.id_pfr1 = 0x10111001;
40
+ cpu->isar.id_dfr0 = 0x03010006;
41
+ cpu->id_afr0 = 0x00000000;
42
+ cpu->isar.id_mmfr0 = 0x00211040;
43
+ cpu->isar.id_mmfr1 = 0x40000000;
44
+ cpu->isar.id_mmfr2 = 0x01200000;
45
+ cpu->isar.id_mmfr3 = 0xf0102211;
46
+ cpu->isar.id_mmfr4 = 0x00000010;
47
+ cpu->isar.id_isar0 = 0x02101110;
48
+ cpu->isar.id_isar1 = 0x13112111;
49
+ cpu->isar.id_isar2 = 0x21232142;
50
+ cpu->isar.id_isar3 = 0x01112131;
51
+ cpu->isar.id_isar4 = 0x00010142;
52
+ cpu->isar.id_isar5 = 0x00010001;
53
+ cpu->isar.dbgdidr = 0x77168000;
54
+ cpu->clidr = (1 << 27) | (1 << 24) | 0x3;
55
+ cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
56
+ cpu->ccsidr[1] = 0x201fe00a; /* 32KB L1 icache */
66
+
57
+
67
+#define DDR_BASE_ADDRESS 0xA0000000
58
+ cpu->pmsav7_dregion = 16;
68
+#define DDR_SIZE (64 * M_BYTE)
59
+ cpu->pmsav8r_hdregion = 16;
69
+
70
+#define M2S010_ENVM_SIZE (256 * K_BYTE)
71
+#define M2S010_ESRAM_SIZE (64 * K_BYTE)
72
+
73
+static void emcraft_sf2_s2s010_init(MachineState *machine)
74
+{
75
+ DeviceState *dev;
76
+ DeviceState *spi_flash;
77
+ MSF2State *soc;
78
+ MachineClass *mc = MACHINE_GET_CLASS(machine);
79
+ DriveInfo *dinfo = drive_get_next(IF_MTD);
80
+ qemu_irq cs_line;
81
+ SSIBus *spi_bus;
82
+ MemoryRegion *sysmem = get_system_memory();
83
+ MemoryRegion *ddr = g_new(MemoryRegion, 1);
84
+
85
+ if (strcmp(machine->cpu_type, mc->default_cpu_type) != 0) {
86
+ error_report("This board can only be used with CPU %s",
87
+ mc->default_cpu_type);
88
+ }
89
+
90
+ memory_region_init_ram(ddr, NULL, "ddr-ram", DDR_SIZE,
91
+ &error_fatal);
92
+ memory_region_add_subregion(sysmem, DDR_BASE_ADDRESS, ddr);
93
+
94
+ dev = qdev_create(NULL, TYPE_MSF2_SOC);
95
+ qdev_prop_set_string(dev, "part-name", "M2S010");
96
+ qdev_prop_set_string(dev, "cpu-type", mc->default_cpu_type);
97
+
98
+ qdev_prop_set_uint64(dev, "eNVM-size", M2S010_ENVM_SIZE);
99
+ qdev_prop_set_uint64(dev, "eSRAM-size", M2S010_ESRAM_SIZE);
100
+
101
+ /*
102
+ * CPU clock and peripheral clocks(APB0, APB1)are configurable
103
+ * in Libero. CPU clock is divided by APB0 and APB1 divisors for
104
+ * peripherals. Emcraft's SoM kit comes with these settings by default.
105
+ */
106
+ qdev_prop_set_uint32(dev, "m3clk", 142 * 1000000);
107
+ qdev_prop_set_uint32(dev, "apb0div", 2);
108
+ qdev_prop_set_uint32(dev, "apb1div", 2);
109
+
110
+ object_property_set_bool(OBJECT(dev), true, "realized", &error_fatal);
111
+
112
+ soc = MSF2_SOC(dev);
113
+
114
+ /* Attach SPI flash to SPI0 controller */
115
+ spi_bus = (SSIBus *)qdev_get_child_bus(dev, "spi0");
116
+ spi_flash = ssi_create_slave_no_init(spi_bus, "s25sl12801");
117
+ qdev_prop_set_uint8(spi_flash, "spansion-cr2nv", 1);
118
+ if (dinfo) {
119
+ qdev_prop_set_drive(spi_flash, "drive", blk_by_legacy_dinfo(dinfo),
120
+ &error_fatal);
121
+ }
122
+ qdev_init_nofail(spi_flash);
123
+ cs_line = qdev_get_gpio_in_named(spi_flash, SSI_GPIO_CS, 0);
124
+ sysbus_connect_irq(SYS_BUS_DEVICE(&soc->spi[0]), 1, cs_line);
125
+
126
+ armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename,
127
+ soc->envm_size);
128
+}
60
+}
129
+
61
+
130
+static void emcraft_sf2_machine_init(MachineClass *mc)
62
static void cortex_r5f_initfn(Object *obj)
131
+{
63
{
132
+ mc->desc = "SmartFusion2 SOM kit from Emcraft (M2S010)";
64
ARMCPU *cpu = ARM_CPU(obj);
133
+ mc->init = emcraft_sf2_s2s010_init;
65
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo arm_tcg_cpus[] = {
134
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-m3");
66
.class_init = arm_v7m_class_init },
135
+}
67
{ .name = "cortex-r5", .initfn = cortex_r5_initfn },
136
+
68
{ .name = "cortex-r5f", .initfn = cortex_r5f_initfn },
137
+DEFINE_MACHINE("emcraft-sf2", emcraft_sf2_machine_init)
69
+ { .name = "cortex-r52", .initfn = cortex_r52_initfn },
70
{ .name = "ti925t", .initfn = ti925t_initfn },
71
{ .name = "sa1100", .initfn = sa1100_initfn },
72
{ .name = "sa1110", .initfn = sa1110_initfn },
138
--
73
--
139
2.7.4
74
2.25.1
140
75
141
76
diff view generated by jsdifflib
1
In armv7m_nvic_set_pending() we have to compare the
1
From: Alex Bennée <alex.bennee@linaro.org>
2
priority of an exception against the execution priority
3
to decide whether it needs to be escalated to HardFault.
4
In the specification this is a comparison against the
5
exception's group priority; for v7M we implemented it
6
as a comparison against the raw exception priority
7
because the two comparisons will always give the
8
same answer. For v8M the existence of AIRCR.PRIS and
9
the possibility of different PRIGROUP values for secure
10
and nonsecure exceptions means we need to explicitly
11
calculate the vector's group priority for this check.
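
As a minimal sketch of what "group priority" means in this check (illustrative only; the patch itself uses the existing exc_group_prio() helper, which additionally applies the AIRCR.PRIS handling mentioned above): PRIGROUP splits the 8-bit priority into a group priority and a subpriority, only the group part takes part in the comparison, and the fixed negative priorities are unaffected:

    #include <stdint.h>

    static int group_prio(int rawprio, unsigned prigroup)
    {
        if (rawprio < 0) {
            return rawprio;   /* Reset/NMI/HardFault have no subpriority */
        }
        /* PRIGROUP selects how many low bits are subpriority; mask them off */
        return rawprio & ~((1 << (prigroup + 1)) - 1);
    }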
12
2
3
The check semihosting_enabled() wants to know if the guest is
4
currently in user mode. Unlike the other cases, the test was inverted,
5
causing us to block semihosting calls in non-EL0 modes.
6
7
Cc: qemu-stable@nongnu.org
8
Fixes: 19b26317e9 (target/arm: Honour -semihosting-config userspace=on)
9
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 1505240046-11454-12-git-send-email-peter.maydell@linaro.org
16
---
12
---
17
hw/intc/armv7m_nvic.c | 2 +-
13
target/arm/translate.c | 2 +-
18
1 file changed, 1 insertion(+), 1 deletion(-)
14
1 file changed, 1 insertion(+), 1 deletion(-)
19
15
20
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/armv7m_nvic.c
18
--- a/target/arm/translate.c
23
+++ b/hw/intc/armv7m_nvic.c
19
+++ b/target/arm/translate.c
24
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
20
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
25
int running = nvic_exec_prio(s);
21
* semihosting, to provide some semblance of security
26
bool escalate = false;
22
* (and for consistency with our 32-bit semihosting).
27
23
*/
28
- if (vec->prio >= running) {
24
- if (semihosting_enabled(s->current_el != 0) &&
29
+ if (exc_group_prio(s, vec->prio, secure) >= running) {
25
+ if (semihosting_enabled(s->current_el == 0) &&
30
trace_nvic_escalate_prio(irq, vec->prio, running);
26
(imm == (s->thumb ? 0x3c : 0xf000))) {
31
escalate = true;
27
gen_exception_internal_insn(s, EXCP_SEMIHOST);
32
} else if (!vec->enabled) {
28
return;
33
--
29
--
34
2.7.4
30
2.25.1
35
31
36
32
diff view generated by jsdifflib
1
Don't use old_mmio in the memory region ops struct.
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Fix typos, add background information
4
5
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-7-git-send-email-peter.maydell@linaro.org
6
---
8
---
7
hw/arm/omap2.c | 49 +++++++++++++++++++++++++++++++++++++------------
9
hw/timer/imx_epit.c | 20 ++++++++++++++++----
8
1 file changed, 37 insertions(+), 12 deletions(-)
10
1 file changed, 16 insertions(+), 4 deletions(-)
9
11
10
diff --git a/hw/arm/omap2.c b/hw/arm/omap2.c
12
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
11
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/arm/omap2.c
14
--- a/hw/timer/imx_epit.c
13
+++ b/hw/arm/omap2.c
15
+++ b/hw/timer/imx_epit.c
14
@@ -XXX,XX +XXX,XX @@ static void omap_sysctl_write(void *opaque, hwaddr addr,
16
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
15
}
17
}
16
}
18
}
17
19
18
+static uint64_t omap_sysctl_readfn(void *opaque, hwaddr addr,
20
+/*
19
+ unsigned size)
21
+ * This is called both on hardware (device) reset and software reset.
20
+{
22
+ */
21
+ switch (size) {
23
static void imx_epit_reset(DeviceState *dev)
22
+ case 1:
24
{
23
+ return omap_sysctl_read8(opaque, addr);
25
IMXEPITState *s = IMX_EPIT(dev);
24
+ case 2:
26
25
+ return omap_badwidth_read32(opaque, addr); /* TODO */
27
- /*
26
+ case 4:
28
- * Soft reset doesn't touch some bits; hard reset clears them
27
+ return omap_sysctl_read(opaque, addr);
29
- */
28
+ default:
30
+ /* Soft reset doesn't touch some bits; hard reset clears them */
29
+ g_assert_not_reached();
31
s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
30
+ }
32
s->sr = 0;
31
+}
33
s->lr = EPIT_TIMER_MAX;
32
+
34
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
33
+static void omap_sysctl_writefn(void *opaque, hwaddr addr,
35
ptimer_transaction_begin(s->timer_cmp);
34
+ uint64_t value, unsigned size)
36
ptimer_transaction_begin(s->timer_reload);
35
+{
37
36
+ switch (size) {
38
+ /* Update the frequency. Has been done already in case of a reset. */
37
+ case 1:
39
if (!(s->cr & CR_SWR)) {
38
+ omap_sysctl_write8(opaque, addr, value);
40
imx_epit_set_freq(s);
39
+ break;
41
}
40
+ case 2:
42
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
41
+ omap_badwidth_write32(opaque, addr, value); /* TODO */
43
break;
42
+ break;
44
43
+ case 4:
45
case 1: /* SR - ACK*/
44
+ omap_sysctl_write(opaque, addr, value);
46
- /* writing 1 to OCIF clear the OCIF bit */
45
+ break;
47
+ /* writing 1 to OCIF clears the OCIF bit */
46
+ default:
48
if (value & 0x01) {
47
+ g_assert_not_reached();
49
s->sr = 0;
48
+ }
50
imx_epit_update_int(s);
49
+}
51
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
50
+
52
0x00001000);
51
static const MemoryRegionOps omap_sysctl_ops = {
53
sysbus_init_mmio(sbd, &s->iomem);
52
- .old_mmio = {
54
53
- .read = {
55
+ /*
54
- omap_sysctl_read8,
56
+ * The reload timer keeps running when the peripheral is enabled. It is a
55
- omap_badwidth_read32,    /* TODO */
57
+ * kind of wall clock that does not generate any interrupts. The callback
56
- omap_sysctl_read,
58
+ * needs to be provided, but it does nothing as the ptimer already supports
57
- },
59
+ * all necessary reloading functionality.
58
- .write = {
60
+ */
59
- omap_sysctl_write8,
61
s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_LEGACY);
60
- omap_badwidth_write32,    /* TODO */
62
61
- omap_sysctl_write,
63
+ /*
62
- },
64
+ * The compare timer is running only when the peripheral configuration is
63
- },
65
+ * in a state that will generate compare interrupts.
64
+ .read = omap_sysctl_readfn,
66
+ */
65
+ .write = omap_sysctl_writefn,
67
s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
66
+ .valid.min_access_size = 1,
68
}
67
+ .valid.max_access_size = 4,
68
.endianness = DEVICE_NATIVE_ENDIAN,
69
};
70
69
71
--
70
--
72
2.7.4
71
2.25.1
73
74
diff view generated by jsdifflib
1
The ICSR NVIC register is banked for v8M. This doesn't
1
From: Axel Heider <axel.heider@hensoldt.net>
2
require any new state, but it does mean that some bits
3
are controlled by BFHFNMINS and some bits must work
4
with the correct banked exception. There is also a
5
new PENDNMICLR bit in v8M.
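
A rough sketch of the banking rule described above (illustrative only; the types, names and bit positions mirror what the hunk below does with NVICState's vectors[] and sec_vectors[] arrays):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool pending; } Vec;

    /* Secure accesses see the Secure bank, Non-secure accesses the NS bank */
    static uint32_t icsr_pend_bits(const Vec *vectors, const Vec *sec_vectors,
                                   bool secure, int exc_pendsv, int exc_systick)
    {
        const Vec *bank = secure ? sec_vectors : vectors;
        uint32_t val = 0;

        if (bank[exc_systick].pending) {
            val |= 1u << 26;   /* PENDSTSET */
        }
        if (bank[exc_pendsv].pending) {
            val |= 1u << 28;   /* PENDSVSET */
        }
        return val;
    }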
6
2
3
remove unused defines, add needed defines
4
5
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 1505240046-11454-18-git-send-email-peter.maydell@linaro.org
10
---
8
---
11
hw/intc/armv7m_nvic.c | 45 ++++++++++++++++++++++++++++++++-------------
9
include/hw/timer/imx_epit.h | 4 ++--
12
1 file changed, 32 insertions(+), 13 deletions(-)
10
hw/timer/imx_epit.c | 4 ++--
11
2 files changed, 4 insertions(+), 4 deletions(-)
13
12
14
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
13
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/armv7m_nvic.c
15
--- a/include/hw/timer/imx_epit.h
17
+++ b/hw/intc/armv7m_nvic.c
16
+++ b/include/hw/timer/imx_epit.h
18
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
17
@@ -XXX,XX +XXX,XX @@
19
}
18
#define CR_OCIEN (1 << 2)
20
case 0xd00: /* CPUID Base. */
19
#define CR_RLD (1 << 3)
21
return cpu->midr;
20
#define CR_PRESCALE_SHIFT (4)
22
- case 0xd04: /* Interrupt Control State. */
21
-#define CR_PRESCALE_MASK (0xfff)
23
+ case 0xd04: /* Interrupt Control State (ICSR) */
22
+#define CR_PRESCALE_BITS (12)
24
/* VECTACTIVE */
23
#define CR_SWR (1 << 16)
25
val = cpu->env.v7m.exception;
24
#define CR_IOVW (1 << 17)
26
/* VECTPENDING */
25
#define CR_DBGEN (1 << 18)
27
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
26
@@ -XXX,XX +XXX,XX @@
28
if (nvic_rettobase(s)) {
27
#define CR_DOZEN (1 << 20)
29
val |= (1 << 11);
28
#define CR_STOPEN (1 << 21)
30
}
29
#define CR_CLKSRC_SHIFT (24)
31
- /* PENDSTSET */
30
-#define CR_CLKSRC_MASK (0x3 << CR_CLKSRC_SHIFT)
32
- if (s->vectors[ARMV7M_EXCP_SYSTICK].pending) {
31
+#define CR_CLKSRC_BITS (2)
33
- val |= (1 << 26);
32
34
- }
33
#define EPIT_TIMER_MAX 0XFFFFFFFFUL
35
- /* PENDSVSET */
34
36
- if (s->vectors[ARMV7M_EXCP_PENDSV].pending) {
35
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
37
- val |= (1 << 28);
36
index XXXXXXX..XXXXXXX 100644
38
+ if (attrs.secure) {
37
--- a/hw/timer/imx_epit.c
39
+ /* PENDSTSET */
38
+++ b/hw/timer/imx_epit.c
40
+ if (s->sec_vectors[ARMV7M_EXCP_SYSTICK].pending) {
39
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
41
+ val |= (1 << 26);
40
uint32_t clksrc;
42
+ }
41
uint32_t prescaler;
43
+ /* PENDSVSET */
42
44
+ if (s->sec_vectors[ARMV7M_EXCP_PENDSV].pending) {
43
- clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, 2);
45
+ val |= (1 << 28);
44
- prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, 12);
46
+ }
45
+ clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
47
+ } else {
46
+ prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
48
+ /* PENDSTSET */
47
49
+ if (s->vectors[ARMV7M_EXCP_SYSTICK].pending) {
48
s->freq = imx_ccm_get_clock_frequency(s->ccm,
50
+ val |= (1 << 26);
49
imx_epit_clocks[clksrc]) / prescaler;
51
+ }
52
+ /* PENDSVSET */
53
+ if (s->vectors[ARMV7M_EXCP_PENDSV].pending) {
54
+ val |= (1 << 28);
55
+ }
56
}
57
/* NMIPENDSET */
58
- if (s->vectors[ARMV7M_EXCP_NMI].pending) {
59
+ if ((cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
60
+ s->vectors[ARMV7M_EXCP_NMI].pending) {
61
val |= (1 << 31);
62
}
63
- /* ISRPREEMPT not implemented */
64
+ /* ISRPREEMPT: RES0 when halting debug not implemented */
65
+ /* STTNS: RES0 for the Main Extension */
66
return val;
67
case 0xd08: /* Vector Table Offset. */
68
return cpu->env.v7m.vecbase[attrs.secure];
69
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
70
nvic_irq_update(s);
71
break;
72
}
73
- case 0xd04: /* Interrupt Control State. */
74
- if (value & (1 << 31)) {
75
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
76
+ case 0xd04: /* Interrupt Control State (ICSR) */
77
+ if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
78
+ if (value & (1 << 31)) {
79
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
80
+ } else if (value & (1 << 30) &&
81
+ arm_feature(&cpu->env, ARM_FEATURE_V8)) {
82
+ /* PENDNMICLR didn't exist in v7M */
83
+ armv7m_nvic_clear_pending(s, ARMV7M_EXCP_NMI, false);
84
+ }
85
}
86
if (value & (1 << 28)) {
87
armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);
88
--
50
--
89
2.7.4
51
2.25.1
90
91
diff view generated by jsdifflib
1
For v8M, the NVIC has a new set of registers per interrupt,
1
From: Axel Heider <axel.heider@hensoldt.net>
2
NVIC_ITNS<n>. These determine whether the interrupt targets Secure
3
or Non-secure state. Implement the register read/write code for
4
these, and make them cause NVIC_IABR, NVIC_ICER, NVIC_ISER,
5
NVIC_ICPR, NVIC_IPR and NVIC_ISPR to RAZ/WI for non-secure
6
accesses to fields corresponding to interrupts which are
7
configured to target secure state.
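
A minimal sketch of the gating rule this applies to each of those registers (illustrative only; in the hunk below it is the attrs.secure || s->itns[...] test, and the helper here is invented for clarity): a Non-secure access only sees or modifies the state of an interrupt whose ITNS bit says it targets Non-secure state, otherwise the field is RAZ/WI:

    #include <stdbool.h>

    /* itns == true means the interrupt is configured to target Non-secure */
    static bool irq_field_accessible(bool access_is_secure, bool itns)
    {
        return access_is_secure || itns;
    }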
8
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 1505240046-11454-8-git-send-email-peter.maydell@linaro.org
12
---
5
---
13
include/hw/intc/armv7m_nvic.h | 3 ++
6
include/hw/timer/imx_epit.h | 2 ++
14
hw/intc/armv7m_nvic.c | 74 +++++++++++++++++++++++++++++++++++++++----
7
hw/timer/imx_epit.c | 12 ++++++------
15
2 files changed, 70 insertions(+), 7 deletions(-)
8
2 files changed, 8 insertions(+), 6 deletions(-)
16
9
17
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
10
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
18
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/intc/armv7m_nvic.h
12
--- a/include/hw/timer/imx_epit.h
20
+++ b/include/hw/intc/armv7m_nvic.h
13
+++ b/include/hw/timer/imx_epit.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
14
@@ -XXX,XX +XXX,XX @@
22
/* The PRIGROUP field in AIRCR is banked */
15
#define CR_CLKSRC_SHIFT (24)
23
uint32_t prigroup[M_REG_NUM_BANKS];
16
#define CR_CLKSRC_BITS (2)
24
17
25
+ /* v8M NVIC_ITNS state (stored as a bool per bit) */
18
+#define SR_OCIF (1 << 0)
26
+ bool itns[NVIC_MAX_VECTORS];
27
+
19
+
28
/* The following fields are all cached state that can be recalculated
20
#define EPIT_TIMER_MAX 0XFFFFFFFFUL
29
* from the vectors[] and sec_vectors[] arrays and the prigroup field:
21
30
* - vectpending
22
#define TYPE_IMX_EPIT "imx.epit"
31
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
23
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
32
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/intc/armv7m_nvic.c
25
--- a/hw/timer/imx_epit.c
34
+++ b/hw/intc/armv7m_nvic.c
26
+++ b/hw/timer/imx_epit.c
35
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
27
@@ -XXX,XX +XXX,XX @@ static const IMXClk imx_epit_clocks[] = {
36
switch (offset) {
28
*/
37
case 4: /* Interrupt Control Type. */
29
static void imx_epit_update_int(IMXEPITState *s)
38
return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
30
{
39
+ case 0x380 ... 0x3bf: /* NVIC_ITNS<n> */
31
- if (s->sr && (s->cr & CR_OCIEN) && (s->cr & CR_EN)) {
40
+ {
32
+ if ((s->sr & SR_OCIF) && (s->cr & CR_OCIEN) && (s->cr & CR_EN)) {
41
+ int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
33
qemu_irq_raise(s->irq);
42
+ int i;
34
} else {
43
+
35
qemu_irq_lower(s->irq);
44
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
36
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
45
+ goto bad_offset;
37
break;
46
+ }
38
47
+ if (!attrs.secure) {
39
case 1: /* SR - ACK*/
48
+ return 0;
40
- /* writing 1 to OCIF clears the OCIF bit */
49
+ }
41
- if (value & 0x01) {
50
+ val = 0;
42
- s->sr = 0;
51
+ for (i = 0; i < 32 && startvec + i < s->num_irq; i++) {
43
+ /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
52
+ if (s->itns[startvec + i]) {
44
+ if (value & SR_OCIF) {
53
+ val |= (1 << i);
45
+ s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
54
+ }
46
imx_epit_update_int(s);
55
+ }
56
+ return val;
57
+ }
58
case 0xd00: /* CPUID Base. */
59
return cpu->midr;
60
case 0xd04: /* Interrupt Control State. */
61
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
62
ARMCPU *cpu = s->cpu;
63
64
switch (offset) {
65
+ case 0x380 ... 0x3bf: /* NVIC_ITNS<n> */
66
+ {
67
+ int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
68
+ int i;
69
+
70
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
71
+ goto bad_offset;
72
+ }
73
+ if (!attrs.secure) {
74
+ break;
75
+ }
76
+ for (i = 0; i < 32 && startvec + i < s->num_irq; i++) {
77
+ s->itns[startvec + i] = (value >> i) & 1;
78
+ }
79
+ nvic_irq_update(s);
80
+ break;
81
+ }
82
case 0xd04: /* Interrupt Control State. */
83
if (value & (1 << 31)) {
84
armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI);
85
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
86
startvec = offset - 0x180 + NVIC_FIRST_IRQ; /* vector # */
87
88
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
89
- if (s->vectors[startvec + i].enabled) {
90
+ if (s->vectors[startvec + i].enabled &&
91
+ (attrs.secure || s->itns[startvec + i])) {
92
val |= (1 << i);
93
}
94
}
95
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
96
val = 0;
97
startvec = offset - 0x280 + NVIC_FIRST_IRQ; /* vector # */
98
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
99
- if (s->vectors[startvec + i].pending) {
100
+ if (s->vectors[startvec + i].pending &&
101
+ (attrs.secure || s->itns[startvec + i])) {
102
val |= (1 << i);
103
}
104
}
105
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
106
startvec = offset - 0x300 + NVIC_FIRST_IRQ; /* vector # */
107
108
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
109
- if (s->vectors[startvec + i].active) {
110
+ if (s->vectors[startvec + i].active &&
111
+ (attrs.secure || s->itns[startvec + i])) {
112
val |= (1 << i);
113
}
114
}
115
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
116
startvec = offset - 0x400 + NVIC_FIRST_IRQ; /* vector # */
117
118
for (i = 0; i < size && startvec + i < s->num_irq; i++) {
119
- val |= s->vectors[startvec + i].prio << (8 * i);
120
+ if (attrs.secure || s->itns[startvec + i]) {
121
+ val |= s->vectors[startvec + i].prio << (8 * i);
122
+ }
123
}
47
}
124
break;
48
break;
125
case 0xd18 ... 0xd23: /* System Handler Priority. */
49
@@ -XXX,XX +XXX,XX @@ static void imx_epit_cmp(void *opaque)
126
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
50
IMXEPITState *s = IMX_EPIT(opaque);
127
startvec = 8 * (offset - 0x180) + NVIC_FIRST_IRQ;
51
128
52
DPRINTF("sr was %d\n", s->sr);
129
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
53
-
130
- if (value & (1 << i)) {
54
- s->sr = 1;
131
+ if (value & (1 << i) &&
55
+ /* Set interrupt status bit SR.OCIF and update the interrupt state */
132
+ (attrs.secure || s->itns[startvec + i])) {
56
+ s->sr |= SR_OCIF;
133
s->vectors[startvec + i].enabled = setval;
57
imx_epit_update_int(s);
134
}
135
}
136
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
137
startvec = 8 * (offset - 0x280) + NVIC_FIRST_IRQ; /* vector # */
138
139
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
140
- if (value & (1 << i)) {
141
+ if (value & (1 << i) &&
142
+ (attrs.secure || s->itns[startvec + i])) {
143
s->vectors[startvec + i].pending = setval;
144
}
145
}
146
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
147
startvec = 8 * (offset - 0x400) + NVIC_FIRST_IRQ; /* vector # */
148
149
for (i = 0; i < size && startvec + i < s->num_irq; i++) {
150
- set_prio(s, startvec + i, (value >> (i * 8)) & 0xff);
151
+ if (attrs.secure || s->itns[startvec + i]) {
152
+ set_prio(s, startvec + i, (value >> (i * 8)) & 0xff);
153
+ }
154
}
155
nvic_irq_update(s);
156
return MEMTX_OK;
157
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_nvic_security = {
158
VMSTATE_STRUCT_ARRAY(sec_vectors, NVICState, NVIC_INTERNAL_VECTORS, 1,
159
vmstate_VecInfo, VecInfo),
160
VMSTATE_UINT32(prigroup[M_REG_S], NVICState),
161
+ VMSTATE_BOOL_ARRAY(itns, NVICState, NVIC_MAX_VECTORS),
162
VMSTATE_END_OF_LIST()
163
}
164
};
165
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
166
s->vectpending = 0;
167
s->vectpending_is_s_banked = false;
168
s->vectpending_prio = NVIC_NOEXC_PRIO;
169
+
170
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
171
+ memset(s->itns, 0, sizeof(s->itns));
172
+ } else {
173
+ /* This state is constant and not guest accessible in a non-security
174
+ * NVIC; we set the bits to true to avoid having to do a feature
175
+ * bit check in the NVIC enable/pend/etc register accessors.
176
+ */
177
+ int i;
178
+
179
+ for (i = NVIC_FIRST_IRQ; i < ARRAY_SIZE(s->itns); i++) {
180
+ s->itns[i] = true;
181
+ }
182
+ }
183
}
58
}
184
59
185
static void nvic_systick_trigger(void *opaque, int n, int level)
186
--
60
--
187
2.7.4
61
2.25.1
188
189
1
In v7M, the fixed-priority exceptions are:
1
From: Axel Heider <axel.heider@hensoldt.net>
2
Reset: -3
3
NMI: -2
4
HardFault: -1
5
2
6
In v8M, this changes because Secure HardFault may need
3
The interrupt state can change due to:
7
to be prioritised above NMI:
4
- reset clears both SR.OCIF and CR.OCIE
8
Reset: -4
5
- write to CR.EN or CR.OCIE
9
Secure HardFault if AIRCR.BFHFNMINS == 1: -3
10
NMI: -2
11
Secure HardFault if AIRCR.BFHFNMINS == 0: -1
12
NonSecure HardFault: -1
13
6
14
Make these changes, including support for changing the
7
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
15
priority of Secure HardFault as AIRCR.BFHFNMINS changes.
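(Illustrative aside, not part of the patch: the priority selection described above reduces to a single conditional on AIRCR.BFHFNMINS. The helper name below is hypothetical; the mask macro is QEMU's existing R_V7M_AIRCR_BFHFNMINS_MASK, and the same -3/-1 choice appears in the diff that follows.)

    /* Sketch only: Secure HardFault priority as a function of BFHFNMINS. */
    static int v8m_secure_hardfault_prio(uint32_t aircr)
    {
        /* BFHFNMINS == 1: Secure HardFault sits above NMI at -3;
         * BFHFNMINS == 0: it keeps the v7M value of -1.
         */
        return (aircr & R_V7M_AIRCR_BFHFNMINS_MASK) ? -3 : -1;
    }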
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/timer/imx_epit.c | 16 ++++++++++++----
12
1 file changed, 12 insertions(+), 4 deletions(-)
16
13
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 1505240046-11454-14-git-send-email-peter.maydell@linaro.org
20
---
21
hw/intc/armv7m_nvic.c | 22 +++++++++++++++++++---
22
1 file changed, 19 insertions(+), 3 deletions(-)
23
24
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
25
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/intc/armv7m_nvic.c
16
--- a/hw/timer/imx_epit.c
27
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/hw/timer/imx_epit.c
28
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
18
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
29
(R_V7M_AIRCR_SYSRESETREQS_MASK |
19
if (s->cr & CR_SWR) {
30
R_V7M_AIRCR_BFHFNMINS_MASK |
20
/* handle the reset */
31
R_V7M_AIRCR_PRIS_MASK);
21
imx_epit_reset(DEVICE(s));
32
+ /* BFHFNMINS changes the priority of Secure HardFault */
22
- /*
33
+ if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
23
- * TODO: could we 'break' here? following operations appear
34
+ s->sec_vectors[ARMV7M_EXCP_HARD].prio = -3;
24
- * to duplicate the work imx_epit_reset() already did.
35
+ } else {
25
- */
36
+ s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
37
+ }
38
}
39
nvic_irq_update(s);
40
}
26
}
41
@@ -XXX,XX +XXX,XX @@ static int nvic_post_load(void *opaque, int version_id)
27
42
{
28
+ /*
43
NVICState *s = opaque;
29
+ * The interrupt state can change due to:
44
unsigned i;
30
+ * - reset clears both SR.OCIF and CR.OCIE
45
+ int resetprio;
31
+ * - write to CR.EN or CR.OCIE
46
32
+ */
47
/* Check for out of range priority settings */
33
+ imx_epit_update_int(s);
48
- if (s->vectors[ARMV7M_EXCP_RESET].prio != -3 ||
49
+ resetprio = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? -4 : -3;
50
+
34
+
51
+ if (s->vectors[ARMV7M_EXCP_RESET].prio != resetprio ||
35
+ /*
52
s->vectors[ARMV7M_EXCP_NMI].prio != -2 ||
36
+ * TODO: could we 'break' here for reset? following operations appear
53
s->vectors[ARMV7M_EXCP_HARD].prio != -1) {
37
+ * to duplicate the work imx_epit_reset() already did.
54
return 1;
55
@@ -XXX,XX +XXX,XX @@ static int nvic_security_post_load(void *opaque, int version_id)
56
int i;
57
58
/* Check for out of range priority settings */
59
- if (s->sec_vectors[ARMV7M_EXCP_HARD].prio != -1) {
60
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].prio != -1
61
+ && s->sec_vectors[ARMV7M_EXCP_HARD].prio != -3) {
62
+ /* We can't cross-check against AIRCR.BFHFNMINS as we don't know
63
+ * if the CPU state has been migrated yet; a mismatch won't
64
+ * cause the emulation to blow up, though.
65
+ */
38
+ */
66
return 1;
39
+
67
}
40
ptimer_transaction_begin(s->timer_cmp);
68
for (i = ARMV7M_EXCP_MEM; i < ARRAY_SIZE(s->sec_vectors); i++) {
41
ptimer_transaction_begin(s->timer_reload);
69
@@ -XXX,XX +XXX,XX @@ static Property props_nvic[] = {
70
71
static void armv7m_nvic_reset(DeviceState *dev)
72
{
73
+ int resetprio;
74
NVICState *s = NVIC(dev);
75
76
s->vectors[ARMV7M_EXCP_NMI].enabled = 1;
77
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
78
s->vectors[ARMV7M_EXCP_PENDSV].enabled = 1;
79
s->vectors[ARMV7M_EXCP_SYSTICK].enabled = 1;
80
81
- s->vectors[ARMV7M_EXCP_RESET].prio = -3;
82
+ resetprio = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? -4 : -3;
83
+ s->vectors[ARMV7M_EXCP_RESET].prio = resetprio;
84
s->vectors[ARMV7M_EXCP_NMI].prio = -2;
85
s->vectors[ARMV7M_EXCP_HARD].prio = -1;
86
42
87
--
43
--
88
2.7.4
44
2.25.1
89
90
1
Don't use the old_mmio struct in memory region ops.
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-5-git-send-email-peter.maydell@linaro.org
6
---
6
---
7
hw/timer/omap_gptimer.c | 49 +++++++++++++++++++++++++++++++++++++------------
7
hw/timer/imx_epit.c | 20 ++++++++++++++------
8
1 file changed, 37 insertions(+), 12 deletions(-)
8
1 file changed, 14 insertions(+), 6 deletions(-)
9
9
10
diff --git a/hw/timer/omap_gptimer.c b/hw/timer/omap_gptimer.c
10
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
11
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/timer/omap_gptimer.c
12
--- a/hw/timer/imx_epit.c
13
+++ b/hw/timer/omap_gptimer.c
13
+++ b/hw/timer/imx_epit.c
14
@@ -XXX,XX +XXX,XX @@ static void omap_gp_timer_writeh(void *opaque, hwaddr addr,
14
@@ -XXX,XX +XXX,XX @@ static void imx_epit_set_freq(IMXEPITState *s)
15
s->writeh = (uint16_t) value;
15
/*
16
* This is called both on hardware (device) reset and software reset.
17
*/
18
-static void imx_epit_reset(DeviceState *dev)
19
+static void imx_epit_reset(IMXEPITState *s, bool is_hard_reset)
20
{
21
- IMXEPITState *s = IMX_EPIT(dev);
22
-
23
/* Soft reset doesn't touch some bits; hard reset clears them */
24
- s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
25
+ if (is_hard_reset) {
26
+ s->cr = 0;
27
+ } else {
28
+ s->cr &= (CR_EN|CR_ENMOD|CR_STOPEN|CR_DOZEN|CR_WAITEN|CR_DBGEN);
29
+ }
30
s->sr = 0;
31
s->lr = EPIT_TIMER_MAX;
32
s->cmp = 0;
33
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
34
s->cr = value & 0x03ffffff;
35
if (s->cr & CR_SWR) {
36
/* handle the reset */
37
- imx_epit_reset(DEVICE(s));
38
+ imx_epit_reset(s, false);
39
}
40
41
/*
42
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
43
s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
16
}
44
}
17
45
18
+static uint64_t omap_gp_timer_readfn(void *opaque, hwaddr addr,
46
+static void imx_epit_dev_reset(DeviceState *dev)
19
+ unsigned size)
20
+{
47
+{
21
+ switch (size) {
48
+ IMXEPITState *s = IMX_EPIT(dev);
22
+ case 1:
49
+ imx_epit_reset(s, true);
23
+ return omap_badwidth_read32(opaque, addr);
24
+ case 2:
25
+ return omap_gp_timer_readh(opaque, addr);
26
+ case 4:
27
+ return omap_gp_timer_readw(opaque, addr);
28
+ default:
29
+ g_assert_not_reached();
30
+ }
31
+}
50
+}
32
+
51
+
33
+static void omap_gp_timer_writefn(void *opaque, hwaddr addr,
52
static void imx_epit_class_init(ObjectClass *klass, void *data)
34
+ uint64_t value, unsigned size)
53
{
35
+{
54
DeviceClass *dc = DEVICE_CLASS(klass);
36
+ switch (size) {
55
37
+ case 1:
56
dc->realize = imx_epit_realize;
38
+ omap_badwidth_write32(opaque, addr, value);
57
- dc->reset = imx_epit_reset;
39
+ break;
58
+ dc->reset = imx_epit_dev_reset;
40
+ case 2:
59
dc->vmsd = &vmstate_imx_timer_epit;
41
+ omap_gp_timer_writeh(opaque, addr, value);
60
dc->desc = "i.MX periodic timer";
42
+ break;
61
}
43
+ case 4:
44
+ omap_gp_timer_write(opaque, addr, value);
45
+ break;
46
+ default:
47
+ g_assert_not_reached();
48
+ }
49
+}
50
+
51
static const MemoryRegionOps omap_gp_timer_ops = {
52
- .old_mmio = {
53
- .read = {
54
- omap_badwidth_read32,
55
- omap_gp_timer_readh,
56
- omap_gp_timer_readw,
57
- },
58
- .write = {
59
- omap_badwidth_write32,
60
- omap_gp_timer_writeh,
61
- omap_gp_timer_write,
62
- },
63
- },
64
+ .read = omap_gp_timer_readfn,
65
+ .write = omap_gp_timer_writefn,
66
+ .valid.min_access_size = 1,
67
+ .valid.max_access_size = 4,
68
.endianness = DEVICE_NATIVE_ENDIAN,
69
};
70
71
--
62
--
72
2.7.4
63
2.25.1
73
74
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
Modelled Microsemi's Smartfusion2 SPI controller.
3
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
4
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
6
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20170920201737.25723-4-f4bug@amsat.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
6
---
11
hw/ssi/Makefile.objs | 1 +
7
hw/timer/imx_epit.c | 215 ++++++++++++++++++++++++--------------------
12
include/hw/ssi/mss-spi.h | 58 +++++++
8
1 file changed, 117 insertions(+), 98 deletions(-)
13
hw/ssi/mss-spi.c | 404 +++++++++++++++++++++++++++++++++++++++++++++++
14
3 files changed, 463 insertions(+)
15
create mode 100644 include/hw/ssi/mss-spi.h
16
create mode 100644 hw/ssi/mss-spi.c
17
9
18
diff --git a/hw/ssi/Makefile.objs b/hw/ssi/Makefile.objs
10
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
19
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/ssi/Makefile.objs
12
--- a/hw/timer/imx_epit.c
21
+++ b/hw/ssi/Makefile.objs
13
+++ b/hw/timer/imx_epit.c
22
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_XILINX_SPI) += xilinx_spi.o
14
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
23
common-obj-$(CONFIG_XILINX_SPIPS) += xilinx_spips.o
15
}
24
common-obj-$(CONFIG_ASPEED_SOC) += aspeed_smc.o
16
}
25
common-obj-$(CONFIG_STM32F2XX_SPI) += stm32f2xx_spi.o
17
26
+common-obj-$(CONFIG_MSF2) += mss-spi.o
18
+static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
27
19
+{
28
obj-$(CONFIG_OMAP) += omap_spi.o
20
+ uint32_t oldcr = s->cr;
29
obj-$(CONFIG_IMX) += imx_spi.o
21
+
30
diff --git a/include/hw/ssi/mss-spi.h b/include/hw/ssi/mss-spi.h
22
+ s->cr = value & 0x03ffffff;
31
new file mode 100644
23
+
32
index XXXXXXX..XXXXXXX
24
+ if (s->cr & CR_SWR) {
33
--- /dev/null
25
+ /* handle the reset */
34
+++ b/include/hw/ssi/mss-spi.h
26
+ imx_epit_reset(s, false);
35
@@ -XXX,XX +XXX,XX @@
27
+ }
36
+/*
37
+ * Microsemi SmartFusion2 SPI
38
+ *
39
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
40
+ *
41
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
42
+ * of this software and associated documentation files (the "Software"), to deal
43
+ * in the Software without restriction, including without limitation the rights
44
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
45
+ * copies of the Software, and to permit persons to whom the Software is
46
+ * furnished to do so, subject to the following conditions:
47
+ *
48
+ * The above copyright notice and this permission notice shall be included in
49
+ * all copies or substantial portions of the Software.
50
+ *
51
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
52
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
53
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
54
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
55
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
56
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
57
+ * THE SOFTWARE.
58
+ */
59
+
60
+#ifndef HW_MSS_SPI_H
61
+#define HW_MSS_SPI_H
62
+
63
+#include "hw/sysbus.h"
64
+#include "hw/ssi/ssi.h"
65
+#include "qemu/fifo32.h"
66
+
67
+#define TYPE_MSS_SPI "mss-spi"
68
+#define MSS_SPI(obj) OBJECT_CHECK(MSSSpiState, (obj), TYPE_MSS_SPI)
69
+
70
+#define R_SPI_MAX 16
71
+
72
+typedef struct MSSSpiState {
73
+ SysBusDevice parent_obj;
74
+
75
+ MemoryRegion mmio;
76
+
77
+ qemu_irq irq;
78
+
79
+ qemu_irq cs_line;
80
+
81
+ SSIBus *spi;
82
+
83
+ Fifo32 rx_fifo;
84
+ Fifo32 tx_fifo;
85
+
86
+ int fifo_depth;
87
+ uint32_t frame_count;
88
+ bool enabled;
89
+
90
+ uint32_t regs[R_SPI_MAX];
91
+} MSSSpiState;
92
+
93
+#endif /* HW_MSS_SPI_H */
94
diff --git a/hw/ssi/mss-spi.c b/hw/ssi/mss-spi.c
95
new file mode 100644
96
index XXXXXXX..XXXXXXX
97
--- /dev/null
98
+++ b/hw/ssi/mss-spi.c
99
@@ -XXX,XX +XXX,XX @@
100
+/*
101
+ * Block model of SPI controller present in
102
+ * Microsemi's SmartFusion2 and SmartFusion SoCs.
103
+ *
104
+ * Copyright (C) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
105
+ *
106
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
107
+ * of this software and associated documentation files (the "Software"), to deal
108
+ * in the Software without restriction, including without limitation the rights
109
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
110
+ * copies of the Software, and to permit persons to whom the Software is
111
+ * furnished to do so, subject to the following conditions:
112
+ *
113
+ * The above copyright notice and this permission notice shall be included in
114
+ * all copies or substantial portions of the Software.
115
+ *
116
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
117
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
118
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
119
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
120
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
121
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
122
+ * THE SOFTWARE.
123
+ */
124
+
125
+#include "qemu/osdep.h"
126
+#include "hw/ssi/mss-spi.h"
127
+#include "qemu/log.h"
128
+
129
+#ifndef MSS_SPI_ERR_DEBUG
130
+#define MSS_SPI_ERR_DEBUG 0
131
+#endif
132
+
133
+#define DB_PRINT_L(lvl, fmt, args...) do { \
134
+ if (MSS_SPI_ERR_DEBUG >= lvl) { \
135
+ qemu_log("%s: " fmt "\n", __func__, ## args); \
136
+ } \
137
+} while (0);
138
+
139
+#define DB_PRINT(fmt, args...) DB_PRINT_L(1, fmt, ## args)
140
+
141
+#define FIFO_CAPACITY 32
142
+
143
+#define R_SPI_CONTROL 0
144
+#define R_SPI_DFSIZE 1
145
+#define R_SPI_STATUS 2
146
+#define R_SPI_INTCLR 3
147
+#define R_SPI_RX 4
148
+#define R_SPI_TX 5
149
+#define R_SPI_CLKGEN 6
150
+#define R_SPI_SS 7
151
+#define R_SPI_MIS 8
152
+#define R_SPI_RIS 9
153
+
154
+#define S_TXDONE (1 << 0)
155
+#define S_RXRDY (1 << 1)
156
+#define S_RXCHOVRF (1 << 2)
157
+#define S_RXFIFOFUL (1 << 4)
158
+#define S_RXFIFOFULNXT (1 << 5)
159
+#define S_RXFIFOEMP (1 << 6)
160
+#define S_RXFIFOEMPNXT (1 << 7)
161
+#define S_TXFIFOFUL (1 << 8)
162
+#define S_TXFIFOFULNXT (1 << 9)
163
+#define S_TXFIFOEMP (1 << 10)
164
+#define S_TXFIFOEMPNXT (1 << 11)
165
+#define S_FRAMESTART (1 << 12)
166
+#define S_SSEL (1 << 13)
167
+#define S_ACTIVE (1 << 14)
168
+
169
+#define C_ENABLE (1 << 0)
170
+#define C_MODE (1 << 1)
171
+#define C_INTRXDATA (1 << 4)
172
+#define C_INTTXDATA (1 << 5)
173
+#define C_INTRXOVRFLO (1 << 6)
174
+#define C_SPS (1 << 26)
175
+#define C_BIGFIFO (1 << 29)
176
+#define C_RESET (1 << 31)
177
+
178
+#define FRAMESZ_MASK 0x1F
179
+#define FMCOUNT_MASK 0x00FFFF00
180
+#define FMCOUNT_SHIFT 8
181
+
182
+static void txfifo_reset(MSSSpiState *s)
183
+{
184
+ fifo32_reset(&s->tx_fifo);
185
+
186
+ s->regs[R_SPI_STATUS] &= ~S_TXFIFOFUL;
187
+ s->regs[R_SPI_STATUS] |= S_TXFIFOEMP;
188
+}
189
+
190
+static void rxfifo_reset(MSSSpiState *s)
191
+{
192
+ fifo32_reset(&s->rx_fifo);
193
+
194
+ s->regs[R_SPI_STATUS] &= ~S_RXFIFOFUL;
195
+ s->regs[R_SPI_STATUS] |= S_RXFIFOEMP;
196
+}
197
+
198
+static void set_fifodepth(MSSSpiState *s)
199
+{
200
+ unsigned int size = s->regs[R_SPI_DFSIZE] & FRAMESZ_MASK;
201
+
202
+ if (size <= 8) {
203
+ s->fifo_depth = 32;
204
+ } else if (size <= 16) {
205
+ s->fifo_depth = 16;
206
+ } else if (size <= 32) {
207
+ s->fifo_depth = 8;
208
+ } else {
209
+ s->fifo_depth = 4;
210
+ }
211
+}
212
+
213
+static void update_mis(MSSSpiState *s)
214
+{
215
+ uint32_t reg = s->regs[R_SPI_CONTROL];
216
+ uint32_t tmp;
217
+
28
+
218
+ /*
29
+ /*
219
+ * form the Control register interrupt enable bits
30
+ * The interrupt state can change due to:
220
+ * same as RIS, MIS and Interrupt clear registers for simplicity
31
+ * - reset clears both SR.OCIF and CR.OCIE
32
+ * - write to CR.EN or CR.OCIE
221
+ */
33
+ */
222
+ tmp = ((reg & C_INTRXOVRFLO) >> 4) | ((reg & C_INTRXDATA) >> 3) |
34
+ imx_epit_update_int(s);
223
+ ((reg & C_INTTXDATA) >> 5);
224
+ s->regs[R_SPI_MIS] |= tmp & s->regs[R_SPI_RIS];
225
+}
226
+
227
+static void spi_update_irq(MSSSpiState *s)
228
+{
229
+ int irq;
230
+
231
+ update_mis(s);
232
+ irq = !!(s->regs[R_SPI_MIS]);
233
+
234
+ qemu_set_irq(s->irq, irq);
235
+}
236
+
237
+static void mss_spi_reset(DeviceState *d)
238
+{
239
+ MSSSpiState *s = MSS_SPI(d);
240
+
241
+ memset(s->regs, 0, sizeof s->regs);
242
+ s->regs[R_SPI_CONTROL] = 0x80000102;
243
+ s->regs[R_SPI_DFSIZE] = 0x4;
244
+ s->regs[R_SPI_STATUS] = S_SSEL | S_TXFIFOEMP | S_RXFIFOEMP;
245
+ s->regs[R_SPI_CLKGEN] = 0x7;
246
+ s->regs[R_SPI_RIS] = 0x0;
247
+
248
+ s->fifo_depth = 4;
249
+ s->frame_count = 1;
250
+ s->enabled = false;
251
+
252
+ rxfifo_reset(s);
253
+ txfifo_reset(s);
254
+}
255
+
256
+static uint64_t
257
+spi_read(void *opaque, hwaddr addr, unsigned int size)
258
+{
259
+ MSSSpiState *s = opaque;
260
+ uint32_t ret = 0;
261
+
262
+ addr >>= 2;
263
+ switch (addr) {
264
+ case R_SPI_RX:
265
+ s->regs[R_SPI_STATUS] &= ~S_RXFIFOFUL;
266
+ s->regs[R_SPI_STATUS] &= ~S_RXCHOVRF;
267
+ ret = fifo32_pop(&s->rx_fifo);
268
+ if (fifo32_is_empty(&s->rx_fifo)) {
269
+ s->regs[R_SPI_STATUS] |= S_RXFIFOEMP;
270
+ }
271
+ break;
272
+
273
+ case R_SPI_MIS:
274
+ update_mis(s);
275
+ ret = s->regs[R_SPI_MIS];
276
+ break;
277
+
278
+ default:
279
+ if (addr < ARRAY_SIZE(s->regs)) {
280
+ ret = s->regs[addr];
281
+ } else {
282
+ qemu_log_mask(LOG_GUEST_ERROR,
283
+ "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__,
284
+ addr * 4);
285
+ return ret;
286
+ }
287
+ break;
288
+ }
289
+
290
+ DB_PRINT("addr=0x%" HWADDR_PRIx " = 0x%" PRIx32, addr * 4, ret);
291
+ spi_update_irq(s);
292
+ return ret;
293
+}
294
+
295
+static void assert_cs(MSSSpiState *s)
296
+{
297
+ qemu_set_irq(s->cs_line, 0);
298
+}
299
+
300
+static void deassert_cs(MSSSpiState *s)
301
+{
302
+ qemu_set_irq(s->cs_line, 1);
303
+}
304
+
305
+static void spi_flush_txfifo(MSSSpiState *s)
306
+{
307
+ uint32_t tx;
308
+ uint32_t rx;
309
+ bool sps = !!(s->regs[R_SPI_CONTROL] & C_SPS);
310
+
35
+
311
+ /*
36
+ /*
312
+ * Chip Select(CS) is automatically controlled by this controller.
37
+ * TODO: could we 'break' here for reset? following operations appear
313
+ * If SPS bit is set in Control register then CS is asserted
38
+ * to duplicate the work imx_epit_reset() already did.
314
+ * until all the frames set in frame count of Control register are
315
+ * transferred. If SPS is not set then CS pulses between frames.
316
+ * Note that Slave Select register specifies which of the CS line
317
+ * has to be controlled automatically by controller. Bits SS[7:1] are for
318
+ * masters in FPGA fabric since we model only Microcontroller subsystem
319
+ * of Smartfusion2 we control only one CS(SS[0]) line.
320
+ */
39
+ */
321
+ while (!fifo32_is_empty(&s->tx_fifo) && s->frame_count) {
40
+
322
+ assert_cs(s);
41
+ ptimer_transaction_begin(s->timer_cmp);
323
+
42
+ ptimer_transaction_begin(s->timer_reload);
324
+ s->regs[R_SPI_STATUS] &= ~(S_TXDONE | S_RXRDY);
43
+
325
+
44
+ /* Update the frequency. Has been done already in case of a reset. */
326
+ tx = fifo32_pop(&s->tx_fifo);
45
+ if (!(s->cr & CR_SWR)) {
327
+ DB_PRINT("data tx:0x%" PRIx32, tx);
46
+ imx_epit_set_freq(s);
328
+ rx = ssi_transfer(s->spi, tx);
47
+ }
329
+ DB_PRINT("data rx:0x%" PRIx32, rx);
48
+
330
+
49
+ if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
331
+ if (fifo32_num_used(&s->rx_fifo) == s->fifo_depth) {
50
+ if (s->cr & CR_ENMOD) {
332
+ s->regs[R_SPI_STATUS] |= S_RXCHOVRF;
51
+ if (s->cr & CR_RLD) {
333
+ s->regs[R_SPI_RIS] |= S_RXCHOVRF;
52
+ ptimer_set_limit(s->timer_reload, s->lr, 1);
334
+ } else {
53
+ ptimer_set_limit(s->timer_cmp, s->lr, 1);
335
+ fifo32_push(&s->rx_fifo, rx);
54
+ } else {
336
+ s->regs[R_SPI_STATUS] &= ~S_RXFIFOEMP;
55
+ ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
337
+ if (fifo32_num_used(&s->rx_fifo) == (s->fifo_depth - 1)) {
56
+ ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
338
+ s->regs[R_SPI_STATUS] |= S_RXFIFOFULNXT;
339
+ } else if (fifo32_num_used(&s->rx_fifo) == s->fifo_depth) {
340
+ s->regs[R_SPI_STATUS] |= S_RXFIFOFUL;
341
+ }
57
+ }
342
+ }
58
+ }
343
+ s->frame_count--;
59
+
344
+ if (!sps) {
60
+ imx_epit_reload_compare_timer(s);
345
+ deassert_cs(s);
61
+ ptimer_run(s->timer_reload, 0);
62
+ if (s->cr & CR_OCIEN) {
63
+ ptimer_run(s->timer_cmp, 0);
64
+ } else {
65
+ ptimer_stop(s->timer_cmp);
346
+ }
66
+ }
347
+ }
67
+ } else if (!(s->cr & CR_EN)) {
348
+
68
+ /* stop both timers */
349
+ if (!s->frame_count) {
69
+ ptimer_stop(s->timer_reload);
350
+ s->frame_count = (s->regs[R_SPI_CONTROL] & FMCOUNT_MASK) >>
70
+ ptimer_stop(s->timer_cmp);
351
+ FMCOUNT_SHIFT;
71
+ } else if (s->cr & CR_OCIEN) {
352
+ deassert_cs(s);
72
+ if (!(oldcr & CR_OCIEN)) {
353
+ s->regs[R_SPI_RIS] |= S_TXDONE | S_RXRDY;
73
+ imx_epit_reload_compare_timer(s);
354
+ s->regs[R_SPI_STATUS] |= S_TXDONE | S_RXRDY;
74
+ ptimer_run(s->timer_cmp, 0);
355
+ }
356
+}
357
+
358
+static void spi_write(void *opaque, hwaddr addr,
359
+ uint64_t val64, unsigned int size)
360
+{
361
+ MSSSpiState *s = opaque;
362
+ uint32_t value = val64;
363
+
364
+ DB_PRINT("addr=0x%" HWADDR_PRIx " =0x%" PRIx32, addr, value);
365
+ addr >>= 2;
366
+
367
+ switch (addr) {
368
+ case R_SPI_TX:
369
+ /* adding to already full FIFO */
370
+ if (fifo32_num_used(&s->tx_fifo) == s->fifo_depth) {
371
+ break;
372
+ }
75
+ }
373
+ s->regs[R_SPI_STATUS] &= ~S_TXFIFOEMP;
76
+ } else {
374
+ fifo32_push(&s->tx_fifo, value);
77
+ ptimer_stop(s->timer_cmp);
375
+ if (fifo32_num_used(&s->tx_fifo) == (s->fifo_depth - 1)) {
78
+ }
376
+ s->regs[R_SPI_STATUS] |= S_TXFIFOFULNXT;
79
+
377
+ } else if (fifo32_num_used(&s->tx_fifo) == s->fifo_depth) {
80
+ ptimer_transaction_commit(s->timer_cmp);
378
+ s->regs[R_SPI_STATUS] |= S_TXFIFOFUL;
81
+ ptimer_transaction_commit(s->timer_reload);
379
+ }
82
+}
380
+ if (s->enabled) {
83
+
381
+ spi_flush_txfifo(s);
84
+static void imx_epit_write_sr(IMXEPITState *s, uint32_t value)
382
+ }
85
+{
383
+ break;
86
+ /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
384
+
87
+ if (value & SR_OCIF) {
385
+ case R_SPI_CONTROL:
88
+ s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
386
+ s->regs[R_SPI_CONTROL] = value;
89
+ imx_epit_update_int(s);
387
+ if (value & C_BIGFIFO) {
90
+ }
388
+ set_fifodepth(s);
91
+}
389
+ } else {
92
+
390
+ s->fifo_depth = 4;
93
+static void imx_epit_write_lr(IMXEPITState *s, uint32_t value)
391
+ }
94
+{
392
+ s->enabled = value & C_ENABLE;
95
+ s->lr = value;
393
+ s->frame_count = (value & FMCOUNT_MASK) >> FMCOUNT_SHIFT;
96
+
394
+ if (value & C_RESET) {
97
+ ptimer_transaction_begin(s->timer_cmp);
395
+ mss_spi_reset(DEVICE(s));
98
+ ptimer_transaction_begin(s->timer_reload);
396
+ }
99
+ if (s->cr & CR_RLD) {
397
+ break;
100
+ /* Also set the limit if the LRD bit is set */
398
+
101
+ /* If IOVW bit is set then set the timer value */
399
+ case R_SPI_DFSIZE:
102
+ ptimer_set_limit(s->timer_reload, s->lr, s->cr & CR_IOVW);
400
+ if (s->enabled) {
103
+ ptimer_set_limit(s->timer_cmp, s->lr, 0);
401
+ break;
104
+ } else if (s->cr & CR_IOVW) {
402
+ }
105
+ /* If IOVW bit is set then set the timer value */
403
+ s->regs[R_SPI_DFSIZE] = value;
106
+ ptimer_set_count(s->timer_reload, s->lr);
404
+ break;
107
+ }
405
+
108
+ /*
406
+ case R_SPI_INTCLR:
109
+ * Commit the change to s->timer_reload, so it can propagate. Otherwise
407
+ s->regs[R_SPI_INTCLR] = value;
110
+ * the timer interrupt may not fire properly. The commit must happen
408
+ if (value & S_TXDONE) {
111
+ * before calling imx_epit_reload_compare_timer(), which reads
409
+ s->regs[R_SPI_RIS] &= ~S_TXDONE;
112
+ * s->timer_reload internally again.
410
+ }
113
+ */
411
+ if (value & S_RXRDY) {
114
+ ptimer_transaction_commit(s->timer_reload);
412
+ s->regs[R_SPI_RIS] &= ~S_RXRDY;
115
+ imx_epit_reload_compare_timer(s);
413
+ }
116
+ ptimer_transaction_commit(s->timer_cmp);
414
+ if (value & S_RXCHOVRF) {
117
+}
415
+ s->regs[R_SPI_RIS] &= ~S_RXCHOVRF;
118
+
416
+ }
119
+static void imx_epit_write_cmp(IMXEPITState *s, uint32_t value)
417
+ break;
120
+{
418
+
121
+ s->cmp = value;
419
+ case R_SPI_MIS:
122
+
420
+ case R_SPI_STATUS:
123
+ ptimer_transaction_begin(s->timer_cmp);
421
+ case R_SPI_RIS:
124
+ imx_epit_reload_compare_timer(s);
422
+ qemu_log_mask(LOG_GUEST_ERROR,
125
+ ptimer_transaction_commit(s->timer_cmp);
423
+ "%s: Write to read only register 0x%" HWADDR_PRIx "\n",
126
+}
424
+ __func__, addr * 4);
127
+
425
+ break;
128
static void imx_epit_write(void *opaque, hwaddr offset, uint64_t value,
426
+
129
unsigned size)
427
+ default:
130
{
428
+ if (addr < ARRAY_SIZE(s->regs)) {
131
IMXEPITState *s = IMX_EPIT(opaque);
429
+ s->regs[addr] = value;
132
- uint64_t oldcr;
430
+ } else {
133
431
+ qemu_log_mask(LOG_GUEST_ERROR,
134
DPRINTF("(%s, value = 0x%08x)\n", imx_epit_reg_name(offset >> 2),
432
+ "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__,
135
(uint32_t)value);
433
+ addr * 4);
136
434
+ }
137
switch (offset >> 2) {
435
+ break;
138
case 0: /* CR */
436
+ }
139
-
437
+
140
- oldcr = s->cr;
438
+ spi_update_irq(s);
141
- s->cr = value & 0x03ffffff;
439
+}
142
- if (s->cr & CR_SWR) {
440
+
143
- /* handle the reset */
441
+static const MemoryRegionOps spi_ops = {
144
- imx_epit_reset(s, false);
442
+ .read = spi_read,
145
- }
443
+ .write = spi_write,
146
-
444
+ .endianness = DEVICE_NATIVE_ENDIAN,
147
- /*
445
+ .valid = {
148
- * The interrupt state can change due to:
446
+ .min_access_size = 1,
149
- * - reset clears both SR.OCIF and CR.OCIE
447
+ .max_access_size = 4
150
- * - write to CR.EN or CR.OCIE
448
+ }
151
- */
449
+};
152
- imx_epit_update_int(s);
450
+
153
-
451
+static void mss_spi_realize(DeviceState *dev, Error **errp)
154
- /*
452
+{
155
- * TODO: could we 'break' here for reset? following operations appear
453
+ MSSSpiState *s = MSS_SPI(dev);
156
- * to duplicate the work imx_epit_reset() already did.
454
+ SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
157
- */
455
+
158
-
456
+ s->spi = ssi_create_bus(dev, "spi");
159
- ptimer_transaction_begin(s->timer_cmp);
457
+
160
- ptimer_transaction_begin(s->timer_reload);
458
+ sysbus_init_irq(sbd, &s->irq);
161
-
459
+ ssi_auto_connect_slaves(dev, &s->cs_line, s->spi);
162
- /* Update the frequency. Has been done already in case of a reset. */
460
+ sysbus_init_irq(sbd, &s->cs_line);
163
- if (!(s->cr & CR_SWR)) {
461
+
164
- imx_epit_set_freq(s);
462
+ memory_region_init_io(&s->mmio, OBJECT(s), &spi_ops, s,
165
- }
463
+ TYPE_MSS_SPI, R_SPI_MAX * 4);
166
-
464
+ sysbus_init_mmio(sbd, &s->mmio);
167
- if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
465
+
168
- if (s->cr & CR_ENMOD) {
466
+ fifo32_create(&s->tx_fifo, FIFO_CAPACITY);
169
- if (s->cr & CR_RLD) {
467
+ fifo32_create(&s->rx_fifo, FIFO_CAPACITY);
170
- ptimer_set_limit(s->timer_reload, s->lr, 1);
468
+}
171
- ptimer_set_limit(s->timer_cmp, s->lr, 1);
469
+
172
- } else {
470
+static const VMStateDescription vmstate_mss_spi = {
173
- ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
471
+ .name = TYPE_MSS_SPI,
174
- ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
472
+ .version_id = 1,
175
- }
473
+ .minimum_version_id = 1,
176
- }
474
+ .fields = (VMStateField[]) {
177
-
475
+ VMSTATE_FIFO32(tx_fifo, MSSSpiState),
178
- imx_epit_reload_compare_timer(s);
476
+ VMSTATE_FIFO32(rx_fifo, MSSSpiState),
179
- ptimer_run(s->timer_reload, 0);
477
+ VMSTATE_UINT32_ARRAY(regs, MSSSpiState, R_SPI_MAX),
180
- if (s->cr & CR_OCIEN) {
478
+ VMSTATE_END_OF_LIST()
181
- ptimer_run(s->timer_cmp, 0);
479
+ }
182
- } else {
480
+};
183
- ptimer_stop(s->timer_cmp);
481
+
184
- }
482
+static void mss_spi_class_init(ObjectClass *klass, void *data)
185
- } else if (!(s->cr & CR_EN)) {
483
+{
186
- /* stop both timers */
484
+ DeviceClass *dc = DEVICE_CLASS(klass);
187
- ptimer_stop(s->timer_reload);
485
+
188
- ptimer_stop(s->timer_cmp);
486
+ dc->realize = mss_spi_realize;
189
- } else if (s->cr & CR_OCIEN) {
487
+ dc->reset = mss_spi_reset;
190
- if (!(oldcr & CR_OCIEN)) {
488
+ dc->vmsd = &vmstate_mss_spi;
191
- imx_epit_reload_compare_timer(s);
489
+}
192
- ptimer_run(s->timer_cmp, 0);
490
+
193
- }
491
+static const TypeInfo mss_spi_info = {
194
- } else {
492
+ .name = TYPE_MSS_SPI,
195
- ptimer_stop(s->timer_cmp);
493
+ .parent = TYPE_SYS_BUS_DEVICE,
196
- }
494
+ .instance_size = sizeof(MSSSpiState),
197
-
495
+ .class_init = mss_spi_class_init,
198
- ptimer_transaction_commit(s->timer_cmp);
496
+};
199
- ptimer_transaction_commit(s->timer_reload);
497
+
200
+ imx_epit_write_cr(s, (uint32_t)value);
498
+static void mss_spi_register_types(void)
201
break;
499
+{
202
500
+ type_register_static(&mss_spi_info);
203
- case 1: /* SR - ACK*/
501
+}
204
- /* writing 1 to SR.OCIF clears this bit and turns the interrupt off */
502
+
205
- if (value & SR_OCIF) {
503
+type_init(mss_spi_register_types)
206
- s->sr = 0; /* SR.OCIF is the only bit in this register anyway */
207
- imx_epit_update_int(s);
208
- }
209
+ case 1: /* SR */
210
+ imx_epit_write_sr(s, (uint32_t)value);
211
break;
212
213
- case 2: /* LR - set ticks */
214
- s->lr = value;
215
-
216
- ptimer_transaction_begin(s->timer_cmp);
217
- ptimer_transaction_begin(s->timer_reload);
218
- if (s->cr & CR_RLD) {
219
- /* Also set the limit if the LRD bit is set */
220
- /* If IOVW bit is set then set the timer value */
221
- ptimer_set_limit(s->timer_reload, s->lr, s->cr & CR_IOVW);
222
- ptimer_set_limit(s->timer_cmp, s->lr, 0);
223
- } else if (s->cr & CR_IOVW) {
224
- /* If IOVW bit is set then set the timer value */
225
- ptimer_set_count(s->timer_reload, s->lr);
226
- }
227
- /*
228
- * Commit the change to s->timer_reload, so it can propagate. Otherwise
229
- * the timer interrupt may not fire properly. The commit must happen
230
- * before calling imx_epit_reload_compare_timer(), which reads
231
- * s->timer_reload internally again.
232
- */
233
- ptimer_transaction_commit(s->timer_reload);
234
- imx_epit_reload_compare_timer(s);
235
- ptimer_transaction_commit(s->timer_cmp);
236
+ case 2: /* LR */
237
+ imx_epit_write_lr(s, (uint32_t)value);
238
break;
239
240
case 3: /* CMP */
241
- s->cmp = value;
242
-
243
- ptimer_transaction_begin(s->timer_cmp);
244
- imx_epit_reload_compare_timer(s);
245
- ptimer_transaction_commit(s->timer_cmp);
246
-
247
+ imx_epit_write_cmp(s, (uint32_t)value);
248
break;
249
250
default:
251
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Bad register at offset 0x%"
252
HWADDR_PRIx "\n", TYPE_IMX_EPIT, __func__, offset);
253
-
254
break;
255
}
256
}
257
+
258
static void imx_epit_cmp(void *opaque)
259
{
260
IMXEPITState *s = IMX_EPIT(opaque);
504
--
261
--
505
2.7.4
262
2.25.1
506
507
1
Drop the use of old_mmio in the omap2_gpio memory ops.
1
From: Axel Heider <axel.heider@hensoldt.net>
2
2
3
The CNT register is a read-only register. There is no need to
4
store its value; it can be calculated on demand.
5
The calculated frequency is needed temporarily only.
6
7
Note that this is a migration compatibility break for all board
8
types that use the EPIT peripheral.
9
10
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-3-git-send-email-peter.maydell@linaro.org
6
---
13
---
7
hw/gpio/omap_gpio.c | 26 ++++++++++++--------------
14
include/hw/timer/imx_epit.h | 2 -
8
1 file changed, 12 insertions(+), 14 deletions(-)
15
hw/timer/imx_epit.c | 73 ++++++++++++++-----------------------
16
2 files changed, 28 insertions(+), 47 deletions(-)
9
17
10
diff --git a/hw/gpio/omap_gpio.c b/hw/gpio/omap_gpio.c
18
diff --git a/include/hw/timer/imx_epit.h b/include/hw/timer/imx_epit.h
11
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/gpio/omap_gpio.c
20
--- a/include/hw/timer/imx_epit.h
13
+++ b/hw/gpio/omap_gpio.c
21
+++ b/include/hw/timer/imx_epit.h
14
@@ -XXX,XX +XXX,XX @@ static void omap2_gpio_module_write(void *opaque, hwaddr addr,
22
@@ -XXX,XX +XXX,XX @@ struct IMXEPITState {
23
uint32_t sr;
24
uint32_t lr;
25
uint32_t cmp;
26
- uint32_t cnt;
27
28
- uint32_t freq;
29
qemu_irq irq;
30
};
31
32
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/timer/imx_epit.c
35
+++ b/hw/timer/imx_epit.c
36
@@ -XXX,XX +XXX,XX @@ static void imx_epit_update_int(IMXEPITState *s)
15
}
37
}
16
}
38
}
17
39
18
-static uint32_t omap2_gpio_module_readp(void *opaque, hwaddr addr)
40
-/*
19
+static uint64_t omap2_gpio_module_readp(void *opaque, hwaddr addr,
41
- * Must be called from within a ptimer_transaction_begin/commit block
20
+ unsigned size)
42
- * for both s->timer_cmp and s->timer_reload.
43
- */
44
-static void imx_epit_set_freq(IMXEPITState *s)
45
+static uint32_t imx_epit_get_freq(IMXEPITState *s)
21
{
46
{
22
return omap2_gpio_module_read(opaque, addr & ~3) >> ((addr & 3) << 3);
47
- uint32_t clksrc;
48
- uint32_t prescaler;
49
-
50
- clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
51
- prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
52
-
53
- s->freq = imx_ccm_get_clock_frequency(s->ccm,
54
- imx_epit_clocks[clksrc]) / prescaler;
55
-
56
- DPRINTF("Setting ptimer frequency to %u\n", s->freq);
57
-
58
- if (s->freq) {
59
- ptimer_set_freq(s->timer_reload, s->freq);
60
- ptimer_set_freq(s->timer_cmp, s->freq);
61
- }
62
+ uint32_t clksrc = extract32(s->cr, CR_CLKSRC_SHIFT, CR_CLKSRC_BITS);
63
+ uint32_t prescaler = 1 + extract32(s->cr, CR_PRESCALE_SHIFT, CR_PRESCALE_BITS);
64
+ uint32_t f_in = imx_ccm_get_clock_frequency(s->ccm, imx_epit_clocks[clksrc]);
65
+ uint32_t freq = f_in / prescaler;
66
+ DPRINTF("ptimer frequency is %u\n", freq);
67
+ return freq;
23
}
68
}
24
69
25
static void omap2_gpio_module_writep(void *opaque, hwaddr addr,
70
/*
26
- uint32_t value)
71
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reset(IMXEPITState *s, bool is_hard_reset)
27
+ uint64_t value, unsigned size)
72
s->sr = 0;
73
s->lr = EPIT_TIMER_MAX;
74
s->cmp = 0;
75
- s->cnt = 0;
76
ptimer_transaction_begin(s->timer_cmp);
77
ptimer_transaction_begin(s->timer_reload);
78
- /* stop both timers */
79
+
80
+ /*
81
+ * The reset switches off the input clock, so even if the CR.EN is still
82
+ * set, the timers are no longer running.
83
+ */
84
+ assert(imx_epit_get_freq(s) == 0);
85
ptimer_stop(s->timer_cmp);
86
ptimer_stop(s->timer_reload);
87
- /* compute new frequency */
88
- imx_epit_set_freq(s);
89
/* init both timers to EPIT_TIMER_MAX */
90
ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
91
ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
92
- if (s->freq && (s->cr & CR_EN)) {
93
- /* if the timer is still enabled, restart it */
94
- ptimer_run(s->timer_reload, 0);
95
- }
96
ptimer_transaction_commit(s->timer_cmp);
97
ptimer_transaction_commit(s->timer_reload);
98
}
99
100
-static uint32_t imx_epit_update_count(IMXEPITState *s)
101
-{
102
- s->cnt = ptimer_get_count(s->timer_reload);
103
-
104
- return s->cnt;
105
-}
106
-
107
static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
28
{
108
{
29
uint32_t cur = 0;
109
IMXEPITState *s = IMX_EPIT(opaque);
30
uint32_t mask = 0xffff;
110
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
31
111
break;
32
+ if (size == 4) {
112
33
+ omap2_gpio_module_write(opaque, addr, value);
113
case 4: /* CNT */
34
+ return;
114
- imx_epit_update_count(s);
35
+ }
115
- reg_value = s->cnt;
36
+
116
+ reg_value = ptimer_get_count(s->timer_reload);
37
switch (addr & ~3) {
117
break;
38
case 0x00:    /* GPIO_REVISION */
118
39
case 0x14:    /* GPIO_SYSSTATUS */
119
default:
40
@@ -XXX,XX +XXX,XX @@ static void omap2_gpio_module_writep(void *opaque, hwaddr addr,
120
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
41
}
121
{
42
122
if ((s->cr & (CR_EN | CR_OCIEN)) == (CR_EN | CR_OCIEN)) {
43
static const MemoryRegionOps omap2_gpio_module_ops = {
123
/* if the compare feature is on and timers are running */
44
- .old_mmio = {
124
- uint32_t tmp = imx_epit_update_count(s);
45
- .read = {
125
+ uint32_t tmp = ptimer_get_count(s->timer_reload);
46
- omap2_gpio_module_readp,
126
uint64_t next;
47
- omap2_gpio_module_readp,
127
if (tmp > s->cmp) {
48
- omap2_gpio_module_read,
128
/* It'll fire in this round of the timer */
49
- },
129
@@ -XXX,XX +XXX,XX @@ static void imx_epit_reload_compare_timer(IMXEPITState *s)
50
- .write = {
130
51
- omap2_gpio_module_writep,
131
static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
52
- omap2_gpio_module_writep,
132
{
53
- omap2_gpio_module_write,
133
+ uint32_t freq = 0;
54
- },
134
uint32_t oldcr = s->cr;
55
- },
135
56
+ .read = omap2_gpio_module_readp,
136
s->cr = value & 0x03ffffff;
57
+ .write = omap2_gpio_module_writep,
137
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
58
+ .valid.min_access_size = 1,
138
ptimer_transaction_begin(s->timer_cmp);
59
+ .valid.max_access_size = 4,
139
ptimer_transaction_begin(s->timer_reload);
60
.endianness = DEVICE_NATIVE_ENDIAN,
140
61
};
141
- /* Update the frequency. Has been done already in case of a reset. */
62
142
+ /*
143
+ * Update the frequency. In case of a reset the input clock was
144
+ * switched off, so this can be skipped.
145
+ */
146
if (!(s->cr & CR_SWR)) {
147
- imx_epit_set_freq(s);
148
+ freq = imx_epit_get_freq(s);
149
+ if (freq) {
150
+ ptimer_set_freq(s->timer_reload, freq);
151
+ ptimer_set_freq(s->timer_cmp, freq);
152
+ }
153
}
154
155
- if (s->freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
156
+ if (freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
157
if (s->cr & CR_ENMOD) {
158
if (s->cr & CR_RLD) {
159
ptimer_set_limit(s->timer_reload, s->lr, 1);
160
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps imx_epit_ops = {
161
162
static const VMStateDescription vmstate_imx_timer_epit = {
163
.name = TYPE_IMX_EPIT,
164
- .version_id = 2,
165
- .minimum_version_id = 2,
166
+ .version_id = 3,
167
+ .minimum_version_id = 3,
168
.fields = (VMStateField[]) {
169
VMSTATE_UINT32(cr, IMXEPITState),
170
VMSTATE_UINT32(sr, IMXEPITState),
171
VMSTATE_UINT32(lr, IMXEPITState),
172
VMSTATE_UINT32(cmp, IMXEPITState),
173
- VMSTATE_UINT32(cnt, IMXEPITState),
174
- VMSTATE_UINT32(freq, IMXEPITState),
175
VMSTATE_PTIMER(timer_reload, IMXEPITState),
176
VMSTATE_PTIMER(timer_cmp, IMXEPITState),
177
VMSTATE_END_OF_LIST()
63
--
178
--
64
2.7.4
179
2.25.1
65
66
1
Update nvic_exec_prio() to support the v8M changes:
1
From: Axel Heider <axel.heider@hensoldt.net>
2
* BASEPRI, FAULTMASK and PRIMASK are all banked
3
* AIRCR.PRIS can affect NS priorities
4
* AIRCR.BFHFNMINS affects FAULTMASK behaviour
5
2
6
These changes mean that it's no longer possible to
3
- fix #1263 for CR writes
7
definitely say that if FAULTMASK is set it overrides
4
- rework compare time handling
8
PRIMASK, and if PRIMASK is set it overrides BASEPRI
5
- The compare timer has to run even if CR.OCIEN is not set,
9
(since if PRIMASK_NS is set and AIRCR.PRIS is set then
6
as SR.OCIF must be updated.
10
whether that 0x80 priority should take effect or the
7
- The compare timer fires exactly once when the
11
priority in BASEPRI_S depends on the value of BASEPRI_S,
8
compare value is less than the current value, but the
12
for instance). So we switch to the same approach used
9
reload value is less than the compare value.
13
by the pseudocode of working through BASEPRI, PRIMASK
10
- The compare timer will never fire if the reload value is
14
and FAULTMASK and overriding the previous values if
11
less than the compare value. Disable it in this case.
15
needed.
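(Illustrative aside on the compare-timer rules listed above; this is a sketch only, not code from the patch, and the helper is hypothetical. The EPIT counts down from the reload limit towards zero, and the compare event fires when the counter passes the compare value.)

    /* Sketch of the three cases for the next compare timeout. */
    static uint64_t next_compare_timeout(uint64_t counter, uint64_t limit,
                                         uint64_t cmp, bool *fires)
    {
        *fires = true;
        if (counter >= cmp) {
            /* fires later in the current round */
            return counter - cmp;
        }
        if (limit >= cmp) {
            /* periodic case: fires again after the next reload */
            return counter + limit - cmp;
        }
        /* reload limit below the compare value: the event never fires */
        *fires = false;
        return 0;
    }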
16
12
13
Signed-off-by: Axel Heider <axel.heider@hensoldt.net>
14
[PMM: fixed minor style nits]
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 1505240046-11454-16-git-send-email-peter.maydell@linaro.org
20
---
17
---
21
hw/intc/armv7m_nvic.c | 51 ++++++++++++++++++++++++++++++++++++++++++---------
18
hw/timer/imx_epit.c | 192 ++++++++++++++++++++++++++------------------
22
1 file changed, 42 insertions(+), 9 deletions(-)
19
1 file changed, 116 insertions(+), 76 deletions(-)
23
20
24
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
21
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
25
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/intc/armv7m_nvic.c
23
--- a/hw/timer/imx_epit.c
27
+++ b/hw/intc/armv7m_nvic.c
24
+++ b/hw/timer/imx_epit.c
28
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
25
@@ -XXX,XX +XXX,XX @@
29
static inline int nvic_exec_prio(NVICState *s)
26
* Originally written by Hans Jiang
30
{
27
* Updated by Peter Chubb
31
CPUARMState *env = &s->cpu->env;
28
* Updated by Jean-Christophe Dubois <jcd@tribudubois.net>
32
- int running;
29
+ * Updated by Axel Heider
33
+ int running = NVIC_NOEXC_PRIO;
30
*
34
31
* This code is licensed under GPL version 2 or later. See
35
- if (env->v7m.faultmask[env->v7m.secure]) {
32
* the COPYING file in the top-level directory.
36
- running = -1;
33
@@ -XXX,XX +XXX,XX @@ static uint64_t imx_epit_read(void *opaque, hwaddr offset, unsigned size)
37
- } else if (env->v7m.primask[env->v7m.secure]) {
34
return reg_value;
38
+ if (env->v7m.basepri[M_REG_NS] > 0) {
35
}
39
+ running = exc_group_prio(s, env->v7m.basepri[M_REG_NS], M_REG_NS);
36
37
-/* Must be called from ptimer_transaction_begin/commit block for s->timer_cmp */
38
-static void imx_epit_reload_compare_timer(IMXEPITState *s)
39
+/*
40
+ * Must be called from a ptimer_transaction_begin/commit block for
41
+ * s->timer_cmp, but outside of a transaction block of s->timer_reload,
42
+ * so the proper counter value is read.
43
+ */
44
+static void imx_epit_update_compare_timer(IMXEPITState *s)
45
{
46
- if ((s->cr & (CR_EN | CR_OCIEN)) == (CR_EN | CR_OCIEN)) {
47
- /* if the compare feature is on and timers are running */
48
- uint32_t tmp = ptimer_get_count(s->timer_reload);
49
- uint64_t next;
50
- if (tmp > s->cmp) {
51
- /* It'll fire in this round of the timer */
52
- next = tmp - s->cmp;
53
- } else { /* catch it next time around */
54
- next = tmp - s->cmp + ((s->cr & CR_RLD) ? EPIT_TIMER_MAX : s->lr);
55
+ uint64_t counter = 0;
56
+ bool is_oneshot = false;
57
+ /*
58
+ * The compare timer only has to run if the timer peripheral is active
59
+ * and there is an input clock. Otherwise it can be switched off.
60
+ */
61
+ bool is_active = (s->cr & CR_EN) && imx_epit_get_freq(s);
62
+ if (is_active) {
63
+ /*
64
+ * Calculate next timeout for compare timer. Reading the reload
65
+ * counter returns proper results only if pending transactions
66
+ * on it are committed here. Otherwise stale values would be read.
67
+ */
68
+ counter = ptimer_get_count(s->timer_reload);
69
+ uint64_t limit = ptimer_get_limit(s->timer_cmp);
70
+ /*
71
+ * The compare timer is a periodic timer if the limit is at least
72
+ * the compare value. Otherwise it may fire at most once in the
73
+ * current round.
74
+ */
75
+ is_oneshot = (limit < s->cmp);
76
+ if (counter >= s->cmp) {
77
+ /* The compare timer fires in the current round. */
78
+ counter -= s->cmp;
79
+ } else if (!is_oneshot) {
80
+ /*
81
+ * The compare timer fires after a reload, as it is below the
82
+ * compare value already in this round. Note that the counter
83
+ * value calculated below can be above the 32-bit limit, which
84
+ * is legal here because the compare timer is an internal
85
+ * helper ptimer only.
86
+ */
87
+ counter += limit - s->cmp;
88
+ } else {
89
+ /*
90
+ * The compare timer won't fire in this round, and the limit is
91
+ * set to a value below the compare value. This practically means
92
+ * it will never fire, so it can be switched off.
93
+ */
94
+ is_active = false;
95
}
96
- ptimer_set_count(s->timer_cmp, next);
97
}
98
+
99
+ /*
100
+ * Set the compare timer and let it run, or stop it. This is agnostic
101
+ * of CR.OCIEN bit, as this bit affects interrupt generation only. The
102
+ * compare timer needs to run even if no interrupts are to be generated,
103
+ * because the SR.OCIF bit must be updated also.
104
+ * Note that the timer might already be stopped or be running with
105
+ * counter values. However, finding out when an update is needed and
106
+ * when not is not trivial. It's much easier applying the setting again,
107
+ * as this does not harm either and the overhead is negligible.
108
+ */
109
+ if (is_active) {
110
+ ptimer_set_count(s->timer_cmp, counter);
111
+ ptimer_run(s->timer_cmp, is_oneshot ? 1 : 0);
112
+ } else {
113
+ ptimer_stop(s->timer_cmp);
40
+ }
114
+ }
41
+
115
+
42
+ if (env->v7m.basepri[M_REG_S] > 0) {
116
}
43
+ int basepri = exc_group_prio(s, env->v7m.basepri[M_REG_S], M_REG_S);
117
44
+ if (running > basepri) {
118
static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
45
+ running = basepri;
119
{
120
- uint32_t freq = 0;
121
uint32_t oldcr = s->cr;
122
123
s->cr = value & 0x03ffffff;
124
125
if (s->cr & CR_SWR) {
126
- /* handle the reset */
127
+ /*
128
+ * Reset clears CR.SWR again. It does not touch CR.EN, but the timers
129
+ * are still stopped because the input clock is disabled.
130
+ */
131
imx_epit_reset(s, false);
132
+ } else {
133
+ uint32_t freq;
134
+ uint32_t toggled_cr_bits = oldcr ^ s->cr;
135
+ /* re-initialize the limits if CR.RLD has changed */
136
+ bool set_limit = toggled_cr_bits & CR_RLD;
137
+ /* set the counter if the timer got just enabled and CR.ENMOD is set */
138
+ bool is_switched_on = (toggled_cr_bits & s->cr) & CR_EN;
139
+ bool set_counter = is_switched_on && (s->cr & CR_ENMOD);
140
+
141
+ ptimer_transaction_begin(s->timer_cmp);
142
+ ptimer_transaction_begin(s->timer_reload);
143
+ freq = imx_epit_get_freq(s);
144
+ if (freq) {
145
+ ptimer_set_freq(s->timer_reload, freq);
146
+ ptimer_set_freq(s->timer_cmp, freq);
46
+ }
147
+ }
47
+ }
148
+
48
+
149
+ if (set_limit || set_counter) {
49
+ if (env->v7m.primask[M_REG_NS]) {
150
+ uint64_t limit = (s->cr & CR_RLD) ? s->lr : EPIT_TIMER_MAX;
50
+ if (env->v7m.aircr & R_V7M_AIRCR_PRIS_MASK) {
151
+ ptimer_set_limit(s->timer_reload, limit, set_counter ? 1 : 0);
51
+ if (running > NVIC_NS_PRIO_LIMIT) {
152
+ if (set_limit) {
52
+ running = NVIC_NS_PRIO_LIMIT;
153
+ ptimer_set_limit(s->timer_cmp, limit, 0);
53
+ }
54
+ } else {
55
+ running = 0;
56
+ }
57
+ }
58
+
59
+ if (env->v7m.primask[M_REG_S]) {
60
running = 0;
61
- } else if (env->v7m.basepri[env->v7m.secure] > 0) {
62
- running = env->v7m.basepri[env->v7m.secure] &
63
- nvic_gprio_mask(s, env->v7m.secure);
64
- } else {
65
- running = NVIC_NOEXC_PRIO; /* lower than any possible priority */
66
}
67
+
68
+ if (env->v7m.faultmask[M_REG_NS]) {
69
+ if (env->v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
70
+ running = -1;
71
+ } else {
72
+ if (env->v7m.aircr & R_V7M_AIRCR_PRIS_MASK) {
73
+ if (running > NVIC_NS_PRIO_LIMIT) {
74
+ running = NVIC_NS_PRIO_LIMIT;
75
+ }
76
+ } else {
77
+ running = 0;
78
+ }
154
+ }
79
+ }
155
+ }
80
+ }
156
+ /*
81
+
157
+ * If there is an input clock and the peripheral is enabled, then
82
+ if (env->v7m.faultmask[M_REG_S]) {
158
+ * ensure the wall clock timer is ticking. Otherwise stop the timers.
83
+ running = (env->v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) ? -3 : -1;
159
+ * The compare timer will be updated later.
84
+ }
160
+ */
85
+
161
+ if (freq && (s->cr & CR_EN)) {
86
/* consider priority of active handler */
162
+ ptimer_run(s->timer_reload, 0);
87
return MIN(running, s->exception_prio);
163
+ } else {
88
}
164
+ ptimer_stop(s->timer_reload);
165
+ }
166
+ /* Commit changes to reload timer, so they can propagate. */
167
+ ptimer_transaction_commit(s->timer_reload);
168
+ /* Update compare timer based on the committed reload timer value. */
169
+ imx_epit_update_compare_timer(s);
170
+ ptimer_transaction_commit(s->timer_cmp);
171
}
172
173
/*
174
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cr(IMXEPITState *s, uint32_t value)
175
* - write to CR.EN or CR.OCIE
176
*/
177
imx_epit_update_int(s);
178
-
179
- /*
180
- * TODO: could we 'break' here for reset? following operations appear
181
- * to duplicate the work imx_epit_reset() already did.
182
- */
183
-
184
- ptimer_transaction_begin(s->timer_cmp);
185
- ptimer_transaction_begin(s->timer_reload);
186
-
187
- /*
188
- * Update the frequency. In case of a reset the input clock was
189
- * switched off, so this can be skipped.
190
- */
191
- if (!(s->cr & CR_SWR)) {
192
- freq = imx_epit_get_freq(s);
193
- if (freq) {
194
- ptimer_set_freq(s->timer_reload, freq);
195
- ptimer_set_freq(s->timer_cmp, freq);
196
- }
197
- }
198
-
199
- if (freq && (s->cr & CR_EN) && !(oldcr & CR_EN)) {
200
- if (s->cr & CR_ENMOD) {
201
- if (s->cr & CR_RLD) {
202
- ptimer_set_limit(s->timer_reload, s->lr, 1);
203
- ptimer_set_limit(s->timer_cmp, s->lr, 1);
204
- } else {
205
- ptimer_set_limit(s->timer_reload, EPIT_TIMER_MAX, 1);
206
- ptimer_set_limit(s->timer_cmp, EPIT_TIMER_MAX, 1);
207
- }
208
- }
209
-
210
- imx_epit_reload_compare_timer(s);
211
- ptimer_run(s->timer_reload, 0);
212
- if (s->cr & CR_OCIEN) {
213
- ptimer_run(s->timer_cmp, 0);
214
- } else {
215
- ptimer_stop(s->timer_cmp);
216
- }
217
- } else if (!(s->cr & CR_EN)) {
218
- /* stop both timers */
219
- ptimer_stop(s->timer_reload);
220
- ptimer_stop(s->timer_cmp);
221
- } else if (s->cr & CR_OCIEN) {
222
- if (!(oldcr & CR_OCIEN)) {
223
- imx_epit_reload_compare_timer(s);
224
- ptimer_run(s->timer_cmp, 0);
225
- }
226
- } else {
227
- ptimer_stop(s->timer_cmp);
228
- }
229
-
230
- ptimer_transaction_commit(s->timer_cmp);
231
- ptimer_transaction_commit(s->timer_reload);
232
}
233
234
static void imx_epit_write_sr(IMXEPITState *s, uint32_t value)
235
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_lr(IMXEPITState *s, uint32_t value)
236
/* If IOVW bit is set then set the timer value */
237
ptimer_set_count(s->timer_reload, s->lr);
238
}
239
- /*
240
- * Commit the change to s->timer_reload, so it can propagate. Otherwise
241
- * the timer interrupt may not fire properly. The commit must happen
242
- * before calling imx_epit_reload_compare_timer(), which reads
243
- * s->timer_reload internally again.
244
- */
245
+ /* Commit the changes to s->timer_reload, so they can propagate. */
246
ptimer_transaction_commit(s->timer_reload);
247
- imx_epit_reload_compare_timer(s);
248
+ /* Update the compare timer based on the committed reload timer value. */
249
+ imx_epit_update_compare_timer(s);
250
ptimer_transaction_commit(s->timer_cmp);
251
}
252
253
@@ -XXX,XX +XXX,XX @@ static void imx_epit_write_cmp(IMXEPITState *s, uint32_t value)
254
{
255
s->cmp = value;
256
257
+ /* Update the compare timer based on the committed reload timer value. */
258
ptimer_transaction_begin(s->timer_cmp);
259
- imx_epit_reload_compare_timer(s);
260
+ imx_epit_update_compare_timer(s);
261
ptimer_transaction_commit(s->timer_cmp);
262
}
263
264
@@ -XXX,XX +XXX,XX @@ static void imx_epit_cmp(void *opaque)
265
{
266
IMXEPITState *s = IMX_EPIT(opaque);
267
268
+ /* The cmp ptimer can't be running when the peripheral is disabled */
269
+ assert(s->cr & CR_EN);
270
+
271
DPRINTF("sr was %d\n", s->sr);
272
/* Set interrupt status bit SR.OCIF and update the interrupt state */
273
s->sr |= SR_OCIF;
89
--
274
--
90
2.7.4
275
2.25.1
91
92
1
Make the armv7m_nvic_set_pending() and armv7m_nvic_clear_pending()
1
From: Fabiano Rosas <farosas@suse.de>
2
functions take a bool indicating whether to pend the secure
3
or non-secure version of a banked interrupt, and update the
4
callsites accordingly.
5
2
6
In most callsites we can simply pass the correct security
3
Fix these:
7
state in; in a couple of cases we use TODO comments to indicate
8
that we will return to the code in a subsequent commit.
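(For illustration, the resulting callsite pattern, taken from the diff below: exceptions that are not banked always pass false, while banked exceptions pass the security state of the access.)

    armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);             /* not banked */
    armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);   /* banked */
    armv7m_nvic_clear_pending(s, ARMV7M_EXCP_SYSTICK, attrs.secure);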
9
4
5
WARNING: Block comments use a leading /* on a separate line
6
WARNING: Block comments use * on subsequent lines
7
WARNING: Block comments use a trailing */ on a separate line
8
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Reviewed-by: Claudio Fontana <cfontana@suse.de>
11
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
12
Message-id: 20221213190537.511-2-farosas@suse.de
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1505240046-11454-10-git-send-email-peter.maydell@linaro.org
13
---
14
---
14
target/arm/cpu.h | 14 ++++++++++-
15
target/arm/helper.c | 323 +++++++++++++++++++++++++++++---------------
15
hw/intc/armv7m_nvic.c | 64 ++++++++++++++++++++++++++++++++++++++-------------
16
1 file changed, 215 insertions(+), 108 deletions(-)
16
target/arm/helper.c | 24 +++++++++++--------
17
hw/intc/trace-events | 4 ++--
18
4 files changed, 77 insertions(+), 29 deletions(-)
19
17
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
25
return true;
26
}
27
#endif
28
-void armv7m_nvic_set_pending(void *opaque, int irq);
29
+/**
30
+ * armv7m_nvic_set_pending: mark the specified exception as pending
31
+ * @opaque: the NVIC
32
+ * @irq: the exception number to mark pending
33
+ * @secure: false for non-banked exceptions or for the nonsecure
34
+ * version of a banked exception, true for the secure version of a banked
35
+ * exception.
36
+ *
37
+ * Marks the specified exception as pending. Note that we will assert()
38
+ * if @secure is true and @irq does not specify one of the fixed set
39
+ * of architecturally banked exceptions.
40
+ */
41
+void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
42
void armv7m_nvic_acknowledge_irq(void *opaque);
43
/**
44
* armv7m_nvic_complete_irq: complete specified interrupt or exception
45
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/intc/armv7m_nvic.c
48
+++ b/hw/intc/armv7m_nvic.c
49
@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
50
qemu_set_irq(s->excpout, lvl);
51
}
52
53
-static void armv7m_nvic_clear_pending(void *opaque, int irq)
54
+/**
55
+ * armv7m_nvic_clear_pending: mark the specified exception as not pending
56
+ * @opaque: the NVIC
57
+ * @irq: the exception number to mark as not pending
58
+ * @secure: false for non-banked exceptions or for the nonsecure
59
+ * version of a banked exception, true for the secure version of a banked
60
+ * exception.
61
+ *
62
+ * Marks the specified exception as not pending. Note that we will assert()
63
+ * if @secure is true and @irq does not specify one of the fixed set
64
+ * of architecturally banked exceptions.
65
+ */
66
+static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
67
{
68
NVICState *s = (NVICState *)opaque;
69
VecInfo *vec;
70
71
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
72
73
- vec = &s->vectors[irq];
74
- trace_nvic_clear_pending(irq, vec->enabled, vec->prio);
75
+ if (secure) {
76
+ assert(exc_is_banked(irq));
77
+ vec = &s->sec_vectors[irq];
78
+ } else {
79
+ vec = &s->vectors[irq];
80
+ }
81
+ trace_nvic_clear_pending(irq, secure, vec->enabled, vec->prio);
82
if (vec->pending) {
83
vec->pending = 0;
84
nvic_irq_update(s);
85
}
86
}
87
88
-void armv7m_nvic_set_pending(void *opaque, int irq)
89
+void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
90
{
91
NVICState *s = (NVICState *)opaque;
92
+ bool banked = exc_is_banked(irq);
93
VecInfo *vec;
94
95
assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
96
+ assert(!secure || banked);
97
98
- vec = &s->vectors[irq];
99
- trace_nvic_set_pending(irq, vec->enabled, vec->prio);
100
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
101
102
+ trace_nvic_set_pending(irq, secure, vec->enabled, vec->prio);
103
104
if (irq >= ARMV7M_EXCP_HARD && irq < ARMV7M_EXCP_PENDSV) {
105
/* If a synchronous exception is pending then it may be
106
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq)
107
"(current priority %d)\n", irq, running);
108
}
109
110
- /* We can do the escalation, so we take HardFault instead */
111
+ /* We can do the escalation, so we take HardFault instead.
112
+ * If BFHFNMINS is set then we escalate to the banked HF for
113
+ * the target security state of the original exception; otherwise
114
+ * we take a Secure HardFault.
115
+ */
116
irq = ARMV7M_EXCP_HARD;
117
- vec = &s->vectors[irq];
118
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
119
+ (secure ||
120
+ !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))) {
121
+ vec = &s->sec_vectors[irq];
122
+ } else {
123
+ vec = &s->vectors[irq];
124
+ }
125
+ /* HF may be banked but there is only one shared HFSR */
126
s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
127
}
128
}
129
@@ -XXX,XX +XXX,XX @@ static void set_irq_level(void *opaque, int n, int level)
130
if (level != vec->level) {
131
vec->level = level;
132
if (level) {
133
- armv7m_nvic_set_pending(s, n);
134
+ armv7m_nvic_set_pending(s, n, false);
135
}
136
}
137
}
138
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
139
}
140
case 0xd04: /* Interrupt Control State. */
141
if (value & (1 << 31)) {
142
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI);
143
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
144
}
145
if (value & (1 << 28)) {
146
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV);
147
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);
148
} else if (value & (1 << 27)) {
149
- armv7m_nvic_clear_pending(s, ARMV7M_EXCP_PENDSV);
150
+ armv7m_nvic_clear_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);
151
}
152
if (value & (1 << 26)) {
153
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK);
154
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, attrs.secure);
155
} else if (value & (1 << 25)) {
156
- armv7m_nvic_clear_pending(s, ARMV7M_EXCP_SYSTICK);
157
+ armv7m_nvic_clear_pending(s, ARMV7M_EXCP_SYSTICK, attrs.secure);
158
}
159
break;
160
case 0xd08: /* Vector Table Offset. */
161
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
162
{
163
int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
164
if (excnum < s->num_irq) {
165
- armv7m_nvic_set_pending(s, excnum);
166
+ armv7m_nvic_set_pending(s, excnum, false);
167
}
168
break;
169
}
170
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
171
/* SysTick just asked us to pend its exception.
172
* (This is different from an external interrupt line's
173
* behaviour.)
174
+ * TODO: when we implement the banked systicks we must make
175
+ * this pend the correct banked exception.
176
*/
177
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK);
178
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, false);
179
}
180
}
181
182
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
diff --git a/target/arm/helper.c b/target/arm/helper.c
183
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
184
--- a/target/arm/helper.c
20
--- a/target/arm/helper.c
185
+++ b/target/arm/helper.c
21
+++ b/target/arm/helper.c
186
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
22
@@ -XXX,XX +XXX,XX @@ uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri)
187
* stack, directly take a usage fault on the current stack.
23
static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
188
*/
24
uint64_t v)
189
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
25
{
190
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
26
- /* Raw write of a coprocessor register (as needed for migration, etc).
191
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
27
+ /*
192
v7m_exception_taken(cpu, excret);
28
+ * Raw write of a coprocessor register (as needed for migration, etc).
193
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
29
* Note that constant registers are treated as write-ignored; the
194
"stackframe: failed exception return integrity check\n");
30
* caller should check for success by whether a readback gives the
195
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
31
* value written.
196
* exception return excret specified then this is a UsageFault.
32
@@ -XXX,XX +XXX,XX @@ static void write_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri,
197
*/
33
198
if (return_to_handler != arm_v7m_is_handler_mode(env)) {
34
static bool raw_accessors_invalid(const ARMCPRegInfo *ri)
199
- /* Take an INVPC UsageFault by pushing the stack again. */
35
{
200
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
36
- /* Return true if the regdef would cause an assertion if you called
201
+ /* Take an INVPC UsageFault by pushing the stack again.
37
+ /*
202
+ * TODO: the v8M version of this code should target the
38
+ * Return true if the regdef would cause an assertion if you called
203
+ * background state for this exception.
39
* read_raw_cp_reg() or write_raw_cp_reg() on it (ie if it is a
204
+ */
40
* program bug for it not to have the NO_RAW flag).
205
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
41
* NB that returning false here doesn't necessarily mean that calling
206
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
42
@@ -XXX,XX +XXX,XX @@ bool write_list_to_cpustate(ARMCPU *cpu)
207
v7m_push_stack(cpu);
43
if (ri->type & ARM_CP_NO_RAW) {
208
v7m_exception_taken(cpu, excret);
44
continue;
209
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
45
}
210
handle it. */
46
- /* Write value and confirm it reads back as written
211
switch (cs->exception_index) {
47
+ /*
212
case EXCP_UDEF:
48
+ * Write value and confirm it reads back as written
213
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
49
* (to catch read-only registers and partially read-only
214
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
50
* registers where the incoming migration value doesn't match)
215
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
51
*/
216
break;
52
@@ -XXX,XX +XXX,XX @@ static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
217
case EXCP_NOCP:
53
218
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
54
void init_cpreg_list(ARMCPU *cpu)
219
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
55
{
220
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
56
- /* Initialise the cpreg_tuples[] array based on the cp_regs hash.
221
break;
57
+ /*
222
case EXCP_INVSTATE:
58
+ * Initialise the cpreg_tuples[] array based on the cp_regs hash.
223
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
59
* Note that we require cpreg_tuples[] to be sorted by key ID.
224
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
60
*/
225
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
61
GList *keys;
226
break;
62
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_el3_aa32ns(CPUARMState *env,
227
case EXCP_SWI:
63
return CP_ACCESS_OK;
228
/* The PC already points to the next instruction. */
64
}
229
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC);
65
230
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
66
-/* Some secure-only AArch32 registers trap to EL3 if used from
231
break;
67
+/*
232
case EXCP_PREFETCH_ABORT:
68
+ * Some secure-only AArch32 registers trap to EL3 if used from
233
case EXCP_DATA_ABORT:
69
* Secure EL1 (but are just ordinary UNDEF in other non-EL3 contexts).
234
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
70
* Note that an access from Secure EL1 can only happen if EL3 is AArch64.
235
env->v7m.bfar);
71
* We assume that the .access field is set to PL1_RW.
236
break;
72
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
73
return CP_ACCESS_TRAP_UNCATEGORIZED;
74
}
75
76
-/* Check for traps to performance monitor registers, which are controlled
77
+/*
78
+ * Check for traps to performance monitor registers, which are controlled
79
* by MDCR_EL2.TPM for EL2 and MDCR_EL3.TPM for EL3.
80
*/
81
static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
82
@@ -XXX,XX +XXX,XX @@ static void fcse_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
83
ARMCPU *cpu = env_archcpu(env);
84
85
if (raw_read(env, ri) != value) {
86
- /* Unlike real hardware the qemu TLB uses virtual addresses,
87
+ /*
88
+ * Unlike real hardware the qemu TLB uses virtual addresses,
89
* not modified virtual addresses, so this causes a TLB flush.
90
*/
91
tlb_flush(CPU(cpu));
92
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
93
94
if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA)
95
&& !extended_addresses_enabled(env)) {
96
- /* For VMSA (when not using the LPAE long descriptor page table
97
+ /*
98
+ * For VMSA (when not using the LPAE long descriptor page table
99
* format) this register includes the ASID, so do a TLB flush.
100
* For PMSA it is purely a process ID and no action is needed.
101
*/
102
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
103
}
104
105
static const ARMCPRegInfo cp_reginfo[] = {
106
- /* Define the secure and non-secure FCSE identifier CP registers
107
+ /*
108
+ * Define the secure and non-secure FCSE identifier CP registers
109
* separately because there is no secure bank in V8 (no _EL3). This allows
110
* the secure register to be properly reset and migrated. There is also no
111
* v8 EL1 version of the register so the non-secure instance stands alone.
112
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
113
.access = PL1_RW, .secure = ARM_CP_SECSTATE_S,
114
.fieldoffset = offsetof(CPUARMState, cp15.fcseidr_s),
115
.resetvalue = 0, .writefn = fcse_write, .raw_writefn = raw_write, },
116
- /* Define the secure and non-secure context identifier CP registers
117
+ /*
118
+ * Define the secure and non-secure context identifier CP registers
119
* separately because there is no secure bank in V8 (no _EL3). This allows
120
* the secure register to be properly reset and migrated. In the
121
* non-secure case, the 32-bit register will have reset and migration
122
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cp_reginfo[] = {
123
};
124
125
static const ARMCPRegInfo not_v8_cp_reginfo[] = {
126
- /* NB: Some of these registers exist in v8 but with more precise
127
+ /*
128
+ * NB: Some of these registers exist in v8 but with more precise
129
* definitions that don't use CP_ANY wildcards (mostly in v8_cp_reginfo[]).
130
*/
131
/* MMU Domain access control / MPU write buffer control */
132
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
133
.writefn = dacr_write, .raw_writefn = raw_write,
134
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.dacr_s),
135
offsetoflow32(CPUARMState, cp15.dacr_ns) } },
136
- /* ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs.
137
+ /*
138
+ * ARMv7 allocates a range of implementation defined TLB LOCKDOWN regs.
139
* For v6 and v5, these mappings are overly broad.
140
*/
141
{ .name = "TLB_LOCKDOWN", .cp = 15, .crn = 10, .crm = 0,
142
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v8_cp_reginfo[] = {
143
};
144
145
static const ARMCPRegInfo not_v6_cp_reginfo[] = {
146
- /* Not all pre-v6 cores implemented this WFI, so this is slightly
147
+ /*
148
+ * Not all pre-v6 cores implemented this WFI, so this is slightly
149
* over-broad.
150
*/
151
{ .name = "WFI_v5", .cp = 15, .crn = 7, .crm = 8, .opc1 = 0, .opc2 = 2,
152
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v6_cp_reginfo[] = {
153
};
154
155
static const ARMCPRegInfo not_v7_cp_reginfo[] = {
156
- /* Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which
157
+ /*
158
+ * Standard v6 WFI (also used in some pre-v6 cores); not in v7 (which
159
* is UNPREDICTABLE; we choose to NOP as most implementations do).
160
*/
161
{ .name = "WFI_v6", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4,
162
.access = PL1_W, .type = ARM_CP_WFI },
163
- /* L1 cache lockdown. Not architectural in v6 and earlier but in practice
164
+ /*
165
+ * L1 cache lockdown. Not architectural in v6 and earlier but in practice
166
* implemented in 926, 946, 1026, 1136, 1176 and 11MPCore. StrongARM and
167
* OMAPCP will override this space.
168
*/
169
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo not_v7_cp_reginfo[] = {
170
{ .name = "DUMMY", .cp = 15, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = CP_ANY,
171
.access = PL1_R, .type = ARM_CP_CONST | ARM_CP_NO_RAW,
172
.resetvalue = 0 },
173
- /* We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR;
174
+ /*
175
+ * We don't implement pre-v7 debug but most CPUs had at least a DBGDIDR;
176
* implementing it as RAZ means the "debug architecture version" bits
177
* will read as a reserved value, which should cause Linux to not try
178
* to use the debug hardware.
179
*/
180
{ .name = "DBGDIDR", .cp = 14, .crn = 0, .crm = 0, .opc1 = 0, .opc2 = 0,
181
.access = PL0_R, .type = ARM_CP_CONST, .resetvalue = 0 },
182
- /* MMU TLB control. Note that the wildcarding means we cover not just
183
+ /*
184
+ * MMU TLB control. Note that the wildcarding means we cover not just
185
* the unified TLB ops but also the dside/iside/inner-shareable variants.
186
*/
187
{ .name = "TLBIALL", .cp = 15, .crn = 8, .crm = CP_ANY,
188
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
190
/* In ARMv8 most bits of CPACR_EL1 are RES0. */
191
if (!arm_feature(env, ARM_FEATURE_V8)) {
192
- /* ARMv7 defines bits for unimplemented coprocessors as RAZ/WI.
193
+ /*
194
+ * ARMv7 defines bits for unimplemented coprocessors as RAZ/WI.
195
* ASEDIS [31] and D32DIS [30] are both UNK/SBZP without VFP.
196
* TRCDIS [28] is RAZ/WI since we do not implement a trace macrocell.
197
*/
198
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
199
value |= R_CPACR_ASEDIS_MASK;
237
}
200
}
238
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS);
201
239
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
202
- /* VFPv3 and upwards with NEON implement 32 double precision
240
break;
203
+ /*
241
default:
204
+ * VFPv3 and upwards with NEON implement 32 double precision
242
/* All other FSR values are either MPU faults or "can't happen
205
* registers (D0-D31).
243
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
206
*/
244
env->v7m.mmfar[env->v7m.secure]);
207
if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) {
245
break;
208
@@ -XXX,XX +XXX,XX @@ static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri)
246
}
209
247
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
210
static void cpacr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
248
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
211
{
249
+ env->v7m.secure);
212
- /* Call cpacr_write() so that we reset with the correct RAO bits set
250
break;
213
+ /*
214
+ * Call cpacr_write() so that we reset with the correct RAO bits set
215
* for our CPU features.
216
*/
217
cpacr_write(env, ri, 0);
218
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
219
{ .name = "MVA_prefetch",
220
.cp = 15, .crn = 7, .crm = 13, .opc1 = 0, .opc2 = 1,
221
.access = PL1_W, .type = ARM_CP_NOP },
222
- /* We need to break the TB after ISB to execute self-modifying code
223
+ /*
224
+ * We need to break the TB after ISB to execute self-modifying code
225
* correctly and also to take any pending interrupts immediately.
226
* So use arm_cp_write_ignore() function instead of ARM_CP_NOP flag.
227
*/
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
229
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.ifar_s),
230
offsetof(CPUARMState, cp15.ifar_ns) },
231
.resetvalue = 0, },
232
- /* Watchpoint Fault Address Register : should actually only be present
233
+ /*
234
+ * Watchpoint Fault Address Register : should actually only be present
235
* for 1136, 1176, 11MPCore.
236
*/
237
{ .name = "WFAR", .cp = 15, .crn = 6, .crm = 0, .opc1 = 0, .opc2 = 1,
238
@@ -XXX,XX +XXX,XX @@ static bool event_supported(uint16_t number)
239
static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
240
bool isread)
241
{
242
- /* Performance monitor registers user accessibility is controlled
243
+ /*
244
+ * Performance monitor registers user accessibility is controlled
245
* by PMUSERENR. MDCR_EL2.TPM and MDCR_EL3.TPM allow configurable
246
* trapping to EL2 or EL3 for other accesses.
247
*/
248
@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access_ccntr(CPUARMState *env,
249
(MDCR_HPME | MDCR_HPMD | MDCR_HPMN | MDCR_HCCD | MDCR_HLP)
250
#define MDCR_EL3_PMU_ENABLE_BITS (MDCR_SPME | MDCR_SCCD)
251
252
-/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
253
+/*
254
+ * Returns true if the counter (pass 31 for PMCCNTR) should count events using
255
* the current EL, security state, and register configuration.
256
*/
257
static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
258
@@ -XXX,XX +XXX,XX @@ static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri)
259
static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
260
uint64_t value)
261
{
262
- /* The value of PMSELR.SEL affects the behavior of PMXEVTYPER and
263
+ /*
264
+ * The value of PMSELR.SEL affects the behavior of PMXEVTYPER and
265
* PMXEVCNTR. We allow [0..31] to be written to PMSELR here; in the
266
* meanwhile, we check PMSELR.SEL when PMXEVTYPER and PMXEVCNTR are
267
* accessed.
268
@@ -XXX,XX +XXX,XX @@ static void pmevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
269
env->cp15.c14_pmevtyper[counter] = value & PMXEVTYPER_MASK;
270
pmevcntr_op_finish(env, counter);
271
}
272
- /* Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
273
+ /*
274
+ * Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
275
* PMSELR value is equal to or greater than the number of implemented
276
* counters, but not equal to 0x1f. We opt to behave as a RAZ/WI.
277
*/
278
@@ -XXX,XX +XXX,XX @@ static uint64_t pmevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri,
251
}
279
}
252
break;
280
return ret;
253
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
281
} else {
254
return;
282
- /* We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
283
- * are CONSTRAINED UNPREDICTABLE. */
284
+ /*
285
+ * We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
286
+ * are CONSTRAINED UNPREDICTABLE.
287
+ */
288
return 0;
289
}
290
}
291
@@ -XXX,XX +XXX,XX @@ static void pmintenclr_write(CPUARMState *env, const ARMCPRegInfo *ri,
292
static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
293
uint64_t value)
294
{
295
- /* Note that even though the AArch64 view of this register has bits
296
+ /*
297
+ * Note that even though the AArch64 view of this register has bits
298
* [10:0] all RES0 we can only mask the bottom 5, to comply with the
299
* architectural requirements for bits which are RES0 only in some
300
* contexts. (ARMv8 would permit us to do no masking at all, but ARMv7
301
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
302
if (!arm_feature(env, ARM_FEATURE_EL2)) {
303
valid_mask &= ~SCR_HCE;
304
305
- /* On ARMv7, SMD (or SCD as it is called in v7) is only
306
+ /*
307
+ * On ARMv7, SMD (or SCD as it is called in v7) is only
308
* supported if EL2 exists. The bit is UNK/SBZP when
309
* EL2 is unavailable. In QEMU ARMv7, we force it to always zero
310
* when EL2 is unavailable.
311
@@ -XXX,XX +XXX,XX @@ static uint64_t ccsidr_read(CPUARMState *env, const ARMCPRegInfo *ri)
312
{
313
ARMCPU *cpu = env_archcpu(env);
314
315
- /* Acquire the CSSELR index from the bank corresponding to the CCSIDR
316
+ /*
317
+ * Acquire the CSSELR index from the bank corresponding to the CCSIDR
318
* bank
319
*/
320
uint32_t index = A32_BANKED_REG_GET(env, csselr,
321
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
322
/* the old v6 WFI, UNPREDICTABLE in v7 but we choose to NOP */
323
{ .name = "NOP", .cp = 15, .crn = 7, .crm = 0, .opc1 = 0, .opc2 = 4,
324
.access = PL1_W, .type = ARM_CP_NOP },
325
- /* Performance monitors are implementation defined in v7,
326
+ /*
327
+ * Performance monitors are implementation defined in v7,
328
* but with an ARM recommended set of registers, which we
329
* follow.
330
*
331
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
332
.writefn = csselr_write, .resetvalue = 0,
333
.bank_fieldoffsets = { offsetof(CPUARMState, cp15.csselr_s),
334
offsetof(CPUARMState, cp15.csselr_ns) } },
335
- /* Auxiliary ID register: this actually has an IMPDEF value but for now
336
+ /*
337
+ * Auxiliary ID register: this actually has an IMPDEF value but for now
338
* just RAZ for all cores:
339
*/
340
{ .name = "AIDR", .state = ARM_CP_STATE_BOTH,
341
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
342
.access = PL1_R, .type = ARM_CP_CONST,
343
.accessfn = access_aa64_tid1,
344
.resetvalue = 0 },
345
- /* Auxiliary fault status registers: these also are IMPDEF, and we
346
+ /*
347
+ * Auxiliary fault status registers: these also are IMPDEF, and we
348
* choose to RAZ/WI for all cores.
349
*/
350
{ .name = "AFSR0_EL1", .state = ARM_CP_STATE_BOTH,
351
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
352
.opc0 = 3, .opc1 = 0, .crn = 5, .crm = 1, .opc2 = 1,
353
.access = PL1_RW, .accessfn = access_tvm_trvm,
354
.type = ARM_CP_CONST, .resetvalue = 0 },
355
- /* MAIR can just read-as-written because we don't implement caches
356
+ /*
357
+ * MAIR can just read-as-written because we don't implement caches
358
* and so don't need to care about memory attributes.
359
*/
360
{ .name = "MAIR_EL1", .state = ARM_CP_STATE_AA64,
361
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
362
.opc0 = 3, .opc1 = 6, .crn = 10, .crm = 2, .opc2 = 0,
363
.access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.mair_el[3]),
364
.resetvalue = 0 },
365
- /* For non-long-descriptor page tables these are PRRR and NMRR;
366
+ /*
367
+ * For non-long-descriptor page tables these are PRRR and NMRR;
368
* regardless they still act as reads-as-written for QEMU.
369
*/
370
- /* MAIR0/1 are defined separately from their 64-bit counterpart which
371
+ /*
372
+ * MAIR0/1 are defined separately from their 64-bit counterpart which
373
* allows them to assign the correct fieldoffset based on the endianness
374
* handled in the field definitions.
375
*/
376
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
377
static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri,
378
bool isread)
379
{
380
- /* CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero.
381
+ /*
382
+ * CNTFRQ: not visible from PL0 if both PL0PCTEN and PL0VCTEN are zero.
383
* Writable only at the highest implemented exception level.
384
*/
385
int el = arm_current_el(env);
386
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_stimer_access(CPUARMState *env,
387
const ARMCPRegInfo *ri,
388
bool isread)
389
{
390
- /* The AArch64 register view of the secure physical timer is
391
+ /*
392
+ * The AArch64 register view of the secure physical timer is
393
* always accessible from EL3, and configurably accessible from
394
* Secure EL1.
395
*/
396
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
397
ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx];
398
399
if (gt->ctl & 1) {
400
- /* Timer enabled: calculate and set current ISTATUS, irq, and
401
+ /*
402
+ * Timer enabled: calculate and set current ISTATUS, irq, and
403
* reset timer to when ISTATUS next has to change
404
*/
405
uint64_t offset = timeridx == GTIMER_VIRT ?
406
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
407
/* Next transition is when we hit cval */
408
nexttick = gt->cval + offset;
409
}
410
- /* Note that the desired next expiry time might be beyond the
411
+ /*
412
+ * Note that the desired next expiry time might be beyond the
413
* signed-64-bit range of a QEMUTimer -- in this case we just
414
* set the timer for as far in the future as possible. When the
415
* timer expires we will reset the timer for any remaining period.
416
@@ -XXX,XX +XXX,XX @@ static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
417
/* Enable toggled */
418
gt_recalc_timer(cpu, timeridx);
419
} else if ((oldval ^ value) & 2) {
420
- /* IMASK toggled: don't need to recalculate,
421
+ /*
422
+ * IMASK toggled: don't need to recalculate,
423
* just set the interrupt line based on ISTATUS
424
*/
425
int irqstate = (oldval & 4) && !(value & 2);
426
@@ -XXX,XX +XXX,XX @@ static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque)
427
}
428
429
static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
430
- /* Note that CNTFRQ is purely reads-as-written for the benefit
431
+ /*
432
+ * Note that CNTFRQ is purely reads-as-written for the benefit
433
* of software; writing it doesn't actually change the timer frequency.
434
* Our reset value matches the fixed frequency we implement the timer at.
435
*/
436
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
437
.readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read,
438
.writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write,
439
},
440
- /* Secure timer -- this is actually restricted to only EL3
441
+ /*
442
+ * Secure timer -- this is actually restricted to only EL3
443
* and configurably Secure-EL1 via the accessfn.
444
*/
445
{ .name = "CNTPS_TVAL_EL1", .state = ARM_CP_STATE_AA64,
446
@@ -XXX,XX +XXX,XX @@ static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
447
448
#else
449
450
-/* In user-mode most of the generic timer registers are inaccessible
451
+/*
452
+ * In user-mode most of the generic timer registers are inaccessible
453
* however modern kernels (4.12+) allow access to cntvct_el0
454
*/
455
456
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
457
{
458
ARMCPU *cpu = env_archcpu(env);
459
460
- /* Currently we have no support for QEMUTimer in linux-user so we
461
+ /*
462
+ * Currently we have no support for QEMUTimer in linux-user so we
463
* can't call gt_get_countervalue(env), instead we directly
464
* call the lower level functions.
465
*/
466
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
467
bool isread)
468
{
469
if (ri->opc2 & 4) {
470
- /* The ATS12NSO* operations must trap to EL3 or EL2 if executed in
471
+ /*
472
+ * The ATS12NSO* operations must trap to EL3 or EL2 if executed in
473
* Secure EL1 (which can only happen if EL3 is AArch64).
474
* They are simply UNDEF if executed from NS EL1.
475
* They function normally from EL2 or EL3.
476
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
255
}
477
}
256
}
478
}
257
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG);
479
} else {
258
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
480
- /* fsr is a DFSR/IFSR value for the short descriptor
259
break;
481
+ /*
260
case EXCP_IRQ:
482
+ * fsr is a DFSR/IFSR value for the short descriptor
261
break;
483
* translation table format (with WnR always clear).
262
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
484
* Convert it to a 32-bit PAR.
263
index XXXXXXX..XXXXXXX 100644
485
*/
264
--- a/hw/intc/trace-events
486
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo pmsav8r_cp_reginfo[] = {
265
+++ b/hw/intc/trace-events
487
};
266
@@ -XXX,XX +XXX,XX @@ nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
488
267
nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
489
static const ARMCPRegInfo pmsav7_cp_reginfo[] = {
268
nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
490
- /* Reset for all these registers is handled in arm_cpu_reset(),
269
nvic_escalate_disabled(int irq) "NVIC escalating irq %d to HardFault: disabled"
491
+ /*
270
-nvic_set_pending(int irq, int en, int prio) "NVIC set pending irq %d (enabled: %d priority %d)"
492
+ * Reset for all these registers is handled in arm_cpu_reset(),
271
-nvic_clear_pending(int irq, int en, int prio) "NVIC clear pending irq %d (enabled: %d priority %d)"
493
* because the PMSAv7 is also used by M-profile CPUs, which do
272
+nvic_set_pending(int irq, bool secure, int en, int prio) "NVIC set pending irq %d secure-bank %d (enabled: %d priority %d)"
494
* not register cpregs but still need the state to be reset.
273
+nvic_clear_pending(int irq, bool secure, int en, int prio) "NVIC clear pending irq %d secure-bank %d (enabled: %d priority %d)"
495
*/
274
nvic_set_pending_level(int irq) "NVIC set pending: irq %d higher prio than vectpending: setting irq line to 1"
496
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
275
nvic_acknowledge_irq(int irq, int prio) "NVIC acknowledge IRQ: %d now active (prio %d)"
497
}
276
nvic_complete_irq(int irq) "NVIC complete IRQ %d"
498
499
if (arm_feature(env, ARM_FEATURE_LPAE)) {
500
- /* With LPAE the TTBCR could result in a change of ASID
501
+ /*
502
+ * With LPAE the TTBCR could result in a change of ASID
503
* via the TTBCR.A1 bit, so do a TLB flush.
504
*/
505
tlb_flush(CPU(cpu));
506
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
507
offsetoflow32(CPUARMState, cp15.tcr_el[1])} },
508
};
509
510
-/* Note that unlike TTBCR, writing to TTBCR2 does not require flushing
511
+/*
512
+ * Note that unlike TTBCR, writing to TTBCR2 does not require flushing
513
* qemu tlbs nor adjusting cached masks.
514
*/
515
static const ARMCPRegInfo ttbcr2_reginfo = {
516
@@ -XXX,XX +XXX,XX @@ static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri,
517
static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri,
518
uint64_t value)
519
{
520
- /* On OMAP there are registers indicating the max/min index of dcache lines
521
+ /*
522
+ * On OMAP there are registers indicating the max/min index of dcache lines
523
* containing a dirty line; cache flush operations have to reset these.
524
*/
525
env->cp15.c15_i_max = 0x000;
526
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo omap_cp_reginfo[] = {
527
.crm = 8, .opc1 = 0, .opc2 = 0, .access = PL1_RW,
528
.type = ARM_CP_NO_RAW,
529
.readfn = arm_cp_read_zero, .writefn = omap_wfi_write, },
530
- /* TODO: Peripheral port remap register:
531
+ /*
532
+ * TODO: Peripheral port remap register:
533
* On OMAP2 mcr p15, 0, rn, c15, c2, 4 sets up the interrupt controller
534
* base address at $rn & ~0xfff and map size of 0x200 << ($rn & 0xfff),
535
* when MMU is off.
536
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
537
.cp = 15, .crn = 1, .crm = 0, .opc1 = 0, .opc2 = 1, .access = PL1_RW,
538
.fieldoffset = offsetof(CPUARMState, cp15.c1_xscaleauxcr),
539
.resetvalue = 0, },
540
- /* XScale specific cache-lockdown: since we have no cache we NOP these
541
+ /*
542
+ * XScale specific cache-lockdown: since we have no cache we NOP these
543
* and hope the guest does not really rely on cache behaviour.
544
*/
545
{ .name = "XSCALE_LOCK_ICACHE_LINE",
546
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo xscale_cp_reginfo[] = {
547
};
548
549
static const ARMCPRegInfo dummy_c15_cp_reginfo[] = {
550
- /* RAZ/WI the whole crn=15 space, when we don't have a more specific
551
+ /*
552
+ * RAZ/WI the whole crn=15 space, when we don't have a more specific
553
* implementation of this implementation-defined space.
554
* Ideally this should eventually disappear in favour of actually
555
* implementing the correct behaviour for all cores.
556
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
557
};
558
559
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
560
- /* The cache test-and-clean instructions always return (1 << 30)
561
+ /*
562
+ * The cache test-and-clean instructions always return (1 << 30)
563
* to indicate that there are no dirty cache lines.
564
*/
565
{ .name = "TC_DCACHE", .cp = 15, .crn = 7, .crm = 10, .opc1 = 0, .opc2 = 3,
566
@@ -XXX,XX +XXX,XX @@ static uint64_t mpidr_read_val(CPUARMState *env)
567
568
if (arm_feature(env, ARM_FEATURE_V7MP)) {
569
mpidr |= (1U << 31);
570
- /* Cores which are uniprocessor (non-coherent)
571
+ /*
572
+ * Cores which are uniprocessor (non-coherent)
573
* but still implement the MP extensions set
574
* bit 30. (For instance, Cortex-R5).
575
*/
576
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tocu(CPUARMState *env, const ARMCPRegInfo *ri,
577
return do_cacheop_pou_access(env, HCR_TOCU | HCR_TPU);
578
}
579
580
-/* See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
581
+/*
582
+ * See: D4.7.2 TLB maintenance requirements and the TLB maintenance instructions
583
* Page D4-1736 (DDI0487A.b)
584
*/
585
586
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
587
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
588
uint64_t value)
589
{
590
- /* Invalidate by VA, EL2
591
+ /*
592
+ * Invalidate by VA, EL2
593
* Currently handles both VAE2 and VALE2, since we don't support
594
* flush-last-level-only.
595
*/
596
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
597
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
598
uint64_t value)
599
{
600
- /* Invalidate by VA, EL3
601
+ /*
602
+ * Invalidate by VA, EL3
603
* Currently handles both VAE3 and VALE3, since we don't support
604
* flush-last-level-only.
605
*/
606
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
607
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
608
uint64_t value)
609
{
610
- /* Invalidate by VA, EL1&0 (AArch64 version).
611
+ /*
612
+ * Invalidate by VA, EL1&0 (AArch64 version).
613
* Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
614
* since we don't support flush-for-specific-ASID-only or
615
* flush-last-level-only.
616
@@ -XXX,XX +XXX,XX @@ static CPAccessResult sp_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
617
bool isread)
618
{
619
if (!(env->pstate & PSTATE_SP)) {
620
- /* Access to SP_EL0 is undefined if it's being used as
621
+ /*
622
+ * Access to SP_EL0 is undefined if it's being used as
623
* the stack pointer.
624
*/
625
return CP_ACCESS_TRAP_UNCATEGORIZED;
626
@@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
627
}
628
629
if (raw_read(env, ri) == value) {
630
- /* Skip the TLB flush if nothing actually changed; Linux likes
631
+ /*
632
+ * Skip the TLB flush if nothing actually changed; Linux likes
633
* to do a lot of pointless SCTLR writes.
634
*/
635
return;
636
@@ -XXX,XX +XXX,XX @@ static void mdcr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
637
}
638
639
static const ARMCPRegInfo v8_cp_reginfo[] = {
640
- /* Minimal set of EL0-visible registers. This will need to be expanded
641
+ /*
642
+ * Minimal set of EL0-visible registers. This will need to be expanded
643
* significantly for system emulation of AArch64 CPUs.
644
*/
645
{ .name = "NZCV", .state = ARM_CP_STATE_AA64,
646
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
647
.opc0 = 3, .opc1 = 0, .crn = 4, .crm = 0, .opc2 = 0,
648
.access = PL1_RW,
649
.fieldoffset = offsetof(CPUARMState, banked_spsr[BANK_SVC]) },
650
- /* We rely on the access checks not allowing the guest to write to the
651
+ /*
652
+ * We rely on the access checks not allowing the guest to write to the
653
* state field when SPSel indicates that it's being used as the stack
654
* pointer.
655
*/
656
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
657
if (arm_feature(env, ARM_FEATURE_EL3)) {
658
valid_mask &= ~HCR_HCD;
659
} else if (cpu->psci_conduit != QEMU_PSCI_CONDUIT_SMC) {
660
- /* Architecturally HCR.TSC is RES0 if EL3 is not implemented.
661
+ /*
662
+ * Architecturally HCR.TSC is RES0 if EL3 is not implemented.
663
* However, if we're using the SMC PSCI conduit then QEMU is
664
* effectively acting like EL3 firmware and so the guest at
665
* EL2 should retain the ability to prevent EL1 from being
666
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
667
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
668
.writefn = tlbi_aa64_vae2is_write },
669
#ifndef CONFIG_USER_ONLY
670
- /* Unlike the other EL2-related AT operations, these must
671
+ /*
672
+ * Unlike the other EL2-related AT operations, these must
673
* UNDEF from EL3 if EL2 is not implemented, which is why we
674
* define them here rather than with the rest of the AT ops.
675
*/
676
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
677
.access = PL2_W, .accessfn = at_s1e2_access,
678
.type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC | ARM_CP_EL3_NO_EL2_UNDEF,
679
.writefn = ats_write64 },
680
- /* The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
681
+ /*
682
+ * The AArch32 ATS1H* operations are CONSTRAINED UNPREDICTABLE
683
* if EL2 is not implemented; we choose to UNDEF. Behaviour at EL3
684
* with SCR.NS == 0 outside Monitor mode is UNPREDICTABLE; we choose
685
* to behave as if SCR.NS was 1.
686
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
687
.writefn = ats1h_write, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC },
688
{ .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
689
.opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
690
- /* ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
691
+ /*
692
+ * ARMv7 requires bit 0 and 1 to reset to 1. ARMv8 defines the
693
* reset values as IMPDEF. We choose to reset to 3 to comply with
694
* both ARMv7 and ARMv8.
695
*/
696
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
697
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
698
bool isread)
699
{
700
- /* The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
701
+ /*
702
+ * The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
703
* At Secure EL1 it traps to EL3 or EL2.
704
*/
705
if (arm_current_el(env) == 3) {
706
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
707
}
708
}
709
710
-/* We don't know until after realize whether there's a GICv3
711
+/*
712
+ * We don't know until after realize whether there's a GICv3
713
* attached, and that is what registers the gicv3 sysregs.
714
* So we have to fill in the GIC fields in ID_PFR/ID_PFR1_EL1/ID_AA64PFR0_EL1
715
* at runtime.
716
@@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
717
}
718
#endif
719
720
-/* Shared logic between LORID and the rest of the LOR* registers.
721
+/*
722
+ * Shared logic between LORID and the rest of the LOR* registers.
723
* Secure state exclusion has already been dealt with.
724
*/
725
static CPAccessResult access_lor_ns(CPUARMState *env,
726
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
727
728
define_arm_cp_regs(cpu, cp_reginfo);
729
if (!arm_feature(env, ARM_FEATURE_V8)) {
730
- /* Must go early as it is full of wildcards that may be
731
+ /*
732
+ * Must go early as it is full of wildcards that may be
733
* overridden by later definitions.
734
*/
735
define_arm_cp_regs(cpu, not_v8_cp_reginfo);
736
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
737
.access = PL1_R, .type = ARM_CP_CONST,
738
.accessfn = access_aa32_tid3,
739
.resetvalue = cpu->isar.id_pfr0 },
740
- /* ID_PFR1 is not a plain ARM_CP_CONST because we don't know
741
+ /*
742
+ * ID_PFR1 is not a plain ARM_CP_CONST because we don't know
743
* the value of the GIC field until after we define these regs.
744
*/
745
{ .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH,
746
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
747
748
define_arm_cp_regs(cpu, el3_regs);
749
}
750
- /* The behaviour of NSACR is sufficiently various that we don't
751
+ /*
752
+ * The behaviour of NSACR is sufficiently various that we don't
753
* try to describe it in a single reginfo:
754
* if EL3 is 64 bit, then trap to EL3 from S EL1,
755
* reads as constant 0xc00 from NS EL1 and NS EL2
756
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
757
if (cpu_isar_feature(aa32_jazelle, cpu)) {
758
define_arm_cp_regs(cpu, jazelle_regs);
759
}
760
- /* Slightly awkwardly, the OMAP and StrongARM cores need all of
761
+ /*
762
+ * Slightly awkwardly, the OMAP and StrongARM cores need all of
763
* cp15 crn=0 to be writes-ignored, whereas for other cores they should
764
* be read-only (ie write causes UNDEF exception).
765
*/
766
{
767
ARMCPRegInfo id_pre_v8_midr_cp_reginfo[] = {
768
- /* Pre-v8 MIDR space.
769
+ /*
770
+ * Pre-v8 MIDR space.
771
* Note that the MIDR isn't a simple constant register because
772
* of the TI925 behaviour where writes to another register can
773
* cause the MIDR value to change.
774
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
775
if (arm_feature(env, ARM_FEATURE_OMAPCP) ||
776
arm_feature(env, ARM_FEATURE_STRONGARM)) {
777
size_t i;
778
- /* Register the blanket "writes ignored" value first to cover the
779
+ /*
780
+ * Register the blanket "writes ignored" value first to cover the
781
* whole space. Then update the specific ID registers to allow write
782
* access, so that they ignore writes rather than causing them to
783
* UNDEF.
784
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
785
.raw_writefn = raw_write,
786
};
787
if (arm_feature(env, ARM_FEATURE_XSCALE)) {
788
- /* Normally we would always end the TB on an SCTLR write, but Linux
789
+ /*
790
+ * Normally we would always end the TB on an SCTLR write, but Linux
791
* arch/arm/mach-pxa/sleep.S expects two instructions following
792
* an MMU enable to execute from cache. Imitate this behaviour.
793
*/
794
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_hashtable(ARMCPU *cpu, const ARMCPRegInfo *r,
795
void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
796
const ARMCPRegInfo *r, void *opaque)
797
{
798
- /* Define implementations of coprocessor registers.
799
+ /*
800
+ * Define implementations of coprocessor registers.
801
* We store these in a hashtable because typically
802
* there are less than 150 registers in a space which
803
* is 16*16*16*8*8 = 262144 in size.
804
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
805
default:
806
g_assert_not_reached();
807
}
808
- /* The AArch64 pseudocode CheckSystemAccess() specifies that op1
809
+ /*
810
+ * The AArch64 pseudocode CheckSystemAccess() specifies that op1
811
* encodes a minimum access level for the register. We roll this
812
* runtime check into our general permission check code, so check
813
* here that the reginfo's specified permissions are strict enough
814
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
815
assert((r->access & ~mask) == 0);
816
}
817
818
- /* Check that the register definition has enough info to handle
819
+ /*
820
+ * Check that the register definition has enough info to handle
821
* reads and writes if they are permitted.
822
*/
823
if (!(r->type & (ARM_CP_SPECIAL_MASK | ARM_CP_CONST))) {
824
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
825
continue;
826
}
827
if (state == ARM_CP_STATE_AA32) {
828
- /* Under AArch32 CP registers can be common
829
+ /*
830
+ * Under AArch32 CP registers can be common
831
* (same for secure and non-secure world) or banked.
832
*/
833
char *name;
834
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
835
g_assert_not_reached();
836
}
837
} else {
838
- /* AArch64 registers get mapped to non-secure instance
839
- * of AArch32 */
840
+ /*
841
+ * AArch64 registers get mapped to non-secure instance
842
+ * of AArch32
843
+ */
844
add_cpreg_to_hashtable(cpu, r, opaque, state,
845
ARM_CP_SECSTATE_NS,
846
crm, opc1, opc2, r->name);
847
@@ -XXX,XX +XXX,XX @@ void arm_cp_reset_ignore(CPUARMState *env, const ARMCPRegInfo *opaque)
848
849
static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
850
{
851
- /* Return true if it is not valid for us to switch to
852
+ /*
853
+ * Return true if it is not valid for us to switch to
854
* this CPU mode (ie all the UNPREDICTABLE cases in
855
* the ARM ARM CPSRWriteByInstr pseudocode).
856
*/
857
@@ -XXX,XX +XXX,XX @@ static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
858
case ARM_CPU_MODE_UND:
859
case ARM_CPU_MODE_IRQ:
860
case ARM_CPU_MODE_FIQ:
861
- /* Note that we don't implement the IMPDEF NSACR.RFR which in v7
862
+ /*
863
+ * Note that we don't implement the IMPDEF NSACR.RFR which in v7
864
* allows FIQ mode to be Secure-only. (In v8 this doesn't exist.)
865
*/
866
- /* If HCR.TGE is set then changes from Monitor to NS PL1 via MSR
867
+ /*
868
+ * If HCR.TGE is set then changes from Monitor to NS PL1 via MSR
869
* and CPS are treated as illegal mode changes.
870
*/
871
if (write_type == CPSRWriteByInstr &&
872
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
873
env->GE = (val >> 16) & 0xf;
874
}
875
876
- /* In a V7 implementation that includes the security extensions but does
877
+ /*
878
+ * In a V7 implementation that includes the security extensions but does
879
* not include Virtualization Extensions the SCR.FW and SCR.AW bits control
880
* whether non-secure software is allowed to change the CPSR_F and CPSR_A
881
* bits respectively.
882
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
883
changed_daif = (env->daif ^ val) & mask;
884
885
if (changed_daif & CPSR_A) {
886
- /* Check to see if we are allowed to change the masking of async
887
+ /*
888
+ * Check to see if we are allowed to change the masking of async
889
* abort exceptions from a non-secure state.
890
*/
891
if (!(env->cp15.scr_el3 & SCR_AW)) {
892
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
893
}
894
895
if (changed_daif & CPSR_F) {
896
- /* Check to see if we are allowed to change the masking of FIQ
897
+ /*
898
+ * Check to see if we are allowed to change the masking of FIQ
899
* exceptions from a non-secure state.
900
*/
901
if (!(env->cp15.scr_el3 & SCR_FW)) {
902
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
903
mask &= ~CPSR_F;
904
}
905
906
- /* Check whether non-maskable FIQ (NMFI) support is enabled.
907
+ /*
908
+ * Check whether non-maskable FIQ (NMFI) support is enabled.
909
* If this bit is set software is not allowed to mask
910
* FIQs, but is allowed to set CPSR_F to 0.
911
*/
912
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
913
if (write_type != CPSRWriteRaw &&
914
((env->uncached_cpsr ^ val) & mask & CPSR_M)) {
915
if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR) {
916
- /* Note that we can only get here in USR mode if this is a
917
+ /*
918
+ * Note that we can only get here in USR mode if this is a
919
* gdb stub write; for this case we follow the architectural
920
* behaviour for guest writes in USR mode of ignoring an attempt
921
* to switch mode. (Those are caught by translate.c for writes
922
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
923
*/
924
mask &= ~CPSR_M;
925
} else if (bad_mode_switch(env, val & CPSR_M, write_type)) {
926
- /* Attempt to switch to an invalid mode: this is UNPREDICTABLE in
927
+ /*
928
+ * Attempt to switch to an invalid mode: this is UNPREDICTABLE in
929
* v7, and has defined behaviour in v8:
930
* + leave CPSR.M untouched
931
* + allow changes to the other CPSR fields
932
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
933
env->regs[14] = env->banked_r14[r14_bank_number(mode)];
934
}
935
936
-/* Physical Interrupt Target EL Lookup Table
937
+/*
938
+ * Physical Interrupt Target EL Lookup Table
939
*
940
* [ From ARM ARM section G1.13.4 (Table G1-15) ]
941
*
942
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
943
if (arm_feature(env, ARM_FEATURE_EL3)) {
944
rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
945
} else {
946
- /* Either EL2 is the highest EL (and so the EL2 register width
947
+ /*
948
+ * Either EL2 is the highest EL (and so the EL2 register width
949
* is given by is64); or there is no EL2 or EL3, in which case
950
* the value of 'rw' does not affect the table lookup anyway.
951
*/
952
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
953
env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
954
}
955
956
- /* Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ
957
+ /*
958
+ * Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in FIQ
959
* mode, then we can copy to r8-r14. Otherwise, we copy to the
960
* FIQ bank for r8-r14.
961
*/
962
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
963
/* High vectors. When enabled, base address cannot be remapped. */
964
addr += 0xffff0000;
965
} else {
966
- /* ARM v7 architectures provide a vector base address register to remap
967
+ /*
968
+ * ARM v7 architectures provide a vector base address register to remap
969
* the interrupt vector table.
970
* This register is only followed in non-monitor mode, and is banked.
971
* Note: only bits 31:5 are valid.
972
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
973
aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
974
975
if (cur_el < new_el) {
976
- /* Entry vector offset depends on whether the implemented EL
977
+ /*
978
+ * Entry vector offset depends on whether the implemented EL
979
* immediately lower than the target level is using AArch32 or AArch64
980
*/
981
bool is_aa64;
982
@@ -XXX,XX +XXX,XX @@ static void handle_semihosting(CPUState *cs)
983
}
984
#endif
985
986
-/* Handle a CPU exception for A and R profile CPUs.
987
+/*
988
+ * Handle a CPU exception for A and R profile CPUs.
989
* Do any appropriate logging, handle PSCI calls, and then hand off
990
* to the AArch64-entry or AArch32-entry function depending on the
991
* target exception level's register width.
992
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
993
}
994
#endif
995
996
- /* Hooks may change global state so BQL should be held, also the
997
+ /*
998
+ * Hooks may change global state so BQL should be held, also the
999
* BQL needs to be held for any modification of
1000
* cs->interrupt_request.
1001
*/
1002
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
1003
};
1004
}
1005
1006
-/* Note that signed overflow is undefined in C. The following routines are
1007
- careful to use unsigned types where modulo arithmetic is required.
1008
- Failure to do so _will_ break on newer gcc. */
1009
+/*
1010
+ * Note that signed overflow is undefined in C. The following routines are
1011
+ * careful to use unsigned types where modulo arithmetic is required.
1012
+ * Failure to do so _will_ break on newer gcc.
1013
+ */
1014
1015
/* Signed saturating arithmetic. */
1016
1017
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sel_flags)(uint32_t flags, uint32_t a, uint32_t b)
1018
return (a & mask) | (b & ~mask);
1019
}
1020
1021
-/* CRC helpers.
1022
+/*
1023
+ * CRC helpers.
1024
* The upper bytes of val (above the number specified by 'bytes') must have
1025
* been zeroed out by the caller.
1026
*/
1027
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, uint32_t bytes)
1028
return crc32c(acc, buf, bytes) ^ 0xffffffff;
1029
}
1030
1031
-/* Return the exception level to which FP-disabled exceptions should
1032
+/*
1033
+ * Return the exception level to which FP-disabled exceptions should
1034
* be taken, or 0 if FP is enabled.
1035
*/
1036
int fp_exception_el(CPUARMState *env, int cur_el)
1037
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
1038
#ifndef CONFIG_USER_ONLY
1039
uint64_t hcr_el2;
1040
1041
- /* CPACR and the CPTR registers don't exist before v6, so FP is
1042
+ /*
1043
+ * CPACR and the CPTR registers don't exist before v6, so FP is
1044
* always accessible
1045
*/
1046
if (!arm_feature(env, ARM_FEATURE_V6)) {
1047
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
1048
1049
hcr_el2 = arm_hcr_el2_eff(env);
1050
1051
- /* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
1052
+ /*
1053
+ * The CPACR controls traps to EL1, or PL1 if we're 32 bit:
1054
* 0, 2 : trap EL0 and EL1/PL1 accesses
1055
* 1 : trap only EL0 accesses
1056
* 3 : trap no accesses
277
--
1057
--
278
2.7.4
1058
2.25.1
279
280
1
In v8M the MSR and MRS instructions have extra register value
1
From: Fabiano Rosas <farosas@suse.de>
2
encodings to allow secure code to access the non-secure banked
3
version of various special registers.
4
2
5
(We don't implement the MSPLIM_NS or PSPLIM_NS aliases, because
3
Fix the following:
6
we don't currently implement the stack limit registers at all.)
7
4
5
ERROR: spaces required around that '|' (ctx:VxV)
6
ERROR: space required before the open parenthesis '('
7
ERROR: spaces required around that '+' (ctx:VxB)
8
ERROR: space prohibited between function name and open parenthesis '('
9
10
(the last two still have some occurrences in macros which I left
11
behind because it might impact readability)
12
13
Signed-off-by: Fabiano Rosas <farosas@suse.de>
14
Reviewed-by: Claudio Fontana <cfontana@suse.de>
15
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
16
Message-id: 20221213190537.511-3-farosas@suse.de
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 1505240046-11454-2-git-send-email-peter.maydell@linaro.org
11
---
18
---
12
target/arm/helper.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++++
19
target/arm/helper.c | 42 +++++++++++++++++++++---------------------
13
1 file changed, 110 insertions(+)
20
1 file changed, 21 insertions(+), 21 deletions(-)
14
21
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
24
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
25
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
26
@@ -XXX,XX +XXX,XX @@ static void add_cpreg_to_list(gpointer key, gpointer opaque)
20
break;
27
uint32_t regidx = (uintptr_t)key;
21
case 20: /* CONTROL */
28
const ARMCPRegInfo *ri = get_arm_cp_reginfo(cpu->cp_regs, regidx);
22
return env->v7m.control[env->v7m.secure];
29
23
+ case 0x94: /* CONTROL_NS */
30
- if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
24
+ /* We have to handle this here because unprivileged Secure code
31
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) {
25
+ * can read the NS CONTROL register.
32
cpu->cpreg_indexes[cpu->cpreg_array_len] = cpreg_to_kvm_id(regidx);
26
+ */
33
/* The value array need not be initialized at this point */
27
+ if (!env->v7m.secure) {
34
cpu->cpreg_array_len++;
28
+ return 0;
35
@@ -XXX,XX +XXX,XX @@ static void count_cpreg(gpointer key, gpointer opaque)
29
+ }
36
30
+ return env->v7m.control[M_REG_NS];
37
ri = g_hash_table_lookup(cpu->cp_regs, key);
38
39
- if (!(ri->type & (ARM_CP_NO_RAW|ARM_CP_ALIAS))) {
40
+ if (!(ri->type & (ARM_CP_NO_RAW | ARM_CP_ALIAS))) {
41
cpu->cpreg_array_len++;
31
}
42
}
32
43
}
33
if (el == 0) {
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6k_cp_reginfo[] = {
34
return 0; /* unprivileged reads others as zero */
45
.resetfn = arm_cp_reset_ignore },
46
{ .name = "TPIDRRO_EL0", .state = ARM_CP_STATE_AA64,
47
.opc0 = 3, .opc1 = 3, .opc2 = 3, .crn = 13, .crm = 0,
48
- .access = PL0_R|PL1_W,
49
+ .access = PL0_R | PL1_W,
50
.fieldoffset = offsetof(CPUARMState, cp15.tpidrro_el[0]),
51
.resetvalue = 0},
52
{ .name = "TPIDRURO", .cp = 15, .crn = 13, .crm = 0, .opc1 = 0, .opc2 = 3,
53
- .access = PL0_R|PL1_W,
54
+ .access = PL0_R | PL1_W,
55
.bank_fieldoffsets = { offsetoflow32(CPUARMState, cp15.tpidruro_s),
56
offsetoflow32(CPUARMState, cp15.tpidruro_ns) },
57
.resetfn = arm_cp_reset_ignore },
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo cache_block_ops_cp_reginfo[] = {
59
.resetvalue = 0 },
60
/* The cache ops themselves: these all NOP for QEMU */
61
{ .name = "IICR", .cp = 15, .crm = 5, .opc1 = 0,
62
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
63
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
64
{ .name = "IDCR", .cp = 15, .crm = 6, .opc1 = 0,
65
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
66
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
67
{ .name = "CDCR", .cp = 15, .crm = 12, .opc1 = 0,
68
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
69
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
70
{ .name = "PIR", .cp = 15, .crm = 12, .opc1 = 1,
71
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
72
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
73
{ .name = "PDR", .cp = 15, .crm = 12, .opc1 = 2,
74
- .access = PL0_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
75
+ .access = PL0_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
76
{ .name = "CIDCR", .cp = 15, .crm = 14, .opc1 = 0,
77
- .access = PL1_W, .type = ARM_CP_NOP|ARM_CP_64BIT },
78
+ .access = PL1_W, .type = ARM_CP_NOP | ARM_CP_64BIT },
79
};
80
81
static const ARMCPRegInfo cache_test_clean_cp_reginfo[] = {
82
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
83
ARMCPRegInfo cbar = {
84
.name = "CBAR",
85
.cp = 15, .crn = 15, .crm = 0, .opc1 = 4, .opc2 = 0,
86
- .access = PL1_R|PL3_W, .resetvalue = cpu->reset_cbar,
87
+ .access = PL1_R | PL3_W, .resetvalue = cpu->reset_cbar,
88
.fieldoffset = offsetof(CPUARMState,
89
cp15.c15_config_base_address)
90
};
91
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
92
return;
93
94
if (old_mode == ARM_CPU_MODE_FIQ) {
95
- memcpy (env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
96
- memcpy (env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
97
+ memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
98
+ memcpy(env->regs + 8, env->usr_regs, 5 * sizeof(uint32_t));
99
} else if (mode == ARM_CPU_MODE_FIQ) {
100
- memcpy (env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
101
- memcpy (env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t));
102
+ memcpy(env->usr_regs, env->regs + 8, 5 * sizeof(uint32_t));
103
+ memcpy(env->regs + 8, env->fiq_regs, 5 * sizeof(uint32_t));
35
}
104
}
36
105
37
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
106
i = bank_number(old_mode);
38
+ switch (reg) {
107
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
39
+ case 0x88: /* MSP_NS */
108
RESULT(sum, n, 16); \
40
+ if (!env->v7m.secure) {
109
if (sum >= 0) \
41
+ return 0;
110
ge |= 3 << (n * 2); \
42
+ }
111
- } while(0)
43
+ return env->v7m.other_ss_msp;
112
+ } while (0)
44
+ case 0x89: /* PSP_NS */
113
45
+ if (!env->v7m.secure) {
114
#define SARITH8(a, b, n, op) do { \
46
+ return 0;
115
int32_t sum; \
47
+ }
116
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
48
+ return env->v7m.other_ss_psp;
117
RESULT(sum, n, 8); \
49
+ case 0x90: /* PRIMASK_NS */
118
if (sum >= 0) \
50
+ if (!env->v7m.secure) {
119
ge |= 1 << n; \
51
+ return 0;
120
- } while(0)
52
+ }
121
+ } while (0)
53
+ return env->v7m.primask[M_REG_NS];
122
54
+ case 0x91: /* BASEPRI_NS */
123
55
+ if (!env->v7m.secure) {
124
#define ADD16(a, b, n) SARITH16(a, b, n, +)
56
+ return 0;
125
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
57
+ }
126
RESULT(sum, n, 16); \
58
+ return env->v7m.basepri[M_REG_NS];
127
if ((sum >> 16) == 1) \
59
+ case 0x93: /* FAULTMASK_NS */
128
ge |= 3 << (n * 2); \
60
+ if (!env->v7m.secure) {
129
- } while(0)
61
+ return 0;
130
+ } while (0)
62
+ }
131
63
+ return env->v7m.faultmask[M_REG_NS];
132
#define ADD8(a, b, n) do { \
64
+ case 0x98: /* SP_NS */
133
uint32_t sum; \
65
+ {
134
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
66
+ /* This gives the non-secure SP selected based on whether we're
135
RESULT(sum, n, 8); \
67
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
136
if ((sum >> 8) == 1) \
68
+ */
137
ge |= 1 << n; \
69
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
138
- } while(0)
70
+
139
+ } while (0)
71
+ if (!env->v7m.secure) {
140
72
+ return 0;
141
#define SUB16(a, b, n) do { \
73
+ }
142
uint32_t sum; \
74
+ if (!arm_v7m_is_handler_mode(env) && spsel) {
143
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
75
+ return env->v7m.other_ss_psp;
144
RESULT(sum, n, 16); \
76
+ } else {
145
if ((sum >> 16) == 0) \
77
+ return env->v7m.other_ss_msp;
146
ge |= 3 << (n * 2); \
78
+ }
147
- } while(0)
79
+ }
148
+ } while (0)
80
+ default:
149
81
+ break;
150
#define SUB8(a, b, n) do { \
82
+ }
151
uint32_t sum; \
83
+ }
152
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
84
+
153
RESULT(sum, n, 8); \
85
switch (reg) {
154
if ((sum >> 8) == 0) \
86
case 8: /* MSP */
155
ge |= 1 << n; \
87
return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
156
- } while(0)
88
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
157
+ } while (0)
89
return;
158
90
}
159
#define PFX u
91
160
#define ARITH_GE
92
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
93
+ switch (reg) {
94
+ case 0x88: /* MSP_NS */
95
+ if (!env->v7m.secure) {
96
+ return;
97
+ }
98
+ env->v7m.other_ss_msp = val;
99
+ return;
100
+ case 0x89: /* PSP_NS */
101
+ if (!env->v7m.secure) {
102
+ return;
103
+ }
104
+ env->v7m.other_ss_psp = val;
105
+ return;
106
+ case 0x90: /* PRIMASK_NS */
107
+ if (!env->v7m.secure) {
108
+ return;
109
+ }
110
+ env->v7m.primask[M_REG_NS] = val & 1;
111
+ return;
112
+ case 0x91: /* BASEPRI_NS */
113
+ if (!env->v7m.secure) {
114
+ return;
115
+ }
116
+ env->v7m.basepri[M_REG_NS] = val & 0xff;
117
+ return;
118
+ case 0x93: /* FAULTMASK_NS */
119
+ if (!env->v7m.secure) {
120
+ return;
121
+ }
122
+ env->v7m.faultmask[M_REG_NS] = val & 1;
123
+ return;
124
+ case 0x98: /* SP_NS */
125
+ {
126
+ /* This gives the non-secure SP selected based on whether we're
127
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
128
+ */
129
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
130
+
131
+ if (!env->v7m.secure) {
132
+ return;
133
+ }
134
+ if (!arm_v7m_is_handler_mode(env) && spsel) {
135
+ env->v7m.other_ss_psp = val;
136
+ } else {
137
+ env->v7m.other_ss_msp = val;
138
+ }
139
+ return;
140
+ }
141
+ default:
142
+ break;
143
+ }
144
+ }
145
+
146
switch (reg) {
147
case 0 ... 7: /* xPSR sub-fields */
148
/* only APSR is actually writable */
149
--
2.7.4
--
2.25.1
1
In the A64 decoder, we have a lot of references to section numbers
1
From: Fabiano Rosas <farosas@suse.de>
2
from version A.a of the v8A ARM ARM (DDI0487). This version of the
3
document is now long obsolete (we are currently on revision B.a),
4
and various intervening versions renumbered all the sections.
5
2
6
The most recent B.a version of the document doesn't assign
3
Fix this:
7
section numbers at all to the individual instruction classes
4
ERROR: braces {} are necessary for all arms of this statement
8
in the way that the various A.x versions did. The simplest thing
9
to do is just to delete all the out of date C.x.x references.
10
5
6
Signed-off-by: Fabiano Rosas <farosas@suse.de>
7
Reviewed-by: Claudio Fontana <cfontana@suse.de>
8
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
9
Message-id: 20221213190537.511-4-farosas@suse.de
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Message-id: 20170915150849.23557-1-peter.maydell@linaro.org
14
---
11
---
15
target/arm/translate-a64.c | 227 +++++++++++++++++++++++----------------------
12
target/arm/helper.c | 67 ++++++++++++++++++++++++++++-----------------
16
1 file changed, 114 insertions(+), 113 deletions(-)
13
1 file changed, 42 insertions(+), 25 deletions(-)
17
14
18
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate-a64.c
17
--- a/target/arm/helper.c
21
+++ b/target/arm/translate-a64.c
18
+++ b/target/arm/helper.c
22
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
19
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
20
env->CF = (val >> 29) & 1;
21
env->VF = (val << 3) & 0x80000000;
22
}
23
- if (mask & CPSR_Q)
24
+ if (mask & CPSR_Q) {
25
env->QF = ((val & CPSR_Q) != 0);
26
- if (mask & CPSR_T)
27
+ }
28
+ if (mask & CPSR_T) {
29
env->thumb = ((val & CPSR_T) != 0);
30
+ }
31
if (mask & CPSR_IT_0_1) {
32
env->condexec_bits &= ~3;
33
env->condexec_bits |= (val >> 25) & 3;
34
@@ -XXX,XX +XXX,XX @@ static void switch_mode(CPUARMState *env, int mode)
35
int i;
36
37
old_mode = env->uncached_cpsr & CPSR_M;
38
- if (mode == old_mode)
39
+ if (mode == old_mode) {
40
return;
41
+ }
42
43
if (old_mode == ARM_CPU_MODE_FIQ) {
44
memcpy(env->fiq_regs, env->regs + 8, 5 * sizeof(uint32_t));
45
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
46
new_mode = ARM_CPU_MODE_UND;
47
addr = 0x04;
48
mask = CPSR_I;
49
- if (env->thumb)
50
+ if (env->thumb) {
51
offset = 2;
52
- else
53
+ } else {
54
offset = 4;
55
+ }
56
break;
57
case EXCP_SWI:
58
new_mode = ARM_CPU_MODE_SVC;
59
@@ -XXX,XX +XXX,XX @@ static inline uint16_t add16_sat(uint16_t a, uint16_t b)
60
61
res = a + b;
62
if (((res ^ a) & 0x8000) && !((a ^ b) & 0x8000)) {
63
- if (a & 0x8000)
64
+ if (a & 0x8000) {
65
res = 0x8000;
66
- else
67
+ } else {
68
res = 0x7fff;
69
+ }
70
}
71
return res;
23
}
72
}
24
73
@@ -XXX,XX +XXX,XX @@ static inline uint8_t add8_sat(uint8_t a, uint8_t b)
25
/*
74
26
- * the instruction disassembly implemented here matches
75
res = a + b;
27
- * the instruction encoding classifications in chapter 3 (C3)
76
if (((res ^ a) & 0x80) && !((a ^ b) & 0x80)) {
28
- * of the ARM Architecture Reference Manual (DDI0487A_a)
77
- if (a & 0x80)
29
+ * The instruction disassembly implemented here matches
78
+ if (a & 0x80) {
30
+ * the instruction encoding classifications in chapter C4
79
res = 0x80;
31
+ * of the ARM Architecture Reference Manual (DDI0487B_a);
80
- else
32
+ * classification names and decode diagrams here should generally
81
+ } else {
33
+ * match up with those in the manual.
82
res = 0x7f;
34
*/
83
+ }
35
36
-/* C3.2.7 Unconditional branch (immediate)
37
+/* Unconditional branch (immediate)
38
* 31 30 26 25 0
39
* +----+-----------+-------------------------------------+
40
* | op | 0 0 1 0 1 | imm26 |
41
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
42
uint64_t addr = s->pc + sextract32(insn, 0, 26) * 4 - 4;
43
44
if (insn & (1U << 31)) {
45
- /* C5.6.26 BL Branch with link */
46
+ /* BL Branch with link */
47
tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
48
}
84
}
49
85
return res;
50
- /* C5.6.20 B Branch / C5.6.26 BL Branch with link */
51
+ /* B Branch / BL Branch with link */
52
gen_goto_tb(s, 0, addr);
53
}
86
}
54
87
@@ -XXX,XX +XXX,XX @@ static inline uint16_t sub16_sat(uint16_t a, uint16_t b)
55
-/* C3.2.1 Compare & branch (immediate)
88
56
+/* Compare and branch (immediate)
89
res = a - b;
57
* 31 30 25 24 23 5 4 0
90
if (((res ^ a) & 0x8000) && ((a ^ b) & 0x8000)) {
58
* +----+-------------+----+---------------------+--------+
91
- if (a & 0x8000)
59
* | sf | 0 1 1 0 1 0 | op | imm19 | Rt |
92
+ if (a & 0x8000) {
60
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
93
res = 0x8000;
61
gen_goto_tb(s, 1, addr);
94
- else
95
+ } else {
96
res = 0x7fff;
97
+ }
98
}
99
return res;
62
}
100
}
63
101
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_sat(uint8_t a, uint8_t b)
64
-/* C3.2.5 Test & branch (immediate)
102
65
+/* Test and branch (immediate)
103
res = a - b;
66
* 31 30 25 24 23 19 18 5 4 0
104
if (((res ^ a) & 0x80) && ((a ^ b) & 0x80)) {
67
* +----+-------------+----+-------+-------------+------+
105
- if (a & 0x80)
68
* | b5 | 0 1 1 0 1 1 | op | b40 | imm14 | Rt |
106
+ if (a & 0x80) {
69
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
107
res = 0x80;
70
gen_goto_tb(s, 1, addr);
108
- else
109
+ } else {
110
res = 0x7f;
111
+ }
112
}
113
return res;
71
}
114
}
72
115
@@ -XXX,XX +XXX,XX @@ static inline uint16_t add16_usat(uint16_t a, uint16_t b)
73
-/* C3.2.2 / C5.6.19 Conditional branch (immediate)
116
{
74
+/* Conditional branch (immediate)
117
uint16_t res;
75
* 31 25 24 23 5 4 3 0
118
res = a + b;
76
* +---------------+----+---------------------+----+------+
119
- if (res < a)
77
* | 0 1 0 1 0 1 0 | o1 | imm19 | o0 | cond |
120
+ if (res < a) {
78
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
121
res = 0xffff;
79
}
122
+ }
123
return res;
80
}
124
}
81
125
82
-/* C5.6.68 HINT */
126
static inline uint16_t sub16_usat(uint16_t a, uint16_t b)
83
+/* HINT instruction group, including various allocated HINTs */
84
static void handle_hint(DisasContext *s, uint32_t insn,
85
unsigned int op1, unsigned int op2, unsigned int crm)
86
{
127
{
87
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
128
- if (a > b)
88
}
129
+ if (a > b) {
130
return a - b;
131
- else
132
+ } else {
133
return 0;
134
+ }
89
}
135
}
90
136
91
-/* C5.6.130 MSR (immediate) - move immediate to processor state field */
137
static inline uint8_t add8_usat(uint8_t a, uint8_t b)
92
+/* MSR (immediate) - move immediate to processor state field */
93
static void handle_msr_i(DisasContext *s, uint32_t insn,
94
unsigned int op1, unsigned int op2, unsigned int crm)
95
{
138
{
96
@@ -XXX,XX +XXX,XX @@ static void gen_set_nzcv(TCGv_i64 tcg_rt)
139
uint8_t res;
97
tcg_temp_free_i32(nzcv);
140
res = a + b;
141
- if (res < a)
142
+ if (res < a) {
143
res = 0xff;
144
+ }
145
return res;
98
}
146
}
99
147
100
-/* C5.6.129 MRS - move from system register
148
static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
101
- * C5.6.131 MSR (register) - move to system register
149
{
102
- * C5.6.204 SYS
150
- if (a > b)
103
- * C5.6.205 SYSL
151
+ if (a > b) {
104
+/* MRS - move from system register
152
return a - b;
105
+ * MSR (register) - move to system register
153
- else
106
+ * SYS
154
+ } else {
107
+ * SYSL
155
return 0;
108
* These are all essentially the same insn in 'read' and 'write'
156
+ }
109
* versions, with varying op0 fields.
110
*/
111
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
112
}
113
}
157
}
114
158
115
-/* C3.2.4 System
159
#define ADD16(a, b, n) RESULT(add16_usat(a, b), n, 16);
116
+/* System
160
@@ -XXX,XX +XXX,XX @@ static inline uint8_t sub8_usat(uint8_t a, uint8_t b)
117
* 31 22 21 20 19 18 16 15 12 11 8 7 5 4 0
161
118
* +---------------------+---+-----+-----+-------+-------+-----+------+
162
static inline uint8_t do_usad(uint8_t a, uint8_t b)
119
* | 1 1 0 1 0 1 0 1 0 0 | L | op0 | op1 | CRn | CRm | op2 | Rt |
163
{
120
@@ -XXX,XX +XXX,XX @@ static void disas_system(DisasContext *s, uint32_t insn)
164
- if (a > b)
121
return;
165
+ if (a > b) {
122
}
166
return a - b;
123
switch (crn) {
167
- else
124
- case 2: /* C5.6.68 HINT */
168
+ } else {
125
+ case 2: /* HINT (including allocated hints like NOP, YIELD, etc) */
169
return b - a;
126
handle_hint(s, insn, op1, op2, crm);
170
+ }
127
break;
128
case 3: /* CLREX, DSB, DMB, ISB */
129
handle_sync(s, insn, op1, op2, crm);
130
break;
131
- case 4: /* C5.6.130 MSR (immediate) */
132
+ case 4: /* MSR (immediate) */
133
handle_msr_i(s, insn, op1, op2, crm);
134
break;
135
default:
136
@@ -XXX,XX +XXX,XX @@ static void disas_system(DisasContext *s, uint32_t insn)
137
handle_sys(s, insn, l, op0, op1, op2, crn, crm, rt);
138
}
171
}
139
172
140
-/* C3.2.3 Exception generation
173
/* Unsigned sum of absolute byte differences. */
141
+/* Exception generation
174
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sel_flags)(uint32_t flags, uint32_t a, uint32_t b)
142
*
175
uint32_t mask;
143
* 31 24 23 21 20 5 4 2 1 0
176
144
* +-----------------+-----+------------------------+-----+----+
177
mask = 0;
145
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
178
- if (flags & 1)
146
}
179
+ if (flags & 1) {
180
mask |= 0xff;
181
- if (flags & 2)
182
+ }
183
+ if (flags & 2) {
184
mask |= 0xff00;
185
- if (flags & 4)
186
+ }
187
+ if (flags & 4) {
188
mask |= 0xff0000;
189
- if (flags & 8)
190
+ }
191
+ if (flags & 8) {
192
mask |= 0xff000000;
193
+ }
194
return (a & mask) | (b & ~mask);
147
}
195
}
148
196
149
-/* C3.2.7 Unconditional branch (register)
150
+/* Unconditional branch (register)
151
* 31 25 24 21 20 16 15 10 9 5 4 0
152
* +---------------+-------+-------+-------+------+-------+
153
* | 1 1 0 1 0 1 1 | opc | op2 | op3 | Rn | op4 |
154
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
155
s->base.is_jmp = DISAS_JUMP;
156
}
157
158
-/* C3.2 Branches, exception generating and system instructions */
159
+/* Branches, exception generating and system instructions */
160
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
161
{
162
switch (extract32(insn, 25, 7)) {
163
@@ -XXX,XX +XXX,XX @@ static bool disas_ldst_compute_iss_sf(int size, bool is_signed, int opc)
164
return regsize == 64;
165
}
166
167
-/* C3.3.6 Load/store exclusive
168
+/* Load/store exclusive
169
*
170
* 31 30 29 24 23 22 21 20 16 15 14 10 9 5 4 0
171
* +-----+-------------+----+---+----+------+----+-------+------+------+
172
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
173
}
174
175
/*
176
- * C3.3.5 Load register (literal)
177
+ * Load register (literal)
178
*
179
* 31 30 29 27 26 25 24 23 5 4 0
180
* +-----+-------+---+-----+-------------------+-------+
181
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
182
}
183
184
/*
185
- * C5.6.80 LDNP (Load Pair - non-temporal hint)
186
- * C5.6.81 LDP (Load Pair - non vector)
187
- * C5.6.82 LDPSW (Load Pair Signed Word - non vector)
188
- * C5.6.176 STNP (Store Pair - non-temporal hint)
189
- * C5.6.177 STP (Store Pair - non vector)
190
- * C6.3.165 LDNP (Load Pair of SIMD&FP - non-temporal hint)
191
- * C6.3.165 LDP (Load Pair of SIMD&FP)
192
- * C6.3.284 STNP (Store Pair of SIMD&FP - non-temporal hint)
193
- * C6.3.284 STP (Store Pair of SIMD&FP)
194
+ * LDNP (Load Pair - non-temporal hint)
195
+ * LDP (Load Pair - non vector)
196
+ * LDPSW (Load Pair Signed Word - non vector)
197
+ * STNP (Store Pair - non-temporal hint)
198
+ * STP (Store Pair - non vector)
199
+ * LDNP (Load Pair of SIMD&FP - non-temporal hint)
200
+ * LDP (Load Pair of SIMD&FP)
201
+ * STNP (Store Pair of SIMD&FP - non-temporal hint)
202
+ * STP (Store Pair of SIMD&FP)
203
*
204
* 31 30 29 27 26 25 24 23 22 21 15 14 10 9 5 4 0
205
* +-----+-------+---+---+-------+---+-----------------------------+
206
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
207
}
208
209
/*
210
- * C3.3.8 Load/store (immediate post-indexed)
211
- * C3.3.9 Load/store (immediate pre-indexed)
212
- * C3.3.12 Load/store (unscaled immediate)
213
+ * Load/store (immediate post-indexed)
214
+ * Load/store (immediate pre-indexed)
215
+ * Load/store (unscaled immediate)
216
*
217
* 31 30 29 27 26 25 24 23 22 21 20 12 11 10 9 5 4 0
218
* +----+-------+---+-----+-----+---+--------+-----+------+------+
219
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
220
}
221
222
/*
223
- * C3.3.10 Load/store (register offset)
224
+ * Load/store (register offset)
225
*
226
* 31 30 29 27 26 25 24 23 22 21 20 16 15 13 12 11 10 9 5 4 0
227
* +----+-------+---+-----+-----+---+------+-----+--+-----+----+----+
228
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
229
}
230
231
/*
232
- * C3.3.13 Load/store (unsigned immediate)
233
+ * Load/store (unsigned immediate)
234
*
235
* 31 30 29 27 26 25 24 23 22 21 10 9 5
236
* +----+-------+---+-----+-----+------------+-------+------+
237
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
238
}
239
}
240
241
-/* C3.3.1 AdvSIMD load/store multiple structures
242
+/* AdvSIMD load/store multiple structures
243
*
244
* 31 30 29 23 22 21 16 15 12 11 10 9 5 4 0
245
* +---+---+---------------+---+-------------+--------+------+------+------+
246
* | 0 | Q | 0 0 1 1 0 0 0 | L | 0 0 0 0 0 0 | opcode | size | Rn | Rt |
247
* +---+---+---------------+---+-------------+--------+------+------+------+
248
*
249
- * C3.3.2 AdvSIMD load/store multiple structures (post-indexed)
250
+ * AdvSIMD load/store multiple structures (post-indexed)
251
*
252
* 31 30 29 23 22 21 20 16 15 12 11 10 9 5 4 0
253
* +---+---+---------------+---+---+---------+--------+------+------+------+
254
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
255
tcg_temp_free_i64(tcg_addr);
256
}
257
258
-/* C3.3.3 AdvSIMD load/store single structure
259
+/* AdvSIMD load/store single structure
260
*
261
* 31 30 29 23 22 21 20 16 15 13 12 11 10 9 5 4 0
262
* +---+---+---------------+-----+-----------+-----+---+------+------+------+
263
* | 0 | Q | 0 0 1 1 0 1 0 | L R | 0 0 0 0 0 | opc | S | size | Rn | Rt |
264
* +---+---+---------------+-----+-----------+-----+---+------+------+------+
265
*
266
- * C3.3.4 AdvSIMD load/store single structure (post-indexed)
267
+ * AdvSIMD load/store single structure (post-indexed)
268
*
269
* 31 30 29 23 22 21 20 16 15 13 12 11 10 9 5 4 0
270
* +---+---+---------------+-----+-----------+-----+---+------+------+------+
271
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
272
tcg_temp_free_i64(tcg_addr);
273
}
274
275
-/* C3.3 Loads and stores */
276
+/* Loads and stores */
277
static void disas_ldst(DisasContext *s, uint32_t insn)
278
{
279
switch (extract32(insn, 24, 6)) {
280
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
281
}
282
}
283
284
-/* C3.4.6 PC-rel. addressing
285
+/* PC-rel. addressing
286
* 31 30 29 28 24 23 5 4 0
287
* +----+-------+-----------+-------------------+------+
288
* | op | immlo | 1 0 0 0 0 | immhi | Rd |
289
@@ -XXX,XX +XXX,XX @@ static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
290
}
291
292
/*
293
- * C3.4.1 Add/subtract (immediate)
294
+ * Add/subtract (immediate)
295
*
296
* 31 30 29 28 24 23 22 21 10 9 5 4 0
297
* +--+--+--+-----------+-----+-------------+-----+-----+
298
@@ -XXX,XX +XXX,XX @@ static bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
299
return true;
300
}
301
302
-/* C3.4.4 Logical (immediate)
303
+/* Logical (immediate)
304
* 31 30 29 28 23 22 21 16 15 10 9 5 4 0
305
* +----+-----+-------------+---+------+------+------+------+
306
* | sf | opc | 1 0 0 1 0 0 | N | immr | imms | Rn | Rd |
307
@@ -XXX,XX +XXX,XX @@ static void disas_logic_imm(DisasContext *s, uint32_t insn)
308
}
309
310
/*
311
- * C3.4.5 Move wide (immediate)
312
+ * Move wide (immediate)
313
*
314
* 31 30 29 28 23 22 21 20 5 4 0
315
* +--+-----+-------------+-----+----------------+------+
316
@@ -XXX,XX +XXX,XX @@ static void disas_movw_imm(DisasContext *s, uint32_t insn)
317
}
318
}
319
320
-/* C3.4.2 Bitfield
321
+/* Bitfield
322
* 31 30 29 28 23 22 21 16 15 10 9 5 4 0
323
* +----+-----+-------------+---+------+------+------+------+
324
* | sf | opc | 1 0 0 1 1 0 | N | immr | imms | Rn | Rd |
325
@@ -XXX,XX +XXX,XX @@ static void disas_bitfield(DisasContext *s, uint32_t insn)
326
}
327
}
328
329
-/* C3.4.3 Extract
330
+/* Extract
331
* 31 30 29 28 23 22 21 20 16 15 10 9 5 4 0
332
* +----+------+-------------+---+----+------+--------+------+------+
333
* | sf | op21 | 1 0 0 1 1 1 | N | o0 | Rm | imms | Rn | Rd |
334
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
335
}
336
}
337
338
-/* C3.4 Data processing - immediate */
339
+/* Data processing - immediate */
340
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
341
{
342
switch (extract32(insn, 23, 6)) {
343
@@ -XXX,XX +XXX,XX @@ static void shift_reg_imm(TCGv_i64 dst, TCGv_i64 src, int sf,
344
}
345
}
346
347
-/* C3.5.10 Logical (shifted register)
348
+/* Logical (shifted register)
349
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
350
* +----+-----+-----------+-------+---+------+--------+------+------+
351
* | sf | opc | 0 1 0 1 0 | shift | N | Rm | imm6 | Rn | Rd |
352
@@ -XXX,XX +XXX,XX @@ static void disas_logic_reg(DisasContext *s, uint32_t insn)
353
}
354
355
/*
356
- * C3.5.1 Add/subtract (extended register)
357
+ * Add/subtract (extended register)
358
*
359
* 31|30|29|28 24|23 22|21|20 16|15 13|12 10|9 5|4 0|
360
* +--+--+--+-----------+-----+--+-------+------+------+----+----+
361
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_ext_reg(DisasContext *s, uint32_t insn)
362
}
363
364
/*
365
- * C3.5.2 Add/subtract (shifted register)
366
+ * Add/subtract (shifted register)
367
*
368
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
369
* +--+--+--+-----------+-----+--+-------+---------+------+------+
370
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_reg(DisasContext *s, uint32_t insn)
371
tcg_temp_free_i64(tcg_result);
372
}
373
374
-/* C3.5.9 Data-processing (3 source)
375
-
376
- 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
377
- +--+------+-----------+------+------+----+------+------+------+
378
- |sf| op54 | 1 1 0 1 1 | op31 | Rm | o0 | Ra | Rn | Rd |
379
- +--+------+-----------+------+------+----+------+------+------+
380
-
381
+/* Data-processing (3 source)
382
+ *
383
+ * 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
384
+ * +--+------+-----------+------+------+----+------+------+------+
385
+ * |sf| op54 | 1 1 0 1 1 | op31 | Rm | o0 | Ra | Rn | Rd |
386
+ * +--+------+-----------+------+------+----+------+------+------+
387
*/
388
static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
389
{
390
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
391
tcg_temp_free_i64(tcg_tmp);
392
}
393
394
-/* C3.5.3 - Add/subtract (with carry)
395
+/* Add/subtract (with carry)
396
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
397
* +--+--+--+------------------------+------+---------+------+-----+
398
* |sf|op| S| 1 1 0 1 0 0 0 0 | rm | opcode2 | Rn | Rd |
399
@@ -XXX,XX +XXX,XX @@ static void disas_adc_sbc(DisasContext *s, uint32_t insn)
400
}
401
}
402
403
-/* C3.5.4 - C3.5.5 Conditional compare (immediate / register)
404
+/* Conditional compare (immediate / register)
405
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
406
* +--+--+--+------------------------+--------+------+----+--+------+--+-----+
407
* |sf|op| S| 1 1 0 1 0 0 1 0 |imm5/rm | cond |i/r |o2| Rn |o3|nzcv |
408
@@ -XXX,XX +XXX,XX @@ static void disas_cc(DisasContext *s, uint32_t insn)
409
tcg_temp_free_i32(tcg_t2);
410
}
411
412
-/* C3.5.6 Conditional select
413
+/* Conditional select
414
* 31 30 29 28 21 20 16 15 12 11 10 9 5 4 0
415
* +----+----+---+-----------------+------+------+-----+------+------+
416
* | sf | op | S | 1 1 0 1 0 1 0 0 | Rm | cond | op2 | Rn | Rd |
417
@@ -XXX,XX +XXX,XX @@ static void handle_rbit(DisasContext *s, unsigned int sf,
418
}
419
}
420
421
-/* C5.6.149 REV with sf==1, opcode==3 ("REV64") */
422
+/* REV with sf==1, opcode==3 ("REV64") */
423
static void handle_rev64(DisasContext *s, unsigned int sf,
424
unsigned int rn, unsigned int rd)
425
{
426
@@ -XXX,XX +XXX,XX @@ static void handle_rev64(DisasContext *s, unsigned int sf,
427
tcg_gen_bswap64_i64(cpu_reg(s, rd), cpu_reg(s, rn));
428
}
429
430
-/* C5.6.149 REV with sf==0, opcode==2
431
- * C5.6.151 REV32 (sf==1, opcode==2)
432
+/* REV with sf==0, opcode==2
433
+ * REV32 (sf==1, opcode==2)
434
*/
435
static void handle_rev32(DisasContext *s, unsigned int sf,
436
unsigned int rn, unsigned int rd)
437
@@ -XXX,XX +XXX,XX @@ static void handle_rev32(DisasContext *s, unsigned int sf,
438
}
439
}
440
441
-/* C5.6.150 REV16 (opcode==1) */
442
+/* REV16 (opcode==1) */
443
static void handle_rev16(DisasContext *s, unsigned int sf,
444
unsigned int rn, unsigned int rd)
445
{
446
@@ -XXX,XX +XXX,XX @@ static void handle_rev16(DisasContext *s, unsigned int sf,
447
tcg_temp_free_i64(tcg_tmp);
448
}
449
450
-/* C3.5.7 Data-processing (1 source)
451
+/* Data-processing (1 source)
452
* 31 30 29 28 21 20 16 15 10 9 5 4 0
453
* +----+---+---+-----------------+---------+--------+------+------+
454
* | sf | 1 | S | 1 1 0 1 0 1 1 0 | opcode2 | opcode | Rn | Rd |
455
@@ -XXX,XX +XXX,XX @@ static void handle_div(DisasContext *s, bool is_signed, unsigned int sf,
456
}
457
}
458
459
-/* C5.6.115 LSLV, C5.6.118 LSRV, C5.6.17 ASRV, C5.6.154 RORV */
460
+/* LSLV, LSRV, ASRV, RORV */
461
static void handle_shift_reg(DisasContext *s,
462
enum a64_shift_type shift_type, unsigned int sf,
463
unsigned int rm, unsigned int rn, unsigned int rd)
464
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
465
tcg_temp_free_i32(tcg_bytes);
466
}
467
468
-/* C3.5.8 Data-processing (2 source)
469
+/* Data-processing (2 source)
470
* 31 30 29 28 21 20 16 15 10 9 5 4 0
471
* +----+---+---+-----------------+------+--------+------+------+
472
* | sf | 0 | S | 1 1 0 1 0 1 1 0 | Rm | opcode | Rn | Rd |
473
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
474
}
475
}
476
477
-/* C3.5 Data processing - register */
478
+/* Data processing - register */
479
static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
480
{
481
switch (extract32(insn, 24, 5)) {
482
@@ -XXX,XX +XXX,XX @@ static void handle_fp_compare(DisasContext *s, bool is_double,
483
tcg_temp_free_i64(tcg_flags);
484
}
485
486
-/* C3.6.22 Floating point compare
487
+/* Floating point compare
488
* 31 30 29 28 24 23 22 21 20 16 15 14 13 10 9 5 4 0
489
* +---+---+---+-----------+------+---+------+-----+---------+------+-------+
490
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | op | 1 0 0 0 | Rn | op2 |
491
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
492
handle_fp_compare(s, type, rn, rm, opc & 1, opc & 2);
493
}
494
495
-/* C3.6.23 Floating point conditional compare
496
+/* Floating point conditional compare
497
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
498
* +---+---+---+-----------+------+---+------+------+-----+------+----+------+
499
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 0 1 | Rn | op | nzcv |
500
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
501
}
502
}
503
504
-/* C3.6.24 Floating point conditional select
505
+/* Floating point conditional select
506
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
507
* +---+---+---+-----------+------+---+------+------+-----+------+------+
508
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 1 1 | Rn | Rd |
509
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
510
tcg_temp_free_i64(t_true);
511
}
512
513
-/* C3.6.25 Floating-point data-processing (1 source) - single precision */
514
+/* Floating-point data-processing (1 source) - single precision */
515
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
516
{
517
TCGv_ptr fpst;
518
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
519
tcg_temp_free_i32(tcg_res);
520
}
521
522
-/* C3.6.25 Floating-point data-processing (1 source) - double precision */
523
+/* Floating-point data-processing (1 source) - double precision */
524
static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
525
{
526
TCGv_ptr fpst;
527
@@ -XXX,XX +XXX,XX @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
528
}
529
}
530
531
-/* C3.6.25 Floating point data-processing (1 source)
532
+/* Floating point data-processing (1 source)
533
* 31 30 29 28 24 23 22 21 20 15 14 10 9 5 4 0
534
* +---+---+---+-----------+------+---+--------+-----------+------+------+
535
* | M | 0 | S | 1 1 1 1 0 | type | 1 | opcode | 1 0 0 0 0 | Rn | Rd |
536
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
537
}
538
}
539
540
-/* C3.6.26 Floating-point data-processing (2 source) - single precision */
541
+/* Floating-point data-processing (2 source) - single precision */
542
static void handle_fp_2src_single(DisasContext *s, int opcode,
543
int rd, int rn, int rm)
544
{
545
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
546
tcg_temp_free_i32(tcg_res);
547
}
548
549
-/* C3.6.26 Floating-point data-processing (2 source) - double precision */
550
+/* Floating-point data-processing (2 source) - double precision */
551
static void handle_fp_2src_double(DisasContext *s, int opcode,
552
int rd, int rn, int rm)
553
{
554
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
555
tcg_temp_free_i64(tcg_res);
556
}
557
558
-/* C3.6.26 Floating point data-processing (2 source)
559
+/* Floating point data-processing (2 source)
560
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
561
* +---+---+---+-----------+------+---+------+--------+-----+------+------+
562
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | opcode | 1 0 | Rn | Rd |
563
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
564
}
565
}
566
567
-/* C3.6.27 Floating-point data-processing (3 source) - single precision */
568
+/* Floating-point data-processing (3 source) - single precision */
569
static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
570
int rd, int rn, int rm, int ra)
571
{
572
@@ -XXX,XX +XXX,XX @@ static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
573
tcg_temp_free_i32(tcg_res);
574
}
575
576
-/* C3.6.27 Floating-point data-processing (3 source) - double precision */
577
+/* Floating-point data-processing (3 source) - double precision */
578
static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
579
int rd, int rn, int rm, int ra)
580
{
581
@@ -XXX,XX +XXX,XX @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
582
tcg_temp_free_i64(tcg_res);
583
}
584
585
-/* C3.6.27 Floating point data-processing (3 source)
586
+/* Floating point data-processing (3 source)
587
* 31 30 29 28 24 23 22 21 20 16 15 14 10 9 5 4 0
588
* +---+---+---+-----------+------+----+------+----+------+------+------+
589
* | M | 0 | S | 1 1 1 1 1 | type | o1 | Rm | o0 | Ra | Rn | Rd |
590
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
591
}
592
}
593
594
-/* C3.6.28 Floating point immediate
595
+/* Floating point immediate
596
* 31 30 29 28 24 23 22 21 20 13 12 10 9 5 4 0
597
* +---+---+---+-----------+------+---+------------+-------+------+------+
598
* | M | 0 | S | 1 1 1 1 0 | type | 1 | imm8 | 1 0 0 | imm5 | Rd |
599
@@ -XXX,XX +XXX,XX @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
600
tcg_temp_free_i32(tcg_shift);
601
}
602
603
-/* C3.6.29 Floating point <-> fixed point conversions
604
+/* Floating point <-> fixed point conversions
605
* 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
606
* +----+---+---+-----------+------+---+-------+--------+-------+------+------+
607
* | sf | 0 | S | 1 1 1 1 0 | type | 0 | rmode | opcode | scale | Rn | Rd |
608
@@ -XXX,XX +XXX,XX @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
609
}
610
}
611
612
-/* C3.6.30 Floating point <-> integer conversions
613
+/* Floating point <-> integer conversions
614
* 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
615
* +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
616
* | sf | 0 | S | 1 1 1 1 0 | type | 1 | rmode | opc | 0 0 0 0 0 0 | Rn | Rd |
617
@@ -XXX,XX +XXX,XX @@ static void do_ext64(DisasContext *s, TCGv_i64 tcg_left, TCGv_i64 tcg_right,
618
tcg_temp_free_i64(tcg_tmp);
619
}
620
621
-/* C3.6.1 EXT
622
+/* EXT
623
* 31 30 29 24 23 22 21 20 16 15 14 11 10 9 5 4 0
624
* +---+---+-------------+-----+---+------+---+------+---+------+------+
625
* | 0 | Q | 1 0 1 1 1 0 | op2 | 0 | Rm | 0 | imm4 | 0 | Rn | Rd |
626
@@ -XXX,XX +XXX,XX @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
627
tcg_temp_free_i64(tcg_resh);
628
}
629
630
-/* C3.6.2 TBL/TBX
631
+/* TBL/TBX
632
* 31 30 29 24 23 22 21 20 16 15 14 13 12 11 10 9 5 4 0
633
* +---+---+-------------+-----+---+------+---+-----+----+-----+------+------+
634
* | 0 | Q | 0 0 1 1 1 0 | op2 | 0 | Rm | 0 | len | op | 0 0 | Rn | Rd |
635
@@ -XXX,XX +XXX,XX @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
636
tcg_temp_free_i64(tcg_resh);
637
}
638
639
-/* C3.6.3 ZIP/UZP/TRN
640
+/* ZIP/UZP/TRN
641
* 31 30 29 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
642
* +---+---+-------------+------+---+------+---+------------------+------+
643
* | 0 | Q | 0 0 1 1 1 0 | size | 0 | Rm | 0 | opc | 1 0 | Rn | Rd |
644
@@ -XXX,XX +XXX,XX @@ static void do_minmaxop(DisasContext *s, TCGv_i32 tcg_elt1, TCGv_i32 tcg_elt2,
645
}
646
}
647
648
-/* C3.6.4 AdvSIMD across lanes
649
+/* AdvSIMD across lanes
650
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
651
* +---+---+---+-----------+------+-----------+--------+-----+------+------+
652
* | 0 | Q | U | 0 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
653
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
654
tcg_temp_free_i64(tcg_res);
655
}
656
657
-/* C6.3.31 DUP (Element, Vector)
658
+/* DUP (Element, Vector)
659
*
660
* 31 30 29 21 20 16 15 10 9 5 4 0
661
* +---+---+-------------------+--------+-------------+------+------+
662
@@ -XXX,XX +XXX,XX @@ static void handle_simd_dupe(DisasContext *s, int is_q, int rd, int rn,
663
tcg_temp_free_i64(tmp);
664
}
665
666
-/* C6.3.31 DUP (element, scalar)
667
+/* DUP (element, scalar)
668
* 31 21 20 16 15 10 9 5 4 0
669
* +-----------------------+--------+-------------+------+------+
670
* | 0 1 0 1 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 0 1 | Rn | Rd |
671
@@ -XXX,XX +XXX,XX @@ static void handle_simd_dupes(DisasContext *s, int rd, int rn,
672
tcg_temp_free_i64(tmp);
673
}
674
675
-/* C6.3.32 DUP (General)
676
+/* DUP (General)
677
*
678
* 31 30 29 21 20 16 15 10 9 5 4 0
679
* +---+---+-------------------+--------+-------------+------+------+
680
@@ -XXX,XX +XXX,XX @@ static void handle_simd_dupg(DisasContext *s, int is_q, int rd, int rn,
681
}
682
}
683
684
-/* C6.3.150 INS (Element)
685
+/* INS (Element)
686
*
687
* 31 21 20 16 15 14 11 10 9 5 4 0
688
* +-----------------------+--------+------------+---+------+------+
689
@@ -XXX,XX +XXX,XX @@ static void handle_simd_inse(DisasContext *s, int rd, int rn,
690
}
691
692
693
-/* C6.3.151 INS (General)
694
+/* INS (General)
695
*
696
* 31 21 20 16 15 10 9 5 4 0
697
* +-----------------------+--------+-------------+------+------+
698
@@ -XXX,XX +XXX,XX @@ static void handle_simd_insg(DisasContext *s, int rd, int rn, int imm5)
699
}
700
701
/*
702
- * C6.3.321 UMOV (General)
703
- * C6.3.237 SMOV (General)
704
+ * UMOV (General)
705
+ * SMOV (General)
706
*
707
* 31 30 29 21 20 16 15 12 10 9 5 4 0
708
* +---+---+-------------------+--------+-------------+------+------+
709
@@ -XXX,XX +XXX,XX @@ static void handle_simd_umov_smov(DisasContext *s, int is_q, int is_signed,
710
}
711
}
712
713
-/* C3.6.5 AdvSIMD copy
714
+/* AdvSIMD copy
715
* 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
716
* +---+---+----+-----------------+------+---+------+---+------+------+
717
* | 0 | Q | op | 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
718
@@ -XXX,XX +XXX,XX @@ static void disas_simd_copy(DisasContext *s, uint32_t insn)
719
}
720
}
721
722
-/* C3.6.6 AdvSIMD modified immediate
723
+/* AdvSIMD modified immediate
724
* 31 30 29 28 19 18 16 15 12 11 10 9 5 4 0
725
* +---+---+----+---------------------+-----+-------+----+---+-------+------+
726
* | 0 | Q | op | 0 1 1 1 1 0 0 0 0 0 | abc | cmode | o2 | 1 | defgh | Rd |
727
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
728
tcg_temp_free_i64(tcg_imm);
729
}
730
731
-/* C3.6.7 AdvSIMD scalar copy
732
+/* AdvSIMD scalar copy
733
* 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
734
* +-----+----+-----------------+------+---+------+---+------+------+
735
* | 0 1 | op | 1 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
736
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_copy(DisasContext *s, uint32_t insn)
737
handle_simd_dupes(s, rd, rn, imm5);
738
}
739
740
-/* C3.6.8 AdvSIMD scalar pairwise
741
+/* AdvSIMD scalar pairwise
742
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
743
* +-----+---+-----------+------+-----------+--------+-----+------+------+
744
* | 0 1 | U | 1 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
745
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
746
tcg_temp_free_i32(tcg_rmode);
747
}
748
749
-/* C3.6.9 AdvSIMD scalar shift by immediate
750
+/* AdvSIMD scalar shift by immediate
751
* 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
752
* +-----+---+-------------+------+------+--------+---+------+------+
753
* | 0 1 | U | 1 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
754
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
755
}
756
}
757
758
-/* C3.6.10 AdvSIMD scalar three different
759
+/* AdvSIMD scalar three different
760
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
761
* +-----+---+-----------+------+---+------+--------+-----+------+------+
762
* | 0 1 | U | 1 1 1 1 0 | size | 1 | Rm | opcode | 0 0 | Rn | Rd |
763
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
764
}
765
}
766
767
-/* C3.6.11 AdvSIMD scalar three same
768
+/* AdvSIMD scalar three same
769
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
770
* +-----+---+-----------+------+---+------+--------+---+------+------+
771
* | 0 1 | U | 1 1 1 1 0 | size | 1 | Rm | opcode | 1 | Rn | Rd |
772
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
773
}
774
}
775
776
-/* C3.6.12 AdvSIMD scalar two reg misc
777
+/* AdvSIMD scalar two reg misc
778
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
779
* +-----+---+-----------+------+-----------+--------+-----+------+------+
780
* | 0 1 | U | 1 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
781
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
782
}
783
784
785
-/* C3.6.14 AdvSIMD shift by immediate
786
+/* AdvSIMD shift by immediate
787
* 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
788
* +---+---+---+-------------+------+------+--------+---+------+------+
789
* | 0 | Q | U | 0 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
790
@@ -XXX,XX +XXX,XX @@ static void handle_pmull_64(DisasContext *s, int is_q, int rd, int rn, int rm)
791
tcg_temp_free_i64(tcg_res);
792
}
793
794
-/* C3.6.15 AdvSIMD three different
795
+/* AdvSIMD three different
796
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
797
* +---+---+---+-----------+------+---+------+--------+-----+------+------+
798
* | 0 | Q | U | 0 1 1 1 0 | size | 1 | Rm | opcode | 0 0 | Rn | Rd |
799
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
800
}
801
}
802
803
-/* C3.6.16 AdvSIMD three same
804
+/* AdvSIMD three same
805
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
806
* +---+---+---+-----------+------+---+------+--------+---+------+------+
807
* | 0 | Q | U | 0 1 1 1 0 | size | 1 | Rm | opcode | 1 | Rn | Rd |
808
@@ -XXX,XX +XXX,XX @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
809
}
810
}
811
812
-/* C3.6.17 AdvSIMD two reg misc
813
+/* AdvSIMD two reg misc
814
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
815
* +---+---+---+-----------+------+-----------+--------+-----+------+------+
816
* | 0 | Q | U | 0 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
817
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
818
}
819
}
820
821
-/* C3.6.13 AdvSIMD scalar x indexed element
822
+/* AdvSIMD scalar x indexed element
823
* 31 30 29 28 24 23 22 21 20 19 16 15 12 11 10 9 5 4 0
824
* +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
825
* | 0 1 | U | 1 1 1 1 1 | size | L | M | Rm | opc | H | 0 | Rn | Rd |
826
* +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
827
- * C3.6.18 AdvSIMD vector x indexed element
828
+ * AdvSIMD vector x indexed element
829
* 31 30 29 28 24 23 22 21 20 19 16 15 12 11 10 9 5 4 0
830
* +---+---+---+-----------+------+---+---+------+-----+---+---+------+------+
831
* | 0 | Q | U | 0 1 1 1 1 | size | L | M | Rm | opc | H | 0 | Rn | Rd |
832
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
833
}
834
}
835
836
-/* C3.6.19 Crypto AES
837
+/* Crypto AES
838
* 31 24 23 22 21 17 16 12 11 10 9 5 4 0
839
* +-----------------+------+-----------+--------+-----+------+------+
840
* | 0 1 0 0 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
841
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
842
tcg_temp_free_i32(tcg_decrypt);
843
}
844
845
-/* C3.6.20 Crypto three-reg SHA
846
+/* Crypto three-reg SHA
847
* 31 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
848
* +-----------------+------+---+------+---+--------+-----+------+------+
849
* | 0 1 0 1 1 1 1 0 | size | 0 | Rm | 0 | opcode | 0 0 | Rn | Rd |
850
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
851
tcg_temp_free_i32(tcg_rm_regno);
852
}
853
854
-/* C3.6.21 Crypto two-reg SHA
855
+/* Crypto two-reg SHA
856
* 31 24 23 22 21 17 16 12 11 10 9 5 4 0
857
* +-----------------+------+-----------+--------+-----+------+------+
858
* | 0 1 0 1 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
859
--
2.7.4
--
2.25.1
New patch
1
From: Fabiano Rosas <farosas@suse.de>
1
2
3
Signed-off-by: Fabiano Rosas <farosas@suse.de>
4
Reviewed-by: Claudio Fontana <cfontana@suse.de>
5
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
6
Message-id: 20221213190537.511-5-farosas@suse.de
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/m_helper.c | 16 ----------------
10
1 file changed, 16 deletions(-)
11
12
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/m_helper.c
15
+++ b/target/arm/m_helper.c
16
@@ -XXX,XX +XXX,XX @@
17
*/
18
19
#include "qemu/osdep.h"
20
-#include "qemu/units.h"
21
-#include "target/arm/idau.h"
22
-#include "trace.h"
23
#include "cpu.h"
24
#include "internals.h"
25
-#include "exec/gdbstub.h"
26
#include "exec/helper-proto.h"
27
-#include "qemu/host-utils.h"
28
#include "qemu/main-loop.h"
29
#include "qemu/bitops.h"
30
-#include "qemu/crc32c.h"
31
-#include "qemu/qemu-print.h"
32
#include "qemu/log.h"
33
#include "exec/exec-all.h"
34
-#include <zlib.h> /* For crc32 */
35
-#include "semihosting/semihost.h"
36
-#include "sysemu/cpus.h"
37
-#include "sysemu/kvm.h"
38
-#include "qemu/range.h"
39
-#include "qapi/qapi-commands-machine-target.h"
40
-#include "qapi/error.h"
41
-#include "qemu/guest-random.h"
42
#ifdef CONFIG_TCG
43
-#include "arm_ldst.h"
44
#include "exec/cpu_ldst.h"
45
#include "semihosting/common-semi.h"
46
#endif
47
--
2.25.1
New patch
1
From: Fabiano Rosas <farosas@suse.de>
1
2
3
Signed-off-by: Fabiano Rosas <farosas@suse.de>
4
Reviewed-by: Claudio Fontana <cfontana@suse.de>
5
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
6
Message-id: 20221213190537.511-6-farosas@suse.de
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/helper.c | 7 -------
10
1 file changed, 7 deletions(-)
11
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
15
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@
17
*/
18
19
#include "qemu/osdep.h"
20
-#include "qemu/units.h"
21
#include "qemu/log.h"
22
#include "trace.h"
23
#include "cpu.h"
24
#include "internals.h"
25
#include "exec/helper-proto.h"
26
-#include "qemu/host-utils.h"
27
#include "qemu/main-loop.h"
28
#include "qemu/timer.h"
29
#include "qemu/bitops.h"
30
@@ -XXX,XX +XXX,XX @@
31
#include "exec/exec-all.h"
32
#include <zlib.h> /* For crc32 */
33
#include "hw/irq.h"
34
-#include "semihosting/semihost.h"
35
-#include "sysemu/cpus.h"
36
#include "sysemu/cpu-timers.h"
37
#include "sysemu/kvm.h"
38
-#include "qemu/range.h"
39
#include "qapi/qapi-commands-machine-target.h"
40
#include "qapi/error.h"
41
#include "qemu/guest-random.h"
42
#ifdef CONFIG_TCG
43
-#include "arm_ldst.h"
44
-#include "exec/cpu_ldst.h"
45
#include "semihosting/common-semi.h"
46
#endif
47
#include "cpregs.h"
48
--
2.25.1
1
The Application Interrupt and Reset Control Register has some changes
1
From: Claudio Fontana <cfontana@suse.de>
2
for v8M:
3
* new bits SYSRESETREQS, BFHFNMINS and PRIS: these all have
4
real state if the security extension is implemented and otherwise
5
are constant
6
* the PRIGROUP field is banked between security states
7
* non-secure code can be blocked from using the SYSRESET bit
8
to reset the system if SYSRESETREQS is set
9
2
10
Implement the new state and the changes to register read and write.
3
Remove some unused headers.
11
For the moment we ignore the effects of the secure PRIGROUP.
12
We will implement the effects of PRIS and BFHFNMINS later.
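As a summary of the SYSRESETREQ gating described above, here is a minimal sketch condensed from the nvic_writel() hunk below (the helper name is invented, not part of the patch): a Non-secure write may only trigger the reset request when the Secure world has left SYSRESETREQS clear.

/* Sketch only; 'secure' is the transaction's attrs.secure value. */
static void aircr_handle_sysresetreq(NVICState *s, ARMCPU *cpu, bool secure)
{
    if (secure ||
        !(cpu->env.v7m.aircr & R_V7M_AIRCR_SYSRESETREQS_MASK)) {
        qemu_irq_pulse(s->sysresetreq);
    }
}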
13
4
5
Signed-off-by: Claudio Fontana <cfontana@suse.de>
6
Acked-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Claudio Fontana <cfontana@suse.de>
8
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
9
Signed-off-by: Fabiano Rosas <farosas@suse.de>
10
Message-id: 20221213190537.511-7-farosas@suse.de
11
[added back some includes that are still needed at this point]
12
Signed-off-by: Fabiano Rosas <farosas@suse.de>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 1505240046-11454-6-git-send-email-peter.maydell@linaro.org
17
---
14
---
18
include/hw/intc/armv7m_nvic.h | 3 ++-
15
target/arm/cpu.c | 1 -
19
target/arm/cpu.h | 12 +++++++++++
16
target/arm/cpu64.c | 6 ------
20
hw/intc/armv7m_nvic.c | 49 +++++++++++++++++++++++++++++++++----------
17
2 files changed, 7 deletions(-)
21
target/arm/cpu.c | 7 +++++++
22
4 files changed, 59 insertions(+), 12 deletions(-)
23
18
24
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/hw/intc/armv7m_nvic.h
27
+++ b/include/hw/intc/armv7m_nvic.h
28
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
29
* Entries in sec_vectors[] for non-banked exception numbers are unused.
30
*/
31
VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
32
- uint32_t prigroup;
33
+ /* The PRIGROUP field in AIRCR is banked */
34
+ uint32_t prigroup[M_REG_NUM_BANKS];
35
36
/* The following fields are all cached state that can be recalculated
37
* from the vectors[] and sec_vectors[] arrays and the prigroup field:
38
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/cpu.h
41
+++ b/target/arm/cpu.h
42
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
43
int exception;
44
uint32_t primask[M_REG_NUM_BANKS];
45
uint32_t faultmask[M_REG_NUM_BANKS];
46
+ uint32_t aircr; /* only holds r/w state if security extn implemented */
47
uint32_t secure; /* Is CPU in Secure state? (not guest visible) */
48
} v7m;
49
50
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKALIGN, 9, 1)
51
FIELD(V7M_CCR, DC, 16, 1)
52
FIELD(V7M_CCR, IC, 17, 1)
53
54
+/* V7M AIRCR bits */
55
+FIELD(V7M_AIRCR, VECTRESET, 0, 1)
56
+FIELD(V7M_AIRCR, VECTCLRACTIVE, 1, 1)
57
+FIELD(V7M_AIRCR, SYSRESETREQ, 2, 1)
58
+FIELD(V7M_AIRCR, SYSRESETREQS, 3, 1)
59
+FIELD(V7M_AIRCR, PRIGROUP, 8, 3)
60
+FIELD(V7M_AIRCR, BFHFNMINS, 13, 1)
61
+FIELD(V7M_AIRCR, PRIS, 14, 1)
62
+FIELD(V7M_AIRCR, ENDIANNESS, 15, 1)
63
+FIELD(V7M_AIRCR, VECTKEY, 16, 16)
64
+
65
/* V7M CFSR bits for MMFSR */
66
FIELD(V7M_CFSR, IACCVIOL, 0, 1)
67
FIELD(V7M_CFSR, DACCVIOL, 1, 1)
68
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/hw/intc/armv7m_nvic.c
71
+++ b/hw/intc/armv7m_nvic.c
72
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
73
*/
74
static inline uint32_t nvic_gprio_mask(NVICState *s)
75
{
76
- return ~0U << (s->prigroup + 1);
77
+ return ~0U << (s->prigroup[M_REG_NS] + 1);
78
}
79
80
/* Recompute vectpending and exception_prio */
81
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
82
return val;
83
case 0xd08: /* Vector Table Offset. */
84
return cpu->env.v7m.vecbase[attrs.secure];
85
- case 0xd0c: /* Application Interrupt/Reset Control. */
86
- return 0xfa050000 | (s->prigroup << 8);
87
+ case 0xd0c: /* Application Interrupt/Reset Control (AIRCR) */
88
+ val = 0xfa050000 | (s->prigroup[attrs.secure] << 8);
89
+ if (attrs.secure) {
90
+ /* s->aircr stores PRIS, BFHFNMINS, SYSRESETREQS */
91
+ val |= cpu->env.v7m.aircr;
92
+ } else {
93
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
94
+ /* BFHFNMINS is R/O from NS; other bits are RAZ/WI. If
95
+ * security isn't supported then BFHFNMINS is RAO (and
96
+ * the bit in env.v7m.aircr is always set).
97
+ */
98
+ val |= cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK;
99
+ }
100
+ }
101
+ return val;
102
case 0xd10: /* System Control. */
103
/* TODO: Implement SLEEPONEXIT. */
104
return 0;
105
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
106
case 0xd08: /* Vector Table Offset. */
107
cpu->env.v7m.vecbase[attrs.secure] = value & 0xffffff80;
108
break;
109
- case 0xd0c: /* Application Interrupt/Reset Control. */
110
- if ((value >> 16) == 0x05fa) {
111
- if (value & 4) {
112
- qemu_irq_pulse(s->sysresetreq);
113
+ case 0xd0c: /* Application Interrupt/Reset Control (AIRCR) */
114
+ if ((value >> R_V7M_AIRCR_VECTKEY_SHIFT) == 0x05fa) {
115
+ if (value & R_V7M_AIRCR_SYSRESETREQ_MASK) {
116
+ if (attrs.secure ||
117
+ !(cpu->env.v7m.aircr & R_V7M_AIRCR_SYSRESETREQS_MASK)) {
118
+ qemu_irq_pulse(s->sysresetreq);
119
+ }
120
}
121
- if (value & 2) {
122
+ if (value & R_V7M_AIRCR_VECTCLRACTIVE_MASK) {
123
qemu_log_mask(LOG_GUEST_ERROR,
124
"Setting VECTCLRACTIVE when not in DEBUG mode "
125
"is UNPREDICTABLE\n");
126
}
127
- if (value & 1) {
128
+ if (value & R_V7M_AIRCR_VECTRESET_MASK) {
129
+ /* NB: this bit is RES0 in v8M */
130
qemu_log_mask(LOG_GUEST_ERROR,
131
"Setting VECTRESET when not in DEBUG mode "
132
"is UNPREDICTABLE\n");
133
}
134
- s->prigroup = extract32(value, 8, 3);
135
+ s->prigroup[attrs.secure] = extract32(value,
136
+ R_V7M_AIRCR_PRIGROUP_SHIFT,
137
+ R_V7M_AIRCR_PRIGROUP_LENGTH);
138
+ if (attrs.secure) {
139
+ /* These bits are only writable by secure */
140
+ cpu->env.v7m.aircr = value &
141
+ (R_V7M_AIRCR_SYSRESETREQS_MASK |
142
+ R_V7M_AIRCR_BFHFNMINS_MASK |
143
+ R_V7M_AIRCR_PRIS_MASK);
144
+ }
145
nvic_irq_update(s);
146
}
147
break;
148
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_nvic_security = {
149
.fields = (VMStateField[]) {
150
VMSTATE_STRUCT_ARRAY(sec_vectors, NVICState, NVIC_INTERNAL_VECTORS, 1,
151
vmstate_VecInfo, VecInfo),
152
+ VMSTATE_UINT32(prigroup[M_REG_S], NVICState),
153
VMSTATE_END_OF_LIST()
154
}
155
};
156
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_nvic = {
157
.fields = (VMStateField[]) {
158
VMSTATE_STRUCT_ARRAY(vectors, NVICState, NVIC_MAX_VECTORS, 1,
159
vmstate_VecInfo, VecInfo),
160
- VMSTATE_UINT32(prigroup, NVICState),
161
+ VMSTATE_UINT32(prigroup[M_REG_NS], NVICState),
162
VMSTATE_END_OF_LIST()
163
},
164
.subsections = (const VMStateDescription*[]) {
165
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
19
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
166
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
167
--- a/target/arm/cpu.c
21
--- a/target/arm/cpu.c
168
+++ b/target/arm/cpu.c
22
+++ b/target/arm/cpu.c
169
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
23
@@ -XXX,XX +XXX,XX @@
170
24
#include "target/arm/idau.h"
171
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
25
#include "qemu/module.h"
172
env->v7m.secure = true;
26
#include "qapi/error.h"
173
+ } else {
27
-#include "qapi/visitor.h"
174
+ /* This bit resets to 0 if security is supported, but 1 if
28
#include "cpu.h"
175
+ * it is not. The bit is not present in v7M, but we set it
29
#ifdef CONFIG_TCG
176
+ * here so we can avoid having to make checks on it conditional
30
#include "hw/core/tcg-cpu-ops.h"
177
+ * on ARM_FEATURE_V8 (we don't let the guest see the bit).
31
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
178
+ */
32
index XXXXXXX..XXXXXXX 100644
179
+ env->v7m.aircr = R_V7M_AIRCR_BFHFNMINS_MASK;
33
--- a/target/arm/cpu64.c
180
}
34
+++ b/target/arm/cpu64.c
181
35
@@ -XXX,XX +XXX,XX @@
182
/* In v7M the reset value of this bit is IMPDEF, but ARM recommends
36
#include "qemu/osdep.h"
37
#include "qapi/error.h"
38
#include "cpu.h"
39
-#ifdef CONFIG_TCG
40
-#include "hw/core/tcg-cpu-ops.h"
41
-#endif /* CONFIG_TCG */
42
#include "qemu/module.h"
43
-#if !defined(CONFIG_USER_ONLY)
44
-#include "hw/loader.h"
45
-#endif
46
#include "sysemu/kvm.h"
47
#include "sysemu/hvf.h"
48
#include "kvm_arm.h"
183
--
49
--
184
2.7.4
50
2.25.1
185
186
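The VECTKEY handling in the AIRCR write path above mirrors what guest code has to do before any AIRCR update takes effect. As a minimal bare-metal sketch (illustration only, not code from this series; 0xE000ED0C is the architectural SCB AIRCR address), programming PRIGROUP looks like:

/* Illustration only: how a guest updates PRIGROUP through AIRCR.
 * The write is ignored unless the VECTKEY field is written as 0x05FA,
 * which is exactly the check nvic_writel() makes above.
 */
#include <stdint.h>

#define SCB_AIRCR (*(volatile uint32_t *)0xE000ED0CUL)

static void set_prigroup(uint32_t prigroup)
{
    uint32_t val = SCB_AIRCR;
    val &= ~((0xFFFFUL << 16) | (0x7UL << 8)); /* clear VECTKEY and PRIGROUP */
    val |= (0x05FAUL << 16) | ((prigroup & 0x7UL) << 8);
    SCB_AIRCR = val;
}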
1
Update the code in nvic_rettobase() so that it checks the
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
sec_vectors[] array as well as the vectors[] array if needed.
3
2
3
The pointed-to MouseTransformInfo structure is accessed read-only.
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221220142520.24094-2-philmd@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1505240046-11454-7-git-send-email-peter.maydell@linaro.org
7
---
9
---
8
hw/intc/armv7m_nvic.c | 5 ++++-
10
include/hw/input/tsc2xxx.h | 4 ++--
9
1 file changed, 4 insertions(+), 1 deletion(-)
11
hw/input/tsc2005.c | 2 +-
12
hw/input/tsc210x.c | 3 +--
13
3 files changed, 4 insertions(+), 5 deletions(-)
10
14
11
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
15
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
12
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/intc/armv7m_nvic.c
17
--- a/include/hw/input/tsc2xxx.h
14
+++ b/hw/intc/armv7m_nvic.c
18
+++ b/include/hw/input/tsc2xxx.h
15
@@ -XXX,XX +XXX,XX @@ static int nvic_pending_prio(NVICState *s)
19
@@ -XXX,XX +XXX,XX @@ uWireSlave *tsc2102_init(qemu_irq pint);
16
static bool nvic_rettobase(NVICState *s)
20
uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
21
I2SCodec *tsc210x_codec(uWireSlave *chip);
22
uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
23
-void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
24
+void tsc210x_set_transform(uWireSlave *chip, const MouseTransformInfo *info);
25
void tsc210x_key_event(uWireSlave *chip, int key, int down);
26
27
/* tsc2005.c */
28
void *tsc2005_init(qemu_irq pintdav);
29
uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
30
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
31
+void tsc2005_set_transform(void *opaque, const MouseTransformInfo *info);
32
33
#endif
34
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
35
index XXXXXXX..XXXXXXX 100644
36
--- a/hw/input/tsc2005.c
37
+++ b/hw/input/tsc2005.c
38
@@ -XXX,XX +XXX,XX @@ void *tsc2005_init(qemu_irq pintdav)
39
* from the touchscreen. Assuming 12-bit precision was used during
40
* tslib calibration.
41
*/
42
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info)
43
+void tsc2005_set_transform(void *opaque, const MouseTransformInfo *info)
17
{
44
{
18
int irq, nhand = 0;
45
TSC2005State *s = (TSC2005State *) opaque;
19
+ bool check_sec = arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY);
46
20
47
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
21
for (irq = ARMV7M_EXCP_RESET; irq < s->num_irq; irq++) {
48
index XXXXXXX..XXXXXXX 100644
22
- if (s->vectors[irq].active) {
49
--- a/hw/input/tsc210x.c
23
+ if (s->vectors[irq].active ||
50
+++ b/hw/input/tsc210x.c
24
+ (check_sec && irq < NVIC_INTERNAL_VECTORS &&
51
@@ -XXX,XX +XXX,XX @@ I2SCodec *tsc210x_codec(uWireSlave *chip)
25
+ s->sec_vectors[irq].active)) {
52
* from the touchscreen. Assuming 12-bit precision was used during
26
nhand++;
53
* tslib calibration.
27
if (nhand == 2) {
54
*/
28
return 0;
55
-void tsc210x_set_transform(uWireSlave *chip,
56
- MouseTransformInfo *info)
57
+void tsc210x_set_transform(uWireSlave *chip, const MouseTransformInfo *info)
58
{
59
TSC210xState *s = (TSC210xState *) chip->opaque;
60
#if 0
29
--
61
--
30
2.7.4
62
2.25.1
31
63
32
64
1
With banked exceptions, just the exception number in
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
s->vectpending is no longer sufficient to uniquely identify
3
the pending exception. Add a vectpending_is_s_banked bool
4
which is true if the exception is using the sec_vectors[]
5
array.
6
2
3
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20221220142520.24094-3-philmd@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 1505240046-11454-4-git-send-email-peter.maydell@linaro.org
9
---
7
---
10
include/hw/intc/armv7m_nvic.h | 11 +++++++++--
8
hw/arm/nseries.c | 18 +++++++++---------
11
hw/intc/armv7m_nvic.c | 1 +
9
1 file changed, 9 insertions(+), 9 deletions(-)
12
2 files changed, 10 insertions(+), 2 deletions(-)
13
10
14
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
11
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
15
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/intc/armv7m_nvic.h
13
--- a/hw/arm/nseries.c
17
+++ b/include/hw/intc/armv7m_nvic.h
14
+++ b/hw/arm/nseries.c
18
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
15
@@ -XXX,XX +XXX,XX @@ static void n8x0_i2c_setup(struct n800_s *s)
19
VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
20
uint32_t prigroup;
21
22
- /* vectpending and exception_prio are both cached state that can
23
- * be recalculated from the vectors[] array and the prigroup field.
24
+ /* The following fields are all cached state that can be recalculated
25
+ * from the vectors[] and sec_vectors[] arrays and the prigroup field:
26
+ * - vectpending
27
+ * - vectpending_is_secure
28
+ * - exception_prio
29
*/
30
unsigned int vectpending; /* highest prio pending enabled exception */
31
+ /* true if vectpending is a banked secure exception, ie it is in
32
+ * sec_vectors[] rather than vectors[]
33
+ */
34
+ bool vectpending_is_s_banked;
35
int exception_prio; /* group prio of the highest prio active exception */
36
37
MemoryRegion sysregmem;
38
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/intc/armv7m_nvic.c
41
+++ b/hw/intc/armv7m_nvic.c
42
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
43
44
s->exception_prio = NVIC_NOEXC_PRIO;
45
s->vectpending = 0;
46
+ s->vectpending_is_s_banked = false;
47
}
16
}
48
17
49
static void nvic_systick_trigger(void *opaque, int n, int level)
18
/* Touchscreen and keypad controller */
19
-static MouseTransformInfo n800_pointercal = {
20
+static const MouseTransformInfo n800_pointercal = {
21
.x = 800,
22
.y = 480,
23
.a = { 14560, -68, -3455208, -39, -9621, 35152972, 65536 },
24
};
25
26
-static MouseTransformInfo n810_pointercal = {
27
+static const MouseTransformInfo n810_pointercal = {
28
.x = 800,
29
.y = 480,
30
.a = { 15041, 148, -4731056, 171, -10238, 35933380, 65536 },
31
@@ -XXX,XX +XXX,XX @@ static void n810_key_event(void *opaque, int keycode)
32
33
#define M    0
34
35
-static int n810_keys[0x80] = {
36
+static const int n810_keys[0x80] = {
37
[0x01] = 16,    /* Q */
38
[0x02] = 37,    /* K */
39
[0x03] = 24,    /* O */
40
@@ -XXX,XX +XXX,XX @@ static void n8x0_usb_setup(struct n800_s *s)
41
/* Setup done before the main bootloader starts by some early setup code
42
* - used when we want to run the main bootloader in emulation. This
43
* isn't documented. */
44
-static uint32_t n800_pinout[104] = {
45
+static const uint32_t n800_pinout[104] = {
46
0x080f00d8, 0x00d40808, 0x03080808, 0x080800d0,
47
0x00dc0808, 0x0b0f0f00, 0x080800b4, 0x00c00808,
48
0x08080808, 0x180800c4, 0x00b80000, 0x08080808,
49
@@ -XXX,XX +XXX,XX @@ static void n8x0_boot_init(void *opaque)
50
#define OMAP_TAG_CBUS        0x4e03
51
#define OMAP_TAG_EM_ASIC_BB5    0x4e04
52
53
-static struct omap_gpiosw_info_s {
54
+static const struct omap_gpiosw_info_s {
55
const char *name;
56
int line;
57
int type;
58
@@ -XXX,XX +XXX,XX @@ static struct omap_gpiosw_info_s {
59
{ NULL }
60
};
61
62
-static struct omap_partition_info_s {
63
+static const struct omap_partition_info_s {
64
uint32_t offset;
65
uint32_t size;
66
int mask;
67
@@ -XXX,XX +XXX,XX @@ static struct omap_partition_info_s {
68
{ 0, 0, 0, NULL }
69
};
70
71
-static uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
72
+static const uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
73
74
static int n8x0_atag_setup(void *p, int model)
75
{
76
uint8_t *b;
77
uint16_t *w;
78
uint32_t *l;
79
- struct omap_gpiosw_info_s *gpiosw;
80
- struct omap_partition_info_s *partition;
81
+ const struct omap_gpiosw_info_s *gpiosw;
82
+ const struct omap_partition_info_s *partition;
83
const char *tag;
84
85
w = p;
50
--
86
--
51
2.7.4
87
2.25.1
52
88
53
89
1
For the v8M security extension, some exceptions must be banked
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
between security states. Add the new sec_vectors[] array which holds
3
the state for the banked exceptions and migrate it if the
4
CPU the NVIC is attached to implements the security extension.
5
2
3
Silent when compiling with -Wextra:
4
5
../hw/arm/nseries.c:1081:12: warning: missing field 'line' initializer [-Wmissing-field-initializers]
6
{ NULL }
7
^
8
9
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
10
Message-id: 20221220142520.24094-4-philmd@linaro.org
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
---
13
---
9
include/hw/intc/armv7m_nvic.h | 14 ++++++++++++
14
hw/arm/nseries.c | 10 ++++------
10
hw/intc/armv7m_nvic.c | 53 ++++++++++++++++++++++++++++++++++++++++++-
15
1 file changed, 4 insertions(+), 6 deletions(-)
11
2 files changed, 66 insertions(+), 1 deletion(-)
12
16
13
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
17
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/intc/armv7m_nvic.h
19
--- a/hw/arm/nseries.c
16
+++ b/include/hw/intc/armv7m_nvic.h
20
+++ b/hw/arm/nseries.c
17
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static const struct omap_gpiosw_info_s {
18
22
"headphone", N8X0_HEADPHONE_GPIO,
19
/* Highest permitted number of exceptions (architectural limit) */
23
OMAP_GPIOSW_TYPE_CONNECTION | OMAP_GPIOSW_INVERTED,
20
#define NVIC_MAX_VECTORS 512
24
},
21
+/* Number of internal exceptions */
25
- { NULL }
22
+#define NVIC_INTERNAL_VECTORS 16
26
+ { /* end of list */ }
23
27
}, n810_gpiosw_info[] = {
24
typedef struct VecInfo {
28
{
25
/* Exception priorities can range from -3 to 255; only the unmodifiable
29
"gps_reset", N810_GPS_RESET_GPIO,
26
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
30
@@ -XXX,XX +XXX,XX @@ static const struct omap_gpiosw_info_s {
27
ARMCPU *cpu;
31
"slide", N810_SLIDE_GPIO,
28
32
OMAP_GPIOSW_TYPE_COVER | OMAP_GPIOSW_INVERTED,
29
VecInfo vectors[NVIC_MAX_VECTORS];
33
},
30
+ /* If the v8M security extension is implemented, some of the internal
34
- { NULL }
31
+ * exceptions are banked between security states (ie there exists both
35
+ { /* end of list */ }
32
+ * a Secure and a NonSecure version of the exception and its state):
33
+ * HardFault, MemManage, UsageFault, SVCall, PendSV, SysTick (R_PJHV)
34
+ * The rest (including all the external exceptions) are not banked, though
35
+ * they may be configurable to target either Secure or NonSecure state.
36
+ * We store the secure exception state in sec_vectors[] for the banked
37
+ * exceptions, and otherwise use only vectors[] (including for exceptions
38
+ * like SecureFault that unconditionally target Secure state).
39
+ * Entries in sec_vectors[] for non-banked exception numbers are unused.
40
+ */
41
+ VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
42
uint32_t prigroup;
43
44
/* vectpending and exception_prio are both cached state that can
45
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/intc/armv7m_nvic.c
48
+++ b/hw/intc/armv7m_nvic.c
49
@@ -XXX,XX +XXX,XX @@
50
* For historical reasons QEMU tends to use "interrupt" and
51
* "exception" more or less interchangeably.
52
*/
53
-#define NVIC_FIRST_IRQ 16
54
+#define NVIC_FIRST_IRQ NVIC_INTERNAL_VECTORS
55
#define NVIC_MAX_IRQ (NVIC_MAX_VECTORS - NVIC_FIRST_IRQ)
56
57
/* Effective running priority of the CPU when no exception is active
58
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_VecInfo = {
59
}
60
};
36
};
61
37
62
+static bool nvic_security_needed(void *opaque)
38
static const struct omap_partition_info_s {
63
+{
39
@@ -XXX,XX +XXX,XX @@ static const struct omap_partition_info_s {
64
+ NVICState *s = opaque;
40
{ 0x00080000, 0x00200000, 0x0, "kernel" },
65
+
41
{ 0x00280000, 0x00200000, 0x3, "initfs" },
66
+ return arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY);
42
{ 0x00480000, 0x0fb80000, 0x3, "rootfs" },
67
+}
43
-
68
+
44
- { 0, 0, 0, NULL }
69
+static int nvic_security_post_load(void *opaque, int version_id)
45
+ { /* end of list */ }
70
+{
46
}, n810_part_info[] = {
71
+ NVICState *s = opaque;
47
{ 0x00000000, 0x00020000, 0x3, "bootloader" },
72
+ int i;
48
{ 0x00020000, 0x00060000, 0x0, "config" },
73
+
49
{ 0x00080000, 0x00220000, 0x0, "kernel" },
74
+ /* Check for out of range priority settings */
50
{ 0x002a0000, 0x00400000, 0x0, "initfs" },
75
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].prio != -1) {
51
{ 0x006a0000, 0x0f960000, 0x0, "rootfs" },
76
+ return 1;
52
-
77
+ }
53
- { 0, 0, 0, NULL }
78
+ for (i = ARMV7M_EXCP_MEM; i < ARRAY_SIZE(s->sec_vectors); i++) {
54
+ { /* end of list */ }
79
+ if (s->sec_vectors[i].prio & ~0xff) {
80
+ return 1;
81
+ }
82
+ }
83
+ return 0;
84
+}
85
+
86
+static const VMStateDescription vmstate_nvic_security = {
87
+ .name = "nvic/m-security",
88
+ .version_id = 1,
89
+ .minimum_version_id = 1,
90
+ .needed = nvic_security_needed,
91
+ .post_load = &nvic_security_post_load,
92
+ .fields = (VMStateField[]) {
93
+ VMSTATE_STRUCT_ARRAY(sec_vectors, NVICState, NVIC_INTERNAL_VECTORS, 1,
94
+ vmstate_VecInfo, VecInfo),
95
+ VMSTATE_END_OF_LIST()
96
+ }
97
+};
98
+
99
static const VMStateDescription vmstate_nvic = {
100
.name = "armv7m_nvic",
101
.version_id = 4,
102
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_nvic = {
103
vmstate_VecInfo, VecInfo),
104
VMSTATE_UINT32(prigroup, NVICState),
105
VMSTATE_END_OF_LIST()
106
+ },
107
+ .subsections = (const VMStateDescription*[]) {
108
+ &vmstate_nvic_security,
109
+ NULL
110
}
111
};
55
};
112
56
113
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
57
static const uint8_t n8x0_bd_addr[6] = { N8X0_BD_ADDR };
114
s->vectors[ARMV7M_EXCP_NMI].prio = -2;
115
s->vectors[ARMV7M_EXCP_HARD].prio = -1;
116
117
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
118
+ s->sec_vectors[ARMV7M_EXCP_HARD].enabled = 1;
119
+ s->sec_vectors[ARMV7M_EXCP_SVC].enabled = 1;
120
+ s->sec_vectors[ARMV7M_EXCP_PENDSV].enabled = 1;
121
+ s->sec_vectors[ARMV7M_EXCP_SYSTICK].enabled = 1;
122
+
123
+ /* AIRCR.BFHFNMINS resets to 0 so Secure HF is priority -1 (R_CMTC) */
124
+ s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
125
+ }
126
+
127
/* Strictly speaking the reset handler should be enabled.
128
* However, we don't simulate soft resets through the NVIC,
129
* and the reset vector should never be pended.
130
--
58
--
131
2.7.4
59
2.25.1
132
60
133
61
1
1
From: Zhuojia Shen <chaosdefinition@hotmail.com>
2
3
In CPUID registers exposed to userspace, some registers were missing
4
and some fields were not exposed. This patch aligns exposed ID
5
registers and their fields with what the upstream kernel currently
6
exposes.
7
8
Specifically, the following new ID registers/fields are exposed to
9
userspace:
10
11
ID_AA64PFR1_EL1.BT: bits 3-0
12
ID_AA64PFR1_EL1.MTE: bits 11-8
13
ID_AA64PFR1_EL1.SME: bits 27-24
14
15
ID_AA64ZFR0_EL1.SVEver: bits 3-0
16
ID_AA64ZFR0_EL1.AES: bits 7-4
17
ID_AA64ZFR0_EL1.BitPerm: bits 19-16
18
ID_AA64ZFR0_EL1.BF16: bits 23-20
19
ID_AA64ZFR0_EL1.SHA3: bits 35-32
20
ID_AA64ZFR0_EL1.SM4: bits 43-40
21
ID_AA64ZFR0_EL1.I8MM: bits 47-44
22
ID_AA64ZFR0_EL1.F32MM: bits 55-52
23
ID_AA64ZFR0_EL1.F64MM: bits 59-56
24
25
ID_AA64SMFR0_EL1.F32F32: bit 32
26
ID_AA64SMFR0_EL1.B16F32: bit 34
27
ID_AA64SMFR0_EL1.F16F32: bit 35
28
ID_AA64SMFR0_EL1.I8I32: bits 39-36
29
ID_AA64SMFR0_EL1.F64F64: bit 48
30
ID_AA64SMFR0_EL1.I16I64: bits 55-52
31
ID_AA64SMFR0_EL1.FA64: bit 63
32
33
ID_AA64MMFR0_EL1.ECV: bits 63-60
34
35
ID_AA64MMFR1_EL1.AFP: bits 47-44
36
37
ID_AA64MMFR2_EL1.AT: bits 35-32
38
39
ID_AA64ISAR0_EL1.RNDR: bits 63-60
40
41
ID_AA64ISAR1_EL1.FRINTTS: bits 35-32
42
ID_AA64ISAR1_EL1.BF16: bits 47-44
43
ID_AA64ISAR1_EL1.DGH: bits 51-48
44
ID_AA64ISAR1_EL1.I8MM: bits 55-52
45
46
ID_AA64ISAR2_EL1.WFxT: bits 3-0
47
ID_AA64ISAR2_EL1.RPRES: bits 7-4
48
ID_AA64ISAR2_EL1.GPA3: bits 11-8
49
ID_AA64ISAR2_EL1.APA3: bits 15-12
50
51
The code is also refactored to use symbolic names for ID register fields
52
for better readability and maintainability.
53
54
The test case in tests/tcg/aarch64/sysregs.c is also updated to match
55
the intended behavior.
56
57
Signed-off-by: Zhuojia Shen <chaosdefinition@hotmail.com>
58
Message-id: DS7PR12MB6309FB585E10772928F14271ACE79@DS7PR12MB6309.namprd12.prod.outlook.com
59
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
60
[PMM: use Sn_n_Cn_Cn_n syntax to work with older assemblers
61
that don't recognize id_aa64isar2_el1 and id_aa64mmfr2_el1]
62
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
63
---
64
target/arm/helper.c | 96 +++++++++++++++++++++++++------
65
tests/tcg/aarch64/sysregs.c | 24 ++++++--
66
tests/tcg/aarch64/Makefile.target | 7 ++-
67
3 files changed, 103 insertions(+), 24 deletions(-)
68
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/helper.c
72
+++ b/target/arm/helper.c
73
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
74
#ifdef CONFIG_USER_ONLY
75
static const ARMCPRegUserSpaceInfo v8_user_idregs[] = {
76
{ .name = "ID_AA64PFR0_EL1",
77
- .exported_bits = 0x000f000f00ff0000,
78
- .fixed_bits = 0x0000000000000011 },
79
+ .exported_bits = R_ID_AA64PFR0_FP_MASK |
80
+ R_ID_AA64PFR0_ADVSIMD_MASK |
81
+ R_ID_AA64PFR0_SVE_MASK |
82
+ R_ID_AA64PFR0_DIT_MASK,
83
+ .fixed_bits = (0x1u << R_ID_AA64PFR0_EL0_SHIFT) |
84
+ (0x1u << R_ID_AA64PFR0_EL1_SHIFT) },
85
{ .name = "ID_AA64PFR1_EL1",
86
- .exported_bits = 0x00000000000000f0 },
87
+ .exported_bits = R_ID_AA64PFR1_BT_MASK |
88
+ R_ID_AA64PFR1_SSBS_MASK |
89
+ R_ID_AA64PFR1_MTE_MASK |
90
+ R_ID_AA64PFR1_SME_MASK },
91
{ .name = "ID_AA64PFR*_EL1_RESERVED",
92
- .is_glob = true },
93
- { .name = "ID_AA64ZFR0_EL1" },
94
+ .is_glob = true },
95
+ { .name = "ID_AA64ZFR0_EL1",
96
+ .exported_bits = R_ID_AA64ZFR0_SVEVER_MASK |
97
+ R_ID_AA64ZFR0_AES_MASK |
98
+ R_ID_AA64ZFR0_BITPERM_MASK |
99
+ R_ID_AA64ZFR0_BFLOAT16_MASK |
100
+ R_ID_AA64ZFR0_SHA3_MASK |
101
+ R_ID_AA64ZFR0_SM4_MASK |
102
+ R_ID_AA64ZFR0_I8MM_MASK |
103
+ R_ID_AA64ZFR0_F32MM_MASK |
104
+ R_ID_AA64ZFR0_F64MM_MASK },
105
+ { .name = "ID_AA64SMFR0_EL1",
106
+ .exported_bits = R_ID_AA64SMFR0_F32F32_MASK |
107
+ R_ID_AA64SMFR0_B16F32_MASK |
108
+ R_ID_AA64SMFR0_F16F32_MASK |
109
+ R_ID_AA64SMFR0_I8I32_MASK |
110
+ R_ID_AA64SMFR0_F64F64_MASK |
111
+ R_ID_AA64SMFR0_I16I64_MASK |
112
+ R_ID_AA64SMFR0_FA64_MASK },
113
{ .name = "ID_AA64MMFR0_EL1",
114
- .fixed_bits = 0x00000000ff000000 },
115
- { .name = "ID_AA64MMFR1_EL1" },
116
+ .exported_bits = R_ID_AA64MMFR0_ECV_MASK,
117
+ .fixed_bits = (0xfu << R_ID_AA64MMFR0_TGRAN64_SHIFT) |
118
+ (0xfu << R_ID_AA64MMFR0_TGRAN4_SHIFT) },
119
+ { .name = "ID_AA64MMFR1_EL1",
120
+ .exported_bits = R_ID_AA64MMFR1_AFP_MASK },
121
+ { .name = "ID_AA64MMFR2_EL1",
122
+ .exported_bits = R_ID_AA64MMFR2_AT_MASK },
123
{ .name = "ID_AA64MMFR*_EL1_RESERVED",
124
- .is_glob = true },
125
+ .is_glob = true },
126
{ .name = "ID_AA64DFR0_EL1",
127
- .fixed_bits = 0x0000000000000006 },
128
- { .name = "ID_AA64DFR1_EL1" },
129
+ .fixed_bits = (0x6u << R_ID_AA64DFR0_DEBUGVER_SHIFT) },
130
+ { .name = "ID_AA64DFR1_EL1" },
131
{ .name = "ID_AA64DFR*_EL1_RESERVED",
132
- .is_glob = true },
133
+ .is_glob = true },
134
{ .name = "ID_AA64AFR*",
135
- .is_glob = true },
136
+ .is_glob = true },
137
{ .name = "ID_AA64ISAR0_EL1",
138
- .exported_bits = 0x00fffffff0fffff0 },
139
+ .exported_bits = R_ID_AA64ISAR0_AES_MASK |
140
+ R_ID_AA64ISAR0_SHA1_MASK |
141
+ R_ID_AA64ISAR0_SHA2_MASK |
142
+ R_ID_AA64ISAR0_CRC32_MASK |
143
+ R_ID_AA64ISAR0_ATOMIC_MASK |
144
+ R_ID_AA64ISAR0_RDM_MASK |
145
+ R_ID_AA64ISAR0_SHA3_MASK |
146
+ R_ID_AA64ISAR0_SM3_MASK |
147
+ R_ID_AA64ISAR0_SM4_MASK |
148
+ R_ID_AA64ISAR0_DP_MASK |
149
+ R_ID_AA64ISAR0_FHM_MASK |
150
+ R_ID_AA64ISAR0_TS_MASK |
151
+ R_ID_AA64ISAR0_RNDR_MASK },
152
{ .name = "ID_AA64ISAR1_EL1",
153
- .exported_bits = 0x000000f0ffffffff },
154
+ .exported_bits = R_ID_AA64ISAR1_DPB_MASK |
155
+ R_ID_AA64ISAR1_APA_MASK |
156
+ R_ID_AA64ISAR1_API_MASK |
157
+ R_ID_AA64ISAR1_JSCVT_MASK |
158
+ R_ID_AA64ISAR1_FCMA_MASK |
159
+ R_ID_AA64ISAR1_LRCPC_MASK |
160
+ R_ID_AA64ISAR1_GPA_MASK |
161
+ R_ID_AA64ISAR1_GPI_MASK |
162
+ R_ID_AA64ISAR1_FRINTTS_MASK |
163
+ R_ID_AA64ISAR1_SB_MASK |
164
+ R_ID_AA64ISAR1_BF16_MASK |
165
+ R_ID_AA64ISAR1_DGH_MASK |
166
+ R_ID_AA64ISAR1_I8MM_MASK },
167
+ { .name = "ID_AA64ISAR2_EL1",
168
+ .exported_bits = R_ID_AA64ISAR2_WFXT_MASK |
169
+ R_ID_AA64ISAR2_RPRES_MASK |
170
+ R_ID_AA64ISAR2_GPA3_MASK |
171
+ R_ID_AA64ISAR2_APA3_MASK },
172
{ .name = "ID_AA64ISAR*_EL1_RESERVED",
173
- .is_glob = true },
174
+ .is_glob = true },
175
};
176
modify_arm_cp_regs(v8_idregs, v8_user_idregs);
177
#endif
178
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
179
#ifdef CONFIG_USER_ONLY
180
static const ARMCPRegUserSpaceInfo id_v8_user_midr_cp_reginfo[] = {
181
{ .name = "MIDR_EL1",
182
- .exported_bits = 0x00000000ffffffff },
183
- { .name = "REVIDR_EL1" },
184
+ .exported_bits = R_MIDR_EL1_REVISION_MASK |
185
+ R_MIDR_EL1_PARTNUM_MASK |
186
+ R_MIDR_EL1_ARCHITECTURE_MASK |
187
+ R_MIDR_EL1_VARIANT_MASK |
188
+ R_MIDR_EL1_IMPLEMENTER_MASK },
189
+ { .name = "REVIDR_EL1" },
190
};
191
modify_arm_cp_regs(id_v8_midr_cp_reginfo, id_v8_user_midr_cp_reginfo);
192
#endif
193
diff --git a/tests/tcg/aarch64/sysregs.c b/tests/tcg/aarch64/sysregs.c
194
index XXXXXXX..XXXXXXX 100644
195
--- a/tests/tcg/aarch64/sysregs.c
196
+++ b/tests/tcg/aarch64/sysregs.c
197
@@ -XXX,XX +XXX,XX @@
198
#define HWCAP_CPUID (1 << 11)
199
#endif
200
201
+/*
202
+ * Older assemblers don't recognize newer system register names,
203
+ * but we can still access them by the Sn_n_Cn_Cn_n syntax.
204
+ */
205
+#define SYS_ID_AA64ISAR2_EL1 S3_0_C0_C6_2
206
+#define SYS_ID_AA64MMFR2_EL1 S3_0_C0_C7_2
207
+
208
int failed_bit_count;
209
210
/* Read and print system register `id' value */
211
@@ -XXX,XX +XXX,XX @@ int main(void)
212
* minimum valid fields - for the purposes of this check allowed
213
* to have non-zero values.
214
*/
215
- get_cpu_reg_check_mask(id_aa64isar0_el1, _m(00ff,ffff,f0ff,fff0));
216
- get_cpu_reg_check_mask(id_aa64isar1_el1, _m(0000,00f0,ffff,ffff));
217
+ get_cpu_reg_check_mask(id_aa64isar0_el1, _m(f0ff,ffff,f0ff,fff0));
218
+ get_cpu_reg_check_mask(id_aa64isar1_el1, _m(00ff,f0ff,ffff,ffff));
219
+ get_cpu_reg_check_mask(SYS_ID_AA64ISAR2_EL1, _m(0000,0000,0000,ffff));
220
/* TGran4 & TGran64 as pegged to -1 */
221
- get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(0000,0000,ff00,0000));
222
- get_cpu_reg_check_zero(id_aa64mmfr1_el1);
223
+ get_cpu_reg_check_mask(id_aa64mmfr0_el1, _m(f000,0000,ff00,0000));
224
+ get_cpu_reg_check_mask(id_aa64mmfr1_el1, _m(0000,f000,0000,0000));
225
+ get_cpu_reg_check_mask(SYS_ID_AA64MMFR2_EL1, _m(0000,000f,0000,0000));
226
/* EL1/EL0 reported as AA64 only */
227
get_cpu_reg_check_mask(id_aa64pfr0_el1, _m(000f,000f,00ff,0011));
228
- get_cpu_reg_check_mask(id_aa64pfr1_el1, _m(0000,0000,0000,00f0));
229
+ get_cpu_reg_check_mask(id_aa64pfr1_el1, _m(0000,0000,0f00,0fff));
230
/* all hidden, DebugVer fixed to 0x6 (ARMv8 debug architecture) */
231
get_cpu_reg_check_mask(id_aa64dfr0_el1, _m(0000,0000,0000,0006));
232
get_cpu_reg_check_zero(id_aa64dfr1_el1);
233
- get_cpu_reg_check_zero(id_aa64zfr0_el1);
234
+ get_cpu_reg_check_mask(id_aa64zfr0_el1, _m(0ff0,ff0f,00ff,00ff));
235
+#ifdef HAS_ARMV9_SME
236
+ get_cpu_reg_check_mask(id_aa64smfr0_el1, _m(80f1,00fd,0000,0000));
237
+#endif
238
239
get_cpu_reg_check_zero(id_aa64afr0_el1);
240
get_cpu_reg_check_zero(id_aa64afr1_el1);
241
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
242
index XXXXXXX..XXXXXXX 100644
243
--- a/tests/tcg/aarch64/Makefile.target
244
+++ b/tests/tcg/aarch64/Makefile.target
245
@@ -XXX,XX +XXX,XX @@ config-cc.mak: Makefile
246
     $(call cc-option,-march=armv8.1-a+sve2, CROSS_CC_HAS_SVE2); \
247
     $(call cc-option,-march=armv8.3-a, CROSS_CC_HAS_ARMV8_3); \
248
     $(call cc-option,-mbranch-protection=standard, CROSS_CC_HAS_ARMV8_BTI); \
249
-     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE)) 3> config-cc.mak
250
+     $(call cc-option,-march=armv8.5-a+memtag, CROSS_CC_HAS_ARMV8_MTE); \
251
+     $(call cc-option,-march=armv9-a+sme, CROSS_CC_HAS_ARMV9_SME)) 3> config-cc.mak
252
-include config-cc.mak
253
254
# Pauth Tests
255
@@ -XXX,XX +XXX,XX @@ endif
256
ifneq ($(CROSS_CC_HAS_SVE),)
257
# System Registers Tests
258
AARCH64_TESTS += sysregs
259
+ifneq ($(CROSS_CC_HAS_ARMV9_SME),)
260
+sysregs: CFLAGS+=-march=armv9-a+sme -DHAS_ARMV9_SME
261
+else
262
sysregs: CFLAGS+=-march=armv8.1-a+sve
263
+endif
264
265
# SVE ioctl test
266
AARCH64_TESTS += sve-ioctls
267
--
268
2.25.1
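For context on how this path is exercised, a userspace program running on a kernel that advertises HWCAP_CPUID can read one of these ID registers directly and will only see the bits exported above; a minimal sketch (illustration only, same approach as the sysregs.c test in this patch) is:

/* Illustration only: reading one of these ID registers from userspace.
 * The MRS traps, and the kernel (or QEMU's user-mode emulation) returns
 * the register value with only the exported bits visible.
 */
#include <stdint.h>

static inline uint64_t read_id_aa64isar1(void)
{
    uint64_t val;
    asm volatile("mrs %0, id_aa64isar1_el1" : "=r" (val));
    return val;
}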
1
Don't use the old_mmio field in the memory region ops struct.
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
2
3
This function is not used anywhere outside this file,
4
so we can make it "static void".
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Message-id: 20221216214924.4711-2-philmd@linaro.org
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-4-git-send-email-peter.maydell@linaro.org
6
---
11
---
7
hw/timer/omap_synctimer.c | 35 +++++++++++++++++++++--------------
12
include/hw/arm/smmu-common.h | 3 ---
8
1 file changed, 21 insertions(+), 14 deletions(-)
13
hw/arm/smmu-common.c | 2 +-
14
2 files changed, 1 insertion(+), 4 deletions(-)
9
15
10
diff --git a/hw/timer/omap_synctimer.c b/hw/timer/omap_synctimer.c
16
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
11
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/timer/omap_synctimer.c
18
--- a/include/hw/arm/smmu-common.h
13
+++ b/hw/timer/omap_synctimer.c
19
+++ b/include/hw/arm/smmu-common.h
14
@@ -XXX,XX +XXX,XX @@ static uint32_t omap_synctimer_readh(void *opaque, hwaddr addr)
20
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
15
}
21
/* Unmap the range of all the notifiers registered to any IOMMU mr */
22
void smmu_inv_notifiers_all(SMMUState *s);
23
24
-/* Unmap the range of all the notifiers registered to @mr */
25
-void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr);
26
-
27
#endif /* HW_ARM_SMMU_COMMON_H */
28
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/hw/arm/smmu-common.c
31
+++ b/hw/arm/smmu-common.c
32
@@ -XXX,XX +XXX,XX @@ static void smmu_unmap_notifier_range(IOMMUNotifier *n)
16
}
33
}
17
34
18
-static void omap_synctimer_write(void *opaque, hwaddr addr,
35
/* Unmap all notifiers attached to @mr */
19
- uint32_t value)
36
-inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
20
+static uint64_t omap_synctimer_readfn(void *opaque, hwaddr addr,
37
+static void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
21
+ unsigned size)
22
+{
23
+ switch (size) {
24
+ case 1:
25
+ return omap_badwidth_read32(opaque, addr);
26
+ case 2:
27
+ return omap_synctimer_readh(opaque, addr);
28
+ case 4:
29
+ return omap_synctimer_readw(opaque, addr);
30
+ default:
31
+ g_assert_not_reached();
32
+ }
33
+}
34
+
35
+static void omap_synctimer_writefn(void *opaque, hwaddr addr,
36
+ uint64_t value, unsigned size)
37
{
38
{
38
OMAP_BAD_REG(addr);
39
IOMMUNotifier *n;
39
}
40
41
static const MemoryRegionOps omap_synctimer_ops = {
42
- .old_mmio = {
43
- .read = {
44
- omap_badwidth_read32,
45
- omap_synctimer_readh,
46
- omap_synctimer_readw,
47
- },
48
- .write = {
49
- omap_badwidth_write32,
50
- omap_synctimer_write,
51
- omap_synctimer_write,
52
- },
53
- },
54
+ .read = omap_synctimer_readfn,
55
+ .write = omap_synctimer_writefn,
56
+ .valid.min_access_size = 1,
57
+ .valid.max_access_size = 4,
58
.endianness = DEVICE_NATIVE_ENDIAN,
59
};
60
40
61
--
41
--
62
2.7.4
42
2.25.1
63
43
64
44
1
Make the set_prio() function take a bool indicating
1
From: Philippe Mathieu-Daudé <philmd@linaro.org>
2
whether to pend the secure or non-secure version of a banked
3
interrupt, and use this to implement the correct banking
4
semantics for the SHPR registers.
5
2
3
When using Clang ("Apple clang version 14.0.0 (clang-1400.0.29.202)")
4
and building with -Wall we get:
5
6
hw/arm/smmu-common.c:173:33: warning: static function 'smmu_hash_remove_by_asid_iova' is used in an inline function with external linkage [-Wstatic-in-inline]
7
hw/arm/smmu-common.h:170:1: note: use 'static' to give inline function 'smmu_iotlb_inv_iova' internal linkage
8
void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
9
^
10
static
11
12
Nothing in our code base requires or uses inline functions with external
linkage. Some places use internal inlining in the hot path. These
14
two functions are certainly not in any hot path and don't justify
15
any inlining, so these are likely oversights rather than intentional.
16
17
Reported-by: Stefan Weil <sw@weilnetz.de>
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Reviewed-by: Eric Auger <eric.auger@redhat.com>
22
Message-id: 20221216214924.4711-3-philmd@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 1505240046-11454-11-git-send-email-peter.maydell@linaro.org
9
---
24
---
10
hw/intc/armv7m_nvic.c | 96 ++++++++++++++++++++++++++++++++++++++++++++++-----
25
hw/arm/smmu-common.c | 13 ++++++-------
11
hw/intc/trace-events | 2 +-
26
1 file changed, 6 insertions(+), 7 deletions(-)
12
2 files changed, 88 insertions(+), 10 deletions(-)
13
27
14
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
28
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
15
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/armv7m_nvic.c
30
--- a/hw/arm/smmu-common.c
17
+++ b/hw/intc/armv7m_nvic.c
31
+++ b/hw/arm/smmu-common.c
18
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_raw_execution_priority(void *opaque)
32
@@ -XXX,XX +XXX,XX @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *new)
19
return s->exception_prio;
33
g_hash_table_insert(bs->iotlb, key, new);
20
}
34
}
21
35
22
-/* caller must call nvic_irq_update() after this */
36
-inline void smmu_iotlb_inv_all(SMMUState *s)
23
-static void set_prio(NVICState *s, unsigned irq, uint8_t prio)
37
+void smmu_iotlb_inv_all(SMMUState *s)
24
+/* caller must call nvic_irq_update() after this.
25
+ * secure indicates the bank to use for banked exceptions (we assert if
26
+ * we are passed secure=true for a non-banked exception).
27
+ */
28
+static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
29
{
38
{
30
assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
39
trace_smmu_iotlb_inv_all();
31
assert(irq < s->num_irq);
40
g_hash_table_remove_all(s->iotlb);
32
41
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
33
- s->vectors[irq].prio = prio;
42
((entry->iova & ~info->mask) == info->iova);
34
+ if (secure) {
35
+ assert(exc_is_banked(irq));
36
+ s->sec_vectors[irq].prio = prio;
37
+ } else {
38
+ s->vectors[irq].prio = prio;
39
+ }
40
+
41
+ trace_nvic_set_prio(irq, secure, prio);
42
+}
43
+
44
+/* Return the current raw priority register value.
45
+ * secure indicates the bank to use for banked exceptions (we assert if
46
+ * we are passed secure=true for a non-banked exception).
47
+ */
48
+static int get_prio(NVICState *s, unsigned irq, bool secure)
49
+{
50
+ assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
51
+ assert(irq < s->num_irq);
52
53
- trace_nvic_set_prio(irq, prio);
54
+ if (secure) {
55
+ assert(exc_is_banked(irq));
56
+ return s->sec_vectors[irq].prio;
57
+ } else {
58
+ return s->vectors[irq].prio;
59
+ }
60
}
43
}
61
44
62
/* Recompute state and assert irq line accordingly.
45
-inline void
63
@@ -XXX,XX +XXX,XX @@ static bool nvic_user_access_ok(NVICState *s, hwaddr offset, MemTxAttrs attrs)
46
-smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
64
}
47
- uint8_t tg, uint64_t num_pages, uint8_t ttl)
48
+void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
49
+ uint8_t tg, uint64_t num_pages, uint8_t ttl)
50
{
51
/* if tg is not set we use 4KB range invalidation */
52
uint8_t granule = tg ? tg * 2 + 10 : 12;
53
@@ -XXX,XX +XXX,XX @@ smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova,
54
&info);
65
}
55
}
66
56
67
+static int shpr_bank(NVICState *s, int exc, MemTxAttrs attrs)
57
-inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
68
+{
58
+void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
69
+ /* Behaviour for the SHPR register field for this exception:
59
{
70
+ * return M_REG_NS to use the nonsecure vector (including for
60
trace_smmu_iotlb_inv_asid(asid);
71
+ * non-banked exceptions), M_REG_S for the secure version of
61
g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
72
+ * a banked exception, and -1 if this field should RAZ/WI.
62
@@ -XXX,XX +XXX,XX @@ error:
73
+ */
63
*
74
+ switch (exc) {
64
* return 0 on success
75
+ case ARMV7M_EXCP_MEM:
65
*/
76
+ case ARMV7M_EXCP_USAGE:
66
-inline int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
77
+ case ARMV7M_EXCP_SVC:
67
- SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
78
+ case ARMV7M_EXCP_PENDSV:
68
+int smmu_ptw(SMMUTransCfg *cfg, dma_addr_t iova, IOMMUAccessFlags perm,
79
+ case ARMV7M_EXCP_SYSTICK:
69
+ SMMUTLBEntry *tlbe, SMMUPTWEventInfo *info)
80
+ /* Banked exceptions */
70
{
81
+ return attrs.secure;
71
if (!cfg->aa64) {
82
+ case ARMV7M_EXCP_BUS:
72
/*
83
+ /* Not banked, RAZ/WI from nonsecure if BFHFNMINS is zero */
84
+ if (!attrs.secure &&
85
+ !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
86
+ return -1;
87
+ }
88
+ return M_REG_NS;
89
+ case ARMV7M_EXCP_SECURE:
90
+ /* Not banked, RAZ/WI from nonsecure */
91
+ if (!attrs.secure) {
92
+ return -1;
93
+ }
94
+ return M_REG_NS;
95
+ case ARMV7M_EXCP_DEBUG:
96
+ /* Not banked. TODO should RAZ/WI if DEMCR.SDME is set */
97
+ return M_REG_NS;
98
+ case 8 ... 10:
99
+ case 13:
100
+ /* RES0 */
101
+ return -1;
102
+ default:
103
+ /* Not reachable due to decode of SHPR register addresses */
104
+ g_assert_not_reached();
105
+ }
106
+}
107
+
108
static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
109
uint64_t *data, unsigned size,
110
MemTxAttrs attrs)
111
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
112
}
113
}
114
break;
115
- case 0xd18 ... 0xd23: /* System Handler Priority. */
116
+ case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
117
val = 0;
118
for (i = 0; i < size; i++) {
119
- val |= s->vectors[(offset - 0xd14) + i].prio << (i * 8);
120
+ unsigned hdlidx = (offset - 0xd14) + i;
121
+ int sbank = shpr_bank(s, hdlidx, attrs);
122
+
123
+ if (sbank < 0) {
124
+ continue;
125
+ }
126
+ val = deposit32(val, i * 8, 8, get_prio(s, hdlidx, sbank));
127
}
128
break;
129
case 0xfe0 ... 0xfff: /* ID. */
130
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
131
132
for (i = 0; i < size && startvec + i < s->num_irq; i++) {
133
if (attrs.secure || s->itns[startvec + i]) {
134
- set_prio(s, startvec + i, (value >> (i * 8)) & 0xff);
135
+ set_prio(s, startvec + i, false, (value >> (i * 8)) & 0xff);
136
}
137
}
138
nvic_irq_update(s);
139
return MEMTX_OK;
140
- case 0xd18 ... 0xd23: /* System Handler Priority. */
141
+ case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
142
for (i = 0; i < size; i++) {
143
unsigned hdlidx = (offset - 0xd14) + i;
144
- set_prio(s, hdlidx, (value >> (i * 8)) & 0xff);
145
+ int newprio = extract32(value, i * 8, 8);
146
+ int sbank = shpr_bank(s, hdlidx, attrs);
147
+
148
+ if (sbank < 0) {
149
+ continue;
150
+ }
151
+ set_prio(s, hdlidx, sbank, newprio);
152
}
153
nvic_irq_update(s);
154
return MEMTX_OK;
155
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
156
index XXXXXXX..XXXXXXX 100644
157
--- a/hw/intc/trace-events
158
+++ b/hw/intc/trace-events
159
@@ -XXX,XX +XXX,XX @@ gicv3_redist_send_sgi(uint32_t cpu, int irq) "GICv3 redistributor 0x%x pending S
160
# hw/intc/armv7m_nvic.c
161
nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
162
nvic_recompute_state_secure(int vectpending, bool vectpending_is_s_banked, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d is_s_banked %d vectpending_prio %d exception_prio %d"
163
-nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
164
+nvic_set_prio(int irq, bool secure, uint8_t prio) "NVIC set irq %d secure-bank %d priority %d"
165
nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
166
nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
167
nvic_escalate_disabled(int irq) "NVIC escalating irq %d to HardFault: disabled"
168
--
73
--
169
2.7.4
74
2.25.1
170
75
171
76
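For readers following the (offset - 0xd14) + i arithmetic in the SHPR hunks above, each byte of SHPR1/2/3 holds the priority of one system exception; a tiny standalone restatement (illustration only, using the architectural v7M exception numbers) is:

/* Illustration only: which system exception a given SHPR byte configures.
 * SHPR1 at 0xd18 covers exceptions 4..7 (MemManage, BusFault, UsageFault),
 * SHPR2 at 0xd1c covers 8..11 (SVCall), SHPR3 at 0xd20 covers 12..15
 * (DebugMonitor, PendSV, SysTick), hence hdlidx = (offset - 0xd14) + i.
 */
static unsigned shpr_byte_to_exception(unsigned offset, unsigned byte)
{
    return (offset - 0xd14) + byte; /* e.g. offset 0xd18, byte 0 -> exception 4 */
}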
1
Instead of looking up the pending priority
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
in nvic_pending_prio(), cache it in a new state struct
3
field. The calculation of the pending priority given
4
the interrupt number is more complicated in v8M with
5
the security extension, so the caching will be worthwhile.
6
2
7
This changes nvic_pending_prio() from returning a full
3
So far the GPT timers were unable to raise IRQs to the processor.
8
(group + subpriority) priority value to returning a group
9
priority. This doesn't require changes to its callsites
10
because we use it only in comparisons of the form
11
execution_prio > nvic_pending_prio()
12
and execution priority is always a group priority, so
13
a test (exec prio > full prio) is true if and only if
14
(execprio > group_prio).
15
4
16
(Architecturally the expected comparison is with the
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
17
group priority for this sort of "would we preempt" test;
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
we were only doing a test with a full priority as an
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
optimisation to avoid the mask, which is possible
8
---
20
precisely because the two comparisons always give the
9
include/hw/arm/fsl-imx7.h | 5 +++++
21
same answer.)
10
hw/arm/fsl-imx7.c | 10 ++++++++++
11
2 files changed, 15 insertions(+)
22
12
23
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
24
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
25
Message-id: 1505240046-11454-5-git-send-email-peter.maydell@linaro.org
26
---
27
include/hw/intc/armv7m_nvic.h | 2 ++
28
hw/intc/armv7m_nvic.c | 23 +++++++++++++----------
29
hw/intc/trace-events | 2 +-
30
3 files changed, 16 insertions(+), 11 deletions(-)
31
32
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
33
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
34
--- a/include/hw/intc/armv7m_nvic.h
15
--- a/include/hw/arm/fsl-imx7.h
35
+++ b/include/hw/intc/armv7m_nvic.h
16
+++ b/include/hw/arm/fsl-imx7.h
36
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
17
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
37
* - vectpending
18
FSL_IMX7_USB2_IRQ = 42,
38
* - vectpending_is_secure
19
FSL_IMX7_USB3_IRQ = 40,
39
* - exception_prio
20
40
+ * - vectpending_prio
21
+ FSL_IMX7_GPT1_IRQ = 55,
41
*/
22
+ FSL_IMX7_GPT2_IRQ = 54,
42
unsigned int vectpending; /* highest prio pending enabled exception */
23
+ FSL_IMX7_GPT3_IRQ = 53,
43
/* true if vectpending is a banked secure exception, ie it is in
24
+ FSL_IMX7_GPT4_IRQ = 52,
44
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
25
+
45
*/
26
FSL_IMX7_WDOG1_IRQ = 78,
46
bool vectpending_is_s_banked;
27
FSL_IMX7_WDOG2_IRQ = 79,
47
int exception_prio; /* group prio of the highest prio active exception */
28
FSL_IMX7_WDOG3_IRQ = 10,
48
+ int vectpending_prio; /* group prio of the exeception in vectpending */
29
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
49
50
MemoryRegion sysregmem;
51
MemoryRegion sysreg_ns_mem;
52
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
53
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/intc/armv7m_nvic.c
31
--- a/hw/arm/fsl-imx7.c
55
+++ b/hw/intc/armv7m_nvic.c
32
+++ b/hw/arm/fsl-imx7.c
56
@@ -XXX,XX +XXX,XX @@ static const uint8_t nvic_id[] = {
33
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
57
34
FSL_IMX7_GPT4_ADDR,
58
static int nvic_pending_prio(NVICState *s)
35
};
59
{
36
60
- /* return the priority of the current pending interrupt,
37
+ static const int FSL_IMX7_GPTn_IRQ[FSL_IMX7_NUM_GPTS] = {
61
+ /* return the group priority of the current pending interrupt,
38
+ FSL_IMX7_GPT1_IRQ,
62
* or NVIC_NOEXC_PRIO if no interrupt is pending
39
+ FSL_IMX7_GPT2_IRQ,
63
*/
40
+ FSL_IMX7_GPT3_IRQ,
64
- return s->vectpending ? s->vectors[s->vectpending].prio : NVIC_NOEXC_PRIO;
41
+ FSL_IMX7_GPT4_IRQ,
65
+ return s->vectpending_prio;
42
+ };
66
}
43
+
67
44
s->gpt[i].ccm = IMX_CCM(&s->ccm);
68
/* Return the value of the ISCR RETTOBASE bit:
45
sysbus_realize(SYS_BUS_DEVICE(&s->gpt[i]), &error_abort);
69
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
46
sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpt[i]), 0, FSL_IMX7_GPTn_ADDR[i]);
70
active_prio &= nvic_gprio_mask(s);
47
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpt[i]), 0,
48
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
49
+ FSL_IMX7_GPTn_IRQ[i]));
71
}
50
}
72
51
73
+ if (pend_prio > 0) {
52
for (i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
74
+ pend_prio &= nvic_gprio_mask(s);
75
+ }
76
+
77
s->vectpending = pend_irq;
78
+ s->vectpending_prio = pend_prio;
79
s->exception_prio = active_prio;
80
81
- trace_nvic_recompute_state(s->vectpending, s->exception_prio);
82
+ trace_nvic_recompute_state(s->vectpending,
83
+ s->vectpending_prio,
84
+ s->exception_prio);
85
}
86
87
/* Return the current execution priority of the CPU
88
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
89
CPUARMState *env = &s->cpu->env;
90
const int pending = s->vectpending;
91
const int running = nvic_exec_prio(s);
92
- int pendgroupprio;
93
VecInfo *vec;
94
95
assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);
96
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
97
assert(vec->enabled);
98
assert(vec->pending);
99
100
- pendgroupprio = vec->prio;
101
- if (pendgroupprio > 0) {
102
- pendgroupprio &= nvic_gprio_mask(s);
103
- }
104
- assert(pendgroupprio < running);
105
+ assert(s->vectpending_prio < running);
106
107
- trace_nvic_acknowledge_irq(pending, vec->prio);
108
+ trace_nvic_acknowledge_irq(pending, s->vectpending_prio);
109
110
vec->active = 1;
111
vec->pending = 0;
112
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
113
s->exception_prio = NVIC_NOEXC_PRIO;
114
s->vectpending = 0;
115
s->vectpending_is_s_banked = false;
116
+ s->vectpending_prio = NVIC_NOEXC_PRIO;
117
}
118
119
static void nvic_systick_trigger(void *opaque, int n, int level)
120
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
121
index XXXXXXX..XXXXXXX 100644
122
--- a/hw/intc/trace-events
123
+++ b/hw/intc/trace-events
124
@@ -XXX,XX +XXX,XX @@ gicv3_redist_set_irq(uint32_t cpu, int irq, int level) "GICv3 redistributor 0x%x
125
gicv3_redist_send_sgi(uint32_t cpu, int irq) "GICv3 redistributor 0x%x pending SGI %d"
126
127
# hw/intc/armv7m_nvic.c
128
-nvic_recompute_state(int vectpending, int exception_prio) "NVIC state recomputed: vectpending %d exception_prio %d"
129
+nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
130
nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
131
nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
132
nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
133
--
53
--
134
2.7.4
54
2.25.1
135
136
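The "group priority" being cached above is simply the raw priority with the subpriority bits masked off, which is what nvic_gprio_mask() computes; a hypothetical standalone helper showing the relationship (illustration only, mirroring the pend_prio handling in the hunk above) would be:

/* Hypothetical helper, mirroring the patch: with PRIGROUP == n, bits
 * [7:n+1] of an 8-bit configurable priority form the group priority and
 * the low bits are subpriority, which never affects preemption.
 * Fixed negative priorities (Reset, NMI, HardFault) are used as-is.
 */
static int group_prio(int raw_prio, unsigned prigroup)
{
    return raw_prio > 0 ? (int)(raw_prio & (~0U << (prigroup + 1)))
                        : raw_prio;
}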
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Added System register block of Smartfusion2.
3
CCM derived clocks will have to be added later.
4
This block has PLL registers which are accessed by the guest.
5
4
6
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20170920201737.25723-3-f4bug@amsat.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
8
---
13
hw/misc/Makefile.objs | 1 +
9
hw/misc/imx7_ccm.c | 49 +++++++++++++++++++++++++++++++++++++---------
14
include/hw/misc/msf2-sysreg.h | 77 ++++++++++++++++++++
10
1 file changed, 40 insertions(+), 9 deletions(-)
15
hw/misc/msf2-sysreg.c | 160 ++++++++++++++++++++++++++++++++++++++++++
16
hw/misc/trace-events | 5 ++
17
4 files changed, 243 insertions(+)
18
create mode 100644 include/hw/misc/msf2-sysreg.h
19
create mode 100644 hw/misc/msf2-sysreg.c
20
11
21
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
12
diff --git a/hw/misc/imx7_ccm.c b/hw/misc/imx7_ccm.c
22
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/misc/Makefile.objs
14
--- a/hw/misc/imx7_ccm.c
24
+++ b/hw/misc/Makefile.objs
15
+++ b/hw/misc/imx7_ccm.c
25
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_HYPERV_TESTDEV) += hyperv_testdev.o
26
obj-$(CONFIG_AUX) += auxbus.o
27
obj-$(CONFIG_ASPEED_SOC) += aspeed_scu.o aspeed_sdmc.o
28
obj-y += mmio_interface.o
29
+obj-$(CONFIG_MSF2) += msf2-sysreg.o
30
diff --git a/include/hw/misc/msf2-sysreg.h b/include/hw/misc/msf2-sysreg.h
31
new file mode 100644
32
index XXXXXXX..XXXXXXX
33
--- /dev/null
34
+++ b/include/hw/misc/msf2-sysreg.h
35
@@ -XXX,XX +XXX,XX @@
16
@@ -XXX,XX +XXX,XX @@
36
+/*
17
#include "hw/misc/imx7_ccm.h"
37
+ * Microsemi SmartFusion2 SYSREG
18
#include "migration/vmstate.h"
38
+ *
19
39
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
40
+ *
41
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
42
+ * of this software and associated documentation files (the "Software"), to deal
43
+ * in the Software without restriction, including without limitation the rights
44
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
45
+ * copies of the Software, and to permit persons to whom the Software is
46
+ * furnished to do so, subject to the following conditions:
47
+ *
48
+ * The above copyright notice and this permission notice shall be included in
49
+ * all copies or substantial portions of the Software.
50
+ *
51
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
52
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
53
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
54
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
55
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
56
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
57
+ * THE SOFTWARE.
58
+ */
59
+
60
+#ifndef HW_MSF2_SYSREG_H
61
+#define HW_MSF2_SYSREG_H
62
+
63
+#include "hw/sysbus.h"
64
+
65
+enum {
66
+ ESRAM_CR = 0x00 / 4,
67
+ ESRAM_MAX_LAT,
68
+ DDR_CR,
69
+ ENVM_CR,
70
+ ENVM_REMAP_BASE_CR,
71
+ ENVM_REMAP_FAB_CR,
72
+ CC_CR,
73
+ CC_REGION_CR,
74
+ CC_LOCK_BASE_ADDR_CR,
75
+ CC_FLUSH_INDX_CR,
76
+ DDRB_BUF_TIMER_CR,
77
+ DDRB_NB_ADDR_CR,
78
+ DDRB_NB_SIZE_CR,
79
+ DDRB_CR,
80
+
81
+ SOFT_RESET_CR = 0x48 / 4,
82
+ M3_CR,
83
+
84
+ GPIO_SYSRESET_SEL_CR = 0x58 / 4,
85
+
86
+ MDDR_CR = 0x60 / 4,
87
+
88
+ MSSDDR_PLL_STATUS_LOW_CR = 0x90 / 4,
89
+ MSSDDR_PLL_STATUS_HIGH_CR,
90
+ MSSDDR_FACC1_CR,
91
+ MSSDDR_FACC2_CR,
92
+
93
+ MSSDDR_PLL_STATUS = 0x150 / 4,
94
+};
95
+
96
+#define MSF2_SYSREG_MMIO_SIZE 0x300
97
+
98
+#define TYPE_MSF2_SYSREG "msf2-sysreg"
99
+#define MSF2_SYSREG(obj) OBJECT_CHECK(MSF2SysregState, (obj), TYPE_MSF2_SYSREG)
100
+
101
+typedef struct MSF2SysregState {
102
+ SysBusDevice parent_obj;
103
+
104
+ MemoryRegion iomem;
105
+
106
+ uint8_t apb0div;
107
+ uint8_t apb1div;
108
+
109
+ uint32_t regs[MSF2_SYSREG_MMIO_SIZE / 4];
110
+} MSF2SysregState;
111
+
112
+#endif /* HW_MSF2_SYSREG_H */
113
diff --git a/hw/misc/msf2-sysreg.c b/hw/misc/msf2-sysreg.c
114
new file mode 100644
115
index XXXXXXX..XXXXXXX
116
--- /dev/null
117
+++ b/hw/misc/msf2-sysreg.c
118
@@ -XXX,XX +XXX,XX @@
119
+/*
120
+ * System Register block model of Microsemi SmartFusion2.
121
+ *
122
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
123
+ *
124
+ * This program is free software; you can redistribute it and/or
125
+ * modify it under the terms of the GNU General Public License
126
+ * as published by the Free Software Foundation; either version
127
+ * 2 of the License, or (at your option) any later version.
128
+ *
129
+ * You should have received a copy of the GNU General Public License along
130
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
131
+ */
132
+
133
+#include "qemu/osdep.h"
134
+#include "qapi/error.h"
135
+#include "qemu/log.h"
136
+#include "hw/misc/msf2-sysreg.h"
137
+#include "qemu/error-report.h"
138
+#include "trace.h"
20
+#include "trace.h"
139
+
21
+
140
+static inline int msf2_divbits(uint32_t div)
22
+#define CKIH_FREQ 24000000 /* 24MHz crystal input */
141
+{
142
+ int r = ctz32(div);
143
+
23
+
144
+ return (div < 8) ? r : r + 1;
24
static void imx7_analog_reset(DeviceState *dev)
145
+}
25
{
26
IMX7AnalogState *s = IMX7_ANALOG(dev);
27
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_imx7_ccm = {
28
static uint32_t imx7_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
29
{
30
/*
31
- * This function is "consumed" by GPT emulation code, however on
32
- * i.MX7 each GPT block can have their own clock root. This means
33
- * that this functions needs somehow to know requester's identity
34
- * and the way to pass it: be it via additional IMXClk constants
35
- * or by adding another argument to this method needs to be
36
- * figured out
37
+ * This function is "consumed" by GPT emulation code. Some clocks
38
+ * have fixed frequencies and we can provide requested frequency
39
+ * easily. However for CCM provided clocks (like IPG) each GPT
40
+ * timer can have its own clock root.
41
+ * This means we need additional information when calling this
42
+ * function to know the requester's identity.
43
*/
44
- qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Not implemented\n",
45
- TYPE_IMX7_CCM, __func__);
46
- return 0;
47
+ uint32_t freq = 0;
146
+
48
+
147
+static void msf2_sysreg_reset(DeviceState *d)
49
+ switch (clock) {
148
+{
50
+ case CLK_NONE:
149
+ MSF2SysregState *s = MSF2_SYSREG(d);
51
+ break;
150
+
52
+ case CLK_32k:
151
+ s->regs[MSSDDR_PLL_STATUS_LOW_CR] = 0x021A2358;
53
+ freq = CKIL_FREQ;
152
+ s->regs[MSSDDR_PLL_STATUS] = 0x3;
54
+ break;
153
+ s->regs[MSSDDR_FACC1_CR] = msf2_divbits(s->apb0div) << 5 |
55
+ case CLK_HIGH:
154
+ msf2_divbits(s->apb1div) << 2;
56
+ freq = CKIH_FREQ;
155
+}
57
+ break;
156
+
58
+ case CLK_IPG:
157
+static uint64_t msf2_sysreg_read(void *opaque, hwaddr offset,
59
+ case CLK_IPG_HIGH:
158
+ unsigned size)
60
+ /*
159
+{
61
+ * For now we don't have a way to figure out the device this
160
+ MSF2SysregState *s = opaque;
62
+ * function is called for. Until then the IPG derived clocks
161
+ uint32_t ret = 0;
63
+ * are left unimplemented.
162
+
64
+ */
163
+ offset >>= 2;
65
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: Clock %d Not implemented\n",
164
+ if (offset < ARRAY_SIZE(s->regs)) {
66
+ TYPE_IMX7_CCM, __func__, clock);
165
+ ret = s->regs[offset];
67
+ break;
166
+ trace_msf2_sysreg_read(offset << 2, ret);
68
+ default:
167
+ } else {
69
+ qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: unsupported clock %d\n",
168
+ qemu_log_mask(LOG_GUEST_ERROR,
70
+ TYPE_IMX7_CCM, __func__, clock);
169
+ "%s: Bad offset 0x%08" HWADDR_PRIx "\n", __func__,
71
+ break;
170
+ offset << 2);
171
+ }
72
+ }
172
+
73
+
173
+ return ret;
74
+ trace_ccm_clock_freq(clock, freq);
174
+}
175
+
75
+
176
+static void msf2_sysreg_write(void *opaque, hwaddr offset,
76
+ return freq;
177
+ uint64_t val, unsigned size)
77
}
178
+{
78
179
+ MSF2SysregState *s = opaque;
79
static void imx7_ccm_class_init(ObjectClass *klass, void *data)
180
+ uint32_t newval = val;
181
+
182
+ offset >>= 2;
183
+
184
+ switch (offset) {
185
+ case MSSDDR_PLL_STATUS:
186
+ trace_msf2_sysreg_write_pll_status();
187
+ break;
188
+
189
+ case ESRAM_CR:
190
+ case DDR_CR:
191
+ case ENVM_REMAP_BASE_CR:
192
+ if (newval != s->regs[offset]) {
193
+ qemu_log_mask(LOG_UNIMP,
194
+ TYPE_MSF2_SYSREG": remapping not supported\n");
195
+ }
196
+ break;
197
+
198
+ default:
199
+ if (offset < ARRAY_SIZE(s->regs)) {
200
+ trace_msf2_sysreg_write(offset << 2, newval, s->regs[offset]);
201
+ s->regs[offset] = newval;
202
+ } else {
203
+ qemu_log_mask(LOG_GUEST_ERROR,
204
+ "%s: Bad offset 0x%08" HWADDR_PRIx "\n", __func__,
205
+ offset << 2);
206
+ }
207
+ break;
208
+ }
209
+}
210
+
211
+static const MemoryRegionOps sysreg_ops = {
212
+ .read = msf2_sysreg_read,
213
+ .write = msf2_sysreg_write,
214
+ .endianness = DEVICE_NATIVE_ENDIAN,
215
+};
216
+
217
+static void msf2_sysreg_init(Object *obj)
218
+{
219
+ MSF2SysregState *s = MSF2_SYSREG(obj);
220
+
221
+ memory_region_init_io(&s->iomem, obj, &sysreg_ops, s, TYPE_MSF2_SYSREG,
222
+ MSF2_SYSREG_MMIO_SIZE);
223
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
224
+}
225
+
226
+static const VMStateDescription vmstate_msf2_sysreg = {
227
+ .name = TYPE_MSF2_SYSREG,
228
+ .version_id = 1,
229
+ .minimum_version_id = 1,
230
+ .fields = (VMStateField[]) {
231
+ VMSTATE_UINT32_ARRAY(regs, MSF2SysregState, MSF2_SYSREG_MMIO_SIZE / 4),
232
+ VMSTATE_END_OF_LIST()
233
+ }
234
+};
235
+
236
+static Property msf2_sysreg_properties[] = {
237
+ /* default divisors in Libero GUI */
238
+ DEFINE_PROP_UINT8("apb0divisor", MSF2SysregState, apb0div, 2),
239
+ DEFINE_PROP_UINT8("apb1divisor", MSF2SysregState, apb1div, 2),
240
+ DEFINE_PROP_END_OF_LIST(),
241
+};
242
+
243
+static void msf2_sysreg_realize(DeviceState *dev, Error **errp)
244
+{
245
+ MSF2SysregState *s = MSF2_SYSREG(dev);
246
+
247
+ if ((s->apb0div > 32 || !is_power_of_2(s->apb0div))
248
+ || (s->apb1div > 32 || !is_power_of_2(s->apb1div))) {
249
+ error_setg(errp, "Invalid apb divisor value");
250
+ error_append_hint(errp, "apb divisor must be a power of 2"
251
+ " and maximum value is 32\n");
252
+ }
253
+}
254
+
255
+static void msf2_sysreg_class_init(ObjectClass *klass, void *data)
256
+{
257
+ DeviceClass *dc = DEVICE_CLASS(klass);
258
+
259
+ dc->vmsd = &vmstate_msf2_sysreg;
260
+ dc->reset = msf2_sysreg_reset;
261
+ dc->props = msf2_sysreg_properties;
262
+ dc->realize = msf2_sysreg_realize;
263
+}
264
+
265
+static const TypeInfo msf2_sysreg_info = {
266
+ .name = TYPE_MSF2_SYSREG,
267
+ .parent = TYPE_SYS_BUS_DEVICE,
268
+ .class_init = msf2_sysreg_class_init,
269
+ .instance_size = sizeof(MSF2SysregState),
270
+ .instance_init = msf2_sysreg_init,
271
+};
272
+
273
+static void msf2_sysreg_register_types(void)
274
+{
275
+ type_register_static(&msf2_sysreg_info);
276
+}
277
+
278
+type_init(msf2_sysreg_register_types)
279
diff --git a/hw/misc/trace-events b/hw/misc/trace-events
280
index XXXXXXX..XXXXXXX 100644
281
--- a/hw/misc/trace-events
282
+++ b/hw/misc/trace-events
283
@@ -XXX,XX +XXX,XX @@ mps2_scc_reset(void) "MPS2 SCC: reset"
284
mps2_scc_leds(char led7, char led6, char led5, char led4, char led3, char led2, char led1, char led0) "MPS2 SCC LEDs: %c%c%c%c%c%c%c%c"
285
mps2_scc_cfg_write(unsigned function, unsigned device, uint32_t value) "MPS2 SCC config write: function %d device %d data 0x%" PRIx32
286
mps2_scc_cfg_read(unsigned function, unsigned device, uint32_t value) "MPS2 SCC config read: function %d device %d data 0x%" PRIx32
287
+
288
+# hw/misc/msf2-sysreg.c
289
+msf2_sysreg_write(uint64_t offset, uint32_t val, uint32_t prev) "msf2-sysreg write: addr 0x%08" HWADDR_PRIx " data 0x%" PRIx32 " prev 0x%" PRIx32
290
+msf2_sysreg_read(uint64_t offset, uint32_t val) "msf2-sysreg read: addr 0x%08" HWADDR_PRIx " data 0x%08" PRIx32
291
+msf2_sysreg_write_pll_status(void) "Invalid write to read only PLL status register"
292
--
80
--
293
2.7.4
81
2.25.1
294
295
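For reference, here is a minimal standalone sketch (plain C, not QEMU code) of the APB divisor handling in the msf2-sysreg model above: the divbits() encoding used to build MSSDDR_FACC1_CR at reset, and the power-of-two/maximum-32 check done at realize time. The bit positions and default divisors mirror the patch; main() and the printing are purely illustrative.

/*
 * Standalone sketch (not QEMU code): the SmartFusion2 APB divisor
 * encoding used when msf2_sysreg_reset() builds MSSDDR_FACC1_CR,
 * plus the validity check done in the realize hook.
 */
#include <stdio.h>
#include <inttypes.h>
#include <stdbool.h>

static bool is_power_of_2(unsigned v)
{
    return v && !(v & (v - 1));
}

/* Mirrors msf2_divbits(): divisors 1,2,4,8,16,32 encode as 0,1,2,4,5,6. */
static int divbits(uint32_t div)
{
    int r = __builtin_ctz(div);          /* stand-in for QEMU's ctz32() */

    return (div < 8) ? r : r + 1;
}

int main(void)
{
    unsigned apb0div = 2, apb1div = 2;   /* the patch's property defaults */
    unsigned divisors[] = { 1, 2, 4, 8, 16, 32 };
    uint32_t facc1;

    for (size_t i = 0; i < sizeof(divisors) / sizeof(divisors[0]); i++) {
        printf("divisor %2u -> field value %d\n",
               divisors[i], divbits(divisors[i]));
    }

    /* Same check as msf2_sysreg_realize(): power of two, at most 32. */
    if (apb0div > 32 || !is_power_of_2(apb0div) ||
        apb1div > 32 || !is_power_of_2(apb1div)) {
        fprintf(stderr, "apb divisor must be a power of 2, at most 32\n");
        return 1;
    }

    /* APB0 field at bits [7:5], APB1 field at bits [4:2], as in the patch. */
    facc1 = divbits(apb0div) << 5 | divbits(apb1div) << 2;
    printf("MSSDDR_FACC1_CR reset value: 0x%02" PRIx32 "\n", facc1);
    return 0;
}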
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
2
3
Modelled the System Timer present in Microsemi's SmartFusion2 SoC.
3
The i.MX6UL doesn't support the CLK_HIGH or CLK_HIGH_DIV clock sources.
4
The timer has two 32-bit down counters and two interrupts.
5
4
6
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
5
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20170920201737.25723-2-f4bug@amsat.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
8
---
13
hw/timer/Makefile.objs | 1 +
9
include/hw/timer/imx_gpt.h | 1 +
14
include/hw/timer/mss-timer.h | 64 ++++++++++
10
hw/arm/fsl-imx6ul.c | 2 +-
15
hw/timer/mss-timer.c | 289 +++++++++++++++++++++++++++++++++++++++++++
11
hw/misc/imx6ul_ccm.c | 6 ------
16
3 files changed, 354 insertions(+)
12
hw/timer/imx_gpt.c | 25 +++++++++++++++++++++++++
17
create mode 100644 include/hw/timer/mss-timer.h
13
4 files changed, 27 insertions(+), 7 deletions(-)
18
create mode 100644 hw/timer/mss-timer.c
19
14
20
diff --git a/hw/timer/Makefile.objs b/hw/timer/Makefile.objs
15
diff --git a/include/hw/timer/imx_gpt.h b/include/hw/timer/imx_gpt.h
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/timer/Makefile.objs
17
--- a/include/hw/timer/imx_gpt.h
23
+++ b/hw/timer/Makefile.objs
18
+++ b/include/hw/timer/imx_gpt.h
24
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_ASPEED_SOC) += aspeed_timer.o
25
26
common-obj-$(CONFIG_SUN4V_RTC) += sun4v-rtc.o
27
common-obj-$(CONFIG_CMSDK_APB_TIMER) += cmsdk-apb-timer.o
28
+common-obj-$(CONFIG_MSF2) += mss-timer.o
29
diff --git a/include/hw/timer/mss-timer.h b/include/hw/timer/mss-timer.h
30
new file mode 100644
31
index XXXXXXX..XXXXXXX
32
--- /dev/null
33
+++ b/include/hw/timer/mss-timer.h
34
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
35
+/*
20
#define TYPE_IMX25_GPT "imx25.gpt"
36
+ * Microsemi SmartFusion2 Timer.
21
#define TYPE_IMX31_GPT "imx31.gpt"
37
+ *
22
#define TYPE_IMX6_GPT "imx6.gpt"
38
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
23
+#define TYPE_IMX6UL_GPT "imx6ul.gpt"
39
+ *
24
#define TYPE_IMX7_GPT "imx7.gpt"
40
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
25
41
+ * of this software and associated documentation files (the "Software"), to deal
26
#define TYPE_IMX_GPT TYPE_IMX25_GPT
42
+ * in the Software without restriction, including without limitation the rights
27
diff --git a/hw/arm/fsl-imx6ul.c b/hw/arm/fsl-imx6ul.c
43
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
28
index XXXXXXX..XXXXXXX 100644
44
+ * copies of the Software, and to permit persons to whom the Software is
29
--- a/hw/arm/fsl-imx6ul.c
45
+ * furnished to do so, subject to the following conditions:
30
+++ b/hw/arm/fsl-imx6ul.c
46
+ *
31
@@ -XXX,XX +XXX,XX @@ static void fsl_imx6ul_init(Object *obj)
47
+ * The above copyright notice and this permission notice shall be included in
32
*/
48
+ * all copies or substantial portions of the Software.
33
for (i = 0; i < FSL_IMX6UL_NUM_GPTS; i++) {
49
+ *
34
snprintf(name, NAME_SIZE, "gpt%d", i);
50
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
35
- object_initialize_child(obj, name, &s->gpt[i], TYPE_IMX7_GPT);
51
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
36
+ object_initialize_child(obj, name, &s->gpt[i], TYPE_IMX6UL_GPT);
52
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
37
}
53
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
38
54
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
39
/*
55
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
40
diff --git a/hw/misc/imx6ul_ccm.c b/hw/misc/imx6ul_ccm.c
56
+ * THE SOFTWARE.
41
index XXXXXXX..XXXXXXX 100644
57
+ */
42
--- a/hw/misc/imx6ul_ccm.c
58
+
43
+++ b/hw/misc/imx6ul_ccm.c
59
+#ifndef HW_MSS_TIMER_H
44
@@ -XXX,XX +XXX,XX @@ static uint32_t imx6ul_ccm_get_clock_frequency(IMXCCMState *dev, IMXClk clock)
60
+#define HW_MSS_TIMER_H
45
case CLK_32k:
61
+
46
freq = CKIL_FREQ;
62
+#include "hw/sysbus.h"
47
break;
63
+#include "hw/ptimer.h"
48
- case CLK_HIGH:
64
+
49
- freq = CKIH_FREQ;
65
+#define TYPE_MSS_TIMER "mss-timer"
50
- break;
66
+#define MSS_TIMER(obj) OBJECT_CHECK(MSSTimerState, \
51
- case CLK_HIGH_DIV:
67
+ (obj), TYPE_MSS_TIMER)
52
- freq = CKIH_FREQ / 8;
68
+
53
- break;
69
+/*
54
default:
70
+ * There are two 32-bit down counting timers.
55
qemu_log_mask(LOG_GUEST_ERROR, "[%s]%s: unsupported clock %d\n",
71
+ * Timers 1 and 2 can be concatenated into a single 64-bit Timer
56
TYPE_IMX6UL_CCM, __func__, clock);
72
+ * that operates either in Periodic mode or in One-shot mode.
57
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
73
+ * Writing 1 to the TIM64_MODE register bit 0 sets the Timers in 64-bit mode.
58
index XXXXXXX..XXXXXXX 100644
74
+ * In 64-bit mode, writing to the 32-bit registers has no effect.
59
--- a/hw/timer/imx_gpt.c
75
+ * Similarly, in 32-bit mode, writing to the 64-bit mode registers
60
+++ b/hw/timer/imx_gpt.c
76
+ * has no effect. Only two 32-bit timers are supported currently.
61
@@ -XXX,XX +XXX,XX @@ static const IMXClk imx6_gpt_clocks[] = {
77
+ */
62
CLK_HIGH, /* 111 reference clock */
78
+#define NUM_TIMERS 2
63
};
79
+
64
80
+#define R_TIM1_MAX 6
65
+static const IMXClk imx6ul_gpt_clocks[] = {
81
+
66
+ CLK_NONE, /* 000 No clock source */
82
+struct Msf2Timer {
67
+ CLK_IPG, /* 001 ipg_clk, 532MHz*/
83
+ QEMUBH *bh;
68
+ CLK_IPG_HIGH, /* 010 ipg_clk_highfreq */
84
+ ptimer_state *ptimer;
69
+ CLK_EXT, /* 011 External clock */
85
+
70
+ CLK_32k, /* 100 ipg_clk_32k */
86
+ uint32_t regs[R_TIM1_MAX];
71
+ CLK_NONE, /* 101 not defined */
87
+ qemu_irq irq;
72
+ CLK_NONE, /* 110 not defined */
73
+ CLK_NONE, /* 111 not defined */
88
+};
74
+};
89
+
75
+
90
+typedef struct MSSTimerState {
76
static const IMXClk imx7_gpt_clocks[] = {
91
+ SysBusDevice parent_obj;
77
CLK_NONE, /* 000 No clock source */
78
CLK_IPG, /* 001 ipg_clk, 532MHz*/
79
@@ -XXX,XX +XXX,XX @@ static void imx6_gpt_init(Object *obj)
80
s->clocks = imx6_gpt_clocks;
81
}
82
83
+static void imx6ul_gpt_init(Object *obj)
84
+{
85
+ IMXGPTState *s = IMX_GPT(obj);
92
+
86
+
93
+ MemoryRegion mmio;
87
+ s->clocks = imx6ul_gpt_clocks;
94
+ uint32_t freq_hz;
95
+ struct Msf2Timer timers[NUM_TIMERS];
96
+} MSSTimerState;
97
+
98
+#endif /* HW_MSS_TIMER_H */
99
diff --git a/hw/timer/mss-timer.c b/hw/timer/mss-timer.c
100
new file mode 100644
101
index XXXXXXX..XXXXXXX
102
--- /dev/null
103
+++ b/hw/timer/mss-timer.c
104
@@ -XXX,XX +XXX,XX @@
105
+/*
106
+ * Block model of System timer present in
107
+ * Microsemi's SmartFusion2 and SmartFusion SoCs.
108
+ *
109
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>.
110
+ *
111
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
112
+ * of this software and associated documentation files (the "Software"), to deal
113
+ * in the Software without restriction, including without limitation the rights
114
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
115
+ * copies of the Software, and to permit persons to whom the Software is
116
+ * furnished to do so, subject to the following conditions:
117
+ *
118
+ * The above copyright notice and this permission notice shall be included in
119
+ * all copies or substantial portions of the Software.
120
+ *
121
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
122
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
123
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
124
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
125
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
126
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
127
+ * THE SOFTWARE.
128
+ */
129
+
130
+#include "qemu/osdep.h"
131
+#include "qemu/main-loop.h"
132
+#include "qemu/log.h"
133
+#include "hw/timer/mss-timer.h"
134
+
135
+#ifndef MSS_TIMER_ERR_DEBUG
136
+#define MSS_TIMER_ERR_DEBUG 0
137
+#endif
138
+
139
+#define DB_PRINT_L(lvl, fmt, args...) do { \
140
+ if (MSS_TIMER_ERR_DEBUG >= lvl) { \
141
+ qemu_log("%s: " fmt "\n", __func__, ## args); \
142
+ } \
143
+} while (0);
144
+
145
+#define DB_PRINT(fmt, args...) DB_PRINT_L(1, fmt, ## args)
146
+
147
+#define R_TIM_VAL 0
148
+#define R_TIM_LOADVAL 1
149
+#define R_TIM_BGLOADVAL 2
150
+#define R_TIM_CTRL 3
151
+#define R_TIM_RIS 4
152
+#define R_TIM_MIS 5
153
+
154
+#define TIMER_CTRL_ENBL (1 << 0)
155
+#define TIMER_CTRL_ONESHOT (1 << 1)
156
+#define TIMER_CTRL_INTR (1 << 2)
157
+#define TIMER_RIS_ACK (1 << 0)
158
+#define TIMER_RST_CLR (1 << 6)
159
+#define TIMER_MODE (1 << 0)
160
+
161
+static void timer_update_irq(struct Msf2Timer *st)
162
+{
163
+ bool isr, ier;
164
+
165
+ isr = !!(st->regs[R_TIM_RIS] & TIMER_RIS_ACK);
166
+ ier = !!(st->regs[R_TIM_CTRL] & TIMER_CTRL_INTR);
167
+ qemu_set_irq(st->irq, (ier && isr));
168
+}
88
+}
169
+
89
+
170
+static void timer_update(struct Msf2Timer *st)
90
static void imx7_gpt_init(Object *obj)
171
+{
91
{
172
+ uint64_t count;
92
IMXGPTState *s = IMX_GPT(obj);
173
+
93
@@ -XXX,XX +XXX,XX @@ static const TypeInfo imx6_gpt_info = {
174
+ if (!(st->regs[R_TIM_CTRL] & TIMER_CTRL_ENBL)) {
94
.instance_init = imx6_gpt_init,
175
+ ptimer_stop(st->ptimer);
95
};
176
+ return;
96
177
+ }
97
+static const TypeInfo imx6ul_gpt_info = {
178
+
98
+ .name = TYPE_IMX6UL_GPT,
179
+ count = st->regs[R_TIM_LOADVAL];
99
+ .parent = TYPE_IMX25_GPT,
180
+ ptimer_set_limit(st->ptimer, count, 1);
100
+ .instance_init = imx6ul_gpt_init,
181
+ ptimer_run(st->ptimer, 1);
182
+}
183
+
184
+static uint64_t
185
+timer_read(void *opaque, hwaddr offset, unsigned int size)
186
+{
187
+ MSSTimerState *t = opaque;
188
+ hwaddr addr;
189
+ struct Msf2Timer *st;
190
+ uint32_t ret = 0;
191
+ int timer = 0;
192
+ int isr;
193
+ int ier;
194
+
195
+ addr = offset >> 2;
196
+ /*
197
+ * Two independent timers share the same base address.
198
+ * Based on the address passed in, figure out which timer is being used.
199
+ */
200
+ if ((addr >= R_TIM1_MAX) && (addr < NUM_TIMERS * R_TIM1_MAX)) {
201
+ timer = 1;
202
+ addr -= R_TIM1_MAX;
203
+ }
204
+
205
+ st = &t->timers[timer];
206
+
207
+ switch (addr) {
208
+ case R_TIM_VAL:
209
+ ret = ptimer_get_count(st->ptimer);
210
+ break;
211
+
212
+ case R_TIM_MIS:
213
+ isr = !!(st->regs[R_TIM_RIS] & TIMER_RIS_ACK);
214
+ ier = !!(st->regs[R_TIM_CTRL] & TIMER_CTRL_INTR);
215
+ ret = ier & isr;
216
+ break;
217
+
218
+ default:
219
+ if (addr < R_TIM1_MAX) {
220
+ ret = st->regs[addr];
221
+ } else {
222
+ qemu_log_mask(LOG_GUEST_ERROR,
223
+ TYPE_MSS_TIMER": 64-bit mode not supported\n");
224
+ return ret;
225
+ }
226
+ break;
227
+ }
228
+
229
+ DB_PRINT("timer=%d 0x%" HWADDR_PRIx "=0x%" PRIx32, timer, offset,
230
+ ret);
231
+ return ret;
232
+}
233
+
234
+static void
235
+timer_write(void *opaque, hwaddr offset,
236
+ uint64_t val64, unsigned int size)
237
+{
238
+ MSSTimerState *t = opaque;
239
+ hwaddr addr;
240
+ struct Msf2Timer *st;
241
+ int timer = 0;
242
+ uint32_t value = val64;
243
+
244
+ addr = offset >> 2;
245
+ /*
246
+ * Two independent timers share the same base address.
247
+ * Based on the addr passed in, figure out which timer is being used.
248
+ */
249
+ if ((addr >= R_TIM1_MAX) && (addr < NUM_TIMERS * R_TIM1_MAX)) {
250
+ timer = 1;
251
+ addr -= R_TIM1_MAX;
252
+ }
253
+
254
+ st = &t->timers[timer];
255
+
256
+ DB_PRINT("addr=0x%" HWADDR_PRIx " val=0x%" PRIx32 " (timer=%d)", offset,
257
+ value, timer);
258
+
259
+ switch (addr) {
260
+ case R_TIM_CTRL:
261
+ st->regs[R_TIM_CTRL] = value;
262
+ timer_update(st);
263
+ break;
264
+
265
+ case R_TIM_RIS:
266
+ if (value & TIMER_RIS_ACK) {
267
+ st->regs[R_TIM_RIS] &= ~TIMER_RIS_ACK;
268
+ }
269
+ break;
270
+
271
+ case R_TIM_LOADVAL:
272
+ st->regs[R_TIM_LOADVAL] = value;
273
+ if (st->regs[R_TIM_CTRL] & TIMER_CTRL_ENBL) {
274
+ timer_update(st);
275
+ }
276
+ break;
277
+
278
+ case R_TIM_BGLOADVAL:
279
+ st->regs[R_TIM_BGLOADVAL] = value;
280
+ st->regs[R_TIM_LOADVAL] = value;
281
+ break;
282
+
283
+ case R_TIM_VAL:
284
+ case R_TIM_MIS:
285
+ break;
286
+
287
+ default:
288
+ if (addr < R_TIM1_MAX) {
289
+ st->regs[addr] = value;
290
+ } else {
291
+ qemu_log_mask(LOG_GUEST_ERROR,
292
+ TYPE_MSS_TIMER": 64-bit mode not supported\n");
293
+ return;
294
+ }
295
+ break;
296
+ }
297
+ timer_update_irq(st);
298
+}
299
+
300
+static const MemoryRegionOps timer_ops = {
301
+ .read = timer_read,
302
+ .write = timer_write,
303
+ .endianness = DEVICE_NATIVE_ENDIAN,
304
+ .valid = {
305
+ .min_access_size = 1,
306
+ .max_access_size = 4
307
+ }
308
+};
101
+};
309
+
102
+
310
+static void timer_hit(void *opaque)
103
static const TypeInfo imx7_gpt_info = {
311
+{
104
.name = TYPE_IMX7_GPT,
312
+ struct Msf2Timer *st = opaque;
105
.parent = TYPE_IMX25_GPT,
313
+
106
@@ -XXX,XX +XXX,XX @@ static void imx_gpt_register_types(void)
314
+ st->regs[R_TIM_RIS] |= TIMER_RIS_ACK;
107
type_register_static(&imx25_gpt_info);
315
+
108
type_register_static(&imx31_gpt_info);
316
+ if (!(st->regs[R_TIM_CTRL] & TIMER_CTRL_ONESHOT)) {
109
type_register_static(&imx6_gpt_info);
317
+ timer_update(st);
110
+ type_register_static(&imx6ul_gpt_info);
318
+ }
111
type_register_static(&imx7_gpt_info);
319
+ timer_update_irq(st);
112
}
320
+}
113
321
+
322
+static void mss_timer_init(Object *obj)
323
+{
324
+ MSSTimerState *t = MSS_TIMER(obj);
325
+ int i;
326
+
327
+ /* Init all the ptimers. */
328
+ for (i = 0; i < NUM_TIMERS; i++) {
329
+ struct Msf2Timer *st = &t->timers[i];
330
+
331
+ st->bh = qemu_bh_new(timer_hit, st);
332
+ st->ptimer = ptimer_init(st->bh, PTIMER_POLICY_DEFAULT);
333
+ ptimer_set_freq(st->ptimer, t->freq_hz);
334
+ sysbus_init_irq(SYS_BUS_DEVICE(obj), &st->irq);
335
+ }
336
+
337
+ memory_region_init_io(&t->mmio, OBJECT(t), &timer_ops, t, TYPE_MSS_TIMER,
338
+ NUM_TIMERS * R_TIM1_MAX * 4);
339
+ sysbus_init_mmio(SYS_BUS_DEVICE(obj), &t->mmio);
340
+}
341
+
342
+static const VMStateDescription vmstate_timers = {
343
+ .name = "mss-timer-block",
344
+ .version_id = 1,
345
+ .minimum_version_id = 1,
346
+ .fields = (VMStateField[]) {
347
+ VMSTATE_PTIMER(ptimer, struct Msf2Timer),
348
+ VMSTATE_UINT32_ARRAY(regs, struct Msf2Timer, R_TIM1_MAX),
349
+ VMSTATE_END_OF_LIST()
350
+ }
351
+};
352
+
353
+static const VMStateDescription vmstate_mss_timer = {
354
+ .name = TYPE_MSS_TIMER,
355
+ .version_id = 1,
356
+ .minimum_version_id = 1,
357
+ .fields = (VMStateField[]) {
358
+ VMSTATE_UINT32(freq_hz, MSSTimerState),
359
+ VMSTATE_STRUCT_ARRAY(timers, MSSTimerState, NUM_TIMERS, 0,
360
+ vmstate_timers, struct Msf2Timer),
361
+ VMSTATE_END_OF_LIST()
362
+ }
363
+};
364
+
365
+static Property mss_timer_properties[] = {
366
+ /* Libero GUI shows 100Mhz as default for clocks */
367
+ DEFINE_PROP_UINT32("clock-frequency", MSSTimerState, freq_hz,
368
+ 100 * 1000000),
369
+ DEFINE_PROP_END_OF_LIST(),
370
+};
371
+
372
+static void mss_timer_class_init(ObjectClass *klass, void *data)
373
+{
374
+ DeviceClass *dc = DEVICE_CLASS(klass);
375
+
376
+ dc->props = mss_timer_properties;
377
+ dc->vmsd = &vmstate_mss_timer;
378
+}
379
+
380
+static const TypeInfo mss_timer_info = {
381
+ .name = TYPE_MSS_TIMER,
382
+ .parent = TYPE_SYS_BUS_DEVICE,
383
+ .instance_size = sizeof(MSSTimerState),
384
+ .instance_init = mss_timer_init,
385
+ .class_init = mss_timer_class_init,
386
+};
387
+
388
+static void mss_timer_register_types(void)
389
+{
390
+ type_register_static(&mss_timer_info);
391
+}
392
+
393
+type_init(mss_timer_register_types)
394
--
114
--
395
2.7.4
115
2.25.1
396
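The i.MX6UL change above boils down to giving the GPT its own clock-selection table: the 3-bit CLKSRC field indexes a per-SoC table of clock identifiers, and selections the SoC doesn't support fall back to "no clock". A minimal standalone sketch of that lookup; the table entries mirror imx6ul_gpt_clocks[] from the patch, while the enum ordering and printing are illustrative.

/*
 * Standalone sketch (not QEMU code) of the GPT clock-mux lookup the
 * imx_gpt patch adds for the i.MX6UL.
 */
#include <stdio.h>

typedef enum {
    CLK_NONE,
    CLK_IPG,
    CLK_IPG_HIGH,
    CLK_EXT,
    CLK_32k,
} IMXClk;

static const char *clk_name[] = {
    [CLK_NONE]     = "no clock",
    [CLK_IPG]      = "ipg_clk",
    [CLK_IPG_HIGH] = "ipg_clk_highfreq",
    [CLK_EXT]      = "external clock",
    [CLK_32k]      = "ipg_clk_32k",
};

/* Same ordering as imx6ul_gpt_clocks[]: entries 101..111 are not defined
 * on this SoC, so they map to "no clock" rather than CLK_HIGH. */
static const IMXClk imx6ul_gpt_clocks[8] = {
    CLK_NONE, CLK_IPG, CLK_IPG_HIGH, CLK_EXT,
    CLK_32k,  CLK_NONE, CLK_NONE,    CLK_NONE,
};

int main(void)
{
    for (unsigned clksrc = 0; clksrc < 8; clksrc++) {
        printf("CLKSRC=%u%u%u -> %s\n",
               (clksrc >> 2) & 1, (clksrc >> 1) & 1, clksrc & 1,
               clk_name[imx6ul_gpt_clocks[clksrc]]);
    }
    return 0;
}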
397
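The mss-timer model above maps both 32-bit timers into one MMIO region and derives the timer index from the word offset, treating anything beyond the two banks as an unimplemented 64-bit mode register. A standalone sketch of just that address decode; R_TIM1_MAX and NUM_TIMERS are taken from the patch, the rest is illustrative.

/*
 * Standalone sketch (not QEMU code) of the mss-timer address decode:
 * two identical register banks of R_TIM1_MAX words sit back to back
 * in one MMIO region.
 */
#include <stdio.h>
#include <inttypes.h>

#define R_TIM1_MAX 6    /* words per timer, as in the patch */
#define NUM_TIMERS 2

static void decode(uint64_t offset)
{
    uint64_t addr = offset >> 2;    /* byte offset -> word index */
    int timer = 0;

    if (addr >= R_TIM1_MAX && addr < NUM_TIMERS * R_TIM1_MAX) {
        timer = 1;
        addr -= R_TIM1_MAX;
    }

    if (addr >= R_TIM1_MAX) {
        printf("offset 0x%02" PRIx64 ": 64-bit mode register (unimplemented)\n",
               offset);
    } else {
        printf("offset 0x%02" PRIx64 ": timer %d, register index %" PRIu64 "\n",
               offset, timer, addr);
    }
}

int main(void)
{
    /* Walk the whole region plus two extra words to show the fallback. */
    for (uint64_t offset = 0; offset < 4 * (NUM_TIMERS * R_TIM1_MAX + 2);
         offset += 4) {
        decode(offset);
    }
    return 0;
}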
1
If AIRCR.BFHFNMINS is clear, then although NonSecure HardFault
1
From: Jean-Christophe Dubois <jcd@tribudubois.net>
2
can still be pended via SHCSR.HARDFAULTPENDED, it mustn't actually
3
preempt execution. The simple way to achieve this is to clear the
4
enable bit for it, since the enable bit isn't guest visible.
5
2
3
IRQs were not associated with the various GPIO devices inside the i.MX7D.
4
This patch brings the i.MX7D on par with the i.MX6.
5
6
Signed-off-by: Jean-Christophe Dubois <jcd@tribudubois.net>
7
Message-id: 20221226101418.415170-1-jcd@tribudubois.net
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 1505240046-11454-15-git-send-email-peter.maydell@linaro.org
9
---
10
---
10
hw/intc/armv7m_nvic.c | 12 ++++++++++--
11
include/hw/arm/fsl-imx7.h | 15 +++++++++++++++
11
1 file changed, 10 insertions(+), 2 deletions(-)
12
hw/arm/fsl-imx7.c | 31 ++++++++++++++++++++++++++++++-
13
2 files changed, 45 insertions(+), 1 deletion(-)
12
14
13
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
15
diff --git a/include/hw/arm/fsl-imx7.h b/include/hw/arm/fsl-imx7.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/armv7m_nvic.c
17
--- a/include/hw/arm/fsl-imx7.h
16
+++ b/hw/intc/armv7m_nvic.c
18
+++ b/include/hw/arm/fsl-imx7.h
17
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
19
@@ -XXX,XX +XXX,XX @@ enum FslIMX7IRQs {
18
(R_V7M_AIRCR_SYSRESETREQS_MASK |
20
FSL_IMX7_GPT3_IRQ = 53,
19
R_V7M_AIRCR_BFHFNMINS_MASK |
21
FSL_IMX7_GPT4_IRQ = 52,
20
R_V7M_AIRCR_PRIS_MASK);
22
21
- /* BFHFNMINS changes the priority of Secure HardFault */
23
+ FSL_IMX7_GPIO1_LOW_IRQ = 64,
22
+ /* BFHFNMINS changes the priority of Secure HardFault, and
24
+ FSL_IMX7_GPIO1_HIGH_IRQ = 65,
23
+ * allows a pending Non-secure HardFault to preempt (which
25
+ FSL_IMX7_GPIO2_LOW_IRQ = 66,
24
+ * we implement by marking it enabled).
26
+ FSL_IMX7_GPIO2_HIGH_IRQ = 67,
25
+ */
27
+ FSL_IMX7_GPIO3_LOW_IRQ = 68,
26
if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
28
+ FSL_IMX7_GPIO3_HIGH_IRQ = 69,
27
s->sec_vectors[ARMV7M_EXCP_HARD].prio = -3;
29
+ FSL_IMX7_GPIO4_LOW_IRQ = 70,
28
+ s->vectors[ARMV7M_EXCP_HARD].enabled = 1;
30
+ FSL_IMX7_GPIO4_HIGH_IRQ = 71,
29
} else {
31
+ FSL_IMX7_GPIO5_LOW_IRQ = 72,
30
s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
32
+ FSL_IMX7_GPIO5_HIGH_IRQ = 73,
31
+ s->vectors[ARMV7M_EXCP_HARD].enabled = 0;
33
+ FSL_IMX7_GPIO6_LOW_IRQ = 74,
32
}
34
+ FSL_IMX7_GPIO6_HIGH_IRQ = 75,
33
}
35
+ FSL_IMX7_GPIO7_LOW_IRQ = 76,
34
nvic_irq_update(s);
36
+ FSL_IMX7_GPIO7_HIGH_IRQ = 77,
35
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
37
+
36
NVICState *s = NVIC(dev);
38
FSL_IMX7_WDOG1_IRQ = 78,
37
39
FSL_IMX7_WDOG2_IRQ = 79,
38
s->vectors[ARMV7M_EXCP_NMI].enabled = 1;
40
FSL_IMX7_WDOG3_IRQ = 10,
39
- s->vectors[ARMV7M_EXCP_HARD].enabled = 1;
41
diff --git a/hw/arm/fsl-imx7.c b/hw/arm/fsl-imx7.c
40
/* MEM, BUS, and USAGE are enabled through
42
index XXXXXXX..XXXXXXX 100644
41
* the System Handler Control register
43
--- a/hw/arm/fsl-imx7.c
42
*/
44
+++ b/hw/arm/fsl-imx7.c
43
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
45
@@ -XXX,XX +XXX,XX @@ static void fsl_imx7_realize(DeviceState *dev, Error **errp)
44
46
FSL_IMX7_GPIO7_ADDR,
45
/* AIRCR.BFHFNMINS resets to 0 so Secure HF is priority -1 (R_CMTC) */
47
};
46
s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
48
47
+ /* If AIRCR.BFHFNMINS is 0 then NS HF is (effectively) disabled */
49
+ static const int FSL_IMX7_GPIOn_LOW_IRQ[FSL_IMX7_NUM_GPIOS] = {
48
+ s->vectors[ARMV7M_EXCP_HARD].enabled = 0;
50
+ FSL_IMX7_GPIO1_LOW_IRQ,
49
+ } else {
51
+ FSL_IMX7_GPIO2_LOW_IRQ,
50
+ s->vectors[ARMV7M_EXCP_HARD].enabled = 1;
52
+ FSL_IMX7_GPIO3_LOW_IRQ,
53
+ FSL_IMX7_GPIO4_LOW_IRQ,
54
+ FSL_IMX7_GPIO5_LOW_IRQ,
55
+ FSL_IMX7_GPIO6_LOW_IRQ,
56
+ FSL_IMX7_GPIO7_LOW_IRQ,
57
+ };
58
+
59
+ static const int FSL_IMX7_GPIOn_HIGH_IRQ[FSL_IMX7_NUM_GPIOS] = {
60
+ FSL_IMX7_GPIO1_HIGH_IRQ,
61
+ FSL_IMX7_GPIO2_HIGH_IRQ,
62
+ FSL_IMX7_GPIO3_HIGH_IRQ,
63
+ FSL_IMX7_GPIO4_HIGH_IRQ,
64
+ FSL_IMX7_GPIO5_HIGH_IRQ,
65
+ FSL_IMX7_GPIO6_HIGH_IRQ,
66
+ FSL_IMX7_GPIO7_HIGH_IRQ,
67
+ };
68
+
69
sysbus_realize(SYS_BUS_DEVICE(&s->gpio[i]), &error_abort);
70
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0, FSL_IMX7_GPIOn_ADDR[i]);
71
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0,
72
+ FSL_IMX7_GPIOn_ADDR[i]);
73
+
74
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 0,
75
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
76
+ FSL_IMX7_GPIOn_LOW_IRQ[i]));
77
+
78
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 1,
79
+ qdev_get_gpio_in(DEVICE(&s->a7mpcore),
80
+ FSL_IMX7_GPIOn_HIGH_IRQ[i]));
51
}
81
}
52
82
53
/* Strictly speaking the reset handler should be enabled.
83
/*
54
--
84
--
55
2.7.4
85
2.25.1
56
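A condensed standalone sketch of the trick the NVIC patch above relies on: because the HardFault enable bit is not guest visible, BFHFNMINS can be implemented by toggling it together with the Secure HardFault priority, so a pended Non-secure HardFault simply never becomes runnable. The structures here are simplified stand-ins for the NVIC vector state, not the real NVICState layout.

/*
 * Standalone sketch (not QEMU code) of the BFHFNMINS handling: the
 * non-secure HardFault vector is "disabled" behind the guest's back
 * whenever BFHFNMINS is 0, so a pended NS HardFault cannot preempt.
 */
#include <stdio.h>
#include <stdbool.h>

struct vec { int prio; bool enabled; bool pending; };

static struct vec sec_hardfault;   /* banked Secure HardFault */
static struct vec ns_hardfault;    /* Non-secure HardFault */

static void apply_bfhfnmins(bool bfhfnmins)
{
    if (bfhfnmins) {
        sec_hardfault.prio = -3;
        ns_hardfault.enabled = true;   /* NS HF may now preempt */
    } else {
        sec_hardfault.prio = -1;
        ns_hardfault.enabled = false;  /* pended NS HF stays dormant */
    }
}

int main(void)
{
    ns_hardfault.pending = true;

    for (int b = 0; b <= 1; b++) {
        apply_bfhfnmins(b);
        printf("BFHFNMINS=%d: Secure HF prio %d, pended NS HF %s preempt\n",
               b, sec_hardfault.prio,
               ns_hardfault.enabled && ns_hardfault.pending ? "can" : "cannot");
    }
    return 0;
}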
57
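The GPIO wiring in the i.MX7D patch above follows the usual per-instance lookup-table pattern: each GPIO block has a low and a high combined interrupt, and the realize loop connects GPIO outputs 0 and 1 to the numbers taken from two tables. A standalone sketch of the lookup; the IRQ numbers are the ones the patch adds to fsl-imx7.h, and printing stands in for sysbus_connect_irq().

/*
 * Standalone sketch (not QEMU code) of the i.MX7D GPIO interrupt
 * wiring: each GPIO instance gets its "low" and "high" combined IRQ
 * from a pair of lookup tables indexed by instance number.
 */
#include <stdio.h>

#define FSL_IMX7_NUM_GPIOS 7

static const int gpio_low_irq[FSL_IMX7_NUM_GPIOS]  = { 64, 66, 68, 70, 72, 74, 76 };
static const int gpio_high_irq[FSL_IMX7_NUM_GPIOS] = { 65, 67, 69, 71, 73, 75, 77 };

int main(void)
{
    for (int i = 0; i < FSL_IMX7_NUM_GPIOS; i++) {
        /* In the real code these become two sysbus_connect_irq() calls
         * routing GPIO outputs 0 and 1 into the A7 MPCore interrupt
         * controller. */
        printf("GPIO%d: output 0 -> IRQ %d, output 1 -> IRQ %d\n",
               i + 1, gpio_low_irq[i], gpio_high_irq[i]);
    }
    return 0;
}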
1
Update the nvic_recompute_state() code to handle the security
1
From: Stephen Longfield <slongfield@google.com>
2
extension and its associated banked registers.
3
2
4
Code that uses the resulting cached state (i.e. the irq
3
Size is used at lines 1088/1188 for the loop, which reads the last 4
5
acknowledge and complete code) will be updated in a later
4
bytes from crc_ptr, so it does need to be increased; however, it
6
commit.
5
shouldn't be increased before the buffer is passed to CRC computation,
6
or the crc32 function will access uninitialized memory.
7
7
8
This was pointed out to me by clg@kaod.org during the code review of
9
a similar patch to hw/net/ftgmac100.c
10
11
Change-Id: Ib0464303b191af1e28abeb2f5105eb25aadb5e9b
12
Signed-off-by: Stephen Longfield <slongfield@google.com>
13
Reviewed-by: Patrick Venture <venture@google.com>
14
Message-id: 20221221183202.3788132-1-slongfield@google.com
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 1505240046-11454-9-git-send-email-peter.maydell@linaro.org
11
---
17
---
12
hw/intc/armv7m_nvic.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++--
18
hw/net/imx_fec.c | 8 ++++----
13
hw/intc/trace-events | 1 +
19
1 file changed, 4 insertions(+), 4 deletions(-)
14
2 files changed, 147 insertions(+), 5 deletions(-)
15
20
16
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
21
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
17
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/armv7m_nvic.c
23
--- a/hw/net/imx_fec.c
19
+++ b/hw/intc/armv7m_nvic.c
24
+++ b/hw/net/imx_fec.c
20
@@ -XXX,XX +XXX,XX @@
25
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_fec_receive(NetClientState *nc, const uint8_t *buf,
21
* (higher than the highest possible priority value)
26
return 0;
22
*/
23
#define NVIC_NOEXC_PRIO 0x100
24
+/* Maximum priority of non-secure exceptions when AIRCR.PRIS is set */
25
+#define NVIC_NS_PRIO_LIMIT 0x80
26
27
static const uint8_t nvic_id[] = {
28
0x00, 0xb0, 0x1b, 0x00, 0x0d, 0xe0, 0x05, 0xb1
29
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
30
return false;
31
}
32
33
+static bool exc_is_banked(int exc)
34
+{
35
+ /* Return true if this is one of the limited set of exceptions which
36
+ * are banked (and thus have state in sec_vectors[])
37
+ */
38
+ return exc == ARMV7M_EXCP_HARD ||
39
+ exc == ARMV7M_EXCP_MEM ||
40
+ exc == ARMV7M_EXCP_USAGE ||
41
+ exc == ARMV7M_EXCP_SVC ||
42
+ exc == ARMV7M_EXCP_PENDSV ||
43
+ exc == ARMV7M_EXCP_SYSTICK;
44
+}
45
+
46
/* Return a mask word which clears the subpriority bits from
47
* a priority value for an M-profile exception, leaving only
48
* the group priority.
49
*/
50
-static inline uint32_t nvic_gprio_mask(NVICState *s)
51
+static inline uint32_t nvic_gprio_mask(NVICState *s, bool secure)
52
+{
53
+ return ~0U << (s->prigroup[secure] + 1);
54
+}
55
+
56
+static bool exc_targets_secure(NVICState *s, int exc)
57
+{
58
+ /* Return true if this non-banked exception targets Secure state. */
59
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
60
+ return false;
61
+ }
62
+
63
+ if (exc >= NVIC_FIRST_IRQ) {
64
+ return !s->itns[exc];
65
+ }
66
+
67
+ /* Function shouldn't be called for banked exceptions. */
68
+ assert(!exc_is_banked(exc));
69
+
70
+ switch (exc) {
71
+ case ARMV7M_EXCP_NMI:
72
+ case ARMV7M_EXCP_BUS:
73
+ return !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
74
+ case ARMV7M_EXCP_SECURE:
75
+ return true;
76
+ case ARMV7M_EXCP_DEBUG:
77
+ /* TODO: controlled by DEMCR.SDME, which we don't yet implement */
78
+ return false;
79
+ default:
80
+ /* reset, and reserved (unused) low exception numbers.
81
+ * We'll get called by code that loops through all the exception
82
+ * numbers, but it doesn't matter what we return here as these
83
+ * non-existent exceptions will never be pended or active.
84
+ */
85
+ return true;
86
+ }
87
+}
88
+
89
+static int exc_group_prio(NVICState *s, int rawprio, bool targets_secure)
90
+{
91
+ /* Return the group priority for this exception, given its raw
92
+ * (group-and-subgroup) priority value and whether it is targeting
93
+ * secure state or not.
94
+ */
95
+ if (rawprio < 0) {
96
+ return rawprio;
97
+ }
98
+ rawprio &= nvic_gprio_mask(s, targets_secure);
99
+ /* AIRCR.PRIS causes us to squash all NS priorities into the
100
+ * lower half of the total range
101
+ */
102
+ if (!targets_secure &&
103
+ (s->cpu->env.v7m.aircr & R_V7M_AIRCR_PRIS_MASK)) {
104
+ rawprio = (rawprio >> 1) + NVIC_NS_PRIO_LIMIT;
105
+ }
106
+ return rawprio;
107
+}
108
+
109
+/* Recompute vectpending and exception_prio for a CPU which implements
110
+ * the Security extension
111
+ */
112
+static void nvic_recompute_state_secure(NVICState *s)
113
{
114
- return ~0U << (s->prigroup[M_REG_NS] + 1);
115
+ int i, bank;
116
+ int pend_prio = NVIC_NOEXC_PRIO;
117
+ int active_prio = NVIC_NOEXC_PRIO;
118
+ int pend_irq = 0;
119
+ bool pending_is_s_banked = false;
120
+
121
+ /* R_CQRV: precedence is by:
122
+ * - lowest group priority; if both the same then
123
+ * - lowest subpriority; if both the same then
124
+ * - lowest exception number; if both the same (ie banked) then
125
+ * - secure exception takes precedence
126
+ * Compare pseudocode RawExecutionPriority.
127
+ * Annoyingly, now we have two prigroup values (for S and NS)
128
+ * we can't do the loop comparison on raw priority values.
129
+ */
130
+ for (i = 1; i < s->num_irq; i++) {
131
+ for (bank = M_REG_S; bank >= M_REG_NS; bank--) {
132
+ VecInfo *vec;
133
+ int prio;
134
+ bool targets_secure;
135
+
136
+ if (bank == M_REG_S) {
137
+ if (!exc_is_banked(i)) {
138
+ continue;
139
+ }
140
+ vec = &s->sec_vectors[i];
141
+ targets_secure = true;
142
+ } else {
143
+ vec = &s->vectors[i];
144
+ targets_secure = !exc_is_banked(i) && exc_targets_secure(s, i);
145
+ }
146
+
147
+ prio = exc_group_prio(s, vec->prio, targets_secure);
148
+ if (vec->enabled && vec->pending && prio < pend_prio) {
149
+ pend_prio = prio;
150
+ pend_irq = i;
151
+ pending_is_s_banked = (bank == M_REG_S);
152
+ }
153
+ if (vec->active && prio < active_prio) {
154
+ active_prio = prio;
155
+ }
156
+ }
157
+ }
158
+
159
+ s->vectpending_is_s_banked = pending_is_s_banked;
160
+ s->vectpending = pend_irq;
161
+ s->vectpending_prio = pend_prio;
162
+ s->exception_prio = active_prio;
163
+
164
+ trace_nvic_recompute_state_secure(s->vectpending,
165
+ s->vectpending_is_s_banked,
166
+ s->vectpending_prio,
167
+ s->exception_prio);
168
}
169
170
/* Recompute vectpending and exception_prio */
171
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
172
int active_prio = NVIC_NOEXC_PRIO;
173
int pend_irq = 0;
174
175
+ /* In theory we could write one function that handled both
176
+ * the "security extension present" and "not present"; however
177
+ * the security related changes significantly complicate the
178
+ * recomputation just by themselves and mixing both cases together
179
+ * would be even worse, so we retain a separate non-secure-only
180
+ * version for CPUs which don't implement the security extension.
181
+ */
182
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
183
+ nvic_recompute_state_secure(s);
184
+ return;
185
+ }
186
+
187
for (i = 1; i < s->num_irq; i++) {
188
VecInfo *vec = &s->vectors[i];
189
190
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
191
}
27
}
192
28
193
if (active_prio > 0) {
29
- /* 4 bytes for the CRC. */
194
- active_prio &= nvic_gprio_mask(s);
30
- size += 4;
195
+ active_prio &= nvic_gprio_mask(s, false);
31
crc = cpu_to_be32(crc32(~0, buf, size));
32
+ /* Increase size by 4, loop below reads the last 4 bytes from crc_ptr. */
33
+ size += 4;
34
crc_ptr = (uint8_t *) &crc;
35
36
/* Huge frames are truncated. */
37
@@ -XXX,XX +XXX,XX @@ static ssize_t imx_enet_receive(NetClientState *nc, const uint8_t *buf,
38
return 0;
196
}
39
}
197
40
198
if (pend_prio > 0) {
41
- /* 4 bytes for the CRC. */
199
- pend_prio &= nvic_gprio_mask(s);
42
- size += 4;
200
+ pend_prio &= nvic_gprio_mask(s, false);
43
crc = cpu_to_be32(crc32(~0, buf, size));
201
}
44
+ /* Increase size by 4, loop below reads the last 4 bytes from crc_ptr. */
202
45
+ size += 4;
203
s->vectpending = pend_irq;
46
crc_ptr = (uint8_t *) &crc;
204
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
47
205
} else if (env->v7m.primask[env->v7m.secure]) {
48
if (shift16) {
206
running = 0;
207
} else if (env->v7m.basepri[env->v7m.secure] > 0) {
208
- running = env->v7m.basepri[env->v7m.secure] & nvic_gprio_mask(s);
209
+ running = env->v7m.basepri[env->v7m.secure] &
210
+ nvic_gprio_mask(s, env->v7m.secure);
211
} else {
212
running = NVIC_NOEXC_PRIO; /* lower than any possible priority */
213
}
214
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
215
index XXXXXXX..XXXXXXX 100644
216
--- a/hw/intc/trace-events
217
+++ b/hw/intc/trace-events
218
@@ -XXX,XX +XXX,XX @@ gicv3_redist_send_sgi(uint32_t cpu, int irq) "GICv3 redistributor 0x%x pending S
219
220
# hw/intc/armv7m_nvic.c
221
nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
222
+nvic_recompute_state_secure(int vectpending, bool vectpending_is_s_banked, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d is_s_banked %d vectpending_prio %d exception_prio %d"
223
nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
224
nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
225
nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
226
--
49
--
227
2.7.4
50
2.25.1
228
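The core of the secure recompute path above is the priority arithmetic in exc_group_prio(): mask off the subpriority bits with the banked PRIGROUP, then, if AIRCR.PRIS is set and the exception targets Non-secure state, squash the result into the lower half of the range. A standalone sketch of that arithmetic; the mask and the squash formula are taken from the patch, the example priorities are made up.

/*
 * Standalone sketch (not QEMU code) of exc_group_prio(): raw priority
 * -> group priority, with AIRCR.PRIS squashing Non-secure values into
 * the lower half of the range.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NVIC_NS_PRIO_LIMIT 0x80   /* as defined in the patch */

static uint32_t gprio_mask(int prigroup)
{
    /* Clear the subpriority bits, keeping only the group priority. */
    return ~0U << (prigroup + 1);
}

static int group_prio(int rawprio, int prigroup, bool targets_secure, bool pris)
{
    if (rawprio < 0) {
        return rawprio;           /* fixed negative priorities pass through */
    }
    rawprio &= gprio_mask(prigroup);
    if (!targets_secure && pris) {
        rawprio = (rawprio >> 1) + NVIC_NS_PRIO_LIMIT;
    }
    return rawprio;
}

int main(void)
{
    /* With PRIGROUP=4, raw priorities 0x20 and 0x30 share group priority
     * 0x20; with PRIS set, a Non-secure group priority 0x20 lands at
     * 0x90, below (i.e. less urgent than) every Secure priority < 0x80. */
    printf("S  raw 0x20 -> 0x%02x\n", group_prio(0x20, 4, true,  true));
    printf("NS raw 0x20 -> 0x%02x\n", group_prio(0x20, 4, false, true));
    printf("NS raw 0x30 -> 0x%02x\n", group_prio(0x30, 4, false, true));
    return 0;
}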
229
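The imx_fec fix above is easiest to see in isolation: the CRC has to be computed over only the bytes that were actually received, and size is grown by 4 afterwards so that the copy loop can append the frame check sequence. A standalone sketch using zlib's crc32(); htonl() stands in for cpu_to_be32(), and the frame contents and sizes are made up.

/*
 * Standalone sketch (not QEMU code) of the imx_fec CRC ordering:
 * compute the CRC over the received bytes first, then grow "size" by
 * 4 for the appended FCS.  Doing size += 4 before crc32() would make
 * the CRC read 4 uninitialized bytes past the end of the buffer.
 * Build with: cc sketch.c -lz
 */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl(), standing in for cpu_to_be32() */
#include <zlib.h>

int main(void)
{
    uint8_t frame[64] = "an illustrative ethernet frame payload";
    size_t size = 42;                      /* bytes actually received */
    uint32_t crc;
    uint8_t *crc_ptr;

    /* Right order: CRC over the received bytes only... */
    crc = htonl(crc32(~0u, frame, size));
    /* ...then account for the 4 FCS bytes that will be appended. */
    size += 4;
    crc_ptr = (uint8_t *)&crc;

    printf("frame+FCS length %zu, FCS %02x %02x %02x %02x\n",
           size, crc_ptr[0], crc_ptr[1], crc_ptr[2], crc_ptr[3]);
    return 0;
}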