The big thing here is RTH's patchset implementing ARMv8.1-VHE
emulation; otherwise just a handful of smaller fixes.

thanks
-- PMM

The following changes since commit 346ed3151f1c43e72c40cb55b392a1d4cface62c:

  Merge remote-tracking branch 'remotes/awilliam/tags/vfio-update-20200206.0' into staging (2020-02-07 11:52:15 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20200207

for you to fetch changes up to af6c91b490e9b1bce7a168f8a9c848f3e60f616e:

  stellaris: delay timer_new to avoid memleaks (2020-02-07 14:04:28 +0000)

----------------------------------------------------------------
target-arm queue:
 * monitor: fix query-cpu-model-expansion crash when using machine type none
 * Support emulation of the ARMv8.1-VHE architecture feature
 * bcm2835_dma: fix bugs in TD mode handling
 * docs/arm-cpu-features: Make kvm-no-adjvtime comment clearer
 * stellaris, stm32f2xx_timer, armv7m_systick: fix minor memory leaks

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: check TGE and E2H flags for EL0 pauth traps

Liang Yan (1):
      target/arm/monitor: query-cpu-model-expansion crashed qemu when using machine type none

Pan Nengyuan (3):
      armv7m_systick: delay timer_new to avoid memleaks
      stm32f2xx_timer: delay timer_new to avoid memleaks
      stellaris: delay timer_new to avoid memleaks

Philippe Mathieu-Daudé (1):
      docs/arm-cpu-features: Make kvm-no-adjvtime comment clearer

Rene Stange (2):
      bcm2835_dma: Fix the ylen loop in TD mode
      bcm2835_dma: Re-initialize xlen in TD mode

Richard Henderson (40):
      target/arm: Define isar_feature_aa64_vh
      target/arm: Enable HCR_E2H for VHE
      target/arm: Add CONTEXTIDR_EL2
      target/arm: Add TTBR1_EL2
      target/arm: Update CNTVCT_EL0 for VHE
      target/arm: Split out vae1_tlbmask
      target/arm: Split out alle1_tlbmask
      target/arm: Simplify tlb_force_broadcast alternatives
      target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*
      target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2
      target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*
      target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]
      target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3
      target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2
      target/arm: Recover 4 bits from TBFLAGs
      target/arm: Expand TBFLAG_ANY.MMUIDX to 4 bits
      target/arm: Rearrange ARMMMUIdxBit
      target/arm: Tidy ARMMMUIdx m-profile definitions
      target/arm: Reorganize ARMMMUIdx
      target/arm: Add regime_has_2_ranges
      target/arm: Update arm_mmu_idx for VHE
      target/arm: Update arm_sctlr for VHE
      target/arm: Update aa64_zva_access for EL2
      target/arm: Update ctr_el0_access for EL2
      target/arm: Add the hypervisor virtual counter
      target/arm: Update timer access for VHE
      target/arm: Update define_one_arm_cp_reg_with_opaque for VHE
      target/arm: Add VHE system register redirection and aliasing
      target/arm: Add VHE timer register redirection and aliasing
      target/arm: Flush tlb for ASID changes in EL2&0 translation regime
      target/arm: Flush tlbs for E2&0 translation regime
      target/arm: Update arm_phys_excp_target_el for TGE
      target/arm: Update {fp,sve}_exception_el for VHE
      target/arm: Update get_a64_user_mem_index for VHE
      target/arm: Update arm_cpu_do_interrupt_aarch64 for VHE
      target/arm: Enable ARMv8.1-VHE in -cpu max
      target/arm: Move arm_excp_unmasked to cpu.c
      target/arm: Pass more cpu state to arm_excp_unmasked
      target/arm: Use bool for unmasked in arm_excp_unmasked
      target/arm: Raise only one interrupt in arm_cpu_exec_interrupt

 target/arm/cpu-param.h     |    2 +-
 target/arm/cpu-qom.h       |    1 +
 target/arm/cpu.h           |  423 ++++++----------
 target/arm/internals.h     |   73 ++-
 target/arm/translate.h     |    4 +-
 hw/arm/stellaris.c         |    7 +-
 hw/dma/bcm2835_dma.c       |    8 +-
 hw/timer/armv7m_systick.c  |    6 +
 hw/timer/stm32f2xx_timer.c |    5 +
 target/arm/cpu.c           |  162 +++++-
 target/arm/cpu64.c         |    1 +
 target/arm/debug_helper.c  |   50 +-
 target/arm/helper-a64.c    |    2 +-
 target/arm/helper.c        | 1211 ++++++++++++++++++++++++++++++++------------
 target/arm/monitor.c       |   15 +-
 target/arm/pauth_helper.c  |   14 +-
 target/arm/translate-a64.c |   47 +-
 target/arm/translate.c     |   74 +--
 docs/arm-cpu-features.rst  |    2 +-
 19 files changed, 1415 insertions(+), 692 deletions(-)
From: Liang Yan <lyan@suse.com>

Commit e19afd566781 mentioned that target-arm only supports queryable
cpu models 'max', 'host', and the current type when KVM is in use.
The logic works well until the machine type is "none".

For machine type "none", cpu_type will be NULL if no cpu option was
set on the command line, and strlen(cpu_type) will crash the process.
Add a NULL check before it.

This won't affect i386 and s390x since they do not use current_cpu.

Signed-off-by: Liang Yan <lyan@suse.com>
Message-id: 20200203134251.12986-1-lyan@suse.com
Reviewed-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/monitor.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/arm/monitor.c b/target/arm/monitor.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/monitor.c
+++ b/target/arm/monitor.c
@@ -XXX,XX +XXX,XX @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
     }
 
     if (kvm_enabled()) {
-        const char *cpu_type = current_machine->cpu_type;
-        int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
         bool supported = false;
 
         if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
             /* These are kvmarm's recommended cpu types */
             supported = true;
-        } else if (strlen(model->name) == len &&
-                   !strncmp(model->name, cpu_type, len)) {
-            /* KVM is enabled and we're using this type, so it works. */
-            supported = true;
+        } else if (current_machine->cpu_type) {
+            const char *cpu_type = current_machine->cpu_type;
+            int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
+
+            if (strlen(model->name) == len &&
+                !strncmp(model->name, cpu_type, len)) {
+                /* KVM is enabled and we're using this type, so it works. */
+                supported = true;
+            }
         }
         if (!supported) {
             error_setg(errp, "We cannot guarantee the CPU type '%s' works "
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
 }
 
+static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
+}
+
 static inline bool isar_feature_aa64_lor(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, LO) != 0;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 7 -------
 target/arm/helper.c | 6 +++++-
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HCR_ATA       (1ULL << 56)
 #define HCR_DCT       (1ULL << 57)
 
-/*
- * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
- * HCR_MASK and then clear it again if the feature bit is not set in
- * hcr_write().
- */
-#define HCR_MASK ((1ULL << 34) - 1)
-
 #define SCR_NS        (1U << 0)
 #define SCR_IRQ       (1U << 1)
 #define SCR_FIQ       (1U << 2)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     ARMCPU *cpu = env_archcpu(env);
-    uint64_t valid_mask = HCR_MASK;
+    /* Begin with bits defined in base ARMv8.0.  */
+    uint64_t valid_mask = MAKE_64BIT_MASK(0, 34);
 
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         valid_mask &= ~HCR_HCD;
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
          */
         valid_mask &= ~HCR_TSC;
     }
+    if (cpu_isar_feature(aa64_vh, cpu)) {
+        valid_mask |= HCR_E2H;
+    }
     if (cpu_isar_feature(aa64_lor, cpu)) {
         valid_mask |= HCR_TLOR;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Not all of the breakpoint types are supported, but those that
only examine contextidr are extended to support the new register.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/debug_helper.c | 50 +++++++++++++++++++++++++++++----------
 target/arm/helper.c       | 12 ++++++++++
 2 files changed, 50 insertions(+), 12 deletions(-)

diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
     int ctx_cmps = extract32(cpu->dbgdidr, 20, 4);
     int bt;
     uint32_t contextidr;
+    uint64_t hcr_el2;
 
     /*
      * Links to unimplemented or non-context aware breakpoints are
@@ -XXX,XX +XXX,XX @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
     }
 
     bt = extract64(bcr, 20, 4);
-
-    /*
-     * We match the whole register even if this is AArch32 using the
-     * short descriptor format (in which case it holds both PROCID and ASID),
-     * since we don't implement the optional v7 context ID masking.
-     */
-    contextidr = extract64(env->cp15.contextidr_el[1], 0, 32);
+    hcr_el2 = arm_hcr_el2_eff(env);
 
     switch (bt) {
     case 3: /* linked context ID match */
-        if (arm_current_el(env) > 1) {
-            /* Context matches never fire in EL2 or (AArch64) EL3 */
+        switch (arm_current_el(env)) {
+        default:
+            /* Context matches never fire in AArch64 EL3 */
             return false;
+        case 2:
+            if (!(hcr_el2 & HCR_E2H)) {
+                /* Context matches never fire in EL2 without E2H enabled. */
+                return false;
+            }
+            contextidr = env->cp15.contextidr_el[2];
+            break;
+        case 1:
+            contextidr = env->cp15.contextidr_el[1];
+            break;
+        case 0:
+            if ((hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+                contextidr = env->cp15.contextidr_el[2];
+            } else {
+                contextidr = env->cp15.contextidr_el[1];
+            }
+            break;
         }
-        return (contextidr == extract64(env->cp15.dbgbvr[lbn], 0, 32));
-    case 5: /* linked address mismatch (reserved in AArch64) */
+        break;
+
+    case 7: /* linked contextidr_el1 match */
+        contextidr = env->cp15.contextidr_el[1];
+        break;
+    case 13: /* linked contextidr_el2 match */
+        contextidr = env->cp15.contextidr_el[2];
+        break;
+
     case 9: /* linked VMID match (reserved if no EL2) */
     case 11: /* linked context ID and VMID match (reserved if no EL2) */
+    case 15: /* linked full context ID match */
     default:
         /*
          * Links to Unlinked context breakpoints must generate no
@@ -XXX,XX +XXX,XX @@ static bool linked_bp_matches(ARMCPU *cpu, int lbn)
         return false;
     }
 
-    return false;
+    /*
+     * We match the whole register even if this is AArch32 using the
+     * short descriptor format (in which case it holds both PROCID and ASID),
+     * since we don't implement the optional v7 context ID masking.
+     */
+    return contextidr == (uint32_t)env->cp15.dbgbvr[lbn];
 }
 
 static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
     REGINFO_SENTINEL
 };
 
+static const ARMCPRegInfo vhe_reginfo[] = {
+    { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
+      .access = PL2_RW,
+      .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
+    REGINFO_SENTINEL
+};
+
 void register_cp_regs_for_features(ARMCPU *cpu)
 {
     /* Register all the coprocessor registers based on feature bits */
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, lor_reginfo);
     }
 
+    if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
+        define_arm_cp_regs(cpu, vhe_reginfo);
+    }
+
     if (cpu_isar_feature(aa64_sve, cpu)) {
         define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
         if (arm_feature(env, ARM_FEATURE_EL2)) {
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

At the same time, add writefn to TTBR0_EL2 and TCR_EL2.
A later patch will update any ASID therein.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     raw_write(env, ri, value);
 }
 
+static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    /* TODO: There are ASID fields in here with HCR_EL2.E2H */
+    raw_write(env, ri, value);
+}
+
 static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                         uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.tpidr_el[2]) },
     { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .resetvalue = 0,
+      .access = PL2_RW, .resetvalue = 0, .writefn = vmsa_tcr_ttbr_el2_write,
       .fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[2]) },
     { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
       .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_ALIAS,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
       .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
       .access = PL2_RW,
       .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
+    { .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
+      .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
+      .fieldoffset = offsetof(CPUARMState, cp15.ttbr1_el[2]) },
     REGINFO_SENTINEL
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

The virtual offset may be 0 depending on EL, E2H and TGE.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
     return gt_get_countervalue(env);
 }
 
+static uint64_t gt_virt_cnt_offset(CPUARMState *env)
+{
+    uint64_t hcr;
+
+    switch (arm_current_el(env)) {
+    case 2:
+        hcr = arm_hcr_el2_eff(env);
+        if (hcr & HCR_E2H) {
+            return 0;
+        }
+        break;
+    case 0:
+        hcr = arm_hcr_el2_eff(env);
+        if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+            return 0;
+        }
+        break;
+    }
+
+    return env->cp15.cntvoff_el2;
+}
+
 static uint64_t gt_virt_cnt_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
-    return gt_get_countervalue(env) - env->cp15.cntvoff_el2;
+    return gt_get_countervalue(env) - gt_virt_cnt_offset(env);
 }
 
 static void gt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void gt_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static uint64_t gt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri,
                              int timeridx)
 {
-    uint64_t offset = timeridx == GTIMER_VIRT ? env->cp15.cntvoff_el2 : 0;
+    uint64_t offset = 0;
+
+    switch (timeridx) {
+    case GTIMER_VIRT:
+        offset = gt_virt_cnt_offset(env);
+        break;
+    }
 
     return (uint32_t)(env->cp15.c14_timer[timeridx].cval -
                       (gt_get_countervalue(env) - offset));
@@ -XXX,XX +XXX,XX @@ static void gt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           int timeridx,
                           uint64_t value)
 {
-    uint64_t offset = timeridx == GTIMER_VIRT ? env->cp15.cntvoff_el2 : 0;
+    uint64_t offset = 0;
+
+    switch (timeridx) {
+    case GTIMER_VIRT:
+        offset = gt_virt_cnt_offset(env);
+        break;
+    }
 
     trace_arm_gt_tval_write(timeridx, value);
     env->cp15.c14_timer[timeridx].cval = gt_get_countervalue(env) - offset +
--
2.20.1

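For reference, the decision implemented by gt_virt_cnt_offset() above can be modelled as a standalone function. This is an illustrative sketch, not QEMU code: the parameters el, e2h, tge and cntvoff stand in for arm_current_el(), HCR_EL2.E2H, HCR_EL2.TGE and CNTVOFF_EL2, which in the real patch come from CPU state.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the gt_virt_cnt_offset() decision: the CNTVOFF_EL2 offset
 * applies to the virtual counter unless the access comes from a regime
 * where ARMv8.1-VHE redirects the counter (EL2 with E2H set, or EL0
 * with both E2H and TGE set, i.e. EL0 under a VHE host kernel). */
static uint64_t virt_cnt_offset(int el, bool e2h, bool tge, uint64_t cntvoff)
{
    switch (el) {
    case 2:
        if (e2h) {
            return 0;   /* EL2&0 regime: virtual counter == physical */
        }
        break;
    case 0:
        if (e2h && tge) {
            return 0;   /* EL0 of a VHE host: same redirection */
        }
        break;
    }
    return cntvoff;     /* otherwise CNTVOFF_EL2 applies as before */
}
```

The point of the patch is exactly this table: the offset is suppressed only in the two VHE-redirected cases, and every other (EL, E2H, TGE) combination keeps the pre-existing behaviour.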
From: Richard Henderson <richard.henderson@linaro.org>

No functional change, but unify code sequences.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 32 +++++++++++++-------------------
 1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
  * Page D4-1736 (DDI0487A.b)
  */
 
+static int vae1_tlbmask(CPUARMState *env)
+{
+    if (arm_is_secure_below_el3(env)) {
+        return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
+    } else {
+        return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
+    }
+}
+
 static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
 {
     CPUState *cs = env_cpu(env);
-    bool sec = arm_is_secure_below_el3(env);
+    int mask = vae1_tlbmask(env);
 
-    if (sec) {
-        tlb_flush_by_mmuidx_all_cpus_synced(cs,
-                                            ARMMMUIdxBit_S1SE1 |
-                                            ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_by_mmuidx_all_cpus_synced(cs,
-                                            ARMMMUIdxBit_S12NSE1 |
-                                            ARMMMUIdxBit_S12NSE0);
-    }
+    tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
 }
 
 static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                     uint64_t value)
 {
     CPUState *cs = env_cpu(env);
+    int mask = vae1_tlbmask(env);
 
     if (tlb_force_broadcast(env)) {
         tlbi_aa64_vmalle1is_write(env, NULL, value);
         return;
     }
 
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S1SE1 |
-                            ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S12NSE1 |
-                            ARMMMUIdxBit_S12NSE0);
-    }
+    tlb_flush_by_mmuidx(cs, mask);
 }
 
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1

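The unification above boils down to computing one mmu_idx mask in a single place. A minimal standalone sketch of the same logic follows; the bit encodings mirror the ARMMMUIdxBit_* values visible in the later cpu.h hunk (S12NSE0 = 1 << 0, S12NSE1 = 1 << 1, S1SE0 = 1 << 4), with S1SE1 assumed here to be 1 << 5, and the secure/non-secure state is passed in as a plain flag rather than read from CPUARMState.

```c
#include <assert.h>
#include <stdbool.h>

/* Bit encodings as in the ARMMMUIdxBit enum; S1SE1 = 1 << 5 is an
 * assumption for this sketch, following the sequence in cpu.h. */
enum {
    BIT_S12NSE0 = 1 << 0,
    BIT_S12NSE1 = 1 << 1,
    BIT_S1SE0   = 1 << 4,
    BIT_S1SE1   = 1 << 5,
};

/* Sketch of vae1_tlbmask(): pick the EL1&0 translation-regime indexes
 * for the current security state, so every VAE1/VMALLE1-style flush
 * can share one mask computation. */
static int vae1_tlbmask(bool secure_below_el3)
{
    if (secure_below_el3) {
        return BIT_S1SE1 | BIT_S1SE0;
    } else {
        return BIT_S12NSE1 | BIT_S12NSE0;
    }
}
```

Factoring the mask out is what lets the later patches in the series collapse the duplicated if/else flush bodies into single tlb_flush_by_mmuidx() calls.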
From: Richard Henderson <richard.henderson@linaro.org>

No functional change, but unify code sequences.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 86 +++++++++++++--------------------------------
 1 file changed, 24 insertions(+), 62 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx(cs, mask);
 }
 
-static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                  uint64_t value)
+static int alle1_tlbmask(CPUARMState *env)
 {
-    /* Note that the 'ALL' scope must invalidate both stage 1 and
+    /*
+     * Note that the 'ALL' scope must invalidate both stage 1 and
      * stage 2 translations, whereas most other scopes only invalidate
      * stage 1 translations.
      */
-    ARMCPU *cpu = env_archcpu(env);
-    CPUState *cs = CPU(cpu);
-
     if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S1SE1 |
-                            ARMMMUIdxBit_S1SE0);
+        return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
+    } else if (arm_feature(env, ARM_FEATURE_EL2)) {
+        return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0 | ARMMMUIdxBit_S2NS;
     } else {
-        if (arm_feature(env, ARM_FEATURE_EL2)) {
-            tlb_flush_by_mmuidx(cs,
-                                ARMMMUIdxBit_S12NSE1 |
-                                ARMMMUIdxBit_S12NSE0 |
-                                ARMMMUIdxBit_S2NS);
-        } else {
-            tlb_flush_by_mmuidx(cs,
-                                ARMMMUIdxBit_S12NSE1 |
-                                ARMMMUIdxBit_S12NSE0);
-        }
+        return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
     }
 }
 
+static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  uint64_t value)
+{
+    CPUState *cs = env_cpu(env);
+    int mask = alle1_tlbmask(env);
+
+    tlb_flush_by_mmuidx(cs, mask);
+}
+
 static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                     uint64_t value)
 {
-    /* Note that the 'ALL' scope must invalidate both stage 1 and
-     * stage 2 translations, whereas most other scopes only invalidate
-     * stage 1 translations.
-     */
     CPUState *cs = env_cpu(env);
-    bool sec = arm_is_secure_below_el3(env);
-    bool has_el2 = arm_feature(env, ARM_FEATURE_EL2);
+    int mask = alle1_tlbmask(env);
 
-    if (sec) {
-        tlb_flush_by_mmuidx_all_cpus_synced(cs,
-                                            ARMMMUIdxBit_S1SE1 |
-                                            ARMMMUIdxBit_S1SE0);
-    } else if (has_el2) {
-        tlb_flush_by_mmuidx_all_cpus_synced(cs,
-                                            ARMMMUIdxBit_S12NSE1 |
-                                            ARMMMUIdxBit_S12NSE0 |
-                                            ARMMMUIdxBit_S2NS);
-    } else {
-        tlb_flush_by_mmuidx_all_cpus_synced(cs,
-                                            ARMMMUIdxBit_S12NSE1 |
-                                            ARMMMUIdxBit_S12NSE0);
-    }
+    tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
 }
 
 static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
-    ARMCPU *cpu = env_archcpu(env);
-    CPUState *cs = CPU(cpu);
-    bool sec = arm_is_secure_below_el3(env);
+    CPUState *cs = env_cpu(env);
+    int mask = vae1_tlbmask(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
-    if (sec) {
-        tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
-                                                 ARMMMUIdxBit_S1SE1 |
-                                                 ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
-                                                 ARMMMUIdxBit_S12NSE1 |
-                                                 ARMMMUIdxBit_S12NSE0);
-    }
+    tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
 }
 
 static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
      * since we don't support flush-for-specific-ASID-only or
      * flush-last-level-only.
      */
-    ARMCPU *cpu = env_archcpu(env);
-    CPUState *cs = CPU(cpu);
+    CPUState *cs = env_cpu(env);
+    int mask = vae1_tlbmask(env);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     if (tlb_force_broadcast(env)) {
         tlbi_aa64_vae1is_write(env, NULL, value);
         return;
     }
 
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S1SE1 |
-                                 ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S12NSE1 |
-                                 ARMMMUIdxBit_S12NSE0);
-    }
+    tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
 }
 
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1

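The 'ALL'-scope mask introduced above can likewise be modelled on its own. A sketch under the same assumptions as before (bit values mirror the ARMMMUIdxBit_* constants; S1SE1 = 1 << 5 and S2NS = 1 << 6 are assumed for illustration), with the secure state and EL2 feature passed in as flags:

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed bit encodings for this sketch, following the cpu.h enum. */
enum {
    M_S12NSE0 = 1 << 0,
    M_S12NSE1 = 1 << 1,
    M_S1SE0   = 1 << 4,
    M_S1SE1   = 1 << 5,
    M_S2NS    = 1 << 6,
};

/* Sketch of alle1_tlbmask(): unlike the VAE1 mask, the 'ALL' scope
 * must also cover the stage 2 index (S2NS) when EL2 is implemented,
 * because TLBI ALLE1 invalidates stage 1 and stage 2 translations. */
static int alle1_tlbmask(bool secure_below_el3, bool has_el2)
{
    if (secure_below_el3) {
        return M_S1SE1 | M_S1SE0;
    } else if (has_el2) {
        return M_S12NSE1 | M_S12NSE0 | M_S2NS;
    } else {
        return M_S12NSE1 | M_S12NSE0;
    }
}
```

Note the three-way split: the extra S2NS bit is the only difference between the EL2 and non-EL2 non-secure cases, which is why the old nested if/else collapses so cleanly.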
From: Richard Henderson <richard.henderson@linaro.org>

Rather than call to a separate function and re-compute any
parameters for the flush, simply use the correct flush
function directly.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 52 +++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           uint64_t value)
 {
     /* Invalidate all (TLBIALL) */
-    ARMCPU *cpu = env_archcpu(env);
+    CPUState *cs = env_cpu(env);
 
     if (tlb_force_broadcast(env)) {
-        tlbiall_is_write(env, NULL, value);
-        return;
+        tlb_flush_all_cpus_synced(cs);
+    } else {
+        tlb_flush(cs);
     }
-
-    tlb_flush(CPU(cpu));
 }
 
 static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           uint64_t value)
 {
     /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
-    ARMCPU *cpu = env_archcpu(env);
+    CPUState *cs = env_cpu(env);
 
+    value &= TARGET_PAGE_MASK;
     if (tlb_force_broadcast(env)) {
-        tlbimva_is_write(env, NULL, value);
-        return;
+        tlb_flush_page_all_cpus_synced(cs, value);
+    } else {
+        tlb_flush_page(cs, value);
     }
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
 }
 
 static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
                            uint64_t value)
 {
     /* Invalidate by ASID (TLBIASID) */
-    ARMCPU *cpu = env_archcpu(env);
+    CPUState *cs = env_cpu(env);
 
     if (tlb_force_broadcast(env)) {
-        tlbiasid_is_write(env, NULL, value);
-        return;
+        tlb_flush_all_cpus_synced(cs);
+    } else {
+        tlb_flush(cs);
     }
-
-    tlb_flush(CPU(cpu));
 }
 
 static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
                            uint64_t value)
 {
     /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
-    ARMCPU *cpu = env_archcpu(env);
+    CPUState *cs = env_cpu(env);
 
+    value &= TARGET_PAGE_MASK;
     if (tlb_force_broadcast(env)) {
-        tlbimvaa_is_write(env, NULL, value);
-        return;
+        tlb_flush_page_all_cpus_synced(cs, value);
+    } else {
+        tlb_flush_page(cs, value);
     }
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
 }
 
 static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     int mask = vae1_tlbmask(env);
 
     if (tlb_force_broadcast(env)) {
-        tlbi_aa64_vmalle1is_write(env, NULL, value);
-        return;
+        tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
+    } else {
+        tlb_flush_by_mmuidx(cs, mask);
     }
-
-    tlb_flush_by_mmuidx(cs, mask);
 }
 
 static int alle1_tlbmask(CPUARMState *env)
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
     uint64_t pageaddr = sextract64(value << 12, 0, 56);
 
     if (tlb_force_broadcast(env)) {
-        tlbi_aa64_vae1is_write(env, NULL, value);
-        return;
+        tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
+    } else {
+        tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
     }
-
-    tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
 }
 
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1

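The control-flow change above is the same in every function touched: instead of bouncing through the *_is_write helper (which re-derived its parameters) and returning early, the forced-broadcast case now calls the all-CPUs-synced flush directly, so exactly one flush happens on either path. A toy model of that shape, with counters standing in for the real tlb_flush* calls (none of this is QEMU code):

```c
#include <assert.h>
#include <stdbool.h>

/* Counters stand in for the real local and cross-CPU flush calls. */
static int local_flushes, synced_flushes;

static void fake_tlb_flush(void)                 { local_flushes++; }
static void fake_tlb_flush_all_cpus_synced(void) { synced_flushes++; }

/* Sketch of the post-patch tlbiall_write() shape: one branch, one
 * flush, no early return through a separate helper. */
static void tlbiall_write_model(bool force_broadcast)
{
    if (force_broadcast) {
        fake_tlb_flush_all_cpus_synced();
    } else {
        fake_tlb_flush();
    }
}
```

Either branch performs exactly one flush, which is why the patch can drop the trailing unconditional flush and the early return without changing behaviour.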
From: Richard Henderson <richard.henderson@linaro.org>

This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the EL1&0 regime.

The ultimate goal is

-- Non-secure regimes:
    ARMMMUIdx_E10_0,
    ARMMMUIdx_E20_0,
    ARMMMUIdx_E10_1,
    ARMMMUIdx_E2,
    ARMMMUIdx_E20_2,

-- Secure regimes:
    ARMMMUIdx_SE10_0,
    ARMMMUIdx_SE10_1,
    ARMMMUIdx_SE3,

-- Helper mmu_idx for non-secure EL1&0 stage1 and stage2
    ARMMMUIdx_Stage2,
    ARMMMUIdx_Stage1_E0,
    ARMMMUIdx_Stage1_E1,

The 'S' prefix is reserved for "Secure".  Unless otherwise specified,
each mmu_idx represents all stages of translation.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |  8 ++++----
 target/arm/internals.h     |  4 ++--
 target/arm/helper.c        | 40 +++++++++++++++++++-------------------
 target/arm/translate-a64.c |  4 ++--
 target/arm/translate.c     |  6 +++---
 5 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
 #define ARM_MMU_IDX_COREIDX_MASK 0x7
 
 typedef enum ARMMMUIdx {
-    ARMMMUIdx_S12NSE0 = 0 | ARM_MMU_IDX_A,
-    ARMMMUIdx_S12NSE1 = 1 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
+    ARMMMUIdx_E10_1 = 1 | ARM_MMU_IDX_A,
     ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
     ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
     ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
  * for use when calling tlb_flush_by_mmuidx() and friends.
  */
 typedef enum ARMMMUIdxBit {
-    ARMMMUIdxBit_S12NSE0 = 1 << 0,
-    ARMMMUIdxBit_S12NSE1 = 1 << 1,
+    ARMMMUIdxBit_E10_0 = 1 << 0,
+    ARMMMUIdxBit_E10_1 = 1 << 1,
     ARMMMUIdxBit_S1E2 = 1 << 2,
     ARMMMUIdxBit_S1E3 = 1 << 3,
     ARMMMUIdxBit_S1SE0 = 1 << 4,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline void arm_call_el_change_hook(ARMCPU *cpu)
 static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
-    case ARMMMUIdx_S12NSE0:
-    case ARMMMUIdx_S12NSE1:
+    case ARMMMUIdx_E10_0:
+    case ARMMMUIdx_E10_1:
     case ARMMMUIdx_S1NSE0:
     case ARMMMUIdx_S1NSE1:
     case ARMMMUIdx_S1E2:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = env_cpu(env);
 
     tlb_flush_by_mmuidx(cs,
-                        ARMMMUIdxBit_S12NSE1 |
-                        ARMMMUIdxBit_S12NSE0 |
+                        ARMMMUIdxBit_E10_1 |
+                        ARMMMUIdxBit_E10_0 |
                         ARMMMUIdxBit_S2NS);
 }
 
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = env_cpu(env);
 
     tlb_flush_by_mmuidx_all_cpus_synced(cs,
-                                        ARMMMUIdxBit_S12NSE1 |
-                                        ARMMMUIdxBit_S12NSE0 |
+                                        ARMMMUIdxBit_E10_1 |
+                                        ARMMMUIdxBit_E10_0 |
                                         ARMMMUIdxBit_S2NS);
 }
 
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
         format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
 
         if (arm_feature(env, ARM_FEATURE_EL2)) {
-            if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
+            if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
                 format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
             } else {
                 format64 |= arm_current_el(env) == 2;
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         break;
119
case 4:
120
/* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
121
- mmu_idx = ARMMMUIdx_S12NSE1;
122
+ mmu_idx = ARMMMUIdx_E10_1;
123
break;
124
case 6:
125
/* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
126
- mmu_idx = ARMMMUIdx_S12NSE0;
127
+ mmu_idx = ARMMMUIdx_E10_0;
128
break;
129
default:
130
g_assert_not_reached();
131
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
132
mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
133
break;
134
case 4: /* AT S12E1R, AT S12E1W */
135
- mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S12NSE1;
136
+ mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_E10_1;
137
break;
138
case 6: /* AT S12E0R, AT S12E0W */
139
- mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S12NSE0;
140
+ mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_E10_0;
141
break;
142
default:
143
g_assert_not_reached();
144
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
145
/* Accesses to VTTBR may change the VMID so we must flush the TLB. */
146
if (raw_read(env, ri) != value) {
147
tlb_flush_by_mmuidx(cs,
148
- ARMMMUIdxBit_S12NSE1 |
149
- ARMMMUIdxBit_S12NSE0 |
150
+ ARMMMUIdxBit_E10_1 |
151
+ ARMMMUIdxBit_E10_0 |
152
ARMMMUIdxBit_S2NS);
153
raw_write(env, ri, value);
154
}
155
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
156
if (arm_is_secure_below_el3(env)) {
157
return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
158
} else {
159
- return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
160
+ return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
161
}
162
}
163
164
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
165
if (arm_is_secure_below_el3(env)) {
166
return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
167
} else if (arm_feature(env, ARM_FEATURE_EL2)) {
168
- return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0 | ARMMMUIdxBit_S2NS;
169
+ return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_S2NS;
170
} else {
171
- return ARMMMUIdxBit_S12NSE1 | ARMMMUIdxBit_S12NSE0;
172
+ return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
173
}
174
}
175
176
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
177
*/
178
static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
179
{
180
- if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
181
- mmu_idx += (ARMMMUIdx_S1NSE0 - ARMMMUIdx_S12NSE0);
182
+ if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
183
+ mmu_idx += (ARMMMUIdx_S1NSE0 - ARMMMUIdx_E10_0);
184
}
185
return mmu_idx;
186
}
187
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
188
return true;
189
default:
190
return false;
191
- case ARMMMUIdx_S12NSE0:
192
- case ARMMMUIdx_S12NSE1:
193
+ case ARMMMUIdx_E10_0:
194
+ case ARMMMUIdx_E10_1:
195
g_assert_not_reached();
196
}
197
}
198
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
199
target_ulong *page_size,
200
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
201
{
202
- if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
203
+ if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
204
/* Call ourselves recursively to do the stage 1 and then stage 2
205
* translations.
206
*/
207
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
208
if (el < 2 && arm_is_secure_below_el3(env)) {
209
return ARMMMUIdx_S1SE0 + el;
210
} else {
211
- return ARMMMUIdx_S12NSE0 + el;
212
+ return ARMMMUIdx_E10_0 + el;
213
}
214
}
215
216
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
217
index XXXXXXX..XXXXXXX 100644
218
--- a/target/arm/translate-a64.c
219
+++ b/target/arm/translate-a64.c
220
@@ -XXX,XX +XXX,XX @@ static inline int get_a64_user_mem_index(DisasContext *s)
221
ARMMMUIdx useridx;
222
223
switch (s->mmu_idx) {
224
- case ARMMMUIdx_S12NSE1:
225
- useridx = ARMMMUIdx_S12NSE0;
226
+ case ARMMMUIdx_E10_1:
227
+ useridx = ARMMMUIdx_E10_0;
228
break;
229
case ARMMMUIdx_S1SE1:
230
useridx = ARMMMUIdx_S1SE0;
18
diff --git a/target/arm/translate.c b/target/arm/translate.c
231
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
index XXXXXXX..XXXXXXX 100644
232
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.c
233
--- a/target/arm/translate.c
21
+++ b/target/arm/translate.c
234
+++ b/target/arm/translate.c
22
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
235
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
23
case 6: case 7: case 14: case 15:
236
*/
24
/* Coprocessor. */
237
switch (s->mmu_idx) {
25
if (arm_dc_feature(s, ARM_FEATURE_M)) {
238
case ARMMMUIdx_S1E2: /* this one is UNPREDICTABLE */
26
- /* We don't currently implement M profile FP support,
239
- case ARMMMUIdx_S12NSE0:
27
- * so this entire space should give a NOCP fault, with
240
- case ARMMMUIdx_S12NSE1:
28
- * the exception of the v8M VLLDM and VLSTM insns, which
241
- return arm_to_core_mmu_idx(ARMMMUIdx_S12NSE0);
29
- * must be NOPs in Secure state and UNDEF in Nonsecure state.
242
+ case ARMMMUIdx_E10_0:
30
+ /* 0b111x_11xx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx */
243
+ case ARMMMUIdx_E10_1:
31
+ if (extract32(insn, 24, 2) == 3) {
244
+ return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
32
+ goto illegal_op; /* op0 = 0b11 : unallocated */
245
case ARMMMUIdx_S1E3:
33
+ }
246
case ARMMMUIdx_S1SE0:
34
+
247
case ARMMMUIdx_S1SE1:
35
+ /*
36
+ * Decode VLLDM and VLSTM first: these are nonstandard because:
37
+ * * if there is no FPU then these insns must NOP in
38
+ * Secure state and UNDEF in Nonsecure state
39
+ * * if there is an FPU then these insns do not have
40
+ * the usual behaviour that disas_vfp_insn() provides of
41
+ * being controlled by CPACR/NSACR enable bits or the
42
+ * lazy-stacking logic.
43
*/
44
if (arm_dc_feature(s, ARM_FEATURE_V8) &&
45
(insn & 0xffa00f00) == 0xec200a00) {
46
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
47
/* Just NOP since FP support is not implemented */
48
break;
49
}
50
+ if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
51
+ ((insn >> 8) & 0xe) == 10) {
52
+ /* FP, and the CPU supports it */
53
+ if (disas_vfp_insn(s, insn)) {
54
+ goto illegal_op;
55
+ }
56
+ break;
57
+ }
58
+
59
/* All other insns: NOCP */
60
gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
61
default_exception_el(s));
62
--
248
--
63
2.20.1
249
2.20.1
64
250
65
251
diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

The EL1&0 regime is the only one that uses 2-stage translation.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 4 +--
 target/arm/internals.h | 2 +-
 target/arm/helper.c | 57 ++++++++++++++++++++------------------
 target/arm/translate-a64.c | 2 +-
 target/arm/translate.c | 2 +-
 5 files changed, 35 insertions(+), 32 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
 ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
- ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
+ ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
 ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
 ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
 ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
 ARMMMUIdxBit_S1E3 = 1 << 3,
 ARMMMUIdxBit_S1SE0 = 1 << 4,
 ARMMMUIdxBit_S1SE1 = 1 << 5,
- ARMMMUIdxBit_S2NS = 1 << 6,
+ ARMMMUIdxBit_Stage2 = 1 << 6,
 ARMMMUIdxBit_MUser = 1 << 0,
 ARMMMUIdxBit_MPriv = 1 << 1,
 ARMMMUIdxBit_MUserNegPri = 1 << 2,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
 case ARMMMUIdx_S1NSE0:
 case ARMMMUIdx_S1NSE1:
 case ARMMMUIdx_S1E2:
- case ARMMMUIdx_S2NS:
+ case ARMMMUIdx_Stage2:
 case ARMMMUIdx_MPrivNegPri:
 case ARMMMUIdx_MUserNegPri:
 case ARMMMUIdx_MPriv:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
 tlb_flush_by_mmuidx(cs,
 ARMMMUIdxBit_E10_1 |
 ARMMMUIdxBit_E10_0 |
- ARMMMUIdxBit_S2NS);
+ ARMMMUIdxBit_Stage2);
 }

 static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 tlb_flush_by_mmuidx_all_cpus_synced(cs,
 ARMMMUIdxBit_E10_1 |
 ARMMMUIdxBit_E10_0 |
- ARMMMUIdxBit_S2NS);
+ ARMMMUIdxBit_Stage2);
 }

 static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri,

 pageaddr = sextract64(value << 12, 0, 40);

- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS);
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
 }

 static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 pageaddr = sextract64(value << 12, 0, 40);

 tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
- ARMMMUIdxBit_S2NS);
+ ARMMMUIdxBit_Stage2);
 }

 static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
 ARMCPU *cpu = env_archcpu(env);
 CPUState *cs = CPU(cpu);

- /* Accesses to VTTBR may change the VMID so we must flush the TLB. */
+ /*
+ * A change in VMID to the stage2 page table (Stage2) invalidates
+ * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
+ */
 if (raw_read(env, ri) != value) {
 tlb_flush_by_mmuidx(cs,
 ARMMMUIdxBit_E10_1 |
 ARMMMUIdxBit_E10_0 |
- ARMMMUIdxBit_S2NS);
+ ARMMMUIdxBit_Stage2);
 raw_write(env, ri, value);
 }
 }
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
 if (arm_is_secure_below_el3(env)) {
 return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
 } else if (arm_feature(env, ARM_FEATURE_EL2)) {
- return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_S2NS;
+ return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_Stage2;
 } else {
 return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
 }
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,

 pageaddr = sextract64(value << 12, 0, 48);

- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS);
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
 }

 static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 pageaddr = sextract64(value << 12, 0, 48);

 tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
- ARMMMUIdxBit_S2NS);
+ ARMMMUIdxBit_Stage2);
 }

 static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
 static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
 switch (mmu_idx) {
- case ARMMMUIdx_S2NS:
+ case ARMMMUIdx_Stage2:
 case ARMMMUIdx_S1E2:
 return 2;
 case ARMMMUIdx_S1E3:
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
 }
 }

- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 /* HCR.DC means HCR.VM behaves as 1 */
 return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
 }
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_big_endian(CPUARMState *env,
 static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
 int ttbrn)
 {
- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 return env->cp15.vttbr_el2;
 }
 if (ttbrn == 0) {
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
 /* Return the TCR controlling this translation regime */
 static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 return &env->cp15.vtcr_el2;
 }
 return &env->cp15.tcr_el[regime_el(env, mmu_idx)];
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
 bool have_wxn;
 int wxn = 0;

- assert(mmu_idx != ARMMMUIdx_S2NS);
+ assert(mmu_idx != ARMMMUIdx_Stage2);

 user_rw = simple_ap_to_rw_prot_is_user(ap, true);
 if (is_user) {
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
 ARMMMUFaultInfo *fi)
 {
 if ((mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1) &&
- !regime_translation_disabled(env, ARMMMUIdx_S2NS)) {
+ !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
 target_ulong s2size;
 hwaddr s2pa;
 int s2prot;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
 pcacheattrs = &cacheattrs;
 }

- ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
+ ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_Stage2, &s2pa,
 &txattrs, &s2prot, &s2size, fi, pcacheattrs);
 if (ret) {
 assert(fi->type != ARMFault_None);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
 tsz = extract32(tcr, 0, 6);
 using64k = extract32(tcr, 14, 1);
 using16k = extract32(tcr, 15, 1);
- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 /* VTCR_EL2 */
 tbi = tbid = hpd = false;
 } else {
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
 int select, tsz;
 bool epd, hpd;

- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 /* VTCR */
 bool sext = extract32(tcr, 4, 1);
 bool sign = extract32(tcr, 3, 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
 level = 1;
 /* There is no TTBR1 for EL2 */
 ttbr1_valid = (el != 2);
- addrsize = (mmu_idx == ARMMMUIdx_S2NS ? 40 : 32);
+ addrsize = (mmu_idx == ARMMMUIdx_Stage2 ? 40 : 32);
 inputsize = addrsize - param.tsz;
 }

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
 goto do_fault;
 }

- if (mmu_idx != ARMMMUIdx_S2NS) {
+ if (mmu_idx != ARMMMUIdx_Stage2) {
 /* The starting level depends on the virtual address size (which can
 * be up to 48 bits) and the translation granule size. It indicates
 * the number of strides (stride bits at a time) needed to
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
 attrs = extract64(descriptor, 2, 10)
 | (extract64(descriptor, 52, 12) << 10);

- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 /* Stage 2 table descriptors do not include any attribute fields */
 break;
 }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
 ap = extract32(attrs, 4, 2);
 xn = extract32(attrs, 12, 1);

- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 ns = true;
 *prot = get_S2prot(env, ap, xn);
 } else {
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
 }

 if (cacheattrs != NULL) {
- if (mmu_idx == ARMMMUIdx_S2NS) {
+ if (mmu_idx == ARMMMUIdx_Stage2) {
 cacheattrs->attrs = convert_stage2_attrs(env,
 extract32(attrs, 0, 4));
 } else {
@@ -XXX,XX +XXX,XX @@ do_fault:
 fi->type = fault_type;
 fi->level = level;
 /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
- fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_S2NS);
+ fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2);
 return true;
 }

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 prot, page_size, fi, cacheattrs);

 /* If S1 fails or S2 is disabled, return early. */
- if (ret || regime_translation_disabled(env, ARMMMUIdx_S2NS)) {
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
 *phys_ptr = ipa;
 return ret;
 }

 /* S1 is done. Now do S2 translation. */
- ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_S2NS,
+ ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_Stage2,
 phys_ptr, attrs, &s2_prot,
 page_size, fi,
 cacheattrs != NULL ? &cacheattrs2 : NULL);
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 /* Fast Context Switch Extension. This doesn't exist at all in v8.
 * In v7 and earlier it affects all stage 1 translations.
 */
- if (address < 0x02000000 && mmu_idx != ARMMMUIdx_S2NS
+ if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2
 && !arm_feature(env, ARM_FEATURE_V8)) {
 if (regime_el(env, mmu_idx) == 3) {
 address += env->cp15.fcseidr_s;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a64_user_mem_index(DisasContext *s)
 case ARMMMUIdx_S1SE1:
 useridx = ARMMMUIdx_S1SE0;
 break;
- case ARMMMUIdx_S2NS:
+ case ARMMMUIdx_Stage2:
 g_assert_not_reached();
 default:
 useridx = s->mmu_idx;
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
 case ARMMMUIdx_MSUserNegPri:
 case ARMMMUIdx_MSPrivNegPri:
 return arm_to_core_mmu_idx(ARMMMUIdx_MSUserNegPri);
- case ARMMMUIdx_S2NS:
+ case ARMMMUIdx_Stage2:
 default:
 g_assert_not_reached();
 }
--
2.20.1

New patch

From: Richard Henderson <richard.henderson@linaro.org>

This is part of a reorganization to the set of mmu_idx.
The EL1&0 regime is the only one that uses 2-stage translation.
Spelling out Stage avoids confusion with Secure.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 4 ++--
 target/arm/internals.h | 6 +++---
 target/arm/helper.c | 27 ++++++++++++++-------------
 3 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
 /* Indexes below here don't have TLBs and are used only for AT system
 * instructions or for the first stage of an S12 page table walk.
 */
- ARMMMUIdx_S1NSE0 = 0 | ARM_MMU_IDX_NOTLB,
- ARMMMUIdx_S1NSE1 = 1 | ARM_MMU_IDX_NOTLB,
+ ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
+ ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
 } ARMMMUIdx;

 /* Bit macros for the core-mmu-index values for each index,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
 switch (mmu_idx) {
 case ARMMMUIdx_E10_0:
 case ARMMMUIdx_E10_1:
- case ARMMMUIdx_S1NSE0:
- case ARMMMUIdx_S1NSE1:
+ case ARMMMUIdx_Stage1_E0:
+ case ARMMMUIdx_Stage1_E1:
 case ARMMMUIdx_S1E2:
 case ARMMMUIdx_Stage2:
 case ARMMMUIdx_MPrivNegPri:
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx(CPUARMState *env);
 #ifdef CONFIG_USER_ONLY
 static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 {
- return ARMMMUIdx_S1NSE0;
+ return ARMMMUIdx_Stage1_E0;
 }
 #else
 ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
 bool take_exc = false;

 if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
- && (mmu_idx == ARMMMUIdx_S1NSE1 || mmu_idx == ARMMMUIdx_S1NSE0)) {
+ && (mmu_idx == ARMMMUIdx_Stage1_E1 ||
+ mmu_idx == ARMMMUIdx_Stage1_E0)) {
 /*
 * Synchronous stage 2 fault on an access made as part of the
 * translation table walk for AT S1E0* or AT S1E1* insn
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 mmu_idx = ARMMMUIdx_S1E3;
 break;
 case 2:
- mmu_idx = ARMMMUIdx_S1NSE1;
+ mmu_idx = ARMMMUIdx_Stage1_E1;
 break;
 case 1:
- mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S1NSE1;
+ mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
 break;
 default:
 g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 mmu_idx = ARMMMUIdx_S1SE0;
 break;
 case 2:
- mmu_idx = ARMMMUIdx_S1NSE0;
+ mmu_idx = ARMMMUIdx_Stage1_E0;
 break;
 case 1:
- mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
+ mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
 break;
 default:
 g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
 case 0:
 switch (ri->opc1) {
 case 0: /* AT S1E1R, AT S1E1W */
- mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_S1NSE1;
+ mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
 break;
 case 4: /* AT S1E2R, AT S1E2W */
 mmu_idx = ARMMMUIdx_S1E2;
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
 }
 break;
 case 2: /* AT S1E0R, AT S1E0W */
- mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_S1NSE0;
+ mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
 break;
 case 4: /* AT S12E1R, AT S12E1W */
 mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_E10_1;
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 case ARMMMUIdx_S1SE0:
 return arm_el_is_aa64(env, 3) ? 1 : 3;
 case ARMMMUIdx_S1SE1:
- case ARMMMUIdx_S1NSE0:
- case ARMMMUIdx_S1NSE1:
+ case ARMMMUIdx_Stage1_E0:
+ case ARMMMUIdx_Stage1_E1:
 case ARMMMUIdx_MPrivNegPri:
 case ARMMMUIdx_MUserNegPri:
 case ARMMMUIdx_MPriv:
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
 }

 if ((env->cp15.hcr_el2 & HCR_DC) &&
- (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
+ (mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1)) {
 /* HCR.DC means SCTLR_EL1.M behaves as 0 */
 return true;
 }
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
 static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
 {
 if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
- mmu_idx += (ARMMMUIdx_S1NSE0 - ARMMMUIdx_E10_0);
+ mmu_idx += (ARMMMUIdx_Stage1_E0 - ARMMMUIdx_E10_0);
 }
 return mmu_idx;
 }
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
 switch (mmu_idx) {
 case ARMMMUIdx_S1SE0:
- case ARMMMUIdx_S1NSE0:
+ case ARMMMUIdx_Stage1_E0:
 case ARMMMUIdx_MUser:
 case ARMMMUIdx_MSUser:
 case ARMMMUIdx_MUserNegPri:
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
 hwaddr addr, MemTxAttrs txattrs,
 ARMMMUFaultInfo *fi)
 {
- if ((mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1) &&
+ if ((mmu_idx == ARMMMUIdx_Stage1_E0 || mmu_idx == ARMMMUIdx_Stage1_E1) &&
 !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
 target_ulong s2size;
 hwaddr s2pa;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

This is part of a reorganization to the set of mmu_idx.
This emphasizes that they apply to the Secure EL1&0 regime.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 8 ++++----
 target/arm/internals.h | 4 ++--
 target/arm/translate.h | 2 +-
 target/arm/helper.c | 26 +++++++++++++-------------
 target/arm/translate-a64.c | 4 ++--
 target/arm/translate.c | 6 +++---
 6 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
 ARMMMUIdx_E10_1 = 1 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
 ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
- ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
- ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
+ ARMMMUIdx_SE10_0 = 4 | ARM_MMU_IDX_A,
+ ARMMMUIdx_SE10_1 = 5 | ARM_MMU_IDX_A,
 ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
 ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
 ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
 ARMMMUIdxBit_E10_1 = 1 << 1,
 ARMMMUIdxBit_S1E2 = 1 << 2,
 ARMMMUIdxBit_S1E3 = 1 << 3,
- ARMMMUIdxBit_S1SE0 = 1 << 4,
- ARMMMUIdxBit_S1SE1 = 1 << 5,
+ ARMMMUIdxBit_SE10_0 = 1 << 4,
+ ARMMMUIdxBit_SE10_1 = 1 << 5,
 ARMMMUIdxBit_Stage2 = 1 << 6,
 ARMMMUIdxBit_MUser = 1 << 0,
 ARMMMUIdxBit_MPriv = 1 << 1,
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
 case ARMMMUIdx_MUser:
 return false;
 case ARMMMUIdx_S1E3:
- case ARMMMUIdx_S1SE0:
- case ARMMMUIdx_S1SE1:
+ case ARMMMUIdx_SE10_0:
+ case ARMMMUIdx_SE10_1:
 case ARMMMUIdx_MSPrivNegPri:
 case ARMMMUIdx_MSUserNegPri:
 case ARMMMUIdx_MSPriv:
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline int default_exception_el(DisasContext *s)
 * exceptions can only be routed to ELs above 1, so we target the higher of
 * 1 or the current EL.
 */
- return (s->mmu_idx == ARMMMUIdx_S1SE0 && s->secure_routed_to_el3)
+ return (s->mmu_idx == ARMMMUIdx_SE10_0 && s->secure_routed_to_el3)
 ? 3 : MAX(1, s->current_el);
 }

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
77
+++ b/target/arm/helper.c
54
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
78
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
55
flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
79
mmu_idx = ARMMMUIdx_Stage1_E1;
80
break;
81
case 1:
82
- mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
83
+ mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
84
break;
85
default:
86
g_assert_not_reached();
87
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
88
/* stage 1 current state PL0: ATS1CUR, ATS1CUW */
89
switch (el) {
90
case 3:
91
- mmu_idx = ARMMMUIdx_S1SE0;
92
+ mmu_idx = ARMMMUIdx_SE10_0;
93
break;
94
case 2:
95
mmu_idx = ARMMMUIdx_Stage1_E0;
96
break;
97
case 1:
98
- mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
99
+ mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_Stage1_E0;
100
break;
101
default:
102
g_assert_not_reached();
103
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
104
case 0:
105
switch (ri->opc1) {
106
case 0: /* AT S1E1R, AT S1E1W */
107
- mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_Stage1_E1;
108
+ mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
109
break;
110
case 4: /* AT S1E2R, AT S1E2W */
111
mmu_idx = ARMMMUIdx_S1E2;
112
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
113
}
114
break;
115
case 2: /* AT S1E0R, AT S1E0W */
116
- mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_Stage1_E0;
117
+ mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_Stage1_E0;
118
break;
119
case 4: /* AT S12E1R, AT S12E1W */
120
- mmu_idx = secure ? ARMMMUIdx_S1SE1 : ARMMMUIdx_E10_1;
121
+ mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
122
break;
123
case 6: /* AT S12E0R, AT S12E0W */
124
- mmu_idx = secure ? ARMMMUIdx_S1SE0 : ARMMMUIdx_E10_0;
125
+ mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0;
126
break;
127
default:
128
g_assert_not_reached();
129
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
130
static int vae1_tlbmask(CPUARMState *env)
131
{
132
if (arm_is_secure_below_el3(env)) {
133
- return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
134
+ return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
135
} else {
136
return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
56
}
137
}
57
138
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
58
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
139
* stage 1 translations.
59
+ FIELD_EX32(env->v7m.fpccr[M_REG_S], V7M_FPCCR, S) != env->v7m.secure) {
140
*/
60
+ flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
141
if (arm_is_secure_below_el3(env)) {
61
+ }
142
- return ARMMMUIdxBit_S1SE1 | ARMMMUIdxBit_S1SE0;
62
+
143
+ return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
63
*pflags = flags;
144
} else if (arm_feature(env, ARM_FEATURE_EL2)) {
64
*cs_base = 0;
145
return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0 | ARMMMUIdxBit_Stage2;
65
}
146
} else {
147
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
148
return 2;
149
case ARMMMUIdx_S1E3:
150
return 3;
151
- case ARMMMUIdx_S1SE0:
152
+ case ARMMMUIdx_SE10_0:
153
return arm_el_is_aa64(env, 3) ? 1 : 3;
154
- case ARMMMUIdx_S1SE1:
155
+ case ARMMMUIdx_SE10_1:
156
case ARMMMUIdx_Stage1_E0:
157
case ARMMMUIdx_Stage1_E1:
158
case ARMMMUIdx_MPrivNegPri:
159
@@ -XXX,XX +XXX,XX @@ bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
160
static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
161
{
162
switch (mmu_idx) {
163
- case ARMMMUIdx_S1SE0:
164
+ case ARMMMUIdx_SE10_0:
165
case ARMMMUIdx_Stage1_E0:
166
case ARMMMUIdx_MUser:
167
case ARMMMUIdx_MSUser:
168
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
169
}
170
171
if (el < 2 && arm_is_secure_below_el3(env)) {
172
- return ARMMMUIdx_S1SE0 + el;
173
+ return ARMMMUIdx_SE10_0 + el;
174
} else {
175
return ARMMMUIdx_E10_0 + el;
176
}
177
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
178
index XXXXXXX..XXXXXXX 100644
179
--- a/target/arm/translate-a64.c
180
+++ b/target/arm/translate-a64.c
181
@@ -XXX,XX +XXX,XX @@ static inline int get_a64_user_mem_index(DisasContext *s)
182
case ARMMMUIdx_E10_1:
183
useridx = ARMMMUIdx_E10_0;
184
break;
185
- case ARMMMUIdx_S1SE1:
186
- useridx = ARMMMUIdx_S1SE0;
187
+ case ARMMMUIdx_SE10_1:
188
+ useridx = ARMMMUIdx_SE10_0;
189
break;
190
case ARMMMUIdx_Stage2:
191
g_assert_not_reached();
66
diff --git a/target/arm/translate.c b/target/arm/translate.c
192
diff --git a/target/arm/translate.c b/target/arm/translate.c
67
index XXXXXXX..XXXXXXX 100644
193
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/translate.c
194
--- a/target/arm/translate.c
69
+++ b/target/arm/translate.c
195
+++ b/target/arm/translate.c
70
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
196
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
71
}
197
case ARMMMUIdx_E10_1:
72
}
198
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
73
199
case ARMMMUIdx_S1E3:
74
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
200
- case ARMMMUIdx_S1SE0:
75
+ /* Handle M-profile lazy FP state mechanics */
201
- case ARMMMUIdx_S1SE1:
76
+
202
- return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
77
+ /* Update ownership of FP context: set FPCCR.S to match current state */
203
+ case ARMMMUIdx_SE10_0:
78
+ if (s->v8m_fpccr_s_wrong) {
204
+ case ARMMMUIdx_SE10_1:
79
+ TCGv_i32 tmp;
205
+ return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
80
+
206
case ARMMMUIdx_MUser:
81
+ tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
207
case ARMMMUIdx_MPriv:
82
+ if (s->v8m_secure) {
208
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
83
+ tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
84
+ } else {
85
+ tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
86
+ }
87
+ store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
88
+ /* Don't need to do this for any further FP insns in this TB */
89
+ s->v8m_fpccr_s_wrong = false;
90
+ }
91
+ }
92
+
93
if (extract32(insn, 28, 4) == 0xf) {
94
/*
95
* Encodings with T=1 (Thumb) or unconditional (ARM):
96
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
97
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
98
regime_is_secure(env, dc->mmu_idx);
99
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
100
+ dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
101
dc->cp_regs = cpu->cp_regs;
102
dc->features = env->features;
103
104
--
2.20.1

diff view generated by jsdifflib

The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
---
target/arm/cpu.h | 3 ++
target/arm/helper.h | 2 +
target/arm/translate.h | 1 +
target/arm/helper.c | 112 +++++++++++++++++++++++++++++++++++++++++
target/arm/translate.c | 22 ++++
5 files changed, 140 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

This is part of a reorganization to the set of mmu_idx.
The EL3 regime only has a single stage translation, and
is always secure.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 4 ++--
target/arm/internals.h | 2 +-
target/arm/helper.c | 14 +++++++-------
target/arm/translate.c | 2 +-
4 files changed, 11 insertions(+), 11 deletions(-)
18
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
24
#define EXCP_NOCP 17 /* v7M NOCP UsageFault */
24
ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
25
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
25
ARMMMUIdx_E10_1 = 1 | ARM_MMU_IDX_A,
26
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
26
ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
27
+#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
27
- ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
28
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
28
+ ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
29
29
ARMMMUIdx_SE10_0 = 4 | ARM_MMU_IDX_A,
30
#define ARMV7M_EXCP_RESET 1
30
ARMMMUIdx_SE10_1 = 5 | ARM_MMU_IDX_A,
31
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
31
ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
32
FIELD(TBFLAG_A32, VFPEN, 7, 1)
32
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
33
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
33
ARMMMUIdxBit_E10_0 = 1 << 0,
34
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
34
ARMMMUIdxBit_E10_1 = 1 << 1,
35
+/* For M profile only, set if FPCCR.LSPACT is set */
35
ARMMMUIdxBit_S1E2 = 1 << 2,
36
+FIELD(TBFLAG_A32, LSPACT, 18, 1)
36
- ARMMMUIdxBit_S1E3 = 1 << 3,
37
/* For M profile only, set if we must create a new FP context */
37
+ ARMMMUIdxBit_SE3 = 1 << 3,
38
FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
38
ARMMMUIdxBit_SE10_0 = 1 << 4,
39
/* For M profile only, set if FPCCR.S does not match current security state */
39
ARMMMUIdxBit_SE10_1 = 1 << 5,
40
diff --git a/target/arm/helper.h b/target/arm/helper.h
40
ARMMMUIdxBit_Stage2 = 1 << 6,
41
diff --git a/target/arm/internals.h b/target/arm/internals.h
41
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/helper.h
43
--- a/target/arm/internals.h
43
+++ b/target/arm/helper.h
44
+++ b/target/arm/internals.h
44
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(v7m_blxns, void, env, i32)
45
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
45
46
case ARMMMUIdx_MPriv:
46
DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
47
case ARMMMUIdx_MUser:
47
48
return false;
48
+DEF_HELPER_1(v7m_preserve_fp_state, void, env)
49
- case ARMMMUIdx_S1E3:
49
+
50
+ case ARMMMUIdx_SE3:
50
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
51
case ARMMMUIdx_SE10_0:
51
52
case ARMMMUIdx_SE10_1:
52
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
53
case ARMMMUIdx_MSPrivNegPri:
53
diff --git a/target/arm/translate.h b/target/arm/translate.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/translate.h
56
+++ b/target/arm/translate.h
57
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
58
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
59
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
60
bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
61
+ bool v7m_lspact; /* FPCCR.LSPACT set */
62
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
63
* so that top level loop can generate correct syndrome information.
64
*/
65
diff --git a/target/arm/helper.c b/target/arm/helper.c
54
diff --git a/target/arm/helper.c b/target/arm/helper.c
66
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/helper.c
56
--- a/target/arm/helper.c
68
+++ b/target/arm/helper.c
57
+++ b/target/arm/helper.c
69
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
58
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
70
g_assert_not_reached();
59
/* stage 1 current state PL1: ATS1CPR, ATS1CPW */
60
switch (el) {
61
case 3:
62
- mmu_idx = ARMMMUIdx_S1E3;
63
+ mmu_idx = ARMMMUIdx_SE3;
64
break;
65
case 2:
66
mmu_idx = ARMMMUIdx_Stage1_E1;
67
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
68
mmu_idx = ARMMMUIdx_S1E2;
69
break;
70
case 6: /* AT S1E3R, AT S1E3W */
71
- mmu_idx = ARMMMUIdx_S1E3;
72
+ mmu_idx = ARMMMUIdx_SE3;
73
break;
74
default:
75
g_assert_not_reached();
76
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
77
ARMCPU *cpu = env_archcpu(env);
78
CPUState *cs = CPU(cpu);
79
80
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E3);
81
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
71
}
82
}
72
83
73
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
84
static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
74
+{
85
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
75
+ /* translate.c should never generate calls here in user-only mode */
76
+ g_assert_not_reached();
77
+}
78
+
79
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
80
{
86
{
81
/* The TT instructions can be used by unprivileged code, but in
87
CPUState *cs = env_cpu(env);
82
@@ -XXX,XX +XXX,XX @@ pend_fault:
88
83
return false;
89
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
90
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
84
}
91
}
85
92
86
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
93
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
87
+{
94
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
88
+ /*
95
CPUState *cs = CPU(cpu);
89
+ * Preserve FP state (because LSPACT was set and we are about
96
uint64_t pageaddr = sextract64(value << 12, 0, 56);
90
+ * to execute an FP instruction). This corresponds to the
97
91
+ * PreserveFPState() pseudocode.
98
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E3);
92
+ * We may throw an exception if the stacking fails.
99
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
93
+ */
94
+ ARMCPU *cpu = arm_env_get_cpu(env);
95
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
96
+ bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
97
+ bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
98
+ bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
99
+ uint32_t fpcar = env->v7m.fpcar[is_secure];
100
+ bool stacked_ok = true;
101
+ bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
102
+ bool take_exception;
103
+
104
+ /* Take the iothread lock as we are going to touch the NVIC */
105
+ qemu_mutex_lock_iothread();
106
+
107
+ /* Check the background context had access to the FPU */
108
+ if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
109
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
110
+ env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
111
+ stacked_ok = false;
112
+ } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
113
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
114
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
115
+ stacked_ok = false;
116
+ }
117
+
118
+ if (!splimviol && stacked_ok) {
119
+ /* We only stack if the stack limit wasn't violated */
120
+ int i;
121
+ ARMMMUIdx mmu_idx;
122
+
123
+ mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
124
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
125
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
126
+ uint32_t faddr = fpcar + 4 * i;
127
+ uint32_t slo = extract64(dn, 0, 32);
128
+ uint32_t shi = extract64(dn, 32, 32);
129
+
130
+ if (i >= 16) {
131
+ faddr += 8; /* skip the slot for the FPSCR */
132
+ }
133
+ stacked_ok = stacked_ok &&
134
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
135
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
136
+ }
137
+
138
+ stacked_ok = stacked_ok &&
139
+ v7m_stack_write(cpu, fpcar + 0x40,
140
+ vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
141
+ }
142
+
143
+ /*
144
+ * We definitely pended an exception, but it's possible that it
145
+ * might not be able to be taken now. If its priority permits us
146
+ * to take it now, then we must not update the LSPACT or FP regs,
147
+ * but instead jump out to take the exception immediately.
148
+ * If it's just pending and won't be taken until the current
149
+ * handler exits, then we do update LSPACT and the FP regs.
150
+ */
151
+ take_exception = !stacked_ok &&
152
+ armv7m_nvic_can_take_pending_exception(env->nvic);
153
+
154
+ qemu_mutex_unlock_iothread();
155
+
156
+ if (take_exception) {
157
+ raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
158
+ }
159
+
160
+ env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
161
+
162
+ if (ts) {
163
+ /* Clear s0 to s31 and the FPSCR */
164
+ int i;
165
+
166
+ for (i = 0; i < 32; i += 2) {
167
+ *aa32_vfp_dreg(env, i / 2) = 0;
168
+ }
169
+ vfp_set_fpscr(env, 0);
170
+ }
171
+ /*
172
+ * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
173
+ * unchanged.
174
+ */
175
+}
176
+
177
/* Write to v7M CONTROL.SPSEL bit for the specified security bank.
178
* This may change the current stack pointer between Main and Process
179
* stack pointers if it is done for the CONTROL register for the current
180
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
181
[EXCP_NOCP] = "v7M NOCP UsageFault",
182
[EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
183
[EXCP_STKOF] = "v8M STKOF UsageFault",
184
+ [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
185
};
186
187
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
188
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
189
return;
190
}
191
break;
192
+ case EXCP_LAZYFP:
193
+ /*
194
+ * We already pended the specific exception in the NVIC in the
195
+ * v7m_preserve_fp_state() helper function.
196
+ */
197
+ break;
198
default:
199
cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
200
return; /* Never happens. Keep compiler happy. */
201
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
202
flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
203
}
204
205
+ if (arm_feature(env, ARM_FEATURE_M)) {
206
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
207
+
208
+ if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
209
+ flags = FIELD_DP32(flags, TBFLAG_A32, LSPACT, 1);
210
+ }
211
+ }
212
+
213
*pflags = flags;
214
*cs_base = 0;
215
}
100
}
101
102
static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
103
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
104
uint64_t pageaddr = sextract64(value << 12, 0, 56);
105
106
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
107
- ARMMMUIdxBit_S1E3);
108
+ ARMMMUIdxBit_SE3);
109
}
110
111
static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
113
case ARMMMUIdx_Stage2:
114
case ARMMMUIdx_S1E2:
115
return 2;
116
- case ARMMMUIdx_S1E3:
117
+ case ARMMMUIdx_SE3:
118
return 3;
119
case ARMMMUIdx_SE10_0:
120
return arm_el_is_aa64(env, 3) ? 1 : 3;
216
diff --git a/target/arm/translate.c b/target/arm/translate.c
121
diff --git a/target/arm/translate.c b/target/arm/translate.c
217
index XXXXXXX..XXXXXXX 100644
122
index XXXXXXX..XXXXXXX 100644
218
--- a/target/arm/translate.c
123
--- a/target/arm/translate.c
219
+++ b/target/arm/translate.c
124
+++ b/target/arm/translate.c
220
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
125
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
221
if (arm_dc_feature(s, ARM_FEATURE_M)) {
126
case ARMMMUIdx_E10_0:
222
/* Handle M-profile lazy FP state mechanics */
127
case ARMMMUIdx_E10_1:
223
128
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
224
+ /* Trigger lazy-state preservation if necessary */
129
- case ARMMMUIdx_S1E3:
225
+ if (s->v7m_lspact) {
130
+ case ARMMMUIdx_SE3:
226
+ /*
131
case ARMMMUIdx_SE10_0:
227
+ * Lazy state saving affects external memory and also the NVIC,
132
case ARMMMUIdx_SE10_1:
228
+ * so we must mark it as an IO operation for icount.
133
return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
229
+ */
230
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
231
+ gen_io_start();
232
+ }
233
+ gen_helper_v7m_preserve_fp_state(cpu_env);
234
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
235
+ gen_io_end();
236
+ }
237
+ /*
238
+ * If the preserve_fp_state helper doesn't throw an exception
239
+ * then it will clear LSPACT; we don't need to repeat this for
240
+ * any further FP insns in this TB.
241
+ */
242
+ s->v7m_lspact = false;
243
+ }
244
+
245
/* Update ownership of FP context: set FPCCR.S to match current state */
246
if (s->v8m_fpccr_s_wrong) {
247
TCGv_i32 tmp;
248
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
249
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
250
dc->v7m_new_fp_ctxt_needed =
251
FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
252
+ dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_A32, LSPACT);
253
dc->cp_regs = cpu->cp_regs;
254
dc->features = env->features;
255
256
--
2.20.1

Implement the VLSTM instruction for v7M for the FPU present case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
---
target/arm/cpu.h | 2 +
target/arm/helper.h | 2 +
target/arm/helper.c | 84 ++++++++++++++++++++++++++++++++++++++++++
target/arm/translate.c | 15 +++++++-
4 files changed, 102 insertions(+), 1 deletion(-)

From: Richard Henderson <richard.henderson@linaro.org>

This is part of a reorganization to the set of mmu_idx.
The non-secure EL2 regime only has a single stage translation;
there is no point in pointing out that the idx is for stage1.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 4 ++--
target/arm/internals.h | 2 +-
target/arm/helper.c | 22 +++++++++++-----------
target/arm/translate.c | 2 +-
4 files changed, 15 insertions(+), 15 deletions(-)
12
18
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
21
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
18
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
24
typedef enum ARMMMUIdx {
19
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
25
ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
20
#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
26
ARMMMUIdx_E10_1 = 1 | ARM_MMU_IDX_A,
21
+#define EXCP_LSERR 21 /* v8M LSERR SecureFault */
27
- ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
22
+#define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
28
+ ARMMMUIdx_E2 = 2 | ARM_MMU_IDX_A,
23
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
29
ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
24
30
ARMMMUIdx_SE10_0 = 4 | ARM_MMU_IDX_A,
25
#define ARMV7M_EXCP_RESET 1
31
ARMMMUIdx_SE10_1 = 5 | ARM_MMU_IDX_A,
26
diff --git a/target/arm/helper.h b/target/arm/helper.h
32
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
33
typedef enum ARMMMUIdxBit {
34
ARMMMUIdxBit_E10_0 = 1 << 0,
35
ARMMMUIdxBit_E10_1 = 1 << 1,
36
- ARMMMUIdxBit_S1E2 = 1 << 2,
37
+ ARMMMUIdxBit_E2 = 1 << 2,
38
ARMMMUIdxBit_SE3 = 1 << 3,
39
ARMMMUIdxBit_SE10_0 = 1 << 4,
40
ARMMMUIdxBit_SE10_1 = 1 << 5,
41
diff --git a/target/arm/internals.h b/target/arm/internals.h
27
index XXXXXXX..XXXXXXX 100644
42
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.h
43
--- a/target/arm/internals.h
29
+++ b/target/arm/helper.h
44
+++ b/target/arm/internals.h
30
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
45
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
31
46
case ARMMMUIdx_E10_1:
32
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
47
case ARMMMUIdx_Stage1_E0:
33
48
case ARMMMUIdx_Stage1_E1:
34
+DEF_HELPER_2(v7m_vlstm, void, env, i32)
49
- case ARMMMUIdx_S1E2:
35
+
50
+ case ARMMMUIdx_E2:
36
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
51
case ARMMMUIdx_Stage2:
37
52
case ARMMMUIdx_MPrivNegPri:
38
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
53
case ARMMMUIdx_MUserNegPri:
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
54
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
55
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
56
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
57
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
58
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
44
g_assert_not_reached();
59
{
60
CPUState *cs = env_cpu(env);
61
62
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2);
63
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
45
}
64
}
46
65
47
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
66
static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
48
+{
67
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
49
+ /* translate.c should never generate calls here in user-only mode */
50
+ g_assert_not_reached();
51
+}
52
+
53
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
54
{
68
{
55
/* The TT instructions can be used by unprivileged code, but in
69
CPUState *cs = env_cpu(env);
56
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
70
57
}
71
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2);
72
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
58
}
73
}
59
74
60
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
75
static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
61
+{
76
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
62
+ /* fptr is the value of Rn, the frame pointer we store the FP regs to */
77
CPUState *cs = env_cpu(env);
63
+ bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
78
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
64
+ bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
79
65
+
80
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2);
66
+ assert(env->v7m.secure);
81
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
67
+
82
}
68
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
83
69
+ return;
84
static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
70
+ }
85
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
71
+
86
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
72
+ /* Check access to the coprocessor is permitted */
87
73
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
88
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
74
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
89
- ARMMMUIdxBit_S1E2);
75
+ }
90
+ ARMMMUIdxBit_E2);
76
+
91
}
77
+ if (lspact) {
92
78
+        /* LSPACT should not be active when there is active FP state */
+        raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
+    }
+
+    if (fptr & 7) {
+        raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
+    }
+
+    /*
+     * Note that we do not use v7m_stack_write() here, because the
+     * accesses should not set the FSR bits for stacking errors if they
+     * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
+     * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
+     * and longjmp out.
+     */
+    if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
+        bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
+        int i;
+
+        for (i = 0; i < (ts ? 32 : 16); i += 2) {
+            uint64_t dn = *aa32_vfp_dreg(env, i / 2);
+            uint32_t faddr = fptr + 4 * i;
+            uint32_t slo = extract64(dn, 0, 32);
+            uint32_t shi = extract64(dn, 32, 32);
+
+            if (i >= 16) {
+                faddr += 8; /* skip the slot for the FPSCR */
+            }
+            cpu_stl_data(env, faddr, slo);
+            cpu_stl_data(env, faddr + 4, shi);
+        }
+        cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
+
+        /*
+         * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
+         * leave them unchanged, matching our choice in v7m_preserve_fp_state.
+         */
+        if (ts) {
+            for (i = 0; i < 32; i += 2) {
+                *aa32_vfp_dreg(env, i / 2) = 0;
+            }
+            vfp_set_fpscr(env, 0);
+        }
+    } else {
+        v7m_update_fpccr(env, fptr, false);
+    }
+
+    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
+}
+
 static bool v7m_push_stack(ARMCPU *cpu)
 {
     /* Do the "set up stack frame" part of exception entry,
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
             [EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
             [EXCP_STKOF] = "v8M STKOF UsageFault",
             [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
+            [EXCP_LSERR] = "v8M LSERR UsageFault",
+            [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
         };

         if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
         break;
+    case EXCP_LSERR:
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+        break;
+    case EXCP_UNALIGNED:
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+        break;
     case EXCP_SWI:
         /* The PC already points to the next instruction. */
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
                 if (!s->v8m_secure || (insn & 0x0040f0ff)) {
                     goto illegal_op;
                 }
-                /* Just NOP since FP support is not implemented */
+
+                if (arm_dc_feature(s, ARM_FEATURE_VFP)) {
+                    TCGv_i32 fptr = load_reg(s, rn);
+
+                    if (extract32(insn, 20, 1)) {
+                        /* VLLDM */
+                    } else {
+                        gen_helper_v7m_vlstm(cpu_env, fptr);
+                    }
+                    tcg_temp_free_i32(fptr);
+
+                    /* End the TB, because we have updated FP control bits */
+                    s->base.is_jmp = DISAS_UPDATE;
+                }
                 break;
             }
             if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
--
2.20.1


 static const ARMCPRegInfo cp_reginfo[] = {
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     uint64_t par64;

-    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_S1E2);
+    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);

     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 }
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
         break;
     case 4: /* AT S1E2R, AT S1E2W */
-        mmu_idx = ARMMMUIdx_S1E2;
+        mmu_idx = ARMMMUIdx_E2;
         break;
     case 6: /* AT S1E3R, AT S1E3W */
         mmu_idx = ARMMMUIdx_SE3;
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     ARMCPU *cpu = env_archcpu(env);
     CPUState *cs = CPU(cpu);

-    tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2);
+    tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
 }

 static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     CPUState *cs = env_cpu(env);

-    tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2);
+    tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
 }

 static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
     CPUState *cs = CPU(cpu);
     uint64_t pageaddr = sextract64(value << 12, 0, 56);

-    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2);
+    tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
 }

 static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     uint64_t pageaddr = sextract64(value << 12, 0, 56);

     tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
-                                             ARMMMUIdxBit_S1E2);
+                                             ARMMMUIdxBit_E2);
 }

 static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
     case ARMMMUIdx_Stage2:
-    case ARMMMUIdx_S1E2:
+    case ARMMMUIdx_E2:
         return 2;
     case ARMMMUIdx_SE3:
         return 3;
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
      * otherwise, access as if at PL0.
      */
     switch (s->mmu_idx) {
-    case ARMMMUIdx_S1E2:        /* this one is UNPREDICTABLE */
+    case ARMMMUIdx_E2:          /* this one is UNPREDICTABLE */
    case ARMMMUIdx_E10_0:
    case ARMMMUIdx_E10_1:
         return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
--
2.20.1
We are close to running out of TB flags for AArch32; we could
start using the cs_base word, but before we do that we can
economise on our usage by sharing the same bits for the VFP
VECSTRIDE field and the XScale XSCALE_CPAR field. This
works because no XScale CPU ever had VFP.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
---
 target/arm/cpu.h | 10 ++++++----
 target/arm/cpu.c | 7 +++++++
 target/arm/helper.c | 6 +++++-
 target/arm/translate.c | 9 +++++++--
 4 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
 FIELD(TBFLAG_A32, THUMB, 0, 1)
 FIELD(TBFLAG_A32, VECLEN, 1, 3)
 FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
+/*
+ * We store the bottom two bits of the CPAR as TB flags and handle
+ * checks on the other bits at runtime. This shares the same bits as
+ * VECSTRIDE, which is OK as no XScale CPU has VFP.
+ */
+FIELD(TBFLAG_A32, XSCALE_CPAR, 4, 2)
 /*
  * Indicates whether cp register reads and writes by guest code should access
  * the secure or nonsecure bank of banked registers; note that this is not
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
 FIELD(TBFLAG_A32, VFPEN, 7, 1)
 FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
 FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
-/* We store the bottom two bits of the CPAR as TB flags and handle
- * checks on the other bits at runtime
- */
-FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
 /* For M profile only, Handler (ie not Thread) mode */
 FIELD(TBFLAG_A32, HANDLER, 21, 1)
 /* For M profile only, whether we should generate stack-limit checks */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         set_feature(env, ARM_FEATURE_THUMB_DSP);
     }

+    /*
+     * We rely on no XScale CPU having VFP so we can use the same bits in the
+     * TB flags field for VECSTRIDE and XSCALE_CPAR.
+     */
+    assert(!(arm_feature(env, ARM_FEATURE_VFP) &&
+             arm_feature(env, ARM_FEATURE_XSCALE)));
+
     if (arm_feature(env, ARM_FEATURE_V7) &&
         !arm_feature(env, ARM_FEATURE_M) &&
         !arm_feature(env, ARM_FEATURE_PMSA)) {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
             || arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
             flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
         }
-        flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
+        /* Note that XSCALE_CPAR shares bits with VECSTRIDE */
+        if (arm_feature(env, ARM_FEATURE_XSCALE)) {
+            flags = FIELD_DP32(flags, TBFLAG_A32,
+                               XSCALE_CPAR, env->cp15.c15_cpar);
+        }
     }

     flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
     dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
     dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
-    dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
-    dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
+    if (arm_feature(env, ARM_FEATURE_XSCALE)) {
+        dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
+        dc->vec_stride = 0;
+    } else {
+        dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
+        dc->c15_cpar = 0;
+    }
     dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_A32, HANDLER);
     dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
         regime_is_secure(env, dc->mmu_idx);
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

We had completely run out of TBFLAG bits.
Split A- and M-profile bits into two overlapping buckets.
This results in 4 free bits.

We used to initialize all of the a32 and m32 fields in DisasContext
by assignment, in arm_tr_init_disas_context. Now we only initialize
either the a32 or m32 by assignment, because the bits overlap in
tbflags. So zero the entire structure in gen_intermediate_code.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 68 ++++++++++++++++++++++++++----------------
 target/arm/helper.c | 17 +++++------
 target/arm/translate.c | 57 +++++++++++++++++++----------------
 3 files changed, 82 insertions(+), 60 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
  * We put flags which are shared between 32 and 64 bit mode at the top
  * of the word, and flags which apply to only one mode at the bottom.
  *
+ *  31          21    18    14          9              0
+ * +--------------+-----+-----+----------+--------------+
+ * |              |     |   TBFLAG_A32   |              |
+ * |              |     +-----+----------+  TBFLAG_AM32 |
+ * |  TBFLAG_ANY  |           |TBFLAG_M32|              |
+ * |              |           +-------------------------|
+ * |              |           |      TBFLAG_A64         |
+ * +--------------+-----------+-------------------------+
+ *  31          21                      14              0
+ *
  * Unless otherwise noted, these bits are cached in env->hflags.
  */
 FIELD(TBFLAG_ANY, AARCH64_STATE, 31, 1)
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1) /* Not cached. */
 /* Target EL if we take a floating-point-disabled exception */
 FIELD(TBFLAG_ANY, FPEXC_EL, 24, 2)
 FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
-/*
- * For A-profile only, target EL for debug exceptions.
- * Note that this overlaps with the M-profile-only HANDLER and STACKCHECK bits.
- */
+/* For A-profile only, target EL for debug exceptions. */
 FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 21, 2)

-/* Bit usage when in AArch32 state: */
-FIELD(TBFLAG_A32, THUMB, 0, 1) /* Not cached. */
-FIELD(TBFLAG_A32, VECLEN, 1, 3) /* Not cached. */
-FIELD(TBFLAG_A32, VECSTRIDE, 4, 2) /* Not cached. */
+/*
+ * Bit usage when in AArch32 state, both A- and M-profile.
+ */
+FIELD(TBFLAG_AM32, CONDEXEC, 0, 8) /* Not cached. */
+FIELD(TBFLAG_AM32, THUMB, 8, 1) /* Not cached. */
+
+/*
+ * Bit usage when in AArch32 state, for A-profile only.
+ */
+FIELD(TBFLAG_A32, VECLEN, 9, 3) /* Not cached. */
+FIELD(TBFLAG_A32, VECSTRIDE, 12, 2) /* Not cached. */
 /*
  * We store the bottom two bits of the CPAR as TB flags and handle
  * checks on the other bits at runtime. This shares the same bits as
  * VECSTRIDE, which is OK as no XScale CPU has VFP.
  * Not cached, because VECLEN+VECSTRIDE are not cached.
  */
-FIELD(TBFLAG_A32, XSCALE_CPAR, 4, 2)
+FIELD(TBFLAG_A32, XSCALE_CPAR, 12, 2)
+FIELD(TBFLAG_A32, VFPEN, 14, 1) /* Partially cached, minus FPEXC. */
+FIELD(TBFLAG_A32, SCTLR_B, 15, 1)
+FIELD(TBFLAG_A32, HSTR_ACTIVE, 16, 1)
 /*
  * Indicates whether cp register reads and writes by guest code should access
  * the secure or nonsecure bank of banked registers; note that this is not
  * the same thing as the current security state of the processor!
  */
-FIELD(TBFLAG_A32, NS, 6, 1)
-FIELD(TBFLAG_A32, VFPEN, 7, 1) /* Partially cached, minus FPEXC. */
-FIELD(TBFLAG_A32, CONDEXEC, 8, 8) /* Not cached. */
-FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
-FIELD(TBFLAG_A32, HSTR_ACTIVE, 17, 1)
+FIELD(TBFLAG_A32, NS, 17, 1)

-/* For M profile only, set if FPCCR.LSPACT is set */
-FIELD(TBFLAG_A32, LSPACT, 18, 1) /* Not cached. */
-/* For M profile only, set if we must create a new FP context */
-FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1) /* Not cached. */
-/* For M profile only, set if FPCCR.S does not match current security state */
-FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1) /* Not cached. */
-/* For M profile only, Handler (ie not Thread) mode */
-FIELD(TBFLAG_A32, HANDLER, 21, 1)
-/* For M profile only, whether we should generate stack-limit checks */
-FIELD(TBFLAG_A32, STACKCHECK, 22, 1)
+/*
+ * Bit usage when in AArch32 state, for M-profile only.
+ */
+/* Handler (ie not Thread) mode */
+FIELD(TBFLAG_M32, HANDLER, 9, 1)
+/* Whether we should generate stack-limit checks */
+FIELD(TBFLAG_M32, STACKCHECK, 10, 1)
+/* Set if FPCCR.LSPACT is set */
+FIELD(TBFLAG_M32, LSPACT, 11, 1) /* Not cached. */
+/* Set if we must create a new FP context */
+FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 12, 1) /* Not cached. */
+/* Set if FPCCR.S does not match current security state */
+FIELD(TBFLAG_M32, FPCCR_S_WRONG, 13, 1) /* Not cached. */

-/* Bit usage when in AArch64 state */
+/*
+ * Bit usage when in AArch64 state
+ */
 FIELD(TBFLAG_A64, TBII, 0, 2)
 FIELD(TBFLAG_A64, SVEEXC_EL, 2, 2)
 FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_m32(CPUARMState *env, int fp_el,
 {
     uint32_t flags = 0;

-    /* v8M always enables the fpu. */
-    flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
-
     if (arm_v7m_is_handler_mode(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, HANDLER, 1);
+        flags = FIELD_DP32(flags, TBFLAG_M32, HANDLER, 1);
     }

     /*
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_m32(CPUARMState *env, int fp_el,
     if (arm_feature(env, ARM_FEATURE_V8) &&
         !((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
           (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
+        flags = FIELD_DP32(flags, TBFLAG_M32, STACKCHECK, 1);
     }

     return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
         if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
             FIELD_EX32(env->v7m.fpccr[M_REG_S], V7M_FPCCR, S)
             != env->v7m.secure) {
-            flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
+            flags = FIELD_DP32(flags, TBFLAG_M32, FPCCR_S_WRONG, 1);
         }

         if ((env->v7m.fpccr[env->v7m.secure] & R_V7M_FPCCR_ASPEN_MASK) &&
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
              * active FP context; we must create a new FP context before
              * executing any FP insn.
              */
-            flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
+            flags = FIELD_DP32(flags, TBFLAG_M32, NEW_FP_CTXT_NEEDED, 1);
         }

         bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
         if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
-            flags = FIELD_DP32(flags, TBFLAG_A32, LSPACT, 1);
+            flags = FIELD_DP32(flags, TBFLAG_M32, LSPACT, 1);
         }
     } else {
         /*
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
         }
     }

-    flags = FIELD_DP32(flags, TBFLAG_A32, THUMB, env->thumb);
-    flags = FIELD_DP32(flags, TBFLAG_A32, CONDEXEC, env->condexec_bits);
+    flags = FIELD_DP32(flags, TBFLAG_AM32, THUMB, env->thumb);
+    flags = FIELD_DP32(flags, TBFLAG_AM32, CONDEXEC, env->condexec_bits);
     pstate_for_ss = env->uncached_cpsr;
 }

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
      */
     dc->secure_routed_to_el3 = arm_feature(env, ARM_FEATURE_EL3) &&
                                !arm_el_is_aa64(env, 3);
-    dc->thumb = FIELD_EX32(tb_flags, TBFLAG_A32, THUMB);
-    dc->sctlr_b = FIELD_EX32(tb_flags, TBFLAG_A32, SCTLR_B);
-    dc->hstr_active = FIELD_EX32(tb_flags, TBFLAG_A32, HSTR_ACTIVE);
+    dc->thumb = FIELD_EX32(tb_flags, TBFLAG_AM32, THUMB);
     dc->be_data = FIELD_EX32(tb_flags, TBFLAG_ANY, BE_DATA) ? MO_BE : MO_LE;
-    condexec = FIELD_EX32(tb_flags, TBFLAG_A32, CONDEXEC);
+    condexec = FIELD_EX32(tb_flags, TBFLAG_AM32, CONDEXEC);
     dc->condexec_mask = (condexec & 0xf) << 1;
     dc->condexec_cond = condexec >> 4;
+
     core_mmu_idx = FIELD_EX32(tb_flags, TBFLAG_ANY, MMUIDX);
     dc->mmu_idx = core_to_arm_mmu_idx(env, core_mmu_idx);
     dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
 #if !defined(CONFIG_USER_ONLY)
     dc->user = (dc->current_el == 0);
 #endif
-    dc->ns = FIELD_EX32(tb_flags, TBFLAG_A32, NS);
     dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
-    dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
-    dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
-    if (arm_feature(env, ARM_FEATURE_XSCALE)) {
-        dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
-        dc->vec_stride = 0;
+
+    if (arm_feature(env, ARM_FEATURE_M)) {
+        dc->vfp_enabled = 1;
+        dc->be_data = MO_TE;
+        dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_M32, HANDLER);
+        dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+            regime_is_secure(env, dc->mmu_idx);
+        dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_M32, STACKCHECK);
+        dc->v8m_fpccr_s_wrong =
+            FIELD_EX32(tb_flags, TBFLAG_M32, FPCCR_S_WRONG);
+        dc->v7m_new_fp_ctxt_needed =
+            FIELD_EX32(tb_flags, TBFLAG_M32, NEW_FP_CTXT_NEEDED);
+        dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_M32, LSPACT);
     } else {
-        dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
-        dc->c15_cpar = 0;
+        dc->be_data =
+            FIELD_EX32(tb_flags, TBFLAG_ANY, BE_DATA) ? MO_BE : MO_LE;
+        dc->debug_target_el =
+            FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
+        dc->sctlr_b = FIELD_EX32(tb_flags, TBFLAG_A32, SCTLR_B);
+        dc->hstr_active = FIELD_EX32(tb_flags, TBFLAG_A32, HSTR_ACTIVE);
+        dc->ns = FIELD_EX32(tb_flags, TBFLAG_A32, NS);
+        dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
+        if (arm_feature(env, ARM_FEATURE_XSCALE)) {
+            dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
+        } else {
+            dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
+            dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
+        }
     }
-    dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_A32, HANDLER);
-    dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
-        regime_is_secure(env, dc->mmu_idx);
-    dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
-    dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
-    dc->v7m_new_fp_ctxt_needed =
-        FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
-    dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_A32, LSPACT);
     dc->cp_regs = cpu->cp_regs;
     dc->features = env->features;

@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     dc->ss_active = FIELD_EX32(tb_flags, TBFLAG_ANY, SS_ACTIVE);
     dc->pstate_ss = FIELD_EX32(tb_flags, TBFLAG_ANY, PSTATE_SS);
     dc->is_ldex = false;
-    if (!arm_feature(env, ARM_FEATURE_M)) {
-        dc->debug_target_el = FIELD_EX32(tb_flags, TBFLAG_ANY, DEBUG_TARGET_EL);
-    }

     dc->page_start = dc->base.pc_first & TARGET_PAGE_MASK;

@@ -XXX,XX +XXX,XX @@ static const TranslatorOps thumb_translator_ops = {
 /* generate intermediate code for basic block 'tb'. */
 void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb, int max_insns)
 {
-    DisasContext dc;
+    DisasContext dc = { };
     const TranslatorOps *ops = &arm_translator_ops;

-    if (FIELD_EX32(tb->flags, TBFLAG_A32, THUMB)) {
+    if (FIELD_EX32(tb->flags, TBFLAG_AM32, THUMB)) {
         ops = &thumb_translator_ops;
     }
 #ifdef TARGET_AARCH64
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

We are about to expand the number of mmuidx to 10, and so need 4 bits.
For the benefit of reading the number out of -d exec, align it to the
penultimate nibble.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
  * We put flags which are shared between 32 and 64 bit mode at the top
  * of the word, and flags which apply to only one mode at the bottom.
  *
- *  31          21    18    14          9              0
+ *  31         20    18    14          9              0
  * +--------------+-----+-----+----------+--------------+
  * |              |     |   TBFLAG_A32   |              |
  * |              |     +-----+----------+  TBFLAG_AM32 |
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
  * |              |           +-------------------------|
  * |              |           |      TBFLAG_A64         |
  * +--------------+-----------+-------------------------+
- *  31          21                      14              0
+ *  31         20                      14              0
  *
  * Unless otherwise noted, these bits are cached in env->hflags.
  */
 FIELD(TBFLAG_ANY, AARCH64_STATE, 31, 1)
-FIELD(TBFLAG_ANY, MMUIDX, 28, 3)
-FIELD(TBFLAG_ANY, SS_ACTIVE, 27, 1)
-FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1) /* Not cached. */
+FIELD(TBFLAG_ANY, SS_ACTIVE, 30, 1)
+FIELD(TBFLAG_ANY, PSTATE_SS, 29, 1) /* Not cached. */
+FIELD(TBFLAG_ANY, BE_DATA, 28, 1)
+FIELD(TBFLAG_ANY, MMUIDX, 24, 4)
 /* Target EL if we take a floating-point-disabled exception */
-FIELD(TBFLAG_ANY, FPEXC_EL, 24, 2)
-FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
+FIELD(TBFLAG_ANY, FPEXC_EL, 22, 2)
 /* For A-profile only, target EL for debug exceptions. */
-FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 21, 2)
+FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 20, 2)

 /*
  * Bit usage when in AArch32 state, both A- and M-profile.
--
2.20.1
From: Richard Henderson <richard.henderson@linaro.org>

Define via macro expansion, so that renumbering of the base ARMMMUIdx
symbols is automatically reflected in the bit definitions.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 39 +++++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
 } ARMMMUIdx;

-/* Bit macros for the core-mmu-index values for each index,
+/*
+ * Bit macros for the core-mmu-index values for each index,
  * for use when calling tlb_flush_by_mmuidx() and friends.
  */
+#define TO_CORE_BIT(NAME) \
+    ARMMMUIdxBit_##NAME = 1 << (ARMMMUIdx_##NAME & ARM_MMU_IDX_COREIDX_MASK)
+
 typedef enum ARMMMUIdxBit {
-    ARMMMUIdxBit_E10_0 = 1 << 0,
-    ARMMMUIdxBit_E10_1 = 1 << 1,
-    ARMMMUIdxBit_E2 = 1 << 2,
-    ARMMMUIdxBit_SE3 = 1 << 3,
-    ARMMMUIdxBit_SE10_0 = 1 << 4,
-    ARMMMUIdxBit_SE10_1 = 1 << 5,
-    ARMMMUIdxBit_Stage2 = 1 << 6,
-    ARMMMUIdxBit_MUser = 1 << 0,
-    ARMMMUIdxBit_MPriv = 1 << 1,
-    ARMMMUIdxBit_MUserNegPri = 1 << 2,
-    ARMMMUIdxBit_MPrivNegPri = 1 << 3,
-    ARMMMUIdxBit_MSUser = 1 << 4,
-    ARMMMUIdxBit_MSPriv = 1 << 5,
-    ARMMMUIdxBit_MSUserNegPri = 1 << 6,
-    ARMMMUIdxBit_MSPrivNegPri = 1 << 7,
+    TO_CORE_BIT(E10_0),
+    TO_CORE_BIT(E10_1),
+    TO_CORE_BIT(E2),
+    TO_CORE_BIT(SE10_0),
+    TO_CORE_BIT(SE10_1),
+    TO_CORE_BIT(SE3),
+    TO_CORE_BIT(Stage2),
+
+    TO_CORE_BIT(MUser),
+    TO_CORE_BIT(MPriv),
+    TO_CORE_BIT(MUserNegPri),
+    TO_CORE_BIT(MPrivNegPri),
+    TO_CORE_BIT(MSUser),
+    TO_CORE_BIT(MSPriv),
+    TO_CORE_BIT(MSUserNegPri),
+    TO_CORE_BIT(MSPrivNegPri),
 } ARMMMUIdxBit;

+#undef TO_CORE_BIT
+
 #define MMU_USER_IDX 0

 static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
--
2.20.1
Move the NS TBFLAG down from bit 19 to bit 6, which has not
been used since commit c1e3781090b9d36c60 in 2015, when we
started passing the entire MMU index in the TB flags rather
than just a 'privilege level' bit.

This rearrangement is not strictly necessary, but means that
we can put M-profile-only bits next to each other rather
than scattered across the flag word.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
---
 target/arm/cpu.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
 FIELD(TBFLAG_A32, THUMB, 0, 1)
 FIELD(TBFLAG_A32, VECLEN, 1, 3)
 FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
+/*
+ * Indicates whether cp register reads and writes by guest code should access
+ * the secure or nonsecure bank of banked registers; note that this is not
+ * the same thing as the current security state of the processor!
+ */
+FIELD(TBFLAG_A32, NS, 6, 1)
 FIELD(TBFLAG_A32, VFPEN, 7, 1)
 FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
 FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
  * checks on the other bits at runtime
  */
 FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
-/* Indicates whether cp register reads and writes by guest code should access
- * the secure or nonsecure bank of banked registers; note that this is not
- * the same thing as the current security state of the processor!
- */
-FIELD(TBFLAG_A32, NS, 19, 1)
 /* For M profile only, Handler (ie not Thread) mode */
 FIELD(TBFLAG_A32, HANDLER, 21, 1)
 /* For M profile only, whether we should generate stack-limit checks */
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

Replace the magic numbers with the relevant ARM_MMU_IDX_M_* constants.
Keep the definitions short by referencing previous symbols.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_SE10_0 = 4 | ARM_MMU_IDX_A,
     ARMMMUIdx_SE10_1 = 5 | ARM_MMU_IDX_A,
     ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
-    ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MUserNegPri = 2 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MPrivNegPri = 3 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MSUser = 4 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MSPriv = 5 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MSUserNegPri = 6 | ARM_MMU_IDX_M,
-    ARMMMUIdx_MSPrivNegPri = 7 | ARM_MMU_IDX_M,
+    ARMMMUIdx_MUser = ARM_MMU_IDX_M,
+    ARMMMUIdx_MPriv = ARM_MMU_IDX_M | ARM_MMU_IDX_M_PRIV,
+    ARMMMUIdx_MUserNegPri = ARMMMUIdx_MUser | ARM_MMU_IDX_M_NEGPRI,
+    ARMMMUIdx_MPrivNegPri = ARMMMUIdx_MPriv | ARM_MMU_IDX_M_NEGPRI,
+    ARMMMUIdx_MSUser = ARMMMUIdx_MUser | ARM_MMU_IDX_M_S,
+    ARMMMUIdx_MSPriv = ARMMMUIdx_MPriv | ARM_MMU_IDX_M_S,
+    ARMMMUIdx_MSUserNegPri = ARMMMUIdx_MUserNegPri | ARM_MMU_IDX_M_S,
+    ARMMMUIdx_MSPrivNegPri = ARMMMUIdx_MPrivNegPri | ARM_MMU_IDX_M_S,
     /* Indexes below here don't have TLBs and are used only for AT system
      * instructions or for the first stage of an S12 page table walk.
      */
--
2.20.1
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.

We are going to need this for the lazy-FP-stacking code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-21-peter.maydell@linaro.org
---
 target/arm/cpu.h | 7 +++++++
 target/arm/helper.c | 14 +++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
     }
 }


From: Richard Henderson <richard.henderson@linaro.org>

Prepare for, but do not yet implement, the EL2&0 regime.
This involves adding the new MMUIdx enumerators and adjusting
some of the MMUIdx related predicates to match.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 134 ++++++++++++++++++-----------------------
 target/arm/internals.h | 35 +++++++++++
 target/arm/helper.c | 66 +++++++++++++++++---
 target/arm/translate.c | 1 -
 5 files changed, 152 insertions(+), 86 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_MIN 10
 #endif

-#define NB_MMU_MODES 8
+#define NB_MMU_MODES 9

 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
  *  + NonSecure EL1 & 0 stage 1
  *  + NonSecure EL1 & 0 stage 2
  *  + NonSecure EL2
- *  + Secure EL1 & EL0
+ *  + NonSecure EL2 & 0 (ARMv8.1-VHE)
+ *  + Secure EL1 & 0
  *  + Secure EL3
  * If EL3 is 32-bit:
  *  + NonSecure PL1 & 0 stage 1
  *  + NonSecure PL1 & 0 stage 2
  *  + NonSecure PL2
- *  + Secure PL0 & PL1
+ *  + Secure PL0
+ *  + Secure PL1
  * (reminder: for 32 bit EL3, Secure PL1 is *EL3*, not EL1.)
  *
  * For QEMU, an mmu_idx is not quite the same as a translation regime because:
- *  1. we need to split the "EL1 & 0" regimes into two mmu_idxes, because they
- *     may differ in access permissions even if the VA->PA map is the same
+ *  1. we need to split the "EL1 & 0" and "EL2 & 0" regimes into two mmu_idxes,
+ *     because they may differ in access permissions even if the VA->PA map is
+ *     the same
  *  2. we want to cache in our TLB the full VA->IPA->PA lookup for a stage 1+2
  *     translation, which means that we have one mmu_idx that deals with two
  *     concatenated translation regimes [this sort of combined s1+2 TLB is
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
  *  4. we can also safely fold together the "32 bit EL3" and "64 bit EL3"
  *     translation regimes, because they map reasonably well to each other
  *     and they can't both be active at the same time.
- * This gives us the following list of mmu_idx values:
+ *  5. we want to be able to use the TLB for accesses done as part of a
+ *     stage1 page table walk, rather than having to walk the stage2 page
+ *     table over and over.
  *
- * NS EL0 (aka NS PL0) stage 1+2
- * NS EL1 (aka NS PL1) stage 1+2
+ * This gives us the following list of cases:
+ *
+ * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
+ * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
76
+ * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
77
+ * NS EL0 EL2&0
78
+ * NS EL2 EL2&0
79
* NS EL2 (aka NS PL2)
80
+ * S EL0 EL1&0 (aka S PL0)
81
+ * S EL1 EL1&0 (not used if EL3 is 32 bit)
82
* S EL3 (aka S PL1)
83
- * S EL0 (aka S PL0)
84
- * S EL1 (not used if EL3 is 32 bit)
85
- * NS EL0+1 stage 2
86
+ * NS EL1&0 stage 2
87
*
88
- * (The last of these is an mmu_idx because we want to be able to use the TLB
89
- * for the accesses done as part of a stage 1 page table walk, rather than
90
- * having to walk the stage 2 page table over and over.)
91
+ * for a total of 9 different mmu_idx.
92
*
93
* R profile CPUs have an MPU, but can use the same set of MMU indexes
94
* as A profile. They only need to distinguish NS EL0 and NS EL1 (and
95
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
96
* For M profile we arrange them to have a bit for priv, a bit for negpri
97
* and a bit for secure.
98
*/
99
-#define ARM_MMU_IDX_A 0x10 /* A profile */
100
-#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
101
-#define ARM_MMU_IDX_M 0x40 /* M profile */
102
+#define ARM_MMU_IDX_A 0x10 /* A profile */
103
+#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
104
+#define ARM_MMU_IDX_M 0x40 /* M profile */
105
106
-/* meanings of the bits for M profile mmu idx values */
107
-#define ARM_MMU_IDX_M_PRIV 0x1
108
+/* Meanings of the bits for M profile mmu idx values */
109
+#define ARM_MMU_IDX_M_PRIV 0x1
110
#define ARM_MMU_IDX_M_NEGPRI 0x2
111
-#define ARM_MMU_IDX_M_S 0x4
112
+#define ARM_MMU_IDX_M_S 0x4 /* Secure */
113
114
-#define ARM_MMU_IDX_TYPE_MASK (~0x7)
115
-#define ARM_MMU_IDX_COREIDX_MASK 0x7
116
+#define ARM_MMU_IDX_TYPE_MASK \
117
+ (ARM_MMU_IDX_A | ARM_MMU_IDX_M | ARM_MMU_IDX_NOTLB)
118
+#define ARM_MMU_IDX_COREIDX_MASK 0xf
119
120
typedef enum ARMMMUIdx {
121
- ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
122
- ARMMMUIdx_E10_1 = 1 | ARM_MMU_IDX_A,
123
- ARMMMUIdx_E2 = 2 | ARM_MMU_IDX_A,
124
- ARMMMUIdx_SE3 = 3 | ARM_MMU_IDX_A,
125
- ARMMMUIdx_SE10_0 = 4 | ARM_MMU_IDX_A,
126
- ARMMMUIdx_SE10_1 = 5 | ARM_MMU_IDX_A,
127
- ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_A,
128
+ /*
129
+ * A-profile.
130
+ */
131
+ ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
132
+ ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
133
+
134
+ ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
135
+
136
+ ARMMMUIdx_E2 = 3 | ARM_MMU_IDX_A,
137
+ ARMMMUIdx_E20_2 = 4 | ARM_MMU_IDX_A,
138
+
139
+ ARMMMUIdx_SE10_0 = 5 | ARM_MMU_IDX_A,
140
+ ARMMMUIdx_SE10_1 = 6 | ARM_MMU_IDX_A,
141
+ ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
142
+
143
+ ARMMMUIdx_Stage2 = 8 | ARM_MMU_IDX_A,
144
+
145
+ /*
146
+ * These are not allocated TLBs and are used only for AT system
147
+ * instructions or for the first stage of an S12 page table walk.
148
+ */
149
+ ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
150
+ ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
151
+
152
+ /*
153
+ * M-profile.
154
+ */
155
ARMMMUIdx_MUser = ARM_MMU_IDX_M,
156
ARMMMUIdx_MPriv = ARM_MMU_IDX_M | ARM_MMU_IDX_M_PRIV,
157
ARMMMUIdx_MUserNegPri = ARMMMUIdx_MUser | ARM_MMU_IDX_M_NEGPRI,
158
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
159
ARMMMUIdx_MSPriv = ARMMMUIdx_MPriv | ARM_MMU_IDX_M_S,
160
ARMMMUIdx_MSUserNegPri = ARMMMUIdx_MUserNegPri | ARM_MMU_IDX_M_S,
161
ARMMMUIdx_MSPrivNegPri = ARMMMUIdx_MPrivNegPri | ARM_MMU_IDX_M_S,
162
- /* Indexes below here don't have TLBs and are used only for AT system
163
- * instructions or for the first stage of an S12 page table walk.
164
- */
165
- ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
166
- ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
167
} ARMMMUIdx;
168
169
/*
170
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
171
172
typedef enum ARMMMUIdxBit {
173
TO_CORE_BIT(E10_0),
174
+ TO_CORE_BIT(E20_0),
175
TO_CORE_BIT(E10_1),
176
TO_CORE_BIT(E2),
177
+ TO_CORE_BIT(E20_2),
178
TO_CORE_BIT(SE10_0),
179
TO_CORE_BIT(SE10_1),
180
TO_CORE_BIT(SE3),
181
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
182
183
#define MMU_USER_IDX 0
184
185
-static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
186
-{
187
- return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
188
-}
189
-
190
-static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
191
-{
192
- if (arm_feature(env, ARM_FEATURE_M)) {
193
- return mmu_idx | ARM_MMU_IDX_M;
194
- } else {
195
- return mmu_idx | ARM_MMU_IDX_A;
196
- }
197
-}
198
-
199
-/* Return the exception level we're running at if this is our mmu_idx */
200
-static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
201
-{
202
- switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) {
203
- case ARM_MMU_IDX_A:
204
- return mmu_idx & 3;
205
- case ARM_MMU_IDX_M:
206
- return mmu_idx & ARM_MMU_IDX_M_PRIV;
207
- default:
208
- g_assert_not_reached();
209
- }
210
-}
211
-
212
-/*
213
- * Return the MMU index for a v7M CPU with all relevant information
214
- * manually specified.
215
- */
216
-ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
217
- bool secstate, bool priv, bool negpri);
218
-
219
-/* Return the MMU index for a v7M CPU in the specified security and
220
- * privilege state.
221
- */
222
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
223
- bool secstate, bool priv);
224
-
225
-/* Return the MMU index for a v7M CPU in the specified security state */
226
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
227
-
228
/**
229
* cpu_mmu_index:
230
* @env: The cpu environment
231
diff --git a/target/arm/internals.h b/target/arm/internals.h
232
index XXXXXXX..XXXXXXX 100644
233
--- a/target/arm/internals.h
234
+++ b/target/arm/internals.h
235
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
236
MMUAccessType access_type, int mmu_idx,
237
bool probe, uintptr_t retaddr);
238
239
+static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
240
+{
241
+ return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
242
+}
243
+
244
+static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
245
+{
246
+ if (arm_feature(env, ARM_FEATURE_M)) {
247
+ return mmu_idx | ARM_MMU_IDX_M;
248
+ } else {
249
+ return mmu_idx | ARM_MMU_IDX_A;
250
+ }
251
+}
252
+
253
+int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx);
254
+
25
+/*
255
+/*
26
+ * Return the MMU index for a v7M CPU with all relevant information
256
+ * Return the MMU index for a v7M CPU with all relevant information
27
+ * manually specified.
257
+ * manually specified.
28
+ */
258
+ */
29
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
259
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
30
+ bool secstate, bool priv, bool negpri);
260
+ bool secstate, bool priv, bool negpri);
31
+
261
+
32
/* Return the MMU index for a v7M CPU in the specified security and
262
+/*
33
* privilege state.
263
+ * Return the MMU index for a v7M CPU in the specified security and
34
*/
264
+ * privilege state.
265
+ */
266
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
267
+ bool secstate, bool priv);
268
+
269
+/* Return the MMU index for a v7M CPU in the specified security state */
270
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
271
+
272
/* Return true if the stage 1 translation regime is using LPAE format page
273
* tables */
274
bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
275
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
276
switch (mmu_idx) {
277
case ARMMMUIdx_E10_0:
278
case ARMMMUIdx_E10_1:
279
+ case ARMMMUIdx_E20_0:
280
+ case ARMMMUIdx_E20_2:
281
case ARMMMUIdx_Stage1_E0:
282
case ARMMMUIdx_Stage1_E1:
283
case ARMMMUIdx_E2:
35
diff --git a/target/arm/helper.c b/target/arm/helper.c
284
diff --git a/target/arm/helper.c b/target/arm/helper.c
36
index XXXXXXX..XXXXXXX 100644
285
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/helper.c
286
--- a/target/arm/helper.c
38
+++ b/target/arm/helper.c
287
+++ b/target/arm/helper.c
288
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
289
#endif /* !CONFIG_USER_ONLY */
290
291
/* Return the exception level which controls this address translation regime */
292
-static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
293
+static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
294
{
295
switch (mmu_idx) {
296
+ case ARMMMUIdx_E20_0:
297
+ case ARMMMUIdx_E20_2:
298
case ARMMMUIdx_Stage2:
299
case ARMMMUIdx_E2:
300
return 2;
301
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
302
case ARMMMUIdx_SE10_1:
303
case ARMMMUIdx_Stage1_E0:
304
case ARMMMUIdx_Stage1_E1:
305
+ case ARMMMUIdx_E10_0:
306
+ case ARMMMUIdx_E10_1:
307
case ARMMMUIdx_MPrivNegPri:
308
case ARMMMUIdx_MUserNegPri:
309
case ARMMMUIdx_MPriv:
310
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
311
*/
312
static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
313
{
314
- if (mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_E10_1) {
315
- mmu_idx += (ARMMMUIdx_Stage1_E0 - ARMMMUIdx_E10_0);
316
+ switch (mmu_idx) {
317
+ case ARMMMUIdx_E10_0:
318
+ return ARMMMUIdx_Stage1_E0;
319
+ case ARMMMUIdx_E10_1:
320
+ return ARMMMUIdx_Stage1_E1;
321
+ default:
322
+ return mmu_idx;
323
}
324
- return mmu_idx;
325
}
326
327
/* Return true if the translation regime is using LPAE format page tables */
328
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
329
{
330
switch (mmu_idx) {
331
case ARMMMUIdx_SE10_0:
332
+ case ARMMMUIdx_E20_0:
333
case ARMMMUIdx_Stage1_E0:
334
case ARMMMUIdx_MUser:
335
case ARMMMUIdx_MSUser:
39
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
336
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
40
return 0;
337
return 0;
41
}
338
}
42
339
43
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
340
+/* Return the exception level we're running at if this is our mmu_idx */
44
- bool secstate, bool priv)
341
+int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
45
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
46
+ bool secstate, bool priv, bool negpri)
47
{
48
ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
49
50
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
51
mmu_idx |= ARM_MMU_IDX_M_PRIV;
52
}
53
54
- if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
55
+ if (negpri) {
56
mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
57
}
58
59
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
60
return mmu_idx;
61
}
62
63
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
64
+ bool secstate, bool priv)
65
+{
342
+{
66
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
343
+ if (mmu_idx & ARM_MMU_IDX_M) {
67
+
344
+ return mmu_idx & ARM_MMU_IDX_M_PRIV;
68
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
345
+ }
346
+
347
+ switch (mmu_idx) {
348
+ case ARMMMUIdx_E10_0:
349
+ case ARMMMUIdx_E20_0:
350
+ case ARMMMUIdx_SE10_0:
351
+ return 0;
352
+ case ARMMMUIdx_E10_1:
353
+ case ARMMMUIdx_SE10_1:
354
+ return 1;
355
+ case ARMMMUIdx_E2:
356
+ case ARMMMUIdx_E20_2:
357
+ return 2;
358
+ case ARMMMUIdx_SE3:
359
+ return 3;
360
+ default:
361
+ g_assert_not_reached();
362
+ }
69
+}
363
+}
70
+
364
+
71
/* Return the MMU index for a v7M CPU in the specified security state */
365
#ifndef CONFIG_TCG
72
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
366
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
73
{
367
{
368
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
369
return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
370
}
371
372
- if (el < 2 && arm_is_secure_below_el3(env)) {
373
- return ARMMMUIdx_SE10_0 + el;
374
- } else {
375
- return ARMMMUIdx_E10_0 + el;
376
+ switch (el) {
377
+ case 0:
378
+ /* TODO: ARMv8.1-VHE */
379
+ if (arm_is_secure_below_el3(env)) {
380
+ return ARMMMUIdx_SE10_0;
381
+ }
382
+ return ARMMMUIdx_E10_0;
383
+ case 1:
384
+ if (arm_is_secure_below_el3(env)) {
385
+ return ARMMMUIdx_SE10_1;
386
+ }
387
+ return ARMMMUIdx_E10_1;
388
+ case 2:
389
+ /* TODO: ARMv8.1-VHE */
390
+ /* TODO: ARMv8.4-SecEL2 */
391
+ return ARMMMUIdx_E2;
392
+ case 3:
393
+ return ARMMMUIdx_SE3;
394
+ default:
395
+ g_assert_not_reached();
396
}
397
}
398
399
diff --git a/target/arm/translate.c b/target/arm/translate.c
400
index XXXXXXX..XXXXXXX 100644
401
--- a/target/arm/translate.c
402
+++ b/target/arm/translate.c
403
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
404
case ARMMMUIdx_MSUserNegPri:
405
case ARMMMUIdx_MSPrivNegPri:
406
return arm_to_core_mmu_idx(ARMMMUIdx_MSUserNegPri);
407
- case ARMMMUIdx_Stage2:
408
default:
409
g_assert_not_reached();
410
}
74
--
411
--
75
2.20.1
412
2.20.1
76
413
77
414
diff view generated by jsdifflib
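The widened encoding this patch introduces (a 4-bit core index in the low bits, type bits above it) can be exercised outside QEMU. Below is a minimal standalone sketch of the constants and two of the helpers as they appear in the patch; `g_assert_not_reached()` is replaced by a plain `assert(0)` since glib is not assumed here, and the M-profile and NOTLB cases are omitted for brevity:

```c
#include <assert.h>

/* Type bits sit above the 4-bit core index, as in the patch. */
#define ARM_MMU_IDX_A            0x10
#define ARM_MMU_IDX_NOTLB        0x20
#define ARM_MMU_IDX_M            0x40
#define ARM_MMU_IDX_COREIDX_MASK 0xf

typedef enum ARMMMUIdx {
    ARMMMUIdx_E10_0  = 0 | ARM_MMU_IDX_A,
    ARMMMUIdx_E20_0  = 1 | ARM_MMU_IDX_A,
    ARMMMUIdx_E10_1  = 2 | ARM_MMU_IDX_A,
    ARMMMUIdx_E2     = 3 | ARM_MMU_IDX_A,
    ARMMMUIdx_E20_2  = 4 | ARM_MMU_IDX_A,
    ARMMMUIdx_SE10_0 = 5 | ARM_MMU_IDX_A,
    ARMMMUIdx_SE10_1 = 6 | ARM_MMU_IDX_A,
    ARMMMUIdx_SE3    = 7 | ARM_MMU_IDX_A,
    ARMMMUIdx_Stage2 = 8 | ARM_MMU_IDX_A,
} ARMMMUIdx;

/* Strip the type bits to get the index into the 9-entry TLB array. */
static int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
{
    return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
}

/*
 * Moved out of line by the patch: with E20_0 inserted between E10_0 and
 * E10_1, the EL is no longer recoverable from the low two bits, so the
 * A-profile cases need an explicit switch.
 */
static int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
{
    switch (mmu_idx) {
    case ARMMMUIdx_E10_0:
    case ARMMMUIdx_E20_0:
    case ARMMMUIdx_SE10_0:
        return 0;
    case ARMMMUIdx_E10_1:
    case ARMMMUIdx_SE10_1:
        return 1;
    case ARMMMUIdx_E2:
    case ARMMMUIdx_E20_2:
        return 2;
    case ARMMMUIdx_SE3:
        return 3;
    default:
        assert(0);
        return -1;
    }
}
```

This also shows why `ARM_MMU_IDX_COREIDX_MASK` had to grow from `0x7` to `0xf`: `ARMMMUIdx_Stage2` now carries core index 8, which no longer fits in three bits.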
From: Richard Henderson <richard.henderson@linaro.org>

Create a predicate to indicate whether the regime has
both positive and negative addresses.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 18 ++++++++++++++++++
target/arm/helper.c | 23 ++++++-----------------
target/arm/translate-a64.c | 3 +--
3 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline void arm_call_el_change_hook(ARMCPU *cpu)
}
}

+/* Return true if this address translation regime has two ranges. */
+static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
+{
+ switch (mmu_idx) {
+ case ARMMMUIdx_Stage1_E0:
+ case ARMMMUIdx_Stage1_E1:
+ case ARMMMUIdx_E10_0:
+ case ARMMMUIdx_E10_1:
+ case ARMMMUIdx_E20_0:
+ case ARMMMUIdx_E20_2:
+ case ARMMMUIdx_SE10_0:
+ case ARMMMUIdx_SE10_1:
+ return true;
+ default:
+ return false;
+ }
+}
+
/* Return true if this address translation regime is secure */
static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
{
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
}

if (is_aa64) {
- switch (regime_el(env, mmu_idx)) {
- case 1:
- if (!is_user) {
- xn = pxn || (user_rw & PAGE_WRITE);
- }
- break;
- case 2:
- case 3:
- break;
+ if (regime_has_2_ranges(mmu_idx) && !is_user) {
+ xn = pxn || (user_rw & PAGE_WRITE);
}
} else if (arm_feature(env, ARM_FEATURE_V7)) {
switch (regime_el(env, mmu_idx)) {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
ARMMMUIdx mmu_idx)
{
uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
- uint32_t el = regime_el(env, mmu_idx);
bool tbi, tbid, epd, hpd, using16k, using64k;
int select, tsz;

@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
*/
select = extract64(va, 55, 1);

- if (el > 1) {
+ if (!regime_has_2_ranges(mmu_idx)) {
tsz = extract32(tcr, 0, 6);
using64k = extract32(tcr, 14, 1);
using16k = extract32(tcr, 15, 1);
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
param = aa64_va_parameters(env, address, mmu_idx,
access_type != MMU_INST_FETCH);
level = 0;
- /* If we are in 64-bit EL2 or EL3 then there is no TTBR1, so mark it
- * invalid.
- */
- ttbr1_valid = (el < 2);
+ ttbr1_valid = regime_has_2_ranges(mmu_idx);
addrsize = 64 - 8 * param.tbi;
inputsize = 64 - param.tsz;
} else {
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,

flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);

- /* FIXME: ARMv8.1-VHE S2 translation regime. */
- if (regime_el(env, stage1) < 2) {
+ /* Get control bits for tagged addresses. */
+ if (regime_has_2_ranges(mmu_idx)) {
ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
tbid = (p1.tbi << 1) | p0.tbi;
tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_top_byte_ignore(DisasContext *s, TCGv_i64 dst,
if (tbi == 0) {
/* Load unmodified address */
tcg_gen_mov_i64(dst, src);
- } else if (s->current_el >= 2) {
- /* FIXME: ARMv8.1-VHE S2 translation regime. */
+ } else if (!regime_has_2_ranges(s->mmu_idx)) {
/* Force tag byte to all zero */
tcg_gen_extract_i64(dst, src, 0, 56);
} else {
--
2.20.1

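The new predicate can be checked in isolation. This sketch reuses the enumerator values from the previous patch in the series (not QEMU's headers) and mirrors the switch exactly: the EL1&0 and EL2&0 regimes have a TTBR0/TTBR1 pair and therefore both a positive and a negative address half, while EL2, EL3 and stage 2 have only TTBR0:

```c
#include <assert.h>
#include <stdbool.h>

/* Enumerator values as established by the preceding patch. */
typedef enum ARMMMUIdx {
    ARMMMUIdx_E10_0     = 0x10,
    ARMMMUIdx_E20_0     = 0x11,
    ARMMMUIdx_E10_1     = 0x12,
    ARMMMUIdx_E2        = 0x13,
    ARMMMUIdx_E20_2     = 0x14,
    ARMMMUIdx_SE10_0    = 0x15,
    ARMMMUIdx_SE10_1    = 0x16,
    ARMMMUIdx_SE3       = 0x17,
    ARMMMUIdx_Stage2    = 0x18,
    ARMMMUIdx_Stage1_E0 = 0x20,
    ARMMMUIdx_Stage1_E1 = 0x21,
} ARMMMUIdx;

/* Same shape as the predicate added to target/arm/internals.h. */
static bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
{
    switch (mmu_idx) {
    case ARMMMUIdx_Stage1_E0:
    case ARMMMUIdx_Stage1_E1:
    case ARMMMUIdx_E10_0:
    case ARMMMUIdx_E10_1:
    case ARMMMUIdx_E20_0:
    case ARMMMUIdx_E20_2:
    case ARMMMUIdx_SE10_0:
    case ARMMMUIdx_SE10_1:
        return true;
    default:
        return false;
    }
}
```

Replacing the various `regime_el(...) < 2` and `el > 1` tests with this predicate is what lets the later EL2&0 (VHE) regime, which is controlled by EL2 but still has two ranges, take the two-range paths in `get_S1prot()`, `aa64_va_parameters_both()` and `gen_top_byte_ignore()`.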
From: Richard Henderson <richard.henderson@linaro.org>

Return the indexes for the EL2&0 regime when the appropriate bits
are set within HCR_EL2.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
}

+ /* See ARM pseudo-function ELIsInHost. */
switch (el) {
case 0:
- /* TODO: ARMv8.1-VHE */
if (arm_is_secure_below_el3(env)) {
return ARMMMUIdx_SE10_0;
}
+ if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)
+ && arm_el_is_aa64(env, 2)) {
+ return ARMMMUIdx_E20_0;
+ }
return ARMMMUIdx_E10_0;
case 1:
if (arm_is_secure_below_el3(env)) {
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
}
return ARMMMUIdx_E10_1;
case 2:
- /* TODO: ARMv8.1-VHE */
/* TODO: ARMv8.4-SecEL2 */
+ /* Note that TGE does not apply at EL2. */
+ if ((env->cp15.hcr_el2 & HCR_E2H) && arm_el_is_aa64(env, 2)) {
+ return ARMMMUIdx_E20_2;
+ }
return ARMMMUIdx_E2;
case 3:
return ARMMMUIdx_SE3;
--
2.20.1

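The HCR_EL2 checks in this patch follow the ARM pseudo-function ELIsInHost. A standalone sketch of just that selection logic is below; `mmu_idx_for_el` is a hypothetical stand-in for the touched cases of `arm_mmu_idx_el()` (the secure cases and `g_assert_not_reached()` are omitted), with the HCR_EL2 bit positions taken from the ARM ARM (TGE is bit 27, E2H is bit 34):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* HCR_EL2 bit positions per the ARM ARM. */
#define HCR_TGE (1ULL << 27)
#define HCR_E2H (1ULL << 34)

typedef enum { IdxE10_0, IdxE20_0, IdxE2, IdxE20_2 } Idx;

/*
 * EL0 runs in the EL2&0 ("host") regime only when E2H and TGE are
 * both set and EL2 is AArch64; EL2 itself is in the EL2&0 regime
 * whenever E2H is set -- TGE does not apply at EL2.
 */
static Idx mmu_idx_for_el(int el, uint64_t hcr_el2, bool el2_is_aa64)
{
    if (el == 0) {
        if ((hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)
            && el2_is_aa64) {
            return IdxE20_0;
        }
        return IdxE10_0;
    }
    /* el == 2 */
    if ((hcr_el2 & HCR_E2H) && el2_is_aa64) {
        return IdxE20_2;
    }
    return IdxE2;
}
```

The asymmetry is the point of the two hunks: EL0 needs both E2H and TGE before it leaves the EL1&0 regime, while EL2 switches to EL2&0 on E2H alone.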
From: Richard Henderson <richard.henderson@linaro.org>

Use the correct sctlr for EL2&0 regime. Due to header ordering,
and where arm_mmu_idx_el is declared, we need to move the function
out of line. Use the function in many more places in order to
select the correct control.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 10 +---------
target/arm/helper-a64.c | 2 +-
target/arm/helper.c | 20 +++++++++++++++-----
target/arm/pauth_helper.c | 9 +--------
4 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_sctlr_b(CPUARMState *env)
(env->cp15.sctlr_el[1] & SCTLR_B) != 0;
}

-static inline uint64_t arm_sctlr(CPUARMState *env, int el)
-{
- if (el == 0) {
- /* FIXME: ARMv8.1-VHE S2 translation regime. */
- return env->cp15.sctlr_el[1];
- } else {
- return env->cp15.sctlr_el[el];
- }
-}
+uint64_t arm_sctlr(CPUARMState *env, int el);

static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
bool sctlr_b)
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static void daif_check(CPUARMState *env, uint32_t op,
uint32_t imm, uintptr_t ra)
{
/* DAIF update to PSTATE. This is OK from EL0 only if UMA is set. */
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
+ if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UMA)) {
raise_exception_ra(env, EXCP_UDEF,
syn_aa64_sysregtrap(0, extract32(op, 0, 3),
extract32(op, 3, 3), 4,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void aa64_fpsr_write(CPUARMState *env, const ARMCPRegInfo *ri,
static CPAccessResult aa64_daif_access(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
{
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UMA)) {
+ if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UMA)) {
return CP_ACCESS_TRAP;
}
return CP_ACCESS_OK;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
/* Cache invalidate/clean: NOP, but EL0 must UNDEF unless
* SCTLR_EL1.UCI is set.
*/
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UCI)) {
68
/* Cache invalidate/clean: NOP, but EL0 must UNDEF unless
69
* SCTLR_EL1.UCI is set.
70
*/
71
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UCI)) {
72
+ if (arm_current_el(env) == 0 && !(arm_sctlr(env, 0) & SCTLR_UCI)) {
73
return CP_ACCESS_TRAP;
74
}
75
return CP_ACCESS_OK;
76
@@ -XXX,XX +XXX,XX @@ static uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
64
}
77
}
65
}
78
}
66
79
67
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
80
-#ifndef CONFIG_USER_ONLY
81
+uint64_t arm_sctlr(CPUARMState *env, int el)
82
+{
83
+ /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
84
+ if (el == 0) {
85
+ ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
86
+ el = (mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1);
87
+ }
88
+ return env->cp15.sctlr_el[el];
89
+}
90
91
/* Return the SCTLR value which controls this address translation regime */
92
-static inline uint32_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
93
+static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
94
{
95
return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
96
}
97
98
+#ifndef CONFIG_USER_ONLY
99
+
100
/* Return true if the specified stage of address translation is disabled */
101
static inline bool regime_translation_disabled(CPUARMState *env,
102
ARMMMUIdx mmu_idx)
103
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
104
flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
105
}
106
107
- sctlr = arm_sctlr(env, el);
108
+ sctlr = regime_sctlr(env, stage1);
109
110
if (arm_cpu_data_is_big_endian_a64(el, sctlr)) {
111
flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
112
diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
68
index XXXXXXX..XXXXXXX 100644
113
index XXXXXXX..XXXXXXX 100644
69
--- a/hw/arm/smmuv3.c
114
--- a/target/arm/pauth_helper.c
70
+++ b/hw/arm/smmuv3.c
115
+++ b/target/arm/pauth_helper.c
71
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
116
@@ -XXX,XX +XXX,XX @@ static void pauth_check_trap(CPUARMState *env, int el, uintptr_t ra)
72
/* invalidate an asid/iova tuple in all mr's */
117
73
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
118
static bool pauth_key_enabled(CPUARMState *env, int el, uint32_t bit)
74
{
119
{
75
- SMMUNotifierNode *node;
120
- uint32_t sctlr;
76
+ SMMUDevice *sdev;
121
- if (el == 0) {
77
122
- /* FIXME: ARMv8.1-VHE S2 translation regime. */
78
- QLIST_FOREACH(node, &s->notifiers_list, next) {
123
- sctlr = env->cp15.sctlr_el[1];
79
- IOMMUMemoryRegion *mr = &node->sdev->iommu;
124
- } else {
80
+ QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
125
- sctlr = env->cp15.sctlr_el[el];
81
+ IOMMUMemoryRegion *mr = &sdev->iommu;
82
IOMMUNotifier *n;
83
84
trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
85
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
86
SMMUDevice *sdev = container_of(iommu, SMMUDevice, iommu);
87
SMMUv3State *s3 = sdev->smmu;
88
SMMUState *s = &(s3->smmu_state);
89
- SMMUNotifierNode *node = NULL;
90
- SMMUNotifierNode *next_node = NULL;
91
92
if (new & IOMMU_NOTIFIER_MAP) {
93
int bus_num = pci_bus_num(sdev->bus);
94
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
95
96
if (old == IOMMU_NOTIFIER_NONE) {
97
trace_smmuv3_notify_flag_add(iommu->parent_obj.name);
98
- node = g_malloc0(sizeof(*node));
99
- node->sdev = sdev;
100
- QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
101
- return;
102
- }
126
- }
103
-
127
- return (sctlr & bit) != 0;
104
- /* update notifier node with new flags */
128
+ return (arm_sctlr(env, el) & bit) != 0;
105
- QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
106
- if (node->sdev == sdev) {
107
- if (new == IOMMU_NOTIFIER_NONE) {
108
- trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
109
- QLIST_REMOVE(node, next);
110
- g_free(node);
111
- }
112
- return;
113
- }
114
+ QLIST_INSERT_HEAD(&s->devices_with_notifiers, sdev, next);
115
+ } else if (new == IOMMU_NOTIFIER_NONE) {
116
+ trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
117
+ QLIST_REMOVE(sdev, next);
118
}
119
}
129
}
120
130
131
uint64_t HELPER(pacia)(CPUARMState *env, uint64_t x, uint64_t y)
121
--
132
--
122
2.20.1
133
2.20.1
123
134
124
135
diff view generated by jsdifflib
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Since uWireSlave is only used in this new header, there is no
need to expose it via "qemu/typedefs.h".

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-9-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/omap.h | 6 +-----
include/hw/devices.h | 15 ---------------
include/hw/input/tsc2xxx.h | 36 ++++++++++++++++++++++++++++++++++++
include/qemu/typedefs.h | 1 -
hw/arm/nseries.c | 2 +-
hw/arm/palm.c | 2 +-
hw/input/tsc2005.c | 2 +-
hw/input/tsc210x.c | 4 ++--
MAINTAINERS | 2 ++
9 files changed, 44 insertions(+), 26 deletions(-)
create mode 100644 include/hw/input/tsc2xxx.h

diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/omap.h
+++ b/include/hw/arm/omap.h
@@ -XXX,XX +XXX,XX @@
#include "exec/memory.h"
# define hw_omap_h        "omap.h"
#include "hw/irq.h"
+#include "hw/input/tsc2xxx.h"
#include "target/arm/cpu-qom.h"
#include "qemu/log.h"

@@ -XXX,XX +XXX,XX @@ qemu_irq *omap_mpuio_in_get(struct omap_mpuio_s *s);
void omap_mpuio_out_set(struct omap_mpuio_s *s, int line, qemu_irq handler);
void omap_mpuio_key(struct omap_mpuio_s *s, int row, int col, int down);

-struct uWireSlave {
- uint16_t (*receive)(void *opaque);
- void (*send)(void *opaque, uint16_t data);
- void *opaque;
-};
struct omap_uwire_s;
void omap_uwire_attach(struct omap_uwire_s *s,
 uWireSlave *slave, int chipselect);
diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@
/* Devices that have nowhere better to go. */

#include "hw/hw.h"
-#include "ui/console.h"

/* smc91c111.c */
void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
@@ -XXX,XX +XXX,XX @@ void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
/* lan9118.c */
void lan9118_init(NICInfo *, uint32_t, qemu_irq);

-/* tsc210x.c */
-uWireSlave *tsc2102_init(qemu_irq pint);
-uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
-I2SCodec *tsc210x_codec(uWireSlave *chip);
-uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
-void tsc210x_set_transform(uWireSlave *chip,
- MouseTransformInfo *info);
-void tsc210x_key_event(uWireSlave *chip, int key, int down);
-
-/* tsc2005.c */
-void *tsc2005_init(qemu_irq pintdav);
-uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
-
#endif
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/input/tsc2xxx.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * TI touchscreen controller
+ *
+ * Copyright (c) 2006 Andrzej Zaborowski
+ * Copyright (C) 2008 Nokia Corporation
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_INPUT_TSC2XXX_H
+#define HW_INPUT_TSC2XXX_H
+
+#include "hw/irq.h"
+#include "ui/console.h"
+
+typedef struct uWireSlave {
+ uint16_t (*receive)(void *opaque);
+ void (*send)(void *opaque, uint16_t data);
+ void *opaque;
+} uWireSlave;
+
+/* tsc210x.c */
+uWireSlave *tsc2102_init(qemu_irq pint);
+uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
+I2SCodec *tsc210x_codec(uWireSlave *chip);
+uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
+void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
+void tsc210x_key_event(uWireSlave *chip, int key, int down);
+
+/* tsc2005.c */
+void *tsc2005_init(qemu_irq pintdav);
+uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
+void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
+
+#endif
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -XXX,XX +XXX,XX @@ typedef struct RAMBlock RAMBlock;
typedef struct Range Range;
typedef struct SHPCDevice SHPCDevice;
typedef struct SSIBus SSIBus;
-typedef struct uWireSlave uWireSlave;
typedef struct VirtIODevice VirtIODevice;
typedef struct Visitor Visitor;
typedef void SaveStateHandler(QEMUFile *f, void *opaque);
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
#include "ui/console.h"
#include "hw/boards.h"
#include "hw/i2c/i2c.h"
-#include "hw/devices.h"
#include "hw/display/blizzard.h"
+#include "hw/input/tsc2xxx.h"
#include "hw/misc/cbus.h"
#include "hw/misc/tmp105.h"
#include "hw/block/flash.h"
diff --git a/hw/arm/palm.c b/hw/arm/palm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/palm.c
+++ b/hw/arm/palm.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/omap.h"
#include "hw/boards.h"
#include "hw/arm/arm.h"
-#include "hw/devices.h"
+#include "hw/input/tsc2xxx.h"
#include "hw/loader.h"
#include "exec/address-spaces.h"
#include "cpu.h"
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -XXX,XX +XXX,XX @@
#include "hw/hw.h"
#include "qemu/timer.h"
#include "ui/console.h"
-#include "hw/devices.h"
+#include "hw/input/tsc2xxx.h"
#include "trace.h"

#define TSC_CUT_RESOLUTION(value, p)    ((value) >> (16 - (p ? 12 : 10)))
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/tsc210x.c
+++ b/hw/input/tsc210x.c
@@ -XXX,XX +XXX,XX @@
#include "audio/audio.h"
#include "qemu/timer.h"
#include "ui/console.h"
-#include "hw/arm/omap.h"    /* For I2SCodec and uWireSlave */
-#include "hw/devices.h"
+#include "hw/arm/omap.h" /* For I2SCodec */
+#include "hw/input/tsc2xxx.h"

#define TSC_DATA_REGISTERS_PAGE        0x0
#define TSC_CONTROL_REGISTERS_PAGE    0x1
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
F: hw/misc/cbus.c
F: hw/timer/twl92230.c
F: include/hw/display/blizzard.h
+F: include/hw/input/tsc2xxx.h
F: include/hw/misc/cbus.h

Palm
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
S: Odd Fixes
F: hw/arm/palm.c
F: hw/input/tsc210x.c
+F: include/hw/input/tsc2xxx.h

Raspberry Pi
M: Peter Maydell <peter.maydell@linaro.org>
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

The comment that we don't support EL2 is somewhat out of date.
Update to include checks against HCR_EL2.TDZ.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
 bool isread)
{
- /* We don't implement EL2, so the only control on DC ZVA is the
- * bit in the SCTLR which can prohibit access for EL0.
- */
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_DZE)) {
- return CP_ACCESS_TRAP;
+ int cur_el = arm_current_el(env);
+
+ if (cur_el < 2) {
+ uint64_t hcr = arm_hcr_el2_eff(env);
+
+ if (cur_el == 0) {
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+ if (!(env->cp15.sctlr_el[2] & SCTLR_DZE)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ } else {
+ if (!(env->cp15.sctlr_el[1] & SCTLR_DZE)) {
+ return CP_ACCESS_TRAP;
+ }
+ if (hcr & HCR_TDZ) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ }
+ } else if (hcr & HCR_TDZ) {
+ return CP_ACCESS_TRAP_EL2;
+ }
 }
 return CP_ACCESS_OK;
}
--
2.20.1
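The rewritten aa64_zva_access() above encodes a three-way routing decision: at EL0 under an E2H+TGE "host" regime the check uses SCTLR_EL2.DZE and traps to EL2; otherwise EL0 first checks SCTLR_EL1.DZE and then HCR_EL2.TDZ; EL1 checks only TDZ; EL2 and above are never trapped here. A sketch of that decision as a pure function over booleans, so each path can be exercised in isolation (names and the enum are illustrative, not QEMU's API):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { ACCESS_OK, TRAP_EL1, TRAP_EL2 } AccessResult;

/* Decision logic of the DC ZVA access check, lifted out of CPU state:
 *   el      - current exception level (0..3)
 *   e2h_tge - HCR_EL2.E2H and HCR_EL2.TGE both set (EL2&0 regime)
 *   tdz     - HCR_EL2.TDZ set
 *   dze1    - SCTLR_EL1.DZE set
 *   dze2    - SCTLR_EL2.DZE set
 */
static AccessResult check_dc_zva(int el, bool e2h_tge, bool tdz,
                                 bool dze1, bool dze2)
{
    if (el >= 2) {
        return ACCESS_OK;                        /* EL2/EL3: no trap here */
    }
    if (el == 0) {
        if (e2h_tge) {
            return dze2 ? ACCESS_OK : TRAP_EL2;  /* host EL0 uses SCTLR_EL2,
                                                    and TDZ is not consulted */
        }
        if (!dze1) {
            return TRAP_EL1;                     /* SCTLR_EL1.DZE clear */
        }
    }
    return tdz ? TRAP_EL2 : ACCESS_OK;           /* TDZ traps EL0 and EL1 */
}
```

The ctr_el0_access() change in the TID2 patch follows exactly the same shape, with SCTLR.UCT and HCR_EL2.TID2 in place of DZE and TDZ.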
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
---
target/arm/helper.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
 /* translate.c should have made BXNS UNDEF unless we're secure */
 assert(env->v7m.secure);

+ if (!(dest & 1)) {
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+ }
 switch_v7m_security_state(env, dest & 1);
 env->thumb = 1;
 env->regs[15] = dest & ~1;
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
 */
 write_v7m_exception(env, 1);
 }
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
 switch_v7m_security_state(env, 0);
 env->thumb = 1;
 env->regs[15] = dest;
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

Update to include checks against HCR_EL2.TID2.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
 bool isread)
{
- /* Only accessible in EL0 if SCTLR.UCT is set (and only in AArch64,
- * but the AArch32 CTR has its own reginfo struct)
- */
- if (arm_current_el(env) == 0 && !(env->cp15.sctlr_el[1] & SCTLR_UCT)) {
- return CP_ACCESS_TRAP;
+ int cur_el = arm_current_el(env);
+
+ if (cur_el < 2) {
+ uint64_t hcr = arm_hcr_el2_eff(env);
+
+ if (cur_el == 0) {
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+ if (!(env->cp15.sctlr_el[2] & SCTLR_UCT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ } else {
+ if (!(env->cp15.sctlr_el[1] & SCTLR_UCT)) {
+ return CP_ACCESS_TRAP;
+ }
+ if (hcr & HCR_TID2) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ }
+ } else if (hcr & HCR_TID2) {
+ return CP_ACCESS_TRAP_EL2;
+ }
 }

 if (arm_current_el(env) < 2 && arm_hcr_el2_eff(env) & HCR_TID2) {
--
2.20.1
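In the BXNS change above, bit 0 of the destination selects the target security state, and only a branch to Non-secure (bit 0 clear) clears SFPA in the Secure-banked CONTROL register; BLXNS always transitions to Non-secure, so it clears SFPA unconditionally. A minimal sketch of just that bit manipulation (the CpuState struct and function name are illustrative; SFPA is bit 3 of the v8M CONTROL register):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CONTROL_SFPA_MASK (1u << 3)  /* SFPA is bit 3 of v8M CONTROL */

typedef struct {
    uint32_t control_s;   /* Secure-banked CONTROL register */
    bool secure;          /* current security state */
    uint32_t pc;
} CpuState;

/* Sketch of the BXNS destination handling: bit 0 of dest selects the
 * security state; a transition to Non-secure also clears
 * CONTROL_S.SFPA, matching the pseudocode BranchToNS(). */
static void bxns(CpuState *cpu, uint32_t dest)
{
    if (!(dest & 1)) {
        cpu->control_s &= ~CONTROL_SFPA_MASK;
    }
    cpu->secure = dest & 1;
    cpu->pc = dest & ~1u;   /* bit 0 is not part of the branch address */
}
```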
1
The M-profile floating point support has three associated config
1
From: Richard Henderson <richard.henderson@linaro.org>
2
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
3
CPACR and NSACR have behaviour other than reads-as-zero.
4
Add support for all of these as simple reads-as-written registers.
5
We will hook up actual functionality later.
6
2
7
The main complexity here is handling the FPCCR register, which
3
Tested-by: Alex Bennée <alex.bennee@linaro.org>
8
has a mix of banked and unbanked bits.
4
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20200206105448.4726-26-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/cpu-qom.h | 1 +
10
target/arm/cpu.h | 11 +++++----
11
target/arm/cpu.c | 3 ++-
12
target/arm/helper.c | 56 ++++++++++++++++++++++++++++++++++++++++++++
13
4 files changed, 65 insertions(+), 6 deletions(-)
9
14
10
Note that we don't share storage with the A-profile
15
diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
11
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
16
index XXXXXXX..XXXXXXX 100644
12
is quite similar, for two reasons:
17
--- a/target/arm/cpu-qom.h
13
* the M profile CPACR is banked between security states
18
+++ b/target/arm/cpu-qom.h
14
* it preserves the invariant that M profile uses no state
19
@@ -XXX,XX +XXX,XX @@ void arm_gt_ptimer_cb(void *opaque);
15
inside the cp15 substruct
20
void arm_gt_vtimer_cb(void *opaque);
16
21
void arm_gt_htimer_cb(void *opaque);
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
void arm_gt_stimer_cb(void *opaque);
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
23
+void arm_gt_hvtimer_cb(void *opaque);
19
Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
24
20
---
25
#define ARM_AFF0_SHIFT 0
21
target/arm/cpu.h | 34 ++++++++++++
26
#define ARM_AFF0_MASK (0xFFULL << ARM_AFF0_SHIFT)
22
hw/intc/armv7m_nvic.c | 125 ++++++++++++++++++++++++++++++++++++++++++
23
target/arm/cpu.c | 5 ++
24
target/arm/machine.c | 16 ++++++
25
4 files changed, 180 insertions(+)
26
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
28
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
29
--- a/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
31
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGenericTimer {
32
uint32_t scr[M_REG_NUM_BANKS];
32
uint64_t ctl; /* Timer Control register */
33
uint32_t msplim[M_REG_NUM_BANKS];
33
} ARMGenericTimer;
34
uint32_t psplim[M_REG_NUM_BANKS];
34
35
+ uint32_t fpcar[M_REG_NUM_BANKS];
35
-#define GTIMER_PHYS 0
36
+ uint32_t fpccr[M_REG_NUM_BANKS];
36
-#define GTIMER_VIRT 1
37
+ uint32_t fpdscr[M_REG_NUM_BANKS];
37
-#define GTIMER_HYP 2
38
+ uint32_t cpacr[M_REG_NUM_BANKS];
38
-#define GTIMER_SEC 3
39
+ uint32_t nsacr;
39
-#define NUM_GTIMERS 4
40
} v7m;
40
+#define GTIMER_PHYS 0
41
41
+#define GTIMER_VIRT 1
42
/* Information associated with an exception about to be taken:
42
+#define GTIMER_HYP 2
43
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CSSELR, LEVEL, 1, 3)
43
+#define GTIMER_SEC 3
44
*/
44
+#define GTIMER_HYPVIRT 4
45
FIELD(V7M_CSSELR, INDEX, 0, 4)
45
+#define NUM_GTIMERS 5
46
46
47
+/* v7M FPCCR bits */
47
typedef struct {
48
+FIELD(V7M_FPCCR, LSPACT, 0, 1)
48
uint64_t raw_tcr;
49
+FIELD(V7M_FPCCR, USER, 1, 1)
50
+FIELD(V7M_FPCCR, S, 2, 1)
51
+FIELD(V7M_FPCCR, THREAD, 3, 1)
52
+FIELD(V7M_FPCCR, HFRDY, 4, 1)
53
+FIELD(V7M_FPCCR, MMRDY, 5, 1)
54
+FIELD(V7M_FPCCR, BFRDY, 6, 1)
55
+FIELD(V7M_FPCCR, SFRDY, 7, 1)
56
+FIELD(V7M_FPCCR, MONRDY, 8, 1)
57
+FIELD(V7M_FPCCR, SPLIMVIOL, 9, 1)
58
+FIELD(V7M_FPCCR, UFRDY, 10, 1)
59
+FIELD(V7M_FPCCR, RES0, 11, 15)
60
+FIELD(V7M_FPCCR, TS, 26, 1)
61
+FIELD(V7M_FPCCR, CLRONRETS, 27, 1)
62
+FIELD(V7M_FPCCR, CLRONRET, 28, 1)
63
+FIELD(V7M_FPCCR, LSPENS, 29, 1)
64
+FIELD(V7M_FPCCR, LSPEN, 30, 1)
65
+FIELD(V7M_FPCCR, ASPEN, 31, 1)
66
+/* These bits are banked. Others are non-banked and live in the M_REG_S bank */
67
+#define R_V7M_FPCCR_BANKED_MASK \
68
+ (R_V7M_FPCCR_LSPACT_MASK | \
69
+ R_V7M_FPCCR_USER_MASK | \
70
+ R_V7M_FPCCR_THREAD_MASK | \
71
+ R_V7M_FPCCR_MMRDY_MASK | \
72
+ R_V7M_FPCCR_SPLIMVIOL_MASK | \
73
+ R_V7M_FPCCR_UFRDY_MASK | \
74
+ R_V7M_FPCCR_ASPEN_MASK)
75
+
76
/*
77
* System register ID fields.
78
*/
79
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/intc/armv7m_nvic.c
82
+++ b/hw/intc/armv7m_nvic.c
83
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
84
}
85
case 0xd84: /* CSSELR */
86
return cpu->env.v7m.csselr[attrs.secure];
87
+ case 0xd88: /* CPACR */
88
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
89
+ return 0;
90
+ }
91
+ return cpu->env.v7m.cpacr[attrs.secure];
92
+ case 0xd8c: /* NSACR */
93
+ if (!attrs.secure || !arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
94
+ return 0;
95
+ }
96
+ return cpu->env.v7m.nsacr;
97
/* TODO: Implement debug registers. */
98
case 0xd90: /* MPU_TYPE */
99
/* Unified MPU; if the MPU is not present this value is zero */
100
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
101
return 0;
102
}
103
return cpu->env.v7m.sfar;
104
+ case 0xf34: /* FPCCR */
105
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
106
+ return 0;
107
+ }
108
+ if (attrs.secure) {
109
+ return cpu->env.v7m.fpccr[M_REG_S];
110
+ } else {
111
+ /*
112
+ * NS can read LSPEN, CLRONRET and MONRDY. It can read
113
+ * BFRDY and HFRDY if AIRCR.BFHFNMINS != 0;
114
+ * other non-banked bits RAZ.
115
+ * TODO: MONRDY should RAZ/WI if DEMCR.SDME is set.
116
+ */
117
+ uint32_t value = cpu->env.v7m.fpccr[M_REG_S];
118
+ uint32_t mask = R_V7M_FPCCR_LSPEN_MASK |
119
+ R_V7M_FPCCR_CLRONRET_MASK |
120
+ R_V7M_FPCCR_MONRDY_MASK;
121
+
122
+ if (s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
123
+ mask |= R_V7M_FPCCR_BFRDY_MASK | R_V7M_FPCCR_HFRDY_MASK;
124
+ }
125
+
126
+ value &= mask;
127
+
128
+ value |= cpu->env.v7m.fpccr[M_REG_NS];
129
+ return value;
130
+ }
131
+ case 0xf38: /* FPCAR */
132
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
133
+ return 0;
134
+ }
135
+ return cpu->env.v7m.fpcar[attrs.secure];
136
+ case 0xf3c: /* FPDSCR */
137
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
138
+ return 0;
139
+ }
140
+ return cpu->env.v7m.fpdscr[attrs.secure];
141
case 0xf40: /* MVFR0 */
142
return cpu->isar.mvfr0;
143
case 0xf44: /* MVFR1 */
144
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
145
cpu->env.v7m.csselr[attrs.secure] = value & R_V7M_CSSELR_INDEX_MASK;
146
}
147
break;
148
+ case 0xd88: /* CPACR */
149
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
150
+ /* We implement only the Floating Point extension's CP10/CP11 */
151
+ cpu->env.v7m.cpacr[attrs.secure] = value & (0xf << 20);
152
+ }
153
+ break;
154
+ case 0xd8c: /* NSACR */
155
+ if (attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
156
+ /* We implement only the Floating Point extension's CP10/CP11 */
157
+ cpu->env.v7m.nsacr = value & (3 << 10);
158
+ }
159
+ break;
160
case 0xd90: /* MPU_TYPE */
161
return; /* RO */
162
case 0xd94: /* MPU_CTRL */
163
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
164
}
165
break;
166
}
167
+ case 0xf34: /* FPCCR */
168
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
169
+ /* Not all bits here are banked. */
170
+ uint32_t fpccr_s;
171
+
172
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
173
+ /* Don't allow setting of bits not present in v7M */
174
+ value &= (R_V7M_FPCCR_LSPACT_MASK |
175
+ R_V7M_FPCCR_USER_MASK |
176
+ R_V7M_FPCCR_THREAD_MASK |
177
+ R_V7M_FPCCR_HFRDY_MASK |
178
+ R_V7M_FPCCR_MMRDY_MASK |
179
+ R_V7M_FPCCR_BFRDY_MASK |
180
+ R_V7M_FPCCR_MONRDY_MASK |
181
+ R_V7M_FPCCR_LSPEN_MASK |
182
+ R_V7M_FPCCR_ASPEN_MASK);
183
+ }
184
+ value &= ~R_V7M_FPCCR_RES0_MASK;
185
+
186
+ if (!attrs.secure) {
187
+ /* Some non-banked bits are configurably writable by NS */
188
+ fpccr_s = cpu->env.v7m.fpccr[M_REG_S];
189
+ if (!(fpccr_s & R_V7M_FPCCR_LSPENS_MASK)) {
190
+ uint32_t lspen = FIELD_EX32(value, V7M_FPCCR, LSPEN);
191
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, LSPEN, lspen);
192
+ }
193
+ if (!(fpccr_s & R_V7M_FPCCR_CLRONRETS_MASK)) {
194
+ uint32_t cor = FIELD_EX32(value, V7M_FPCCR, CLRONRET);
195
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, CLRONRET, cor);
196
+ }
197
+ if ((s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
198
+ uint32_t hfrdy = FIELD_EX32(value, V7M_FPCCR, HFRDY);
199
+ uint32_t bfrdy = FIELD_EX32(value, V7M_FPCCR, BFRDY);
200
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
201
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
202
+ }
203
+ /* TODO MONRDY should RAZ/WI if DEMCR.SDME is set */
204
+ {
205
+ uint32_t monrdy = FIELD_EX32(value, V7M_FPCCR, MONRDY);
206
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, MONRDY, monrdy);
207
+ }
208
+
209
+ /*
210
+ * All other non-banked bits are RAZ/WI from NS; write
211
+ * just the banked bits to fpccr[M_REG_NS].
212
+ */
213
+ value &= R_V7M_FPCCR_BANKED_MASK;
214
+ cpu->env.v7m.fpccr[M_REG_NS] = value;
215
+ } else {
216
+ fpccr_s = value;
217
+ }
218
+ cpu->env.v7m.fpccr[M_REG_S] = fpccr_s;
219
+ }
220
+ break;
221
+ case 0xf38: /* FPCAR */
222
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
223
+ value &= ~7;
224
+ cpu->env.v7m.fpcar[attrs.secure] = value;
225
+ }
226
+ break;
227
+ case 0xf3c: /* FPDSCR */
228
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
229
+ value &= 0x07c00000;
230
+ cpu->env.v7m.fpdscr[attrs.secure] = value;
231
+ }
232
+ break;
233
case 0xf50: /* ICIALLU */
234
case 0xf58: /* ICIMVAU */
235
case 0xf5c: /* DCIMVAC */
236
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
49
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
237
index XXXXXXX..XXXXXXX 100644
50
index XXXXXXX..XXXXXXX 100644
238
--- a/target/arm/cpu.c
51
--- a/target/arm/cpu.c
239
+++ b/target/arm/cpu.c
52
+++ b/target/arm/cpu.c
240
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
53
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
241
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
242
}
54
}
243
55
}
244
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
56
245
+ env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
57
-
246
+ env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
58
{
247
+ R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
59
uint64_t scale;
248
+ }
60
249
/* Unlike A/R profile, M profile defines the reset LR value */
61
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
250
env->regs[14] = 0xffffffff;
62
arm_gt_htimer_cb, cpu);
251
63
cpu->gt_timer[GTIMER_SEC] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
252
diff --git a/target/arm/machine.c b/target/arm/machine.c
64
arm_gt_stimer_cb, cpu);
65
+ cpu->gt_timer[GTIMER_HYPVIRT] = timer_new(QEMU_CLOCK_VIRTUAL, scale,
66
+ arm_gt_hvtimer_cb, cpu);
67
}
68
#endif
69
70
diff --git a/target/arm/helper.c b/target/arm/helper.c
253
index XXXXXXX..XXXXXXX 100644
71
index XXXXXXX..XXXXXXX 100644
254
--- a/target/arm/machine.c
72
--- a/target/arm/helper.c
255
+++ b/target/arm/machine.c
73
+++ b/target/arm/helper.c
256
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_v8m = {
74
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_tval_read(CPUARMState *env, const ARMCPRegInfo *ri,
75
76
switch (timeridx) {
77
case GTIMER_VIRT:
78
+ case GTIMER_HYPVIRT:
79
offset = gt_virt_cnt_offset(env);
80
break;
257
}
81
}
82
@@ -XXX,XX +XXX,XX @@ static void gt_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
83
84
switch (timeridx) {
85
case GTIMER_VIRT:
86
+ case GTIMER_HYPVIRT:
87
offset = gt_virt_cnt_offset(env);
        break;
    }
@@ -XXX,XX +XXX,XX @@ static void gt_sec_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
     gt_ctl_write(env, ri, GTIMER_SEC, value);
 }
 
+static void gt_hv_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    gt_timer_reset(env, ri, GTIMER_HYPVIRT);
+}
+
+static void gt_hv_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                             uint64_t value)
+{
+    gt_cval_write(env, ri, GTIMER_HYPVIRT, value);
+}
+
+static uint64_t gt_hv_tval_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return gt_tval_read(env, ri, GTIMER_HYPVIRT);
+}
+
+static void gt_hv_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                             uint64_t value)
+{
+    gt_tval_write(env, ri, GTIMER_HYPVIRT, value);
+}
+
+static void gt_hv_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                            uint64_t value)
+{
+    gt_ctl_write(env, ri, GTIMER_HYPVIRT, value);
+}
+
 void arm_gt_ptimer_cb(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ void arm_gt_stimer_cb(void *opaque)
     gt_recalc_timer(cpu, GTIMER_SEC);
 }
 
+void arm_gt_hvtimer_cb(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+
+    gt_recalc_timer(cpu, GTIMER_HYPVIRT);
+}
+
 static void arm_gt_cntfrq_reset(CPUARMState *env, const ARMCPRegInfo *opaque)
 {
     ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
       .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
       .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
       .fieldoffset = offsetof(CPUARMState, cp15.ttbr1_el[2]) },
+#ifndef CONFIG_USER_ONLY
+    { .name = "CNTHV_CVAL_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 2,
+      .fieldoffset =
+        offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].cval),
+      .type = ARM_CP_IO, .access = PL2_RW,
+      .writefn = gt_hv_cval_write, .raw_writefn = raw_write },
+    { .name = "CNTHV_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 0,
+      .type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL2_RW,
+      .resetfn = gt_hv_timer_reset,
+      .readfn = gt_hv_tval_read, .writefn = gt_hv_tval_write },
+    { .name = "CNTHV_CTL_EL2", .state = ARM_CP_STATE_BOTH,
+      .type = ARM_CP_IO,
+      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 3, .opc2 = 1,
+      .access = PL2_RW,
+      .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].ctl),
+      .writefn = gt_hv_ctl_write, .raw_writefn = raw_write },
+#endif
     REGINFO_SENTINEL
 };
-- 
2.20.1

 };
 
+static const VMStateDescription vmstate_m_fp = {
+    .name = "cpu/m/fp",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = vfp_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32_ARRAY(env.v7m.fpcar, ARMCPU, M_REG_NUM_BANKS),
+        VMSTATE_UINT32_ARRAY(env.v7m.fpccr, ARMCPU, M_REG_NUM_BANKS),
+        VMSTATE_UINT32_ARRAY(env.v7m.fpdscr, ARMCPU, M_REG_NUM_BANKS),
+        VMSTATE_UINT32_ARRAY(env.v7m.cpacr, ARMCPU, M_REG_NUM_BANKS),
+        VMSTATE_UINT32(env.v7m.nsacr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription vmstate_m = {
     .name = "cpu/m",
     .version_id = 4,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
         &vmstate_m_scr,
         &vmstate_m_other_sp,
         &vmstate_m_v8m,
+        &vmstate_m_fp,
         NULL
     }
 };
-- 
2.20.1
diff view generated by jsdifflib
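The CNTHV_TVAL_EL2 accessors above route through gt_tval_read/gt_tval_write, which keep the architectural relationship between the 64-bit compare value (CVAL) and the 32-bit signed downcounter view (TVAL). A minimal sketch of that relationship, with simplified standalone helpers rather than QEMU's actual functions:

```c
#include <stdint.h>

/*
 * Illustrative sketch (not QEMU's code): TVAL is a signed 32-bit view,
 * TVAL = CVAL - count; writing TVAL sets CVAL = count + sext(TVAL).
 */
static uint32_t tval_from_cval(uint64_t cval, uint64_t count)
{
    return (uint32_t)(cval - count);
}

static uint64_t cval_from_tval(uint64_t count, uint32_t tval)
{
    /* sign-extend the 32-bit TVAL before adding it to the counter */
    return count + (int32_t)tval;
}
```

A round trip through the two helpers is the invariant the per-timer read/write functions preserve: whatever TVAL you write is what you read back at the same counter value.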
Handle floating point registers in exception entry.
This corresponds to the FP-specific parts of the pseudocode
functions ActivateException() and PushStack().

We defer the code corresponding to UpdateFPCCR() to a later patch.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
---
 target/arm/helper.c | 98 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 95 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     switch_v7m_security_state(env, targets_secure);
     write_v7m_control_spsel(env, 0);
     arm_clear_exclusive(env);
+    /* Clear SFPA and FPCA (has no effect if no FPU) */
+    env->v7m.control[M_REG_S] &=
+        ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
     /* Clear IT bits */
     env->condexec_bits = 0;
     env->regs[14] = lr;
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
     uint32_t xpsr = xpsr_read(env);
     uint32_t frameptr = env->regs[13];
     ARMMMUIdx mmu_idx = arm_mmu_idx(env);
+    uint32_t framesize;
+    bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
+
+    if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
+        (env->v7m.secure || nsacr_cp10)) {
+        if (env->v7m.secure &&
+            env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
+            framesize = 0xa8;
+        } else {
+            framesize = 0x68;
+        }
+    } else {
+        framesize = 0x20;
+    }
 
     /* Align stack pointer if the guest wants that */
     if ((frameptr & 4) &&
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
         xpsr |= XPSR_SPREALIGN;
     }
 
-    frameptr -= 0x20;
+    xpsr &= ~XPSR_SFPA;
+    if (env->v7m.secure &&
+        (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
+        xpsr |= XPSR_SFPA;
+    }
+
+    frameptr -= framesize;
 
     if (arm_feature(env, ARM_FEATURE_V8)) {
         uint32_t limit = v7m_sp_limit(env);
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
         v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
         v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
 
+    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
+        /* FPU is active, try to save its registers */
+        bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
+        bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
+
+        if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...SecureFault because LSPACT and FPCA both set\n");
+            env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        } else if (!env->v7m.secure && !nsacr_cp10) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...Secure UsageFault with CFSR.NOCP because "
+                          "NSACR.CP10 prevents stacking FP regs\n");
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
+            env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+        } else {
+            if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
+                /* Lazy stacking disabled, save registers now */
+                int i;
+                bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
+                                                 arm_current_el(env) != 0);
+
+                if (stacked_ok && !cpacr_pass) {
+                    /*
+                     * Take UsageFault if CPACR forbids access. The pseudocode
+                     * here does a full CheckCPEnabled() but we know the NSACR
+                     * check can never fail as we have already handled that.
+                     */
+                    qemu_log_mask(CPU_LOG_INT,
+                                  "...UsageFault with CFSR.NOCP because "
+                                  "CPACR.CP10 prevents stacking FP regs\n");
+                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                            env->v7m.secure);
+                    env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
+                    stacked_ok = false;
+                }
+
+                for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
+                    uint64_t dn = *aa32_vfp_dreg(env, i / 2);
+                    uint32_t faddr = frameptr + 0x20 + 4 * i;
+                    uint32_t slo = extract64(dn, 0, 32);
+                    uint32_t shi = extract64(dn, 32, 32);
+
+                    if (i >= 16) {
+                        faddr += 8; /* skip the slot for the FPSCR */
+                    }
+                    stacked_ok = stacked_ok &&
+                        v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
+                        v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
+                }
+                stacked_ok = stacked_ok &&
+                    v7m_stack_write(cpu, frameptr + 0x60,
+                                    vfp_get_fpscr(env), mmu_idx, false);
+                if (cpacr_pass) {
+                    for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
+                        *aa32_vfp_dreg(env, i / 2) = 0;
+                    }
+                    vfp_set_fpscr(env, 0);
+                }
+            } else {
+                /* Lazy stacking enabled, save necessary info to stack later */
+                /* TODO : equivalent of UpdateFPCCR() pseudocode */
+            }
+        }
+    }
+
     /*
      * If we broke a stack limit then SP was already updated earlier;
      * otherwise we update SP regardless of whether any of the stack
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
 
     if (arm_feature(env, ARM_FEATURE_V8)) {
         lr = R_V7M_EXCRET_RES1_MASK |
-             R_V7M_EXCRET_DCRS_MASK |
-             R_V7M_EXCRET_FTYPE_MASK;
+             R_V7M_EXCRET_DCRS_MASK;
         /* The S bit indicates whether we should return to Secure
          * or NonSecure (ie our current state).
          * The ES bit indicates whether we're taking this exception
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         if (env->v7m.secure) {
             lr |= R_V7M_EXCRET_S_MASK;
         }
+        if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
+            lr |= R_V7M_EXCRET_FTYPE_MASK;
+        }
     } else {
         lr = R_V7M_EXCRET_RES1_MASK |
              R_V7M_EXCRET_S_MASK |
-- 
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 81 insertions(+), 21 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_cntfrq_access(CPUARMState *env, const ARMCPRegInfo *ri,
      * Writable only at the highest implemented exception level.
      */
     int el = arm_current_el(env);
+    uint64_t hcr;
+    uint32_t cntkctl;
 
     switch (el) {
     case 0:
-        if (!extract32(env->cp15.c14_cntkctl, 0, 2)) {
+        hcr = arm_hcr_el2_eff(env);
+        if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+            cntkctl = env->cp15.cnthctl_el2;
+        } else {
+            cntkctl = env->cp15.c14_cntkctl;
+        }
+        if (!extract32(cntkctl, 0, 2)) {
             return CP_ACCESS_TRAP;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
 {
     unsigned int cur_el = arm_current_el(env);
     bool secure = arm_is_secure(env);
+    uint64_t hcr = arm_hcr_el2_eff(env);
 
-    /* CNT[PV]CT: not visible from PL0 if ELO[PV]CTEN is zero */
-    if (cur_el == 0 &&
-        !extract32(env->cp15.c14_cntkctl, timeridx, 1)) {
-        return CP_ACCESS_TRAP;
-    }
+    switch (cur_el) {
+    case 0:
+        /* If HCR_EL2.<E2H,TGE> == '11': check CNTHCTL_EL2.EL0[PV]CTEN. */
+        if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+            return (extract32(env->cp15.cnthctl_el2, timeridx, 1)
+                    ? CP_ACCESS_OK : CP_ACCESS_TRAP_EL2);
+        }
 
-    if (arm_feature(env, ARM_FEATURE_EL2) &&
-        timeridx == GTIMER_PHYS && !secure && cur_el < 2 &&
-        !extract32(env->cp15.cnthctl_el2, 0, 1)) {
-        return CP_ACCESS_TRAP_EL2;
+        /* CNT[PV]CT: not visible from PL0 if EL0[PV]CTEN is zero */
+        if (!extract32(env->cp15.c14_cntkctl, timeridx, 1)) {
+            return CP_ACCESS_TRAP;
+        }
+
+        /* If HCR_EL2.<E2H,TGE> == '10': check CNTHCTL_EL2.EL1PCTEN. */
+        if (hcr & HCR_E2H) {
+            if (timeridx == GTIMER_PHYS &&
+                !extract32(env->cp15.cnthctl_el2, 10, 1)) {
+                return CP_ACCESS_TRAP_EL2;
+            }
+        } else {
+            /* If HCR_EL2.<E2H> == 0: check CNTHCTL_EL2.EL1PCEN. */
+            if (arm_feature(env, ARM_FEATURE_EL2) &&
+                timeridx == GTIMER_PHYS && !secure &&
+                !extract32(env->cp15.cnthctl_el2, 1, 1)) {
+                return CP_ACCESS_TRAP_EL2;
+            }
+        }
+        break;
+
+    case 1:
+        /* Check CNTHCTL_EL2.EL1PCTEN, which changes location based on E2H. */
+        if (arm_feature(env, ARM_FEATURE_EL2) &&
+            timeridx == GTIMER_PHYS && !secure &&
+            (hcr & HCR_E2H
+             ? !extract32(env->cp15.cnthctl_el2, 10, 1)
+             : !extract32(env->cp15.cnthctl_el2, 0, 1))) {
+            return CP_ACCESS_TRAP_EL2;
+        }
+        break;
     }
     return CP_ACCESS_OK;
 }
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
 {
     unsigned int cur_el = arm_current_el(env);
     bool secure = arm_is_secure(env);
+    uint64_t hcr = arm_hcr_el2_eff(env);
 
-    /* CNT[PV]_CVAL, CNT[PV]_CTL, CNT[PV]_TVAL: not visible from PL0 if
-     * EL0[PV]TEN is zero.
-     */
-    if (cur_el == 0 &&
-        !extract32(env->cp15.c14_cntkctl, 9 - timeridx, 1)) {
-        return CP_ACCESS_TRAP;
-    }
+    switch (cur_el) {
+    case 0:
+        if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
+            /* If HCR_EL2.<E2H,TGE> == '11': check CNTHCTL_EL2.EL0[PV]TEN. */
+            return (extract32(env->cp15.cnthctl_el2, 9 - timeridx, 1)
+                    ? CP_ACCESS_OK : CP_ACCESS_TRAP_EL2);
+        }
 
-    if (arm_feature(env, ARM_FEATURE_EL2) &&
-        timeridx == GTIMER_PHYS && !secure && cur_el < 2 &&
-        !extract32(env->cp15.cnthctl_el2, 1, 1)) {
-        return CP_ACCESS_TRAP_EL2;
+        /*
+         * CNT[PV]_CVAL, CNT[PV]_CTL, CNT[PV]_TVAL: not visible from
+         * EL0 if EL0[PV]TEN is zero.
+         */
+        if (!extract32(env->cp15.c14_cntkctl, 9 - timeridx, 1)) {
+            return CP_ACCESS_TRAP;
+        }
+        /* fall through */
+
+    case 1:
+        if (arm_feature(env, ARM_FEATURE_EL2) &&
+            timeridx == GTIMER_PHYS && !secure) {
+            if (hcr & HCR_E2H) {
+                /* If HCR_EL2.<E2H,TGE> == '10': check CNTHCTL_EL2.EL1PTEN. */
+                if (!extract32(env->cp15.cnthctl_el2, 11, 1)) {
+                    return CP_ACCESS_TRAP_EL2;
+                }
+            } else {
+                /* If HCR_EL2.<E2H> == 0: check CNTHCTL_EL2.EL1PCEN. */
+                if (!extract32(env->cp15.cnthctl_el2, 1, 1)) {
+                    return CP_ACCESS_TRAP_EL2;
+                }
+            }
+        }
+        break;
     }
     return CP_ACCESS_OK;
 }
-- 
2.20.1
diff view generated by jsdifflib
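The EL0 arm of the reworked gt_counter_access above picks the gating register based on HCR_EL2.<E2H,TGE>: in the EL2&0 regime ('11') CNTHCTL_EL2 controls EL0 counter visibility, otherwise CNTKCTL does. A standalone sketch of just that selection (the HCR bit positions below are placeholders, not the architectural ones, and the trap kinds are collapsed into a single boolean):

```c
#include <stdbool.h>
#include <stdint.h>

#define HCR_E2H (1u << 0)   /* placeholder bit positions, for illustration */
#define HCR_TGE (1u << 1)

/*
 * Return whether an EL0 counter access is allowed: under
 * HCR_EL2.<E2H,TGE> == '11' consult CNTHCTL_EL2.EL0[PV]CTEN,
 * otherwise consult CNTKCTL.EL0[PV]CTEN.
 */
static bool el0_counter_allowed(uint32_t hcr, uint32_t cnthctl_el2,
                                uint32_t cntkctl, int timeridx)
{
    if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
        return (cnthctl_el2 >> timeridx) & 1;
    }
    return (cntkctl >> timeridx) & 1;
}
```

The real function additionally distinguishes CP_ACCESS_TRAP (to EL1) from CP_ACCESS_TRAP_EL2 and handles the EL1 case; the sketch only shows why two different control registers can govern the same EL0 access.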
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
---
 target/arm/helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
                   targets_secure ? "secure" : "nonsecure", exc);
 
+    if (dotailchain) {
+        /* Sanitize LR FType and PREFIX bits */
+        if (!arm_feature(env, ARM_FEATURE_VFP)) {
+            lr |= R_V7M_EXCRET_FTYPE_MASK;
+        }
+        lr = deposit32(lr, 24, 8, 0xff);
+    }
+
     if (arm_feature(env, ARM_FEATURE_V8)) {
         if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
             (lr & R_V7M_EXCRET_S_MASK)) {
-- 
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

For ARMv8.1, op1 == 5 is reserved for EL2 aliases of
EL1 and EL0 registers.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void define_one_arm_cp_reg_with_opaque(ARMCPU *cpu,
             mask = PL0_RW;
             break;
         case 4:
+        case 5:
             /* min_EL EL2 */
             mask = PL2_RW;
             break;
-        case 5:
-            /* unallocated encoding, so not possible */
-            assert(false);
-            break;
         case 6:
             /* min_EL EL3 */
             mask = PL3_RW;
-- 
2.20.1
diff view generated by jsdifflib
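The tail-chain sanitizing added above does two independent things to the EXC_RETURN value: force the prefix bits [31:24] to all-ones, and force FType to 1 when the CPU has no FPU. A self-contained sketch of that transformation (FType sits at bit 4 of EXC_RETURN in the v7M/v8M layout; the mask name below is a local stand-in for QEMU's R_V7M_EXCRET_FTYPE_MASK):

```c
#include <stdbool.h>
#include <stdint.h>

#define EXCRET_FTYPE_MASK (1u << 4)  /* stand-in for R_V7M_EXCRET_FTYPE_MASK */

/* Sanitize an EXC_RETURN value on tail chaining, as TailChain() requires. */
static uint32_t sanitize_excret(uint32_t lr, bool have_vfp)
{
    if (!have_vfp) {
        /* No FPU: FType reads as one, standard (non-FP) frame */
        lr |= EXCRET_FTYPE_MASK;
    }
    /* equivalent of deposit32(lr, 24, 8, 0xff): prefix bits are RES1 */
    return (lr & 0x00ffffffu) | 0xff000000u;
}
```

On a CPU with VFP the FType bit is left alone, since it must continue to record whether the tail-chained-from frame contained FP state.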
From: Philippe Mathieu-Daudé <philmd@redhat.com>

No code used the tc6393xb_gpio_in_get() and tc6393xb_gpio_out_set()
functions since their introduction in commit 88d2c950b002. Time to
remove them.

Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-4-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h  |  3 ---
 hw/display/tc6393xb.c | 16 ----------------
 2 files changed, 19 deletions(-)

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void retu_key_event(void *retu, int state);
 typedef struct TC6393xbState TC6393xbState;
 TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
                              uint32_t base, qemu_irq irq);
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
-                           qemu_irq handler);
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s);
 qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
 
 #endif
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/tc6393xb.c
+++ b/hw/display/tc6393xb.c
@@ -XXX,XX +XXX,XX @@ struct TC6393xbState {
     blanked : 1;
 };
 
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s)
-{
-    return s->gpio_in;
-}
-
 static void tc6393xb_gpio_set(void *opaque, int line, int level)
 {
 //    TC6393xbState *s = opaque;
@@ -XXX,XX +XXX,XX @@ static void tc6393xb_gpio_set(void *opaque, int line, int level)
     // FIXME: how does the chip reflect the GPIO input level change?
 }
 
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
-                           qemu_irq handler)
-{
-    if (line >= TC6393XB_GPIOS) {
-        fprintf(stderr, "TC6393xb: no GPIO pin %d\n", line);
-        return;
-    }
-
-    s->handler[line] = handler;
-}
-
 static void tc6393xb_gpio_handler_update(TC6393xbState *s)
 {
     uint32_t level, diff;
-- 
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Several of the EL1/0 registers are redirected to the EL2 version when in
EL2 and HCR_EL2.E2H is set. Many of these registers have side effects.
Link together the two ARMCPRegInfo structures after they have been
properly instantiated. Install common dispatch routines to all of the
relevant registers.

The same set of registers that are redirected also have additional
EL12/EL02 aliases created to access the original register that was
redirected.

Omit the generic timer registers from redirection here, because we'll
need multiple kinds of redirection from both EL0 and EL2.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  13 ++++
 target/arm/helper.c | 162 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 175 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPRegInfo {
      * fieldoffset is 0 then no reset will be done.
      */
     CPResetFn *resetfn;
+
+    /*
+     * "Original" writefn and readfn.
+     * For ARMv8.1-VHE register aliases, we overwrite the read/write
+     * accessor functions of various EL1/EL0 to perform the runtime
+     * check for which sysreg should actually be modified, and then
+     * forwards the operation.  Before overwriting the accessors,
+     * the original function is copied here, so that accesses that
+     * really do go to the EL1/EL0 version proceed normally.
+     * (The corresponding EL2 register is linked via opaque.)
+     */
+    CPReadFn *orig_readfn;
+    CPWriteFn *orig_writefn;
 };
 
 /* Macros which are lvalues for the field in CPUARMState for the
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+#ifndef CONFIG_USER_ONLY
+/* Test if system register redirection is to occur in the current state. */
+static bool redirect_for_e2h(CPUARMState *env)
+{
+    return arm_current_el(env) == 2 && (arm_hcr_el2_eff(env) & HCR_E2H);
+}
+
+static uint64_t el2_e2h_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    CPReadFn *readfn;
+
+    if (redirect_for_e2h(env)) {
+        /* Switch to the saved EL2 version of the register. */
+        ri = ri->opaque;
+        readfn = ri->readfn;
+    } else {
+        readfn = ri->orig_readfn;
+    }
+    if (readfn == NULL) {
+        readfn = raw_read;
+    }
+    return readfn(env, ri);
+}
+
+static void el2_e2h_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    CPWriteFn *writefn;
+
+    if (redirect_for_e2h(env)) {
+        /* Switch to the saved EL2 version of the register. */
+        ri = ri->opaque;
+        writefn = ri->writefn;
+    } else {
+        writefn = ri->orig_writefn;
+    }
+    if (writefn == NULL) {
+        writefn = raw_write;
+    }
+    writefn(env, ri, value);
+}
+
+static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
+{
+    struct E2HAlias {
+        uint32_t src_key, dst_key, new_key;
+        const char *src_name, *dst_name, *new_name;
+        bool (*feature)(const ARMISARegisters *id);
+    };
+
+#define K(op0, op1, crn, crm, op2) \
+    ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2)
+
+    static const struct E2HAlias aliases[] = {
+        { K(3, 0,  1, 0, 0), K(3, 4,  1, 0, 0), K(3, 5, 1, 0, 0),
+          "SCTLR", "SCTLR_EL2", "SCTLR_EL12" },
+        { K(3, 0,  1, 0, 2), K(3, 4,  1, 1, 2), K(3, 5, 1, 0, 2),
+          "CPACR", "CPTR_EL2", "CPACR_EL12" },
+        { K(3, 0,  2, 0, 0), K(3, 4,  2, 0, 0), K(3, 5, 2, 0, 0),
+          "TTBR0_EL1", "TTBR0_EL2", "TTBR0_EL12" },
+        { K(3, 0,  2, 0, 1), K(3, 4,  2, 0, 1), K(3, 5, 2, 0, 1),
+          "TTBR1_EL1", "TTBR1_EL2", "TTBR1_EL12" },
+        { K(3, 0,  2, 0, 2), K(3, 4,  2, 0, 2), K(3, 5, 2, 0, 2),
+          "TCR_EL1", "TCR_EL2", "TCR_EL12" },
+        { K(3, 0,  4, 0, 0), K(3, 4,  4, 0, 0), K(3, 5, 4, 0, 0),
+          "SPSR_EL1", "SPSR_EL2", "SPSR_EL12" },
+        { K(3, 0,  4, 0, 1), K(3, 4,  4, 0, 1), K(3, 5, 4, 0, 1),
+          "ELR_EL1", "ELR_EL2", "ELR_EL12" },
+        { K(3, 0,  5, 1, 0), K(3, 4,  5, 1, 0), K(3, 5, 5, 1, 0),
+          "AFSR0_EL1", "AFSR0_EL2", "AFSR0_EL12" },
+        { K(3, 0,  5, 1, 1), K(3, 4,  5, 1, 1), K(3, 5, 5, 1, 1),
+          "AFSR1_EL1", "AFSR1_EL2", "AFSR1_EL12" },
+        { K(3, 0,  5, 2, 0), K(3, 4,  5, 2, 0), K(3, 5, 5, 2, 0),
+          "ESR_EL1", "ESR_EL2", "ESR_EL12" },
+        { K(3, 0,  6, 0, 0), K(3, 4,  6, 0, 0), K(3, 5, 6, 0, 0),
+          "FAR_EL1", "FAR_EL2", "FAR_EL12" },
+        { K(3, 0, 10, 2, 0), K(3, 4, 10, 2, 0), K(3, 5, 10, 2, 0),
+          "MAIR_EL1", "MAIR_EL2", "MAIR_EL12" },
+        { K(3, 0, 10, 3, 0), K(3, 4, 10, 3, 0), K(3, 5, 10, 3, 0),
+          "AMAIR0", "AMAIR_EL2", "AMAIR_EL12" },
+        { K(3, 0, 12, 0, 0), K(3, 4, 12, 0, 0), K(3, 5, 12, 0, 0),
+          "VBAR", "VBAR_EL2", "VBAR_EL12" },
+        { K(3, 0, 13, 0, 1), K(3, 4, 13, 0, 1), K(3, 5, 13, 0, 1),
+          "CONTEXTIDR_EL1", "CONTEXTIDR_EL2", "CONTEXTIDR_EL12" },
+        { K(3, 0, 14, 1, 0), K(3, 4, 14, 1, 0), K(3, 5, 14, 1, 0),
+          "CNTKCTL", "CNTHCTL_EL2", "CNTKCTL_EL12" },
+
+        /*
+         * Note that redirection of ZCR is mentioned in the description
+         * of ZCR_EL2, and aliasing in the description of ZCR_EL1, but
+         * not in the summary table.
+         */
+        { K(3, 0,  1, 2, 0), K(3, 4,  1, 2, 0), K(3, 5, 1, 2, 0),
+          "ZCR_EL1", "ZCR_EL2", "ZCR_EL12", isar_feature_aa64_sve },
+
+        /* TODO: ARMv8.2-SPE -- PMSCR_EL2 */
+        /* TODO: ARMv8.4-Trace -- TRFCR_EL2 */
+    };
+#undef K
+
+    size_t i;
+
+    for (i = 0; i < ARRAY_SIZE(aliases); i++) {
+        const struct E2HAlias *a = &aliases[i];
+        ARMCPRegInfo *src_reg, *dst_reg;
+
+        if (a->feature && !a->feature(&cpu->isar)) {
+            continue;
+        }
+
+        src_reg = g_hash_table_lookup(cpu->cp_regs, &a->src_key);
+        dst_reg = g_hash_table_lookup(cpu->cp_regs, &a->dst_key);
+        g_assert(src_reg != NULL);
+        g_assert(dst_reg != NULL);
+
+        /* Cross-compare names to detect typos in the keys. */
+        g_assert(strcmp(src_reg->name, a->src_name) == 0);
+        g_assert(strcmp(dst_reg->name, a->dst_name) == 0);
+
+        /* None of the core system registers use opaque; we will. */
+        g_assert(src_reg->opaque == NULL);
+
+        /* Create alias before redirection so we dup the right data. */
+        if (a->new_key) {
+            ARMCPRegInfo *new_reg = g_memdup(src_reg, sizeof(ARMCPRegInfo));
+            uint32_t *new_key = g_memdup(&a->new_key, sizeof(uint32_t));
+            bool ok;
+
+            new_reg->name = a->new_name;
+            new_reg->type |= ARM_CP_ALIAS;
+            /* Remove PL1/PL0 access, leaving PL2/PL3 R/W in place. */
+            new_reg->access &= PL2_RW | PL3_RW;
+
+            ok = g_hash_table_insert(cpu->cp_regs, new_key, new_reg);
+            g_assert(ok);
+        }
+
+        src_reg->opaque = dst_reg;
+        src_reg->orig_readfn = src_reg->readfn ?: raw_read;
+        src_reg->orig_writefn = src_reg->writefn ?: raw_write;
+        if (!src_reg->raw_readfn) {
+            src_reg->raw_readfn = raw_read;
+        }
+        if (!src_reg->raw_writefn) {
+            src_reg->raw_writefn = raw_write;
+        }
+        src_reg->readfn = el2_e2h_read;
+        src_reg->writefn = el2_e2h_write;
+    }
+}
+#endif
+
 static CPAccessResult ctr_el0_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                      bool isread)
 {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
           : cpu_isar_feature(aa32_predinv, cpu)) {
         define_arm_cp_regs(cpu, predinv_reginfo);
     }
+
+#ifndef CONFIG_USER_ONLY
+    /*
+     * Register redirections and aliases must be done last,
+     * after the registers from the other extensions have been defined.
+     */
+    if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
+        define_arm_vh_e2h_redirects_aliases(cpu);
+    }
+#endif
 }
 
 void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
-- 
2.20.1
diff view generated by jsdifflib
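The redirection machinery above rewires each affected EL1 register's accessors: the original readfn/writefn are stashed in orig_readfn/orig_writefn, the EL2 twin is linked through opaque, and a common dispatcher picks one at access time. A minimal standalone model of the read path, with a simplified register type that stands in for ARMCPRegInfo:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for ARMCPRegInfo, just enough for the dispatch. */
typedef struct Reg Reg;
struct Reg {
    uint64_t value;
    Reg *opaque;                         /* linked EL2 register */
    uint64_t (*readfn)(const Reg *);     /* installed dispatcher/accessor */
    uint64_t (*orig_readfn)(const Reg *);/* saved original accessor */
};

static uint64_t raw_read(const Reg *r)
{
    return r->value;
}

/* Model of el2_e2h_read: pick the EL2 twin when redirection applies. */
static uint64_t e2h_read(const Reg *r, bool redirect_for_e2h)
{
    uint64_t (*readfn)(const Reg *);

    if (redirect_for_e2h) {
        r = r->opaque;                   /* use the EL2 version */
        readfn = r->readfn;
    } else {
        readfn = r->orig_readfn;
    }
    return readfn ? readfn(r) : raw_read(r);
}
```

Saving the original accessor before overwriting it is what lets non-redirected accesses (EL2 with E2H clear, or any lower EL) behave exactly as before the feature was added.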
Like AArch64, M-profile floating point has no FPEXC enable
bit to gate floating point; so always set the VFPEN TB flag.

M-profile also has CPACR and NSACR similar to A-profile;
they behave slightly differently:
* the CPACR is banked between Secure and Non-Secure
* if the NSACR forces a trap then this is taken to
  the Secure state, not the Non-Secure state

Honour the CPACR and NSACR settings. The NSACR handling
requires us to borrow the exception.target_el field
(usually meaningless for M profile) to distinguish the
NOCP UsageFault taken to Secure state from the more
usual fault taken to the current security state.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
---
 target/arm/helper.c    | 55 +++++++++++++++++++++++++++++++++++++++---
 target/arm/translate.c | 10 ++++++--
 2 files changed, 60 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
     return target_el;
 }
 
+/*
+ * Return true if the v7M CPACR permits access to the FPU for the specified
+ * security state and privilege level.
+ */
+static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
+{
+    switch (extract32(env->v7m.cpacr[is_secure], 20, 2)) {
+    case 0:
+    case 2: /* UNPREDICTABLE: we treat like 0 */
+        return false;
+    case 1:
+        return is_priv;
+    case 3:
+        return true;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
                             ARMMMUIdx mmu_idx, bool ignfault)
 {
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
         break;
     case EXCP_NOCP:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
+    {
+        /*
+         * NOCP might be directed to something other than the current
+         * security state if this fault is because of NSACR; we indicate
+         * the target security state using exception.target_el.
+         */
+        int target_secstate;
+
+        if (env->exception.target_el == 3) {
+            target_secstate = M_REG_S;
+        } else {
+            target_secstate = env->v7m.secure;
+        }
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
+        env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
         break;

From: Richard Henderson <richard.henderson@linaro.org>

Apart from the wholesale redirection that HCR_EL2.E2H performs
for EL2, there's a separate redirection specific to the timers
that happens for EL0 when running in the EL2&0 regime.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 181 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 169 insertions(+), 12 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void gt_phys_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
     gt_ctl_write(env, ri, GTIMER_PHYS, value);
 }
 
+static int gt_phys_redir_timeridx(CPUARMState *env)
+{
+    switch (arm_mmu_idx(env)) {
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E20_2:
+        return GTIMER_HYP;
+    default:
+        return GTIMER_PHYS;
+    }
+}
+
+static int gt_virt_redir_timeridx(CPUARMState *env)
+{
+    switch (arm_mmu_idx(env)) {
+    case ARMMMUIdx_E20_0:
+    case ARMMMUIdx_E20_2:
+        return GTIMER_HYPVIRT;
+    default:
+        return GTIMER_VIRT;
+    }
+}
+
+static uint64_t gt_phys_redir_cval_read(CPUARMState *env,
+                                        const ARMCPRegInfo *ri)
+{
+    int timeridx = gt_phys_redir_timeridx(env);
+    return env->cp15.c14_timer[timeridx].cval;
+}
+
+static void gt_phys_redir_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                     uint64_t value)
+{
+    int timeridx = gt_phys_redir_timeridx(env);
+    gt_cval_write(env, ri, timeridx, value);
+}
+
+static uint64_t gt_phys_redir_tval_read(CPUARMState *env,
+                                        const ARMCPRegInfo *ri)
+{
+    int timeridx = gt_phys_redir_timeridx(env);
+    return gt_tval_read(env, ri, timeridx);
+}
+
+static void gt_phys_redir_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                     uint64_t value)
+{
+    int timeridx = gt_phys_redir_timeridx(env);
+    gt_tval_write(env, ri, timeridx, value);
+}
+
+static uint64_t gt_phys_redir_ctl_read(CPUARMState *env,
+                                       const ARMCPRegInfo *ri)
+{
+    int timeridx = gt_phys_redir_timeridx(env);
+    return env->cp15.c14_timer[timeridx].ctl;
+}
+
+static void gt_phys_redir_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    int timeridx = gt_phys_redir_timeridx(env);
+    gt_ctl_write(env, ri, timeridx, value);
+}
+
 static void gt_virt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     gt_timer_reset(env, ri, GTIMER_VIRT);
@@ -XXX,XX +XXX,XX @@ static void gt_cntvoff_write(CPUARMState *env, const ARMCPRegInfo *ri,
     gt_recalc_timer(cpu, GTIMER_VIRT);
 }
 
+static uint64_t gt_virt_redir_cval_read(CPUARMState *env,
+                                        const ARMCPRegInfo *ri)
+{
+    int timeridx = gt_virt_redir_timeridx(env);
+    return env->cp15.c14_timer[timeridx].cval;
+}
+
+static void gt_virt_redir_cval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                     uint64_t value)
+{
+    int timeridx = gt_virt_redir_timeridx(env);
+    gt_cval_write(env, ri, timeridx, value);
+}
+
+static uint64_t gt_virt_redir_tval_read(CPUARMState *env,
+                                        const ARMCPRegInfo *ri)
+{
+    int timeridx = gt_virt_redir_timeridx(env);
+    return gt_tval_read(env, ri, timeridx);
+}
+
+static void gt_virt_redir_tval_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                     uint64_t value)
+{
+    int timeridx = gt_virt_redir_timeridx(env);
+    gt_tval_write(env, ri, timeridx, value);
+}
+
+static uint64_t gt_virt_redir_ctl_read(CPUARMState *env,
+                                       const ARMCPRegInfo *ri)
+{
+    int timeridx = gt_virt_redir_timeridx(env);
+    return env->cp15.c14_timer[timeridx].ctl;
+}
+
+static void gt_virt_redir_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    int timeridx = gt_virt_redir_timeridx(env);
+    gt_ctl_write(env, ri, timeridx, value);
+}
+
 static void gt_hyp_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     gt_timer_reset(env, ri, GTIMER_HYP);
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .accessfn = gt_ptimer_access,
       .fieldoffset = offsetoflow32(CPUARMState,
                                    cp15.c14_timer[GTIMER_PHYS].ctl),
-      .writefn = gt_phys_ctl_write, .raw_writefn = raw_write,
+      .readfn = gt_phys_redir_ctl_read, .raw_readfn = raw_read,
+      .writefn = gt_phys_redir_ctl_write, .raw_writefn = raw_write,
     },
     { .name = "CNTP_CTL_S",
       .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .accessfn = gt_ptimer_access,
       .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl),
       .resetvalue = 0,
-      .writefn = gt_phys_ctl_write, .raw_writefn = raw_write,
+      .readfn = gt_phys_redir_ctl_read, .raw_readfn = raw_read,
+      .writefn = gt_phys_redir_ctl_write, .raw_writefn = raw_write,
     },
     { .name = "CNTV_CTL", .cp = 15, .crn = 14, .crm = 3, .opc1 = 0, .opc2 = 1,
       .type = ARM_CP_IO | ARM_CP_ALIAS, .access = PL0_RW,
       .accessfn = gt_vtimer_access,
       .fieldoffset = offsetoflow32(CPUARMState,
                                    cp15.c14_timer[GTIMER_VIRT].ctl),
-      .writefn = gt_virt_ctl_write, .raw_writefn = raw_write,
+      .readfn = gt_virt_redir_ctl_read, .raw_readfn = raw_read,
+      .writefn = gt_virt_redir_ctl_write, .raw_writefn = raw_write,
     },
     { .name = "CNTV_CTL_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
       .accessfn = gt_vtimer_access,
       .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl),
       .resetvalue = 0,
-      .writefn = gt_virt_ctl_write, .raw_writefn = raw_write,
174
+ .readfn = gt_virt_redir_ctl_read, .raw_readfn = raw_read,
175
+ .writefn = gt_virt_redir_ctl_write, .raw_writefn = raw_write,
176
},
177
/* TimerValue views: a 32 bit downcounting view of the underlying state */
178
{ .name = "CNTP_TVAL", .cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 0,
179
.secure = ARM_CP_SECSTATE_NS,
180
.type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW,
181
.accessfn = gt_ptimer_access,
182
- .readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write,
183
+ .readfn = gt_phys_redir_tval_read, .writefn = gt_phys_redir_tval_write,
184
},
185
{ .name = "CNTP_TVAL_S",
186
.cp = 15, .crn = 14, .crm = 2, .opc1 = 0, .opc2 = 0,
187
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
188
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 2, .opc2 = 0,
189
.type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW,
190
.accessfn = gt_ptimer_access, .resetfn = gt_phys_timer_reset,
191
- .readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write,
192
+ .readfn = gt_phys_redir_tval_read, .writefn = gt_phys_redir_tval_write,
193
},
194
{ .name = "CNTV_TVAL", .cp = 15, .crn = 14, .crm = 3, .opc1 = 0, .opc2 = 0,
195
.type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW,
196
.accessfn = gt_vtimer_access,
197
- .readfn = gt_virt_tval_read, .writefn = gt_virt_tval_write,
198
+ .readfn = gt_virt_redir_tval_read, .writefn = gt_virt_redir_tval_write,
199
},
200
{ .name = "CNTV_TVAL_EL0", .state = ARM_CP_STATE_AA64,
201
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 0,
202
.type = ARM_CP_NO_RAW | ARM_CP_IO, .access = PL0_RW,
203
.accessfn = gt_vtimer_access, .resetfn = gt_virt_timer_reset,
204
- .readfn = gt_virt_tval_read, .writefn = gt_virt_tval_write,
205
+ .readfn = gt_virt_redir_tval_read, .writefn = gt_virt_redir_tval_write,
206
},
207
/* The counter itself */
208
{ .name = "CNTPCT", .cp = 15, .crm = 14, .opc1 = 0,
209
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
210
.type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS,
211
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval),
212
.accessfn = gt_ptimer_access,
213
- .writefn = gt_phys_cval_write, .raw_writefn = raw_write,
214
+ .readfn = gt_phys_redir_cval_read, .raw_readfn = raw_read,
215
+ .writefn = gt_phys_redir_cval_write, .raw_writefn = raw_write,
216
},
217
{ .name = "CNTP_CVAL_S", .cp = 15, .crm = 14, .opc1 = 2,
218
.secure = ARM_CP_SECSTATE_S,
219
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
220
.type = ARM_CP_IO,
221
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval),
222
.resetvalue = 0, .accessfn = gt_ptimer_access,
223
- .writefn = gt_phys_cval_write, .raw_writefn = raw_write,
224
+ .readfn = gt_phys_redir_cval_read, .raw_readfn = raw_read,
225
+ .writefn = gt_phys_redir_cval_write, .raw_writefn = raw_write,
226
},
227
{ .name = "CNTV_CVAL", .cp = 15, .crm = 14, .opc1 = 3,
228
.access = PL0_RW,
229
.type = ARM_CP_64BIT | ARM_CP_IO | ARM_CP_ALIAS,
230
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval),
231
.accessfn = gt_vtimer_access,
232
- .writefn = gt_virt_cval_write, .raw_writefn = raw_write,
233
+ .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read,
234
+ .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write,
235
},
236
{ .name = "CNTV_CVAL_EL0", .state = ARM_CP_STATE_AA64,
237
.opc0 = 3, .opc1 = 3, .crn = 14, .crm = 3, .opc2 = 2,
238
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
239
.type = ARM_CP_IO,
240
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval),
241
.resetvalue = 0, .accessfn = gt_vtimer_access,
242
- .writefn = gt_virt_cval_write, .raw_writefn = raw_write,
243
+ .readfn = gt_virt_redir_cval_read, .raw_readfn = raw_read,
244
+ .writefn = gt_virt_redir_cval_write, .raw_writefn = raw_write,
245
},
246
/* Secure timer -- this is actually restricted to only EL3
247
* and configurably Secure-EL1 via the accessfn.
248
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo generic_timer_cp_reginfo[] = {
249
REGINFO_SENTINEL
250
};
251
252
+static CPAccessResult e2h_access(CPUARMState *env, const ARMCPRegInfo *ri,
253
+ bool isread)
254
+{
255
+ if (!(arm_hcr_el2_eff(env) & HCR_E2H)) {
256
+ return CP_ACCESS_TRAP;
76
+ }
257
+ }
77
case EXCP_INVSTATE:
258
+ return CP_ACCESS_OK;
78
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
259
+}
79
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
260
+
80
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
261
#else
81
return 0;
262
82
}
263
/* In user-mode most of the generic timer registers are inaccessible
83
264
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vhe_reginfo[] = {
84
+ if (arm_feature(env, ARM_FEATURE_M)) {
265
.access = PL2_RW,
85
+ /* CPACR can cause a NOCP UsageFault taken to current security state */
266
.fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_HYPVIRT].ctl),
86
+ if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) {
267
.writefn = gt_hv_ctl_write, .raw_writefn = raw_write },
87
+ return 1;
268
+ { .name = "CNTP_CTL_EL02", .state = ARM_CP_STATE_AA64,
88
+ }
269
+ .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 1,
89
+
270
+ .type = ARM_CP_IO | ARM_CP_ALIAS,
90
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure) {
271
+ .access = PL2_RW, .accessfn = e2h_access,
91
+ if (!extract32(env->v7m.nsacr, 10, 1)) {
272
+ .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].ctl),
92
+ /* FP insns cause a NOCP UsageFault taken to Secure */
273
+ .writefn = gt_phys_ctl_write, .raw_writefn = raw_write },
93
+ return 3;
274
+ { .name = "CNTV_CTL_EL02", .state = ARM_CP_STATE_AA64,
94
+ }
275
+ .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 1,
95
+ }
276
+ .type = ARM_CP_IO | ARM_CP_ALIAS,
96
+
277
+ .access = PL2_RW, .accessfn = e2h_access,
97
+ return 0;
278
+ .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].ctl),
98
+ }
279
+ .writefn = gt_virt_ctl_write, .raw_writefn = raw_write },
99
+
280
+ { .name = "CNTP_TVAL_EL02", .state = ARM_CP_STATE_AA64,
100
/* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
281
+ .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 0,
101
* 0, 2 : trap EL0 and EL1/PL1 accesses
282
+ .type = ARM_CP_NO_RAW | ARM_CP_IO | ARM_CP_ALIAS,
102
* 1 : trap only EL0 accesses
283
+ .access = PL2_RW, .accessfn = e2h_access,
103
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
284
+ .readfn = gt_phys_tval_read, .writefn = gt_phys_tval_write },
104
flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
285
+ { .name = "CNTV_TVAL_EL02", .state = ARM_CP_STATE_AA64,
105
flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
286
+ .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 0,
106
if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)
287
+ .type = ARM_CP_NO_RAW | ARM_CP_IO | ARM_CP_ALIAS,
107
- || arm_el_is_aa64(env, 1)) {
288
+ .access = PL2_RW, .accessfn = e2h_access,
108
+ || arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
289
+ .readfn = gt_virt_tval_read, .writefn = gt_virt_tval_write },
109
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
290
+ { .name = "CNTP_CVAL_EL02", .state = ARM_CP_STATE_AA64,
110
}
291
+ .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 2, .opc2 = 2,
111
flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
292
+ .type = ARM_CP_IO | ARM_CP_ALIAS,
112
diff --git a/target/arm/translate.c b/target/arm/translate.c
293
+ .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_PHYS].cval),
113
index XXXXXXX..XXXXXXX 100644
294
+ .access = PL2_RW, .accessfn = e2h_access,
114
--- a/target/arm/translate.c
295
+ .writefn = gt_phys_cval_write, .raw_writefn = raw_write },
115
+++ b/target/arm/translate.c
296
+ { .name = "CNTV_CVAL_EL02", .state = ARM_CP_STATE_AA64,
116
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
297
+ .opc0 = 3, .opc1 = 5, .crn = 14, .crm = 3, .opc2 = 2,
117
* for attempts to execute invalid vfp/neon encodings with FP disabled.
298
+ .type = ARM_CP_IO | ARM_CP_ALIAS,
118
*/
299
+ .fieldoffset = offsetof(CPUARMState, cp15.c14_timer[GTIMER_VIRT].cval),
119
if (s->fp_excp_el) {
300
+ .access = PL2_RW, .accessfn = e2h_access,
120
- gen_exception_insn(s, 4, EXCP_UDEF,
301
+ .writefn = gt_virt_cval_write, .raw_writefn = raw_write },
121
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
302
#endif
122
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
303
REGINFO_SENTINEL
123
+ gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
304
};
124
+ s->fp_excp_el);
125
+ } else {
126
+ gen_exception_insn(s, 4, EXCP_UDEF,
127
+ syn_fp_access_trap(1, 0xe, false),
128
+ s->fp_excp_el);
129
+ }
130
return 0;
131
}
132
133
--
305
--
134
2.20.1
306
2.20.1
135
307
136
308
diff view generated by jsdifflib
1
Implement the VLLDM instruction for v7M for the FPU present case.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Since we only support a single ASID, flush the TLB when it changes.
4
5
Note that TCR_EL2, like TCR_EL1, has the A1 bit that chooses between
6
the two TTBR* registers for the location of the ASID.
7
8
Tested-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20200206105448.4726-31-richard.henderson@linaro.org
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190416125744.27770-26-peter.maydell@linaro.org
6
---
13
---
7
target/arm/helper.h | 1 +
14
target/arm/helper.c | 22 +++++++++++++++-------
8
target/arm/helper.c | 54 ++++++++++++++++++++++++++++++++++++++++++
15
1 file changed, 15 insertions(+), 7 deletions(-)
9
target/arm/translate.c | 2 +-
10
3 files changed, 56 insertions(+), 1 deletion(-)
11
16
12
diff --git a/target/arm/helper.h b/target/arm/helper.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.h
15
+++ b/target/arm/helper.h
16
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
17
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
18
19
DEF_HELPER_2(v7m_vlstm, void, env, i32)
20
+DEF_HELPER_2(v7m_vlldm, void, env, i32)
21
22
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
23
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
25
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.c
19
--- a/target/arm/helper.c
27
+++ b/target/arm/helper.c
20
+++ b/target/arm/helper.c
28
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
21
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
29
g_assert_not_reached();
22
tcr->base_mask = 0xffffc000u;
30
}
23
}
31
24
32
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
25
-static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
33
+{
26
+static void vmsa_tcr_el12_write(CPUARMState *env, const ARMCPRegInfo *ri,
34
+ /* translate.c should never generate calls here in user-only mode */
27
uint64_t value)
35
+ g_assert_not_reached();
36
+}
37
+
38
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
39
{
28
{
40
/* The TT instructions can be used by unprivileged code, but in
29
ARMCPU *cpu = env_archcpu(env);
41
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
30
@@ -XXX,XX +XXX,XX @@ static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
42
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
31
static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
32
uint64_t value)
33
{
34
- /* TODO: There are ASID fields in here with HCR_EL2.E2H */
35
+ /*
36
+ * If we are running with E2&0 regime, then an ASID is active.
37
+ * Flush if that might be changing. Note we're not checking
38
+ * TCR_EL2.A1 to know if this is really the TTBRx_EL2 that
39
+ * holds the active ASID, only checking the field that might.
40
+ */
41
+ if (extract64(raw_read(env, ri) ^ value, 48, 16) &&
42
+ (arm_hcr_el2_eff(env) & HCR_E2H)) {
43
+ tlb_flush_by_mmuidx(env_cpu(env),
44
+ ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0);
45
+ }
46
raw_write(env, ri, value);
43
}
47
}
44
48
45
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
49
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo vmsa_cp_reginfo[] = {
46
+{
50
offsetof(CPUARMState, cp15.ttbr1_ns) } },
47
+ /* fptr is the value of Rn, the frame pointer we load the FP regs from */
51
{ .name = "TCR_EL1", .state = ARM_CP_STATE_AA64,
48
+ assert(env->v7m.secure);
52
.opc0 = 3, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
49
+
53
- .access = PL1_RW, .writefn = vmsa_tcr_el1_write,
50
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
54
+ .access = PL1_RW, .writefn = vmsa_tcr_el12_write,
51
+ return;
55
.resetfn = vmsa_ttbcr_reset, .raw_writefn = raw_write,
52
+ }
56
.fieldoffset = offsetof(CPUARMState, cp15.tcr_el[1]) },
53
+
57
{ .name = "TTBCR", .cp = 15, .crn = 2, .crm = 0, .opc1 = 0, .opc2 = 2,
54
+ /* Check access to the coprocessor is permitted */
58
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
55
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
59
.resetvalue = 0 },
56
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
60
{ .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
57
+ }
61
.opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
58
+
62
- .access = PL2_RW,
59
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
63
- /* no .writefn needed as this can't cause an ASID change;
60
+ /* State in FP is still valid */
64
- * no .raw_writefn or .resetfn needed as we never use mask/base_mask
61
+ env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
65
- */
62
+ } else {
66
+ .access = PL2_RW, .writefn = vmsa_tcr_el12_write,
63
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
67
+ /* no .raw_writefn or .resetfn needed as we never use mask/base_mask */
64
+ int i;
68
.fieldoffset = offsetof(CPUARMState, cp15.tcr_el[2]) },
65
+ uint32_t fpscr;
69
{ .name = "VTCR", .state = ARM_CP_STATE_AA32,
66
+
70
.cp = 15, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
67
+ if (fptr & 7) {
68
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
69
+ }
70
+
71
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
72
+ uint32_t slo, shi;
73
+ uint64_t dn;
74
+ uint32_t faddr = fptr + 4 * i;
75
+
76
+ if (i >= 16) {
77
+ faddr += 8; /* skip the slot for the FPSCR */
78
+ }
79
+
80
+ slo = cpu_ldl_data(env, faddr);
81
+ shi = cpu_ldl_data(env, faddr + 4);
82
+
83
+ dn = (uint64_t) shi << 32 | slo;
84
+ *aa32_vfp_dreg(env, i / 2) = dn;
85
+ }
86
+ fpscr = cpu_ldl_data(env, fptr + 0x40);
87
+ vfp_set_fpscr(env, fpscr);
88
+ }
89
+
90
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
91
+}
92
+
93
static bool v7m_push_stack(ARMCPU *cpu)
94
{
95
/* Do the "set up stack frame" part of exception entry,
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
101
TCGv_i32 fptr = load_reg(s, rn);
102
103
if (extract32(insn, 20, 1)) {
104
- /* VLLDM */
105
+ gen_helper_v7m_vlldm(cpu_env, fptr);
106
} else {
107
gen_helper_v7m_vlstm(cpu_env, fptr);
108
}
109
--
71
--
110
2.20.1
72
2.20.1
111
73
112
74
1
Pushing registers to the stack for v7M needs to handle three cases:
1
From: Richard Henderson <richard.henderson@linaro.org>
2
* the "normal" case where we pend exceptions
3
* an "ignore faults" case where we set FSR bits but
4
do not pend exceptions (this is used when we are
5
handling some kinds of derived exception on exception entry)
6
* a "lazy FP stacking" case, where different FSR bits
7
are set and the exception is pended differently
8
2
9
Implement this by changing the existing flag argument that
3
Tested-by: Alex Bennée <alex.bennee@linaro.org>
10
tells us whether to ignore faults or not into an enum that
4
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
specifies which of the three modes we should handle.
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
6
Message-id: 20200206105448.4726-32-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20190416125744.27770-23-peter.maydell@linaro.org
16
---
8
---
17
target/arm/helper.c | 118 +++++++++++++++++++++++++++++---------------
9
target/arm/helper.c | 25 ++++++++++++++++++-------
18
1 file changed, 79 insertions(+), 39 deletions(-)
10
1 file changed, 18 insertions(+), 7 deletions(-)
19
11
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
14
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
15
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
16
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
17
18
static int vae1_tlbmask(CPUARMState *env)
19
{
20
+ /* Since we exclude secure first, we may read HCR_EL2 directly. */
21
if (arm_is_secure_below_el3(env)) {
22
return ARMMMUIdxBit_SE10_1 | ARMMMUIdxBit_SE10_0;
23
+ } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
24
+ == (HCR_E2H | HCR_TGE)) {
25
+ return ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_0;
26
} else {
27
return ARMMMUIdxBit_E10_1 | ARMMMUIdxBit_E10_0;
28
}
29
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
25
}
30
}
26
}
31
}
27
32
28
+/*
33
+static int e2_tlbmask(CPUARMState *env)
29
+ * What kind of stack write are we doing? This affects how exceptions
34
+{
30
+ * generated during the stacking are treated.
35
+ /* TODO: ARMv8.4-SecEL2 */
31
+ */
36
+ return ARMMMUIdxBit_E20_0 | ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E2;
32
+typedef enum StackingMode {
37
+}
33
+ STACK_NORMAL,
34
+ STACK_IGNFAULTS,
35
+ STACK_LAZYFP,
36
+} StackingMode;
37
+
38
+
38
static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
39
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
39
- ARMMMUIdx mmu_idx, bool ignfault)
40
uint64_t value)
40
+ ARMMMUIdx mmu_idx, StackingMode mode)
41
{
41
{
42
CPUState *cs = CPU(cpu);
42
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
43
CPUARMState *env = &cpu->env;
43
static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
44
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
44
uint64_t value)
45
&attrs, &prot, &page_size, &fi, NULL)) {
45
{
46
/* MPU/SAU lookup failed */
46
- ARMCPU *cpu = env_archcpu(env);
47
if (fi.type == ARMFault_QEMU_SFault) {
47
- CPUState *cs = CPU(cpu);
48
- qemu_log_mask(CPU_LOG_INT,
48
+ CPUState *cs = env_cpu(env);
49
- "...SecureFault with SFSR.AUVIOL during stacking\n");
49
+ int mask = e2_tlbmask(env);
50
- env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
50
51
+ if (mode == STACK_LAZYFP) {
51
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E2);
52
+ qemu_log_mask(CPU_LOG_INT,
52
+ tlb_flush_by_mmuidx(cs, mask);
53
+ "...SecureFault with SFSR.LSPERR "
53
}
54
+ "during lazy stacking\n");
54
55
+ env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
55
static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
56
+ } else {
56
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
57
+ qemu_log_mask(CPU_LOG_INT,
57
uint64_t value)
58
+ "...SecureFault with SFSR.AUVIOL "
58
{
59
+ "during stacking\n");
59
CPUState *cs = env_cpu(env);
60
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
60
+ int mask = e2_tlbmask(env);
61
+ }
61
62
+ env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
62
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E2);
63
env->v7m.sfar = addr;
63
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, mask);
64
exc = ARMV7M_EXCP_SECURE;
64
}
65
exc_secure = false;
65
66
} else {
66
static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
67
- qemu_log_mask(CPU_LOG_INT, "...MemManageFault with CFSR.MSTKERR\n");
67
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
68
- env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
68
* Currently handles both VAE2 and VALE2, since we don't support
69
+ if (mode == STACK_LAZYFP) {
69
* flush-last-level-only.
70
+ qemu_log_mask(CPU_LOG_INT,
71
+ "...MemManageFault with CFSR.MLSPERR\n");
72
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
73
+ } else {
74
+ qemu_log_mask(CPU_LOG_INT,
75
+ "...MemManageFault with CFSR.MSTKERR\n");
76
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
77
+ }
78
exc = ARMV7M_EXCP_MEM;
79
exc_secure = secure;
80
}
81
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
82
attrs, &txres);
83
if (txres != MEMTX_OK) {
84
/* BusFault trying to write the data */
85
- qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
86
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
87
+ if (mode == STACK_LAZYFP) {
88
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
89
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
90
+ } else {
91
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
92
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
93
+ }
94
exc = ARMV7M_EXCP_BUS;
95
exc_secure = false;
96
goto pend_fault;
97
@@ -XXX,XX +XXX,XX @@ pend_fault:
98
* later if we have two derived exceptions.
99
* The only case when we must not pend the exception but instead
100
* throw it away is if we are doing the push of the callee registers
101
- * and we've already generated a derived exception. Even in this
102
- * case we will still update the fault status registers.
103
+ * and we've already generated a derived exception (this is indicated
104
+ * by the caller passing STACK_IGNFAULTS). Even in this case we will
105
+ * still update the fault status registers.
106
*/
70
*/
107
- if (!ignfault) {
71
- ARMCPU *cpu = env_archcpu(env);
108
+ switch (mode) {
72
- CPUState *cs = CPU(cpu);
109
+ case STACK_NORMAL:
73
+ CPUState *cs = env_cpu(env);
110
armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
74
+ int mask = e2_tlbmask(env);
111
+ break;
75
uint64_t pageaddr = sextract64(value << 12, 0, 56);
112
+ case STACK_LAZYFP:
76
113
+ armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
77
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E2);
114
+ break;
78
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
115
+ case STACK_IGNFAULTS:
116
+ break;
117
}
118
return false;
119
}
79
}
120
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
80
121
uint32_t limit;
81
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
122
bool want_psp;
123
uint32_t sig;
124
+ StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;
125
126
if (dotailchain) {
127
bool mode = lr & R_V7M_EXCRET_MODE_MASK;
128
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
129
*/
130
sig = v7m_integrity_sig(env, lr);
131
stacked_ok =
132
- v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
133
- v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
134
- ignore_faults) &&
135
- v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
136
- ignore_faults) &&
137
- v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx,
138
- ignore_faults) &&
139
- v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx,
140
- ignore_faults) &&
141
- v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx,
142
- ignore_faults) &&
143
- v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx,
144
- ignore_faults) &&
145
- v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx,
146
- ignore_faults) &&
147
- v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx,
148
- ignore_faults);
149
+ v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
150
+ v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
151
+ v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
152
+ v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
153
+ v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
154
+ v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
155
+ v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
156
+ v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
157
+ v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);
158
159
/* Update SP regardless of whether any of the stack accesses failed. */
160
*frame_sp_p = frameptr;
161
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
162
* if it has higher priority).
163
*/
164
stacked_ok = stacked_ok &&
165
- v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
166
- v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
167
- v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
168
- v7m_stack_write(cpu, frameptr + 12, env->regs[3], mmu_idx, false) &&
169
- v7m_stack_write(cpu, frameptr + 16, env->regs[12], mmu_idx, false) &&
170
- v7m_stack_write(cpu, frameptr + 20, env->regs[14], mmu_idx, false) &&
171
- v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
172
- v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
173
+ v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
174
+ v7m_stack_write(cpu, frameptr + 4, env->regs[1],
175
+ mmu_idx, STACK_NORMAL) &&
176
+ v7m_stack_write(cpu, frameptr + 8, env->regs[2],
177
+ mmu_idx, STACK_NORMAL) &&
178
+ v7m_stack_write(cpu, frameptr + 12, env->regs[3],
179
+ mmu_idx, STACK_NORMAL) &&
180
+ v7m_stack_write(cpu, frameptr + 16, env->regs[12],
181
+ mmu_idx, STACK_NORMAL) &&
182
+ v7m_stack_write(cpu, frameptr + 20, env->regs[14],
183
+ mmu_idx, STACK_NORMAL) &&
184
+ v7m_stack_write(cpu, frameptr + 24, env->regs[15],
185
+ mmu_idx, STACK_NORMAL) &&
186
+ v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);
187
188
if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
189
/* FPU is active, try to save its registers */
190
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
191
faddr += 8; /* skip the slot for the FPSCR */
192
}
193
stacked_ok = stacked_ok &&
194
- v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
195
- v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
196
+ v7m_stack_write(cpu, faddr, slo,
197
+ mmu_idx, STACK_NORMAL) &&
198
+ v7m_stack_write(cpu, faddr + 4, shi,
199
+ mmu_idx, STACK_NORMAL);
200
}
201
stacked_ok = stacked_ok &&
202
v7m_stack_write(cpu, frameptr + 0x60,
203
- vfp_get_fpscr(env), mmu_idx, false);
204
+ vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
205
if (cpacr_pass) {
206
for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
207
*aa32_vfp_dreg(env, i / 2) = 0;
208
--
82
--
209
2.20.1
83
2.20.1
210
84
211
85
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
3
The TGE bit routes all asynchronous exceptions to EL2.
4
Reviewed-by: Markus Armbruster <armbru@redhat.com>
4
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Tested-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20190412165416.7977-11-philmd@redhat.com
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20200206105448.4726-33-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
include/hw/net/ne2000-isa.h | 6 ++++++
11
target/arm/helper.c | 6 ++++++
10
1 file changed, 6 insertions(+)
12
1 file changed, 6 insertions(+)
11
13
12
diff --git a/include/hw/net/ne2000-isa.h b/include/hw/net/ne2000-isa.h
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/net/ne2000-isa.h
16
--- a/target/arm/helper.c
15
+++ b/include/hw/net/ne2000-isa.h
17
+++ b/target/arm/helper.c
16
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
17
* This work is licensed under the terms of the GNU GPL, version 2 or later.
19
break;
18
* See the COPYING file in the top-level directory.
20
};
19
*/
21
22
+ /*
23
+ * For these purposes, TGE and AMO/IMO/FMO both force the
24
+ * interrupt to EL2. Fold TGE into the bit extracted above.
25
+ */
26
+ hcr |= (hcr_el2 & HCR_TGE) != 0;
20
+
27
+
21
+#ifndef HW_NET_NE2K_ISA_H
28
/* Perform a table-lookup for the target EL given the current state */
22
+#define HW_NET_NE2K_ISA_H
29
target_el = target_el_table[is64][scr][rw][hcr][secure][cur_el];
23
+
30
24
#include "hw/hw.h"
25
#include "hw/qdev.h"
26
#include "hw/isa/isa.h"
27
@@ -XXX,XX +XXX,XX @@ static inline ISADevice *isa_ne2000_init(ISABus *bus, int base, int irq,
28
}
29
return d;
30
}
31
+
32
+#endif
33
--
31
--
34
2.20.1
32
2.20.1
35
33
36
34
Currently the code in v7m_push_stack() which detects a violation
of the v8M stack limit simply returns early if it does so. This
is OK for the current integer-only code, but won't work for the
floating point handling we're about to add. We need to continue
executing the rest of the function so that we check for other
exceptions like not having permission to use the FPU and so
that we correctly set the FPCCR state if we are doing lazy
stacking. Refactor to avoid the early return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
---
target/arm/helper.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
 * should ignore further stack faults trying to process
 * that derived exception.)
 */
- bool stacked_ok;
+ bool stacked_ok = true, limitviol = false;
CPUARMState *env = &cpu->env;
uint32_t xpsr = xpsr_read(env);
uint32_t frameptr = env->regs[13];
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
env->v7m.secure);
env->regs[13] = limit;
- return true;
+ /*
+ * We won't try to perform any further memory accesses but
+ * we must continue through the following code to check for
+ * permission faults during FPU state preservation, and we
+ * must update FPCCR if lazy stacking is enabled.
+ */
+ limitviol = true;
+ stacked_ok = false;
}
}

@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
 * (which may be taken in preference to the one we started with
 * if it has higher priority).
 */
- stacked_ok =
+ stacked_ok = stacked_ok &&
v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);

- /* Update SP regardless of whether any of the stack accesses failed. */
- env->regs[13] = frameptr;
+ /*
+ * If we broke a stack limit then SP was already updated earlier;
+ * otherwise we update SP regardless of whether any of the stack
+ * accesses failed or we took some other kind of fault.
+ */
+ if (!limitviol) {
+ env->regs[13] = frameptr;
+ }

return !stacked_ok;
}
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

When TGE+E2H are both set, CPACR_EL1 is ignored.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 53 ++++++++++++++++++++++++---------------------
1 file changed, 28 insertions(+), 25 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
int sve_exception_el(CPUARMState *env, int el)
{
#ifndef CONFIG_USER_ONLY
- if (el <= 1) {
+ uint64_t hcr_el2 = arm_hcr_el2_eff(env);
+
+ if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
bool disabled = false;

/* The CPACR.ZEN controls traps to EL1:
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
}
if (disabled) {
/* route_to_el2 */
- return (arm_feature(env, ARM_FEATURE_EL2)
- && (arm_hcr_el2_eff(env) & HCR_TGE) ? 2 : 1);
+ return hcr_el2 & HCR_TGE ? 2 : 1;
}

/* Check CPACR.FPEN. */
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(crc32c)(uint32_t acc, uint32_t val, uint32_t bytes)
int fp_exception_el(CPUARMState *env, int cur_el)
{
#ifndef CONFIG_USER_ONLY
- int fpen;
-
/* CPACR and the CPTR registers don't exist before v6, so FP is
 * always accessible
 */
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
 * 0, 2 : trap EL0 and EL1/PL1 accesses
 * 1 : trap only EL0 accesses
 * 3 : trap no accesses
+ * This register is ignored if E2H+TGE are both set.
 */
- fpen = extract32(env->cp15.cpacr_el1, 20, 2);
- switch (fpen) {
- case 0:
- case 2:
- if (cur_el == 0 || cur_el == 1) {
- /* Trap to PL1, which might be EL1 or EL3 */
- if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
+ if ((arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+ int fpen = extract32(env->cp15.cpacr_el1, 20, 2);
+
+ switch (fpen) {
+ case 0:
+ case 2:
+ if (cur_el == 0 || cur_el == 1) {
+ /* Trap to PL1, which might be EL1 or EL3 */
+ if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
+ return 3;
+ }
+ return 1;
+ }
+ if (cur_el == 3 && !is_a64(env)) {
+ /* Secure PL1 running at EL3 */
return 3;
}
- return 1;
+ break;
+ case 1:
+ if (cur_el == 0) {
+ return 1;
+ }
+ break;
+ case 3:
+ break;
}
}
- if (cur_el == 3 && !is_a64(env)) {
- /* Secure PL1 running at EL3 */
- return 3;
- }
- break;
- case 1:
- if (cur_el == 0) {
- return 1;
- }
- break;
- case 3:
- break;
}

/*
--
2.20.1
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-7-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/devices.h | 14 --------------
include/hw/misc/cbus.h | 32 ++++++++++++++++++++++++++++++++
hw/arm/nseries.c | 1 +
hw/misc/cbus.c | 2 +-
MAINTAINERS | 1 +
5 files changed, 35 insertions(+), 15 deletions(-)
create mode 100644 include/hw/misc/cbus.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
/* stellaris_input.c */
void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);

-/* cbus.c */
-typedef struct {
- qemu_irq clk;
- qemu_irq dat;
- qemu_irq sel;
-} CBus;
-CBus *cbus_init(qemu_irq dat_out);
-void cbus_attach(CBus *bus, void *slave_opaque);
-
-void *retu_init(qemu_irq irq, int vilma);
-void *tahvo_init(qemu_irq irq, int betty);
-
-void retu_key_event(void *retu, int state);
-
#endif
diff --git a/include/hw/misc/cbus.h b/include/hw/misc/cbus.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/cbus.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * CBUS three-pin bus and the Retu / Betty / Tahvo / Vilma / Avilma /
+ * Hinku / Vinku / Ahne / Pihi chips used in various Nokia platforms.
+ * Based on reverse-engineering of a linux driver.
+ *
+ * Copyright (C) 2008 Nokia Corporation
+ * Written by Andrzej Zaborowski
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_MISC_CBUS_H
+#define HW_MISC_CBUS_H
+
+#include "hw/irq.h"
+
+typedef struct {
+ qemu_irq clk;
+ qemu_irq dat;
+ qemu_irq sel;
+} CBus;
+
+CBus *cbus_init(qemu_irq dat_out);
+void cbus_attach(CBus *bus, void *slave_opaque);
+
+void *retu_init(qemu_irq irq, int vilma);
+void *tahvo_init(qemu_irq irq, int betty);
+
+void retu_key_event(void *retu, int state);
+
+#endif
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
#include "hw/i2c/i2c.h"
#include "hw/devices.h"
#include "hw/display/blizzard.h"
+#include "hw/misc/cbus.h"
#include "hw/misc/tmp105.h"
#include "hw/block/flash.h"
#include "hw/hw.h"
diff --git a/hw/misc/cbus.c b/hw/misc/cbus.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/cbus.c
+++ b/hw/misc/cbus.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "hw/hw.h"
#include "hw/irq.h"
-#include "hw/devices.h"
+#include "hw/misc/cbus.h"
#include "sysemu/sysemu.h"

//#define DEBUG
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
F: hw/misc/cbus.c
F: hw/timer/twl92230.c
F: include/hw/display/blizzard.h
+F: include/hw/misc/cbus.h

Palm
M: Andrzej Zaborowski <balrogg@gmail.com>
--
2.20.1

From: Alex Bennée <alex.bennee@linaro.org>

According to ARM ARM we should only trap from the EL1&0 regime.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/pauth_helper.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -XXX,XX +XXX,XX @@ static void pauth_check_trap(CPUARMState *env, int el, uintptr_t ra)
if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
uint64_t hcr = arm_hcr_el2_eff(env);
bool trap = !(hcr & HCR_API);
- /* FIXME: ARMv8.1-VHE: trap only applies to EL1&0 regime. */
+ if (el == 0) {
+ /* Trap only applies to EL1&0 regime. */
+ trap &= (hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE);
+ }
/* FIXME: ARMv8.3-NV: HCR_NV trap takes precedence for ERETA[AB]. */
if (trap) {
pauth_trap(env, 2, ra);
--
2.20.1
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
---
target/arm/cpu.h | 2 ++
target/arm/translate.h | 1 +
target/arm/helper.c | 13 +++++++++++++
target/arm/translate.c | 29 +++++++++++++++++++++++++++++
4 files changed, 45 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
FIELD(TBFLAG_A32, VFPEN, 7, 1)
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
+/* For M profile only, set if we must create a new FP context */
+FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
/* For M profile only, set if FPCCR.S does not match current security state */
FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
/* For M profile only, Handler (ie not Thread) mode */
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
bool v8m_secure; /* true if v8M and we're in Secure mode */
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
+ bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
 * so that top level loop can generate correct syndrome information.
 */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
}
}

+ if (arm_feature(env, ARM_FEATURE_M) &&
+ (env->v7m.fpccr[env->v7m.secure] & R_V7M_FPCCR_ASPEN_MASK) &&
+ (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) ||
+ (env->v7m.secure &&
+ !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)))) {
+ /*
+ * ASPEN is set, but FPCA/SFPA indicate that there is no active
+ * FP context; we must create a new FP context before executing
+ * any FP insn.
+ */
+ flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
+ }
+
*pflags = flags;
*cs_base = 0;
}
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
/* Don't need to do this for any further FP insns in this TB */
s->v8m_fpccr_s_wrong = false;
}
+
+ if (s->v7m_new_fp_ctxt_needed) {
+ /*
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA
+ * and the FPSCR.
+ */
+ TCGv_i32 control, fpscr;
+ uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
+
+ fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
+ tcg_temp_free_i32(fpscr);
+ /*
+ * We don't need to arrange to end the TB, because the only
+ * parts of FPSCR which we cache in the TB flags are the VECLEN
+ * and VECSTRIDE, and those don't exist for M-profile.
+ */
+
+ if (s->v8m_secure) {
+ bits |= R_V7M_CONTROL_SFPA_MASK;
+ }
+ control = load_cpu_field(v7m.control[M_REG_S]);
+ tcg_gen_ori_i32(control, control, bits);
+ store_cpu_field(control, v7m.control[M_REG_S]);
+ /* Don't need to do this for any further FP insns in this TB */
+ s->v7m_new_fp_ctxt_needed = false;
+ }
}

if (extract32(insn, 28, 4) == 0xf) {
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
regime_is_secure(env, dc->mmu_idx);
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
+ dc->v7m_new_fp_ctxt_needed =
+ FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
dc->cp_regs = cpu->cp_regs;
dc->features = env->features;

--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

The EL2&0 translation regime is affected by Load Register (unpriv).

The code structure used here will facilitate later changes in this
area for implementing UAO and NV.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 9 ++++----
target/arm/translate.h | 2 ++
target/arm/helper.c | 22 +++++++++++++++++++
target/arm/translate-a64.c | 44 ++++++++++++++++++++++++--------------
4 files changed, 57 insertions(+), 20 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef ARMCPU ArchCPU;
 * | | | TBFLAG_A32 | |
 * | | +-----+----------+ TBFLAG_AM32 |
 * | TBFLAG_ANY | |TBFLAG_M32| |
- * | | +-------------------------|
- * | | | TBFLAG_A64 |
- * +--------------+-----------+-------------------------+
- * 31 20 14 0
+ * | | +-+----------+--------------|
+ * | | | TBFLAG_A64 |
+ * +--------------+---------+---------------------------+
+ * 31 20 15 0
 *
 * Unless otherwise noted, these bits are cached in env->hflags.
 */
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
FIELD(TBFLAG_A64, BT, 9, 1)
FIELD(TBFLAG_A64, BTYPE, 10, 2) /* Not cached. */
FIELD(TBFLAG_A64, TBID, 12, 2)
+FIELD(TBFLAG_A64, UNPRIV, 14, 1)

static inline bool bswap_code(bool sctlr_b)
{
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
 * ie A64 LDX*, LDAX*, A32/T32 LDREX*, LDAEX*.
 */
bool is_ldex;
+ /* True if AccType_UNPRIV should be used for LDTR et al */
+ bool unpriv;
/* True if v8.3-PAuth is active. */
bool pauth_active;
/* True with v8.5-BTI and SCTLR_ELx.BT* set. */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
}
}

+ /* Compute the condition for using AccType_UNPRIV for LDTR et al. */
+ /* TODO: ARMv8.2-UAO */
+ switch (mmu_idx) {
+ case ARMMMUIdx_E10_1:
+ case ARMMMUIdx_SE10_1:
+ /* TODO: ARMv8.3-NV */
+ flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+ break;
+ case ARMMMUIdx_E20_2:
+ /* TODO: ARMv8.4-SecEL2 */
+ /*
+ * Note that E20_2 is gated by HCR_EL2.E2H == 1, but E20_0 is
+ * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
+ */
+ if (env->cp15.hcr_el2 & HCR_TGE) {
+ flags = FIELD_DP32(flags, TBFLAG_A64, UNPRIV, 1);
+ }
+ break;
+ default:
+ break;
+ }
+
return rebuild_hflags_common(env, fp_el, mmu_idx, flags);
}

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ void a64_translate_init(void)
offsetof(CPUARMState, exclusive_high), "exclusive_high");
}

-static inline int get_a64_user_mem_index(DisasContext *s)
+/*
+ * Return the core mmu_idx to use for A64 "unprivileged load/store" insns
+ */
+static int get_a64_user_mem_index(DisasContext *s)
{
- /* Return the core mmu_idx to use for A64 "unprivileged load/store" insns:
- * if EL1, access as if EL0; otherwise access at current EL
+ /*
+ * If AccType_UNPRIV is not used, the insn uses AccType_NORMAL,
+ * which is the usual mmu_idx for this cpu state.
 */
- ARMMMUIdx useridx;
+ ARMMMUIdx useridx = s->mmu_idx;

- switch (s->mmu_idx) {
- case ARMMMUIdx_E10_1:
- useridx = ARMMMUIdx_E10_0;
- break;
- case ARMMMUIdx_SE10_1:
- useridx = ARMMMUIdx_SE10_0;
- break;
- case ARMMMUIdx_Stage2:
- g_assert_not_reached();
- default:
- useridx = s->mmu_idx;
- break;
+ if (s->unpriv) {
+ /*
+ * We have pre-computed the condition for AccType_UNPRIV.
+ * Therefore we should never get here with a mmu_idx for
+ * which we do not know the corresponding user mmu_idx.
+ */
+ switch (useridx) {
+ case ARMMMUIdx_E10_1:
+ useridx = ARMMMUIdx_E10_0;
+ break;
+ case ARMMMUIdx_E20_2:
+ useridx = ARMMMUIdx_E20_0;
+ break;
+ case ARMMMUIdx_SE10_1:
+ useridx = ARMMMUIdx_SE10_0;
+ break;
+ default:
+ g_assert_not_reached();
+ }
}
return arm_to_core_mmu_idx(useridx);
}
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->pauth_active = FIELD_EX32(tb_flags, TBFLAG_A64, PAUTH_ACTIVE);
dc->bt = FIELD_EX32(tb_flags, TBFLAG_A64, BT);
dc->btype = FIELD_EX32(tb_flags, TBFLAG_A64, BTYPE);
+ dc->unpriv = FIELD_EX32(tb_flags, TBFLAG_A64, UNPRIV);
dc->vec_len = 0;
dc->vec_stride = 0;
dc->cp_regs = arm_cpu->cp_regs;

--
2.20.1
If the floating point extension is present, then the SG instruction
must clear the CONTROL_S.SFPA bit. Implement this.

(On a no-FPU system the bit will always be zero, so we don't need
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
---
target/arm/helper.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
", executing it\n", env->regs[15]);
env->regs[14] &= ~1;
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
switch_v7m_security_state(env, true);
xpsr_write(env, 0, XPSR_IT);
env->regs[15] += 4;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

When VHE is enabled, the exception level below EL2 is not EL1,
but EL0, and so to identify the entry vector offset for exceptions
targeting EL2 we need to look at the width of EL0, not of EL1.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-37-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 * immediately lower than the target level is using AArch32 or AArch64
 */
bool is_aa64;
+ uint64_t hcr;

switch (new_el) {
case 3:
is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
break;
case 2:
- is_aa64 = (env->cp15.hcr_el2 & HCR_RW) != 0;
- break;
+ hcr = arm_hcr_el2_eff(env);
+ if ((hcr & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+ is_aa64 = (hcr & HCR_RW) != 0;
+ break;
+ }
+ /* fall through */
case 1:
is_aa64 = is_a64(env);
break;
--
2.20.1
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Add an entries the Blizzard device in MAINTAINERS.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-6-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/devices.h | 7 -------
include/hw/display/blizzard.h | 22 ++++++++++++++++++++++
hw/arm/nseries.c | 1 +
hw/display/blizzard.c | 2 +-
MAINTAINERS | 2 ++
5 files changed, 26 insertions(+), 8 deletions(-)
create mode 100644 include/hw/display/blizzard.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
/* stellaris_input.c */
void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);

-/* blizzard.c */
-void *s1d13745_init(qemu_irq gpio_int);
-void s1d13745_write(void *opaque, int dc, uint16_t value);
-void s1d13745_write_block(void *opaque, int dc,
- void *buf, size_t len, int pitch);
-uint16_t s1d13745_read(void *opaque, int dc);
-
/* cbus.c */
typedef struct {
qemu_irq clk;
diff --git a/include/hw/display/blizzard.h b/include/hw/display/blizzard.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/display/blizzard.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Epson S1D13744/S1D13745 (Blizzard/Hailstorm/Tornado) LCD/TV controller.
+ *
+ * Copyright (C) 2008 Nokia Corporation
+ * Written by Andrzej Zaborowski
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_DISPLAY_BLIZZARD_H
+#define HW_DISPLAY_BLIZZARD_H
+
+#include "hw/irq.h"
+
+void *s1d13745_init(qemu_irq gpio_int);
+void s1d13745_write(void *opaque, int dc, uint16_t value);
+void s1d13745_write_block(void *opaque, int dc,
+ void *buf, size_t len, int pitch);
+uint16_t s1d13745_read(void *opaque, int dc);
+
+#endif
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
#include "hw/boards.h"
#include "hw/i2c/i2c.h"
#include "hw/devices.h"
+#include "hw/display/blizzard.h"
#include "hw/misc/tmp105.h"
#include "hw/block/flash.h"
#include "hw/hw.h"
diff --git a/hw/display/blizzard.c b/hw/display/blizzard.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/blizzard.c
+++ b/hw/display/blizzard.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/osdep.h"
#include "qemu-common.h"
#include "ui/console.h"
-#include "hw/devices.h"
+#include "hw/display/blizzard.h"
#include "ui/pixel_ops.h"

typedef void (*blizzard_fn_t)(uint8_t *, const uint8_t *, unsigned int);
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
L: qemu-arm@nongnu.org
S: Odd Fixes
F: hw/arm/nseries.c
+F: hw/display/blizzard.c
F: hw/input/lm832x.c
F: hw/input/tsc2005.c
F: hw/misc/cbus.c
F: hw/timer/twl92230.c
+F: include/hw/display/blizzard.h

Palm
M: Andrzej Zaborowski <balrogg@gmail.com>
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-38-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu64.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = cpu->isar.id_aa64mmfr1;
t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
+ t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
cpu->isar.id_aa64mmfr1 = t;

/* Replicate the same data to the 32-bit id registers. */
--
2.20.1
In the v7M architecture, if an exception is generated in the process
of doing the lazy stacking of FP registers, the handling of
possible escalation to HardFault is treated differently to the normal
approach: it works based on the saved information about exception
readiness that was stored in the FPCCR when the stack frame was
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
which pends exceptions during lazy stacking, and implements
this logic.

This corresponds to the pseudocode TakePreserveFPException().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-22-peter.maydell@linaro.org
---
target/arm/cpu.h | 12 ++++++
hw/intc/armv7m_nvic.c | 96 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 108 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
 * a different exception).
 */
void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
+/**
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
+ * @opaque: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
+ * generated in the course of lazy stacking of FP registers.
+ */
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
/**
 * armv7m_nvic_get_pending_irq_info: return highest priority pending
 * exception, and whether it targets Secure state
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c

From: Richard Henderson <richard.henderson@linaro.org>

This inline function has one user in cpu.c, and need not be exposed
otherwise. Code movement only, with fixups for checkpatch.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-39-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 111 -------------------------------------------
target/arm/cpu.c | 119 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 119 insertions(+), 111 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
#define ARM_CPUID_TI915T 0x54029152
#define ARM_CPUID_TI925T 0x54029252

-static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
- unsigned int target_el)
-{
- CPUARMState *env = cs->env_ptr;
- unsigned int cur_el = arm_current_el(env);
- bool secure = arm_is_secure(env);
- bool pstate_unmasked;
- int8_t unmasked = 0;
- uint64_t hcr_el2;
-
- /* Don't take exceptions if they target a lower EL.
- * This check should catch any exceptions that would not be taken but left
- * pending.
- */
- if (cur_el > target_el) {
- return false;
- }
-
- hcr_el2 = arm_hcr_el2_eff(env);
-
- switch (excp_idx) {
- case EXCP_FIQ:
- pstate_unmasked = !(env->daif & PSTATE_F);
- break;
-
- case EXCP_IRQ:
- pstate_unmasked = !(env->daif & PSTATE_I);
- break;
-
- case EXCP_VFIQ:
- if (secure || !(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
- /* VFIQs are only taken when hypervized and non-secure. */
- return false;
- }
- return !(env->daif & PSTATE_F);
- case EXCP_VIRQ:
- if (secure || !(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
- /* VIRQs are only taken when hypervized and non-secure. */
- return false;
- }
- return !(env->daif & PSTATE_I);
- default:
- g_assert_not_reached();
- }
-
- /* Use the target EL, current execution state and SCR/HCR settings to
- * determine whether the corresponding CPSR bit is used to mask the
- * interrupt.
- */
- if ((target_el > cur_el) && (target_el != 1)) {
- /* Exceptions targeting a higher EL may not be maskable */
- if (arm_feature(env, ARM_FEATURE_AARCH64)) {
- /* 64-bit masking rules are simple: exceptions to EL3
- * can't be masked, and exceptions to EL2 can only be
- * masked from Secure state. The HCR and SCR settings
- * don't affect the masking logic, only the interrupt routing.
- */
- if (target_el == 3 || !secure) {
- unmasked = 1;
- }
- } else {
- /* The old 32-bit-only environment has a more complicated
- * masking setup. HCR and SCR bits not only affect interrupt
- * routing but also change the behaviour of masking.
- */
- bool hcr, scr;
-
- switch (excp_idx) {
- case EXCP_FIQ:
- /* If FIQs are routed to EL3 or EL2 then there are cases where
- * we override the CPSR.F in determining if the exception is
- * masked or not. If neither of these are set then we fall back
- * to the CPSR.F setting otherwise we further assess the state
- * below.
- */
- hcr = hcr_el2 & HCR_FMO;
- scr = (env->cp15.scr_el3 & SCR_FIQ);
-
- /* When EL3 is 32-bit, the SCR.FW bit controls whether the
- * CPSR.F bit masks FIQ interrupts when taken in non-secure
- * state. If SCR.FW is set then FIQs can be masked by CPSR.F
- * when non-secure but only when FIQs are only routed to EL3.
- */
- scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr);
- break;
- case EXCP_IRQ:
- /* When EL3 execution state is 32-bit, if HCR.IMO is set then
- * we may override the CPSR.I masking when in non-secure state.
- * The SCR.IRQ setting has already been taken into consideration
- * when setting the target EL, so it does not have a further
- * affect here.
- */
- hcr = hcr_el2 & HCR_IMO;
- scr = false;
- break;
- default:
- g_assert_not_reached();
- }
-
- if ((scr || hcr) && !secure) {
- unmasked = 1;
125
- }
126
- }
127
- }
128
-
129
- /* The PSTATE bits only mask the interrupt if we have not overriden the
130
- * ability above.
131
- */
132
- return unmasked || pstate_unmasked;
133
-}
134
-
135
#define ARM_CPU_TYPE_SUFFIX "-" TYPE_ARM_CPU
136
#define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
137
#define CPU_RESOLVING_TYPE TYPE_ARM_CPU
138
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
44
index XXXXXXX..XXXXXXX 100644
139
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/intc/armv7m_nvic.c
140
--- a/target/arm/cpu.c
46
+++ b/hw/intc/armv7m_nvic.c
141
+++ b/target/arm/cpu.c
47
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
142
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
48
do_armv7m_nvic_set_pending(opaque, irq, secure, true);
143
arm_rebuild_hflags(env);
49
}
144
}
50
145
51
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
146
+static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
147
+ unsigned int target_el)
52
+{
148
+{
149
+ CPUARMState *env = cs->env_ptr;
150
+ unsigned int cur_el = arm_current_el(env);
151
+ bool secure = arm_is_secure(env);
152
+ bool pstate_unmasked;
153
+ int8_t unmasked = 0;
154
+ uint64_t hcr_el2;
155
+
53
+ /*
156
+ /*
54
+ * Pend an exception during lazy FP stacking. This differs
157
+ * Don't take exceptions if they target a lower EL.
55
+ * from the usual exception pending because the logic for
158
+ * This check should catch any exceptions that would not be taken
56
+ * whether we should escalate depends on the saved context
159
+ * but left pending.
57
+ * in the FPCCR register, not on the current state of the CPU/NVIC.
58
+ */
160
+ */
59
+ NVICState *s = (NVICState *)opaque;
161
+ if (cur_el > target_el) {
60
+ bool banked = exc_is_banked(irq);
162
+ return false;
61
+ VecInfo *vec;
163
+ }
62
+ bool targets_secure;
164
+
63
+ bool escalate = false;
165
+ hcr_el2 = arm_hcr_el2_eff(env);
64
+ /*
166
+
65
+ * We will only look at bits in fpccr if this is a banked exception
167
+ switch (excp_idx) {
66
+ * (in which case 'secure' tells us whether it is the S or NS version).
168
+ case EXCP_FIQ:
67
+ * All the bits for the non-banked exceptions are in fpccr_s.
169
+ pstate_unmasked = !(env->daif & PSTATE_F);
68
+ */
170
+ break;
69
+ uint32_t fpccr_s = s->cpu->env.v7m.fpccr[M_REG_S];
171
+
70
+ uint32_t fpccr = s->cpu->env.v7m.fpccr[secure];
172
+ case EXCP_IRQ:
71
+
173
+ pstate_unmasked = !(env->daif & PSTATE_I);
72
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
174
+ break;
73
+ assert(!secure || banked);
175
+
74
+
176
+ case EXCP_VFIQ:
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
177
+ if (secure || !(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
76
+
178
+ /* VFIQs are only taken when hypervized and non-secure. */
77
+ targets_secure = banked ? secure : exc_targets_secure(s, irq);
179
+ return false;
78
+
79
+ switch (irq) {
80
+ case ARMV7M_EXCP_DEBUG:
81
+ if (!(fpccr_s & R_V7M_FPCCR_MONRDY_MASK)) {
82
+ /* Ignore DebugMonitor exception */
83
+ return;
84
+ }
180
+ }
85
+ break;
181
+ return !(env->daif & PSTATE_F);
86
+ case ARMV7M_EXCP_MEM:
182
+ case EXCP_VIRQ:
87
+ escalate = !(fpccr & R_V7M_FPCCR_MMRDY_MASK);
183
+ if (secure || !(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
88
+ break;
184
+ /* VIRQs are only taken when hypervized and non-secure. */
89
+ case ARMV7M_EXCP_USAGE:
185
+ return false;
90
+ escalate = !(fpccr & R_V7M_FPCCR_UFRDY_MASK);
186
+ }
91
+ break;
187
+ return !(env->daif & PSTATE_I);
92
+ case ARMV7M_EXCP_BUS:
93
+ escalate = !(fpccr_s & R_V7M_FPCCR_BFRDY_MASK);
94
+ break;
95
+ case ARMV7M_EXCP_SECURE:
96
+ escalate = !(fpccr_s & R_V7M_FPCCR_SFRDY_MASK);
97
+ break;
98
+ default:
188
+ default:
99
+ g_assert_not_reached();
189
+ g_assert_not_reached();
100
+ }
190
+ }
101
+
191
+
102
+ if (escalate) {
192
+ /*
103
+ /*
193
+ * Use the target EL, current execution state and SCR/HCR settings to
104
+ * Escalate to HardFault: faults that initially targeted Secure
194
+ * determine whether the corresponding CPSR bit is used to mask the
105
+ * continue to do so, even if HF normally targets NonSecure.
195
+ * interrupt.
106
+ */
196
+ */
107
+ irq = ARMV7M_EXCP_HARD;
197
+ if ((target_el > cur_el) && (target_el != 1)) {
108
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
198
+ /* Exceptions targeting a higher EL may not be maskable */
109
+ (targets_secure ||
199
+ if (arm_feature(env, ARM_FEATURE_AARCH64)) {
110
+ !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))) {
200
+ /*
111
+ vec = &s->sec_vectors[irq];
201
+ * 64-bit masking rules are simple: exceptions to EL3
202
+ * can't be masked, and exceptions to EL2 can only be
203
+ * masked from Secure state. The HCR and SCR settings
204
+ * don't affect the masking logic, only the interrupt routing.
205
+ */
206
+ if (target_el == 3 || !secure) {
207
+ unmasked = 1;
208
+ }
112
+ } else {
209
+ } else {
113
+ vec = &s->vectors[irq];
210
+ /*
211
+ * The old 32-bit-only environment has a more complicated
212
+ * masking setup. HCR and SCR bits not only affect interrupt
213
+ * routing but also change the behaviour of masking.
214
+ */
215
+ bool hcr, scr;
216
+
217
+ switch (excp_idx) {
218
+ case EXCP_FIQ:
219
+ /*
220
+ * If FIQs are routed to EL3 or EL2 then there are cases where
221
+ * we override the CPSR.F in determining if the exception is
222
+ * masked or not. If neither of these are set then we fall back
223
+ * to the CPSR.F setting otherwise we further assess the state
224
+ * below.
225
+ */
226
+ hcr = hcr_el2 & HCR_FMO;
227
+ scr = (env->cp15.scr_el3 & SCR_FIQ);
228
+
229
+ /*
230
+ * When EL3 is 32-bit, the SCR.FW bit controls whether the
231
+ * CPSR.F bit masks FIQ interrupts when taken in non-secure
232
+ * state. If SCR.FW is set then FIQs can be masked by CPSR.F
233
+ * when non-secure but only when FIQs are only routed to EL3.
234
+ */
235
+ scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr);
236
+ break;
237
+ case EXCP_IRQ:
238
+ /*
239
+ * When EL3 execution state is 32-bit, if HCR.IMO is set then
240
+ * we may override the CPSR.I masking when in non-secure state.
241
+ * The SCR.IRQ setting has already been taken into consideration
242
+ * when setting the target EL, so it does not have a further
243
+ * affect here.
244
+ */
245
+ hcr = hcr_el2 & HCR_IMO;
246
+ scr = false;
247
+ break;
248
+ default:
249
+ g_assert_not_reached();
250
+ }
251
+
252
+ if ((scr || hcr) && !secure) {
253
+ unmasked = 1;
254
+ }
114
+ }
255
+ }
115
+ }
256
+ }
116
+
257
+
117
+ if (!vec->enabled ||
258
+ /*
118
+ nvic_exec_prio(s) <= exc_group_prio(s, vec->prio, secure)) {
259
+ * The PSTATE bits only mask the interrupt if we have not overriden the
119
+ if (!(fpccr_s & R_V7M_FPCCR_HFRDY_MASK)) {
260
+ * ability above.
120
+ /*
261
+ */
121
+ * We want to escalate to HardFault but the context the
262
+ return unmasked || pstate_unmasked;
122
+ * FP state belongs to prevents the exception pre-empting.
123
+ */
124
+ cpu_abort(&s->cpu->parent_obj,
125
+ "Lockup: can't escalate to HardFault during "
126
+ "lazy FP register stacking\n");
127
+ }
128
+ }
129
+
130
+ if (escalate) {
131
+ s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
132
+ }
133
+ if (!vec->pending) {
134
+ vec->pending = 1;
135
+ /*
136
+ * We do not call nvic_irq_update(), because we know our caller
137
+ * is going to handle causing us to take the exception by
138
+ * raising EXCP_LAZYFP, so raising the IRQ line would be
139
+ * pointless extra work. We just need to recompute the
140
+ * priorities so that armv7m_nvic_can_take_pending_exception()
141
+ * returns the right answer.
142
+ */
143
+ nvic_recompute_state(s);
144
+ }
145
+}
263
+}
146
+
264
+
147
/* Make pending IRQ active. */
265
bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
148
void armv7m_nvic_acknowledge_irq(void *opaque)
149
{
266
{
267
CPUClass *cc = CPU_GET_CLASS(cs);
--
2.20.1
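The escalation rule in the armv7m_nvic_set_pending_lazyfp() patch above keys off the *RDY bits saved in FPCCR rather than live NVIC state. A minimal sketch of that decision, with invented stand-in masks (not the real R_V7M_FPCCR_* definitions):

```c
#include <assert.h>

/* Sketch of the lazy-FP escalation rule: a fault raised during lazy FP
 * stacking escalates to HardFault when the corresponding "ready" bit
 * recorded in FPCCR is clear. DEMO_* masks are hypothetical. */
#define DEMO_MMRDY (1u << 0)
#define DEMO_BFRDY (1u << 1)

int escalates(unsigned fpccr, unsigned rdy_mask)
{
    /* not ready => the original exception cannot be taken, escalate */
    return !(fpccr & rdy_mask);
}
```

The point is that the check consults the saved context, not the current CPU/NVIC state, exactly as the comment in the patch explains.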
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-5-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h          |  6 ------
 include/hw/display/tc6393xb.h | 24 ++++++++++++++++++++++++
 hw/arm/tosa.c                 |  2 +-
 hw/display/tc6393xb.c         |  2 +-
 MAINTAINERS                   |  1 +
 5 files changed, 27 insertions(+), 8 deletions(-)
 create mode 100644 include/hw/display/tc6393xb.h

From: Richard Henderson <richard.henderson@linaro.org>

Avoid redundant computation of cpu state by passing it in
from the caller, which has already computed it for itself.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-40-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)
14
16
diff --git a/include/hw/devices.h b/include/hw/devices.h
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/devices.h
17
--- a/target/arm/cpu.c
19
+++ b/include/hw/devices.h
18
+++ b/target/arm/cpu.c
20
@@ -XXX,XX +XXX,XX @@ void *tahvo_init(qemu_irq irq, int betty);
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
21
20
}
22
void retu_key_event(void *retu, int state);
21
23
22
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
24
-/* tc6393xb.c */
23
- unsigned int target_el)
25
-typedef struct TC6393xbState TC6393xbState;
24
+ unsigned int target_el,
26
-TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
25
+ unsigned int cur_el, bool secure,
27
- uint32_t base, qemu_irq irq);
26
+ uint64_t hcr_el2)
28
-qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
27
{
28
CPUARMState *env = cs->env_ptr;
29
- unsigned int cur_el = arm_current_el(env);
30
- bool secure = arm_is_secure(env);
31
bool pstate_unmasked;
32
int8_t unmasked = 0;
33
- uint64_t hcr_el2;
34
35
/*
36
* Don't take exceptions if they target a lower EL.
37
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
38
return false;
39
}
40
41
- hcr_el2 = arm_hcr_el2_eff(env);
29
-
42
-
30
#endif
43
switch (excp_idx) {
31
diff --git a/include/hw/display/tc6393xb.h b/include/hw/display/tc6393xb.h
44
case EXCP_FIQ:
32
new file mode 100644
45
pstate_unmasked = !(env->daif & PSTATE_F);
33
index XXXXXXX..XXXXXXX
46
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
34
--- /dev/null
47
CPUARMState *env = cs->env_ptr;
35
+++ b/include/hw/display/tc6393xb.h
48
uint32_t cur_el = arm_current_el(env);
36
@@ -XXX,XX +XXX,XX @@
49
bool secure = arm_is_secure(env);
37
+/*
50
+ uint64_t hcr_el2 = arm_hcr_el2_eff(env);
38
+ * Toshiba TC6393XB I/O Controller.
51
uint32_t target_el;
39
+ * Found in Sharp Zaurus SL-6000 (tosa) or some
52
uint32_t excp_idx;
40
+ * Toshiba e-Series PDAs.
53
bool ret = false;
41
+ *
54
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
42
+ * Copyright (c) 2007 Hervé Poussineau
55
if (interrupt_request & CPU_INTERRUPT_FIQ) {
43
+ *
56
excp_idx = EXCP_FIQ;
44
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
57
target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
45
+ * See the COPYING file in the top-level directory.
58
- if (arm_excp_unmasked(cs, excp_idx, target_el)) {
46
+ */
59
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
47
+
60
+ cur_el, secure, hcr_el2)) {
48
+#ifndef HW_DISPLAY_TC6393XB_H
61
cs->exception_index = excp_idx;
49
+#define HW_DISPLAY_TC6393XB_H
62
env->exception.target_el = target_el;
50
+
63
cc->do_interrupt(cs);
51
+#include "exec/memory.h"
64
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
52
+#include "hw/irq.h"
65
if (interrupt_request & CPU_INTERRUPT_HARD) {
53
+
66
excp_idx = EXCP_IRQ;
54
+typedef struct TC6393xbState TC6393xbState;
67
target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
55
+
68
- if (arm_excp_unmasked(cs, excp_idx, target_el)) {
56
+TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
69
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
57
+ uint32_t base, qemu_irq irq);
70
+ cur_el, secure, hcr_el2)) {
58
+qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
71
cs->exception_index = excp_idx;
59
+
72
env->exception.target_el = target_el;
60
+#endif
73
cc->do_interrupt(cs);
61
diff --git a/hw/arm/tosa.c b/hw/arm/tosa.c
74
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
62
index XXXXXXX..XXXXXXX 100644
75
if (interrupt_request & CPU_INTERRUPT_VIRQ) {
63
--- a/hw/arm/tosa.c
76
excp_idx = EXCP_VIRQ;
64
+++ b/hw/arm/tosa.c
77
target_el = 1;
65
@@ -XXX,XX +XXX,XX @@
78
- if (arm_excp_unmasked(cs, excp_idx, target_el)) {
66
#include "hw/hw.h"
79
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
67
#include "hw/arm/pxa.h"
80
+ cur_el, secure, hcr_el2)) {
68
#include "hw/arm/arm.h"
81
cs->exception_index = excp_idx;
69
-#include "hw/devices.h"
82
env->exception.target_el = target_el;
70
#include "hw/arm/sharpsl.h"
83
cc->do_interrupt(cs);
71
#include "hw/pcmcia.h"
84
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
72
#include "hw/boards.h"
85
if (interrupt_request & CPU_INTERRUPT_VFIQ) {
73
+#include "hw/display/tc6393xb.h"
86
excp_idx = EXCP_VFIQ;
74
#include "hw/i2c/i2c.h"
87
target_el = 1;
75
#include "hw/ssi/ssi.h"
88
- if (arm_excp_unmasked(cs, excp_idx, target_el)) {
76
#include "hw/sysbus.h"
89
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
77
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
90
+ cur_el, secure, hcr_el2)) {
78
index XXXXXXX..XXXXXXX 100644
91
cs->exception_index = excp_idx;
79
--- a/hw/display/tc6393xb.c
92
env->exception.target_el = target_el;
80
+++ b/hw/display/tc6393xb.c
93
cc->do_interrupt(cs);
81
@@ -XXX,XX +XXX,XX @@
82
#include "qapi/error.h"
83
#include "qemu/host-utils.h"
84
#include "hw/hw.h"
85
-#include "hw/devices.h"
86
+#include "hw/display/tc6393xb.h"
87
#include "hw/block/flash.h"
88
#include "ui/console.h"
89
#include "ui/pixel_ops.h"
90
diff --git a/MAINTAINERS b/MAINTAINERS
91
index XXXXXXX..XXXXXXX 100644
92
--- a/MAINTAINERS
93
+++ b/MAINTAINERS
94
@@ -XXX,XX +XXX,XX @@ F: hw/misc/mst_fpga.c
95
F: hw/misc/max111x.c
96
F: include/hw/arm/pxa.h
97
F: include/hw/arm/sharpsl.h
98
+F: include/hw/display/tc6393xb.h
99
100
SABRELITE / i.MX6
101
M: Peter Maydell <peter.maydell@linaro.org>
102
--
94
--
103
2.20.1
95
2.20.1
104
96
105
97
Enable the FPU by default for the Cortex-M4 and Cortex-M33.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-27-peter.maydell@linaro.org
---
 target/arm/cpu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

From: Richard Henderson <richard.henderson@linaro.org>

The value computed is fully boolean; using int8_t is odd.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-41-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
13
10
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
11
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
12
--- a/target/arm/cpu.c
16
--- a/target/arm/cpu.c
13
+++ b/target/arm/cpu.c
17
+++ b/target/arm/cpu.c
14
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
15
set_feature(&cpu->env, ARM_FEATURE_M);
19
{
16
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
20
CPUARMState *env = cs->env_ptr;
17
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
21
bool pstate_unmasked;
18
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
22
- int8_t unmasked = 0;
19
cpu->midr = 0x410fc240; /* r0p0 */
23
+ bool unmasked = false;
20
cpu->pmsav7_dregion = 8;
24
21
+ cpu->isar.mvfr0 = 0x10110021;
25
/*
22
+ cpu->isar.mvfr1 = 0x11000011;
26
* Don't take exceptions if they target a lower EL.
23
+ cpu->isar.mvfr2 = 0x00000000;
27
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
24
cpu->id_pfr0 = 0x00000030;
28
* don't affect the masking logic, only the interrupt routing.
25
cpu->id_pfr1 = 0x00000200;
29
*/
26
cpu->id_dfr0 = 0x00100000;
30
if (target_el == 3 || !secure) {
27
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
31
- unmasked = 1;
28
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
32
+ unmasked = true;
29
set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
33
}
30
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
34
} else {
31
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
35
/*
32
cpu->midr = 0x410fd213; /* r0p3 */
36
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
33
cpu->pmsav7_dregion = 16;
37
}
34
cpu->sau_sregion = 8;
38
35
+ cpu->isar.mvfr0 = 0x10110021;
39
if ((scr || hcr) && !secure) {
36
+ cpu->isar.mvfr1 = 0x11000011;
40
- unmasked = 1;
37
+ cpu->isar.mvfr2 = 0x00000040;
41
+ unmasked = true;
38
cpu->id_pfr0 = 0x00000030;
42
}
39
cpu->id_pfr1 = 0x00000210;
43
}
40
cpu->id_dfr0 = 0x00200000;
44
}
41
--
45
--
42
2.20.1
46
2.20.1
43
47
44
48
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-2-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/aspeed.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

The fall-through organization of this function meant that we
would raise an interrupt, then might overwrite that with another.
Since interrupt prioritization is IMPLEMENTATION DEFINED, we
can recognize these in any order we choose.

Unify the code to raise the interrupt in a block at the end.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200206105448.4726-42-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 30 ++++++++++++------------------
 1 file changed, 12 insertions(+), 18 deletions(-)
13
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
19
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/aspeed.c
21
--- a/target/arm/cpu.c
16
+++ b/hw/arm/aspeed.c
22
+++ b/target/arm/cpu.c
17
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
18
#include "hw/arm/aspeed_soc.h"
24
uint64_t hcr_el2 = arm_hcr_el2_eff(env);
19
#include "hw/boards.h"
25
uint32_t target_el;
20
#include "hw/i2c/smbus_eeprom.h"
26
uint32_t excp_idx;
21
+#include "hw/misc/pca9552.h"
27
- bool ret = false;
22
+#include "hw/misc/tmp105.h"
28
+
23
#include "qemu/log.h"
29
+ /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
24
#include "sysemu/block-backend.h"
30
25
#include "hw/loader.h"
31
if (interrupt_request & CPU_INTERRUPT_FIQ) {
26
@@ -XXX,XX +XXX,XX @@ static void ast2500_evb_i2c_init(AspeedBoardState *bmc)
32
excp_idx = EXCP_FIQ;
27
eeprom_buf);
33
target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
28
34
if (arm_excp_unmasked(cs, excp_idx, target_el,
29
/* The AST2500 EVB expects a LM75 but a TMP105 is compatible */
35
cur_el, secure, hcr_el2)) {
30
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d);
36
- cs->exception_index = excp_idx;
31
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7),
37
- env->exception.target_el = target_el;
32
+ TYPE_TMP105, 0x4d);
38
- cc->do_interrupt(cs);
33
39
- ret = true;
34
/* The AST2500 EVB does not have an RTC. Let's pretend that one is
40
+ goto found;
35
* plugged on the I2C bus header */
41
}
36
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
42
}
37
AspeedSoCState *soc = &bmc->soc;
43
if (interrupt_request & CPU_INTERRUPT_HARD) {
38
uint8_t *eeprom_buf = g_malloc0(8 * 1024);
44
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
39
45
target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
40
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), "pca9552", 0x60);
46
if (arm_excp_unmasked(cs, excp_idx, target_el,
41
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), TYPE_PCA9552,
47
cur_el, secure, hcr_el2)) {
42
+ 0x60);
48
- cs->exception_index = excp_idx;
43
49
- env->exception.target_el = target_el;
44
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), "tmp423", 0x4c);
50
- cc->do_interrupt(cs);
45
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 5), "tmp423", 0x4c);
51
- ret = true;
46
52
+ goto found;
47
/* The Witherspoon expects a TMP275 but a TMP105 is compatible */
53
}
48
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), "tmp105", 0x4a);
54
}
49
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), TYPE_TMP105,
55
if (interrupt_request & CPU_INTERRUPT_VIRQ) {
50
+ 0x4a);
56
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
51
57
target_el = 1;
52
/* The witherspoon board expects Epson RX8900 I2C RTC but a ds1338 is
58
if (arm_excp_unmasked(cs, excp_idx, target_el,
53
* good enough */
59
cur_el, secure, hcr_el2)) {
54
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
60
- cs->exception_index = excp_idx;
55
61
- env->exception.target_el = target_el;
56
smbus_eeprom_init_one(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), 0x51,
62
- cc->do_interrupt(cs);
57
eeprom_buf);
63
- ret = true;
58
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), "pca9552",
64
+ goto found;
59
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), TYPE_PCA9552,
65
}
60
0x60);
66
}
67
if (interrupt_request & CPU_INTERRUPT_VFIQ) {
68
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
69
target_el = 1;
70
if (arm_excp_unmasked(cs, excp_idx, target_el,
71
cur_el, secure, hcr_el2)) {
72
- cs->exception_index = excp_idx;
73
- env->exception.target_el = target_el;
74
- cc->do_interrupt(cs);
75
- ret = true;
76
+ goto found;
77
}
78
}
79
+ return false;
80
81
- return ret;
82
+ found:
83
+ cs->exception_index = excp_idx;
84
+ env->exception.target_el = target_el;
85
+ cc->do_interrupt(cs);
86
+ return true;
61
}
87
}
62
88
89
#if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
63
--
90
--
64
2.20.1
91
2.20.1
65
92
66
93
Normally configure identifies the source path by looking
at the location where the configure script itself exists.
We also provide a --source-path option which lets the user
manually override this.

There isn't really an obvious use case for the --source-path
option, and in commit 927128222b0a91f56c13a in 2017 we
accidentally added some logic that looks at $source_path
before the command line option that overrides it has been
processed.

The fact that nobody complained suggests that there isn't
any use of this option and we aren't testing it either;
remove it. This allows us to move the "make $source_path
absolute" logic up so that there is no window in the script
where $source_path is set but not yet absolute.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20190318134019.23729-1-peter.maydell@linaro.org
---
 configure | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

From: Rene Stange <rsta2@o2online.de>

In TD (two dimensions) DMA mode ylen has to be increased by one after
reading it from the TXFR_LEN register, because a value of zero has to
result in one run through of the ylen loop. This has been tested on a
real Raspberry Pi 3 Model B+. In the previous implementation the ylen
loop was not executed at all for a value of zero.

Signed-off-by: Rene Stange <rsta2@o2online.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/bcm2835_dma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/dma/bcm2835_dma.c b/hw/dma/bcm2835_dma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/bcm2835_dma.c
+++ b/hw/dma/bcm2835_dma.c
@@ -XXX,XX +XXX,XX @@ static void bcm2835_dma_update(BCM2835DMAState *s, unsigned c)
24
22
ch->nextconbk = ldl_le_phys(&s->dma_as, ch->conblk_ad + 20);
25
diff --git a/configure b/configure
23
26
index XXXXXXX..XXXXXXX 100755
24
+ ylen = 1;
27
--- a/configure
25
if (ch->ti & BCM2708_DMA_TDMODE) {
28
+++ b/configure
26
/* 2D transfer mode */
29
@@ -XXX,XX +XXX,XX @@ ld_has() {
27
- ylen = (ch->txfr_len >> 16) & 0x3fff;
30
28
+ ylen += (ch->txfr_len >> 16) & 0x3fff;
31
# default parameters
29
xlen = ch->txfr_len & 0xffff;
32
source_path=$(dirname "$0")
30
dst_stride = ch->stride >> 16;
33
+# make source path absolute
31
src_stride = ch->stride & 0xffff;
34
+source_path=$(cd "$source_path"; pwd)
32
} else {
35
cpu=""
33
- ylen = 1;
36
iasl="iasl"
34
xlen = ch->txfr_len;
37
interp_prefix="/usr/gnemul/qemu-%M"
35
dst_stride = 0;
38
@@ -XXX,XX +XXX,XX @@ for opt do
36
src_stride = 0;
39
;;
40
--cxx=*) CXX="$optarg"
41
;;
42
- --source-path=*) source_path="$optarg"
43
- ;;
44
--cpu=*) cpu="$optarg"
45
;;
46
--extra-cflags=*) QEMU_CFLAGS="$QEMU_CFLAGS $optarg"
47
@@ -XXX,XX +XXX,XX @@ if test "$debug_info" = "yes"; then
48
LDFLAGS="-g $LDFLAGS"
49
fi
50
51
-# make source path absolute
52
-source_path=$(cd "$source_path"; pwd)
53
-
54
# running configure in the source tree?
55
# we know that's the case if configure is there.
56
if test -f "./configure"; then
57
@@ -XXX,XX +XXX,XX @@ for opt do
58
;;
59
--interp-prefix=*) interp_prefix="$optarg"
60
;;
61
- --source-path=*)
62
- ;;
63
--cross-prefix=*)
64
;;
65
--cc=*)
66
@@ -XXX,XX +XXX,XX @@ $(echo Available targets: $default_target_list | \
67
--target-list-exclude=LIST exclude a set of targets from the default target-list
68
69
Advanced options (experts only):
70
- --source-path=PATH path of source code [$source_path]
71
--cross-prefix=PREFIX use PREFIX for compile tools [$cross_prefix]
72
--cc=CC use C compiler CC [$cc]
73
--iasl=IASL use ACPI compiler IASL [$iasl]
74
--
37
--
75
2.20.1
38
2.20.1
76
39
77
40
In the stripe8() function we use a variable length array; however
we know that the maximum length required is MAX_NUM_BUSSES. Use
a fixed-length array and an assert instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190328152635.2794-1-peter.maydell@linaro.org
---
 hw/ssi/xilinx_spips.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

From: Rene Stange <rsta2@o2online.de>

TD (two dimensions) DMA mode did not work, because the xlen variable
was not re-initialized before each additional ylen run through
in bcm2835_dma_update(). Fix it.

Signed-off-by: Rene Stange <rsta2@o2online.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/bcm2835_dma.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
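The xlen bug described above is easy to see in a stripped-down model of the 2D loop: the inner copy loop consumes xlen, so without restoring it from a saved copy only the first row transfers any data. A simplified sketch (byte counting only, not the real bcm2835_dma_update()):

```c
#include <assert.h>

/* Model of the TD-mode loop structure: xlen_td holds the per-row length
 * and is restored before each additional row, which is the fix. */
unsigned td_total_bytes(unsigned xlen, unsigned ylen)
{
    unsigned xlen_td = xlen;   /* saved copy of the row length */
    unsigned total = 0;

    while (ylen != 0) {
        while (xlen != 0) {    /* inner loop consumes xlen, 4 bytes/step */
            total += 4;
            xlen -= 4;
        }
        if (--ylen != 0) {
            xlen = xlen_td;    /* the fix: re-initialize for the next row */
        }
    }
    return total;
}
```

Without the `xlen = xlen_td` line, every row after the first would see xlen == 0 and copy nothing, matching the "did not work" symptom in the commit message.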
14
13
15
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
14
diff --git a/hw/dma/bcm2835_dma.c b/hw/dma/bcm2835_dma.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/ssi/xilinx_spips.c
16
--- a/hw/dma/bcm2835_dma.c
18
+++ b/hw/ssi/xilinx_spips.c
17
+++ b/hw/dma/bcm2835_dma.c
19
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
18
@@ -XXX,XX +XXX,XX @@
20
19
static void bcm2835_dma_update(BCM2835DMAState *s, unsigned c)
21
static inline void stripe8(uint8_t *x, int num, bool dir)
22
{
20
{
23
- uint8_t r[num];
21
BCM2835DMAChan *ch = &s->chan[c];
24
- memset(r, 0, sizeof(uint8_t) * num);
22
- uint32_t data, xlen, ylen;
25
+ uint8_t r[MAX_NUM_BUSSES];
23
+ uint32_t data, xlen, xlen_td, ylen;
26
int idx[2] = {0, 0};
24
int16_t dst_stride, src_stride;
27
int bit[2] = {0, 7};
25
28
int d = dir;
26
if (!(s->enable & (1 << c))) {
29
27
@@ -XXX,XX +XXX,XX @@ static void bcm2835_dma_update(BCM2835DMAState *s, unsigned c)
30
+ assert(num <= MAX_NUM_BUSSES);
28
dst_stride = 0;
31
+ memset(r, 0, sizeof(uint8_t) * num);
29
src_stride = 0;
32
+
30
}
33
for (idx[0] = 0; idx[0] < num; ++idx[0]) {
31
+ xlen_td = xlen;
34
for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
32
35
r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
33
while (ylen != 0) {
34
/* Normal transfer mode */
35
@@ -XXX,XX +XXX,XX @@ static void bcm2835_dma_update(BCM2835DMAState *s, unsigned c)
36
if (--ylen != 0) {
37
ch->source_ad += src_stride;
38
ch->dest_ad += dst_stride;
39
+ xlen = xlen_td;
40
}
41
}
42
ch->cs |= BCM2708_DMA_END;
36
--
43
--
37
2.20.1
44
2.20.1
38
45
39
46
From: Philippe Mathieu-Daudé <philmd@redhat.com>

The bold text sounds like 'knock knock'. Only bolding the
second 'not' makes it easier to read.

Fixes: dea101a1ae
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20200206225148.23923-1-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/arm-cpu-features.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/arm-cpu-features.rst
+++ b/docs/arm-cpu-features.rst
@@ -XXX,XX +XXX,XX @@ the list of KVM VCPU features and their descriptions.
 
   kvm-no-adjvtime          By default kvm-no-adjvtime is disabled.  This
                            means that by default the virtual time
-                           adjustment is enabled (vtime is *not not*
+                           adjustment is enabled (vtime is not *not*
                            adjusted).
 
                            When virtual time adjustment is enabled each
--
2.20.1

From: Pan Nengyuan <pannengyuan@huawei.com>

There is a memory leak when we call 'device_list_properties' with typename = armv7m_systick. It's easy to reproduce as follows:

virsh qemu-monitor-command vm1 --pretty '{"execute": "device-list-properties", "arguments": {"typename": "armv7m_systick"}}'

This patch delays timer_new until realize to fix the memleak.

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-id: 20200205070659.22488-2-pannengyuan@huawei.com
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/timer/armv7m_systick.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/timer/armv7m_systick.c b/hw/timer/armv7m_systick.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/armv7m_systick.c
+++ b/hw/timer/armv7m_systick.c
@@ -XXX,XX +XXX,XX @@ static void systick_instance_init(Object *obj)
     memory_region_init_io(&s->iomem, obj, &systick_ops, s, "systick", 0xe0);
     sysbus_init_mmio(sbd, &s->iomem);
     sysbus_init_irq(sbd, &s->irq);
+}
+
+static void systick_realize(DeviceState *dev, Error **errp)
+{
+    SysTickState *s = SYSTICK(dev);
     s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, systick_timer_tick, s);
 }
 
@@ -XXX,XX +XXX,XX @@ static void systick_class_init(ObjectClass *klass, void *data)
 
     dc->vmsd = &vmstate_systick;
     dc->reset = systick_reset;
+    dc->realize = systick_realize;
 }
 
 static const TypeInfo armv7m_systick_info = {
--
2.20.1

From: Pan Nengyuan <pannengyuan@huawei.com>

There is a memory leak when we call 'device_list_properties' with typename = stm32f2xx_timer. It's easy to reproduce as follows:

virsh qemu-monitor-command vm1 --pretty '{"execute": "device-list-properties", "arguments": {"typename": "stm32f2xx_timer"}}'

This patch delays timer_new until realize to fix the memleak.

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20200205070659.22488-3-pannengyuan@huawei.com
Cc: Alistair Francis <alistair@alistair23.me>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/timer/stm32f2xx_timer.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/timer/stm32f2xx_timer.c b/hw/timer/stm32f2xx_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/stm32f2xx_timer.c
+++ b/hw/timer/stm32f2xx_timer.c
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_timer_init(Object *obj)
     memory_region_init_io(&s->iomem, obj, &stm32f2xx_timer_ops, s,
                           "stm32f2xx_timer", 0x400);
     sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
+}
 
+static void stm32f2xx_timer_realize(DeviceState *dev, Error **errp)
+{
+    STM32F2XXTimerState *s = STM32F2XXTIMER(dev);
     s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, stm32f2xx_timer_interrupt, s);
 }
 
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_timer_class_init(ObjectClass *klass, void *data)
     dc->reset = stm32f2xx_timer_reset;
     device_class_set_props(dc, stm32f2xx_timer_properties);
     dc->vmsd = &vmstate_stm32f2xx_timer;
+    dc->realize = stm32f2xx_timer_realize;
 }
 
 static const TypeInfo stm32f2xx_timer_info = {
--
2.20.1

From: Pan Nengyuan <pannengyuan@huawei.com>

There is a memory leak when we call 'device_list_properties' with typename = stellaris-gptm. It's easy to reproduce as follows:

virsh qemu-monitor-command vm1 --pretty '{"execute": "device-list-properties", "arguments": {"typename": "stellaris-gptm"}}'

This patch delays timer_new until realize to fix it.

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200205070659.22488-4-pannengyuan@huawei.com
Cc: qemu-arm@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/stellaris.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@ static void stellaris_gptm_init(Object *obj)
     sysbus_init_mmio(sbd, &s->iomem);
 
     s->opaque[0] = s->opaque[1] = s;
+}
+
+static void stellaris_gptm_realize(DeviceState *dev, Error **errp)
+{
+    gptm_state *s = STELLARIS_GPTM(dev);
     s->timer[0] = timer_new_ns(QEMU_CLOCK_VIRTUAL, gptm_tick, &s->opaque[0]);
     s->timer[1] = timer_new_ns(QEMU_CLOCK_VIRTUAL, gptm_tick, &s->opaque[1]);
 }
 
-
 /* System controller. */
 
 typedef struct {
@@ -XXX,XX +XXX,XX @@ static void stellaris_gptm_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
 
     dc->vmsd = &vmstate_stellaris_gptm;
+    dc->realize = stellaris_gptm_realize;
 }
 
 static const TypeInfo stellaris_gptm_info = {
--
2.20.1